Introducing the GHES repository cache
If you’re a GHES customer with heavy read traffic on your monorepo, check out the repository cache, especially if you have CI workloads distributed around the world.
Continuous integration (CI) runners often drive the lion’s share of load on a Git server. They may build every merge to main, every pull request, or even every single commit, and any of these means frequent clones and pulls. A large enough farm of CI runners can slow things down for everyone else using Git. A related problem for large organizations is geo-distribution, such as a CI farm in Europe pulling from a Git server in North America: each runner has to pull each change over the same high-latency, possibly expensive, link. It can be cheaper, faster, and even more reliable if a single machine pulls the changes from North America and then distributes them to the runners locally in Europe.
Since Git traffic isn’t suitable for classic content delivery networks, and geo-replicas can cause slowdowns for developers trying to push data, large organizations need something different. Enter the GitHub Enterprise Server repository cache, now available in public beta.
The repository cache is an eventually-consistent replica of your Git data and is updated by a background job. This offers the data locality and convenience of geo-replicas, but doesn’t participate in the initial receipt of Git data, meaning that developer push workflows aren’t affected.
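In practice, “data locality” means a runner’s reads are served by a nearby host while pushes keep going to the primary. As a rough sketch (the hostnames below are hypothetical, and the right setup for your runners may differ), Git’s URL rewriting can route a CI runner’s clones and fetches through a local cache without changing any job definitions:

```python
import subprocess

# Hypothetical hostnames: europe-cache.example.com is a repository cache near
# the CI farm, github.example.com is the GHES primary.

# Rewrite read URLs so every clone/fetch of the primary goes to the local cache.
subprocess.run(
    ["git", "config", "--global",
     "url.https://europe-cache.example.com/.insteadOf",
     "https://github.example.com/"],
    check=True,
)

# Keep pushes pointed at the primary, since the cache is read-only.
# For pushes, pushInsteadOf takes precedence over insteadOf.
subprocess.run(
    ["git", "config", "--global",
     "url.https://github.example.com/.pushInsteadOf",
     "https://github.example.com/"],
    check=True,
)
```

Because the rewrite happens in Git configuration, existing checkout scripts that reference the primary keep working; only the endpoint serving the reads changes.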
The other advantage repository caches offer is selective replication: you choose exactly which repositories are replicated to specific caches. Several enterprise customers expressed a need for strict data residency. There are repositories that should be replicated elsewhere, and repositories that should remain solely in their home location. Replication policies allow you to identify these repositories and honor these restrictions.
There are a few tradeoffs when using a repository cache. Repository caches are purely read-only; all writes have to go to the primary. This is why we recommend that only CI farms use repository caches, and that developers continue to work with the primary. We’ve detailed other strategies for speeding up developer workflows with large monorepos, and those strategies are compatible with offloading CI traffic to repository caches. Also, it can take up to several minutes for new Git data to arrive at the cache. Readers must implement a backoff-and-retry strategy for commits which they expect but don’t find immediately.
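For the backoff-and-retry part, here is a minimal sketch (Python shelling out to Git; the repository path, commit SHA, and timing values are placeholders) of a CI job waiting for a commit that was just pushed to the primary to become visible on the cache:

```python
import subprocess
import time


def wait_for_commit(repo_dir, sha, remote="origin", ref="main",
                    attempts=6, initial_delay=5.0):
    """Fetch until an expected commit is visible, backing off between attempts.

    The repository cache is eventually consistent, so a commit that was just
    pushed to the primary can take a few minutes to appear on the cache.
    """
    delay = initial_delay
    for _ in range(attempts):
        # Refresh refs from the cache (this runner's remote points at the cache).
        subprocess.run(["git", "fetch", remote, ref], cwd=repo_dir, check=True)

        # `git cat-file -e <sha>^{commit}` exits 0 once the commit object exists locally.
        result = subprocess.run(
            ["git", "cat-file", "-e", f"{sha}^{{commit}}"], cwd=repo_dir
        )
        if result.returncode == 0:
            return True

        time.sleep(delay)
        delay *= 2  # 5s, 10s, 20s, ... so a full run waits a few minutes in total
    return False
```

A job might call `wait_for_commit("/srv/checkout", sha)` before starting the build and, if it returns `False`, either fall back to fetching directly from the primary or fail with a clear message rather than building the wrong revision.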
This feature shipped in beta with GHES 3.3 and has been improved in GHES 3.4. There are a handful of known issues, listed below, but if these aren’t showstoppers for you, we’d love to have you try it out and share feedback with us. Get started by visiting the documentation.
Known cache server issues in GHES 3.3 and 3.4:
- LFS files are not replicated. Instead, LFS acts as a proxy and will stream the content from the primary instance. If you make heavy use of LFS, then you may not reap much benefit from the cache server in its current state. LFS caching will be implemented in a future release of the cache server.
- Repositories which aren’t allowed by replication policy may still be streamed through the cache server. It’s important to note that the cache server is not an authorization boundary. If a user with access to a repository attempts to access it through a cache server, the cache server will proxy the data, but will not store the data. This behavior will be changed in a future release of the cache server.