Miguel Fernández
Senior Platform Engineer
Moving persistent data out of Redis
Historically, we have used Redis in two ways at GitHub:
- We used it as an LRU cache to conveniently store the results of expensive computations over data originally persisted in Git repositories or MySQL. We call this transient Redis.
- We also enabled persistence, which gave us durability guarantees over data that was not stored anywhere else. We used it to store a wide range of values: from sparse data with high read/write ratios, like configuration settings, counters, or quality metrics, to very dynamic information powering core features like spam analysis. We call this persistent Redis.
Recently we made the decision to disable persistence in Redis and stop using it as a source of truth for our data. The main motivations behind this choice were to:
- Reduce the operational cost of our persistence infrastructure by removing some of its complexity.
- Take advantage of our expertise operating MySQL.
- Gain some extra performance by eliminating the I/O latency incurred when writing large changes in server state to disk.
Transitioning all that information transparently involved planning and coordination. For each problem domain using persistent Redis, we considered the volume of operations, the structure of the data, and the different access patterns to predict the impact on our current MySQL capacity, and the need for provisioning new hardware.
For the majority of callsites, we replaced persistent Redis with GitHub::KV, a MySQL key/value store of our own built atop InnoDB, with features like key expiration. We were able to use GitHub::KV almost exactly as we had used Redis: from trending repositories and users for the explore page, to rate limiting, to spammy user detection.
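To make the swap concrete, here is a minimal sketch of how a Redis-like key/value store can be laid out on top of InnoDB. The table layout and helper names are illustrative assumptions for this post, not GitHub::KV's actual implementation.

```python
# Illustrative sketch of an InnoDB-backed key/value store with expiration.
# Schema and helper names are assumptions, not GitHub::KV's real code.

DDL = """
CREATE TABLE IF NOT EXISTS key_values (
  `key`        VARBINARY(255) NOT NULL PRIMARY KEY,
  `value`      BLOB NOT NULL,
  `expires_at` DATETIME NULL
) ENGINE=InnoDB;
"""

class MySQLKV:
    """Redis-like get/set over a MySQL table, via any DB-API 2.0 connection."""

    def __init__(self, conn):
        self.conn = conn

    def set(self, key, value, expires_at=None):
        # REPLACE acts as an upsert keyed on the primary key.
        cur = self.conn.cursor()
        cur.execute(
            "REPLACE INTO key_values (`key`, `value`, `expires_at`) "
            "VALUES (%s, %s, %s)",
            (key, value, expires_at),
        )
        self.conn.commit()

    def get(self, key):
        # Expired rows are filtered on read; a background job could purge them.
        cur = self.conn.cursor()
        cur.execute(
            "SELECT `value` FROM key_values WHERE `key` = %s "
            "AND (`expires_at` IS NULL OR `expires_at` > NOW())",
            (key,),
        )
        row = cur.fetchone()
        return row[0] if row else None
```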
Our biggest challenge: Migrating the activity feeds
We have lots of “events” at GitHub. Starring a repository, closing an issue and pushing commits are all events that we display on our activity feeds, like the one found on your GitHub homepage.
We used Redis as a secondary indexer for the MySQL table that stores all our events. Previously, when an event happened, we “dispatched” the event identifier to Redis keys corresponding to each user’s feed that should display it. That’s a lot of write operations and a lot of Redis keys, and no single MySQL table would be able to handle that fanout. We weren’t going to be able to simply replace Redis with GitHub::KV everywhere in this code path and call it a day.
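For context, the old write path looked roughly like the following fanout-on-write pattern. The key layout and function names here are hypothetical; the point is that a single event turns into one Redis write per receiver.

```python
# Hypothetical sketch of the old fanout-on-write dispatch: one Redis write
# per user whose feed should display the event.
import redis

r = redis.Redis()

def dispatch(event_id, receiver_ids):
    pipe = r.pipeline()
    for user_id in receiver_ids:
        key = f"feed:user:{user_id}"  # assumed key layout, one key per feed
        pipe.lpush(key, event_id)     # one write per receiver...
        pipe.ltrim(key, 0, 299)       # ...with the feed capped to a fixed size
    pipe.execute()
```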
Our first step was to gather some metrics and let them tell us what to do. We pulled numbers for the different types of feeds we had and calculated the writes and reads per second for each timeline type (e.g., issue events in a repository, public events performed by a user, etc.). One timeline was never read, so we axed it right away and knocked one off the list. Of the remaining timelines, two were so write-heavy that we knew we couldn’t port them to MySQL as-is. So that’s where we began.
Let’s walk through how we handled one of the two problematic timelines. The “organization timeline” that you can see if you toggle the event feed on your home page to one of the organizations you belong to accounted for 67% of the more than 350 million total writes per day to Redis for these timelines. Remember when I said we “dispatched” event IDs to Redis for every user that should see them? Long story short: we were pushing event IDs to separate Redis keys for every event and every user within an org. So for an active organization that produces, say, 100 events per day and has 1,000 members, that could be 100,000 writes to Redis for only 100 events. Not good, not efficient, and it would have required far more MySQL capacity than we were willing to accept.
We changed how writing to and reading from Redis keys worked for this timeline before even thinking about MySQL. We’d write every event to a single key for the org, and then on retrieval we’d reject the events that the requesting user shouldn’t see. Instead of filtering on every write as the event fanned out, we’d filter on reads.
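A sketch of the reworked pattern, under the same hypothetical key layout as above: one write per event to a single per-org key, with visibility checked at read time. The `visible_to` predicate stands in for the real permission check.

```python
import redis

r = redis.Redis()

def visible_to(event_id, viewer_id):
    # Stand-in for the real check (e.g. can the viewer see the repository
    # this event belongs to?). Always true here so the sketch runs.
    return True

def dispatch_org_event(event_id, org_id):
    key = f"feed:org:{org_id}"   # a single key per organization
    r.lpush(key, event_id)       # one write, regardless of member count
    r.ltrim(key, 0, 999)

def org_feed(org_id, viewer_id, limit=30):
    event_ids = [int(e) for e in r.lrange(f"feed:org:{org_id}", 0, 999)]
    # Filter on read instead of on write: drop events the viewer can't see.
    return [e for e in event_ids if visible_to(e, viewer_id)][:limit]
```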
This change resulted in a dramatic 65% reduction of the write operations for this feature, getting us closer to the point where we could move the activity feeds to MySQL entirely.
Although the single goal in mind was to stop using Redis as a persistent datastore, we thought that, given this was a legacy piece of code that had evolved organically over the years, there would be some room for improving its efficiency as well. Reads were fast because the data was properly indexed and compact. Knowing that, we decided to stop writing separately to certain timelines that we could compose from the events contained in others, reducing the remaining writes by another 30% (~11% overall). We got to a point where we were writing fewer than 1,500 keys per second 98% of the time, with spikes below 2,100 keys written per second. This was a volume of operations we thought we could handle with our current MySQL infrastructure without adding any new servers.
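As an illustration of composing one timeline from others at read time (the details here are assumptions, not our actual feed definitions): if event IDs increase monotonically, several newest-first feeds can be merged on the fly instead of being written separately.

```python
import heapq

def composed_feed(source_feeds, limit=30):
    """Merge several newest-first lists of event IDs into one feed.

    Assumes event IDs are monotonically increasing, so ordering by ID
    descending yields newest-first ordering across all sources.
    """
    merged = heapq.merge(*source_feeds, reverse=True)
    out, seen = [], set()
    for event_id in merged:
        if event_id not in seen:
            seen.add(event_id)
            out.append(event_id)
        if len(out) == limit:
            break
    return out

# e.g. a feed merged at read time from two source feeds:
print(composed_feed([[9, 7, 2], [8, 7, 3]], limit=4))  # [9, 8, 7, 3]
```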
While we prepared to migrate the activity feeds to MySQL, we experimented with different schema designs: we tried one-record-per-event normalization and fixed-size feed subsets per record, and we even experimented with the MySQL 5.7 JSON data type for modeling the list of event IDs. However, we finally went with a schema similar to that of GitHub::KV, just without some of the features we didn’t need, like the record’s last-updated and expiration timestamps.
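In other words, the final table could be as simple as something like this (names are assumptions for illustration, not the real schema):

```python
# Illustrative DDL for the pared-down, GitHub::KV-like feeds schema:
# just a key and a packed list of event IDs, with no expiration or
# last-updated bookkeeping.
FEEDS_DDL = """
CREATE TABLE IF NOT EXISTS feeds (
  `key`   VARBINARY(255) NOT NULL PRIMARY KEY,  -- e.g. "org:1234"
  `value` BLOB NOT NULL                         -- packed list of event IDs
) ENGINE=InnoDB;
"""
```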
On top of that schema, and inspired by Redis pipelining, we created a small library for batching and throttling writes of the same event that were dispatched to different feeds.
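A minimal sketch of what such batching and throttling can look like; the batch size, delay budget, and helper functions are assumptions for illustration, and it reuses the hypothetical feeds table above.

```python
import time

MAX_BATCH = 100          # feed rows touched per statement (assumed)
MAX_DELAY_SECONDS = 0.5  # replication-delay budget (assumed)

def replication_delay(conn):
    # Stand-in for a real delay check (e.g. reading a heartbeat table).
    return 0.0

def append_event_to_feeds(conn, event_id, keys):
    # One multi-row statement instead of one statement per feed.
    cur = conn.cursor()
    cur.executemany(
        "INSERT INTO feeds (`key`, `value`) VALUES (%s, %s) "
        "ON DUPLICATE KEY UPDATE `value` = CONCAT(`value`, VALUES(`value`))",
        [(key, f"{event_id},".encode()) for key in keys],
    )
    conn.commit()

def dispatch(conn, event_id, feed_keys):
    for start in range(0, len(feed_keys), MAX_BATCH):
        # Throttle: let replicas catch up before writing the next batch.
        while replication_delay(conn) > MAX_DELAY_SECONDS:
            time.sleep(0.1)
        append_event_to_feeds(conn, event_id, feed_keys[start:start + MAX_BATCH])
```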
With all that in place, we began migrating each type of feed, starting with the least “risky.” We measured the risk of migrating any given type by its number of write operations, since reads were not really the bottleneck.
After we migrated each feed type, we checked cluster capacity, contention and replication delay. We had feature flags in place that enabled writes to MySQL, while still writing to persistent Redis, so that we wouldn’t disrupt user experience if we had to roll back. Once we were sure writes were performing well, and that all the events in Redis were copied to MySQL, we flipped another feature flag to read from the new data store, again measured capacity, and then proceeded with the next activity feed type.
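Put together, the cutover logic behaved roughly like the sketch below. `feature_enabled` and the in-memory stores are stand-ins for our feature-flag system and the two real datastores, not GitHub's actual APIs.

```python
FLAGS = {"feeds_mysql_writes": True, "feeds_mysql_reads": False}

def feature_enabled(name):
    return FLAGS.get(name, False)

class Store:
    """In-memory stand-in for a feed store (Redis or MySQL)."""
    def __init__(self):
        self.feeds = {}
    def append(self, key, event_id):
        self.feeds.setdefault(key, []).insert(0, event_id)
    def read(self, key):
        return self.feeds.get(key, [])

redis_feeds, mysql_feeds = Store(), Store()

def write_event(feed_key, event_id):
    redis_feeds.append(feed_key, event_id)      # keep Redis authoritative...
    if feature_enabled("feeds_mysql_writes"):
        mysql_feeds.append(feed_key, event_id)  # ...while shadow-writing MySQL

def read_feed(feed_key):
    # Flip "feeds_mysql_reads" only once MySQL is verified and backfilled.
    store = mysql_feeds if feature_enabled("feeds_mysql_reads") else redis_feeds
    return store.read(feed_key)
```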
When we were sure everything was migrated and performing properly, we deployed a pull request removing all callsites to persistent Redis. These are the resulting performance figures as of today:
We can see that, at the store level, writes (mset) are below 270 writes per second at peak, with reads (mget) below 460 reads per second. These values are far lower than the number of events being written, thanks to the way events are batched before writes.
Replication delay is below 180 milliseconds at peak. The blue line, which correlates with the number of write operations, shows how delay is checked before any batch is written to prevent replicas from getting out of sync.
What we learned
At the end of the day, we simply outgrew Redis as a persistent datastore for some of our use cases. We needed something that would work for both github.com and GitHub Enterprise, so we decided to lean on our operational experience with MySQL. However, MySQL clearly isn’t a one-size-fits-all solution, and we had to rely on data and metrics to guide our usage of it for the event feeds at GitHub. Our first priority was moving off of persistent Redis, and our data-driven approach enabled us to optimize and improve performance along the way.
Work with us
Thank you to everybody on the Platform and Infrastructure teams who contributed to this project. If you would like to work on problems that help GitHub scale, we are looking for an engineer to join us. The Platform team is responsible for building a resilient, highly available platform for internal engineers and external integrators to add value to our users.
We would love you to join us. Apply here!