Yesterday’s Outage
A scheduled DB maintenance went haywire yesterday, taking a number of repositories temporarily offline.
While pushing and pulling were briefly offline (and for that we’re sorry!), the first phase of the migration worked. The problem was we didn’t know it worked: the tools we were using failed to report success (or anything, really). As a result we weren’t able to start phase two, which left some repositories inaccessible via the web interface.
What should have been a few minutes of interrupted service for some users turned into a huge pain. But I don’t want to blame our tools: the real problem is our maintenance strategy. Any amount of interrupted service is unacceptable at this point.
With that in mind, we’re going to rethink the way we do maintenance. Zero downtime and uninterrupted service are the goal. GitHub should be there when you need it.
When we have a solution we’ll post about it here (like we always do). Sorry for the outage – we really don’t want it to happen again.