GitHub Availability Report: June 2025
In June, we experienced three incidents that resulted in degraded performance across GitHub services.
June 5 17:47 UTC (lasting 1 hour and 33 minutes)
On June 5, 2025, between 17:47 UTC and 19:20 UTC, the Actions service was degraded, leading to run start delays and intermittent job failures. During this period, 47.2% of runs had delayed starts of 14 minutes on average, and 21.0% of runs failed. The impact extended beyond Actions itself: 60% of Copilot Coding Agent sessions were cancelled, and all Pages sites using branch-based builds failed to deploy (though Pages serving remained unaffected). The issue was caused by a spike in load between internal Actions services, which exposed a misconfiguration that throttled requests in the critical path of run starts. We mitigated the incident by correcting the service configuration to prevent throttling, and we have updated our deployment process to ensure the correct configuration is preserved moving forward.
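As a rough illustration only (not GitHub's actual configuration or code), the sketch below shows how a rate limit that is accidentally reverted to a value far below real traffic can throttle requests on a critical path. The `TokenBucket` class, its parameters, and the request counts are all hypothetical.

```typescript
// Illustrative sketch: a generic token-bucket throttle. If its limit is
// misconfigured well below actual load, requests on the critical path
// (here, hypothetical "run start" requests) are rejected.
class TokenBucket {
  private tokens: number;
  private lastRefill: number;

  constructor(
    private readonly capacity: number,       // maximum burst size
    private readonly refillPerSecond: number // sustained request rate allowed
  ) {
    this.tokens = capacity;
    this.lastRefill = Date.now();
  }

  tryAcquire(): boolean {
    // Refill tokens based on elapsed time, capped at capacity.
    const now = Date.now();
    const elapsedSeconds = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsedSeconds * this.refillPerSecond);
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true; // request proceeds
    }
    return false;  // request throttled
  }
}

// A limit sized for a fraction of real traffic (hypothetical values)
// rejects most requests during a load spike.
const throttle = new TokenBucket(5, 1);
let throttled = 0;
for (let i = 0; i < 100; i++) {
  if (!throttle.tryAcquire()) throttled++;
}
console.log(`${throttled} of 100 run-start requests throttled`);
```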
June 12 17:55 UTC (lasting 3 hours and 12 minutes)
On June 12, 2025, between 17:55 UTC and 21:07 UTC, the GitHub Copilot service was degraded: Gemini models were unavailable and Claude models had reduced availability. Users experienced significantly elevated error rates for chat completions, slow response times, timeouts, and interruptions to chat functionality across VS Code, JetBrains IDEs, and GitHub Copilot Chat. This was due to an outage affecting one of our model providers.
We mitigated the incident by temporarily disabling the affected provider endpoints to reduce user impact.
We are working to update our incident response playbooks for infrastructure provider outages and improve our monitoring and alerting systems to reduce our time to detection and mitigation of issues like this one in the future.
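For readers unfamiliar with the pattern, the sketch below shows, in generic terms, how a service can disable a failing provider endpoint once its error rate crosses a threshold so traffic shifts to healthy providers. It is a simplified illustration under assumed names and thresholds, not a description of Copilot's internal routing.

```typescript
// Illustrative sketch: disable a provider endpoint whose observed error
// rate exceeds a threshold, so requests fall back to remaining providers.
// Provider names, sample sizes, and thresholds are hypothetical.
type Provider = { name: string; enabled: boolean; errors: number; requests: number };

const providers: Provider[] = [
  { name: "provider-a", enabled: true, errors: 0, requests: 0 },
  { name: "provider-b", enabled: true, errors: 0, requests: 0 },
];

function recordResult(p: Provider, ok: boolean): void {
  p.requests++;
  if (!ok) p.errors++;
  // After a meaningful sample, disable the endpoint if more than half
  // of its requests are failing.
  if (p.requests >= 20 && p.errors / p.requests > 0.5) {
    p.enabled = false;
  }
}

function pickProvider(): Provider | undefined {
  // Route to the first provider that is still enabled.
  return providers.find((p) => p.enabled);
}

// Simulate an outage at provider-a: it accumulates failures, gets
// disabled, and subsequent requests route to provider-b.
for (let i = 0; i < 25; i++) recordResult(providers[0], false);
console.log(pickProvider()?.name); // "provider-b"
```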
June 17 19:32 UTC (lasting 31 minutes)
On June 17, 2025, between 19:32 UTC and 20:03 UTC, an internal routing policy deployment to a subset of network devices caused reachability issues for certain network address blocks within our datacenters. Authenticated users of the github.com UI experienced 3-4% error rates for the duration of the incident, while authenticated API callers experienced 40% error rates. Unauthenticated requests to the UI and API experienced nearly 100% error rates. In Actions, 2.5% of runs were delayed by an average of 8 minutes and 3% of runs failed. Large File Storage (LFS) requests saw a 1% error rate. At 19:54 UTC, the deployment was rolled back and network availability for the affected systems was restored, and at 20:03 UTC we fully restored normal operations. To prevent similar issues, we are expanding our validation process for routing policy changes.
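To make the idea of pre-deployment validation concrete, the sketch below checks that a proposed routing policy still covers a set of critical prefixes before rollout. The prefixes, policy shape, and function names are hypothetical; real routing policy validation involves far more than this.

```typescript
// Illustrative sketch: block a routing policy rollout if it would stop
// advertising any critical prefix. All values here are hypothetical.
type RoutingPolicy = { advertisedPrefixes: string[] };

const criticalPrefixes = ["10.20.0.0/16", "10.30.0.0/16"];

function validatePolicy(policy: RoutingPolicy): string[] {
  // Return the critical prefixes the proposed policy no longer advertises.
  return criticalPrefixes.filter((p) => !policy.advertisedPrefixes.includes(p));
}

const proposed: RoutingPolicy = { advertisedPrefixes: ["10.20.0.0/16"] };
const missing = validatePolicy(proposed);
if (missing.length > 0) {
  console.error(`Blocking rollout: unreachable prefixes under new policy: ${missing.join(", ")}`);
} else {
  console.log("Policy change passes reachability validation");
}
```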
Please follow our status page for real-time updates on status changes and post-incident recaps. To learn more about what we’re working on, check out the GitHub Engineering Blog.