Monday has come and gone, and no one is happier about this than I am. As day 2 wraps up, I’d like to post an update for those of you who have been following us.
Before I get to the details, I’d like to remind everyone to open tickets on Tender if you have any issues. For those of you who have open tickets, never fear: we’re burning through them as fast as we can. When I logged in this morning we had over 120 tickets in the inbox; right now we’re sitting at 35. Unfortunately, Tender doesn’t give me stats on the number of tickets opened today, but we’ve dealt with at least 85, plus however many new tickets were opened during the day. I’d bet that’s a few dozen at least. To put that in perspective: before the move, on any given stable day we saw maybe a dozen tickets, and on unstable days anywhere from 20 to 40 new tickets on average.
Everyone was busy all day today killing bugs. At one point I estimate mojombo was handling up to 800 problems a minute. Here’s a rundown of the major bug fixes:
“502 Bad Gateway” errors should, for the most part, be gone. These were hitting gist creation, applying commits in the fork queue, and user creation the worst. If you run into this error, please open a ticket and detail what you were doing when you encountered it.
“Repo under migration” errors should also be gone. This included a patch to ensure user paths were generated if they didn’t exist on lookup, and a few batch jobs to force repos that had not been created on disk to generate. If you have any new, unpushed repos throwing errors, please let us know.
Since the move, we’ve seen over 1000 new users and 2000 new repos. We’ve processed over 800k background jobs, and the background job queues are blazing fast on the new servers. Before the move, our low-priority queue (network graph updates, HTTP cloning updates, and some other jobs) was backed up for a few weeks; at peak times it would rise to 40k jobs or more, and graph updates were often delayed by many hours. We had 25 job runners working nearly full time on the low-priority queue. On the new servers we have 40 workers, but they sit idle most of the time, with usually only 2 or 3 active at any given time. The job queues, including the low-priority queue, have stayed near zero since we cut over the DNS.
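For readers curious what a queue-plus-worker-pool setup like this looks like, here is a minimal sketch using only Python’s standard library. The job names, priority levels, and worker count below are made up for illustration; this is not GitHub’s actual job system.

```python
import queue
import threading

# Lower number = dequeued first. SHUTDOWN is a sentinel priority used
# only to tell workers to exit.
HIGH, LOW, SHUTDOWN = 0, 1, 2

jobs = queue.PriorityQueue()
done = []
done_lock = threading.Lock()

def worker():
    while True:
        priority, seq, name = jobs.get()
        if priority == SHUTDOWN:        # sentinel: stop this worker
            jobs.task_done()
            return
        with done_lock:                 # record the "processed" job
            done.append(name)
        jobs.task_done()

# Queue a few low-priority jobs (think graph updates) alongside some
# high-priority ones; high-priority jobs are always dequeued first.
for i in range(3):
    jobs.put((LOW, i, "graph-update-%d" % i))
for i in range(3):
    jobs.put((HIGH, i, "push-%d" % i))

# A worker pool larger than the steady-state load: once the backlog
# drains, the extra workers simply block idle on get().
workers = [threading.Thread(target=worker) for _ in range(4)]
for t in workers:
    t.start()

jobs.join()                             # wait until every job is processed
for i in range(len(workers)):           # one shutdown sentinel per worker
    jobs.put((SHUTDOWN, i, None))
for t in workers:
    t.join()

print(len(done))  # 6 jobs processed
```

The point of the sketch is the shape of the system, not the details: producers push prioritized work onto a shared queue, and a fixed pool of workers drains it, so an oversized pool sits mostly idle exactly as described above.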