Today’s Downtime

Earlier today, we received an alert from our monitoring system that a disk in our active MySQL server’s RAID10 array had started to clock up some media errors. Since the fault was non-critical at this stage, we decided to hold off on pulling the disk until after the work day had ended. Just before 7:30PM PST tonight, we logged into the machine and marked the disk as failed in the RAID controller in preparation for replacing it.
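For the curious, here is a minimal sketch of what that step looks like on a Linux software RAID (mdadm) array; the device names are purely hypothetical, and a hardware RAID controller would use its own CLI instead, so treat this as illustrative rather than a record of the actual commands we ran.

```python
import subprocess

# Hypothetical device names; the actual array and disk are not identified above.
ARRAY = "/dev/md0"
FAILING_DISK = "/dev/sdb1"

def mark_disk_failed(array: str, disk: str) -> None:
    """Fail a member disk out of an mdadm array so it can be replaced."""
    # --fail flags the member as faulty within the array...
    subprocess.run(["mdadm", "--manage", array, "--fail", disk], check=True)
    # ...and --remove detaches it so the physical disk can be swapped.
    subprocess.run(["mdadm", "--manage", array, "--remove", disk], check=True)

if __name__ == "__main__":
    mark_disk_failed(ARRAY, FAILING_DISK)
```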

Due to some very unfortunate timing, the command to remove the disk from the array happened to lock the kernel for a few moments just as our HA system was performing a health check on the active database server. This caused our HA system to mark the active database server as problematic, and it stepped in to fix the situation the only way it knows how: power down the affected machine and bring MySQL up on the standby machine. The failover itself went very smoothly; however, due to the size of some of our tables, the InnoDB recovery process after the unclean shutdown of MySQL took the better part of 20 minutes to run.
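To illustrate the failure mode: a health checker with a short timeout and little tolerance for consecutive failures reads a momentary kernel stall exactly the same way it reads a dead server. The sketch below is illustrative only; the host, port, and thresholds are hypothetical, not our actual HA configuration.

```python
import socket
import time

# Hypothetical values; the real HA system's thresholds are not stated above.
DB_HOST, DB_PORT = "10.0.0.5", 3306
CHECK_INTERVAL = 5.0   # seconds between health checks
CHECK_TIMEOUT = 2.0    # how long a check may block before the node is suspect
MAX_FAILURES = 1       # consecutive failures tolerated before failing over

def db_is_healthy() -> bool:
    """A few moments of kernel lock-up makes this connection attempt
    time out, even though MySQL itself is perfectly fine."""
    try:
        with socket.create_connection((DB_HOST, DB_PORT), timeout=CHECK_TIMEOUT):
            return True
    except OSError:
        return False

failures = 0
while True:
    if db_is_healthy():
        failures = 0
    else:
        failures += 1
        if failures > MAX_FAILURES:
            # Fence the node and promote the standby: power down the
            # affected machine, then bring MySQL up on the standby.
            print("active DB marked dead: fencing and failing over")
            break
    time.sleep(CHECK_INTERVAL)
```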

Full services were restored by 7:45PM PST.

Now that we are aware of this failure mode, we have adjusted our maintenance procedure to account for it and ensure it doesn’t happen again.
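Concretely, an adjustment like this amounts to putting the HA system into a maintenance window before touching the array, so that a brief stall during disk work can no longer trigger a failover. The sketch below assumes a Pacemaker-style stack (the crm shell); the HA tooling actually in use is not named above, so this is an assumption for illustration.

```python
import contextlib
import subprocess

@contextlib.contextmanager
def ha_maintenance_window():
    """Suspend automatic failover for the duration of planned disk work.
    Assumes Pacemaker's crm shell; the actual HA stack may differ."""
    subprocess.run(
        ["crm", "configure", "property", "maintenance-mode=true"], check=True
    )
    try:
        yield
    finally:
        subprocess.run(
            ["crm", "configure", "property", "maintenance-mode=false"], check=True
        )

with ha_maintenance_window():
    # Safe to run the disk-fail command here: a momentary kernel stall
    # can no longer be mistaken for a dead database server.
    subprocess.run(
        ["mdadm", "--manage", "/dev/md0", "--fail", "/dev/sdb1"], check=True
    )
```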
