Simon Walker wrote:
How long are the binlogs kept for on Wikimedia servers?
Surely it would be possible to take a dump now, import it to s3, start replication, then import the same dump onto the new server, and let it catch up from a month of replag?
Of course, this wouldn't be possible if the binlogs are not kept for that long.
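Retention is easy enough to check on the master itself; a rough sketch, with a made-up host name:

  # How many binlogs exist and how far back they reach:
  mysql -h db-master.example.org -e "SHOW BINARY LOGS"
  # Automatic expiry, if any (0 means logs are only removed by hand):
  mysql -h db-master.example.org -e "SHOW VARIABLES LIKE 'expire_logs_days'"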
No need to reimport the same dump:

 1. Dump
 2. Import
 3. Start replication
 4. Normal work with the toolserver
    *New boxes arrive here*
 5. Configure new servers
 6. Stop s3
 7. Move the mysql data files to the new box
 8. Restart s3
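A rough sketch of steps 1-3, with made-up host names; the log file and position below are placeholders that would come from the header mysqldump writes with --master-data (replication credentials omitted):

  # 1. Dump once, recording the master's binlog coordinates in the dump:
  #    (--single-transaction gives a consistent snapshot for InnoDB tables)
  mysqldump -h db-master.example.org --single-transaction --master-data=2 \
      --all-databases > s3.sql
  # 2. Import on s3:
  mysql -h s3.example.org < s3.sql
  # 3. Point s3 at the recorded coordinates and let it catch up:
  mysql -h s3.example.org -e "CHANGE MASTER TO
      MASTER_HOST='db-master.example.org',
      MASTER_LOG_FILE='mysql-bin.000123',
      MASTER_LOG_POS=98;
    START SLAVE"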
Moving the files is also faster. Importing a dump is slow with this amount of data; the raw files are bigger, but you avoid the index rebuild. And since the servers are in the same location, the transfer speed would be really high.
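A sketch of that move (steps 6-8), with made-up host names and a default Linux-style datadir, run from the new box:

  # 6. Stop replication and shut s3 down cleanly so the files are consistent:
  mysql -h s3.example.org -e "STOP SLAVE"
  ssh s3.example.org mysqladmin shutdown
  # 7. Pull the whole datadir over; master.info travels with it, so the new
  #    box knows where s3 left off (relay-log naming may need adjusting if
  #    the hostname changes):
  rsync -a s3.example.org:/var/lib/mysql/ /var/lib/mysql/
  # 8. Start mysqld on the new box and resume replication:
  /etc/init.d/mysql start
  mysql -e "START SLAVE"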
River Tarnell wrote:
there isn't enough disk space on the master to store the logs longer than they are now.
And on the other servers? Or the binlogs could even be copied over to knams or the toolserver. After all, binlog deletion is a manual process.
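Copying them off before anything is purged would be simple enough; a sketch with made-up hosts and paths, the purge target being a placeholder:

  # Run on the archive side (toolserver/knams), pulling from the master:
  rsync -a db-master.example.org:/var/lib/mysql/mysql-bin.* /srv/binlog-archive/
  # Only afterwards purge on the master, up to a log that has already been
  # copied and replayed by every slave:
  mysql -h db-master.example.org -e "PURGE BINARY LOGS TO 'mysql-bin.000123'"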
The same problem arises every few months. The procedures aren't good enough yet. :-(