On Fri, Jan 09, 2004 at 10:17:33AM -0500, Evan Prodromou wrote:
> So, MediaWiki has internal code to differentiate read and write database requests.
Yes, it is well prepared for this. It would require some changes, but for my test setup I didn't have to change more than about 30 lines, and most of those changes were minor.
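
For concreteness, here is a minimal sketch of what such a read/write split looks like, assuming one master for writes and a local replication slave for reads. The host names, credentials, and the query helper are made up for illustration; this is not MediaWiki's actual code.

    import MySQLdb  # the MySQLdb driver; any DB-API driver works the same way

    # Placeholder connection settings -- host names and credentials are
    # invented for this sketch.
    MASTER = dict(host="db-master", user="wikiuser", passwd="secret", db="wikidb")
    SLAVE  = dict(host="localhost", user="wikiuser", passwd="secret", db="wikidb")

    _connections = {}

    def get_connection(for_write):
        """Writes always go to the master; reads go to the local slave."""
        key = "master" if for_write else "slave"
        if key not in _connections:
            _connections[key] = MySQLdb.connect(**(MASTER if for_write else SLAVE))
        return _connections[key]

    def query(sql, for_write=False):
        # Each caller marks its statement as read or write; propagating
        # that flag is essentially what the ~30 changed lines have to do.
        cur = get_connection(for_write).cursor()
        cur.execute(sql)
        return None if for_write else cur.fetchall()

One caveat: MySQL replication is asynchronous, so a read from the slave immediately after a write to the master can return stale data. Requests that need read-after-write consistency have to go to the master as well.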
> I'm not sufficiently familiar with MySQL replication to recommend or anti-recommend it. But if it works as advertised, it seems like a good configuration would be to have one master database server that takes all write requests, and one slave database server per Web server for read requests.
I would consider this "step II". According to the numbers I've seen from geoffrin, it wasn't heavily loaded: neither CPU nor I/O usage was high. So if it is back soon (tomorrow, I've heard), the DB will not be the bottleneck for at least the next 4-6 months.
I would be conservative in this respect. First, introduce the Squids; this should solve the most pressing issues. Adding more, and especially more stable, Apaches will help, too. A distributed DB would be the thing to do once DB performance starts to become an issue.
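
One detail to keep in mind for the Squid step: wiki pages can change at any time, so the Squids need to be told when to drop a cached copy. A common way to do this is to send each Squid an HTTP PURGE request whenever a page is saved. A minimal sketch, with placeholder host names, not a description of MediaWiki's actual code:

    import http.client

    SQUIDS = ["squid1.example.org", "squid2.example.org"]  # placeholder host names

    def purge_page(path):
        """Ask every Squid to drop its cached copy of the given URL path."""
        for host in SQUIDS:
            conn = http.client.HTTPConnection(host, 80, timeout=5)
            conn.request("PURGE", path)  # a Squid extension method, not standard HTTP
            conn.getresponse().read()
            conn.close()

    # e.g. after an edit to the main page:
    # purge_page("/wiki/Main_Page")

Squid has to be configured to accept PURGE (via an acl in squid.conf restricting it to the web servers); otherwise it rejects the method.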
Regards,
JeLuF