So, MediaWiki has internal code to differentiate read and write database requests.
I'm not sufficiently familiar with MySQL replication to recommend or anti-recommend it. But if it works as advertised, it seems like a good configuration would be to have one master database server that takes all write requests, and one slave database server per Web server for read requests.
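To make the idea concrete, here's a minimal sketch (in Python, with made-up server names and a hypothetical Router class -- this is not MediaWiki's actual code) of how write statements could be routed to the master while reads are spread over the slaves:

```python
import random

class Router:
    """Toy query router: writes to the master, reads to a slave."""

    def __init__(self, master, slaves):
        self.master = master   # all writes go here
        self.slaves = slaves   # reads are spread over the replicas

    def pick(self, sql):
        # Writes must hit the master so replication can fan the
        # change out to the slaves; anything else is a read.
        verb = sql.lstrip().split(None, 1)[0].upper()
        if verb in ("INSERT", "UPDATE", "DELETE", "REPLACE"):
            return self.master
        return random.choice(self.slaves)

router = Router("master-db", ["db1", "db2", "db3"])
print(router.pick("UPDATE page SET page_touched = 1"))  # -> master-db
print(router.pick("SELECT * FROM page"))                # one of db1/db2/db3
```

The key point is only that the split happens above the database layer, which is exactly the hook MediaWiki's read/write differentiation provides.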
The slave DB server instances could live on the same machine as the Web server, or on different machines. There's some metering to be done to determine if the bandwidth advantages of having a DB server on the same machine outweigh the disadvantages of sharing memory and CPU resources.
In other words, the configuration would look something like this (leaving out the Web load-balancing stuff):
+--------+   +--------+   +--------+
| Web 1  |   | Web 2  |   | Web 3  |
+--------+   +--------+   +--------+
     |            |            |
     |            |            |     +-----------+
     +------------+------------+-----| DB Master |
     |            |            |     +-----------+
+--------+   +--------+   +--------+
| DB 1   |   | DB 2   |   | DB 3   |
+--------+   +--------+   +--------+
This would obviously put a lot of load on the DB master; it might be preferable to have different master DB servers for the different language versions and projects. Not doing _any_ read requests on the master DB server should also reduce load on it.
At the same time, it's probably worth recognizing that we have roughly a 5:1 to 10:1 ratio of views to edits. Edits aren't the only writes to the DB, but they might be a good rough metric... The master may end up being _underloaded_.
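A quick back-of-the-envelope check of that, taking the 10:1 end of the estimate and assuming (roughly) that only edits generate writes:

```python
# Rough estimate of the master's share of total query traffic,
# assuming reads go only to slaves and only edits cause writes.
views_per_edit = 10                 # the 10:1 end of the estimate
total = views_per_edit + 1          # requests per edit: 10 views + 1 edit
master_share = 1 / total            # master sees only the write
print(f"master handles ~{master_share:.0%} of requests")  # ~9%
```

So even at the low end of the ratio, the master would see under a fifth of the request volume.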
It might also be possible to share slave DB servers between Web servers -- say, 2 or 3 Web servers using one slave DB server.
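That sharing scheme is just a static mapping; a sketch with made-up host names, pairing two Web servers per slave:

```python
# Hypothetical assignment of Web servers to shared slave DBs,
# two Web servers per replica.
web_servers = ["web1", "web2", "web3", "web4"]
slaves = ["db1", "db2"]
assignment = {w: slaves[i // 2] for i, w in enumerate(web_servers)}
print(assignment)  # {'web1': 'db1', 'web2': 'db1', 'web3': 'db2', 'web4': 'db2'}
```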
~ESP
-- Evan Prodromou evan@wikitravel.org Wikitravel - http://www.wikitravel.org/ The free, complete, up-to-date and reliable world-wide travel guide