Quoting Laurent CHASTEL, from the post of Thu, 31 Mar:
I would like to have 2 instances of MediaWiki on 2 servers, to synchronize them, and to allow users to make modifications on both instances. For the PHP scripts and uploaded files I have a solution, but I don't have one for the database.
Well, I'm no DBA, but one way is to create a set of triggers that exchange updates safely. However, I think that is quite a heavy task, and it is really something the product itself should handle (writing to both DBs, reading from one).
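To make the "write to both, read from one" idea concrete, here is a rough PDO sketch; the hostnames, credentials and helper names are invented for illustration, MediaWiki has nothing like this built in, and all the hard parts (one server failing, conflicting edits on both sides) are left out:

<?php
// Sketch only: a tiny wrapper that writes to both databases and reads
// from the local one. Hostnames, credentials and function names are
// made up for illustration.

$local  = new PDO('mysql:host=localhost;dbname=wikidb', 'wikiuser', 'secret');
$remote = new PDO('mysql:host=wiki2.example.com;dbname=wikidb', 'wikiuser', 'secret');

// Reads go to the local copy only.
function wikiRead(PDO $local, $sql, array $params = []) {
    $stmt = $local->prepare($sql);
    $stmt->execute($params);
    return $stmt->fetchAll(PDO::FETCH_ASSOC);
}

// Writes are applied to both servers; handling the case where one of
// them fails (or both sides edited the same page) is exactly the hard part.
function wikiWrite(PDO $local, PDO $remote, $sql, array $params = []) {
    foreach ([$local, $remote] as $db) {
        $stmt = $db->prepare($sql);
        $stmt->execute($params);
    }
}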
Another option is to upgrade the bandwidth somehow, but rather than pulling the pages from a remote LAMP stack, have only the local MediaWiki contact the remote MySQL (maybe there's even a MySQL proxy cache I don't know about?).
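Concretely, that just means pointing LocalSettings.php at the remote server instead of a local one; the hostname and credentials below are placeholders:

<?php
// LocalSettings.php (excerpt): use the remote MySQL over the VPN
// instead of a local database. Host and credentials are placeholders.
$wgDBtype     = 'mysql';
$wgDBserver   = 'db.remote.example.com';
$wgDBname     = 'wikidb';
$wgDBuser     = 'wikiuser';
$wgDBpassword = 'secret';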
The last and craziest idea is to use a MySQL cluster and let the nodes do the replication for you in the background over the VPN. However, that is a management headache, it may degrade performance, and, most importantly, some features such as full-text search are unsupported, so I'm not sure you'd be happy doing heavy editing of the MediaWiki sources when all sorts of features break.
How does the Wikimedia infrastructure handle load balancing of the database? I can't imagine all queries on Wikipedia are directed at only a single DB server... maybe it's possible to arrange it so that all writes go to DB1 and all read-only operations go to whichever of DB[1-5] is closest, with triggers to replicate transactions from DB1 to the rest?
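As far as I know, MediaWiki already ships a load balancer for roughly this: the first entry of $wgDBservers is treated as the master (all writes), and reads are spread over the other entries according to their 'load' weight, while the actual DB1 -> DB[2-5] copying is plain MySQL master/slave replication rather than triggers. Something roughly like the excerpt below (hosts and credentials are placeholders; check the documentation for your version):

<?php
// LocalSettings.php (excerpt): writes go to the first (master) entry,
// reads are distributed over the rest according to 'load'.
// Hosts and credentials are placeholders.
$wgDBservers = [
    [
        'host'     => 'db1.example.com',  // master: all writes
        'dbname'   => 'wikidb',
        'user'     => 'wikiuser',
        'password' => 'secret',
        'type'     => 'mysql',
        'load'     => 0,                  // take no read traffic
    ],
    [
        'host'     => 'db2.example.com',  // read-only slave
        'dbname'   => 'wikidb',
        'user'     => 'wikiuser',
        'password' => 'secret',
        'type'     => 'mysql',
        'load'     => 1,
    ],
];
// Keeping db2 in sync with db1 is done by MySQL's own replication,
// not by MediaWiki.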
Again, I'm no DBA, so I'm not sure which of these would be the best implementation...