Hello,
I have set up MediaWiki to manage documentation on development rules and the usage of source code management tools at my firm. All users find it very useful, simple to use, and easy for making modifications to articles.
We now have a new office in another country. We have a VPN between the two locations, but the bandwidth is too small. Users in the new office find the wiki tedious to use (20 seconds to display a page, versus an instant display in the "old" office).
I would like to have two instances of MediaWiki on two servers, keep them synchronized, and allow users to make modifications on both instances. For the PHP scripts and uploaded files I have a solution, but I don't have one for the database.
We have a server A (located in the old office) and a server B (located in the new office); both are complete web servers (L.A.M.P.). Users from the old office connect to A, users from the new office to B. When users make modifications on A, they are transferred to B (simple master/slave replication in MySQL). But if users modify something on B, it is not transferred back (MySQL does not have master/master replication).
Is there a way in MediaWiki to redirect MySQL queries according to their type: SELECT on B (the slave) and INSERT/UPDATE/DELETE on the master (A)? (MySQL replication would then be used to push modifications from A to B.)
Perhaps the solution implemented for Wikipedia is the one I need. Could you explain it?
Best regards, Laurent
PS1: I tried a solution with the Turck MMCache cache and MediaWiki on B connecting to the database on A. The time to display a page was 10 seconds.
PS2: Sorry for the long mail...
PS3: Soon ;)
Quoting Laurent CHASTEL, from the post of Thu, 31 Mar:
I would like to have two instances of MediaWiki on two servers, keep them synchronized, and allow users to make modifications on both instances. For the PHP scripts and uploaded files I have a solution, but I don't have one for the database.
Well, I'm no DBA, but one way is to create a set of triggers to exchange updates in a safe manner. However, I think this is quite a heavy task that should really be handled by the product itself (writing to both DBs, reading from one).
Another option is to upgrade the bandwidth somehow, or, rather than pulling the pages from a remote LAMP stack, have only the local MediaWiki contact the remote MySQL (maybe there's even a MySQL proxy cache I don't know about?); a rough sketch of that setup follows.
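Something like this in LocalSettings.php on server B is what I have in mind; the hostname is made up and the cache option names may differ by MediaWiki version, so treat it as a sketch rather than a tested config:

<?php
# Sketch only: point the office-B MediaWiki at the MySQL on server A,
# and cache as much as possible locally so repeat page views avoid the VPN.
# "serverA.example.com" is a placeholder hostname.

$wgDBserver   = "serverA.example.com";   // remote MySQL on server A
$wgDBname     = "wikidb";
$wgDBuser     = "wikiuser";
$wgDBpassword = "secret";

# Local memcached to absorb parser/object cache hits (option names as in
# the MediaWiki release I looked at; double-check yours).
$wgUseMemCached     = true;
$wgMemCachedServers = array( "127.0.0.1:11211" );

Judging from PS1, the per-query latency over the VPN is probably the killer, so how much the local cache absorbs matters more than the DB connection itself.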
The last and craziest idea is to use a MySQL Cluster and let the nodes do the replication in the background over the VPN for you. However, that is a management headache, may degrade performance, and, most importantly, some features such as full-text search are unsupported, so I'm not sure you'll be happy having to edit the MediaWiki sources heavily when all sorts of features break.
How does the Wikimedia superstructure handle load balancing of the database? I can't imagine all queries on Wikipedia are directed at a single DB server... Maybe it's possible to arrange things so that all writes go to DB1 and all read-only operations go to whichever of DB[1-5] is closest, with triggers to replicate transactions from DB1 to the rest? A rough sketch of how that might look in MediaWiki follows.
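Having had a quick look, MediaWiki does seem to ship a DB load balancer driven by $wgDBservers, which is the mechanism the Wikimedia sites use: the first entry is the master and gets the writes, the other entries are read slaves weighted by 'load'. Something along these lines in server B's LocalSettings.php (hostnames invented, array keys worth double-checking against your version) might give Laurent his "SELECT on B, writes on A" split:

<?php
# Sketch of MediaWiki's $wgDBservers load balancing: writes go to the
# first entry (the master), reads are spread over the other entries
# according to their 'load' weight. Hostnames are placeholders.

$wgDBservers = array(
    array(
        'host'     => 'serverA.example.com', // master in the old office
        'dbname'   => 'wikidb',
        'user'     => 'wikiuser',
        'password' => 'secret',
        'type'     => 'mysql',
        'flags'    => DBO_DEFAULT,
        'load'     => 0,   // no ordinary read traffic sent here
    ),
    array(
        'host'     => 'serverB.example.com', // local replicated slave
        'dbname'   => 'wikidb',
        'user'     => 'wikiuser',
        'password' => 'secret',
        'type'     => 'mysql',
        'flags'    => DBO_DEFAULT,
        'load'     => 1,   // serve all reads locally
    ),
);

Ordinary MySQL master-to-slave replication from A to B would keep the slave current; edits from office B would still cross the VPN, but edits are far rarer than page views, so that may be acceptable.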
Again, I'm no DBA, so I'm not sure which of these would be the best implementation...