On Mon, 2003-04-28 at 19:21, Erik Moeller wrote: [on a persistent link-existence table]
I might. I'll have to see if it makes any difference on the relatively small de database which I'm currently using locally. It would have to be optional -- setting up the software is already difficult enough.
I don't know whether you've already looked into this, but PHP does seem to have some support for shared memory:
http://www.php.net/manual/en/ref.sem.php or http://www.php.net/manual/en/ref.shmop.php
These seem to require enabling compile-time options for PHP.
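For what it's worth, a minimal sketch of what using the shmop interface might look like (key, segment size, and the serialized format are all arbitrary illustrative choices, and this assumes PHP was built with shmop enabled):

```php
<?php
// Sketch: stash link-existence data in a System V shared memory segment
// so all Apache/PHP processes can read it without hitting the database.
$key = ftok(__FILE__, 'w');                 // derive an IPC key from a file
$shm = shmop_open($key, "c", 0644, 1024);   // create/attach a 1 KB segment
if ($shm) {
    shmop_write($shm, "Foo_Bar=1;", 0);     // write serialized link data
    $data = shmop_read($shm, 0, 10);        // any other process can read it
    shmop_close($shm);
}
?>
```

A real cache would obviously need locking (that's what the sem functions are for) and a sane serialization scheme, but the plumbing is about that simple.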
It's also possible to create an in-memory-only table in MySQL (type=HEAP), which may bypass some of MySQL's other slow spots (though it may not; I haven't tested it).
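Roughly, that would look like the following (table and column names are just illustrative, not what we'd actually use; note a HEAP table's contents vanish on server restart, so it would have to be rebuildable from the real tables):

```sql
-- Sketch: an in-memory link-existence cache (MySQL 3.x/4.0 syntax).
CREATE TABLE linkcache (
  lc_title VARCHAR(255) BINARY NOT NULL,
  PRIMARY KEY (lc_title)
) TYPE=HEAP;
```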
Slow saving impacts everyone who tries to edit articles; four edits per minute may be _relatively_ rare compared to page views, but we're still running thousands of edits per day and it's a fundamental part of what a wiki is. It's absolutely vital that editing be both swift and bug-free, and if we can reduce the opportunities for saving to get hung up, so much the better.
> Yeah yeah yeah. I still think we should care more about the real showstoppers. But hey, you can always _code it_. (Finally an opportunity to strike back ;-)
Touché. :) My point is just that we need to keep that critical path clean and smooth -- and working. (I would consider not differentiating live from broken links, or getting frequent failures on page save to be fatal flaws, whereas not having a working search or orphans function is just danged annoying.)
> If this downtime is unacceptable, we might indeed have to think about a query-only server with somewhat delayed data availability. This could be a replacement for the sysops, too. Mirroring the Wikipedia database files (raw) should be no issue with a SCSI system, or a low-priority copy process.
Sure, MySQL's database replication can keep a synched copy of the db on another server. (That replica could also provide some emergency fail-over in case the main machine croaks.)
The wiki would just need a config option to query the replicated server for certain slow/nonessential operations (search, various special pages, sysop queries) and leave the main db server free to take care of the business of showing and saving pages and logging in users.
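Something as simple as this, say (these variable names are purely hypothetical, not existing settings -- just to show the shape of the idea):

```php
<?php
// Hypothetical config sketch: route slow, read-only queries to the
// replicated slave; keep edits, page views, and logins on the master.
$wgDBserver     = "master.internal";   // saves, views, logins
$wgDBreadServer = "replica.internal";  // search, special pages, sysop queries
?>
```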
However this is all academic until we have reason to believe that a second server will be available to us in the near future.
> Maybe we should stop looking in the Himalayan mountains and start searching the lowlands... In other words: Don't search for those who will do it for society or for the glory. Just hand over the cash and be done with it.
A lovely idea, but there _isn't_ any cash as of yet, nor a non-profit foundation to formally solicit donations with which to fund programmers. Until this gets done, or unless someone wants to fund people more directly, all we've got is volunteer developers, who are only rarely unemployed database gurus who can spend all day working on Wikipedia. :)
-- brion vibber (brion @ pobox.com)