Is it just me, or is the server bogged down? 50% of clicks to another page are timing out, and the ones that succeed take multiple minutes. Hardly seems worthwhile trying to edit anything when it's like this. Is the load coming from users or wikipedians or what?
Stan
(Brion Vibber brion@pobox.com): On Mon, 2003-04-28 at 11:26, Stan Shebs wrote:
Is it just me, or is the server bogged down?
Ah, you must be new here. :)
Yes, the server is overloaded because there are too many people trying to use the site.
Grub seems to be doing its thing at the moment as well.
Brion Vibber wrote:
On Mon, 2003-04-28 at 11:26, Stan Shebs wrote:
Is it just me, or is the server bogged down?
Ah, you must be new here. :)
Yes, the server is overloaded because there are too many people trying to use the site.
It seems more frequently bogged down than two months ago, which I guess is good if it means more random people are coming by. Does anybody have numbers comparing load due to logged-in users vs random anons?
Stan
Stan Shebs wrote:
It seems more frequently bogged down than two months ago, which I guess is good if it means more random people are coming by. Does anybody have numbers comparing load due to logged-in users vs random anons?
I only see increased complaints from logged-in users on the German Wikipedia. The main worry is whether the frequent impossibility of editing articles (on the web, getting no answer from the server for over 10 minutes amounts to the site being unreachable) drives away new authors, and of course some of the old ones.
From my own experience, during the hours when normal people would write articles for Wikipedia, the best you can do is write mails here.
I hope you get the new server running as fast as possible.
Smurf
Brion Vibber brion@pobox.com writes:
Yes, the server is overloaded because there are too many people trying to use the site.
If the server cannot answer within an acceptable time frame (10 s?), it should say so. Letting editors wait longer than that is user-unfriendly.
If the database is too busy, return this info to the editor immediately, please.
It's amazingly simple what needs to be done:
either
software-wise: make Wikipedia less database-reliant. Most of Wikipedia's CPU time is being wasted on relational operations for ultimately simple queries. E.g., when you go to a page, it hits the database; this is clearly unacceptable for a site of this scale. A simple solution is to dump out current versions of articles to file for an orders-of-magnitude performance increase. You could even have pre-prepared edit pages as well. And I'm even told that the database is still being hit for "recent changes", whereas it should be dumped to file every 1-5 minutes.
hardware wise: put up a donation sign and run a completely transparent donation campaign. eg: say "we need $4000 for a new server, we have 2000 so far".
The hardware donation issue will eventually become necessary, but it would be an embarrassing course of action given what should have been done software-wise ages ago.
I'll do it if you give me an admin account.
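The "dump current versions to file" idea above amounts to a write-through static cache. A minimal sketch in Python (the function and directory names here are illustrative assumptions, not the actual Wikipedia codebase, and real rendering would of course parse wikitext):

```python
# Hypothetical sketch: on every save the article goes into the database
# *and* is rendered to a static file, so ordinary page views never
# touch the database at all.
import os
import tempfile

CACHE_DIR = tempfile.mkdtemp()   # stand-in for the web server's doc root
database = {}                    # stand-in for the SQL store

def render_html(title, wikitext):
    # Real rendering would parse wikitext; here we just wrap it.
    return f"<html><body><h1>{title}</h1><p>{wikitext}</p></body></html>"

def save_article(title, wikitext):
    """Write-through: update the DB, then refresh the static copy."""
    database[title] = wikitext
    path = os.path.join(CACHE_DIR, title + ".html")
    with open(path, "w") as f:
        f.write(render_html(title, wikitext))
    return path

def view_article(title):
    """Page views read the static file only -- no database query."""
    with open(os.path.join(CACHE_DIR, title + ".html")) as f:
        return f.read()
```

The web server can then serve those files directly; the database is only touched on edits.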
--- Karl Eichwalder ke@gnu.franken.de wrote:
Brion Vibber brion@pobox.com writes:
Yes, the server is overloaded because there are too many people trying to use the site.
If the server cannot answer within an acceptable time frame (10 s?), it should say so. Letting editors wait longer than that is user-unfriendly.
If the database is too busy, return this info to the editor immediately, please.
It's amazingly simple what needs to be done:
I always fear and like simplicity at the same time :)
either
software-wise: make Wikipedia less database-reliant. Most of Wikipedia's CPU time is being wasted on relational operations for ultimately simple queries. E.g., when you go to a page, it hits the database; this is clearly unacceptable for a site of this scale. A simple solution is to dump out current versions of articles to file for an orders-of-magnitude performance increase. You could even have pre-prepared edit pages as well. And I'm even told that the database is still being hit for "recent changes", whereas it should be dumped to file every 1-5 minutes.
Caching is the most widely used strategy in dynamic websites nowadays. That is to say, the commit mechanism could both add the article to the DB and commit a pure-HTML version into a cache that everybody reads from. Then, if disk space is a constraint, you can choose to cache only pages with more than 1000 hits in the last week, or something like that.
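The "dump recent changes to file every 1-5 minutes" suggestion quoted above is a time-based cache: the expensive query runs at most once per interval, and every request in between is served the last snapshot. A minimal sketch, assuming a pluggable clock for illustration (none of these names come from the actual Wikipedia code):

```python
# Hypothetical sketch: regenerate the recent-changes page at most once
# per interval; intermediate requests read the cached snapshot.
import time

REFRESH_INTERVAL = 300  # seconds (5 minutes)

class RecentChangesCache:
    def __init__(self, query_fn, interval=REFRESH_INTERVAL, clock=time.time):
        self.query_fn = query_fn   # the expensive database query
        self.interval = interval
        self.clock = clock
        self.snapshot = None
        self.stamp = -float("inf")
        self.db_hits = 0           # counter, just to demonstrate the savings

    def get(self):
        now = self.clock()
        if now - self.stamp >= self.interval:
            # Snapshot is stale (or missing): hit the database once.
            self.snapshot = self.query_fn()
            self.stamp = now
            self.db_hits += 1
        return self.snapshot
```

With a 5-minute interval, a thousand page views in that window cost one database query instead of a thousand.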
hardware wise:
[...]
Before buying hardware, maybe rethinking the architecture could be a plus!
And last but not least, I guess it has already been done, but there could be some tuning on the servers:
- the filesystem
- the web server itself
- the DB
- swapping in some transparent component
And finally, there could be a mirror strategy: http://mirror1.fr.wikipedia.org => read-only, for instance, while the edit option would point back to the core server :)
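The read-only mirror idea boils down to a routing rule: anything that modifies state goes to the core server, everything else can be served by a mirror. A tiny sketch (the hostnames and the `action=edit` check are assumptions for illustration, not actual Wikipedia configuration):

```python
# Hypothetical sketch: route read traffic to a mirror, edits to the core.
CORE_SERVER = "core.wikipedia.org"           # illustrative hostname
MIRROR_SERVER = "mirror1.fr.wikipedia.org"   # illustrative hostname

def route(path, method):
    """Pick a backend for a request: writes and edit views go to the
    core server; plain reads can be served by the read-only mirror."""
    if method == "POST" or "action=edit" in path:
        return CORE_SERVER
    return MIRROR_SERVER
```

The same rule could live in the front-end proxy config rather than in application code; the point is simply that the (cheap, numerous) reads never load the (scarce) writable server.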