Tyan S2462 motherboard with
Dual AMD Athlon MP 1800+/266FSB
2 GB of ECC Reg. DDR-266/PC2100 RAM (4 x 512 MB modules)
IBM 36 GB Ultra160 SCSI drive, 10K RPM
Dual onboard 3Com 10/100 adapters
For the install, we'll go with MySQL 4.0 unless someone gives us a
good reason not to -- this will let us cut the minimum keyword
length in the search engine down to less than 4 characters.
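For the curious: as far as I understand it, the knob involved is MySQL 4.0's
ft_min_word_len startup variable, and the fulltext indexes have to be rebuilt
after changing it. A rough, untested sketch of the my.cnf change (the exact
value is still up for discussion):

    [mysqld]
    # Index words of 3 characters and up (the default minimum is 4).
    ft_min_word_len=3

    # After restarting mysqld, rebuild the fulltext index on the affected
    # table, e.g. with: REPAIR TABLE <tablename> QUICK;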
We'll probably install the latest stable kernel at the time we get the machine --
2.4.18 right now.
I'm tempted to go with Apache 2.0.36, but I haven't used Apache 2.x
in production, so maybe we won't start with that. Opinions welcome.
This should be a pretty sweet setup.
The new server will host Wikipedia and Nupedia ONLY.
--Jimbo
Good news! I'm buying Wikipedia a new and much more powerful server.
Bomis.com needs a new server in our web cluster. As it turns out,
ross.bomis.com (the current wikipedia server) is physically identical
to our existing servers in our web cluster (because I bought ross at
the same time), and is therefore ideal to include in the web cluster
without complicating my load balancing over there.
This means that the newly purchased server, which should be here in a
couple of weeks, will be set up exclusively for wikipedia. I will be
moving personal websites of mine (like jimmywales.com and
kirawales.com) and experimental websites of mine OFF this machine, but
not to the new machine.
The new machine will be set up with a different root password, and no
real connection to the rest of my network. This means that I can give
developers Unix shell accounts on the machine, and even the root
password in some cases.
The new machine will be a dual Athlon with 2 GB of RAM and a much
bigger SCSI hard drive than we have now. The performance of the
machine should be a lot better than the existing single PIII with 1
GB of RAM.
This will all come to pass in the next couple of weeks. This will remove
me as a key bottleneck in the development/release cycle.
--Jimbo
-- Original Message --
>Gareth Owen wrote:
>>
>> Karen AKA Kajikit <kaji(a)labyrinth.net.au> writes:
>>
>> > Lars Aronsson wrote:
>> >> I don't know what caused this, but I love it.
>> >> Now I can start to promote Wikipedia more actively.
>> >
>> > It's good isn't it!
>>
>> I'm glad someone's getting joy. I'm timing out 4 times out of 5...
>
>I said it WAS good... alas it seems to have ground to a halt again. I
>went from rapid access to all pages to no access at all. :(
I had similar experiences today - either really fast, or not at all. Strangely,
I loaded one page quickly, a link from it (opened in a new window) timed out, and
another one (started a few seconds later) was fast again.
The only explanation I can come up with is that some requests go to a slow
thread/process, which would match the "really big process" bug we encountered
some time ago. Jimbo's top dump showed a zombie process, which was probably
it.
Is there a way we can find out whether the bad process starts only on certain
pages (e.g., the Most Wanted), or whether it just happens at random? That would
help enormously in tracking down the error.
Magnus
Since 21:00 GMT (2 pm PDT) Wednesday, Wikipedia has been really fast:
- Static images load in less than one second.
- Wiki pages load in 5 seconds (often less than 2).
- Recent Changes loads in 5 seconds in 86% of my samples.
- No responses ever take longer than 15 seconds.
I don't know what caused this, but I love it.
Now I can start to promote Wikipedia more actively.
Note that I'm in Sweden and the server is in San Diego, and the single
roundtrip time (ping) alone accounts for 0.25 seconds. Theoretically
(speed of light, size of Earth), this could be reduced to 0.10
seconds. Starting "wget", opening a socket (one roundtrip), sending a
request, and receiving a response (second roundtrip) containing the
Wikipedia logotype, all in 0.70 seconds is pretty amazing. It never
took longer than 0.91 seconds in the last 14 hours.
--
Lars Aronsson <lars(a)aronsson.se>
tel +46-70-7891609
http://aronsson.se/ http://elektrosmog.nu/ http://susning.nu/
Hello,
I just clicked a red link (indicating that no article exists yet).
The link had an &action=edit suffix and thus opened the new page
in edit mode. The only problem was that in the meantime -- just after the
page was put into the page cache -- the article had been written. So I ended up
in edit mode on a perfectly fine article. While this is probably OK for a
Wikipedia contributor, I think it might be confusing for a "user".
I would like to ask whether it might make sense to change the URL behind
red links so that it does not end in "&action=edit". The automatic edit mode
would then only be triggered if the article really does not exist, and for
articles written in the meantime the link would simply show the expected
article.
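To illustrate what I mean, here is a rough sketch; the function names are
made up for the example and are not the ones actually used in the code:

    <?php
    # Sketch only: what the target of a red link could do instead of
    # always jumping straight to &action=edit. Helper names are invented.
    function showOrEdit( $title ) {
        if ( articleExists( $title ) ) {
            # The article was written in the meantime: just show it.
            showArticle( $title );
        } else {
            # Still missing: fall back to the edit form as before.
            showEditForm( $title );
        }
    }
    ?>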
Best Regards,
jens frank
I'm looking again at my log files, comparing three days:
- Thursday May 9, before Jimmy's change (old talk links)
- Friday May 10, after Jimmy's change (no talk links)
- Tuesday May 14, with the new version (new talk links)
The overall performance was best on Friday. It is now getting worse
again, albeit not as bad as Thursday. The new version is a real
improvement over the old talk links, but we aren't quite done yet.
The number of OK responses (HTTP status code 200) which take absurdly
long (longer than 60 seconds) is still very high (3-30 %), with the
Main Page of the English Wikipedia being the main exception (0 %).
Response times above some limit (say, 30, 60 or 120 seconds) can be
defined as absurdly long, because the user will have left for other
websites and is no longer waiting for the response. Instead of
spending more system resources (CPU cycles and allocated memory) on
these requests, it would be better to set a hard timeout (in PHP or
Apache) and return an error message that says "sorry for the delay".
This would free up system resources that can be better used to serve
other requests.
In <http://www.php.net/manual/en/function.set-time-limit.php>, the PHP
function set_time_limit() is said to have a default of 30 seconds,
unless the configuration file has defined max_execution_time. Will
calling this function set the time limit for the current request only,
or set a permanent value for the server? What happens when PHP
execution times out? Is the connection to the client abruptly closed?
Or is an error message returned? Does an error message appear in the
log file? I haven't seen any timeouts of this kind.
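If someone wants to experiment, a minimal and untested sketch of what I have
in mind would be something like this near the top of the wiki script:

    <?php
    # Sketch only: cap each request at 30 seconds of execution time.
    # As far as I understand, set_time_limit() affects only the current
    # script run, not the server as a whole.
    set_time_limit( 30 );
    # ... rest of the script ...
    ?>

Turning the resulting fatal error into a polite "sorry for the delay" page
would need additional work (perhaps a shutdown function that checks whether
the script finished normally) -- hence my questions above.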
However, even if a PHP execution timeout is set, the limit will not
include time that the request spent waiting to start execution. This
wait could happen in the UNIX socket listen backlog, waiting for the
connection to be accepted, or inside Apache, waiting for a child
process to become available. Increasing the value of a parameter like
ListenBacklog (in Apache httpd.conf) is not necessarily a solution,
because this will only keep more requests in queue, increasing overall
response time. Instead, the problem should be fixed at the exit end
of the queue. The key to better performance is keeping the server
fast and queues short, getting things done.
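For concreteness, the Apache 1.3 directives involved would be something like
this in httpd.conf (the numbers are only placeholders, not a recommendation,
and as argued above a larger backlog by itself just makes the queue longer):

    # Placeholder values for illustration only.
    ListenBacklog 511    # OS-level queue of connections not yet accepted
    MaxClients    150    # upper limit on concurrent child processes
    Timeout       300    # network send/receive timeout, not CPU time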
Here are some pages on Apache performance issues:
- Hints on Running a High-Performance Web Server,
http://httpd.apache.org/docs/misc/perf.html
- Apache Performance Notes,
http://httpd.apache.org/docs/misc/perf-tuning.html
- Professional Apache, chapter 8,
http://www.devshed.com/Talk/Books/ProApache/
- Tuning Your Apache Web Server,
http://dcb.sun.com/practices/howtos/tuning_apache.jsp
- Linux HTTP Benchmarking HOWTO,
http://www.xenoclast.org/doc/benchmark/HTTP-benchmarking-HOWTO/
--
Lars Aronsson (lars(a)aronsson.se)
Aronsson Datateknik
Teknikringen 1e, SE-583 30 Linuxköping, Sweden
tel +46-70-7891609
http://aronsson.se/ http://elektrosmog.nu/ http://susning.nu/
> Anyway, currently the 'pedia seems to run fast and stable.
I have been seeing several fast accesses and every once in a while an
extremely slow one. It could be that one of the special functions (or
search?) behaves badly.
> Is the "really big process" bug back??
What is the "really big process" bug?
I have wondered before whether it's possible to have memory
leaks in PHP scripts: is everything automatically released once the
page is served? Somebody somewhere could be eating memory, and that
would slow everything down. If memory is indeed the bottleneck, then
caching might not be a good idea, since we would in effect transport twice
as much data between the database and the PHP script, and I would expect
the data to be buffered in memory on both ends.
Some top/ps outputs would really help.
Axel
On Tuesday, 14 May 2002, Jimmy Wales wrote:
> I've switched to the CVS version and corrected a few bugs. But the
> site is sluggish again, and load is creeping upward. I wonder if
> getOtherNamespaces is the issue again?
It could be the temporary workaround I had put in place for the
certain-common-words-cause-searches-to-fail-completely bug, which falls
back to a slow but reliable LIKE search when the MATCH fails.
I've disabled the fallback in CVS; update special_dosearch.php and see if
performance (if not search results) improves.
The search system needs some major work, alas...
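For anyone curious, the difference is roughly this (paraphrased; the table
and column names are illustrative, not the literal code in
special_dosearch.php):

    <?php
    # Paraphrased comparison; assumes a database connection is already open.
    $searchterms = "example words";   # placeholder search input

    # Normal fulltext search -- fast when the fulltext index can be used:
    $sql_match = "SELECT cur_title FROM cur" .
                 " WHERE MATCH(cur_text) AGAINST('" . $searchterms . "')";

    # The fallback now disabled -- scans every row:
    $sql_like  = "SELECT cur_title FROM cur" .
                 " WHERE cur_text LIKE '%" . $searchterms . "%'";
    ?>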
-- brion vibber (brion @ pobox.com)
I've just checked in a rewrite of the watchlist function; instead of
doing a zillion separate database queries, it now does just two: one to
get the watchlist from the user table, the other to grab the info needed
to fill out the table/list.
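In outline, the new version does something like this (simplified; table,
column, and variable names here are illustrative rather than the literal
code, and a database connection is assumed to be open already):

    <?php
    $userID = 1;   # example user id

    # Query 1: fetch the user's watchlist (stored in the user table).
    $res    = mysql_query( "SELECT user_watchlist FROM user" .
                           " WHERE user_id=" . $userID );
    $row    = mysql_fetch_array( $res );
    $titles = explode( "\n", $row["user_watchlist"] );

    # Query 2: one query for everything needed to render the list.
    $list = "'" . implode( "','", array_map( "addslashes", $titles ) ) . "'";
    $res  = mysql_query( "SELECT cur_namespace, cur_title, cur_timestamp" .
                         " FROM cur WHERE cur_title IN (" . $list . ")" );
    ?>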
Please check it over and make sure it looks okay before we let yet
another horribly misshapen watchlist version wander over the land!
As a hack to get non-blank, non-talk namespaces listed until we have a
separate field that contains the title but *not* the namespace, all
valid namespaces get prepended to the titles-to-be-watched in the list
given to the database to check against.
I've also fixed the bug whereby clicking "add to my watchlist" on a page
with a non-blank namespace results in a redirect to the equivalent title
in the blank namespace.
(It would also be possible, if preferred, to limit the namespace-laxity
in the watchlist to just "X" and "X talk" pairs. Personally, I've seen
enough pages get moved from blank to "wikipedia:" that I'd prefer to
keep complete namespace agnosticism in my watchlist: the less I have to
add and subtract things to keep my watchlist relevant, the better.)
-- brion vibber (brion @ pobox.com)