Austin et al.-
Is there any actual example of a dynamic-content website that handles
as much traffic as we do with less hardware? My guess is that in most
"professional" deployments, the hardware budget for a network of sites
like Wikimedia's would be at least one or two orders of magnitude
larger than ours, not to mention an impressive full-time staff.
The truth is probably that this ragtag team of volunteers has
accomplished things on a shoestring budget -- we run one of the largest
websites on the planet! -- where countless dot-coms with far more
resources at their disposal have utterly failed. And Wikipedia is not
just any website; it's an extremely dynamic and rapidly growing one.
Much development energy in recent years has focused on performance
optimization: parser and message caching in the database, memory
caching with memcached, bytecode caching, Squid proxies, and load
balancing. The code has been profiled, problem areas have been isolated
and optimized, and slow queries are now run only periodically.
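To illustrate the parser-caching idea mentioned above: the expensive step is rendering wikitext to HTML, so the rendered output is cached and reused until the page changes. Here is a minimal cache-aside sketch in Python; the function names and key format are hypothetical, and a plain dict stands in for a memcached client -- this is not MediaWiki's actual API.

```python
cache = {}  # stand-in for a memcached client

def render_page(title, wikitext):
    """Expensive parse step; in MediaWiki this is the wikitext parser."""
    return f"<h1>{title}</h1><p>{wikitext}</p>"

def get_page_html(title, wikitext, revision_id):
    # The key includes the revision ID, so an edit (new revision)
    # automatically misses the cache and triggers a fresh parse.
    key = f"parsercache:{title}:{revision_id}"
    html = cache.get(key)
    if html is None:              # cache miss: parse once, then store
        html = render_page(title, wikitext)
        cache[key] = html
    return html                   # cache hit on subsequent requests
```

Keying on the revision sidesteps explicit invalidation: stale entries are simply never requested again and can be evicted by the cache's normal expiry.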
That is not to say that we shouldn't invest in both software
development and hardware/network deployment, that we don't need to
decentralize our infrastructure, or that we don't need to build
reserves to cope with growth better than we currently do. Perhaps we
can work on a roadmap outlining the key issues, and then the Board
could allocate resources to address them?
Erik