Mark Williamson wrote:
It would be especially neat if we could get at least a little bit of server space somewhere in Africa, Australia, and South America.
Having said that, it would also be easy and nice to get server space in California or somewhere else in the western US (Google would be California, would it not?)
I think it's always best to have everything distributed over multiple locations in case of a disaster, natural or otherwise. That way we won't have to worry about Wikipedia in hurricanes, and if one location gets broken into, burns down, explodes, or disappears mysteriously in a puff of steam, Wikipedia won't have problems because of it.
Domas has been advocating further expansion of the Florida colo, and I can see his point. Even a 10ms RTT delay (say from Tampa to Miami) would have a large negative impact on performance if we tried to have a single memcached namespace stretching over both locations. Our idea for using the European and Asian data centres is to move entire wikis. Each wiki needs to be served by a master database server and a cluster of apaches in close proximity to each other.
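The cost of stretching a memcached namespace across colos can be sketched with a back-of-the-envelope calculation. The numbers below (gets per render, compute time, RTTs) are illustrative assumptions, not measurements: the point is that sequential cache gets pay the full round-trip time on every request, so even a small cross-colo RTT multiplies quickly.

```python
# Illustrative latency estimate: a page render that issues sequential
# memcached gets pays one full RTT per get, on top of its CPU time.
# All figures here are assumed for the sake of the example.

def render_latency_ms(num_gets: int, rtt_ms: float, compute_ms: float = 50.0) -> float:
    """Total render time: CPU work plus one RTT per sequential cache get."""
    return compute_ms + num_gets * rtt_ms

local = render_latency_ms(num_gets=50, rtt_ms=0.1)    # memcached in the same rack
remote = render_latency_ms(num_gets=50, rtt_ms=10.0)  # e.g. a Tampa-to-Miami hop

print(f"local:  {local:.0f} ms")   # 55 ms
print(f"remote: {remote:.0f} ms")  # 550 ms
```

Under these assumptions a 10ms RTT turns a 55ms render into a 550ms one, which is why the master database, the apaches, and memcached need to stay in close proximity.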
Because of this need for clustering, I don't think we can reasonably expect to cope with the Tampa colo disappearing without some reduction in performance. It's not worth having a slow wiki 365 days a year just so that we can recover quickly after a one-in-a-million event. However, it's reasonable to expect that there will be no loss of data, and that we'll be able to get read-only service up fairly quickly.
Squid servers are a different story: they can be spread all over the world. There's no reason to clump them together except for convenience of administration and maintenance, and performance-wise it's better if they're near the users. That's why we've set up a page for organisations that wish to volunteer to host individual squids:
http://wp.wikidev.net/Volunteer_Squid_Sites
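The "near the users" criterion can be sketched as a simple selection problem: route each reader to the volunteer squid site with the lowest round-trip time from their vantage point. The site names and RTT figures below are hypothetical, purely to illustrate the idea.

```python
# Minimal sketch (hypothetical sites and RTTs): choose the squid site
# closest to the user, measured by round-trip time. Squid placement is
# driven by proximity to readers, not to the master colo.

rtt_by_site_ms = {   # assumed measurements from one user's location
    "tampa": 120.0,
    "paris": 35.0,
    "seoul": 210.0,
}

def nearest_site(rtt_by_site: dict) -> str:
    """Return the site name with the smallest RTT."""
    return min(rtt_by_site, key=rtt_by_site.get)

print(nearest_site(rtt_by_site_ms))  # paris
```

In practice this kind of routing would be done with DNS or similar infrastructure rather than per-user code, but the selection logic is the same.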
-- Tim Starling