On Thu, 08 Jan 2004 14:16:48 -0800, Jimmy Wales wrote:
> Even so, I don't want us to be too stingy. What I'm hearing loud and clear is that people want wikipedia to be fast, highly responsive, and reliable. We don't want to be sketchy about reaching those goals, we want to embrace those goals wholeheartedly and make sure we aren't sitting on money in the bank while the site lags.
In the case of the Squids it wouldn't exactly be a case of 'survive on one'. A 2.4 GHz Pentium or similar with 4 GB of RAM will handle, at the very minimum, 15 times the current traffic; I would bet it will do 25 times the current load. I fear Wikipedia won't be that big in three months' time...
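(To give an idea of what I mean by 'not demanding': on a 4 GB box most of the RAM can go straight to the hot-object cache. A squid.conf fragment along these lines would be a starting point; the numbers are guesses to be tuned, not tested values:

    # squid.conf fragment - cache sizing for a 4 GB box (numbers are guesses)
    cache_mem 2048 MB                              # in-memory hot-object cache
    maximum_object_size 4096 KB                    # keep huge objects out of the cache
    cache_dir aufs /var/spool/squid 10000 16 256   # ~10 GB on-disk cache

cache_mem has to stay well below physical RAM, since Squid's index and process overhead eat the rest.)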
From the current plans it seems that there will be two database servers; if we went for three Squids it would make sense to buy a third DB as well, and the same for any load balancers (if we want any) and file servers. That would be very expensive. It's not hard to add another Squid if one goes down for a longer time (no data to transfer, for once), but I would guess it's harder to do with the DB. The other thing is that three Squids might be harder to connect with Heartbeat; I'm pretty sure it's possible, but it's definitely more work. How much RAM is in larousse/pliny? One of them could readily be configured as a third (possibly standby) Squid and DNS/mail server; the Squids are not demanding, after all.
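To illustrate the Heartbeat point: the standard setup is strictly pairwise, one node watching the other and taking over a service IP when it goes silent. Roughly like this (node names and IP invented for the example):

    # /etc/ha.d/ha.cf - two-node failover pair
    bcast eth0              # heartbeats over broadcast on eth0
    keepalive 2             # seconds between heartbeats
    deadtime 10             # declare the peer dead after 10s of silence
    node squid1 squid2      # must match `uname -n` on both boxes

    # /etc/ha.d/haresources - squid1 normally holds the service IP
    squid1 IPaddr::10.0.0.10 squid

A third Squid means either a second pair of config files or chaining the nodes, which is where the extra work comes in.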
I'd propose to get going as soon as possible with at least 6 Apaches, 2 Squids, 2 DBs and 2 file servers (probably one of them combined with a DB?), plus a mail/DNS box. Once that's running we can add a third Squid and more Apaches or DBs, depending on the load we observe.
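For reference, the accelerator setup each of those Squids would run is short; with Squid 2.5 it's roughly the following (backend address invented, and with several Apaches behind it the single backend would be replaced by a balancer or a redirector):

    # squid.conf fragment - Squid 2.5 as HTTP accelerator in front of Apache
    http_port 80
    httpd_accel_host 10.0.0.20          # backend Apache (or balancer VIP)
    httpd_accel_port 80
    httpd_accel_single_host on
    httpd_accel_uses_host_header on     # needed for name-based vhosts
    httpd_accel_with_proxy off          # pure accelerator, no forward proxying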
And we can do some testing with mirrors (possibly including Squid3/ESI) until then. Somebody has to pay for the bandwidth, after all ;-)
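To sketch what the ESI testing would look at: ESI lets Squid3 assemble a page from fragments with different cache lifetimes, so the article body could be served from cache while e.g. a per-user bar is fetched fresh. In the markup it's just an include tag (fragment URL invented):

    <p>Long-lived article text, cached at the Squid...</p>
    <!-- fragment fetched and spliced in by the cache at delivery time -->
    <esi:include src="http://en.wikipedia.org/esi/userbar" onerror="continue"/>

Whether Squid3's ESI support is stable enough yet is exactly what the mirror testing should tell us.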