On Fri, 02 Jan 2004 23:34:14 +0000, Nick Hill wrote:
The probability of many fairly reliable cheap units with no common point of failure breaking down simultaneously is much lower than the probability of a costly reliable unit failing.
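(A quick illustration with made-up numbers: if each of three cheap units fails independently with probability 0.05 in a given month, the chance of all three failing together is 0.05^3 ≈ 0.0001, far below even a 0.01 failure probability for one expensive unit. The catch is the independence assumption: a shared switch, power feed or software bug reintroduces a common point of failure.)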
I agree with all you're saying and like the thought of having a global cluster with arbitration, but I have some doubts:
* What's the minimum hardware capable of running the databases, the web server, the cache etc.? Is all this possible on a cheap unit while still being fast? I would expect a RAM requirement of at least 4 GB, but I might be wrong. This would certainly increase once more languages start to grow, so it might become necessary to have separate machines for separate languages.
* With the number of nodes increasing, replication traffic might be fairly high (imagine a mass undo of somebody's changes being replicated to ten machines).
* Encrypting the replication traffic will drain the CPU; even a simple scp shows this. Imagine the same load for ten parallel streams (see the rough sketch after this list).
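Here is a rough back-of-envelope in Python to show how replication bandwidth and encryption load scale with the number of replicas. Every input number below is a guess of mine, not a measurement:

# Back-of-envelope: replication fan-out cost for one master.
# All the input numbers are guesses, not measurements.
EDITS_PER_SECOND = 5        # assumed average write rate across all wikis
BYTES_PER_EDIT = 50_000     # assumed average replicated change (full article text)
REPLICAS = 10               # slave machines fed by the master
CIPHER_MB_PER_SEC = 30      # assumed per-CPU encryption throughput for the ssh stream

stream = EDITS_PER_SECOND * BYTES_PER_EDIT        # bytes/s to one replica
total = stream * REPLICAS                         # bytes/s to all replicas
cpu = (total / 1e6) / CIPHER_MB_PER_SEC           # fraction of one CPU spent encrypting

print(f"per-replica stream: {stream / 1e3:.0f} kB/s")
print(f"total outbound:     {total / 1e6:.2f} MB/s for {REPLICAS} replicas")
print(f"encryption load:    {cpu:.1%} of one CPU")

On those guesses the encryption itself (about 8% of one CPU) would be survivable; the real point is that both bandwidth and CPU load grow linearly with the number of replicas, so the master's uplink and CPU set the ceiling.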
If no single machine is critical and the machines are widely separated, we would not even need to worry about whether they are equipped with UPSes or redundant power supplies.
If the switchover is quick, this would be perfect: no need for separate backups and so on.
To get an idea of the hardware requirements, it would be nice if somebody could install all of Wikipedia on a cheap box and do some load testing on it (if possible, with replication).
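Something as simple as the following Python sketch would do for a first read-load pass. The URL is a placeholder for such a test install, and the thread and request counts are arbitrary:

# Minimal concurrent read load test; URL and counts are placeholders.
import threading, time, urllib.request

URL = "http://testbox.example.org/wiki/Special:Random"   # hypothetical test box
THREADS = 20            # concurrent simulated readers
REQUESTS_EACH = 50      # requests per reader

latencies = []
lock = threading.Lock()

def worker():
    for _ in range(REQUESTS_EACH):
        start = time.time()
        with urllib.request.urlopen(URL) as resp:
            resp.read()                       # force the full page transfer
        with lock:
            latencies.append(time.time() - start)

threads = [threading.Thread(target=worker) for _ in range(THREADS)]
t0 = time.time()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.time() - t0

latencies.sort()
print(f"{len(latencies)} requests in {elapsed:.1f} s ({len(latencies) / elapsed:.1f} req/s)")
print(f"median {latencies[len(latencies) // 2] * 1000:.0f} ms, "
      f"95th pct {latencies[int(len(latencies) * 0.95)] * 1000:.0f} ms")

Hitting Special:Random exercises the database harder than refetching a single cached page would; testing write load would need a separate script that actually performs edits.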
Gabriel Wicke