Andy Spencer wrote:
> This would be done in a way analogous to projects such as SETI@home,
> where anyone with access to a server could install a client and host
> data.
There is very little analogy between your suggestion and SETI@home (or
Folding@home or distributed.net or any other distributed computing
project). Those distribute only CPU usage (and possibly RAM), but not
bandwidth usage.
Your idea requires that users (who are trying to read an article) be
redirected to some random volunteer computer that is running an HTTP
daemon. But what happens when that computer goes down? The central
server that does the redirecting would take a while to notice that the
host is down, and until then would keep redirecting requests to it.
Wikipedia would become very unreliable.
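To make the failure window concrete, here is a minimal sketch (all names
and the check interval are my own assumptions, not anything from this
thread): a central redirector that only health-checks volunteer mirrors
periodically keeps sending readers to a dead mirror until the next check
runs.

```python
import random

CHECK_INTERVAL = 60  # assumed seconds between health checks

class Redirector:
    """Hypothetical central redirector with stale liveness beliefs."""

    def __init__(self, mirrors):
        # mirror name -> believed-alive flag, updated only at check time
        self.believed_alive = {m: True for m in mirrors}

    def health_check(self, actually_alive):
        # Runs once per CHECK_INTERVAL; between checks, beliefs go stale.
        for m in self.believed_alive:
            self.believed_alive[m] = actually_alive[m]

    def redirect(self):
        # Pick any mirror the redirector *believes* is up.
        live = [m for m, ok in self.believed_alive.items() if ok]
        return random.choice(live) if live else None

# A mirror dies right after a check: until the next check, every request
# sent its way fails for the reader.
r = Redirector(["mirror-a", "mirror-b"])
actual = {"mirror-a": True, "mirror-b": True}
r.health_check(actual)
actual["mirror-b"] = False  # volunteer box goes down mid-window
failed = sum(1 for _ in range(100) if not actual[r.redirect()])
print(f"{failed}/100 requests hit the dead mirror before the next check")
```

Roughly half the requests in that window fail; shrinking CHECK_INTERVAL
helps, but frequent polling of thousands of volunteer hosts is itself a
load on the central server.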
> I do however have access to a server that's using roughly 0.5% of its
> CPU and 1.5% of its allocated bandwidth, and would be more than
> willing to contribute those resources if it were possible.
You may consider donating the CPU to a distributed computing project of
your choice. As for the bandwidth, I'm sure there are services on the
net that are trying hard to find people to mirror their large files
(download sites, for example).
Timwi