On 2/17/07, Timwi timwi@gmx.net wrote:
> There is very little analogy between your suggestion and SETI@home (or Folding@home or distributed.net or any other distributed computing project). Those distribute only CPU usage (and possibly RAM), but not bandwidth usage.
You're right that they operate differently, but I think a similar interface would be useful (e.g. you download a program that runs in the system tray or as a daemon).
> Your idea necessitates that users (who are trying to read an article) be redirected to some random volunteer computer that is running an HTTP daemon. But what do you do when it goes down? The central server that does the redirecting would take a while to determine that the volunteer is down, and until then it would continue to redirect requests to it. Wikipedia would become very unreliable.
Even if the central server had to ping a volunteer after every single request to check its status, that would still be a tiny fraction of the bandwidth taken up by sending a wiki page: a ping is a few dozen bytes, while a typical article page is tens of kilobytes.
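To make the idea concrete, here is a minimal sketch of the liveness bookkeeping the central redirector would need. Everything here is hypothetical (the class and method names, the 30-second timeout, the example hostnames are all my invention, not anything that exists in MediaWiki or Wikipedia's infrastructure); it only illustrates the rule "redirect to a volunteer we have heard from recently, presume silent ones are down":

```python
import time

# Hypothetical registry kept by the central redirecting server.
# A volunteer is "live" if we received a successful ping from it
# within the last `timeout` seconds; otherwise we presume it is
# down and stop redirecting readers to it.
class VolunteerRegistry:
    def __init__(self, timeout=30.0):
        self.timeout = timeout   # seconds of silence before a volunteer is presumed down
        self.last_seen = {}      # host -> timestamp of last successful ping

    def mark_seen(self, host, now=None):
        """Record a successful ping from (or request served by) a volunteer."""
        self.last_seen[host] = time.time() if now is None else now

    def pick(self, now=None):
        """Return some volunteer seen within the timeout window, or None.

        A real redirector would also balance load across volunteers;
        this only filters out hosts presumed down, which is the
        failure mode discussed above."""
        now = time.time() if now is None else now
        live = [h for h, t in self.last_seen.items() if now - t <= self.timeout]
        return live[0] if live else None

registry = VolunteerRegistry(timeout=30.0)
registry.mark_seen("volunteer-a.example", now=100.0)
registry.mark_seen("volunteer-b.example", now=125.0)
# At t=140, volunteer-a (silent for 40s) is presumed down,
# so only volunteer-b is eligible for redirects.
print(registry.pick(now=140.0))  # volunteer-b.example
```

The timeout is the crux of the reliability argument: until it expires, readers can still be redirected to a dead host, so a shorter timeout (or a ping after every request, as suggested above) trades a little control-channel bandwidth for faster failure detection.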