Oldak Quill wrote:
On 31/07/06, Lars Aronsson <lars(a)aronsson.se> wrote:
What exactly are you trying to achieve? If you
want immunity
against legal threats in the western world (and for what outlaw
purpose would you want this?), you probably have to relocate to
North Korea, Syria or Zimbabwe.
Since we're being a little unrealistic here anyway, I would suggest we
develop some kind of distributed hosting system to supplement our
servers. It would be similar to those distributed computing clients
(Folding@home, Seti@home, &c.), except the client would be using some
of your disc space and bandwidth instead of your computing power.
But then, I'm only dreaming.
Both of those systems use centralized database servers and coordination
bandwidth; the main advantage is in distributing processing. For example,
when I was running Seti@Home and Folding@Home, a processing packet would
take a couple of minutes to download and then spend days processing in
the screensaver before connecting again to upload results and download
another processing packet.
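The work-unit cycle described above can be sketched in a few lines. This is a toy illustration, not the real Seti@Home/BOINC protocol: the coordinator queue, the packet format, and the trivial "computation" are all stand-ins. The point is that the network is only touched briefly at the edges of each cycle, while the heavy work happens locally.

```python
# Minimal sketch of the download/process/upload cycle (all names invented).

def fetch_work_unit(coordinator_queue):
    """Brief network call in a real client; here it just pops a unit."""
    return coordinator_queue.pop(0) if coordinator_queue else None

def process(unit):
    """Stands in for days of local screensaver computation on one packet."""
    return sum(unit)  # trivial placeholder computation

def run_client(coordinator_queue, results):
    """Loop: download a packet, crunch it locally, upload the result, repeat."""
    while True:
        unit = fetch_work_unit(coordinator_queue)
        if unit is None:
            break
        results.append(process(unit))  # "upload" = report back to coordinator

queue = [[1, 2, 3], [10, 20], [5]]
results = []
run_client(queue, results)
print(results)  # [6, 30, 5]
```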
That might be useful if we ever have heavy processing requirements, such
as creating and maintaining read-only controlled files in a distributed
content cache that is served to the public unless they request the
editable version.
You might find http://www.dijjer.org/ of interest. This is a plugin
that provides distributed caching. It would probably take some
modifications to our server software to return a Dijjer-based URL for a
cached version when the user is not editing, and the current editable
version when they click the edit button. As I recall, a Dijjer URL
contacts a Dijjer server of some kind, so modification might be required
to talk to our own Dijjer URL server/tracker in order to guarantee
decent performance.
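The server-side change suggested above amounts to one branch in URL generation: anonymous readers get a URL pointing into the distributed cache, and only an edit request hits the live wiki servers. A hedged sketch, where the cache gateway address and the function are assumptions for illustration and not Dijjer's actual API:

```python
# Hypothetical gateway for our own Dijjer server/tracker (not a real host).
DIJJER_GATEWAY = "http://cache.example.org/dijjer/"

def url_for_request(article, wants_edit):
    """Return the cached read-only URL unless the user clicked edit."""
    if wants_edit:
        # Current editable version, straight from the central wiki servers.
        return ("http://en.wikipedia.org/w/index.php?title=%s&action=edit"
                % article)
    # Read-only controlled copy, served from the distributed cache.
    return DIJJER_GATEWAY + article

print(url_for_request("Main_Page", wants_edit=False))
# http://cache.example.org/dijjer/Main_Page
print(url_for_request("Main_Page", wants_edit=True))
# http://en.wikipedia.org/w/index.php?title=Main_Page&action=edit
```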
Oceanstore (currently prototyped as "Pond") or BitTorrent (particularly
for serving large backup files) might offer some reduced server loading
if we ever get to the point of serving controlled or official articles
to the non-editing public or casual users, while editing and improvement
continue in the background at the central WMF project server farm.
Last time I checked, Oceanstore uses a large number of servers to
establish core functionality; IIRC individual desktops can be used for
caching reliable chunks of information.
Likewise http://en.wikipedia.org/wiki/GNUnet, except I think it
distributes whole files to caches rather than reliability-enhanced
chunks for later reassembly and service to the requestor.
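To make the whole-file vs. reliability-enhanced-chunk distinction concrete, here is a toy example of the chunked approach: a file is split into pieces plus a single XOR parity piece, so it can still be reassembled if one cache node drops out. Real systems like Oceanstore use proper erasure codes; this single-parity scheme is only a sketch, not either system's actual format.

```python
from functools import reduce

def make_chunks(data, n):
    """Split data into n equal chunks (zero-padded) plus one XOR parity chunk."""
    size = -(-len(data) // n)  # ceiling division
    chunks = [data[i * size:(i + 1) * size].ljust(size, b"\0") for i in range(n)]
    parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*chunks))
    return chunks, parity

def reassemble(chunks, parity, original_size):
    """Recover the file even if exactly one chunk is missing (None)."""
    missing = [i for i, c in enumerate(chunks) if c is None]
    if missing:
        # XOR of every surviving chunk plus parity reproduces the lost chunk.
        survivors = [c for c in chunks if c is not None] + [parity]
        chunks = list(chunks)
        chunks[missing[0]] = bytes(
            reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))
    return b"".join(chunks)[:original_size]

data = b"controlled article text"
chunks, parity = make_chunks(data, 3)
chunks[1] = None  # simulate one cache node dropping out
print(reassemble(chunks, parity, len(data)))  # b'controlled article text'
```

A plain whole-file cache (my understanding of GNUnet's model) loses the file when its holder goes offline; the chunked scheme trades a little extra storage for survival of any single failure.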
later,
lazyquasar