Nick Hill wrote:
> I envisage many wikipedia servers around the world, supported by
> private individuals, companies and universities. Much like the
> system of mirror FTP and mirror web sites. All these servers are
> updated in real time from the core wikipedia server. From the
> user's perspective, all are equivalent.
My experience with situations like the one you describe tells me
that such a system can easily become more complex and cause more
overhead than the performance gain is worth, and that Moore's law
will give us the speed we need by the time we need it.
Do you have any experience with designing systems like this? Would you
write a prototype for this system that could be tested? The vision
sounds like science fiction to me, but a prototype that I can run is
not science fiction, so that would make all the difference.
Here is another vision: I envision a system where I can synchronize
my laptop or PDA with a wiki, then go offline and use it and update
it, and when I return to my office I can resynchronize the two.
I have no idea on how to implement this vision. I think it would be a
lot of work. But I think the result could be really useful.
I also see similarities between your vision and mine. The idea is
to express the update activity as a series of transactions (edit
submissions) that can be transferred to another instance, or to
multiple instances, and applied there. In either case, one must
handle the case where the transmission of updates is interrupted
or delayed, and the "edit conflicts" that would result. It doesn't
seem trivial to me.
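To make the idea concrete, here is a rough sketch of what such a
transaction log might look like. Everything in it is hypothetical
(the class and field names are made up, and it has nothing to do with
the actual wiki software); the point is only to show where the edit
conflict appears when updates arrive late or out of order:

```python
# Sketch: each edit records which revision it was based on, so a
# receiving instance can detect when a delayed or out-of-order
# update would clobber a newer edit. Hypothetical names throughout.

class Wiki:
    def __init__(self):
        self.pages = {}   # title -> (revision number, text)
        self.log = []     # transactions to ship to other instances

    def edit(self, title, new_text):
        """Record a local edit and return it as a transaction."""
        rev, _ = self.pages.get(title, (0, ""))
        txn = {"title": title, "base_rev": rev, "text": new_text}
        self.pages[title] = (rev + 1, new_text)
        self.log.append(txn)
        return txn

    def apply(self, txn):
        """Apply a transaction received from another instance.
        Returns True on success, False on an edit conflict."""
        rev, _ = self.pages.get(txn["title"], (0, ""))
        if txn["base_rev"] != rev:
            return False  # conflict: a newer edit happened in between
        self.pages[txn["title"]] = (rev + 1, txn["text"])
        return True
```

If a mirror applies a core transaction whose base_rev matches its own
revision, all is well; but if the mirror made an offline edit in the
meantime, apply() returns False and a human (or a merge algorithm)
has to resolve the conflict, which is exactly the non-trivial part.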
--
Lars Aronsson (lars(a)aronsson.se)
Aronsson Datateknik
Teknikringen 1e, SE-583 30 Linuxköping, Sweden
tel +46-70-7891609
http://aronsson.se/ http://elektrosmog.nu/ http://susning.nu/