Vandals are not a technical problem.
Asynchronous updating has been in production in, for instance, Novell NDS
for many, many years. Calling it Byzantine and research-only does not
negate the fact that our existing solution is not the only one. There are
reasons why we stick with what we have, but it does not follow that other
avenues are not open to us.
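To make the asynchronous-updating point concrete, here is a minimal, hypothetical sketch (class and variable names are mine, not anything from NDS) of eventual-consistency replication: each replica accepts writes locally with no coordination, exchanges updates later, and resolves conflicts by last-writer-wins on a logical timestamp.

```python
# Hypothetical sketch of asynchronous (eventual-consistency) replication,
# in the spirit of directory services like Novell NDS: replicas accept
# writes locally and exchange updates later, resolving conflicts by
# last-writer-wins on a logical timestamp.

import itertools

_clock = itertools.count(1)  # stand-in for a shared logical clock

class Replica:
    def __init__(self, name):
        self.name = name
        self.store = {}  # key -> (timestamp, value)

    def write(self, key, value):
        # Accept the write locally; no coordination with other replicas.
        self.store[key] = (next(_clock), value)

    def sync_from(self, other):
        # Pull updates asynchronously; the newer timestamp wins.
        for key, (ts, value) in other.store.items():
            if key not in self.store or self.store[key][0] < ts:
                self.store[key] = (ts, value)

a, b = Replica("a"), Replica("b")
a.write("Article:Foo", "rev 1")
b.write("Article:Foo", "rev 2")   # concurrent write elsewhere
a.sync_from(b)                    # replicas converge later
b.sync_from(a)
assert a.store == b.store         # eventual consistency
```

The point is only that writes need not block on global agreement; conflict resolution happens after the fact.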
On 24 January 2012 00:23, George Herbert <george.herbert(a)gmail.com> wrote:
On Mon, Jan 23, 2012 at 11:41 AM, Gerard Meijssen wrote:
What we need is a database with a robust and foolproof synchronisation
process. Latency is important when all the databases have to be
synchronised all the time. When articles only need to be the latest when
they are readied for editing, it is less of an issue.
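A minimal sketch of that "latest only when readied for editing" idea (all names here are hypothetical): reads are served from a possibly stale local copy, and only opening an article for editing pays the latency cost of asking the peers for the newest revision.

```python
# Hypothetical sketch: plain reads tolerate staleness; only opening an
# article for editing forces a fetch of the newest revision from peers.

class LazyMirror:
    def __init__(self, peers):
        self.peers = peers       # other copies of the database (dicts here)
        self.local = {}          # article -> (revision_id, text)

    def read(self, article):
        # Plain reads need no synchronisation at all.
        return self.local.get(article)

    def open_for_edit(self, article):
        # Only now do we pay the latency cost: ask every peer and take
        # the highest revision before handing the text to the editor.
        candidates = [self.local.get(article)] + [
            p.get(article) for p in self.peers
        ]
        latest = max((c for c in candidates if c), key=lambda c: c[0],
                     default=None)
        if latest:
            self.local[article] = latest
        return latest

peers = [{"Foo": (7, "newest text")}]
mirror = LazyMirror(peers)
mirror.local["Foo"] = (3, "stale text")
print(mirror.read("Foo"))           # -> (3, 'stale text'): fine for readers
print(mirror.open_for_edit("Foo"))  # -> (7, 'newest text'): editors get latest
```

This is why latency only matters on the edit path, not on every read.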
When the data is shared over many computers in, for instance, a
peer-to-peer network, the real cost can be high, but when the cost is
shared by volunteers and volunteering organisations it is no longer much
of an issue for the central organisation.
Databases of such a design are not that rare. There has been research on
this for Wikipedia usage in Amsterdam, where it is considered feasible.
On 23 January 2012 18:48, Leslie Carr <lcarr(a)wikimedia.org> wrote:
> On Mon, Jan 23, 2012 at 9:38 AM, cyrano <cyrano.fawkes(a)gmail.com> wrote:
> > Hello, this topic is from foundation-l; I think it is more suited to
> > wikitech-l.
> > -------- Message original --------
> > What about sharing the whole databases among the millions of users, in
> > some p2p net with a lot of redundancy, something like a dense
> > internet of databases that remains whole even if it loses part of
> > itself? Does it sound unworldly?
> > It could be a good complement to the server-based versions.
> this sounds nice but just wouldn't work at all. we need to have
> reliable databases with a consistent latency.
> > Le 22/01/2012 20:50, Jussi-Ville Heiskanen a écrit :
> >> The simple option that will just blow all this talk of lobbying away,
> >> is to migrate outside US jurisdiction entirely. It does entail some
> >> costs, and may well not be optimal, on many fronts.
> s/some/lots of/
> >> A medium option is to do a plan on the lines of the actions that
> >> Google has already put into force, of diversifying datacenters that
> >> have our non-fungible assets, so that for enforcement they would
> >> have to invade sovereign territory. But for a non-profit, our best
> >> would be to say that we are making those plans, but actually want
> >> to let the US keep the PR benefit of being able to say that
> >> WMF-like entities find the US best to be incorporated in. And then
> >> grin very hard, so they know we mean business. Follow up with saying
> >> the very real contingency plans cannot wait on their realizing they
> >> have the wrong end of the stick, so we have to act now.
> >> So we will put a few fallback datacenters elsewhere, just so our
> >> various communities and chapters realize we aren't going to be
> >> bullied by US jurisdiction. But we have a much more expansive
> >> plan which we say we will eventually realize. But the legislators
> >> in the US have to understand we are doing this all so they realize
> >> what they are working on is harmful to prosperity around the globe.
> again, expensive!
> >> And if they play ball (we won't give a cent of tribute, sorry), we
> >> need not accelerate the rate at which we realize the full
> >> international nature of the Wikimedia Foundation.
> >> That is pretty much the line of "education" that might be pursued
> >> without costing the Foundation a single backhander.
Byzantine fault tolerance and proving provenance of the data are major
issues with distributed peer-to-peer systems; we have bad enough
problems with vandals now, without cloudsourcing our database storage.
Today, this is a computer science solution, not an operations
solution. The Foundation doesn't exist to be a testbed for making
computer science technology operationally mature.
-george william herbert
Wikitech-l mailing list