Yes, there's a difference. But in this case, as far as I understand it, a direct cost (or casualty) of setting up Wikimedia Labs is the Toolserver itself. Does Wikimedia need a great testing infrastructure? Yes, of course. (And it's not as though the Toolserver has ever been without its share of issues; I'm not trying to whitewash the past here.) But the question is: if such a Wikimedia testing infrastructure comes at the cost of losing the Toolserver, is that acceptable?
This is a straw man argument. The mere existence of Labs doesn't mean the loss of the Toolserver.
Labs is more than just a testing infrastructure. It's an infrastructure for creating things, for enabling volunteer operations, for bringing operations and development together, for integrating other projects, and for providing free hosting to projects that may not have it otherwise. Labs just also happens to need some of the same features as the Toolserver.
Again, as I've mentioned, Labs' purpose isn't to be a Toolserver replacement. Its vision is much, much larger than what the Toolserver can do.
Ryan Lane wrote:
If WMF becomes evil, fork the entire infrastructure into EC2, Rackspace cloud, HP cloud, etc. and bring the community operations people along for the ride. Hell, use the replicated databases in Labs to populate your database in the cloud.
Tim Landscheidt wrote:
But the nice thing about Labs is that you can try out (replicable :-)) replication setups at no cost, and don't have to make upfront investments in hardware, etc., so when the time comes, you can just upload your setup to EC2 or whatever and have a working Wikipedia clone running in a manageable timeframe.
This is not an easy task. Replicating the databases is enormously challenging: they're huge datasets in the case of the big wikis, and they're constantly changing. If you relied on dumps alone, you'd always be out of date by at least two weeks (assuming the dumps are working properly). Two weeks on the Internet is a long time.
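For a sense of scale, here's a minimal Python sketch that estimates that lag. It assumes the public index at http://dumps.wikimedia.org/enwiki/ lists dump runs as YYYYMMDD/ directory links (the layout as I understand it), so treat it as illustrative rather than definitive:

    import re
    import urllib.request
    from datetime import datetime

    DUMP_INDEX = "http://dumps.wikimedia.org/enwiki/"

    def latest_dump_age_days():
        # Scrape the dump index; runs appear as YYYYMMDD/ directory links.
        html = urllib.request.urlopen(DUMP_INDEX).read().decode("utf-8")
        runs = re.findall(r'href="(\d{8})/"', html)
        if not runs:
            raise RuntimeError("no dump runs found at " + DUMP_INDEX)
        newest = max(datetime.strptime(run, "%Y%m%d") for run in runs)
        # The date marks when the run *started*; the data is at least
        # this stale, and more so by the time the run actually finishes.
        return (datetime.now() - newest).days

    print("Newest enwiki dump run started %d days ago"
          % latest_dump_age_days())

And even on the day a run finishes, you're loading weeks-old data; every edit made since then is simply missing from your clone.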
But more to the point: even if you suddenly had a lot of infrastructure (bandwidth for constantly retrieving the data, space to store it all, and extra memory and CPU to allow users to, y'know, do something with it), and even if you suddenly had staff capable of managing these databases, not every table is even available currently. As far as I'm aware, http://dumps.wikimedia.org doesn't include tables such as "user", "ipblocks", "archive", "watchlist", any tables related to global images or global user accounts, and probably many others. I'm not sure a full audit has ever been done, but this is partially tracked in https://bugzilla.wikimedia.org/show_bug.cgi?id=25602.
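You can probe that gap yourself. This sketch checks which per-table SQL files a dump run publishes, assuming the enwiki-latest-<table>.sql.gz naming convention currently used on dumps.wikimedia.org; the table list is just the handful of examples above, not a full audit:

    import urllib.error
    import urllib.request

    DUMP_RUN = "http://dumps.wikimedia.org/enwiki/latest/"

    # A couple of published tables for contrast, plus the missing
    # ones named above; illustrative only, not an audit.
    TABLES = ["page", "categorylinks", "user", "ipblocks",
              "archive", "watchlist"]

    def is_published(table):
        # HEAD request: does a SQL dump file exist for this table?
        url = "%senwiki-latest-%s.sql.gz" % (DUMP_RUN, table)
        request = urllib.request.Request(url, method="HEAD")
        try:
            urllib.request.urlopen(request)
            return True
        except urllib.error.HTTPError:
            return False

    for table in TABLES:
        status = "published" if is_published(table) else "not in dumps"
        print("%-15s %s" % (table, status))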
So beyond the silly simplicity of the suggestion that one could simply "move to the cloud!", doing so is currently a technical impossibility.
The same impossibilities apply to forking any single CC project online. Our privacy policy (and very likely the law) forbids us from providing that information. It's absurd to fault us for this. I guess we're being evil by not being evil.
We're providing every other piece of the puzzle required for forking.
- Ryan