If, as it seems right now, the problem is technical (weed out the bots and vandals) rather than ideological (after all, we allow anonymous contributions), then we can find a way to allow people to edit any Wikipedia via Tor while minimizing the amount of vandalism that gets through.
Of course, let's not kid ourselves - it will probably require some special measures, and editing via Tor would probably end up not being as easy as editing from a public-facing IP. We might, for example, restrict publishing via Tor to users who have logged in and made five "good" edits reviewed by others, or we could apply modern bot-detection techniques in that case - those are just ideas.
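Just to make the first idea concrete, the "five reviewed good edits" gate could be as simple as the sketch below. Everything here (the user dict, the field names) is made up for illustration; the real check would live in a MediaWiki extension hook, not standalone code like this.

```python
# Hypothetical sketch of the "5 good edits" gate for Tor editors.
# The user record and its fields are stand-ins, not MediaWiki's API.

REQUIRED_GOOD_EDITS = 5

def may_edit_via_tor(user: dict) -> bool:
    """Allow a Tor edit only for logged-in users with at least
    five edits that other editors have reviewed as 'good'."""
    if not user.get("logged_in", False):
        return False
    return user.get("reviewed_good_edits", 0) >= REQUIRED_GOOD_EDITS

# A logged-in user with six reviewed good edits passes the gate;
# an anonymous user does not, however many edits they have.
print(may_edit_via_tor({"logged_in": True, "reviewed_good_edits": 6}))   # True
print(may_edit_via_tor({"logged_in": False, "reviewed_good_edits": 9}))  # False
```

The point of a threshold like this is that it costs a sockmaster real reviewed work per account, while costing an established editor nothing.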
I would be curious to see what percentage of problematic edits would be caught by running all prospective edits through AbuseFilter and ClueBotNG. I suspect those two tools would catch a large share of the vandalism; I understand they already catch most such edits made by regular IP users. That would be a good start and would give us some data on what other measures might be needed to make this sort of thing work.
AbuseFilter has the ability to tag edits for further review, so we could leverage that functionality to tag Tor edits during a trial.
I could reach out to the maintainer of ClueBotNG and see what could be done to get it to interface with AbuseFilter so that any edits it judges unconstructive are tagged; if that isn't possible, maybe it could just log such edits somewhere special.
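The tagging step itself is conceptually tiny: check the editing IP against a list of known Tor exit nodes and attach a tag for later review. The sketch below is only an illustration of that logic - the exit-node set and tag name are invented, and in practice this is exactly what AbuseFilter (fed by the TorBlock extension's exit-node data) would do for us.

```python
# Illustration only: tag an edit when its source IP is a known Tor
# exit node. The addresses and the "tor-edit" tag name are made up;
# in production AbuseFilter would handle this.

TOR_EXIT_NODES = {"198.51.100.7", "203.0.113.42"}  # illustrative addresses

def tag_edit(edit: dict, exit_nodes=TOR_EXIT_NODES) -> dict:
    """Return a copy of the edit with a 'tor-edit' tag appended
    when the source IP appears in the known exit-node set."""
    tags = list(edit.get("tags", []))
    if edit["ip"] in exit_nodes:
        tags.append("tor-edit")
    return {**edit, "tags": tags}

tagged = tag_edit({"ip": "198.51.100.7", "page": "Example"})
print(tagged["tags"])  # ['tor-edit']
```

Once every Tor edit carries a tag, computing the catch rate for the trial is just counting how many tagged edits ClueBotNG or AbuseFilter also flagged.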
We've had this conversation a few times and I'd love to see creative approaches to a trial/pilot with data driving future decisions.
If I approached Wikimedia-l with the idea of a limited trial of the above approach, perhaps two weeks long, with all Tor edits being tagged, do you think they might bite?
It clearly is the kind of problem where people do like to _look_ for clever technical fixes, which is why it's a recurring topic on this list.
I suspect one exists somewhere. I'll reach out to the folks at the Tor Project and see if they have any suggestions for preventing abuse from a technical standpoint, especially with regard to sockpuppet abuse. I agree with Giuseppe that the measures that would need to be put in place will make editing via Tor more difficult than editing without it, but that's acceptable so long as they are not as prohibitively difficult as things are currently.
Without having spoken to the Tor Project, though, the Nymble approach seems like a reasonable way to go to me. The protocol could potentially be modified to accept some sort of proof of work rather than the user's public-facing IP address. If, in order to be issued a Nymble certificate, you had to complete a proof of work that took perhaps several hours of computation, with the certificate valid for a week, that might be a sufficient barrier to stop most socks, though definitely more data needs to be gathered.
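For anyone unfamiliar with the idea, a hashcash-style proof of work of the kind I mean looks roughly like this: the client grinds through nonces until SHA-256(challenge + nonce) starts with enough zero bits, and the server verifies with a single hash. This is a generic sketch, not Nymble's actual protocol; the difficulty is kept tiny here, and tuning it so that solving takes hours on commodity hardware is exactly the open question.

```python
# Generic hashcash-style proof of work (not Nymble's real protocol).
# Client: expensive search for a nonce. Server: one cheap hash to verify.
import hashlib

def leading_zero_bits(digest: bytes) -> int:
    """Count zero bits at the front of a hash digest."""
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
        else:
            bits += 8 - byte.bit_length()
            break
    return bits

def solve(challenge: bytes, difficulty: int) -> int:
    """Search nonces until the hash meets the difficulty target.
    Expected work doubles with each extra bit of difficulty."""
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int, difficulty: int) -> bool:
    """Verification costs the server a single hash."""
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty

nonce = solve(b"nymble-cert-request", 12)  # 12 bits: trivial demo difficulty
print(verify(b"nymble-cert-request", nonce, 12))  # True
```

The asymmetry is the attraction: each additional sock certificate costs the attacker the full computation again, while issuing and checking certificates stays cheap for us.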
Thank you, Derric Atzrott