I share Risker’s concerns here, and limiting the anonymity set to the intersection of Tor users and established wiki contributors seems problematic. The bootstrapping issue also needs working out, and relegating Tor users to second-class citizens who have to edit through a proxy seems less than ideal (though the specifics of that are unclear to me).
But, at a minimum, this seems like a useful exercise to run if only for the experimental results and to show good faith.
I’m more than willing to help out. Please get in touch.
Arlo
On Wednesday, March 11, 2015 at 9:10 AM, Chris Steipp wrote:
On Mar 11, 2015 2:23 AM, "Gergo Tisza" <gtisza@wikimedia.org> wrote:
On Tue, Mar 10, 2015 at 5:40 PM, Chris Steipp <csteipp@wikimedia.org> wrote:
I'm actually envisioning that the user would edit through the third party's proxy (via OAuth, linked to the new "Special Account"), so no special permissions are needed by the "Special Account", and a standard block on that username can prevent them from editing. Additionally, revoking the OAuth token of the proxy itself would stop all editing by this process, so there's a quick way to "pull the plug" if it looks like the edits are predominantly unproductive.
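(To make that concrete: on the wire, an edit relayed by such a proxy would just be an ordinary OAuth-signed call to the action API. A rough sketch in Python using requests-oauthlib; the consumer and access credentials, and the "Special Account" itself, are placeholders for things that don't exist yet.)

    # Sketch: a proxy submitting one edit on behalf of a "Special Account"
    # through an authorized OAuth consumer. All credentials are placeholders.
    from requests_oauthlib import OAuth1Session

    API = "https://en.wikipedia.org/w/api.php"

    session = OAuth1Session(
        client_key="proxy-consumer-key",               # the proxy's OAuth consumer
        client_secret="proxy-consumer-secret",
        resource_owner_key="special-account-token",     # token tied to the Special Account
        resource_owner_secret="special-account-secret",
    )

    # Fetch a CSRF token, then save the edit. A normal block on the Special
    # Account would make the edit fail as blocked; revoking the proxy's
    # consumer would make every request fail at the OAuth layer -- the
    # "pull the plug" case described above.
    csrf = session.get(API, params={
        "action": "query", "meta": "tokens", "type": "csrf", "format": "json",
    }).json()["query"]["tokens"]["csrftoken"]

    resp = session.post(API, data={
        "action": "edit",
        "title": "Sandbox",
        "appendtext": "\nEdit relayed by the proxy.",
        "token": csrf,
        "format": "json",
    })
    print(resp.json())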
I'm probably missing the point here, but how is this better than a plain edit proxy, available as a Tor hidden service, which a 3rd party can set up at any time without the need to coordinate with us (apart from getting an OAuth key)? Since the user connects to them via Tor, they would not learn any private information; they could be authorized to edit via the normal OAuth web flow (that is not blocked from a Tor IP); the edit would seemingly come from the IP address of the proxy, so it would not be subject to Tor blocking.
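(For comparison, the authorization step such a hidden-service proxy would run is just the standard OAuth handshake against the wiki. A rough sketch using the mwoauth library; the consumer credentials are placeholders and the proxy's web plumbing is omitted.)

    # Sketch: the OAuth authorization flow a hidden-service edit proxy could run.
    # The user reaches the proxy over Tor; the proxy talks to the wiki directly,
    # so the resulting edits come from the proxy's IP, not from a Tor exit.
    from mwoauth import ConsumerToken, Handshaker

    consumer = ConsumerToken("proxy-consumer-key", "proxy-consumer-secret")  # placeholders
    handshaker = Handshaker("https://en.wikipedia.org/w/index.php", consumer)

    # Step 1: send the user to the wiki's authorization page.
    redirect_url, request_token = handshaker.initiate()
    print("Authorize the proxy here:", redirect_url)

    # Step 2: after the user approves, the wiki redirects back with a query
    # string; the proxy exchanges it for an access token bound to that account.
    response_qs = input("Paste the callback query string: ")
    access_token = handshaker.complete(request_token, response_qs)

    # The proxy can now confirm whose account it is acting for.
    print(handshaker.identify(access_token))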
Setting up a proxy like this is definitely an option I've considered. When I did, I couldn't think of a good way to limit the types of accounts that used it, or to come up with acceptable collateral I could hold from the user, that would deter enough spammers to keep the proxy from being blocked while staying open to the people who need it. The blinded-token approach lets the proxy rely on a trusted assertion about the identity, made by the people it will impact if they get it wrong. That seemed like a good thing to me.
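(For anyone who hasn't seen the blinding step before, the core idea is the textbook RSA blind signature: the wiki signs a token without ever seeing it, so the signed token can't later be correlated with the account that requested it. A toy sketch of the algebra, with no padding, purely for illustration.)

    # Toy RSA blind signature: the signer (the wiki) never sees the token it signs.
    import secrets
    from cryptography.hazmat.primitives.asymmetric import rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    n = key.public_key().public_numbers().n
    e = key.public_key().public_numbers().e
    d = key.private_numbers().d

    token = int.from_bytes(secrets.token_bytes(32), "big")   # the user's secret token

    # User: blind the token with a random factor r before sending it to the wiki.
    r = secrets.randbelow(n - 2) + 2
    blinded = (token * pow(r, e, n)) % n

    # Wiki: signs the blinded value; it learns nothing about `token`.
    blind_sig = pow(blinded, d, n)

    # User: unblind. The result is a valid signature on the original token.
    signature = (blind_sig * pow(r, -1, n)) % n
    assert pow(signature, e, n) == token % n   # anyone can verify with the public key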
However, we could substitute the entire blinding process with a public page that the proxy posts to, saying "this user wants to use Tor to edit; vote yes or no and we'll allow them based on your opinion", with the proxy only allowing Tor editing by users who have a passing vote.
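(Purely as an illustration of that gating step, here's what the proxy-side check might look like; the request page name and the vote format are made up.)

    # Sketch: the proxy checks a hypothetical on-wiki request page before
    # relaying any Tor edits for a given username.
    import requests

    API = "https://en.wikipedia.org/w/api.php"
    REQUEST_PAGE = "Project:Tor proxy requests"   # hypothetical page name

    def has_passing_vote(username):
        text = requests.get(API, params={
            "action": "query", "prop": "revisions", "rvprop": "content",
            "rvslots": "main", "titles": REQUEST_PAGE,
            "format": "json", "formatversion": "2",
        }).json()["query"]["pages"][0]["revisions"][0]["slots"]["main"]["content"]

        # Made-up convention: one line per request, e.g. "ExampleUser: yes yes no"
        for line in text.splitlines():
            if line.startswith(username + ":"):
                votes = line.split(":", 1)[1].split()
                return votes.count("yes") > votes.count("no")
        return False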
That might be more palatable under enwiki's socking policy, with the risk that if the user's IP has ever been revealed before (even if they went through the effort of getting it deleted), there is still data to link them to their real identity. The blinding breaks that correlation. But maybe it's a more likely first step toward actually getting Tor edits?