Hello everyone,
I've been a Tor user for many years and I frequently make use of anonymising proxy services. Recently (yesterday, in fact), I set up my first Tor relay.[1] This has gotten the use of Tor and other anonymising services with Wikipedia on my mind again.
In a recent article on the Tor blog,[2] Wikipedia is called out a number of times for being unfriendly to Tor, and I think they make a good point.
"[H]ow can we quantify the loss to Wikipedia, and to society at large, from turning away anonymous contributors? Wikipedians say 'we have to blacklist all these IP addresses because of trolls' and 'Wikipedia is rotting because nobody wants to edit it anymore' in the same breath, and we believe these points are related."
There must be a way that we can allow users to work from Tor. My understanding of why we block Tor categorically is that it is very hard to block individual Tor users. Perhaps we could allow Tor users to edit pages only if they make an account? That would at least allow us to block those accounts, which increases the cost of being problematic on Wikipedia a bit.
Or, to take an idea from the blog post, perhaps Tor users could be issued a certificate that they could use to prove their identity from one session to another. New Tor users would need to prove they are the same person as someone we already trust, or their edits would be put in some sort of review queue.
Or combine the two: new accounts made from Tor connections would need to have their edits reviewed, or perhaps just wouldn't get autopatrolled status as quickly (if ever).
There has got to be a better solution to the problem than blocking all Tor users completely.
Thank you, Derric Atzrott
[1]: https://atlas.torproject.org/#details/6413D947D15B81B423D65D76DA3F2BFEF76BEE...
[2]: https://blog.torproject.org/blog/call-arms-helping-internet-services-accept-anonymous-users
I hope we can make this work and help Tor users at least contribute some content to some Wikimedia projects, even if English Wikipedia needs to keep up its current policy. Places to convene to work on this include:
* the MediaWiki Developer Summit in January in San Francisco: https://www.mediawiki.org/wiki/MediaWiki_Developer_Summit_2015
* FOSDEM, 31 January - 1 February in Brussels: https://fosdem.org/2015/
* the Circumvention Tech Festival in Spain in March: https://openitp.org/news-events/save-the-date-march-1-6-2015.html
Some previous discussions on wikitech-l:
"Can we help Tor users make legitimate edits?" 2012. http://www.gossamer-threads.com/lists/wiki/wikitech/323006
"Jake requests enabling access and edit access to Wikipedia via TOR" 2013. http://www.gossamer-threads.com/lists/wiki/wikitech/420039
"Tor exemption process" January 2014. http://www.gossamer-threads.com/lists/wiki/wikitech/425124
"Anonymous editors & IP addresses" July 2014. http://www.gossamer-threads.com/lists/wiki/wikitech/482562
Sumana Harihareswara
Senior Technical Writer
Wikimedia Foundation
Hey,
Overall, you are suggesting that the WMF change its policy about anonymity and accept anonymous users. In my view this is not a technical matter, and it should be brought up on wikimedia-l.
BTW: I need to add something about anonymous users and how the system treats them. When you block all open proxies you close the gate on sock-puppeteers, zombies, and especially trolls, for which I'm grateful. But if you change perspective and look at the issue as an Iranian, Chinese, or similar user, the whole thing changes. In these countries, using proxies and "anti-filter" tools is as common as using the internet itself; people use them literally all the time. As an obvious result, Persian Wikipedia and Chinese Wikipedia are losing users in great numbers. A troll-minimized environment came at a great cost for us. Even though Wikipedia is not blocked (at least in Iran), switching off the proxy (and dropping all connections) just to make an edit simply isn't worth it for millions of users. And it gets worse: even trusted users in these wikis who edit sensitive material [1] can't easily get the global IP-block exempt right, and we treat that right as a sensitive one (which it shouldn't be, at least for Iranian and Chinese users).
[1]: By sensitive material I don't mean some random political articles. I mean things that can bring the death penalty and execution. We have already seen bloggers and Facebook users who wrote against leaders, Islam, homosexuality, or even history(!) face death. (If you want, I can show you the news in reliable sources.)
Best,
Amir
On Tue, Sep 30, 2014 at 4:38 PM, Derric Atzrott < datzrott@alizeepathology.com> wrote:
Hello everyone,
I've been a Tor user for many years and I frequently make use of anonymising proxies services. Recently (yesterday), I set up my first Tor relay.[1] This has once again gotten the use of Tor and other anonymising services with Wikipedia on my mind again.
In a recent article on the Tor blog,[2] Wikipedia is actually called out a number of times for being unfriendly to Tor, and I think they make a good point.
"[H]ow can we quantify the loss to Wikipedia, and to society at large, from turning away anonymous contributors? Wikipedians say 'we have to blacklist all these IP addresses because of trolls' and 'Wikipedia is rotting because nobody wants to edit it anymore' in the same breath, and we believe these points are related."
There must be a way that we can allow users to work from Tor. My understanding of why we block Tor categorically is that it is very hard to block individual Tor users. Perhaps we could allow Tor users to only edit pages if they make an account? That would allow us to at least block those accounts, which increases the cost of being problematic on Wikipedia a bit.
Or to take from the blog post, perhaps Tor users could be issued a certificate that they could use to prove their identity from one session to another. New Tor users would need to prove they are the same person as someone we already trust or their edits would be put in some sort of review queue.
Or combine the two and new accounts made from Tor connections would need to have their edits reviewed, or perhaps just wouldn't get autopatrolled status as quickly (if ever).
There has got to be a better solution to the problem than just blocking all Tor users completely.
Thank you, Derric Atzrott
[1]:
https://atlas.torproject.org/#details/6413D947D15B81B423D65D76DA3F2BFEF76BEE... [2]:
https://blog.torproject.org/blog/call-arms-helping-internet-services-accept-... ymous-users
Wikitech-l mailing list Wikitech-l@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/wikitech-l
I agree; it's a matter of consensus, which is definitely beyond any technical discussion.
Vito
Overall, you are suggesting that the WMF change its policy about anonymity and accept anonymous users. In my view this is not a technical matter, and it should be brought up on wikimedia-l.
I agree; it's a matter of consensus, which is definitely beyond any technical discussion.
Fair enough. I had thought that the decision to make the block had primarily been made by us in the technical community, as I imagine the average editor knows little to nothing about Tor or other anonymising services.
I'll bring up the topic in another venue.
Some previous discussions on wikitech-l:
Thank you for that list, Sumana. I'll give it a look over and might continue to use this thread for anything that comes up from it that does seem appropriate for this list. Based on the number of times this has come up, there does at least appear to be some merit to discussing it, or aspects of it, here.
Thank you, Derric Atzrott
Are there figures proving that closing off Tor/open proxy access significantly reduced the amount of vandalism/sock puppeting in the long term, versus just making the unwanted users switch to another way of achieving their goal?
Sure, Tor traffic will have a high correlation with unwanted activity, but that doesn't mean the people who've been shut off by Tor being blocked aren't still here doing the same thing, using IPs that we can't as easily pinpoint. If anything, it's an escalation, and it invites them to be more creative about their vandalism, which makes them harder to catch.
I know that there's a limit to how far unwanted users will go when you block them, though; at some point they run out of ideas and give up. That is why I wonder whether Tor blocking was the last step that made them go away, or whether it wasn't.
Alright, this is a long email; it basically summarises all of the discussions that have already happened on this topic. I'll be posting a copy of it to MediaWiki.org as well, so that it will be easier to find out in the future what has already been proposed.
There is a policy side to this: Meta has the "No open proxies" policy, which would need to be changed. But I doubt that such policies will be changed unless those of us on this list can come up with a good way to allow Tor users to edit. If we can come up with a way that solves most of the problems the community has, then I think there is a good chance this policy can be changed.
Table of Contents
================================================================================
1. Relevant Quotes
2. Ideas
   2.1. Nymble
   2.2. Blind Signing
   2.3. FlaggedRevs
   2.4. Tor Exemption Userright
   2.5. Policy Changes
   2.6. OAuth
   2.7. Donations for Access
   2.8. Account creation off Tor
   2.9. Allow Tor Users to only access Talk Pages
   2.10. Fingerprinting
   2.11. Tor Hidden Service
3. A Note on Current Policy
4. References
================================================================================
Relevant Quotes
--------------------------------------------------------------------------------
"Not every Tor user is a vandal or troll, and assuming that all of them are by default is not assuming good faith. Some people are just really paranoid about their internet anonymity or live in restrictive countries (both of which I sympathize with), so this idea would let them edit in good faith while filtering out vandal/troll edits." -- Arcane 21
"Well the issue is not whether we want Tor users editing or not. We do. The issue is finding a software solution that makes it possible." -- Tyler Romeo (Though Risker disagrees with the quote above, I get the feeling Tyler encapsulates the overall consensus, based on the discussions I've read.)
"Many people believe that Wikipedia has become so socially important that being able to edit it even if just to leave talk page comments is an essential part of participating in worldwide society. Unfortunately, not all people are equally free and some can only access Wikipedia via anti-censorship technology or can only speak without fear of retaliation via anonymity technology." -- Gregory Maxwell
"'Preventing' abuse is the wrong goal. There is plenty of abuse even with all the privacy smashing new editor deterring convolutions that we can think up. Abuse is part of the cost of doing business of operating a publicly editable Wiki ... The goal needs to merely be to limit the abuse enough so as not to upset the abuse vs benefit equation. Today, people abuse, they get blocked, they go to another library/coffee shop/find another proxy/wash rinse repeat. We can't do any better than that model, and it turns out that it's okay" -- Gregory Maxwell
"My personal view is that we should transition away from tools relying on IP disclosure, given the global state of Internet surveillance and censorship which makes tools like Tor necessary." -- Erik Moller
"The vast majority of socks are blocked without checkuser evidence, and always have been, on all projects; the evidence is often in the edits, and doesn't need any privacy-invading tools to confirm." -- Risker
Ideas
--------------------------------------------------------------------------------
==Nymble==
http://cgi.soic.indiana.edu/~kapadia/nymble/overview.php
Users get a pseudonym from a Pseudonym Manager, which maps a pseudonym to an IP address for a defined duration (the linkability window, default 24 hours). This must be done from an unanonymised connection; all steps after this can be done anonymised. The user passes that pseudonym to a Nymble Manager to get a Nymble ticket, which is good for a defined duration (the time period, default 5 minutes). This ticket is passed to the service any time an action is performed. If a Nymble user acts up, the service can contact the Nymble Manager and get a Linkability Token, which allows the service to link all Nymble tickets that the pseudonym used and uses during a single linkability window.
The Pseudonym Manager, the Nymble Manager, and the service would all have to cooperate to deanonymise a user's actions. Assuming that they do not, and that all three maintain minimal logs, this should protect the user's privacy while still allowing them to perform actions and be blocked for misbehaving.
It additionally appears that with its default settings, the Nymble Manager rate-limits the user to a single action per time period. This means that they should in theory only be able to make a single Wikipedia edit every five minutes, which, while not great, is a definite improvement. One drawback is that misbehaving users could only be blocked for a single linkability window (so one day) under this scheme. Still, blocking was never meant to be punitive, so perhaps that might be acceptable. I don't know, and it really isn't a discussion for this list.
Wikimedia would likely have to run its own servers for this too, which could have some implications for user privacy. It's also possible for us to use something other than IP addresses as the non-anonymous item that the Pseudonym Manager collects.
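To make the moving parts concrete, here is a greatly simplified Python sketch of the linkability-window idea. This is not the real Nymble protocol (which is considerably more involved); every name, key, and parameter below is invented for illustration:

    import hashlib
    import hmac
    import os

    PM_KEY = os.urandom(32)  # Pseudonym Manager secret (illustrative)
    NM_KEY = os.urandom(32)  # Nymble Manager secret (illustrative)

    def pseudonym(ip: str, window: str) -> bytes:
        # PM: the same IP maps to the same pseudonym for a whole window
        return hmac.new(PM_KEY, f"{ip}|{window}".encode(), hashlib.sha256).digest()

    def seed(pnym: bytes, window: str) -> bytes:
        # NM: per-user, per-window seed; evolves one hash step per time period
        return hmac.new(NM_KEY, pnym + window.encode(), hashlib.sha256).digest()

    def evolve(s: bytes, steps: int) -> bytes:
        for _ in range(steps):
            s = hashlib.sha256(b"seed|" + s).digest()
        return s

    def ticket(s: bytes) -> bytes:
        # the wiki only ever sees a one-way image of the seed, so tickets
        # from different time periods cannot be linked without the seed
        return hashlib.sha256(b"ticket|" + s).digest()

    # user fetches a pseudonym over a non-anonymised connection, then uses Tor
    pnym = pseudonym("198.51.100.7", "2014-09-30")
    s0 = seed(pnym, "2014-09-30")
    t3 = ticket(evolve(s0, 3))  # ticket presented during time period 3

    # on abuse in period 3, the NM hands the wiki the evolved seed; the wiki
    # can now recognise (and block) that user for the rest of the window
    linking_token = evolve(s0, 3)
    assert ticket(evolve(linking_token, 2)) == ticket(evolve(s0, 5))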
==Blind Signing==
For this we'd have non-anonymised users submit a token to Wikimedia, which would be blind-signed. They would then present this token when editing through Tor. You would only be allowed to request a token every week, say, which would allow for blocks of up to that length by blocking the token.
Apparently Tyler Romeo actually has this solution pretty much completely ready to go, or did as of the end of 2013. See Extension:TokenAuth.
There do appear to be some concerns about figuring out how to hand out tokens to folks. It seems that they're basically the same sorts of problems we face with IPBEs or Tor exemptions (see below), though.
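For the curious, the underlying math is standard RSA blind signing. A toy Python sketch (deliberately tiny key, Python 3.8+ for the modular inverses; a real deployment would use a vetted crypto library and full-domain hashing):

    import hashlib
    from math import gcd

    # absurdly small toy RSA key, for demonstration only
    p, q = 61, 53
    n, e = p * q, 17
    d = pow(e, -1, (p - 1) * (q - 1))  # private exponent

    def h(msg: bytes) -> int:
        return int.from_bytes(hashlib.sha256(msg).digest(), "big") % n

    token = b"random-client-token"
    m = h(token)

    # client blinds the token before sending it in; r is the blinding factor
    r = 42
    assert gcd(r, n) == 1
    blinded = (m * pow(r, e, n)) % n

    # server signs blindly: it never learns m, only the blinded value
    blind_sig = pow(blinded, d, n)

    # client unblinds, yielding an ordinary RSA signature on m
    sig = (blind_sig * pow(r, -1, n)) % n
    assert pow(sig, e, n) == m  # verifiable with the public key alone

The point of the blinding step is that the server can later verify the signature without being able to connect the token to the signing request, so the token says "some legitimate requester" without saying which one.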
==FlaggedRevs==
As I brought up in my first email, we could just put all Tor edits into a queue to be reviewed by non-Tor users. This idea has been brought up plenty of times, and I believe the biggest problem with it is that such a queue would put a lot of strain on already strained editors.
This seems like the easiest solution to implement. I suspect that the review queues would be a lot smaller than people expect, too. We have fancy things like AbuseFilter and ClueBot NG now that can filter out the vast majority of the junk.
==Tor Exemption Userright==
Make Tor exemption a different userright than IP block exemption. The biggest reason that IPBEs are not given out is that they allow people to bypass a block if they are blocked for socking. This would reduce that risk somewhat significantly.
If we created a Tor exemption right that people could ask for, I imagine it would see a lot more use than the IPBE right does. It still doesn't fix the problem that Tor users would have to register their account from a non-Tor connection and also ask for the exemption from a non-Tor connection. Still, it's an improvement.
Apparently this is trivially easy to do as well. It seems the rights are already separate; we just have them in the same usergroup.
==Policy Changes==
IPBEs could just be given out more liberally than they currently are. This, though, is a policy change, and not really a great fit for this list. Nevertheless, it is a viable option.
==OAuth==
Chris Steipp describes this one better than I could summarise it:
"I was talking with Tom Lowenthal, who is a tor developer. He was trying to convince Tilman and I that IP's were just a form of collateral that we implicitly hold for anonymous editors. If they edit badly, we take away the right of that IP to edit, so they have to expend some effort to get a new one. Tor makes that impossible for us, so one of his ideas is that we shift to some other form of collateral-- an email address, mobile phone number, etc. Tilman wasn't convinced, but I think I'm mostly there.
We probably don't want to do that work in MediaWiki, but with OAuth, anyone can write an editing proxy that allows connections from Tor, ideally negotiates some kind of collateral (proof of work, bitcoin, whatever), and edits on behalf of the tor user. Individuals can still be held accountable (either blocked on wiki, or you can block them in your app), or if your app lets too many vandals in, we'll revoke your entire OAuth consumer key."
It was suggested that email addresses would be a good form of collateral, as they take pretty much the same amount of effort to change as an IP address does. We could possibly block email addresses from "throw-away" providers like Guerrilla Mail that require no effort to get an address.
Whatever collateral is used, it would have to be verified in some fashion, so we'd send the user an email or call their mobile or something.
==Donations for Access==
Create a new special page that accepts an unblocked username and a random number signed by a Wikimedia-controlled key. If the signature passes and the number has never been used before, the account gets a Tor block exemption or an IPBE.
The donation page is then changed so that for every $10 donation, the user's browser is allowed to submit a single randomly generated value to be blind-signed by Wikimedia's servers.
These signed tokens could then be used to allow folks to edit via Tor. There would be no restriction on what the user could do with them (perhaps they could donate them to the Tor Project to give out). In the case of any abuse, the token would be blocked, which would remove the exemption from the account it was given to. If they really wanted to, they could donate another $10 to get another, but chances are most attackers aren't made of money.
This idea does have some problems, though, in that those who use Tor may not be able or willing to pay to be unblocked, and in general the scheme seems somewhat unfair. Marc A. Pelletier described this idea as a "pay to get an untraceable sockpuppet" system. This solution may present some legal issues for the fundraising team as well. Still, the idea has some merit, and it might well inspire better ones.
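The special page's check would then be little more than signature verification plus a used-token list. A minimal sketch, continuing the toy key (n, e) and hash h() from the blind-signing sketch above (the function name and in-memory stores are invented for illustration):

    # continues the toy RSA key (n, e) and hash h() from the sketch above
    exemptions = {}        # account -> token signature, so it can be revoked
    used_tokens = set()    # would live in a database table in production

    def redeem(account: str, token: bytes, sig: int) -> bool:
        if pow(sig, e, n) != h(token):
            return False               # not signed by the Wikimedia key
        if token in used_tokens:
            return False               # each signed token is single-use
        used_tokens.add(token)
        exemptions[account] = sig      # blocking the sig revokes the exemption
        return True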
==Account creation off Tor==
This is easy to implement: we could just allow all registered accounts to edit via Tor and make sure that registration is disabled when using Tor. This means that we can still block problematic users and that CheckUsers will at least have one useful data point to go on when deciding to do IP blocks.
This solution is by no means perfect, as it still exposes data that Tor users consider sensitive, and additionally it's not hard to just make an account somewhere that you don't care about being blocked. It's entirely possible that this solution would just lead to more abuse, as Tor users seem like the type of people who would want to avoid revealing their IP address at all, even if only for registration. And if the WMF were ever served with a court order from, say, the Chinese or Iranian government, that single IP address could still have drastic consequences.
==Allow Tor Users to only access Talk Pages==
Allowing Tor users to edit only talk pages would limit any potential damage they could do to an area that is out of the public eye. This would still allow them to make suggestions for articles, though. Essentially we would treat every page as a protected page when dealing with Tor users.
==Fingerprinting==
Come up with some way to fingerprint various Tor users and block them based on those fingerprints when they reappear. This seems like it would be hard to do, as the Tor Browser Bundle is designed specifically to make it hard. There are other human elements that can be fingerprinted, though. I once knew a man who set up the login for his website to monitor how he typed and how long it took him to fill out the login form; if it didn't approximately match his usual timings, it required him to visit a link sent to his email in order to log in. Similar fingerprinting could likely be done here.
==Tor Hidden Service==
We could create a Tor hidden service that allows people to view and edit Wikipedia, and disable edit access to it if a large-scale attack happens. This solution doesn't really fix the problem of abuse, but it might not be a bad idea to set up a read-only Wikipedia mirror as a hidden service. :)
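The hidden-service half of the read-only mirror idea is just Tor configuration. A minimal torrc sketch, assuming a local web server already serving the mirror (the directory path and port numbers are illustrative):

    # torrc sketch for a read-only .onion mirror (paths/ports illustrative)
    HiddenServiceDir /var/lib/tor/wikipedia_mirror/
    HiddenServicePort 80 127.0.0.1:8080

Tor would generate the service's .onion hostname and keys in that directory, and the mirror itself would remain an ordinary web server listening only on localhost.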
A Note on Current Policy
--------------------------------------------------------------------------------
We do currently have a process in place for Tor users to request accounts and then to request IPBE for those accounts. At the beginning of 2014, Erik Moller tested this process (http://www.gossamer-threads.com/lists/wiki/wikitech/425124) and found that it wasn't adequate. He was able to get the account created, but he was not able to get the IPBE.
When emailing in he gave the following reasoning: "My reason for editing through Tor is that I would like to write about sensitive issues (e.g. government surveillance practices) and prefer not to be identified when doing so. I have some prior editing experience, but would rather not disclose further information about it to avoid any correlation of identities."
Nathan Awrich also attempted to get an IPBE, this time for his account, so that he could edit while using an anonymising proxy. This is actually something I myself have attempted to do as well. He was initially denied because he could simply turn the proxy off. An admin who knew him did eventually step in and give him the IPBE, but that is not going to happen for the vast majority of users.
I don't think that our current policy of allowing folks to email in is really the way to go, nor do I think that unilaterally blocking Tor solves anyone's problems. So clearly something needs to be done.
References
--------------------------------------------------------------------------------
"Can we help Tor users make legitimate edits?" 2012. http://www.gossamer-threads.com/lists/wiki/wikitech/323006
"Jake requests enabling access and edit access to Wikipedia via TOR" 2013. http://www.gossamer-threads.com/lists/wiki/wikitech/420039
"Tor exemption process" January 2014. http://www.gossamer-threads.com/lists/wiki/wikitech/425124
"Anonymous editors & IP addresses" July 2014. http://www.gossamer-threads.com/lists/wiki/wikitech/482562
"No Open Proxies" https://meta.wikimedia.org/wiki/No_open_proxies
"Editing with Tor" https://meta.wikimedia.org/wiki/Editing_with_Tor
"Bug 59146 - Enabling also edit access to Wikipedia via TOR" December 2013. https://bugzilla.wikimedia.org/show_bug.cgi?id=59146
On the other hand, there is no evidence that blocking TOR significantly reduced the number of editors. Btw, anyone with a good reason to use TOR has been granted a global exemption.
Vito
On the other hand, there is no evidence that blocking TOR significantly reduced the number of editors. Btw, anyone with a good reason to use TOR has been granted a global exemption.
This is demonstrably not true. I, for one, have a good reason to use Tor and have not been granted an IPBE. Contact me off-list for more information; I'd be happy to talk about it in a less public venue.
The lack of an IPBE is one of the primary reasons I don't use Tor or my anonymous proxy all the time when using the web at home. I hate to say it, but I am oftentimes willing to give up the privacy I get when browsing the rest of the web just so I don't have to disconnect from Tor or my VPN in order to fix a typo on Wikipedia. It actually makes me feel like quite the hypocrite at times, as that very behaviour is something I'm always nagging folks about...
Additionally, in the environment we currently live in, with the NSA doing their thing, I feel we shouldn't be punishing those who care about their privacy. You don't need to have anything to hide to want to protect yourself.
Thank you, Derric Atzrott
(Also, and not to nitpick, "Tor" not "TOR", please see https://www.torproject.org/docs/faq#WhyCalledTor)
Speaking frankly, I find (on a daily basis) too many abused VPNs to think TOR won't bring tons of abuse. Some months ago (I cannot remember when) TorBlock stopped working. Having a look at what happened at the time would be an interesting exercise. My perception is that it brought an increase in abuse (spam/trolling/vandalism).
(Here is what I would write in an RfC at Meta.) Taking into consideration that:
* our logs are stored for 90 days only
* the WMF is pretty conservative about releasing any data
* our privacy policy is *so* strict
but also taking into consideration that everyone is personally responsible for their own edits, I think the current system of TorBlock + exemption is the optimal solution.
Vito
I'd like to add an idea I've been thinking about to make TOR more acceptable.
A big part of the problem is that there are hundreds (thousands?) of exit nodes, so if someone is being bad, they just have to wait 5 minutes to get a new one, making it very hard to block them.
So what we could do is map all tor connections to appear (to MediaWiki) as if they are coming from a few private IP addresses. This way it's easy to block temporarily (in case a whole slew of vandalism comes in), and the political decision on whether to block or not becomes a local problem (the best kind of solution to a problem is the type that makes it somebody else's problem ;). I would personally hope that admins would only give short-term blocks to such an address during waves of vandalism, but ultimately it would be up to them.
To be explicit, the potential idea is as follows (see the sketch after this list):
* User accesses via tor.
* MediaWiki sees it's a tor request.
* Try to do limited browser fingerprinting, to perhaps mitigate the effect of a clueless user not using Tor Browser ruining it for everyone. Say, take a hash of the user-agent and various accept headers, and turn it into a number between 1 and 16.
* Make MW think the IP is 172.16.0.<number from previous step>.
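A minimal Python sketch of that hashing step (the header choice, bucket count, and address range are all illustrative):

    import hashlib
    from ipaddress import IPv4Address

    def tor_bucket_ip(user_agent: str, accept: str, accept_language: str) -> str:
        # hash a few request headers into a number between 1 and 16 ...
        fp = hashlib.sha256(
            "\n".join([user_agent, accept, accept_language]).encode()
        ).digest()
        bucket = fp[0] % 16 + 1
        # ... and present that bucket to MediaWiki as a private address
        return str(IPv4Address("172.16.0.0") + bucket)

    # every stock Tor Browser user shares one bucket; odd clients land elsewhere
    print(tor_bucket_ip("Mozilla/5.0 (Windows NT 6.1; rv:31.0) Gecko/20100101",
                        "text/html,application/xhtml+xml",
                        "en-US,en;q=0.5"))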
Then all the tor edits are together, making it easy to notice if somebody is abusing them, and easy for a local admin to block them all at once if need be.
This would also make most of the rate limiting apply against all people accessing via tor instead of per exit node, which is probably a good thing and would prevent repetitive abuse, people registering 10 billion accounts, etc. If we did this, we may also want to make pretty much every action trigger a captcha for those addresses (perhaps even for logged-in users coming from them), instead of the current lax captcha triggering. (On the bright side, our captchas are actually readable by people, unlike, say, Cloudflare's (reCAPTCHA), which I can't make heads or tails of.)
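A shared limit like that could be as simple as a single token bucket consulted for every edit arriving via Tor, rather than one bucket per exit node. A minimal sketch (the capacity and refill rate are made-up numbers):

    import time

    class SharedBucket:
        # one token bucket covering *all* Tor traffic, not one per exit node
        def __init__(self, capacity: int = 10, refill_per_sec: float = 0.2):
            self.capacity = capacity
            self.tokens = float(capacity)
            self.refill = refill_per_sec
            self.last = time.monotonic()

        def allow(self) -> bool:
            now = time.monotonic()
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.refill)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False   # over the shared limit: demand a captcha instead

    tor_edits = SharedBucket()   # consulted once per edit arriving via Tor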
If there are further concerns about potential abuse, we could tag all edits coming from TOR (including those from logged-in users) with an edit tag of "tor" (although that might violate the privacy policy by exposing how a logged-in user is accessing the site).
Thoughts? Would this actually make TOR be acceptable to the Wikipedians?
--bawolff
Okay, so I have to ask. What is this obsession with enabling TOR editing?
Stewards are having to routinely disable significant IP ranges because of spamming/vandalism/obvious paid editing/etc. through anonymizing proxies, open proxies, and VPNs - so I'm not really seeing a positive advantage in enabling an editing vector that would be as useful to block as the old AOL IPs.[1] If the advocates of enabling TOR were all willing to come play whack-a-mole - and keep doing it, day in and day out, for years - there might be something to be said for it. But it would be a terrible waste of a lot of talent, and I'm pretty sure none of you are all that interested in devoting your volunteer time that way.
We know what the "technical" solution would be here: to turn the on/off switch to "on". Enabling TOR from a technical perspective is simple. Don't forget, while you're at it, to address the unregistered editing attribution conundrum that has always been the significant secondary issue.
I'd encourage all of you to focus on technical ways to prevent abusive/inappropriate editing from all types of anonymizing edit platforms, including VPNs, sites like Anonymouse, etc. TOR is but one editing vector that is similarly problematic, and it would boggle the minds of most users to discover that developers are more interested in enabling another of these vectors rather than thinking about how to prevent problems from the ones that are currently not systemically shut down.
Risker/Anne
[1] Historical note - back in the day, AOL used to reassign IPs with every new link accessed through the internet (i.e., new IP every time someone went to a new Wikipedia page). It was impossible to block AOL vandals. This resulted in most of the known AOL IP ranges being blocked, since there was no other way to address the problem.
Speaking frankly, I find (on a daily basis) too many abused VPNs to think TOR won't bring tons of abuse. Some months ago (I cannot remember when) TorBlock stopped working. Having a look at what happened at the time would be an interesting exercise. My perception is that it brought an increase in abuse (spam/trolling/vandalism).
Might you have been talking about Bug #30716? [1] It happened in September of 2011. In May of 2012 the bug was re-opened, and it hasn't been closed yet.
Thank you, Derric Atzrott
I still believe that Nymble is the way to go here. It is the only solution that successfully allows negotiation of a secure collateral that can still be blacklisted after abuse has occurred.
Although, as mentioned, it is all about the collateral. Making the user provide something that requires work to obtain.
--
Tyler Romeo
Stevens Institute of Technology, Class of 2016
Major in Computer Science
Okay, so I have to ask. What is this obsession with enabling TOR editing?
It's the most well-known of the anonymizers and probably has the most traffic.
I'd encourage all of you to focus on technical ways to prevent abusive/inappropriate editing from all types of anonymizing edit platforms, including VPNs, sites like Anonymouse, etc. TOR is but one editing vector that is similarly problematic, and it would boggle the minds of most users to discover that developers are more interested in enabling another of these vectors rather than thinking about how to prevent problems from the ones that are currently not systemically shut down.
I completely agree with this. Most of the suggestions outlined in my summary email would work for more than just Tor. There is a great quote from Erik that I included there that points in this direction as well.
We need to transition away from a framework where IP addresses are our only means to block problematic editors and towards a framework where we can do so via other less intrusive means.
Thank you, Derric Atzrott
I suspect it's the most well-known anonymizer only amongst a limited group of technically knowledgeable people. There is an absolute proliferation of anonymizing services out there today, many with millions of users, and stunningly inexpensive ones are regularly advertised in mainstream media.
I don't have the imagination to come up with an overall solution, although I agree that IP/IP-range-specific blocking is becoming increasingly problematic as these systems proliferate. IPv6 notwithstanding, we're shutting off an ever-increasing percentage of internet users from participating because of the behaviour of comparatively few commercially or philosophically driven problem editors. Unfortunately, with our limited human resources (what with everyone being volunteers, and most editors just editing), it doesn't take a lot of problem editors to overwhelm our resources.
Risker/Anne
"[H]ow can we quantify the loss to Wikipedia, and to society at large, from turning away anonymous contributors? Wikipedians say 'we have to blacklist all these IP addresses because of trolls' and 'Wikipedia is rotting because nobody wants to edit it anymore' in the same breath, and we believe these points are related."
I've been doing admin work on enwiki since 2007, and I can give you two anecdotal data points:
(a) Previously unknown TOR endpoints get found out because they invariably are the source of vandalism and/or spam.
(b) I have never seen a good edit from a TOR endpoint. Ever.
A third one I can add since I have held checkuser (since 2009):
(c) I have never seen accounts created via TOR or that edited through TOR that weren't demonstrably block evasion, vandalism or (most often) spamming.
None of this is TOR-specific; the same observations apply to open proxies in general, and to the almost-totality of hosted servers. Long blocks of open proxies or co-lo ranges that time out after *years* of being blocked invariably start spewing spam and vandalism, often the very day the block expires.
-- Marc
There must be a way that we can allow users to work from Tor.
RESOLVED FIXED http://meta.wikimedia.org/wiki/NOP
Nemo
P.s.: Indeed, a million times, and every time more boring. Please reopen the issue only with concrete experience of issues with the fix.
We need to transition away from a framework where IP addresses are our only means to block problematic editors and towards a framework where we can do so via other less intrusive means.
And use what instead? Identities based on proof of possession of a phone number? Surety bonds paid in bitcoin? Faxing a driver's license to the Foundation? PKI? A web-of-trust system where existing Wikipedians can invite people in?
As Tyler said, "it is all about the collateral." Not using IPs is great in principle, but I'm not seeing anything equivalent to IP addresses that we could use instead.
--bawolff
p.s. That Nymble thing is cool.
It's not true for you, then ;)
Dealing with IPBE we tend to be conservative, but if you want to send me an off-list email, I'll take your reasons into the deepest consideration possible.
Vito
There are some possible alternatives, but none of them would work for our overall (non-geek) audience.
Vito
Yep, but the last time I checked I wasn't able to globally block an exit node because it was already blocked by TorBlock.
Vito
On 30/09/14 23:02, Marc A. Pelletier wrote:
On 09/30/2014 09:08 AM, Derric Atzrott wrote:
"[H]ow can we quantify the loss to Wikipedia, and to society at large, from turning away anonymous contributors? Wikipedians say 'we have to blacklist all these IP addresses because of trolls' and 'Wikipedia is rotting because nobody wants to edit it anymore' in the same breath, and we believe these points are related."
I've been doing adminwork on enwiki since 2007 and I can tell give you two anecdotal data points:
(a) Previously unknown TOR endpoints get found out because they invariably are the source of vandalism and/or spam.
(b) I have never seen a good edit from a TOR endpoint. Ever.
A third one I can add since I have held checkuser (2009):
(c) I have never seen accounts created via TOR or that edited through TOR that weren't demonstrably block evasion, vandalism or (most often) spamming.
None of this is TOR-specific; the same observations apply to open proxies in general, and to almost all hosted servers. Long blocks of open proxies or co-lo ranges that time out after *years* of being blocked invariably start spewing spam and vandalism, often the very day the block expired.
Hi Marc :)
I know I don't need to convince you that TOR is a good thing in general.
Still, I don't see how the abusive nature of what is being done via TOR makes it less valuable to our community, in particular in the post-Snowden era. Even without involving countries where freedom of speech is not legally guaranteed, it is reasonable to assume that someone making an edit that may look 'unfriendly' to the US or UK governments will feel uncomfortable doing so without TOR.
If, as it seems right now, the problem is technical (weed out the bots and vandals) rather than ideological (as we allow anonymous contributions after all) we can find a way to allow people to edit any Wikipedia via TOR while minimizing the amount of vandalism allowed.
Of course, let's not kid ourselves - it will require some special measures probably, and editing via TOR would probably end up not being as easy as editing via a public-facing IP (we may e.g. restrict publishing via TOR to users that have logged in and have done 5 "good" edits reviewed by others, or we can use modern bot-detecting techniques in that case - those are just ideas).
Cheers, Giuseppe -- Giuseppe Lavagetto, Wikimedia Foundation - TechOps Team
On Tue, Sep 30, 2014 at 2:33 PM, Federico Leva (Nemo) nemowiki@gmail.com wrote:
There must be a way that we can allow users to work from Tor.
RESOLVED FIXED http://meta.wikimedia.org/wiki/NOP
Not quite; if your _only_ means of access is Tor and you have no prior editing history to point to (which may be the situation if you're in a country where Internet access is heavily censored/monitored), this process is currently quite restrictive in terms of actually granting global exemptions, as previously demonstrated. [1]
We've had this conversation a few times and I'd love to see creative approaches to a trial/pilot with data driving future decisions. But given that the global exemption process is entirely a community (steward) process, it's not clear to me that WMF can/should do very much here directly. I also don't think it's really a technical problem first and foremost. It clearly is the kind of problem where people do like to _look_ for clever technical fixes, which is why it's a recurring topic on this list.
As a social problem, I stick with my original suggestion [2] to relax the global exemption rules a bit, monitor globally exempt accounts for abuse and constructive activity, and try to determine whether the cost/benefit ratio of relaxed rules is worth it. This could be done as a time-limited trial (say 30 days), and requires no new technology. If the cost/benefit ratio actually is worse, there are many non-technical ways to raise the barrier while still having a clearer path to success for sufficiently motivated people than today (say, the well-worn tool all bureaucracies use to manage intake, "fill out this form").
As Derric pointed out, as a policy issue it's a bit OT here, though it requires people who understand the full technical complexity to make a cogent case for a pilot on Meta and elsewhere. IOW -- I think many people who've been talking on this list about this issue share the right end goal, but it's the wrong target audience.
Erik
[1] https://lists.wikimedia.org/pipermail/wikitech-l/2014-January/074049.html [2] https://lists.wikimedia.org/pipermail/wikitech-l/2014-January/074070.html
In any case, the impact of Tor upon editors' accountability must be clearly discussed with the Foundation as maintainer (from a legal point of view too). I can be considered a sort of "stakeholder" for patrollers, and what I want is "something" lowering Tor's risk of vandalism/sockpuppetry to an ADSL-like level. Once that level is reached, to me, you can even block every non-Tor user ;p
Vito
On 1 October 2014 at 09:23:08, Giuseppe Lavagetto glavagetto@wikimedia.org wrote:
From my experience too, though I definitely appreciate Tor's transparency/fairness compared to VPNs and other tools.
Vito
On 30 September 2014 at 23:02:27, "Marc A. Pelletier" marc@uberbox.org wrote:
If, as it seems right now, the problem is technical (weed out the bots and vandals) rather than ideological (as we allow anonymous contributions after all) we can find a way to allow people to edit any Wikipedia via TOR while minimizing the amount of vandalism allowed.
Of course, let's not kid ourselves - it will require some special measures probably, and editing via TOR would probably end up not being as easy as editing via a public-facing IP (we may e.g. restrict publishing via TOR to users that have logged in and have done 5 "good" edits reviewed by others, or we can use modern bot-detecting techniques in that case - those are just ideas).
I would be curious to see what percentage of problematic edits are caught by running all prospective edits through AbuseFilter and ClueBotNG. I suspect those two tools would catch a large percentage of the vandalism edits. I understand that they catch most such edits that regular IP users make. This would be a good start and would give us a little bit of data as to what other sorts of measures might need to be taken to make this sort of thing work.
AbuseFilter has the ability to tag edits for further review so we could leverage that functionality to tag Tor edits during a trial.
I could reach out to the maintainer of ClueBotNG and see what could be done to get it to interface with AbuseFilter such that any edits it sees as unconstructive are tagged, and if that isn't possible maybe just have it log such edits somewhere special.
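To make the review queue concrete, here's a minimal sketch of how a patroller's tool might pull the tagged edits back out through the MediaWiki API. The wiki URL and the "tor-edit" tag name are just placeholders I made up for illustration; the rctag filter on list=recentchanges is the only real moving part.

import requests

API = "https://test.wikipedia.org/w/api.php"  # placeholder trial wiki

def tor_tagged_changes(tag="tor-edit", limit=50):
    # rctag restricts recent changes to those carrying the given change tag
    params = {
        "action": "query",
        "list": "recentchanges",
        "rctag": tag,
        "rcprop": "title|ids|user|comment|timestamp",
        "rclimit": limit,
        "format": "json",
    }
    resp = requests.get(API, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["query"]["recentchanges"]

for change in tor_tagged_changes():
    print(change["timestamp"], change["title"], change.get("comment", ""))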
We've had this conversation a few times and I'd love to see creative approaches to a trial/pilot with data driving future decisions.
If I approached Wikimedia-l with the idea of a limited trial with the above approach for maybe two weeks' time with all Tor edits being tagged, do you think they might bite?
It clearly is the kind of problem where people do like to _look_ for clever technical fixes, which is why it's a recurring topic on this list.
I suspect one exists somewhere. I'll reach out to the folks at the Tor project and see if they have any suggestions for ways to prevent abuse from a technical standpoint, especially in regard to sockpuppet abuse. I agree with Giuseppe that the measures that will need to be put in place will make editing via Tor more difficult than editing without Tor, but that's acceptable so long as they are not as prohibitively difficult as they are currently.
Without having spoken to the Tor Project though, the Nymble approach seems like a reasonable way to go to me. The protocol could potentially be modified to accept some sort of proof of work rather than the user's public-facing IP address as well. If we had a system where in order to be issued a certificate in Nymble you had to complete a proof-of-work that took perhaps several hours of computation and was issued for a week, that might be a sufficient barrier to stop most socks, though definitely more data needs to be gathered.
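To sketch what such a proof-of-work gate could look like (my own hashcash-style illustration, not anything the Nymble authors specify): the issuing server hands out a random challenge, the client grinds for a nonce whose hash has enough leading zero bits, and verification costs the server a single hash. The difficulty number below is an arbitrary assumption; tuning it up is what would buy the "several hours" of computation.

import hashlib
import secrets

def leading_zero_bits(digest: bytes) -> int:
    # count leading zero bits of a hash digest
    bits = 0
    for byte in digest:
        if byte == 0:
            bits += 8
            continue
        bits += 8 - byte.bit_length()
        break
    return bits

def solve(challenge: bytes, difficulty: int) -> int:
    # client side: grind nonces until the hash clears the difficulty bar
    nonce = 0
    while True:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        if leading_zero_bits(digest) >= difficulty:
            return nonce
        nonce += 1

def verify(challenge: bytes, nonce: int, difficulty: int) -> bool:
    # server side: one hash to check the submitted work
    digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
    return leading_zero_bits(digest) >= difficulty

challenge = secrets.token_bytes(16)          # issued by the certificate server
nonce = solve(challenge, difficulty=20)      # ~2^20 hashes; raise for hours of work
assert verify(challenge, nonce, difficulty=20)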
Thank you, Derric Atzrott
This is something that has to be discussed *on the projects themselves*, not on mailing lists that have (comparatively) very low participation by active editors. Sending to another mailing list, even a broader one than this, isn't going to get the buy-in needed from the people who will have to clean up the messes. You will need buy-in from at least the following groups:
- A significant number of editors from the project involved in the trial
- Stewards
- Global sysops/global rollbackers
- Checkusers
You will also have to absolutely guarantee that the trial will end on the date stated *regardless of what happens during the trial*, and that there will be non-project support for the collection and analysis of data. One of the reasons projects tend to not want to participate in trials is the unwillingness to return to status quo ante because someone/developers/the WMF/etc has decided on their own basis that the results were favourable without any analysis of actual data. Frankly, we've experienced this so often on English Wikipedia that it's resulted in major showdowns with the WMF that have had a real and ongoing impact on the WMF's ability to develop and improve software. (Don't kid yourself, this will be seen as a WMF proposal even though it may be coming from volunteer developers.)
Edit filters are developed project-by-project, and cannot be relied upon to catch problem edits; even with the huge number of edit filters on enwiki, there is still significant spamming and vandalism happening. Many of the projects most severely impacted by inappropriate editing are smaller projects with comparatively few active editors and few edit filters, where recent changes are not routinely reviewed; stewards and global sysops/rollbackers are often the people who clean up the messes there.
There also needs to be a good answer to the "attribution problem" that has long been identified as a secondary concern related to Tor and other proxy systems. The absence of a good answer to this issue may be sufficient in itself to derail any proposed trial.
Not saying a trial can't happen....just making it clear that it's not something that is within the purview of developers (volunteer or staff) because the blocking of Tor has always been directly linked to behaviour and core policy, not to technical issues. I very much disagree that this is a technical issue; Tor's blocking is a technical solution to a genuine policy/behaviour problem.
Risker/Anne
On Oct 1, 2014 10:55 AM, "Risker" risker.wp@gmail.com wrote:
This is something that has to be discussed *on the projects themselves*, not on mailing lists that have (comparatively) very low participation by active editors.
Unless people want to trial on mw.org (assuming there is dev buy-in; not sure we are there yet)
There also needs to be a good answer to the "attribution problem" that has long been identified as a secondary concern related to Tor and other proxy systems. The absence of a good answer to this issue may be sufficient in itself to derail any proposed trial.
Which problem is that?
Not saying a trial can't happen....just making it clear that it's not something that is within the purview of developers (volunteer or staff) because the blocking of Tor has always been directly linked to behaviour and core policy, not to technical issues.
I agree that any such trial should have local community buy in.
--bawolff
On Wed, Oct 1, 2014 at 10:29 AM, Brian Wolff bawolff@gmail.com wrote:
On Oct 1, 2014 10:55 AM, "Risker" risker.wp@gmail.com wrote:
This is something that has to be discussed *on the projects themselves*, not on mailing lists that have (comparatively) very low participation by active editors.
Unless people want to trial on mw.org (assuming there is dev buy in, not sure we are there yet)
Does mw.org receive the level of vandalism and other unhelpful edits (where people would like to use Tor to avoid IP blocking in making those edits) such that it would make for a useful test?
There also needs to be a good answer to the "attribution problem" that has long been identified as a secondary concern related to Tor and other proxy systems. The absence of a good answer to this issue may be sufficient in itself to derail any proposed trial.
Which problem is that?
If I understand it correctly, right now we attribute edits made without an account to the IP address. Allowing edits via Tor should probably not be attributing such edits to the exit node's IP.
One simple solution would be to disallow IP edits via Tor, i.e. softblock[1] all Tor exit nodes instead of hardblocking them.
[1]: https://en.wikipedia.org/wiki/Wikipedia:Blocking_policy#Setting_block_option...
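For what it's worth, the block API already exposes the switch for this; here's a hedged sketch of what softblocking an exit node might look like (the wiki URL, expiry, and token plumbing are elided assumptions, not working config):

import requests

API = "https://test.wikipedia.org/w/api.php"  # placeholder wiki

def softblock_exit_node(session: requests.Session, ip: str, csrf_token: str):
    # "anononly" makes this a softblock: only anonymous edits are stopped,
    # and logged-in users editing through the IP are unaffected.
    resp = session.post(API, data={
        "action": "block",
        "user": ip,
        "expiry": "1 month",
        "reason": "Tor exit node (softblock trial)",
        "anononly": 1,
        "token": csrf_token,
        "format": "json",
    }, timeout=30)
    resp.raise_for_status()
    return resp.json()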
On Wed, Oct 1, 2014 at 10:40 AM, Brad Jorsch (Anomie) bjorsch@wikimedia.org wrote:
One simple solution would be to disallow IP edits via Tor, i.e. softblock[1] all Tor exit nodes instead of hardblocking them.
[1]:
https://en.wikipedia.org/wiki/Wikipedia:Blocking_policy#Setting_block_option...
I'd agree with this. I've never understood why we even hardblock open proxies at all instead of just softblocking with account creation disabled.
Uh, create sleeper accounts from good IPs, let them go stale beyond CU retention, and you have an infinite number of accounts you can then use to skip past the softblocks on Tor and create havoc. Anything short of a hard block won't stop open proxy abuse.
And any kind of account creation block will cause issues with users who work across multiple projects, since SUL auto-account creation is also blocked.
On Oct 1, 2014 11:40 AM, "Brad Jorsch (Anomie)" bjorsch@wikimedia.org wrote:
On Wed, Oct 1, 2014 at 10:29 AM, Brian Wolff bawolff@gmail.com wrote:
On Oct 1, 2014 10:55 AM, "Risker" risker.wp@gmail.com wrote:
This is something that has to be discussed *on the projects themselves*, not on mailing lists that have (comparatively) very low participation by active editors.
Unless people want to trial on mw.org (assuming there is dev buy in, not sure we are there yet)
Does mw.org receive the level of vandalism and other unhelpful edits (where people would like to use Tor to avoid IP blocking in making those edits) that it would make for a useful test?
If we are testing something potentially very disruptive, no harm starting small. At the very least it would show if we could enable tor on mw.org. The results could help decide if further testing on more "real" wikis is justified.
There also needs to be a good answer to the "attribution problem" that has long been identified as a secondary concern related to Tor and other proxy systems. The absence of a good answer to this issue may be sufficient in itself to derail any proposed trial.
Which problem is that?
If I understand it correctly, right now we attribute edits made without an account to the IP address. Allowing edits via Tor should probably not be attributing such edits to the exit node's IP.
This quite frankly seems like a contrived problem. A random (normal) IP address hardly associates an edit to a person unless you steal an ISP's records. Wait a year and it would probably be impossible to figure out who owned some random dynamic IP address no matter how hard you tried. I don't think attributing edits to an exit node introduces any new attribution issues that are not already present.
--bawolff
Prior to TOR being enabled we need to be able to flag both logged in and logged out edits made via TOR.
Good point; I hadn't thought of that. What if we made some sort of semi-soft IP block that allowed accounts to edit only if they had fresh CheckUser data from a non-blocked IP, or something along those lines?
Prior to TOR being enabled we need to be able to flag both logged in and logged out edits made via TOR.
This is something that can be handled easily by AbuseFilter. It has the option to flag edits made by certain users or from certain IP addresses if I remember correctly.
Even if it doesn't, this should be fairly trivial to put together, I would imagine (though correct me if I am wrong).
Thank you, Derric Atzrott
On 1 October 2014 11:00, Brian Wolff bawolff@gmail.com wrote:
<snip>
There also needs to be a good answer to the "attribution problem" that has long been identified as a secondary concern related to Tor and other proxy systems. The absence of a good answer to this issue may be sufficient in itself to derail any proposed trial.
Which problem is that?
If I understand it correctly, right now we attribute edits made without an account to the IP address. Allowing edits via Tor should probably not be attributing such edits to the exit node's IP.
This quite frankly seems like a contrived problem. A random (normal) IP address hardly associates an edit to a person unless you steal an ISP's records. Wait a year and it would probably be impossible to figure out who owned some random dynamic IP address no matter how hard you tried. I don't think attributing edits to an exit node introduces any new attribution issues that are not already present.
I wish it was a contrived problem. However, this is the conceit by which the edits are attributed for licensing purposes, and it's a non-trivial matter. While I'm fully supportive of finding another way to do this, it is a fundamental issue that would require fairly extensive legal consultation to change, given that we've been using "IP address as assigned to a specific individual" as the licensee for...what, almost 14 years?
We know that Tor exit nodes are (by definition) not IP addresses assigned to the contributor, and there is no reasonable prospect of tracing back to the original IP address (unlike many other anonymising proxies). Thus the attribution issue.
I've copied Luis Villa on this specific email just as a heads up that this matter might land on the Legal & Community Advocacy doorstep, but I don't think we should expect a formal legal response about this.
Risker/Anne
The abuse filter has no way of identifying TOR exit nodes, thus it cannot be used for this. Some developer will need to interface with the TOR blocking code and use the same TOR identification methods to ID and label both logged in and logged out edits made via TOR.
I wish it was a contrived problem. However, this is the conceit by which the edits are attributed for licensing purposes, and it's a non-trivial matter. While I'm fully supportive of finding another way to do this, it is a fundamental issue that would require fairly extensive legal consultation to change, given that we've been using "IP address as assigned to a specific individual" as the licensee for...what, almost 14 years?
We know that Tor exit nodes are (by definition) not IP addresses assigned to the contributor, and there is no reasonable prospect of tracing back to the original IP address (unlike many other anonymising proxies). Thus the attribution issue.
Realistically there is no reasonable prospect of tracing back an individual IP to a real person 80% of the time without a court order, which is extremely unlikely to ever happen. Even then you can only really link the IP to who's paying the bill, which is only weakly circumstantially related to who really "owns" the edit.
If we're going to consider the theoretical possibility that we might be able to link an IP back to a person with certainty, we might as well start considering that we might be able to do the same if we get everyone in the Tor circuit to collude...
--bawolff
On Wed, Oct 1, 2014 at 11:05 AM, Jackmcbarn jackmcbarn@gmail.com wrote:
Good point; I hadn't thought of that. What if we made some sort of semi-soft IP block that allowed accounts to edit only if they had fresh CheckUser data from a non-blocked IP, or something along those lines?
That would rather defeat the purpose of using Tor, if you had to sign in from a non-Tor IP every month or so.
Another idea for a potential technical solution, this one provided by the user Mirimir on the Tor mailing list. I thought this was actually a pretty good idea.
Wikimedia could authenticate users with GnuPG keys. As part of the process of creating a new account, Wikimedia could randomly specify the key ID (or even a longer piece of the fingerprint) of the key that the user needs to generate. Generating the key would require arbitrarily great effort, but would impose negligible cost on Wikimedia or users during subsequent use. Although there's nothing special about such GnuPG keys as proof of work, they're more generally useful.
As a proof of work I think it works out pretty well. The cost of creating a key with a given fingerprint is non-trivial, but low enough that someone wishing to create an account to edit might well go through with it if they knew it would only be a one-time thing.
This doesn't completely eliminate the issue of socks, but honestly if we make the key generation time reasonably long, it would probably deter most socks as they might as well just drive to the nearest Starbucks.
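To make the cost concrete, here's a simplified analogue of the scheme. Random byte strings stand in for real GnuPG keypairs and SHA-1 stands in for the OpenPGP fingerprint, so this grinds far faster than the real thing; the suffix length is the knob that sets the work factor, and all the numbers are illustrative.

import hashlib
import secrets

def fingerprint(key: bytes) -> str:
    # stand-in for an OpenPGP key fingerprint
    return hashlib.sha1(key).hexdigest()

def grind_key(required_suffix: str) -> bytes:
    # keep generating "keys" until the fingerprint ends with the
    # server-specified suffix; each added hex char is a 16x cost increase
    while True:
        key = secrets.token_bytes(32)
        if fingerprint(key).endswith(required_suffix):
            return key

suffix = secrets.token_hex(3)   # 6 hex chars: ~16 million tries on average
key = grind_key(suffix)
print("matched", fingerprint(key), "for suffix", suffix)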
Someone else on the Tor mailing list suggested that we basically relax IPBE; while that's not on topic for this list, I thought I'd mention it for completeness. They basically described our current system, except with the IPBE stage made a lot easier.
The following was also pointed out to me:
[I]t's also trivial to evade using proxies, with or without Tor. Blocking Tor (or even all known proxies) only stops the clueless. Anyone serious about evading a block could just use a private proxy on AWS (via Tor). [snip] The bottom line is that blocking Tor harms numerous innocent users, and by no means excludes seriously malicious users.
I did respond to this to explain our concerns, which is what netted the GPG idea. Does anyone see any glaringly obvious problems with requiring an easily blockable and difficult to create proof of work to edit via Tor?
Thank you, Derric Atzrott
On Oct 1, 2014 3:56 PM, "Derric Atzrott" datzrott@alizeepathology.com wrote:
The problem with proof of work things is that they kind of have the wrong kind of scarcity for this problem.
*someone legit wants to edit, takes them hours to be able to. (Which is not ideal)
*someone wants to abuse the system, spend a couple months beforehand generating the work offline, use it all at once for a thousand-strong sockpuppet army. (Which makes the system ineffective at preventing abuse)
--bawolff
My example means that unless TOR is hard blocked, attackers can create 6 accounts per day on their home IP, just wait until they go stale, and then use 6 attack accounts per day. There isn't a need for infinite accounts; the point is that soft blocking is pointless in this case.
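The arithmetic behind that worry is simple; assuming a 90-day CheckUser retention window (the exact figure is my assumption), a patient attacker banks a sizeable stockpile before the first account even becomes usable:

ACCOUNTS_PER_DAY = 6
CU_RETENTION_DAYS = 90  # assumed retention window

# accounts created before the first batch ages past CU retention
stockpile = ACCOUNTS_PER_DAY * CU_RETENTION_DAYS
print(stockpile)  # 540 banked accounts, then a steady 6 "clean" ones per day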
On 10/1/14 9:09 AM, John wrote:
The abuse filter has no way of identifying TOR exit nodes, thus it cannot be used for this. Some developer will need to interface with the TOR blocking code and use the same TOR identification methods to ID and label both logged in and logged out edits made via TOR.
The TorBlock extension already adds a "tor_exit_node" variable[1] to the AbuseFilter, which is a simple boolean indicating whether the edit is being made through Tor or not.
[1] https://github.com/wikimedia/mediawiki-extensions-TorBlock/blob/master/TorBl...
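For anyone who wants to reproduce that check outside MediaWiki, a rough sketch of the underlying test is below. The bulk exit list URL is my assumption about the Tor Project's public service, not TorBlock's actual configuration, and a real deployment would cache the list rather than refetch it.

import urllib.request

EXIT_LIST_URL = "https://check.torproject.org/torbulkexitlist"  # assumed service

def fetch_exit_nodes() -> set:
    # one exit IP per line in the published list
    with urllib.request.urlopen(EXIT_LIST_URL, timeout=30) as resp:
        text = resp.read().decode("utf-8")
    return {line.strip() for line in text.splitlines() if line.strip()}

def is_tor_exit(ip: str, exits: set) -> bool:
    return ip in exits

exits = fetch_exit_nodes()
print(is_tor_exit("127.0.0.1", exits))  # presumably False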
-- Legoktm
On 10/1/14 8:02 AM, John wrote:
Prior to TOR being enabled we need to be able to flag both logged in and logged out edits made via TOR.
There's a $wgTorTagChanges option which does exactly that, except it's currently disabled in CommonSettings.php.
-- Legoktm
Derric Atzrott wrote on 2014/09/30 6:08:
Hello everyone, [snip] There must be a way that we can allow users to work from Tor. [snip more]
I think the first step is to work harder to block devices, not IP addresses. One jerk with a cell phone cycles through so many IP addresses so quickly in such active ranges that our current protection techniques are useless. Any child can figure out how to pull his cable modem out of the wall and plug it back in.
Focusing on what signature we can obtain from (or plant on) the device and how to make that signature available to and manageable by admins is the key. Maybe we require a WMF supplied app before one can edit from a mobile device. Maybe we plant cookies on every machine that edits Wikipedia to allow us to track who's using the machine and block access to anyone that won't permit the cookies to be stored. There are probably other techniques. The thing to remember is that the vast majority of our sockpuppeteers are actually fairly stupid and the ones that aren't will make their way past any technique short of retina scanning. It doesn't matter whether a blocking technique allows a tech-savvy user to bypass it somehow. Anything is better than a system that anyone can bypass by turning his cable modem off and on.
Once we have a system that allows us to block individual devices reasonably effectively, it won't matter whether those people are using Tor to get to us or not.
KWW
On 10/2/14, Kevin Wayne Williams kwwilliams@kwwilliams.com wrote:
So all we need is either:
A) Magic browser fingerprinting with no (or almost no) false positives when used against everyone in the world, with the fingerprinting code having at most access to javascript to run code (but preferably not even needing that), and it has to be robust in the face of the user being able to maliciously modify the code as they please.
B) Tamper-proof modules inside every device to uniquely identify it. (Can we say police state?)
Arguably those aren't making the assumption that "[users] are actually fairly stupid". But even a simplified version of those requirements, such as "must block on a per-device basis" and "must involve more work than unplugging a cable modem to get unblocked", dwells in the territory of the absurdly hard.
Although perhaps there is some subset of the population we could use additional methods on. Cookies are pretty useless (if you think getting a new IP is easy, you should see what it takes to delete a cookie). Supercookies (e.g. Evercookie) might be more useful, but many people view such things as evil. Certain browsers might have a distinctive enough fingerprint to block based on that, but I doubt we'd ever be able to do that for all browsers. These things are also likely to be considered "security vulnerabilities", so probably not something to be relied on over the long term as people fix the issues that allow people to be tracked this way.
Once we have a system that allows us to block individual devices reasonably effectively, it won't matter whether those people are using Tor to get to us or not
If you can find a way to link a tor user to the device they are using, then you have essentially broken Tor. Which is not an easy feat.
--bawolff
p.s. Obligatory xkcd https://xkcd.com/1425/
Hello everyone, [snip] There must be a way that we can allow users to work from Tor. [snip more]
I think the first step is to work harder to block devices, not IP addresses. [snip]
Focusing on what signature we can obtain from (or plant on) the device and how to make that signature available to and manageable by admins is the key.
These things are also likely to be considered "security vulnerabilities", so probably not something to be relied on over the long term as people fix the issues that allow people to be tracked this way.
The folks over at the Tor project actually pride themselves on making a browser that is hard to fingerprint. If we came up with any way to fingerprint individual browser sessions, they'd likely fix it pretty quickly.
Once we have a system that allows us to block individual devices reasonably effectively, it won't matter whether those people are using Tor to get to us or not
If you can find a way to link a tor user to the device they are using, then you have essentially broken Tor. Which is not an easy feat.
And of course, this is where the difficulty comes in. All of our current blocking measures are based around using information that is specifically hidden by Tor. The idea is to find a way to block individuals on Tor without having any information about those individuals that might be useful to someone trying to kill them (or at least identify their real world identity).
Thank you, Derric Atzrott
The problem with proof of work things is that they kind of have the wrong kind of scarcity for this problem.
*someone legit wants to edit, takes them hours to be able to. (Which is not ideal)
Indeed, this isn't ideal, but it's better than the current situation, and at least it is only a one-time thing.
*someone wants to abuse the system, spend a couple months beforehand generating the work offline, use it all at once for a thousand-strong sockpuppet army. (Which makes the system ineffective at preventing abuse)
I mean, I know we have some crazy socks, but "spend a couple months" seems to me to indicate a fairly expensive attack. I imagine that this might be enough of a deterrent. If someone is willing to invest months of effort to sockpuppet on Wikimedia projects, I don't really think that there is anything we can do to stop them.
We could probably reduce this risk slightly as well by providing software with a GUI that generates the GPG keys for the user. This software could impose a rate limit on how often new keys are made. This could be easily worked around by anyone who knows how to make their own GPG keys, or who has access to several computers, but it would stop a lot of would-be sockpuppeteers.
Thank you, Derric Atzrott
On 10/02/2014 01:27 AM, Kevin Wayne Williams wrote:
Focusing on what signature we can obtain from (or plant on) the device and how to make that signature available to and manageable by admins is the key.
... wait. Did you just suggest that we mitigate the inability to use an anonymizing system by a minuscule minority by imposing a massive privacy violation on every user?
-- Marc
On Wed, Oct 1, 2014 at 11:27 PM, Kevin Wayne Williams kwwilliams@kwwilliams.com wrote:
Focusing on what signature we can obtain from (or plant on) the device and how to make that signature available to and manageable by admins is the key.
I used to do this for a living in the name of "credit card fraud prevention". Not only is it a difficult problem, but it is also evil.
You will not find a method that is foolproof. It is completely possible to partition the browser space into 90% known good and 10% "looks funny". Separating the wheat from the chaff in that 10% is the hard problem, however. In the retail space this grey area ends up being managed by heuristics, ad hoc rules that only apply for a brief period of time, and labor-intensive manual review. Ultimately in the retail space it comes down to letting in enough bad actors that you don't exclude more sales than necessary. You figure out what your acceptable loss rate is and manage the real time transaction approval stream to maximize sales while keeping losses at or below an acceptable threshold. That threshold is typically something between 1% and 1.5% of your total sales volume by both transaction count and dollar value.
In a space where we are actually arguing that there is a potential of loss of life for exposed actors, I don't think that it is reasonable at all to discuss ways to increase the risk of exposure by creating and publishing (oh yeah, we are open source and open config for most things here) a recipe for tracking users in a durable fashion based on device fingerprints and other sticky token techniques.
Bryan
Bryan Davis wrote on 2014/10/02 8:46:
On Wed, Oct 1, 2014 at 11:27 PM, Kevin Wayne Williams kwwilliams@kwwilliams.com wrote:
Focusing on what signature we can obtain from (or plant on) the device and how to make that signature available to and manageable by admins is the key.
I used to do this for a living in the name of "credit card fraud prevention". Not only is it a difficult problem, but it is also evil.
[snip]
In a space where we are actually arguing that there is a potential of loss of life for exposed actors, I don't think that it is reasonable at all to discuss ways to increase the risk of exposure by creating and publishing (oh yeah, we are open source and open config for most things here) a recipe for tracking users in a durable fashion based on device fingerprints and other sticky token techniques.
Anybody that risks death by editing Wikipedia is an idiot: no privacy system is secure enough and no information is important enough to make that a reasonable decision. Treating editing Wikipedia as some noble effort that we must protect at the cost of increasing the vulnerability of the website is unreasonable.
There's no sacred right to privacy involved in editing the kind of material found on Wikipedia. Recognizing that it is nothing more than a repository of pop culture would allow us to prioritize protecting the site over the imaginary right to privately edit articles about Disney starlets.
KWW
On 10/02/2014 09:07 PM, Kevin Wayne Williams wrote:
Anybody that risks death by editing Wikipedia is an idiot: no privacy system is secure enough and no information is important enough to make that a reasonable decision.
I wouldn't have put it that way, but I've been saying something to that effect to sockmasters for some time when they pull out the "my security is in peril" card -- editing Wikipedia is an intrinsically *public* activity, and if doing so places you at risk of harm then you should not be editing at all as no technology or privacy policy will protect you to that level.
[...] Recognizing that it is nothing more than a repository of pop culture would allow us to prioritize protecting the site over the imaginary right to privately edit articles about Disney starlets.
That, on the other hand, is both an unfair and unwarranted slur on the work of countless volunteers. Even those that /do/ work on topics of popular culture bring value, but that characterization is nothing short of a demeaning insult to all -- including those volunteers who slave away on the parts of the encyclopedia even the snottiest of elitists must admit have value to mankind.
Please read the (coincidentally topical) https://lists.wikimedia.org/pipermail/wikimedia-l/2014-October/074792.html before you embarrass yourself further. (And a word of thanks, perhaps couched as an apology, to those volunteers might be in order.)
-- Marc
Marc A. Pelletier wrote on 2014/10/02 18:39:
On 10/02/2014 09:07 PM, Kevin Wayne Williams wrote:
Anybody that risks death by editing Wikipedia is an idiot: no privacy system is secure enough and no information is important enough to make that a reasonable decision.
I wouldn't have put it that way, but I've been saying something to that effect to sockmasters for some time when they pull out the "my security is in peril" card -- editing Wikipedia is an intrinsically *public* activity, and if doing so places you at risk of harm then you should not be editing at all as no technology or privacy policy will protect you to that level.
[...] Recognizing that it is nothing more than a repository of pop culture would allow us to prioritize protecting the site over the imaginary right to privately edit articles about Disney starlets.
That, on the other hand, is both an unfair and unwarranted slur on the work of countless volunteers. Even those that /do/ work on topics of popular culture bring value, but that characterization is nothing short of a demeaning insult to all -- including those volunteers who slave away on the parts of the encyclopedia even the snottiest of elitists must admit have value to mankind.
Check my edit history, and you will see that I spend the bulk of my time administering pop culture articles, including those self-same articles about Disney starlets. I'm surprised at the effort people put into it, but I respect it enough to prevent it from being vandalized. I'm just amused by people that view making such edits anonymously as some intrinsic right.
KWW
On 10/02/2014 09:57 PM, Kevin Wayne Williams wrote:
I'm just amused by people that view making such edits anonymously as some intrinsic right.
I would expect that most of the people who (sincerely) feel strongly about a putative right to edit anonymously are more likely to be making edits about political topics than pop culture; few people are hunted down for spewing trivia about the Mouseketeers, but tarnishing the image of one's Glorious Leader might be more perilous.
Which is, IMO, a good reason to not attempt to do so.
-- Marc
I heard from one editor, who shall remain nameless, that they have a lot to fear from certain people for political reasons, and that they edit anyway.
As we have seen from incidents even in democratic countries, officials, deep-pocketed litigants, businesses, or extremists sometimes threaten or take a variety of hostile actions against Wikimedia contributors, bloggers, journalists, or members of groups that they dislike.
Pine
Thanks for initiating the conversation, Derric. I've tried to put together a proposal addressing the general problem of allowing edits from a proxy. Feedback is appreciated.
Proposal:
* Require an account to edit via proxy.
* Allow creating accounts from proxies, but globally rate limit account creation across all proxies (to once per five minutes? or some data-driven number that makes sense).
* Tag any edits made through a proxy as such and put them in a queue.
* Limit the number of edits in that queue per account (to one? again, look at the data).
* Apply a first pass of abuse filtering to those edits before notifying a human that they need approval.
* Rate limit global proxy edits per second to something manageable (see data).
This limits the amount of backlog work a single user can create to however many captchas they can solve / accounts they can create. But I think it's enough of a deterrent in that 1) their edits aren't immediately visible, 2) if they're abusive, they won't show up on the site at all, and 3) it forces the attacker into premeditated creation of accounts, which can be associated at the time of an attack and deleted together.
Rate limiting account creation seems to open a DoS vector, but combining it with the captcha hopefully helps.
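For the rate-limiting pieces, a minimal LocalSettings.php sketch is below. The two settings are existing MediaWiki core configuration; the numbers are placeholders pending the data-driven tuning mentioned above, and core has no built-in notion of one shared bucket across all proxies, so that part would still need custom code.

    // LocalSettings.php sketch -- numbers are placeholders.

    // Core can limit account creations per source IP per day; it
    // cannot, by itself, pool all proxies into one shared bucket.
    $wgAccountCreationThrottle = 1;

    // Rate limit edits from newly registered accounts: at most 1 edit
    // per 300 seconds. 'newbie' is a real core rate-limit bucket.
    $wgRateLimits['edit']['newbie'] = array( 1, 300 );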
Attribution / Licensing:
As a consequence of requiring an account to edit via proxy, we avoid the issue of attributing edits to a shared IP.
Sybil attack:
Or, as it's called around here, sockpuppeting. CheckUser would presumably provide less useful information, but the edit histories of the accounts would still lend themselves to the same sorts of behavioural evidence gathering that is undertaken at present.
Class system:
This asks a set of users concerned about their security and privacy to trade off some usability, but that seems acceptable.
A reputation threshold for proxy users could be introduced: after a substantial number of edits and enough elapsed time, the above edit restrictions would be lifted from an account. Admins would still have recourse to block/suspend the account if it becomes abusive.
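To make that threshold concrete, a hypothetical helper might look like the sketch below. The function name and the numbers are invented for illustration; only the User methods and wfTimestamp() are existing MediaWiki API.

    // Hypothetical check: has a proxy-editing account earned enough
    // reputation to have the queue restrictions above lifted?
    function wfProxyUserIsTrusted( User $user ) {
        $minEdits = 500;            // placeholder threshold
        $minAge   = 90 * 24 * 3600; // placeholder: 90 days, in seconds

        $reg = $user->getRegistration(); // TS_MW string, or null/false
        if ( !$reg ) {
            return false; // unknown registration date: stay cautious
        }
        $ageOk = ( time() - (int)wfTimestamp( TS_UNIX, $reg ) ) >= $minAge;

        return $ageOk && $user->getEditCount() >= $minEdits;
    }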
Blacklisting:
Anonymous credential systems (like Nymble) are an interesting research direction, but the question of what collateral to use remains unsolved.
On Sun, Oct 12, 2014 at 3:24 AM, Arlo Breault abreault@wikimedia.org wrote:
Proposal:
Unless there is further discussion to be had on a new *technical* solution for Tor users, this is the wrong mailing list to be making these proposals. At the very least, take it to the main wikimedia list, or on-wiki, where this is a lot more relevant.
--
Tyler Romeo
Stevens Institute of Technology, Class of 2016
Major in Computer Science
Unless there is further discussion to be had on a new *technical* solution for Tor users, this is the wrong mailing list to be making these proposals. At the very least, take it to the main wikimedia list, or on-wiki, where this is a lot more relevant.
Thanks, Tyler. I kept the discussion going here because it sounded above like Derric may already be in the process of doing that, and I wanted to keep a unified voice there.
Although my suggestion is similar in kind to what had already been proposed, the main objection to it was that it would create too much work for our already constrained resources. The addition of rate limiting is a technical solution that may or may not be feasible.
The people on this list can best answer that.
On 10/12/2014 12:50 PM, Arlo Breault wrote:
The people on this list can best answer that.
What the people on this list cannot answer is /whether/, and under what conditions, it would be desirable to allow proxy editing in the first place.
-- Marc
On Sunday, October 12, 2014 at 4:45 PM, Marc A. Pelletier wrote:
On 10/12/2014 12:50 PM, Arlo Breault wrote:
The people on this list can best answer that.
What the people on this list cannot answer is /whether/, and under what conditions, it would be desirable to allow proxy editing in the first place.
The “that” I was referring to was whether the rate limiting, as I described it above, was technically feasible.
Sorry if that wasn’t clear.
Although my suggestion is similar in kind to what had already been proposed, the main objection to it was that it would create too much work for our already constrained resources. The addition of rate limiting is a technical solution that may or may not be feasible.
The people on this list can best answer that.
Does anyone know of any extensions that do something similar to the rate limiting he described? Force edits into a queue to be reviewed (sort of like FlaggedRevs), but limit selected users to a single pending edit? I can't imagine something like that would be hard to modify to pull from the list of Tor exit nodes to get its list of users.
I'll take a look at the TorBlock extension and the FlaggedRevs extension code and see what I can see.
Thank you, Derric Atzrott
On Mon, Oct 13, 2014 at 9:10 AM, Derric Atzrott datzrott@alizeepathology.com wrote:
Although my suggestion is similar in kind to what had already been proposed, the main objection to it was that it would create too much work for our already constrained resources. The addition of rate limiting is a technical solution that may or may not be feasible.
The people on this list can best answer that.
Does anyone know of any extensions that do something similar to the rate limiting he described? Force edits into a queue to be reviewed (sort of like FlaggedRevs), but limit selected users to a single pending edit? I can't imagine something like that would be hard to modify to pull from the list of Tor exit nodes to get its list of users.
AbuseFilter can rate limit per account globally, and edits via Tor have an abuse filter variable set. So a global filter (and all wikis would have to enable global filters... which is another political discussion) could be used to rate limit Tor edits, and also tag any that are made.
The review queue I'm not sure about; I don't know whether FlaggedRevs can keep a queue of edits with a particular tag.
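For what it's worth, a sketch of the global-filter wiring is below. Both settings are real AbuseFilter configuration variables; the database name is an assumption about the wiki farm, and tor_exit_node is the variable TorBlock is documented to expose (worth double-checking).

    // LocalSettings.php sketch: pointing AbuseFilter at a central wiki
    // enables global filters. The DB name here is assumed.
    $wgAbuseFilterCentralDB = 'metawiki';
    $wgAbuseFilterIsCentral = false; // true only on the central wiki

    // The filter itself is created on-wiki, not in PHP. With TorBlock
    // installed, a condition along the lines of
    //     tor_exit_node & action == "edit"
    // could then be given "tag" and "throttle" actions to mark and
    // rate limit Tor edits, roughly as described above.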