Thanks for initiating the conversation, Derric. I've tried to put together a
proposal addressing the general problem of allowing edits from a proxy.
Feedback is appreciated.
Proposal:
* Require an account to edit via proxy.
* Allow creating accounts from proxies but globally rate limit account creation
from all proxies (to once per five minutes? or some data-driven number that
makes sense).
* Tag any edits made through a proxy as such and put them in a queue.
* Limit the number of edits in that queue per account (to one? again, look at
the data).
* Apply a first pass of abuse filtering on those edits before notifying a human
reviewer to approve them.
* Rate limit global proxy edits per second to something manageable (see data).
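The combined limits above can be sketched as follows. This is only an
illustration of how the pieces fit together; the class name, method names, and
all the specific numbers (300 seconds between signups, one queued edit per
account, one proxy edit per second) are placeholders for whatever the data
suggests, not part of the proposal itself.

```python
import time
from collections import defaultdict

class ProxyEditGate:
    """Sketch: global rate limit on proxy account creation, a per-account
    cap on queued (unreviewed) proxy edits, and a global rate limit on
    proxy edits. All thresholds are illustrative placeholders."""

    def __init__(self, creation_interval=300.0, max_queued_per_account=1,
                 min_edit_interval=1.0):
        self.creation_interval = creation_interval  # seconds between any two proxy signups
        self.min_edit_interval = min_edit_interval  # seconds between any two proxy edits
        self.max_queued = max_queued_per_account
        self.last_creation = float("-inf")
        self.last_edit = float("-inf")
        self.queued = defaultdict(int)  # account -> edits awaiting human review

    def allow_account_creation(self, now=None):
        now = time.monotonic() if now is None else now
        if now - self.last_creation < self.creation_interval:
            return False  # global signup rate from proxies exceeded
        self.last_creation = now
        return True

    def allow_edit(self, account, now=None):
        now = time.monotonic() if now is None else now
        if self.queued[account] >= self.max_queued:
            return False  # this account already has its quota of pending edits
        if now - self.last_edit < self.min_edit_interval:
            return False  # global proxy edit rate exceeded
        self.last_edit = now
        self.queued[account] += 1  # edit is tagged and enters the review queue
        return True

    def review_done(self, account):
        """Called when a human approves or rejects a queued edit."""
        self.queued[account] = max(0, self.queued[account] - 1)
```

Note that both rate limits are global across all proxies, which is what keeps
the total review backlog bounded regardless of how many accounts an attacker
controls.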
This limits the amount of backlog work a single user can create to however many
captchas they can solve / accounts they can create. But I think it's enough of
a deterrent in that 1) their edits aren't immediately visible, 2) if they're
abusive, they won't show up on the site at all, and 3) it forces the attacker
to premeditate account creation, so those accounts can be associated at the
time of an attack and deleted together.
Rate limiting account creation seems to open a DoS vector, but combining
it with the captcha hopefully mitigates that.
Attribution / Licensing:
As a consequence of requiring an account to edit via proxy, we avoid the
issue of attributing edits to a shared IP.
Sybil attack:
Or, as it's called around here, sockpuppeting. CheckUser would presumably
provide less useful information, but the edit history of the accounts would
still lend itself to the same sorts of behavioural evidence gathering
that is undertaken at present.
Class system:
This forces a set of users concerned about their security and privacy to trade
off some usability, but that seems acceptable.
A reputation threshold for proxy users can be introduced. After a substantial
number of edits and enough time has elapsed, the above edit restrictions can be
lifted from an account. Admins would still have recourse to block/suspend the
account if it becomes abusive.
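The reputation threshold could be as simple as the check below. To be clear,
the function name and the specific thresholds (500 approved edits, 90 days of
account age) are hypothetical stand-ins; the real values would again come from
the data.

```python
from datetime import datetime, timedelta, timezone

# Illustrative placeholders, not recommendations.
MIN_APPROVED_EDITS = 500
MIN_ACCOUNT_AGE = timedelta(days=90)

def restrictions_lifted(approved_edit_count, account_created, now=None):
    """Return True once an account has accumulated enough approved edits
    over a long enough period that the proxy-edit restrictions can be
    dropped. Admins retain the ability to block the account afterwards."""
    now = now or datetime.now(timezone.utc)
    return (approved_edit_count >= MIN_APPROVED_EDITS
            and now - account_created >= MIN_ACCOUNT_AGE)
```

Requiring both axes (edit count and account age) makes it harder to farm
reputation quickly with a burst of trivial edits.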
Blacklisting:
Anonymous credential systems (like Nymble) are interesting research directions,
but the question of what collateral is appropriate to use remains unsolved.