On Fri, Dec 5, 2014 at 11:08 AM, MZMcBride <z@mzmcbride.com> wrote:
Chad wrote:
Well that was a fun experiment for an hour. Turns out captchas do actually stop a non-zero amount of spam on non-test wikis.
Mediawiki.org logs tell the story pretty clearly.
This has been rolled back.
:-(
https://www.mediawiki.org/wiki/Special:Log/delete
I spent a bit of time poking around. The spam seems to be primarily related to page creation. A slightly smarter heuristic (such as requiring an edit count greater than 0 before you can create a page) might mitigate this. Disallowing edits that contain "<a href" might also help.
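For illustration, the two heuristics above could be sketched roughly as AbuseFilter rules. This is only a sketch: the variable names (page_id, user_editcount, added_lines) are taken from the AbuseFilter rule language as I understand it, and any real filter would need testing against the abuse log before being set to disallow.

```
/* Sketch: page creation by a user with no prior edits.
   page_id == 0 means the page does not exist yet. */
action == "edit" &
page_id == 0 &
user_editcount == 0
```

```
/* Sketch: raw HTML link markup in the added text,
   which legitimate wikitext edits almost never contain. */
added_lines irlike "<a\s+href"
```

Either rule could initially be set to warn-only, so its hit rate and false positives can be checked in Special:AbuseLog before escalating it to disallow.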
The more I think about this, the more I wonder whether we should change the CAPTCHA model so that instead of applying CAPTCHAs in a blanket manner to types of actions (page creation, account creation, etc.), we could instead only force users to solve a CAPTCHA when certain abuse filters are triggered. Adding this functionality to the AbuseFilter extension is tracked at https://phabricator.wikimedia.org/T20110.
By shifting from the current rigid PHP configuration model to a looser and more flexible AbuseFilter model, we could hopefully ensure that anti-abuse measures (warning about an action, disallowing an action, or requiring a CAPTCHA before allowing an action) are more narrowly tailored to address specific problematic behavior. Even triggering AbuseFilter warnings that simply add an extra click/form submission for specific patterns of problematic behavior might trip up many of these spambots.
Big +1 for this.
Then, if a wiki community really wants to use CAPTCHA on all account creations or link additions, they can add that as AF rules.
I am quite sure that on English Wikipedia the AF admins would enjoy defining precise rules and monitoring their effectiveness using the AF log.
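To make the idea concrete, a wiki that wanted a CAPTCHA on link additions by new users might express it along these lines. Note the "captcha" consequence shown here does not exist today; it is exactly the feature requested in T20110, so this is a hypothetical sketch of what such a rule could look like.

```
/* Hypothetical: require a CAPTCHA (per T20110, not yet implemented)
   when a low-edit-count user adds an external link. */
action == "edit" &
user_editcount < 10 &
added_lines irlike "https?://"
```

The point of the model is that each community tunes the conditions (edit-count threshold, link patterns) itself, rather than relying on a blanket PHP-configured CAPTCHA.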