On 31/07/12 19:53, James Forrester wrote:
I agree. But when you're spammed to death without a captcha, you end up accepting one as a necessary evil.
Just to jump in here: it's not actually clear that our CAPTCHAs work at all at this point (per Tim's e-mail last year reporting that ours could be broken robotically 75% of the time).
On https://www.mediawiki.org/wiki/Admin_tools_development (created last week), we in WMF Engineering noted that we'd want to look properly at some data around these CAPTCHAs and how they're working. This might show us that it would be sensible to just turn them off (which of course would help usability for all users), as long as we're happy that the tools for preventing the vandalism they were intended to stop are working well.
Yours,
I went to a certain site I was recently pointed to. The site is plain MediaWiki, with no antispam extensions installed, and the bot IPs weren't blocked either. The bots seem to have been editing a single article.
The article was created in 2011.
2012-07-10: first vandalism edit. The page was replaced with gibberish, including gibberish links; this looks like a test to see whether the page is patrolled.
2012-07-15: they start replacing it with working domains and keywords.
From 2012-07-15 to 2012-07-30 there are 500-600 spammy edits *per day*.
Today (2012-07-31) the edit count rose to 1643 edits. That's a rate of 1.14 edits per minute!
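For anyone who wants to check the figure, the rate works out like this (a quick sketch, assuming the 1643 edits landed within a single day):

```python
# Sanity check of the quoted rate: 1643 edits over one 24-hour day.
edits = 1643
minutes_per_day = 24 * 60  # 1440
rate = edits / minutes_per_day
print(f"{rate:.2f} edits per minute")  # → 1.14
```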
Those look like generic bots, though. SimpleAntiSpam or MathCaptcha may be able to stop them. It may be worth preparing some honeypots for them and observing their behavior.
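For readers unfamiliar with how these extensions catch generic bots: a minimal sketch of the hidden-field trap technique that SimpleAntiSpam relies on, in Python pseudocode (the field name and form keys are illustrative, not the extension's real internals):

```python
# Sketch of a hidden-field spam trap: the edit form carries an extra
# input that is hidden from humans via CSS, so humans submit it empty.
# A naive bot that fills in every field reveals itself immediately.
TRAP_FIELD = "antispam_trap"  # illustrative name, hidden via CSS

def looks_like_bot(form_data: dict) -> bool:
    """Return True if the hidden trap field was filled in."""
    return bool(form_data.get(TRAP_FIELD, "").strip())

# A human browser leaves the hidden field empty; a generic bot does not.
print(looks_like_bot({"text": "real edit", TRAP_FIELD: ""}))         # False
print(looks_like_bot({"text": "buy pills", TRAP_FIELD: "http://x"}))  # True
```

The point of a honeypot wiki is the same trick at a larger scale: leave an apparently unprotected target up and log what the bots do to it.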
Our wikis are much better protected, though. Any such bot would be blocked, the article protected, the IPs added to the SpamBlacklist, and an EditFilter written to autoblock it every time. But it is useful to see the sharks that are out there. And even with many wikignomes, they can easily get overwhelmed when trying to stop it the first time.
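To illustrate the SpamBlacklist side of that defense: the extension keeps a list of regex fragments that are matched against URLs added by an edit. A minimal sketch (the patterns are made up for illustration, not a real blacklist):

```python
import re

# SpamBlacklist-style check: each blacklist line is a regex fragment,
# and an edit is rejected if any URL it adds matches one of them.
blacklist = [r"cheap-pills\.example", r"casino-[a-z]+\.example"]
pattern = re.compile("|".join(blacklist), re.IGNORECASE)

def edit_is_spam(added_urls):
    """Return True if any newly added URL matches the blacklist."""
    return any(pattern.search(url) for url in added_urls)

print(edit_is_spam(["http://en.wikipedia.org/wiki/Main_Page"]))   # False
print(edit_is_spam(["http://cheap-pills.example/buy?ref=spam"]))  # True
```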
Regards