Actually, account creation has a captcha, so it likely wouldn't be using accounts.
-Aaron Schulz
From: jschulz_4587@msn.com
To: wikiquality-l@lists.wikimedia.org
Date: Fri, 21 Dec 2007 19:55:04 -0500
Subject: Re: [Wikiquality-l] Wikipedia colored according to trust
That would still be quite a feat. The bad bot would have to IP-hop across several A-classes and make vandalism that actually looks random (rather than the same set of garbage, repeated characters, or blanking). The IPs would have to change enough that it doesn't look like one bot on RC. The good bot would have to pause before reverting, to let others beat it to the punch. And if all it did was revert the bad bot, then, as I said, the bad bot's IPs would really have to be all over the place, unless it made random accounts too. It would still take a while to build up trust this way, and even at the max, it would still take several edits to get bad content colored white. The result would also have to be good enough to be set as the "most trusted" version. And even if this works on some pages, it will get reverted and the user blocked, and they'd have to start over with another "good bot".
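To illustrate the "even at the max, several edits" point: in a trust-coloring scheme where new text starts untrusted and gains trust each time a reviewing revision leaves it intact, the attacker needs multiple surviving edits before the content goes white. The update rule and all numbers below are invented for illustration; this is not WikiTrust's actual formula.

```python
# Toy model: text trust converges toward the author's reputation as
# later revisions leave the text untouched. All values are hypothetical.

MAX_TRUST = 9.0   # assumed maximum author reputation
RATE = 0.5        # fraction of the remaining gap closed per revision

def revisions_until_white(author_trust, threshold=0.9):
    """Count revisions until text trust reaches threshold * author_trust."""
    text_trust, revs = 0.0, 0
    while text_trust < threshold * author_trust:
        text_trust += RATE * (author_trust - text_trust)
        revs += 1
    return revs

print(revisions_until_white(MAX_TRUST))  # 4 -- several edits even at max trust
```

Even with a maximally trusted author in this toy model, the inserted text needs several confirming revisions before it is rendered as fully trusted.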
It is certainly always better to be less easily spoofed, and if there are practical measures that can stop this without making the software too underinclusive, I'm all for them. I said earlier that users in the formal bot group should not count as adding credibility. In the same vein, we could do the opposite: have groups that increase the maximum trust (credits) a user can accrue. The default max trust could then be lowered to further blunt things like good-bot/bad-bot networks. This would work well if integrated with FlaggedRevs, as the 'editor' group could have a higher max trust limit.
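A rough sketch of that group-based cap idea (group names and cap values are invented here, not taken from WikiTrust or FlaggedRevs):

```python
# Hypothetical per-group trust ceilings: bots add no credibility, the
# default cap is kept low, and trusted groups (e.g. FlaggedRevs
# 'editor') get a higher ceiling. All numbers are illustrative.

GROUP_MAX_TRUST = {
    "default": 4.0,   # lowered default cap to blunt bot networks
    "editor": 8.0,    # FlaggedRevs 'editor' group: higher ceiling
    "bot": 0.0,       # formal bot group counts for nothing
}

def max_trust(groups):
    """A user's cap is the highest cap among their groups."""
    return max(GROUP_MAX_TRUST.get(g, GROUP_MAX_TRUST["default"])
               for g in groups)

def add_trust(current, gained, groups):
    """Accrue trust from an edit, clamped to the group cap."""
    return min(current + gained, max_trust(groups))

# A throwaway account can never push past the low default cap:
print(add_trust(3.5, 2.0, ["default"]))  # 4.0
print(add_trust(3.5, 2.0, ["editor"]))   # 5.5
```

Under this scheme a good-bot/bad-bot network grinding out edits from anonymous or throwaway accounts hits the low default ceiling quickly, while human editors promoted into the 'editor' group can keep accruing trust.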
-Aaron Schulz