I am told that the devs aren't keen on making an exception, though they (at least Tim Starling) agree the current method is rather messed up. They were talking about a more permanent solution instead.
One suggestion was to make the spam autoblocker reject an edit only if it introduces a new spam link, so that spam links already on the page do not affect unrelated edits. This comes at the expense of performance, though.
Then there is the matter of the spam blacklist page on Meta, which has been getting very large. Soon it will not be possible to load the page at all.
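For illustration, the "only block newly added links" idea could look roughly like the following minimal Python sketch. This is not the actual SpamBlacklist extension (which is written in PHP); the regex and function names here are made up. The performance cost comes from having to fetch and scan the previous revision on every save.

import re

# Hypothetical sketch: match the blacklist only against links that appear
# in the new wikitext but not in the old wikitext.
LINK_RE = re.compile(r'https?://[^\s\]<>"]+', re.IGNORECASE)

def added_links(old_text, new_text):
    """Return external links present in new_text but not in old_text."""
    return set(LINK_RE.findall(new_text)) - set(LINK_RE.findall(old_text))

def edit_blocked(old_text, new_text, blacklist_patterns):
    """Block the edit only if a newly introduced link matches the blacklist."""
    new_links = added_links(old_text, new_text)
    return any(
        re.search(pattern, link, re.IGNORECASE)
        for pattern in blacklist_patterns
        for link in new_links
    )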
- White Cat
On Thu, May 1, 2008 at 3:56 AM, Andrew Whitworth wknight8111@gmail.com wrote:
On Wed, Apr 30, 2008 at 7:37 PM, Mark Wagner carnildo@gmail.com wrote:
On 4/28/08, Chad innocentkiller@gmail.com wrote:
From a technical standpoint: I agree with Brion. There are a whole host of reasons why an edit might fail (locked DBs, protected pages, or even the server dying), and the bot needs to be designed to deal with that. If your bot crashes, etc. due to an edit failing: well, that's your fault as a developer.
It would be nice if flagged bots were exempt from the spam filter. Spam URLs and protected pages are the two situations my bots can't handle; for everything else, the bot can either wait or try again.
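For what it's worth, the "wait or try again" part can be as simple as the sketch below. This assumes a hypothetical save_page() helper that raises EditError on a failed save; real frameworks such as pywikibot have their own exception types and retry handling, so treat this as an illustration only.

import time

class EditError(Exception):
    pass

def save_with_retry(save_page, page, text, retries=5, delay=30):
    """Try to save a page, backing off and retrying instead of crashing."""
    for attempt in range(1, retries + 1):
        try:
            save_page(page, text)
            return True
        except EditError as err:
            # Log the failure and wait a bit longer each time.
            print(f"Edit to {page} failed ({err}); retry {attempt}/{retries}")
            time.sleep(delay * attempt)
    return False  # give up on this page and move on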
This is something I can agree with: if a user is trusted enough to receive the bot flag in the first place (that is, trusted not to make spam, vandalism, or controversial mass edits), we shouldn't have to worry about spam-filtering them.
--Andrew Whitworth