On 31 May 2012 16:59, WereSpielChequers <werespielchequers@gmail.com> wrote:
There were a number of flaws in this experiment that
IMHO reduce its value.
Firstly, rather than measuring vandalism it created vandalism, and vandalism
that didn't look like typical vandalism. Aside from the ethical issue
involved, this will have skewed the result. In particular, the edit summaries
were very atypical of vandalism; if I'd seen that edit summary on my
watchlist I would probably have just sighed and taken it as another example
of deletionism in action. Of the more than 13,000 pages on my watchlist, I
doubt there are 13 where I would look at such an edit, and that's assuming it
was one of the changes I was even aware of; my watchlist is far too big to
check fully every day. Most IP vandals don't use jargon in edit summaries,
and I know I'm not the only editor who is more suspicious of IP edits with
blank edit summaries.
This, I think, is a major issue that makes the results useless:
* The edit summary implies policy knowledge; I'd only occasionally check an
edit like that on my watchlist. Not every edit needs checking, so we use our
common sense about what likely needs checking.
* I believe that edit summary probably met a number of heuristics used by
the anti-vandal tools to filter out "good" edits, which would immediately
remove those edits from the "front line" of scrutiny (see the sketch below).
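To illustrate that second point, here is a minimal, hypothetical Python
sketch of the kind of summary-based triage a patrolling tool might do. To be
clear, this is not the actual code of Huggle, STiki, or ClueBot; the jargon
tokens, field names, and triage logic are all invented for illustration.

    # Hypothetical sketch of summary-based triage in an anti-vandal tool.
    # Tokens and field names are invented, not taken from any real tool.

    POLICY_JARGON = ("unsourced", "per wp:", "copyedit", "rv ")

    def looks_routine(edit):
        """True if the summary matches patterns typical of a policy-aware,
        good-faith editor; such edits get deprioritised."""
        summary = edit.get("summary", "").lower()
        if not summary:
            # Blank summaries from IPs attract extra scrutiny.
            return False
        return any(token in summary for token in POLICY_JARGON)

    def triage(edits):
        """Split a recent-changes feed into a front-line queue (checked
        first) and a deprioritised pile (often never checked at all)."""
        front_line, deprioritised = [], []
        for edit in edits:
            (deprioritised if looks_routine(edit) else front_line).append(edit)
        return front_line, deprioritised

    edits = [
        {"user": "203.0.113.7", "summary": ""},
        {"user": "198.51.100.4", "summary": "removed unsourced statement"},
    ]
    front_line, deprioritised = triage(edits)
    # The experiment's edits, with their policy-jargon summaries, land in
    # `deprioritised` and never reach the front line of scrutiny.

If anything like this was in play, the experiment's edits were measuring how
long deprioritised edits survive, not how long typical vandalism survives.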