We would like to announce a research project with the goal of studying
whether user interactions recorded at the time of editing are suitable for
predicting vandalism in real time.
Should vandal editing behavior be sufficiently different from normal
editing behavior, this would allow for a number of interesting real-time
prevention techniques. For example:
- withholding confidently suspicious edits for review before publishing,
- a popup asking "I am not a vandal" (as in Google's "I am not a
robot" captcha) to analyze vandal reactions,
- a popup with a chat box to personally engage vandals, e.g., to help them
find other ways of stress relief or to understand them better,
- or at the very least: a new signal to improve traditional vandalism detection.
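To illustrate the first idea, withholding could be driven by a simple threshold on a real-time score computed from interaction features. The features, weights, and threshold below are purely hypothetical placeholders for the sake of illustration, not results or design decisions from the study:

```python
# Hypothetical sketch: gate an edit on a real-time vandalism score.
# Feature names, weights, and the threshold are illustrative assumptions.

def vandalism_score(features: dict) -> float:
    """Combine interaction features (all hypothetical) into a score in [0, 1]."""
    weights = {
        "chars_deleted_per_second": 0.5,   # fast bulk deletion is suspicious
        "time_on_page_seconds": -0.01,     # longer deliberation, less suspicious
        "used_preview": -0.3,              # previewing suggests good faith
    }
    raw = sum(weights.get(name, 0.0) * value for name, value in features.items())
    return max(0.0, min(1.0, raw))  # clamp into [0, 1]

def route_edit(features: dict, threshold: float = 0.8) -> str:
    """Publish immediately, or withhold confidently suspicious edits for review."""
    return "withhold_for_review" if vandalism_score(features) >= threshold else "publish"

print(route_edit({"chars_deleted_per_second": 2.0, "used_preview": 0}))
# prints "withhold_for_review"
```

In practice, such a model would have to be learned from the interaction logs the study collects rather than hand-weighted as above.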
We have set up a laboratory environment to study editor behavior in a
realistic setting using a private mirror of Wikipedia. No editing
whatsoever is conducted on the real Wikipedia as part of our experiments,
and all test subjects of our user studies are made aware of the
experimental nature of their editing. We plan to use crowdsourcing to
attain scale and diversity.
If you wish to participate in this study as a test subject yourself, please
get in touch. The more diversity, the more insightful the results will be.
We are also happy to collaborate and to answer all questions that may arise
in relation to the project. For example, our setup and tooling may prove
useful for studying other aspects of user behavior without having to
deploy experiments within the live MediaWiki.
PS: The AICaptcha project seems most closely related. @Vinitha and Gergő:
If you wish, we can set up a Skype meeting to talk about avenues for
collaboration.
A group of students and researchers from Bauhaus-Universität Weimar ( ) and Leipzig University (www.temir.org); project PI: Martin