Thanks for the detailed explanation, Aaron. As always, your work is a model of transparency for the rest of us :)

On Tue, Aug 23, 2016 at 12:40 PM Aaron Halfaker <aaron.halfaker@gmail.com> wrote:
Hi Luis!  Thanks for taking a look.  

First, I should say that false positives are to be expected.  We're working on better signaling in the UI so that you can distinguish the edits ORES is confident about from those it isn't confident about -- but which are still worth your review.
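
If you want to poke at the raw scores behind that signaling, here's a minimal sketch in Python -- assuming the ores.wikimedia.org scoring endpoint and its usual JSON layout; the threshold numbers are placeholders, not the model's real cut-offs -- of how you might bucket an edit by how confident ORES is:

    # Rough illustration only: fetch the "damaging" probability for an enwiki
    # revision from the ORES scoring service and bucket it by confidence.
    import requests

    ORES_URL = "https://ores.wikimedia.org/v3/scores/enwiki/"

    def damaging_probability(rev_id):
        """Return ORES' probability that the given revision is damaging."""
        resp = requests.get(ORES_URL, params={"models": "damaging", "revids": rev_id})
        resp.raise_for_status()
        score = resp.json()["enwiki"]["scores"][str(rev_id)]["damaging"]["score"]
        return score["probability"]["true"]

    def review_bucket(rev_id, confident=0.9, worth_review=0.6):
        """Bucket an edit by ORES' confidence that it is damaging.
        The 0.9 / 0.6 cut-offs are placeholders, not real model thresholds."""
        p = damaging_probability(rev_id)
        if p >= confident:
            return "very likely damaging"
        elif p >= worth_review:
            return "worth a look (ORES is unsure)"
        return "probably fine"

    print(review_bucket(123456))  # hypothetical revision ID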

So, in order to avoid a bias feedback loop, we don't want to feed observations you made *using* ORES back into the model -- since ORES' prediction itself could bias your assessment, and we'd just perpetuate that bias.  Still, we can use these misclassification reports to direct our attention to problematic behaviors in the model.  To actually train the model, we use the Wiki Labels system[1] to gather reviews of random samples of edits from Wikipedians.

Misclassification reports:

We're still working out the Right(TM) way to report false positives.  Right now, we ask that you do so on-wiki; in the future, we'll explore a nicer interface so that you can report them while using the tool.  We review these misclassification reports manually to focus our work on the models and to report on progress.  Because of the bias issues above, this data is never used directly to train the machine learning models.

Wiki labels campaigns:
In order to avoid the biases in who gets reviewed and why, we generate random samples of edits for review using our Wiki Labels[1] system.  We've completed a labeling campaign for English Wikipedia[2], but we could run an additional campaign to gather more data.  I'll get that set up and respond to this message when it is ready.
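
To make the sampling idea concrete, here's a back-of-the-envelope sketch in Python -- using the public MediaWiki API; the real campaign generation is more involved -- of drawing a uniform random sample of recent edits for review:

    # Illustration only: pull a window of recent revisions and draw a uniform
    # sample for human review, so labels aren't biased by who gets patrolled.
    import random
    import requests

    API_URL = "https://en.wikipedia.org/w/api.php"

    def recent_revision_ids(limit=500):
        """Fetch revision IDs of recent edits via the MediaWiki API."""
        params = {
            "action": "query",
            "list": "recentchanges",
            "rcprop": "ids",
            "rctype": "edit",
            "rclimit": limit,
            "format": "json",
        }
        data = requests.get(API_URL, params=params).json()
        return [rc["revid"] for rc in data["query"]["recentchanges"]]

    # 50 edits, sampled uniformly at random from the recent window.
    sample = random.sample(recent_revision_ids(), 50)
    print(sample)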


-Aaron

On Tue, Aug 23, 2016 at 1:30 PM, Luis Villa <luis@lu.is> wrote:
Very cool! Is there any way for users of this tool to help train it? For example, the first four things it flagged in my watchlist were all false positives (the next 5-6 were correctly flagged). It'd be nice to be able to contribute to training the model somehow when we see these false positives.

On Tue, Aug 23, 2016 at 11:10 AM Amir Ladsgroup <ladsgroup@gmail.com> wrote:

We (the Revision Scoring team) are happy to announce the deployment of the ORES review tool as a beta feature on English Wikipedia. Once enabled, ORES highlights edits that are likely to be damaging in Special:RecentChanges, Special:Watchlist, and Special:Contributions to help you prioritize your patrolling work. ORES detects damaging edits using a basic prediction model trained on examples of past damage. ORES is an experimental technology. We encourage you to take advantage of it, but also to be skeptical of its predictions. It's a tool to support you; it can't replace you. Please reach out to us with your questions and concerns.

Documentation: mw:ORES review tool, mw:Extension:ORES, and m:ORES
Bugs & feature requests: https://phabricator.wikimedia.org/tag/revision-scoring-as-a-service-backlog/
IRC: #wikimedia-aiconnect

Sincerely,
Amir from the Revision Scoring team


_______________________________________________
AI mailing list
AI@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/ai