Hi Aaron,

Since implementing the library, I have noticed that NLTK has a PCFG class.  I have not looked closely at it, but you might consider taking a look at that first.  Either way, I'd be happy to talk about this with you.
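
From a quick glance, basic usage seems to be something like this (a minimal sketch assuming NLTK 3.x; the toy grammar and sentence here are just illustrative, not anything from the edit-quality work):

import nltk

# Toy grammar; the rule probabilities for each left-hand side must sum to 1.
grammar = nltk.PCFG.fromstring("""
    S -> NP VP [1.0]
    NP -> 'John' [0.5] | Det N [0.5]
    VP -> V NP [1.0]
    Det -> 'the' [1.0]
    N -> 'dog' [1.0]
    V -> 'saw' [1.0]
""")

# ViterbiParser returns the most probable parse under the grammar.
parser = nltk.ViterbiParser(grammar)
for tree in parser.parse(['John', 'saw', 'the', 'dog']):
    print(tree.prob())
    tree.pretty_print()

NLTK also ships an induce_pcfg helper that estimates rule probabilities from a list of treebank productions, which may overlap with part of what a custom PCFG library would need.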

best,
Arthur


On Mon, Aug 29, 2016 at 6:15 AM, Aaron Halfaker <ahalfaker@wikimedia.org> wrote:
It seems that ORES can't tell the difference between these types of edits and similar-looking edits that truly are damaging, so it flags them for human review.

Right now, I'm working on implementing a strategy called hashing vectorization[1] to get more signal out of an edit (a rough sketch of the idea is below).  But I think this strategy will fail to capture the kinds of things that are OK or not OK about this type of edit.  I think we really need to finish the implementation of probabilistic context-free grammars (PCFGs) that aetilley started work on.  It turns out that a lot of the work I'd done to get vectorization working will lend itself to PCFGs too, so I have some hope there.  In the meantime, we might have to suffer through reviewing this type of false positive.  Once we're ready to try some new strategies, it will be helpful to have a rich library of false positives to compare against, so it'll be great if you can keep adding interesting examples to https://meta.wikimedia.org/wiki/Research:Revision_scoring_as_a_service/Misclassifications/Edit_quality#English_Wikipedia

1. https://en.wikipedia.org/wiki/Feature_hashing#Feature_vectorization_using_the_hashing_trick
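
To illustrate the hashing trick mentioned above: each token is hashed into one of a fixed number of buckets, so an edit with an arbitrary vocabulary becomes a fixed-width vector.  This is only a minimal sketch of the idea (the bucket count and hash function are arbitrary, and it's not the actual revscoring code; scikit-learn's HashingVectorizer/FeatureHasher do the same thing in a production-ready way):

import hashlib

def hash_vector(tokens, n_buckets=2 ** 10):
    """Map a bag of tokens to a fixed-length count vector by hashing
    each token into one of n_buckets positions."""
    vec = [0] * n_buckets
    for token in tokens:
        digest = hashlib.md5(token.encode('utf-8')).hexdigest()
        vec[int(digest, 16) % n_buckets] += 1
    return vec

# e.g. the tokens added by an edit become a fixed-width feature vector
features = hash_vector(['changed', 'to', 'a', 'more', 'specific', 'word'])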

On Fri, Aug 26, 2016 at 4:02 PM, Ryan Kaldari <rkaldari@wikimedia.org> wrote:
FWIW, all of the ORES false positives that I've seen so far have been anonymous users fixing single words, for example correcting verb tense or changing to a more specific word.  ORES typically marks these as damaging with high confidence, regardless of the substance of the change.
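
If you want to check what ORES actually scores a particular revision, something like the following works (a sketch only; I'm assuming the public scoring endpoint and the v3 response layout, which may differ from what's currently deployed):

import requests

rev_id = 123456789  # hypothetical revision ID; substitute one from your watchlist
url = ('https://ores.wikimedia.org/v3/scores/enwiki/'
       '?models=damaging&revids=%d' % rev_id)

response = requests.get(url).json()
score = response['enwiki']['scores'][str(rev_id)]['damaging']['score']
print(score['prediction'], score['probability'])  # e.g. True {'true': 0.87, 'false': 0.13}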

On Wed, Aug 24, 2016 at 6:07 AM, Amir Ladsgroup <ladsgroup@gmail.com> wrote:
I also want to add that you can change ORES sensitivity in your preferences,
and I would add: "We deliberately set the default threshold low to capture
all vandalism cases, so false positives are expected; this is unlike
anti-vandalism bots, which set the threshold high to capture only clear
vandalism cases (and so have almost no false positives)."
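
To make the trade-off concrete, here is a tiny sketch with made-up scores (not real ORES numbers): lowering the cut-off catches all of the damaging edits but starts flagging good ones too.

# Hypothetical "damaging" probabilities for six edits, with true labels.
scores = [0.92, 0.85, 0.40, 0.35, 0.20, 0.10]
damaging = [True, True, True, False, False, False]

for threshold in (0.8, 0.3):
    flagged = [s >= threshold for s in scores]
    caught = sum(f and d for f, d in zip(flagged, damaging))
    false_pos = sum(f and not d for f, d in zip(flagged, damaging))
    print('threshold %.1f: caught %d of 3 damaging, %d false positive(s)'
          % (threshold, caught, false_pos))
# threshold 0.8: caught 2 of 3 damaging, 0 false positive(s)
# threshold 0.3: caught 3 of 3 damaging, 1 false positive(s)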

Best

On Wed, Aug 24, 2016 at 2:13 AM Aaron Halfaker <aaron.halfaker@gmail.com>
wrote:

> Thanks Luis!  :)
>
> And I just finished setting up a new labeling campaign for English
> Wikipedia.  This data will help us train/test more accurate models.
>
> See https://en.wikipedia.org/wiki/Wikipedia:Labels/Edit_quality for
> instructions on how to get started.
>
> -Aaron
>
> On Tue, Aug 23, 2016 at 4:05 PM, Luis Villa <luis@lu.is> wrote:
>
>> Thanks for the detailed explanation, Aaron. As always, your work is a
>> model of transparency for the rest of us :)
>>
>>
>> On Tue, Aug 23, 2016 at 12:40 PM Aaron Halfaker <aaron.halfaker@gmail.com>
>> wrote:
>>
>>> Hi Luis!  Thanks for taking a look.
>>>
>>> First, I should say that false positives should be expected.  We're
>>> working on better signaling in the UI so that you can differentiate between
>>> the edits that ORES is confident about and those that it isn't confident
>>> about -- but that are still worth your review.
>>>
>>> So, in order to avoid a bias feedback loop, we don't want to feed any
>>> observations you made *using* ORES back into the model -- since ORES'
>>> prediction itself could bias your assessment and we'd perpetuate that
>>> bias.  Still, we can use these misclassification reports to direct our
>>> attention to problematic behaviors in the model.  We use the Wiki Labels
>>> system[1] to gather reviews of random samples of edits from Wikipedians in
>>> order to train the model.
>>>
>>> *Misclassification reports:*
>>> See
>>> https://meta.wikimedia.org/wiki/Research:Revision_scoring_as_a_service/Misclassifications/Edit_quality
>>>
>>> We're still working out the Right(TM) way to report false positives.
>>> Right now, we ask that you do so on-wiki; in the future, we'll explore a
>>> nicer interface so that you can report them while using the tool.  We
>>> review these misclassification reports manually to focus our work on the
>>> models and to report on progress.  This data is never used directly to
>>> train the machine learning models, due to issues around bias.
>>>
>>> *Wiki labels campaigns:*
>>> In order to avoid the biases in who gets reviewed and why, we generate
>>> random samples of edits for review using our Wiki Labels[1] system.  We've
>>> completed a labeling campaign for English Wikipedia[2], but we could run an
>>> additional campaign to gather more data.  I'll get that set up and respond
>>> to this message when it is ready.
>>>
>>> 1. https://meta.wikimedia.org/wiki/Wiki_labels
>>> 2. https://en.wikipedia.org/wiki/Wikipedia:Labels/Edit_quality
>>>
>>> -Aaron
>>>
>>> On Tue, Aug 23, 2016 at 1:30 PM, Luis Villa <luis@lu.is> wrote:
>>>
>>>> Very cool! Is there any way for users of this tool to help train it?
>>>> For example, the first four things it flagged in my watchlist were all
>>>> false positives (the next 5-6 were correctly flagged).  It'd be nice to be
>>>> able to contribute to training the model somehow when we see these
>>>> false positives.
>>>>
>>>> On Tue, Aug 23, 2016 at 11:10 AM Amir Ladsgroup <ladsgroup@gmail.com>
>>>> wrote:
>>>>
>>>>> We (The Revision Scoring Team
>>>>> <https://meta.wikimedia.org/wiki/Research:Revision_scoring_as_a_service#Team>)
>>>>> are happy to announce the deployment of the ORES
>>>>> <https://meta.wikimedia.org/wiki/ORES> review tool
>>>>> <https://www.mediawiki.org/wiki/ORES_review_tool> as a beta feature
>>>>> <https://en.wikipedia.org/wiki/Special:Preferences#mw-prefsection-betafeatures>
>>>>>  on *English Wikipedia*. Once enabled, ORES highlights edits that are
>>>>> likely to be damaging in Special:RecentChanges
>>>>> <https://en.wikipedia.org/wiki/Special:RecentChanges>,
>>>>> Special:Watchlist <https://en.wikipedia.org/wiki/Special:Watchlist>
>>>>> and Special:Contributions
>>>>> <https://en.wikipedia.org/wiki/Special:Contributions> to help you
>>>>> prioritize your patrolling work. ORES detects damaging edits using a
>>>>> basic prediction model based on past damage
>>>>> <https://meta.wikimedia.org/wiki/Research:Automated_classification_of_edit_quality>.
>>>>> ORES is an experimental technology. We encourage you to take advantage of
>>>>> it but also to be skeptical of the predictions made. It's a tool to support
>>>>> you – it can't replace you. Please reach out to us with your questions and
>>>>> concerns.
>>>>> Documentation: mw:ORES review tool
>>>>> <https://www.mediawiki.org/wiki/ORES_review_tool>, mw:Extension:ORES
>>>>> <https://www.mediawiki.org/wiki/Extension:ORES>, and m:ORES
>>>>> <https://meta.wikimedia.org/wiki/ORES>
>>>>> Bugs & feature requests:
>>>>> https://phabricator.wikimedia.org/tag/revision-scoring-as-a-service-backlog/
>>>>> IRC: #wikimedia-ai (connect at
>>>>> <http://webchat.freenode.net/?channels=#wikimedia-ai>)
>>>>> Sincerely, Amir from the Revision Scoring team
_______________________________________________
Wikitech-l mailing list
Wikitech-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikitech-l



_______________________________________________
AI mailing list
AI@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/ai