James said:
> revision scoring as a service will not actually categorize the nature
> of what it is learning.

See https://en.wikipedia.org/wiki/Wikipedia:Labels/Edit_quality. We're almost ready to train and deploy a model with some nuance in its prediction based on the *reason* that something should be reverted (e.g., damaging/not-damaging and good-faith/bad-faith). We already have the labeling campaign done for Portuguese Wikipedia, and we're nearly done for Turkish, Persian, and English.

Beyond that work, I think there's a fun clustering project to be done here to discover categories of revert reasons.  I'm always looking for collaborators to advise on these types of fun projects.  *hint hint* 

But really, getting back to what I think Jane was referring to: we're not just building revert predictors. We're also building article quality models (e.g. http://ores.wmflabs.org/scores/enwiki/wp10/674383487/ -- the most recent edit of the article "Waffle" is probably of Featured Article quality) and edit type classifiers. See https://meta.wikimedia.org/wiki/Research:Automated_classification_of_edit_types. So you'll be able to look at how many article edits were "Information-Insertion/Modification" and how many were just "Copyedit" in X-tools or Special:UserContributions (assuming someone uses our service to implement that).
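For anyone curious what consuming that scores endpoint might look like, here's a minimal sketch. The URL pattern comes straight from the example link above; the JSON response shape is my assumption from that example, not an authoritative API reference.

```python
import json

# Base URL taken from the example link above.
ORES_BASE = "http://ores.wmflabs.org/scores/"

def score_url(wiki, model, rev_id):
    """Build the per-revision score URL in the pattern shown above."""
    return f"{ORES_BASE}{wiki}/{model}/{rev_id}/"

def extract_prediction(response_text, rev_id):
    """Pull the predicted class out of a (hypothetical) JSON reply,
    assumed to be keyed by revision ID with a "prediction" field."""
    data = json.loads(response_text)
    return data[str(rev_id)]["prediction"]

# A made-up response body, for illustration only:
sample = '{"674383487": {"prediction": "FA"}}'
print(score_url("enwiki", "wp10", 674383487))
# -> http://ores.wmflabs.org/scores/enwiki/wp10/674383487/
print(extract_prediction(sample, 674383487))
# -> FA
```

A tool like X-tools would presumably fetch that URL per revision (or in batches) and tally the predicted classes per user.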

But when it comes down to it, I think our best measures of value-added won't be the output of a machine classifier, but rather some careful work in measurement theory.  As Pine hopes ("assigning value to edits or editors; I would still like that project to go forward."), the project is continuing to move forward -- just slower than I had planned.  Due to the massive interest in Revision Scoring, I've been putting a lot more of my time there recently. 

Again, I'm always looking for collaborators on these projects. I do as much work as I can to get them online, and I have a small team working with me, but we can always use a hand. There are lots of ways to contribute, and you don't need to code: we need help labeling edits, doing outreach in new wikis we'd like to support, and translating our software and docs.


On Tue, Aug 4, 2015 at 7:53 AM, James Salsman <jsalsman@gmail.com> wrote:
> To answer your point about "basic categorisation of the nature of edits" I
> have two words for you: Revision Scoring

As Adam Wight pointed out, the MediaWiki system doesn't allow the
editor to categorize their reason for reverting, so currently revision
scoring as a service will not actually categorize the nature of what
it is learning.

Supervised learning tasks can include such categories, and although
something derived from the ontology will be selectable, probably from
radio buttons or a pull-down menu, during the
http://mediawiki.org/wiki/Accuracy_review pilot, there will still be
an "other" catch-all option.

Wiki-research-l mailing list