I'll take the baton from LiAnna here, since this is a problem space I've been working on.

On Fri, Apr 17, 2015 at 3:16 PM, James Salsman <jsalsman@gmail.com> wrote:
 
I wish I could think of a budget for Accuracy Review. It would really help if someone from WEF would co-mentor it. LiAnna, would you like to co-mentor, or do you know anyone at Wiki Ed who would? I also need to reach out to the Simple English Wikipedia and let them know I am asking WEF to help build a bot for them. Luckily, my experience with Up Goer Five talk, LOGLAN, Freudenthal's 1960 LINCOS, and English should make that easy. If you want to co-mentor, you could try that, or tell me to do it, as you prefer.

We're broadly interested in the concept of 'accuracy review' (which would be quite different from revision scoring as laid out by Aaron Halfaker and company), but it's not something we've got the bandwidth for right now. We're also, at least for the time being, focused only on English Wikipedia.
 
Would it be okay to ask you to reach out to the Revision Scoring as a Service people and ask how much you should offer if you paid people to score revisions? It is a legitimate question whether that would put their (WMF's) safe harbor provisions at risk; I doubt it would, so I'll set that possibility aside for now, but please correct me if I'm wrong. If I had to guess at a starting rate, it would be $20 per hour plus pension and benefits. I don't know if that's right, so I would love to hear other opinions.

I've been chatting with the folks working on this, and they are actually quite close to having a usable API for estimated article quality, which I'm very excited about building into our dashboard. The human part will come a bit later; its main purpose will be to keep improving the model by having experienced editors produce good ratings data for training. I don't expect much trouble finding Wikipedians to pitch in on that.
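For anyone wondering what "building it into our dashboard" could look like in practice, here is a rough sketch of how a client might query a quality-scoring API of that kind. The URL, model name, and response shape below are placeholders of my own, not the project's actual interface:

    # Illustrative sketch only: the endpoint, model name, and response
    # structure are assumptions, not the real revision-scoring API.
    import requests

    def estimated_quality(rev_id):
        """Fetch a (hypothetical) article-quality prediction for one revision."""
        resp = requests.get(
            "https://ores.example.org/scores/enwiki/",   # placeholder URL
            params={"models": "articlequality", "revids": rev_id},
            timeout=10,
        )
        resp.raise_for_status()
        data = resp.json()
        # Assumed shape: {"<revid>": {"articlequality": {"prediction": "B", ...}}}
        return data[str(rev_id)]["articlequality"]["prediction"]

    print(estimated_quality(654321))  # e.g. "Start", "C", "B", "GA", "FA"

A dashboard would presumably call something like this for the revisions before and after a student's edits and compare the two predictions.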

I had actually been exploring the idea of setting up a crowdsourcing system where we might pay experienced editors to do before-and-after ratings of student work, but at this point I'm much more enthusiastic about the machine learning approach the revision-scoring-as-a-service project is taking, since that will be easier to scale up and maintain long term.

-Sage