Sage, the Wikiwho programmers in Germany have volunteered to help with the Accuracy Review bot, and fundraising is proceeding well, so this can happen sooner. I have been trying to get Frank Schulenberg to join me in supporting the measurement of both student edits and a wider random sample of edits, which I believe a decent experimental design allowing for a long-term cost-benefit analysis would require anyway. I am also interested in measuring paid human accuracy review on other subsets of articles, such as those with known conflict-of-interest advocacy, controversy, or high readership. From my perspective, the fact that a student made an edit is just one of several factors that could move a revision toward the front of a review queue.
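For illustration, here is a rough sketch of the kind of queue ordering I have in mind; the factor names and weights below are placeholders I made up for the example, not anything measured:

from dataclasses import dataclass

@dataclass
class Revision:
    rev_id: int
    by_student: bool      # edit came from a known student account
    coi_suspected: bool   # article has known conflict-of-interest advocacy
    controversial: bool   # article is tagged as controversial
    daily_views: int      # rough readership figure

# Placeholder weights; a real deployment would fit these to measurements.
WEIGHTS = {"by_student": 1.0, "coi_suspected": 3.0, "controversial": 2.0}

def priority(rev):
    score = 0.0
    if rev.by_student:
        score += WEIGHTS["by_student"]
    if rev.coi_suspected:
        score += WEIGHTS["coi_suspected"]
    if rev.controversial:
        score += WEIGHTS["controversial"]
    # Scale by readership so widely read articles surface first.
    return score * (1 + rev.daily_views / 10000)

def review_queue(revisions):
    # Highest-priority revisions go to the front of the human review queue.
    return sorted(revisions, key=priority, reverse=True)

Being a student edit is just one additive term in the score, which is exactly the point.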
So, where is a list of recent student editors?
Another topic from last month is whether participating in an accuracy review task that would ordinarily be paid (for humans) is effectively indistinguishable from a very general form of computer-aided instruction. I hope this has the utopian implication that tuition and labor will somehow cancel out, but although I am very optimistic, I am not yet anywhere near that optimistic. It does seem similar to the parsimonious situation in my professional field, where a reading tutor can be indistinguishable from a pronunciation tutor in certain circumstances; that doesn't alleviate the need for writing instruction.
In any case, that question suggests measuring whether unpaid volunteers are willing to participate in accuracy review tasks. So nobody will be turned away just because they want to work for free. In fact, we may try to attract conflict-of-interest advocacy editors into the volunteer pool to see whether we can automatically discover them via second-order review.
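By second-order review I mean having the reviews themselves reviewed, which can be as simple as comparing each volunteer's verdicts against the consensus of the other reviewers of the same items. A rough sketch; the agreement measure and thresholds are arbitrary placeholders:

from collections import Counter, defaultdict

def consensus(verdicts_by_item):
    # Majority verdict per item (ties resolved arbitrarily for simplicity).
    return {item: Counter(verdicts).most_common(1)[0][0]
            for item, verdicts in verdicts_by_item.items()}

def flag_outlier_reviewers(reviews, min_agreement=0.5, min_items=10):
    # reviews: iterable of (reviewer, item, verdict) triples.
    verdicts_by_item = defaultdict(list)
    for reviewer, item, verdict in reviews:
        verdicts_by_item[item].append(verdict)
    majority = consensus(verdicts_by_item)

    agreed = Counter()
    total = Counter()
    for reviewer, item, verdict in reviews:
        total[reviewer] += 1
        if verdict == majority[item]:
            agreed[reviewer] += 1

    # Reviewers who rarely agree with the pool deserve a closer look.
    return [r for r in total
            if total[r] >= min_items and agreed[r] / total[r] < min_agreement]

A reviewer who consistently votes against the consensus on articles where they have a stake would surface here without anyone having to identify them in advance.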
Please let me know your thoughts.
On Monday, April 20, 2015, Sage Ross ragesoss+wikipedia@gmail.com wrote:
On Fri, Apr 17, 2015 at 5:27 PM, James Salsman <jsalsman@gmail.com> wrote:
Thank you, Sage, for your reply:
... I've been chatting with the folks working on this, and they are actually quite close to having a usable API for estimated article quality — which I'm super excited about building into our dashboard. The human part of it will be down the road a bit, but the main purpose there will be to continually improve the model by having experienced editors create good ratings data for training the model. But I expect that there won't be much trouble in finding Wikipedians to pitch in on that.
I had actually been exploring the idea of setting up a crowdsourcing system where we might pay experienced editors to do before-and-after ratings for student work, but at this point I'm much more enthusiastic about the machine learning approach that the revision-scoring-as-a-service project is taking — since that is easy to scale up and maintain long term.
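(For concreteness, I imagine such an estimated-quality API being queried along these lines once it is available; the host, URL pattern, model name, and response shape in this sketch are my guesses rather than the project's confirmed interface:

import requests

def estimated_quality(rev_id, wiki="enwiki", model="articlequality"):
    # Assumed URL pattern for a revision-scoring service.
    url = f"https://ores.wikimedia.org/v3/scores/{wiki}/{rev_id}/{model}"
    resp = requests.get(url, timeout=10)
    resp.raise_for_status()
    data = resp.json()
    # Assumed response shape: {wiki: {"scores": {rev_id: {model: {"score": ...}}}}}
    return data[wiki]["scores"][str(rev_id)][model]["score"]["prediction"]

The point is just that a per-revision quality estimate like this could feed the same review queue as the human factors above.)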
I recommend measuring the optimal amount of human input and review; it is very substantially nonzero if you want to maximize the encyclopedia's utility function. Is there really nobody at the WEF who wants to try to co-mentor accuracy review? What if there were a cap on the total hours needed? I am sure you wouldn't regret it, but I am also happy to continue on my own for the time being.
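To make the cost-benefit point concrete, here is a toy calculation; every functional form and constant in it is a placeholder rather than a measurement:

import math

def benefit(hours, value_per_error=50.0, errors_reachable=200.0):
    # Diminishing returns: each additional review hour finds fewer new errors.
    return value_per_error * errors_reachable * (1 - math.exp(-hours / 100.0))

def cost(hours, hourly_rate=20.0):
    return hourly_rate * hours

def utility(hours):
    return benefit(hours) - cost(hours)

# Crude search for the optimal number of paid review hours.
best = max(range(0, 1001), key=utility)
print(best, round(utility(best), 1))  # well above zero under these assumptions

Under these made-up numbers the optimum is well above zero hours of paid review; the point of measuring is to find out where it actually sits.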
I'm definitely interested in better systems for human review — especially for the work of student editors — alongside automated quality estimation tools. It's not a project Wiki Ed has the capacity to take on right now, though.
-Sage
James, as we've said before, this is not something Wiki Ed is interested in collaborating with you on.
All student usernames from last term are available on the course pages at dashboard.wikiedu.org.
LiAnna
Education mailing list Education@lists.wikimedia.org https://lists.wikimedia.org/mailman/listinfo/education