We are doing research on a similar situation, combining machine vision processing with volunteer annotations in a citizen science project. It would be interesting to see how much translates across these settings, e.g., whether our ideas about using the machine annotations are applicable here as well.
Kevin Crowston | Associate Dean for Research and Distinguished Professor of Information Science | School of Information Studies
Syracuse University 348 Hinds Hall Syracuse, New York 13244 t (315) 443.1676 f (315) 443.5806 e crowston@syr.edu
crowston.syr.edu
From: Jan Dittrich <jan.dittrich@wikimedia.de> Subject: Re: [Wiki-research-l] Google open source research on automatic image captioning
I find it interesting what impact this could have on volunteers' sense of achievement if captions are auto-generated or suggested and then possibly affirmed or corrected. On the one hand, one could assume a decreased sense of ownership; on the other hand, it might be easier to comment on or correct a suggestion than to write one from scratch, and feel much more efficient.
Jan
2016-09-27 23:08 GMT+02:00 Dario Taraborelli <dtaraborelli@wikimedia.org>:
I forwarded this internally at WMF a few days ago. Clearly, before thinking of building workflows for human contributors to generate captions or rich descriptors of media files in Commons, we should look at what's available in terms of off-the-shelf machine learning services and libraries.
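As a quick illustration, here is a minimal sketch of querying one such off-the-shelf service, assuming the Google Cloud Vision API and its Python client (note this does label detection rather than full sentence captioning, and the file path is a placeholder):

    from google.cloud import vision

    def suggest_labels(path, max_results=10):
        """Return (description, confidence) label suggestions for one image."""
        client = vision.ImageAnnotatorClient()
        with open(path, "rb") as f:
            image = vision.Image(content=f.read())
        # Ask the service for its top label guesses for this image.
        response = client.label_detection(image=image, max_results=max_results)
        return [(label.description, label.score)
                for label in response.label_annotations]

Each suggestion comes back with a confidence score, which is exactly the hook a human-curation workflow needs.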
#1 rule of sane citizen science/crowdsourcing projects: don't ask humans to perform tedious tasks machines are pretty good at; get humans to curate the inputs and outputs of machines instead.
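To make that rule concrete, here is a minimal sketch of a confirm-or-correct loop over machine suggestions; all names and thresholds are illustrative, not an actual Commons/WMF workflow:

    def review_queue(suggestions, threshold=0.9):
        # Split machine suggestions into auto-accepted and human-review
        # piles based on the model's confidence score.
        accepted, needs_review = [], []
        for filename, caption, score in suggestions:
            target = accepted if score >= threshold else needs_review
            target.append((filename, caption))
        return accepted, needs_review

    def confirm_or_correct(filename, caption):
        # Show one machine suggestion to a volunteer; an empty reply
        # confirms it, anything else replaces it with a correction.
        answer = input(f"{filename}: '{caption}' -- Enter to accept, "
                       "or type a correction: ").strip()
        return answer or caption

The volunteer never writes a caption from scratch unless the machine's suggestion is wrong; curation, not transcription, becomes the human task.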
D