We are doing research on a similar situation, combining machine vision processing with volunteer annotations in a citizen science project. It would be interesting to see how much translates across these settings, e.g., whether our ideas about using the machine annotations are applicable here as well.
Associate Dean for Research and Distinguished Professor of Information Science
School of Information Studies
348 Hinds Hall
Syracuse, New York 13244
Jan Dittrich <email@example.com>
Re: [Wiki-research-l] Google open source research on automatic
I find it interesting what impact this could have on the sense of ownership
for volunteers, if captions are autogenerated or suggested and
possibly affirmed or corrected.
On one hand, one could assume a decreased sense of ownership;
on the other hand, it might be easier to comment/correct than to write from
scratch, and feel much more efficient.
23:08 GMT+02:00 Dario
I forwarded this internally at WMF a few days ago. Clearly –
before thinking of building workflows for human contributors to generate
captions or rich descriptors of media files in Commons – we should look at
what's available in terms of off-the-shelf machine learning services and
#1 rule of sane citizen science/crowdsourcing projects: don't ask humans
to perform tedious tasks machines are pretty good at; get humans to curate
the inputs and outputs of machines instead.
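To make that division of labor concrete, here is a minimal sketch of such a machine-suggests/human-curates caption workflow. Everything in it is hypothetical: `suggest_caption` stands in for a real machine vision service, and the stubbed suggestions are invented for illustration; the point is only the routing of machine output through a volunteer affirm/correct step.

```python
def suggest_caption(filename):
    """Hypothetical ML service: return (caption suggestion, confidence).

    Stubbed with a fixed lookup for illustration only; a real system
    would call an image-captioning model or API here.
    """
    suggestions = {
        "cat.jpg": ("A cat sitting on a windowsill", 0.92),
        "bridge.jpg": ("A bridge", 0.41),
    }
    return suggestions.get(filename, ("", 0.0))


def curate(filename, volunteer_edit=None):
    """Combine a machine suggestion with an optional volunteer correction.

    The volunteer never writes from scratch: they either affirm the
    machine's caption (volunteer_edit is None) or correct it, and the
    result records which happened (provenance).
    """
    caption, confidence = suggest_caption(filename)
    if volunteer_edit is not None:
        return {"caption": volunteer_edit, "source": "volunteer-corrected"}
    return {
        "caption": caption,
        "source": "machine-affirmed",
        "confidence": confidence,
    }
```

For example, `curate("cat.jpg")` records a machine-affirmed caption, while `curate("bridge.jpg", volunteer_edit="A suspension bridge at dusk")` records a volunteer correction; keeping that provenance is what would let one later study the ownership effects Jan raises.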