Hi Heather,
Thanks for writing. Below are some of my thoughts.
* Whether automatic recommendations work relies heavily on at least a few factors: the users who interact with these recommendations and their level of expertise with editing Wikimedia projects, the quality of the recommendations, how much context is provided as part of the recommendations, the incentives, and the design of the platform/tool/etc. where these recommendations get surfaced. The last point is critical: design is key in this context.
* We've had some good success stories with recommendations. As you have seen, the work we did in 2015 shows that you can significantly increase the article creation rate (by a factor of 3.2, without loss in quality) if you personalize the recommendations.[0] Of course, creating an article is a task better suited to experienced editors than to newcomers; had we done a similar experiment with newcomers, my gut feeling is that we would have seen a very different result. We also built a recommendation API [1] that is now being used in Content Translation for editors to receive suggestions on what to work on next (a rough sketch of how a client might query such an API follows this list). We saw a spike in contributions through the tool after this feature was introduced; today somewhere between 8% and 15% of the contributions through the tool come thanks to the recommendations.[2] There are other success stories around as well. For example, Ma Commune [3] focuses on helping French Wikipedia editors expand already existing articles (specific and limited types of articles for now). Recommendations have also worked really well in the context of Wikidata, where contributions can be made through games such as The Distributed Game [4].
* Specifically about the work we do on knowledge gaps, we're at the moment very much focused on the realm of machine in the loop (as opposed to human in the loop) [5]. By this I mean: our aim is to understand what humans are trying to do on Wikimedia projects and bring in machines/algorithms to help them do it more easily and efficiently, with the least frustration and pain. An example of this approach: we interviewed a couple of editathon organizers in Africa as part of The Africa Destubathon and learned that they were doing a lot of manual work extracting the structure of existing articles to create templates that newcomers could use to learn how to expand an already existing article (a sketch of how that extraction can be automated also follows this list). That's when we became sure that investing in section recommendations actually makes sense (later we learned we can help other projects, such as Ma Commune, too, which is great).
* More recently, the Contributors team conducted a research study to understand the needs of Wikipedia editors through in-person interviews. The focus areas coming out of this research [6] suggest that providing in-context help and task recommendations is important.
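To make the recommendation API point above a bit more concrete, here is a rough sketch of how a client could consume such a service. The endpoint URL, parameter names, and response fields below are illustrative assumptions on my part, not the actual interface; please refer to the developer documentation in [1] for the real details.

    # Hypothetical sketch: fetching "missing article" recommendations for an
    # editor and turning them into a worklist. The endpoint URL, parameter
    # names, and response shape are assumptions for illustration only; see
    # the GapFinder developer docs [1] for the actual API.
    import requests

    RECO_ENDPOINT = "https://recommend.example.org/api/articles"  # placeholder URL

    def get_recommendations(source_lang, target_lang, seed_article=None, count=10):
        """Ask the (hypothetical) service for articles that exist in
        source_lang but are missing or underdeveloped in target_lang."""
        params = {"source": source_lang, "target": target_lang, "count": count}
        if seed_article:
            params["seed"] = seed_article  # personalize around the editor's interests
        response = requests.get(RECO_ENDPOINT, params=params, timeout=10)
        response.raise_for_status()
        return response.json()  # assumed: a list of {"title": ..., "score": ...}

    if __name__ == "__main__":
        # Example: suggest English articles missing from French Wikipedia,
        # seeded with an article the editor recently worked on.
        for item in get_recommendations("en", "fr", seed_article="Dakar"):
            print(item["title"], item.get("score"))

The main design point is the seed parameter: personalizing around something the editor already cares about is the aspect the 2015 experiment [0] relied on.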
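And here is a similarly rough sketch of the section-structure extraction mentioned in the machine-in-the-loop point: collect section headings from a few well-developed articles of the same type and surface the common ones that a stub is missing. The action=parse call against the MediaWiki API is real; the article titles are placeholders, and this is not the actual section-recommendation pipeline, just an illustration of the idea.

    # Rough sketch of automating the manual work the editathon organizers
    # described: gather top-level section headings from well-developed
    # articles on similar topics and suggest the common ones a stub lacks.
    # Article titles below are illustrative placeholders.
    from collections import Counter
    import requests

    API = "https://en.wikipedia.org/w/api.php"

    def section_headings(title):
        """Return the top-level section headings of an article via the action API."""
        params = {"action": "parse", "page": title, "prop": "sections", "format": "json"}
        data = requests.get(API, params=params, timeout=10).json()
        return [s["line"] for s in data["parse"]["sections"] if s["toclevel"] == 1]

    def suggest_sections(example_articles, stub_title, top_n=5):
        """Sections common across example_articles but absent from the stub."""
        counts = Counter(h for t in example_articles for h in section_headings(t))
        existing = set(section_headings(stub_title))
        return [h for h, _ in counts.most_common() if h not in existing][:top_n]

    if __name__ == "__main__":
        examples = ["Nairobi", "Accra", "Lagos"]     # well-developed articles of one type
        print(suggest_sections(examples, "Dodoma"))  # placeholder stub title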
I hope these pointers help. I know we will talk about these more when we next talk, but if you or others have questions or comments in the meantime, I'd be happy to expand. Just be aware that it's annual planning time around here and we may be slow in responding. :)
Best, Leila
[0] https://arxiv.org/abs/1604.03235
[1] https://www.mediawiki.org/wiki/GapFinder/Developers
[2] These numbers are a few months old; I need to get updates. :)
[3] https://macommune.wikipedia.fr/
[4] http://magnusmanske.de/wordpress/?p=362
[5] Borrowing the term from Ricardo Baeza-Yates.
[6] https://www.mediawiki.org/wiki/New_Editor_Experiences#Focuses
--
Leila Zia
Senior Research Scientist
Wikimedia Foundation
On Thu, Feb 8, 2018 at 7:03 PM, Heather Ford hfordsa@gmail.com wrote:
Having a look at the new WMF research site, I noticed that notification and recommendation mechanisms seem to be the key strategy being focused on for filling Wikipedia's content gaps. Having just finished a research project on exactly this problem, and having come to the opposite conclusion, i.e. that automated mechanisms were insufficient for solving the gaps problem, I was curious to find out more.
This latest research that I was involved in with colleagues was based on an action research project aiming to fill gaps in topics relating to South Africa. The team tried a range of different strategies discussed in the literature for filling Wikipedia's gaps without any wild success. Automated mechanisms that featured missing and incomplete articles catalysed very few edits.
When looking for related research, it seemed that others had come to a similar conclusion, i.e. that automated notification/recommendation mechanisms alone didn't lead to improvements in particular target areas. That makes me think that either a) I just haven't come across the right research, or b) there are different types of gaps and those different types require different solutions: for example, gaps across language versions, gaps created by incomplete articles, gaps about topics for which there are few online/reliable sources (which are different from missing articles about topics for which there are many online/reliable sources), gaps in articles about particular topics or relating to particular geographic areas, etc.
Does anyone have any insight here, either on research that would help practitioners decide how to go about a project of filling gaps in a particular subject area, or on whether the key focus of research at the WMF is on filling gaps via automated means such as recommendation and notification mechanisms?
Many thanks!
Best, Heather.