The next Research Showcase will be live-streamed on Wednesday, December 16,
at 9:30 AM PST/17:30 UTC, and will be on the theme of disinformation and
reliability of sources in Wikipedia. In the first talk, Włodzimierz
Lewoniewski will present recent work around multilingual approaches for the
assessment of content quality and reliability of sources in Wikipedia
leveraging machine learning algorithms. In the second talk, Diego
Saez-Trumper will give an overview of ongoing work on fighting
disinformation in Wikipedia; specifically, the development of tools and
datasets aimed at supporting the discovery of suspicious content and
improving the verifiability of content on Wikipedia.
YouTube stream: https://www.youtube.com/watch?v=v9Wcc-TeaEY
As usual, you can join the conversation on IRC at #wikimedia-research. You
can also watch our past research showcases here:
Speaker: Włodzimierz Lewoniewski (Poznań University of Economics and
Business)
Title: Quality assessment of Wikipedia and its sources
Abstract: Information in Wikipedia can be edited independently in over 300
languages. Therefore, the same subject can often be described differently
depending on the language edition. In order to compare information between
them, one usually needs to understand each of the languages considered. We
work on solutions that can help automate this process, leveraging machine
learning and artificial intelligence algorithms. A crucial component,
however, is the assessment of article quality; therefore, we need to know
how to define and extract different quality measures. This presentation
briefly introduces some of the recent activities of the Department of
Information Systems at the Poznań University of Economics and Business
related to quality assessment of multilingual content in Wikipedia. In
particular, we will demonstrate some of the approaches for the reliability
assessment of sources in Wikipedia articles. Such solutions can help to
enrich various language editions of Wikipedia and other knowledge bases
with information of better quality.
Speaker: Diego Saez-Trumper (Research, Wikimedia Foundation)
Title: Challenges on fighting Disinformation in Wikipedia: Who has the
truth?
Abstract: Unlike the major social media websites, where the fight against
disinformation mainly consists of preventing users from massively
replicating fake content, fighting disinformation on Wikipedia requires
tools that allow editors to apply the content policies of verifiability,
no original research, and neutral point of view. Moreover, while other
platforms try to apply automatic fact-checking techniques to verify
content, the ground truth for such verification is based on Wikipedia
itself; for obvious reasons, we cannot follow the same pipeline for
fact-checking content on Wikipedia. In this talk we will explain the ML
approach we are developing to build tools that efficiently support
Wikipedians in discovering suspicious content, and how we collaborate with
external researchers on this task. We will also describe a group of
datasets we are preparing to share with the research community in order to
produce state-of-the-art algorithms to improve the verifiability of
content on Wikipedia.
Janna Layton (she/her)
Administrative Associate - Product & Technology
Wikimedia Foundation <https://wikimediafoundation.org/>
Join the Research Team at the Wikimedia Foundation for their monthly
office hours on 2020-12-01, 17:00-18:00 UTC (9am PT / 6pm CET).
To participate, join the video call via this Wikimedia Meet link. There
is no set agenda; feel free to add your item to the list of topics in the
etherpad (you can do this after you join the meeting, too), or you are
welcome to just hang out. More detailed information (e.g. about how to
attend) can be found here.
Through these office hours, we aim to make ourselves more available to
answer some of the research-related questions that you, as Wikimedia
volunteer editors, organizers, affiliates, staff, and researchers face in
your projects and initiatives. Some example cases we hope to be able to
support you in:
You have a specific research-related question that you suspect you
should be able to answer with publicly available data, but you don't
know how to find an answer for it, or you just need some more help with it.
For example: how can I compute the ratio of anonymous to registered editors
in my wiki?
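To give a flavor of what an answer to that example question could look like, here is a minimal sketch (not an official recipe) that estimates the ratio from the public Wikimedia Analytics API (AQS). The endpoint path and response shape follow the documented "editors/aggregate" route, but please verify them against the current API documentation; the project and date range below are placeholders.

```python
# Sketch: estimate the ratio of anonymous to registered editors for a wiki
# using the public Wikimedia Analytics API (AQS). Endpoint details are
# assumptions based on the documented editors/aggregate route.
import json
import urllib.request

AQS = "https://wikimedia.org/api/rest_v1/metrics/editors/aggregate"

def editors_url(project, editor_type, start, end):
    """Build the AQS URL for monthly editor counts of one editor type.

    editor_type is e.g. "anonymous" or "user" (registered, non-bot);
    start/end are YYYYMMDD dates.
    """
    return (f"{AQS}/{project}/{editor_type}/all-page-types/"
            f"all-activity-levels/monthly/{start}/{end}")

def monthly_editors(project, editor_type, start, end):
    """Fetch and sum the monthly editor counts (requires network access)."""
    with urllib.request.urlopen(editors_url(project, editor_type, start, end)) as r:
        data = json.load(r)
    return sum(item["editors"] for item in data["items"][0]["results"])

def anon_to_registered_ratio(anonymous, registered):
    """Pure ratio helper, kept separate so the arithmetic is easy to check."""
    return anonymous / registered if registered else float("inf")

# Example (requires network; project and dates are placeholders):
#   anon = monthly_editors("en.wikipedia.org", "anonymous", "20201001", "20201101")
#   reg = monthly_editors("en.wikipedia.org", "user", "20201001", "20201101")
#   print(f"anonymous/registered editor ratio: {anon_to_registered_ratio(anon, reg):.3f}")
```

This only approximates "registered editors" as the AQS "user" editor type; depending on what you want to measure, bot accounts and activity levels may need different handling, which is exactly the kind of nuance an office-hours conversation can help with.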
You run into repetitive or very manual work as part of your Wikimedia
contributions and you wish to find out whether there are ways to use
machines to improve your workflows. These types of questions can sometimes
be harder to answer during an office hour; however, discussing them helps
us understand your challenges better, and we may find ways to work with
each other to address them in the future.
You want to learn what the Research team at the Wikimedia Foundation
does and how we can potentially support you. Specifically for affiliates:
if you are interested in building relationships with the academic
institutions in your country, we would love to talk with you and learn
more. We have a series of programs that aim to expand the network of
Wikimedia researchers globally, and we would love to collaborate more
closely with those of you interested in this space.
You want to talk with us about one of our existing programs.
Hope to see many of you,
Martin (WMF Research Team)