[Wikimedia-l] The most controversial topics in Wikipedia: A multilingual and geographical analysis
Tilman Bayer
tbayer at wikimedia.org
Mon Jul 22 04:58:03 UTC 2013
On Sun, Jul 21, 2013 at 2:32 PM, MZMcBride <z at mzmcbride.com> wrote:
> Anders Wennersten wrote:
>>A most interesting study looking at findings from 10 different language
>>versions.
>>
>>Jesus and the Middle East are the most controversial topics across the
>>world as a whole, while George W. Bush tops en:wp and Chile tops es:wp.
>>
>>http://arxiv.org/ftp/arxiv/papers/1305/1305.5566.pdf
>
FWIW, here is the review by Giovanni Luca Ciampaglia in last month's
Wikimedia Research Newsletter:
https://blog.wikimedia.org/2013/06/28/wikimedia-research-newsletter-june-2013/#.22The_most_controversial_topics_in_Wikipedia:_a_multilingual_and_geographical_analysis.22
(also published in the Signpost, the weekly newsletter on the English
Wikipedia)
> Thanks for sharing this.
>
> I had a bit of free time last night waiting for trains, so I skimmed
> through the study and its findings. Two points stuck out at me: a
> seemingly fatally flawed methodology and the age of the data used.
>
> The methodology used in this study seems inherently flawed. According to
> the paper, controversiality was measured by full page reverts, which are
> fairly trivial to identify and study in a database dump (using
> cryptographic hashes, as the study did), but I don't think full reverts
> give an accurate impression _at all_ of which articles are the most
> controversial.
>
> Pages with many full reverts are indicative of pages that are heavily
> vandalized. For example, the "George W. Bush" article is/was heavily
> vandalized for years on the English Wikipedia. Does blanking the article
> or replacing its contents with the word "penis" mean that it's a very
> controversial article? Of course not. Measuring only full reverts (as the
> study seems to have done, though it's certainly possible I've overlooked
> something) seems misleading and inaccurate.
They didn't. You may have overlooked the description of the
methodology on p. 5: it's based on "mutual reverts", where user A has
reverted user B and user B has reverted user A, and it gives higher
weight to disputes between more experienced editors. This should
exclude most vandalism reverts of the sort you describe. As noted in
Giovanni's review, this method was proposed in an earlier paper, Sumi
et al. (https://meta.wikimedia.org/wiki/Research:Newsletter/2011/July#Edit_wars_and_conflict_metrics).
That paper explains at length how this metric serves to distinguish
vandalism reverts from edit wars. Of course there is ample room to
refine it, e.g. by taking page protection logs into account.
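
To make the difference concrete, here is a rough sketch in Python of
how such a mutual-revert score separates drive-by vandalism from
genuine edit wars. The helper names and the exact weighting are mine;
the paper's actual formula differs in detail (it also folds in the
number of editors involved), so treat this as an illustration only:

    import hashlib

    def text_hash(text):
        # A revision whose text hash matches an earlier revision's
        # hash restores that earlier state, i.e. fully reverts
        # everything in between (the same trick the study uses).
        return hashlib.sha1(text.encode("utf-8")).hexdigest()

    def mutual_revert_score(revisions, edit_counts):
        # revisions: chronological list of (user, text) pairs for one
        # page. edit_counts: user -> total edit count, a crude proxy
        # for editor experience.
        first_seen = {}  # text hash -> index of first revision with it
        reverts = set()  # (reverting user, reverted user) pairs
        for i, (user, text) in enumerate(revisions):
            h = text_hash(text)
            if h in first_seen:
                for j in range(first_seen[h] + 1, i):
                    reverted = revisions[j][0]
                    if reverted != user:
                        reverts.add((user, reverted))
            first_seen.setdefault(h, i)
        # Keep only *mutual* reverts: A reverted B and B reverted A.
        mutual = {(a, b) for (a, b) in reverts
                  if (b, a) in reverts and a < b}
        # Weight each pair by the less experienced of the two editors:
        # a throwaway vandal account contributes almost nothing, a
        # feud between two veterans counts heavily.
        return sum(min(edit_counts.get(a, 1), edit_counts.get(b, 1))
                   for (a, b) in mutual)

Blanking or "penis" vandalism mostly produces one-directional reverts
of throwaway accounts by experienced patrollers, so under this scheme
it scores near zero.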
Personally, I'm more concerned that the new paper entirely fails to
put its subject into perspective by stating how frequent such
controversial articles are on Wikipedia overall. It's thus no wonder
that the ample international media coverage it generated mostly
conveys the notion (or reinforces the preconception) of Wikipedia as
a huge battleground.
The 2011 Sumi et al. paper did a better job in that respect: "less
than 25k articles, i.e. less than 1% of the 3m articles available in
the November 2009 English WP dump, can be called controversial, and of
these, less than half are truly edit wars."
>
> In order to measure how controversial an article is, there are a number of
> metrics that could be used, though of course no metric is perfect and many
> metrics can be very difficult to measure accurately and rigorously:
>
> * amount of talk page discussion generated for each article;
> * number of page watchers;
> * number of page views (possibly);
> * number of arbitration cases or other dispute resolution procedures
> related to the article (perhaps a key metric in determining which articles
> are truly most controversial); and
> * edit frequency and time between certain edits and partial or full
> reverts of those edits.
>
> There are likely a number of other metrics that could be used as well to
> measure controversiality; these were simply off the top of my head.
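
The first of those is at least cheap to approximate nowadays. A quick
sketch against the standard MediaWiki web API, counting talk page
revisions as a stand-in for discussion volume (the function name and
the cap are mine, and a raw revision count is of course only a rough
proxy):

    import requests

    API = "https://en.wikipedia.org/w/api.php"

    def talk_revision_count(title, cap=5000):
        # Count revisions of the article's talk page as a rough proxy
        # for how much discussion the article has generated.
        params = {
            "action": "query",
            "prop": "revisions",
            "titles": "Talk:" + title,
            "rvprop": "ids",
            "rvlimit": "max",
            "format": "json",
        }
        count = 0
        while count < cap:
            data = requests.get(API, params=params).json()
            page = next(iter(data["query"]["pages"].values()))
            count += len(page.get("revisions", []))
            if "continue" not in data:
                break
            params.update(data["continue"])  # follow API continuation
        return count

    print(talk_revision_count("George W. Bush"))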
Perhaps you'd be interested in this 2012 paper comparing such
metrics, which the authors of the present paper cite to justify their
choice:
Sepehri Rad, H., Barbosa, D.: Identifying controversial articles in
Wikipedia: A comparative study.
http://www.wikisym.org/ws2012/p18wikisym2012.pdf
Regarding detection of (partial or full) reverts, see also
https://meta.wikimedia.org/wiki/Research:Revert_detection
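
Identity (full) reverts fall out of the hash trick above; partial
reverts are much fuzzier. One naive line-based approach, using
nothing but Python's standard difflib (the helpers and the overlap
criterion are mine; real detectors are considerably more careful
about moved and re-added text):

    import difflib

    def line_diff(old, new):
        # Split one edit into the sets of lines it removed and added.
        removed, added = set(), set()
        for line in difflib.ndiff(old.splitlines(), new.splitlines()):
            if line.startswith("- "):
                removed.add(line[2:])
            elif line.startswith("+ "):
                added.add(line[2:])
        return removed, added

    def partially_reverts(earlier_edit, later_edit):
        # Each argument is an (old_text, new_text) pair. The later
        # edit partially reverts the earlier one if it removes lines
        # the earlier edit added -- a deliberately crude criterion.
        _, added = line_diff(*earlier_edit)
        removed, _ = line_diff(*later_edit)
        return bool(added & removed)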
>
> The second point that stuck out at me was that the study relied on a
> database dump from March 2010. While this may be unavoidable, relying on
> data that is now over three years old introduces obvious bias into the
> findings. Put another way, for the English Wikipedia, which started in
> 2001, this omits a quarter of the project's history(!). Again, given the
> length of time needed to draft and prepare a study, this gap may very
> well be unavoidable, but it certainly made me raise an eyebrow.
>
> One final comment I had from briefly reading the study: in the past few
> years we've made good strides in making research like this easier. Not
> that computing cryptographic hashes is particularly intensive, but these
> days we store such hashes directly in the database (though we store SHA-1
> hashes, not MD5 hashes as the study used). Storing these hashes in the
> database saves researchers from computing them and allows MediaWiki and
> other software to detect full reverts easily and quickly.
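
Indeed. For researchers who want their own hashes to line up with
what MediaWiki stores: as far as I know, the rev_sha1 column holds
the SHA-1 of the revision text, base-36 encoded and zero-padded to
31 characters. A sketch:

    import hashlib

    def rev_sha1(text):
        # SHA-1 of the revision text, base-36 encoded, zero-padded to
        # 31 characters -- matching (to my knowledge) what MediaWiki
        # writes into the revision table's rev_sha1 column.
        n = int(hashlib.sha1(text.encode("utf-8")).hexdigest(), 16)
        digits = "0123456789abcdefghijklmnopqrstuvwxyz"
        out = ""
        while n:
            n, r = divmod(n, 36)
            out = digits[r] + out
        return out.rjust(31, "0")

Two revisions with equal rev_sha1 values are byte-identical, so a
repeated hash within a page's history marks a full revert directly in
the database, with no text processing required.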
>
> MZMcBride
>
> P.S. Noting that this study is still a draft, I happened to notice a small
> typo on page nine: "We tried to a as diverse as possible sample including
> West European [...]". Hopefully this can be corrected before formal
> publication.
>
--
Tilman Bayer
Senior Operations Analyst (Movement Communications)
Wikimedia Foundation
IRC (Freenode): HaeB