Hello everybody,

Thanks so much for the fantastic suggestions.



Morten,

Thank you for the "Tell Me More" paper; those kinds of features were exactly what I was looking for. I will report my results to let you know how they compare.


Maik,

Thanks for introducing the idea of flaw-based assessment; I hadn't thought of using flaws as actionable features. I will look at which cleanup tags I come across most frequently in the many different languages.
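Concretely, I am picturing something like the sketch below: it counts maintenance templates in an article's wikitext with mwparserfromhell. The template names are only English-Wikipedia examples; the set would have to be localized for each language.

    import collections

    import mwparserfromhell  # pip install mwparserfromhell

    # Example English-Wikipedia cleanup templates; other language
    # Wikipedias use different names, so this set must be localized.
    CLEANUP_TEMPLATES = {"citation needed", "unreferenced", "cleanup",
                         "refimprove", "pov", "original research"}

    def cleanup_tag_counts(wikitext):
        """Count known cleanup templates in one article's wikitext."""
        parsed = mwparserfromhell.parse(wikitext)
        names = (str(t.name).strip().lower()
                 for t in parsed.filter_templates())
        return collections.Counter(n for n in names if n in CLEANUP_TEMPLATES)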


Laura,

Your conclusions about football player biographies are instructive; I will see whether diversity of editorship relates to quality in country articles.


Oliver,

Thanks for the warning about topic bias; I see that the problem affects Wikipedias as a whole. Since I am only looking at articles in one specific category at a time, my guiding assumption is that topic bias is not an issue within a single category, but it is good to keep in mind.


Maximilian Klein
Wikipedian in Residence, OCLC
+17074787023



From: wiki-research-l-bounces@lists.wikimedia.org <wiki-research-l-bounces@lists.wikimedia.org> on behalf of Maik Anderka <maik.anderka@uni-weimar.de>
Sent: Monday, December 16, 2013 12:33 AM
To: wiki-research-l@lists.wikimedia.org
Subject: Re: [Wiki-research-l] Existing Research on Article Quality Heuristics?
 
Hi!

Oliver already mentioned my dissertation [3] on analyzing and predicting quality flaws in Wikipedia. Instead of classifying articles into a quality grading scheme (e.g. featured vs. non-featured), the main idea is to investigate specific quality flaws, thus providing indications of the respects in which low-quality content needs improvement. We proposed this idea in [1] and developed it further in [2]. The second paper includes a list of more than 100 article features (heuristics) that have been used in previous research on automated quality assessment in Wikipedia. An in-depth description and implementation details of these features can be found in my dissertation [3] (Appendix B).
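For readers who want to see the shape of the setup, here is a toy sketch of the one-binary-classifier-per-flaw idea (not our actual implementation; the feature matrix simply stands in for the article features listed in [2]):

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def train_flaw_classifiers(X, flaw_labels):
        # X: (n_articles, n_features) matrix of article features.
        # flaw_labels: dict mapping a flaw name (e.g. "unreferenced") to a
        # 0/1 array marking which articles are tagged with that flaw.
        return {flaw: LogisticRegression(max_iter=1000).fit(X, y)
                for flaw, y in flaw_labels.items()}

    def predict_flaws(classifiers, x):
        # Return the set of flaws predicted for one article's feature vector.
        x = np.asarray(x).reshape(1, -1)
        return {flaw for flaw, clf in classifiers.items() if clf.predict(x)[0]}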

Best regards,
Maik

[1] Maik Anderka, Benno Stein, and Nedim Lipka. Towards Automatic Quality Assurance in Wikipedia. In Proceedings of the 20th International Conference on World Wide Web (WWW 2011), Hyderabad, India, pages 5-6, 2011. ACM.
http://www.uni-weimar.de/medien/webis/publications/papers/stein_2011d.pdf

[2] Maik Anderka, Benno Stein, and Nedim Lipka. Predicting Quality Flaws in User-generated Content: The Case of Wikipedia. In Proceedings of the 35th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2012), Portland, USA, pages 981-990, 2012. ACM.
http://www.uni-weimar.de/medien/webis/publications/papers/stein_2012i.pdf

[3] Maik Anderka. Analyzing and Predicting Quality Flaws in User-generated Content: The Case of Wikipedia. Dissertation, Bauhaus-Universität Weimar, June 2013.
http://www.uni-weimar.de/medien/webis/publications/papers/anderka_2013.pdf


On 15.12.2013 20:22, Oliver Ferschke wrote:
Hello everybody,

I've been doing quite a bit of work on article quality in Wikipedia - many heuristics have been mentioned here already.
In my opinion, a set of universal indicators for quality that works for all of Wikipedia does not exist.
This is mainly because the perception of quality differs so much across WikiProjects and subject areas within a single Wikipedia, and even more so across different Wikipedia language versions.
On a theoretical level, some universals can be identified. But as soon as you derive concrete heuristics, they will always be biased towards the articles you used to derive them.

This aspect aside, having an abstract quality score that tells you how good an article is according to your heuristics doesn't help a lot in most cases.
I much prefer the approach of identifying quality problems, which also gives you an idea of an article's overall quality.
I have done some work on this [1], [2] and there was a recent dissertation on the same topic [3].

I'm currently writing my dissertation on language technology methods to assist quality management in collaborative environments like Wikipedia. There, I start with a theoretical model, but as soon as concrete heuristics come into play, the model has to be grounded in the concrete quality standards that have been established in a particular sub-community of Wikipedia. I'm still wrapping up my work, but if anybody wants to talk, I'll be happy to.

Regards, 
Oliver


[1] The Impact of Topic Bias on Quality Flaw Prediction in Wikipedia
Oliver Ferschke, Iryna Gurevych, and Marc Rittberger
In: Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 721-730, August 2013. Sofia, Bulgaria.

[2] FlawFinder: A Modular System for Predicting Quality Flaws in Wikipedia - Notebook for PAN at CLEF 2012
Oliver Ferschke, Iryna Gurevych, and Marc Rittberger
In: CLEF 2012 Labs and Workshop, Notebook Papers, n. pag., September 2012. Rome, Italy.

[3] Analyzing and Predicting Quality Flaws in User-generated Content: The Case of Wikipedia
Maik Anderka
Dissertation, Bauhaus-Universität Weimar, June 2013.

--
-------------------------------------------------------------------
Oliver Ferschke, M.A.
Doctoral Researcher
Ubiquitous Knowledge Processing Lab (UKP-TU DA)
FB 20 Computer Science Department
Technische Universität Darmstadt
Hochschulstr. 10, D-64289 Darmstadt, Germany
phone [+49] (0)6151 16-6227, fax -5455, room S2/02/B111
ferschke@cs.tu-darmstadt.de
www.ukp.tu-darmstadt.de
Web Research at TU Darmstadt (WeRC) www.werc.tu-darmstadt.de
-------------------------------------------------------------------

From: wiki-research-l-bounces@lists.wikimedia.org [wiki-research-l-bounces@lists.wikimedia.org] on behalf of WereSpielChequers [werespielchequers@gmail.com]
Sent: Sunday, 15 December 2013 14:27
To: Research into Wikimedia content and communities
Subject: Re: [Wiki-research-l] Existing Research on Article Quality Heuristics?

Re Laura's comment.

I don't dispute that there are plenty of high-quality articles which have had only one or two contributors. However, my assumption and experience is that in general the more editors, the better the quality, and I'd love to see that assumption tested by research. There may be some maximum above which quality does not rise, and there are clearly a number of gifted members of the community whose work is as good as our best crowdsourced work, especially when the crowdsourcing element is limited to addressing the minor imperfections that come from their own blind spots. It would be well worthwhile to learn whether women's football is an exception to this, or indeed whether my own confidence in crowdsourcing is mistaken.

I should also add that while I wouldn't filter out minor edits, you might as well filter out reverted edits and their reversions. Some of our articles are notorious vandal targets, and their quality is usually unaffected by a hundred vandalisms and reversions of vandalism per annum; Beaver, before it was semi-protected in autumn 2011, is a case in point. This also feeds into Kerry's point that many assessments are outdated. An article that has been a vandalism target might have been edited a hundred times since it was assessed, and yet it is likely to have changed less than one with only half a dozen edits, all of which added content.
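To be concrete about the filtering: an identity revert restores an earlier revision's exact text, so its content hash matches a previous one (the MediaWiki API exposes per-revision SHA-1 hashes via rvprop=sha1). A rough sketch, assuming a chronological list of (rev_id, sha1) pairs for one article:

    def drop_reverts(revisions):
        # revisions: chronological list of (rev_id, sha1) pairs.
        keep = []
        position = {}  # sha1 -> index in `keep` of the revision with that text
        for rev_id, sha1 in revisions:
            if sha1 in position and position[sha1] < len(keep):
                # Identity revert: roll back to the earlier identical revision,
                # discarding both the reverted edits and the revert itself.
                keep = keep[:position[sha1] + 1]
            else:
                position[sha1] = len(keep)
                keep.append((rev_id, sha1))
        return keep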

Jonathan


On 15 December 2013 09:44, Laura Hale <laura@fanhistory.com> wrote:

On Sun, Dec 15, 2013 at 9:53 AM, WereSpielChequers <werespielchequers@gmail.com> wrote:
Re other dimensions or heuristics:

Very few articles are rated as Featured, and not that many as Good. If you are going to use that rating system, I'd suggest also including the lower levels, and indeed whether an article has been assessed at all and how long it typically takes for a new article to be assessed. Uganda, for example, has 1 Featured Article, 3 Good Articles, and nearly 400 unassessed articles on the English-language Wikipedia.

For a crowdsourced project like Wikipedia, the size of the crowd is crucial, and it varies hugely per article. So I'd suggest counting the number of distinct editors, other than bots, who have contributed to the article.
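As a rough sketch of that count, via the MediaWiki API's prop=contributors module (pagination via pccontinue is omitted for brevity, and unflagged bots would still slip through):

    import requests

    API = "https://en.wikipedia.org/w/api.php"

    def non_bot_editor_count(title):
        params = {"action": "query", "prop": "contributors", "titles": title,
                  "pcexcludegroup": "bot", "pclimit": "max", "format": "json"}
        pages = requests.get(API, params=params).json()["query"]["pages"]
        page = next(iter(pages.values()))
        # Registered contributors plus anonymous (IP) contributors, if any.
        return len(page.get("contributors", [])) + page.get("anoncontributors", 0)

    print(non_bot_editor_count("Uganda"))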

Except why would this be an indicator of quality? I've done an analysis recently of football player biographies where I looked at the total volume of edits, date created, total number of citations and total number of pictures, and none of these factors correlated with article quality. You can have an article with 1,400 editors and still have it assessed as Start class. Indeed, some of the lesser-known articles may actually attract specialist contributors who almost exclusively write on one topic and then take the article to DYK, GA, A or FA. The end result is that you have articles with low page views that are really great and that are maintained by one or two writers.



>Whether or not a Wikipedia article has references is a quality dimension you might want to look at. At least on EN it is widely assumed to 
>be a measure of quality, though I don't recall ever seeing a study of the relative accuracy of cited and uncited Wikipedia information.

Yeah, I'd be skeptical of this overall, even though an uncited article may well be bad. The problem is that you could get, say, one contentious section of the article that ends up fully cited or even overcited while the rest of the article is poorly cited. At the same time, you can get B articles that really should be GAs, but people have been burned by that process, so they just take the article to B and leave it there. I have heard quite a few times from female Wikipedians operating in certain areas that the process actually puts them off.

--
twitter: purplepopple
blog: ozziesport.com

_______________________________________________
Wiki-research-l mailing list
Wiki-research-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wiki-research-l



