This paper (the first reference below) is the result of a class project I was
part of almost two years ago for CSCI 5417 Information Retrieval Systems. It
builds on a class project I did in CSCI 5832 Natural Language Processing,
which I presented at Wikimania '07. The project ran very late; we didn't send
the final paper in until the day before New Year's. As far as I recall this
technical report was never really announced, so I thought it would be
interesting to look briefly at the results. The goal of the paper was to
break articles down into surface features and latent features and then use
those to study the rating system in use, predict article quality, and rank
results in a search engine. We used the [[random forests]] classifier, which
allowed us to analyze the contribution of each feature to performance by
looking directly at the weights that were assigned. While the surface
analysis was performed on the whole English Wikipedia, the latent analysis
was performed on the Simple English Wikipedia (it is more expensive to
compute).

= Surface features =

* Readability measures are the single best predictor of quality
that I have found, as defined by the Wikipedia Editorial Team (WET). The
[[Automated Readability Index]], [[Gunning Fog Index]] and [[Flesch-Kincaid
Grade Level]] were the strongest predictors, followed by length of article
HTML, number of paragraphs, [[Flesch Reading Ease]], [[SMOG Grading]], number
of internal links, [[Laesbarhedsindex Readability Formula]], number of words
and number of references. Weakly predictive were: number of "to be"s, number
of sentences, [[Coleman-Liau Index]], number of templates, PageRank, number
of external links, and number of relative links. Not predictive (overall;
see the end of section 2 for the per-rating score breakdown): number of h2s
or h3s, number of conjunctions, number of images*, average word length,
number of h4s, number of prepositions, number of pronouns, number of
interlanguage links, average syllables per word, number of nominalizations,
article age (based on page id), proportion of questions, and average
sentence length.
:* Number of images was actually by far the single strongest predictor of
any class, but only for Featured articles. Because it was so good at picking
out Featured articles and somewhat good at picking out A and G articles, the
classifier was confused in so many cases that the overall contribution of
this feature to classification performance is zero.
:* Number of external links is strongly predictive of Featured articles.
:* The B class is highly distinctive. It has a strong "signature," with high
predictive value assigned to many features. The Featured class is also very
distinctive. F, B and S (Start/Stub) contain the most information.
:* A is the least distinct class, not being very different from F or G.

= Latent features =

The algorithm used for latent analysis, which is an
analysis of the occurrence of words in every document with respect to the
link structure of the encyclopedia ("concepts"), is [[Latent Dirichlet
Allocation]]. This part of the analysis was done by CS PhD student Praful
Mangalath. An example of what can be done with the result of this analysis
is that you provide a word (a search query) such as "hippie". You can then
look at the weight of every article for the word hippie. You can pick the
article with the largest weight, and then look at its link network. You can
pick out the articles that this article links to and/or which link to this
article that are also weighted strongly for the word hippie, while also
contributing maximally to this article's "hippieness". We tried this query in
our system (LDA), Google (site:en.wikipedia.org hippie), and the Simple
English Wikipedia's Lucene search engine. The breakdown of articles occurring
in the top ten search results for this word for those engines is:
* LDA only: [[Acid rock]], [[Aldeburgh Festival]], [[Anne Murray]], [[Carl Radle]], [[Harry Nilsson]], [[Jack Kerouac]], [[Phil Spector]], [[Plastic Ono Band]], [[Rock and Roll]], [[Salvador Allende]], [[Smothers brothers]], [[Stanley Kubrick]]
* Google only: [[Glam Rock]], [[South Park]]
* Simple only: [[African Americans]], [[Charles Manson]], [[Counterculture]], [[Drug use]], [[Flower Power]], [[Nuclear weapons]], [[Phish]], [[Sexual liberation]], [[Summer of Love]]
* LDA & Google & Simple: [[Hippie]], [[Human Be-in]], [[Students for a democratic society]], [[Woodstock festival]]
* LDA & Google: [[Psychedelic Pop]]
* Google & Simple: [[Lysergic acid diethylamide]], [[Summer of Love]]
(See the paper for the articles produced for the keywords philosophy and economics.)

= Discussion / Conclusion =

* How you judge the results of the latent analysis is largely a matter of
perception. But what is interesting is that the LDA features predict the WET
ratings of quality just as well as the surface-level features. The two
feature sets (surface and latent) each pull out almost all of the
information that the rating system bears.
* The rating system devised by the WET is not very distinctive. The clearest
separation is between Featured, A and Good articles, grouped together, and B
articles. Featured, A and Good articles are also quite distinctive (Figure
1). Note that in this study we didn't look at Starts and Stubs, but in an
earlier paper we did.
:* This is
interesting when compared to this recent entry on the YouTube blog: "Five
Stars Dominate Ratings"
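Those readability measures are cheap to approximate. As a rough sketch of two of them (my own simplified tokenizer and syllable heuristic, not the paper's implementation):

```python
import re

def counts(text):
    """Naive word/sentence/character/syllable counts (simplified tokenizer)."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    chars = sum(len(w) for w in words)
    # crude syllable estimate: runs of vowels per word, minimum one
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    return len(words), len(sentences), chars, syllables

def automated_readability_index(text):
    # ARI = 4.71*(chars/words) + 0.5*(words/sentences) - 21.43
    w, s, c, _ = counts(text)
    return 4.71 * c / w + 0.5 * w / s - 21.43

def flesch_kincaid_grade(text):
    # FK grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    w, s, _, syl = counts(text)
    return 0.39 * w / s + 11.8 * syl / w - 15.59
```

In the setup described above, each such index would become one column of the feature matrix handed to the random forest, whose per-feature weights give the rankings listed in section 1.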
I think a sane, well-researched (with actual subjects) rating system is
well within the purview of the Usability Initiative. Helping people find and
create good content is what Wikipedia is all about. Having a solid rating
system allows you to reorganize the user interface, the Wikipedia
namespace, and the main namespace around good content and bad content as
needed. If you don't have a solid, information bearing rating system you
don't know what good content really is (really bad content is easy to spot).
:* My Wikimania talk was all about gathering data from people about articles
and using that to train machines to automatically pick out good content. You
ask people questions along dimensions that make sense to people, and give
the machine access to other surface features (such as a statistical measure
of readability, or length) and latent features (such as can be derived from
document word occurrence and encyclopedia link structure). I referenced page
262 of Zen and the Art of Motorcycle Maintenance to give an example of the
kind of qualitative features I would ask people. It really depends on what
features end up bearing information, to be tested in "the lab". Each word is
an example dimension of quality: We have "*unity, vividness, authority,
economy, sensitivity, clarity, emphasis, flow, suspense, brilliance,
precision, proportion, depth and so on.*" You then use surface and latent
features to predict these values for all articles. You can also say, when a
person rates this article as high on the x scale, they also mean that it has
this much of these surface and these latent features.
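The hippie-query ranking described earlier reduces, in LDA terms, to scoring each document d for a query word w as P(w|d) = sum over topics t of P(t|d)·P(w|t). A toy sketch with invented numbers (the matrices here are illustrative, not taken from the paper):

```python
# doc-topic weights (theta) and topic-word weights (phi), as an LDA run
# might estimate them; all names and values below are made up.
theta = {
    "Hippie":          {"counterculture": 0.9,  "music": 0.1},
    "Acid rock":       {"counterculture": 0.4,  "music": 0.6},
    "Nuclear weapons": {"counterculture": 0.05, "music": 0.0},
}
phi = {
    "counterculture": {"hippie": 0.2,  "war": 0.05},
    "music":          {"hippie": 0.03, "guitar": 0.1},
}

def query_score(word, doc):
    """P(word | doc) = sum over topics of P(topic | doc) * P(word | topic)."""
    return sum(w_t * phi[t].get(word, 0.0) for t, w_t in theta[doc].items())

def rank(word):
    """Articles ordered by how strongly they are weighted for the query word."""
    return sorted(theta, key=lambda d: query_score(word, d), reverse=True)
```

Restricting that ranking to the top document's link neighbourhood, as described in the latent-features section, then gives the "hippieness"-weighted link network.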
= References =
- DeHoust, C., Mangalath, P., Mingus, B. (2008). *Improving search in
Wikipedia through quality and concept discovery*. Technical Report.
- Rassbach, L., Mingus, B., Blackford, T. (2007). *Exploring the
feasibility of automatically rating online article quality*. Technical Report.
I have asked and received permission to forward to you all this most
excellent bit of news.
The LINGUIST List is a most excellent resource for people interested in the
field of linguistics. As I mentioned some time ago, they held a funding
drive in which they asked for a certain amount of money within a given
number of days, promising in return a project on Wikipedia to
learn what needs doing to get better coverage for the field of linguistics.
What you will read in this mail is that the whole community of linguists is
asked to cooperate. I am really thrilled as it will also get us more
linguists interested in what we do. My hope is that a fraction will be
interested in the languages that they care for and help them become more
relevant. As a member of the "language prevention committee", I love to get
more knowledgeable people involved in our smaller projects. If it means that
we get more requests for more projects we will really feel embarrassed with
all the new projects we will have to approve because of the quality of the
Incubator content and the quality of the linguistic arguments why we should
approve yet another language :)
NB: Is this not a really clever way of raising money? Give us this much in
this time frame and we will then do this as a bonus...
---------- Forwarded message ----------
From: LINGUIST Network <linguist(a)linguistlist.org>
Date: Jun 18, 2007 6:53 PM
Subject: 18.1831, All: Call for Participation: Wikipedia Volunteers
LINGUIST List: Vol-18-1831. Mon Jun 18 2007. ISSN: 1068 - 4875.
Moderators: Anthony Aristar, Eastern Michigan U <aristar(a)linguistlist.org>
Helen Aristar-Dry, Eastern Michigan U <hdry(a)linguistlist.org>
Reviews: Laura Welcher, Rosetta Project
The LINGUIST List is funded by Eastern Michigan University,
and donations from subscribers and publishers.
Editor for this issue: Ann Sawyer <sawyer(a)linguistlist.org>
To post to LINGUIST, use our convenient web form at
From: Hannah Morales < hannah(a)linguistlist.org >
Subject: Wikipedia Volunteers
-------------------------Message 1 ----------------------------------
Date: Mon, 18 Jun 2007 12:49:35
As you may recall, one of our Fund Drive 2007 campaigns was called the
"Wikipedia Update Vote." We asked our viewers to consider earmarking their
donations to organize an update project on linguistics entries in the
English-language Wikipedia. You can find more background information on this
The speed with which we met our goal, thanks to the interest and generosity
of our readers, was a sure sign that the linguistics community was
enthusiastic about the idea. Now that summer is upon us, and some of you may
have a bit of leisure time, we are hoping that you will be able to help us
get started on the Wikipedia project. The LINGUIST List's role in this
project is a purely
organizational one. We will:
*Help, with your input, to identify major gaps in the Wikipedia materials or
pages that need improvement;
*Compile a list of linguistics pages that Wikipedia editors have identified
as "in need of attention from an expert on the subject" or "does not cite
any references or sources," etc.;
*Send out periodical calls for volunteer contributors on specific topics or
*Provide simple instructions on how to upload your entries into Wikipedia;
*Keep track of our project Wikipedians;
*Keep track of revisions and new entries;
*Work with Wikimedia Foundation to publicize the linguistics community's
We hope you are as enthusiastic about this effort as we are. Just to help us
get started looking at Wikipedia more critically, and to easily identify an
entry needing improvement, we suggest that you take a look at the List of
Many people are not listed there; others need to have more facts and
added. If you would like to participate in this exciting update effort,
respond by sending an email to LINGUIST Editor Hannah Morales at
hannah(a)linguistlist.org, suggesting what your role might be or which
entries you feel should be updated or added. Some linguists who saw our
on the Internet have already written us with specific suggestions, which we
will share with you soon.
This update project will take major time and effort on all our parts. The
result will be a much richer internet resource of information on the breadth
and depth of the field of linguistics. Our efforts should also stimulate
students to consider studying linguistics and to educate a wider public on
what we do. Please consider participating.
Editor, Wikipedia Update Project
Linguistic Field(s): Not Applicable
LINGUIST List: Vol-18-1831
There are an increasing number of organisations which have indicated
that their output is Creative Commons by default; however, there are
not as many that have a public IP policy which clearly allows staff to
publish "their" work.
i.e. We have moved from the IP policy being the stick used to prevent
openness, and the "work for hire" and "publish process" are the next
A few staff at University of Canberra (UC) have written an IP policy
proposal which clearly gives staff ownership of their work, and
requires CC licensing if their staff use organisational infrastructure
to create their work.
Otago Polytechnic adopted an IP policy like that in 2007.
Are there other examples, within or outside academia, where the
organisation empowers its staff by providing a policy which clarifies
when the "work for hire" principle is enforced in this murky world of
Does the WMF have an intellectual property policy for works created by
Employees edit and upload using free licenses under their own name,
but does the copyright belong to the employee or to the WMF?
Is anyone in our community going to:
Global Congress on Intellectual Property and the Public Interest
Washington College of Law
American University, Washington, DC
August 25-27, 2011
> Message: 7
> Date: Wed, 12 Oct 2011 11:07:54 -0300
> From: Andrew Crawford <acrawford(a)laetabilis.com>
> Subject: Re: [Foundation-l] Image filtering without undermining the
> category system
> To: Wikimedia Foundation Mailing List
> Content-Type: text/plain; charset=ISO-8859-1
> In general I think this is the best and most practical proposal so far.
Thanks I appreciate that.
> Having filter users do the classifying is the only practical option. In my
> opinion, it is unfortunately still problematic.
> 1. It is quite complicated from the user's point of view. Not only do they
> have to register an account, but they have to find and understand these
> options. For the casual reader who just doesn't want to see any more
> penises, or pictures of Mohammed, that is quite a lot to ask. The effort it
> would take to implement a system like this might outweigh the benefit to the
> small number of readers who would actually go through this process.
Yes my wording of the options is not ideal, and I'm hoping we can make it
more user friendly. But the process isn't very complex. If we create
It need be no more complex than
I'm pretty sure we can make it simpler than buying some censorship software
with a credit card and then installing it on your PC.
> 2. It is obviously subject to gaming. How long would it take 4chan to figure
> out they can create new accounts, and start thumbs-upping newly-uploaded
> pictures of penises while mass thumbs-downing depictions of Mohammed?
Subject to gaming, well it's bound to be. But vulnerable to gaming,
hopefully not. Fans of penises are welcome to add their preferences. That's
why I didn't include the option "Hide all images except those that a fellow
filterer has whitelisted".
If some people find naked bodies wholesome but crucifixes troubling, and
others the reverse, then the filter will pick up on that as an easy
scenario, and once you've indicated that you are happy to see one or the
other it will start giving a high score to things that have been deemed
objectionable to people who've made similar choices to you, or things that
were deemed wholesome by people whose tastes run counter to yours.
Conversely it will give low scores to images cleared by people whose tastes
are highly similar to yours or to images objected to by people whose tastes
are the reverse of yours.
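What's described here is essentially user-user collaborative filtering over hide/show votes. A minimal sketch of the idea (the data and function names are mine, not part of the actual proposal):

```python
# votes: +1 = user flagged the image as objectionable, -1 = cleared it.
# All users and images here are invented for illustration.
votes = {
    "alice": {"crucifix.jpg": 1, "beach.jpg": -1},
    "bob":   {"crucifix.jpg": 1, "beach.jpg": -1, "spider.jpg": 1},
    "carol": {"crucifix.jpg": -1, "beach.jpg": 1},
}

def similarity(a, b):
    """Mean agreement on commonly voted images, in [-1, 1]."""
    common = votes[a].keys() & votes[b].keys()
    if not common:
        return 0.0
    return sum(votes[a][i] * votes[b][i] for i in common) / len(common)

def predicted_score(user, image):
    """Similarity-weighted vote of other users who rated the image.
    Positive = likely objectionable to `user`; negative = likely fine."""
    weighted = [similarity(user, other) * v[image]
                for other, v in votes.items()
                if other != user and image in v]
    return sum(weighted) / len(weighted) if weighted else 0.0
```

Users with similar histories raise an image's hide score for you; users whose tastes run counter to yours lower it, which is the behaviour described in the paragraph above.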
> 3. How can we prevent the use of this data for censorship purposes?
We prevent the use of this data for censorship by not releasing the
knowledge base, only showing logged in users the results that are relevant
to them, and not saying how we've come up with a score. If we only had a
small number of images and a limited set of reasons why people could object
to them then it would be simple to infer the data in our knowledge base,
but we have a large and complex system, and some aspects would be inherently
difficult to extract by automated means. An experienced human looking at an
image with a filter score would sometimes be able to guess what common
reasons had caused a filterer or filterers not to want to see it again, but
a computer would struggle and often anyone but the filterer who'd applied
that score would be baffled. If you had access to that individual's filter
list it might be obvious that they were blocking images that triggered their
vertigo, depicted people associated with a particular sports team or train
engines that lacked a boiler. But without the context of knowing which
filter lists an image was on it would be difficult to get meaningful
information out of the system.
> keep the reputation information of each image secret? I imagine many
> Wikipedians would want to access that data for legitimate editorial
Well of course any of the editors could themselves have the filter set on
and would know what the score was relative to their preferences. But
otherwise the information would be secret. I don't see how we could give
editors access to the reputation information without it leaking to censors,
or indeed divulging it generally. Remember the person with vertigo might not
want that publicly known, the pyromaniac who blocked images that might
trigger their pyromania would almost certainly not want their filter to be
public. As for "legitimate editorial reasons", I think it would be quite
contentious if anyone started making editorial decisions based on the filter
results, so best not to enable that, but I'll clarify that in the proposal.
Thanks for your feedback.
> Andrew (Thparkth)
> On Tue, Oct 11, 2011 at 5:55 PM, WereSpielChequers <
> werespielchequers(a)gmail.com> wrote:
> > OK in a spirit of compromise I have designed an Image filter which should
> > meet most of the needs that people have expressed and resolve most of the
> > objections that I'm aware of. Just as importantly it should actually
> > http://meta.wikimedia.org/wiki/User:WereSpielChequers/filter
> > WereSpielChequers
> > _______________________
> Thanks for that and for your comments on
On 10/31/2011 6:01 AM, foundation-l-request(a)lists.wikimedia.org wrote:
> On 31 October 2011 12:30, Oliver Keyes<scire.facias(a)gmail.com> wrote:
>> > Not sure about that specific change, but one illustration might be the
>> > Article Feedback Tool, which contains a "you know you can edit, right?"
>> > thing. Off the top of my head I think 17.4 percent of the 30-40,000 people
>> > who use it per day attempt to edit as a result of that inducement.
>> > Admittedly only 2 percent of them *succeed*, but it's not a lack of
>> > motivation, methinks.
> What's the definition of "succeed" there - they save an edit with a change?
> Is that 2% of the 17.4%, or 2% of those giving feedback?
> I wonder if there's a way to detect a failure to edit and ask what went wrong.
In a text-driven interface it is a little difficult to float an
interactive window asking if a reader saw any errors and if they'd like
to fix them; yet that's the level most readers are on.
We must also remember that the wiki edit interface and markup can be a
little intimidating to a newbie, so opening an edit window and making no
changes may be more common than we think. Are there any stats on this?
I’ve been into Wikipedia for several years, and all my friends know
this. I *still* find myself having to explain to them in small words
that that “edit” link really does include them fixing typos when they
So my suggestion: tiny tiny steps like this: things people can do that
have a strong probability of sticking.
Anyone else got ideas based on their (admittedly anecdotal) experience?
[inspired by Oliver Keyes' blog post: http://quominus.org/archives/524 ]
I am writing a book on the history of Wikipedia and the Wikimedia movement, focusing on its 'history of ideas'. Would any Wikipedians be prepared to be interviewed for this? Obviously long-standing Wikipedians would be a focus but I am interested in anyone who is involved in the movement because of passionately held convictions or 'ideology'.
A general question: is there a Wikipedian ideology? What is it? In particular, how does the current ideology, if there is one, compare with the ideology which inspired its founding fathers. And mothers - many of the founding editors of Wikipedia were women, I don't know how many people know that.
Since it hasn't really been mentioned, I just wanted to point out that this
image, never before available to the public in high resolution, was uploaded
to Commons as a result of our ongoing cooperative efforts with the US
National Archives (i.e., my residency). Its copyright status was listed as
unrestricted in the National Archives' online catalog, where the scaled-down
image has been displayed for several years without (apparently) any
incident. Of course, these copyright statuses can often use a second look,
and I am happy for it to get the extra scrutiny at Commons, especially one
as complex as this. I don't have any extra insight to offer copyright-wise,
and am interested to see the community's decision.
However, I would also like to take the opportunity to talk about the broader
effort here, which I think is more important than one image of Mickey Mouse
from a war poster, as symbolic as that is. Beginning in July, I began an
effort, in collaboration with NARA staff, to quite literally upload the
entire National Archives library of digital content in high resolution. The
National Archives—with billions of pages of records, tens of millions of
photographs, and hundreds of thousands more sound recordings, videos, and
artifacts—has hundreds of thousands of digital images in their catalog,
nearly all of which are in the public domain. The 60,000 uploaded so far
include thousands more posters like the Mickey one from the WWII and WWI
era; historically significant photography from Mathew Brady, Dorothea Lange,
Ansel Adams, and other notable photographers; photos of Native Americans, of
the Depression, of the national parks and the environment, of the Civil
Rights Movement, of presidents and their activities, and of every US war
from the Civil War to Vietnam, including incredible manufacturing and
Japanese internment scenes from the home front in WWII; ultra high-res TIFFs
(~150 MB) of the Declaration of Independence and other founding documents;
other textual documents, including historical maps, laws, court records,
census cards, and the letters of diverse personalities, from Susan B.
Anthony to Albert Einstein to Winston Churchill to Elvis Presley; and even
other oddities like an ancient Roman bust, a Remington statue, ancient
Chinese terracotta soldiers, a Diego Rivera painting, bullets and other
evidence from the JFK assassination, a First Lady's evening gown, and a
ceremonial Beninese wooden headdress(!).
This is a huge task, and it requires a community effort to help categorize
images, to use them in Wikipedia articles, to transcribe them on Wikisource,
and just generally show them some love. If finding Mickey Mouse in the
National Archives means anything, hopefully it's that this is a diverse and
significant, and sometimes surprising, collection that deserves more care
and attention—especially since many cultural institutions, domestically and
internationally, are following the project with interest. For more
information, check out the partnerships page on Commons <
and its sister WikiProjects on Wikipedia and Wikisource, linked in the tab
 See the upload feed at <
> Message: 1
> Date: Fri, 28 Oct 2011 15:31:07 -0700
> From: Brandon Harris <bharris(a)wikimedia.org>
> Subject: Re: [Foundation-l] On certain shallow, American-centered,
> foolish software initiatives backed by WMF
> To: foundation-l(a)lists.wikimedia.org
> Message-ID: <4EAB2D2B.3020803(a)wikimedia.org>
> Content-Type: text/plain; charset=ISO-8859-1; format=flowed
> On 10/28/11 3:27 PM, Etienne Beaule wrote:
> > It's disabled on certain wikis because of technical problems.
> Oh? I wasn't aware that it had been disabled anywhere as yet.
> WikiLove was not rolled out "en masse"; the policy for deployment of the
> tool is that it is by request only, and the requesting wiki must:
> a) Make sure the tool is localized (via TranslateWiki);
> b) Make sure they have a local configuration; and
> c) Show community consensus.
> So if it was enabled and then *disabled*, I have not heard of this. Is
> there a bug report I can look to? Or if you know of a wiki where this
> is the case, I can do a search.
> Brandon Harris, Senior Designer, Wikimedia Foundation
> Support Free Knowledge: http://wikimediafoundation.org/wiki/Donate
Good to hear that wikilove is only going in on wikis where there is
consensus for it. Can anyone give me a link to the discussion that
established consensus on EN wikipedia? The nearest I could find was