This paper (first reference) is the result of a class project I was part of
almost two years ago for CSCI 5417 Information Retrieval Systems. It builds
on a class project I did in CSCI 5832 Natural Language Processing, which
I presented at Wikimania '07. The project ran very late: we didn't send
the final paper in until the day before New Year's. This technical report was
never really announced that I recall, so I thought it would be interesting to
look briefly at the results. The goal of this paper was to break articles
down into surface features and latent features and then use those to study
the rating system being used, predict article quality and rank results in a
search engine. We used the [[random forests]] classifier, which allowed us to
analyze the contribution of each feature to performance by looking directly
at the weights that were assigned. While the surface analysis was performed
on the whole English Wikipedia, the latent analysis was performed on the
Simple English Wikipedia (it is more expensive to compute).

= Surface features =
* Readability measures are the single best predictor of quality
that I have found, as defined by the Wikipedia Editorial Team (WET). The
[[Automated Readability Index]], [[Gunning Fog Index]] and [[Flesch-Kincaid
Grade Level]] were the strongest predictors, followed by length of article
HTML, number of paragraphs, [[Flesch Reading Ease]], [[SMOG grading]], number
of internal links, the [[LIX]] readability formula, number of words
and number of references. Weakly predictive were number of "to be"s, number
of sentences, [[Coleman-Liau Index]], number of templates, PageRank, number
of external links and number of relative links. Not predictive (overall; see
the end of section 2 for the per-rating score breakdown): number of h2s or
h3s, number of conjunctions, number of images*, average word length, number
of h4s, number of prepositions, number of pronouns, number of interlanguage
links, average syllables per word, number of nominalizations, article age
(based on page id), proportion of questions and average sentence length.
:* Number of images was actually by far the single strongest predictor of any
Number of images was actually by far the single strongest predictor of any
class, but only for Featured articles. Because it was so good at picking out
featured articles and somewhat good at picking out A and G articles the
classifier was confused in so many cases that the overall contribution of
this feature to classification performance is zero.
:* Number of external links is strongly predictive of Featured articles.
:* The B class is highly distinctive. It has a strong "signature," with high
predictive value assigned to many features. The Featured class is also very
distinctive. F, B and S (Stop/Stub) contain the most information.
:* A is the least distinct class, not being very different from F or G.

= Latent features =
The algorithm used for latent analysis, which is an
analysis of the occurrence of words in every document with respect to the
link structure of the encyclopedia ("concepts"), is [[Latent Dirichlet
Allocation]]. This part of the analysis was done by CS PhD student Praful
Mangalath. An example of what can be done with the result of this analysis
is that you provide a word (a search query) such as "hippie". You can then
look at the weight of every article for the word hippie. You can pick the
article with the largest weight, and then look at its link network. You can
pick out the articles that this article links to and/or which link to this
article that are also weighted strongly for the word hippie, while also
contributing maximally to this article's "hippieness". We tried this query in
our system (LDA), Google (site:en.wikipedia.org hippie), and the Simple
English Wikipedia's Lucene search engine. The breakdown of articles occurring
in the top ten search results for this word for those engines is:
* LDA only: [[Acid rock]], [[Aldeburgh Festival]], [[Anne Murray]], [[Carl
Radle]], [[Harry Nilsson]], [[Jack Kerouac]], [[Phil Spector]], [[Plastic
Ono Band]], [[Rock and Roll]], [[Salvador Allende]], [[Smothers brothers]],
[[Stanley Kubrick]].
* Google only: [[Glam Rock]], [[South Park]].
* Simple only: [[African Americans]], [[Charles Manson]], [[Counterculture]],
[[Drug use]], [[Flower Power]], [[Nuclear weapons]], [[Phish]], [[Sexual
liberation]], [[Summer of Love]]
* LDA & Google & Simple: [[Hippie]], [[Human Be-in]], [[Students for a
democratic society]], [[Woodstock festival]]
* LDA & Google: [[Psychedelic Pop]]
* Google & Simple: [[Lysergic acid diethylamide]], [[Summer of Love]]
(See the paper for the articles produced for the keywords philosophy and
economics.)

= Discussion / Conclusion =
* How to interpret the results of the latent analysis is largely a matter of
perception. What is interesting, though, is that the LDA features predict the WET
ratings of quality just as well as the surface-level features. The two feature
sets (surface and latent) each pull out almost all of the information that
the rating system bears.
* The rating system devised by the WET is not very distinctive. You can best
tell the difference between Featured, A and Good articles grouped together
vs. B articles. Featured, A and Good articles are also quite distinctive
(Figure 1). Note that in this study we didn't look at Starts and Stubs, but
in an earlier paper we did.
:* This is interesting when compared to a recent entry on the YouTube blog,
"Five Stars Dominate Ratings".
I think a sane, well-researched (with actual subjects) rating system is
well within the purview of the Usability Initiative. Helping people find and
create good content is what Wikipedia is all about. Having a solid rating
system lets you reorganize the user interface, the Wikipedia
namespace, and the main namespace around good content and bad content as
needed. If you don't have a solid, information-bearing rating system you
don't know what good content really is (really bad content is easy to spot).
:* My Wikimania talk was all about gathering data from people about articles
and using that to train machines to automatically pick out good content. You
ask people questions along dimensions that make sense to people, and give
the machine access to other surface features (such as a statistical measure
of readability, or length) and latent features (such as can be derived from
document word occurrence and encyclopedia link structure). I referenced page
262 of Zen and the Art of Motorcycle Maintenance to give an example of the
kind of qualitative features I would ask people about. It really depends on
what features end up bearing information, to be tested in "the lab". Each
word is an example dimension of quality: we have "*unity, vividness,
authority, economy, sensitivity, clarity, emphasis, flow, suspense,
brilliance, precision, proportion, depth and so on.*" You then use surface
and latent features to predict these values for all articles. You can also
say that when a person rates an article as high on the x scale, they also
mean that it has this much of these surface and these latent features.
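For readers who want to experiment, the strongest surface features above are cheap to compute. Here is a minimal sketch of the standard formulas; the syllable counter is a rough vowel-group heuristic, not necessarily what the paper used:

```python
import re

def count_syllables(word):
    """Rough heuristic: count vowel groups, with a crude silent-e adjustment.

    Real implementations use pronunciation dictionaries; this is only an
    approximation for illustration."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def readability(text):
    """Compute ARI, Flesch-Kincaid Grade Level and Flesch Reading Ease."""
    words = re.findall(r"[A-Za-z']+", text)
    sentences = max(len(re.findall(r"[.!?]+", text)), 1)
    chars = sum(len(w) for w in words)
    syllables = sum(count_syllables(w) for w in words)
    w, s = len(words), sentences
    return {
        # Automated Readability Index
        "ari": 4.71 * (chars / w) + 0.5 * (w / s) - 21.43,
        # Flesch-Kincaid Grade Level
        "fk_grade": 0.39 * (w / s) + 11.8 * (syllables / w) - 15.59,
        # Flesch Reading Ease
        "flesch_ease": 206.835 - 1.015 * (w / s) - 84.6 * (syllables / w),
    }
```

Each of these is a linear combination of words-per-sentence, syllables-per-word or characters-per-word ratios, which is why they are so cheap to run over a whole wiki.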
= References =
- DeHoust, C., Mangalath, P., Mingus, B. (2008). *Improving search in
Wikipedia through quality and concept discovery*. Technical Report.
- Rassbach, L., Mingus, B., Blackford, T. (2007). *Exploring the
feasibility of automatically rating online article quality*. Technical Report.

I have asked and received permission to forward to you all this most
excellent bit of news.
The LINGUIST List is a most excellent resource for people interested in the
field of linguistics. As I mentioned some time ago, they have had a funding
drive, and in that funding drive they asked for a certain amount of money in
a given number of days; they would then run a project on Wikipedia to
learn what needs doing to get better coverage for the field of linguistics.
What you will read in this mail is that the whole community of linguists is
asked to cooperate. I am really thrilled, as it will also get more
linguists interested in what we do. My hope is that a fraction will be
interested in the languages that they care for and help them become more
relevant. As a member of the "language prevention committee", I would love
to get more knowledgeable people involved in our smaller projects. If it
means that we get more requests for more projects, we will really feel
embarrassed with all the new projects we will have to approve because of
the quality of the Incubator content and the quality of the linguistic
arguments why we should approve yet another language :)
NB: Is this not a really clever way of raising money? Give us this much in
this time frame and we will then do this as a bonus...
---------- Forwarded message ----------
From: LINGUIST Network <linguist(a)linguistlist.org>
Date: Jun 18, 2007 6:53 PM
Subject: 18.1831, All: Call for Participation: Wikipedia Volunteers
LINGUIST List: Vol-18-1831. Mon Jun 18 2007. ISSN: 1068 - 4875.
Subject: 18.1831, All: Call for Participation: Wikipedia Volunteers
Moderators: Anthony Aristar, Eastern Michigan U <aristar(a)linguistlist.org>
Helen Aristar-Dry, Eastern Michigan U <hdry(a)linguistlist.org>
Reviews: Laura Welcher, Rosetta Project
The LINGUIST List is funded by Eastern Michigan University,
and donations from subscribers and publishers.
Editor for this issue: Ann Sawyer <sawyer(a)linguistlist.org>
To post to LINGUIST, use our convenient web form at
From: Hannah Morales < hannah(a)linguistlist.org >
Subject: Wikipedia Volunteers
-------------------------Message 1 ----------------------------------
Date: Mon, 18 Jun 2007 12:49:35
From: Hannah Morales < hannah(a)linguistlist.org >
Subject: Wikipedia Volunteers
As you may recall, one of our Fund Drive 2007 campaigns was called the
"Wikipedia Update Vote." We asked our viewers to consider earmarking their
donations to organize an update project on linguistics entries in the
English-language Wikipedia. You can find more background information on this
The speed with which we met our goal, thanks to the interest and generosity
of our readers, was a sure sign that the linguistics community was
enthusiastic about the idea. Now that summer is upon us, and some of you may
have a bit of leisure time, we are hoping that you will be able to help us
get started on the Wikipedia project. The LINGUIST List's role in this
project is a purely
organizational one. We will:
*Help, with your input, to identify major gaps in the Wikipedia materials or
pages that need improvement;
*Compile a list of linguistics pages that Wikipedia editors have identified
as "in need of attention from an expert on the subject" or as "not citing
any references or sources," etc.;
*Send out periodical calls for volunteer contributors on specific topics or
*Provide simple instructions on how to upload your entries into Wikipedia;
*Keep track of our project Wikipedians;
*Keep track of revisions and new entries;
*Work with Wikimedia Foundation to publicize the linguistics community's
We hope you are as enthusiastic about this effort as we are. Just to help us
get started looking at Wikipedia more critically, and to easily identify
areas needing improvement, we suggest that you take a look at the List of
linguists. Many people are not listed there; others need to have more facts
added. If you would like to participate in this exciting update effort,
please respond by sending an email to LINGUIST Editor Hannah Morales at
hannah(a)linguistlist.org, suggesting what your role might be or which
entries you feel should be updated or added. Some linguists who saw our
call on the Internet have already written us with specific suggestions,
which we will share with you soon.
This update project will take major time and effort on all our parts. The
result will be a much richer internet resource of information on the breadth
and depth of the field of linguistics. Our efforts should also stimulate
students to consider studying linguistics and educate a wider public on what
we do. Please consider participating.
Editor, Wikipedia Update Project
Linguistic Field(s): Not Applicable
LINGUIST List: Vol-18-1831
I was asked by a volunteer for help getting stats on the gender gap in
content on a certain Wikipedia, and came up with simple Wikidata Query
Service queries that pulled the total number of articles on a given
Wikipedia about men and about women, to calculate *the proportion of
articles about women out of all articles about humans*.
Then I was curious about how that wiki compared to other wikis, so I ran
the queries on a bunch of languages and gathered the results into a table
(please see the *caveat* there).
I don't have time to fully write up everything I find interesting in those
results, but I will quickly point out the following:
1. The Nepali statistic is simply astonishing! There must be a story
there. I'm keen on learning more about this, if anyone can shed light.
2. Evidently, ~13%-17% seems like a robust average of the proportion of
articles about women among all biographies.
3. Among the top 10 largest wikis, Japanese is the least imbalanced. Good
job, Japanese Wikipedians! I wonder if you have a good sense of what
drives this relatively better balance (my instinctive guess is pop culture).
4. Among the top 10 largest wikis, Russian is the most imbalanced.
5. I intend to re-generate these stats every two months or so, to
eventually have some sense of trends and changes.
6. Your efforts, particularly on small-to-medium wikis, can really make a
dent in these numbers! For example, it seems I am personally
responsible for almost 1% of the coverage of women on Hebrew Wikipedia!
7. I encourage you to share these numbers with your communities. Perhaps
you'd like to overtake the wiki just above yours? :)
8. I'm happy to add additional languages to the table, by request. Or you
can do it yourself, too. :)
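For anyone who wants to reproduce these numbers, here is a sketch of the kind of Wikidata Query Service query involved, not necessarily the exact one I used. It assumes the standard Wikidata identifiers P31 (instance of), Q5 (human), P21 (sex or gender) and Q6581072 (female):

```python
# Build a WDQS (SPARQL) query counting sitelinked articles about people of a
# given gender on one Wikipedia. Identifiers assumed: P31 = instance of,
# Q5 = human, P21 = sex or gender, Q6581072 = female.
QUERY_TEMPLATE = """
SELECT (COUNT(?article) AS ?count) WHERE {{
  ?person wdt:P31 wd:Q5 ;         # instance of: human
          wdt:P21 wd:{gender} .   # sex or gender
  ?article schema:about ?person ;
           schema:isPartOf <https://{lang}.wikipedia.org/> .
}}
"""

def women_count_query(lang="en"):
    """Query string counting articles about women on a given Wikipedia."""
    return QUERY_TEMPLATE.format(gender="Q6581072", lang=lang)

def proportion(women, men):
    """Proportion of articles about women among all articles about humans."""
    return women / (women + men)
```

To actually run the query you would POST it to the WDQS endpoint at https://query.wikidata.org/sparql; note that counting all humans on a large wiki can approach the query timeout, which is presumably part of the caveat mentioned above.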
 Yay #100wikidays :) https://meta.wikimedia.org/wiki/100wikidays
Wikimedia Foundation <http://www.wikimediafoundation.org>
Imagine a world in which every single human being can freely share in the
sum of all knowledge. Help us make it a reality!
It is being put together by Eliezer Yudkowsky of LessWrong. Content is
CC BY-SA 3.0; I don't know about the software.
Rather than the "encyclopedia" approach, it tries to be more
pedagogical, teaching the reader at their level.
Analysis from a sometime Yudkowsky critic on Tumblr:
(there's a pile more comments linked from the notes on that post,
mostly from quasi-fans; I have an acerbic comment in there, but you
should look at the site yourself first.)
No idea if this will go anywhere, but might be of interest; new
approaches generally are. They started in December, first publicised
it a week ago and have been scaling up. First day it collapsed due to
load from a Facebook post announcement ... so maybe hold off before
announcing it everywhere :-)
(this is an announcement in my capacity as a volunteer.)
Inspired by a lightning talk at the recent CEE Meeting by our colleague
Lars Aronsson, I made a little command-line tool to automate batch
recording of pronunciations of words by native speakers, for uploading to
Commons and integration into Wiktionary etc. It is called *pronuncify*, is
written in Ruby and uses the sox(1) tool, and should work on any modern
Linux (and possibly OS X) machine. It is available here, with instructions.
I was then asked about a Windows version, and agreed to attempt one. This
version is called *pronuncify.net <http://pronuncify.net>*, and is a .NET
GUI version of the same tool, with slightly different functions. It
is available here, with instructions.
Both tools require word-list files in plaintext, with one word (or phrase)
per line. Both tools name the files according to the standard established
in [[commons:Category:Pronunciation]], and convert them to Ogg Vorbis for
you, so they are ready to upload.
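The naming step can be sketched roughly as follows; this assumes the "Xx-word.ogg" pattern seen in [[commons:Category:Pronunciation]], so check the category page for the exact convention used for your language:

```python
def commons_filename(word, lang="en"):
    """Build a Commons-style pronunciation filename, e.g. 'En-hippie.ogg'.

    Assumes the 'Xx-word.ogg' pattern from [[commons:Category:Pronunciation]];
    verify the exact convention for your language before uploading."""
    safe = word.strip().replace(" ", "_")
    return f"{lang.capitalize()}-{safe}.ogg"

def filenames_from_wordlist(text, lang="en"):
    """One word or phrase per line, as both tools expect; blanks are skipped."""
    return [commons_filename(line, lang)
            for line in text.splitlines() if line.strip()]
```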
In the future, I may add OAuth-based direct uploading to Commons. If you
run into difficulties, please file issues on GitHub, for the appropriate
tool. Feedback is welcome.
Hi, how about a Wikipedia about objects?
Instead of generic articles like, for example, "Ballpoint pen" or "Bic
Cristal", it would have "Ballpoint pen Bic Cristal 2014".
Doing this for millions of objects would allow people to have an open,
free, universal and central place to refer to specific objects.
*Some possible applications:*
- Creating neutral and standard lists:
Nowadays if anyone creates, for example, a tutorial for building
something (DIY projects, recipes, ...) they have to link all items to a
commercial or non-neutral web page which could change its URL in the future
or redirect to ads or whatever.
Lists could be created on external webpages linking to Wikimedia object
pages, and/or could be created as category pages in Wikipedia. For
example, currently the https://en.wikipedia.org/wiki/Car_of_the_Year
article lists cars which won the COTY award, but it links not to the
specific car (Audi A3 Hatchback 2012 - Present) but to the generic series
(Audi A3).
The good thing at this point is that to start creating object
lists only the item name is necessary; no infoboxes or descriptions needed.
- Universal repository for inventories:
Lots of businesses fill their inventories again and again with the same data
("cardboard box 50x30x15", "step by step NEMA motor 17", ...). They should
be able to import this data from an open website with the corresponding
info like GTIN, SKU, barcode... and, in the future, weight, size, etc.
- Encourage recycling and reuse:
Imagine if we use Wikidata properties like
"has part" and "part of"; people will find other uses for objects, or
discover where to find them.
- Social activism and Corporate Social Responsibility (CSR):
Companies have info and metrics about their customers (habits, location,
...). Why shouldn't customers have info about companies' products? Who
manufactures what? What products have a good carbon footprint
<https://en.wikipedia.org/wiki/Carbon_footprint>? What products have
been recalled for some problem? Which are Fair Trade? This could also move
companies to do better.
*Very rough roadmap*:
1. At the very beginning, using Wikidata infrastructure, objects would
only have common info like "name", "image", "related links".
First use cases could be making lists or grouping objects by categories.
2. Step by step, new fields could be added like "manufacturer", "tags", etc.
3. A separate website could be created. wikiobject.org isn't available,
so the URL could be something like objects.wikipedia.org.
4. In the long term, in order to exploit all the possibilities of this
project, more complex fields and relations would have to be managed, like
for example "fridges with energy class A+++ and width less than 80 cm",
which would be easy if all objects were similar, but nothing is further
from the truth.
A friend of mine and I tried to build a demo version on a home-made
Apache Cassandra cluster four years ago, but we didn't have enough resources
and knowledge for that.
In my humble opinion, the problem with Wikipedia funding is that most of
its users don't see culture as a need (sorry for that; I am a sporadic
donor). In the Wikiobjects case I think it could be different.
If part of companies' business relies on this project, companies will be
very inclined to donate to improve performance, usability, etc., maybe
similar to what happens with Linux.
Where would this need come from? Data needed for some software to run,
product visibility, customer requests, etc. No advertisement needed; it
could become a need and a standard.
I truly believe that the world needs something like this, and the correct
people to do it, to guarantee openness and independence, are you.
Thanks for your time and attention,
Dear members of the Wikimedia community,
As you know, the board passed a resolution allowing for the creation of a
standing Elections Committee in November of last year. Per the
implementing resolution, the Board Governance Committee (BGC) has appointed
the initial members from the recommendation of the Executive Director and
her staff. We will be starting with 6 committee members:
They will be joined by two official advisors from the Wikimedia Foundation:
James Alexander (Manager, Trust & Safety) from Community Engagement
Stephen LaPorte (Senior Legal Counsel) from the WMF Legal team
They will also be working closely with the BGC as a whole and especially
Nataliia and me. Because I may consider applying as a candidate in the
upcoming community-selection process, I will be recusing myself from any
discussions involving that election.
The new committee, along with the BGC, will, of course, be able to choose
how many members and advisors they truly need and how to recruit the best
candidates. One of the first orders of business for the committee will be
to decide on a process for expanding its membership through some form of
open call. While there is an enormous amount of work for the committee to
do, it can be expected that they will begin looking at:
The selection of a committee Chair
The dates and process for the upcoming community selection process (and
considering shortening the terms and having community elections in early
2017, so that the elected members would join the Board at the April meeting)
The method of voting for that process, both for the upcoming selection
and for the future
The composition of the board and how to ensure a steady supply of good
candidates (in particular, making sure that the candidates have the
skills and expertise matching the Board skill matrix while making sure that
the process is still owned by the community)
Just as the BGC is committed to greater transparency (see for example our
recent minutes), the committee will likely consult with the wider
Wikimedia community in developing and revising election procedures within
the scope of this charter to the greatest extent possible.
This day has been a long time coming and is the result of requests made by
multiple different temporary election committees over the years. I'm glad
to finally see it come to fruition and hope that it will allow our
selection process to continue to expand and improve well beyond the
record-breaking election we had in 2015.
Before I sign off I also wanted to call out the amazing work of the 2015
temporary Election Committee. They were put together in 2015 to do one
thing: run an election. They did that well (with almost 3x the
participation of the next largest year) but then they went well beyond the
call of duty in serving as an advisory body to the board, offering
invaluable feedback on how to fill the empty community selected seats we
saw this year. They did not have to do this, but they did it anyway, and I
hope that everyone can acknowledge the stress and courage that took.
Please join me in thanking the 2015 committee and welcoming the new one.
Dariusz Jemielniak ("pundit", current Board member)
-  https://wikimediafoundation.org/wiki/Resolution:Elections_Committee
-  See also my disclosure to the BGC on that matter:
Forwarding to the Wikimedia mailing list, I'm sorry for the lateness!
Product Manager, Discovery
---------- Forwarded message ----------
From: Trey Jones <tjones(a)wikimedia.org>
Date: Mon, Jul 25, 2016 at 11:58 AM
Subject: Re: [discovery] Fwd: [Wikimedia-l] Improving search (sort of)
To: A public mailing list about Wikimedia Search and Discovery projects <
Cc: James Heilman <jmh649(a)gmail.com>
I decided to look into this as my 10% project last week. It ended up being
a 15% project, but I wanted to finish it up.
I carefully reviewed and categorized the top 100 "unsuccessful" (i.e.,
zero-results) queries from May 2016, and skimmed the top 1,000 from May,
and skimmed and compared the top 100 / 1,000 for June.
The top result (with several variants in the top 100) is a porn site that
has had a wiki page created and deleted several times. Various websites
round out the top 10. Internet personalities and websites dominate the top
100 and several have had pages created and deleted over the years. There's
strong evidence of links being used for some queries—though I didn't try to
track them down. There's plenty of personally identifiable information in
the top 1000 most frequent queries. More than 10% of the queries (by
volume) get good results from the completion suggester or "did you mean"
spelling suggestions, and more than 10% have some results approximately two
months later (i.e., late last week).
Obvious refinements to the search strategy would eliminate so many
high-frequency queries that any useful mining would be down to slogging
through the low-impact long tail.
I don’t think there’s a lot here worth extracting, though others may
disagree. The privacy concerns expressed earlier are genuine, and simple
attempts to filter PII (using patterns, minimum IP counts, etc) are not
guaranteed to be effective.
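The kind of cleanup discussed here (deduplicating repeated queries per IP and dropping queries seen from too few distinct sources) can be sketched roughly as below; the thresholds and the (query, ip) log shape are illustrative, not the actual Discovery pipeline:

```python
from collections import defaultdict

def top_zero_result_queries(log, min_ips=3, top_n=100):
    """Rank zero-result queries by number of distinct IPs.

    Counts each (query, ip) pair once (crude IP deduplication) and drops
    queries seen from fewer than min_ips distinct IPs, as a rough guard
    against a single bot or user inflating a query's frequency. `log` is
    an iterable of (query, ip) tuples; thresholds are illustrative only."""
    ips_per_query = defaultdict(set)
    for query, ip in log:
        ips_per_query[query.strip().lower()].add(ip)
    counts = {q: len(ips) for q, ips in ips_per_query.items()
              if len(ips) >= min_ips}
    return sorted(counts.items(), key=lambda kv: -kv[1])[:top_n]
```

Note that this does nothing about PII in the query strings themselves; pattern-based filtering would still be needed on top, and as noted above it is not guaranteed to be effective.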
For lots more details (but no actual queries), see here:
Software Engineer, Discovery
On Fri, Jul 15, 2016 at 11:31 AM, Trey Jones <tjones(a)wikimedia.org> wrote:
> Finally, if this is important enough and the task gets prioritized, I'd be
> willing to dive back in and go through the process once and pull out the
> top zero-results queries, this time with basic bot exclusion and IP
> deduplication—which we didn't do early on because we didn't realize what a
> mess the data was. We could process a week or a month of data and
> categorize the top 100 to 500 results in terms of personal info, junk,
> porn, and whatever other categories we want or that bubble up from the
> data, and perhaps publish the non-personal-info part of the list as an
> example, either to persuade ourselves that this is worth pursuing, or as a
> clearer counter to future calls to do so.
> ---------- Forwarded message ----------
>> From: "James Heilman" <jmh649(a)gmail.com>
>> Date: Jul 15, 2016 06:33
>> Subject: [Wikimedia-l] Improving search (sort of)
>> To: "Wikimedia Mailing List" <wikimedia-l(a)lists.wikimedia.org>
>> A while ago I requested a list of the "most frequently searched for terms
>> for which no Wikipedia articles are returned". This would allow the
>> community to then create redirects or new pages as appropriate and help
>> address the "zero results rate" of about 30%.
>> While we are still waiting for this data I have recently come across a list
>> of the most frequently clicked-on redlinks on En WP produced by Andrew West:
>> https://en.wikipedia.org/wiki/User:West.andrew.g/Popular_redlinks Many of
>> these can be reasonably addressed with a redirect as the issue is often
>> Does anyone know where things are at with respect to producing the list of
>> most searched-for terms that return nothing?
>> James Heilman
>> MD, CCFP-EM, Wikipedian
discovery mailing list