This paper (first reference) is the result of a class project I was part of
almost two years ago for CSCI 5417 Information Retrieval Systems. It builds
on a class project I did in CSCI 5832 Natural Language Processing, which
I presented at Wikimania '07. The project ran very late; we didn't send
in the final paper until the day before New Year's. This technical report was
never really announced, as far as I recall, so I thought it would be
interesting to look briefly at the results.

The goal of the paper was to break articles down into surface features and
latent features, and then to use those features to study the rating system in
use, to predict article quality, and to rank results in a search engine. We
used the [[random forests]] classifier, which allowed us to analyze the
contribution of each feature to performance by looking directly at the weights
that were assigned. While the surface analysis was performed on the whole
English Wikipedia, the latent analysis was performed on the Simple English
Wikipedia (it is more expensive to compute).
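As a concrete illustration of that feature-weight analysis, here is a minimal
sketch of how per-feature contributions can be read off a trained random
forest. It uses scikit-learn as a stand-in for whatever implementation we used
at the time, and the feature names and data are hypothetical placeholders, not
our real dataset:

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    # Hypothetical surface-feature matrix: one row per article, one column
    # per feature; y holds WET-style rating labels (placeholders here).
    feature_names = ["ari", "fog", "fk_grade", "html_length", "num_images"]
    X = np.random.rand(200, len(feature_names))      # placeholder data
    y = np.random.choice(["FA", "A", "GA", "B"], size=200)

    forest = RandomForestClassifier(n_estimators=500, random_state=0)
    forest.fit(X, y)

    # Impurity-based importances play the role of the per-feature weights
    # discussed above: an estimate of each feature's overall contribution.
    for name, weight in sorted(zip(feature_names, forest.feature_importances_),
                               key=lambda pair: -pair[1]):
        print(f"{name}: {weight:.3f}")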
= Surface features =

* Readability measures are the single best predictor of quality, as defined by the Wikipedia Editorial Team (WET), that I have found. The [[Automated Readability Index]], [[Gunning Fog Index]] and [[Flesch-Kincaid Grade Level]] were the strongest predictors, followed by length of article HTML, number of paragraphs, [[Flesch Reading Ease]], [[SMOG Grading]], number of internal links, [[Laesbarhedsindex Readability Formula]], number of words, and number of references. Weakly predictive were: number of "to be" verbs, number of sentences, [[Coleman-Liau Index]], number of templates, PageRank, number of external links, and number of relative links. Not predictive (overall - see the end of section 2 for the per-rating score breakdown): number of h2s or h3s, number of conjunctions, number of images*, average word length, number of h4s, number of prepositions, number of pronouns, number of interlanguage links, average syllables per word, number of nominalizations, article age (based on page id), proportion of questions, and average sentence length.
:* Number of images was actually by far the single strongest predictor of any class, but only for Featured articles. Because it was so good at picking out Featured articles, and somewhat good at picking out A and G articles, the classifier was confused in so many cases that the overall contribution of this feature to classification performance is zero.
:* Number of external links is strongly predictive of Featured articles.
:* The B class is highly distinctive. It has a strong "signature," with high predictive value assigned to many features. The Featured class is also very distinctive. F, B and S (Start/Stub) contain the most information.
:* A is the least distinct class, not being very different from F or G.
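Several of the readability measures above are closed-form formulas over simple
counts, so they are cheap to compute at scale. Here is a minimal sketch of two
of them; the tokenization and syllable counter are crude heuristics of my own,
not the exact code we used:

    import re

    def counts(text):
        words = re.findall(r"[A-Za-z']+", text)
        n_words = max(1, len(words))
        n_sents = max(1, len(re.findall(r"[.!?]+", text)))
        n_chars = sum(len(w) for w in words)
        # crude syllable heuristic: vowel groups, at least one per word
        n_syls = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                     for w in words)
        return n_words, n_sents, n_chars, n_syls

    def automated_readability_index(text):
        n_words, n_sents, n_chars, _ = counts(text)
        return 4.71 * n_chars / n_words + 0.5 * n_words / n_sents - 21.43

    def flesch_kincaid_grade(text):
        n_words, n_sents, _, n_syls = counts(text)
        return 0.39 * n_words / n_sents + 11.8 * n_syls / n_words - 15.59

    sample = "The quick brown fox jumps over the lazy dog."
    print(automated_readability_index(sample), flesch_kincaid_grade(sample))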
= Latent features =

The algorithm used for latent analysis, which is an analysis of the occurrence of words in every document with respect to the link structure of the encyclopedia ("concepts"), is [[Latent Dirichlet Allocation]]. This part of the analysis was done by CS PhD student Praful Mangalath. As an example of what can be done with the result of this analysis, you provide a word (a search query) such as "hippie". You can then look at the weight of every article for the word hippie, pick the article with the largest weight, and look at its link network. You can pick out the articles that this article links to, and/or which link to this article, that are also weighted strongly for the word hippie, while also contributing maximally to this article's "hippieness". (A sketch of this kind of per-article scoring appears after the result list below.) We tried this query in our system (LDA), Google (site:en.wikipedia.org hippie), and the Simple English Wikipedia's Lucene search engine. The breakdown of articles occurring in the top ten search results for this word for those engines is:

* LDA only: [[Acid rock]], [[Aldeburgh Festival]], [[Anne Murray]], [[Carl Radle]], [[Harry Nilsson]], [[Jack Kerouac]], [[Phil Spector]], [[Plastic Ono Band]], [[Rock and Roll]], [[Salvador Allende]], [[Smothers Brothers]], [[Stanley Kubrick]]
* Google only: [[Glam Rock]], [[South Park]]
* Simple only: [[African Americans]], [[Charles Manson]], [[Counterculture]], [[Drug use]], [[Flower Power]], [[Nuclear weapons]], [[Phish]], [[Sexual liberation]], [[Summer of Love]]
* LDA & Google & Simple: [[Hippie]], [[Human Be-in]], [[Students for a Democratic Society]], [[Woodstock festival]]
* LDA & Google: [[Psychedelic Pop]]
* Google & Simple: [[Lysergic acid diethylamide]], [[Summer of Love]]

(See the paper for the articles produced for the keywords philosophy and economics.)
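For concreteness, here is the per-article scoring sketch promised above: score
every article for a query word as p(word | article) under a trained LDA model.
It uses gensim as a stand-in (our system, and its use of the link structure,
differed), and the three-article corpus is a hypothetical toy:

    from gensim.corpora import Dictionary
    from gensim.models import LdaModel

    # Hypothetical toy corpus: article title -> tokenized text.
    articles = {
        "Hippie": "hippie counterculture peace festival music".split(),
        "Acid rock": "rock music psychedelic hippie guitar".split(),
        "South Park": "animated comedy television show satire".split(),
    }
    dictionary = Dictionary(articles.values())
    bows = {t: dictionary.doc2bow(toks) for t, toks in articles.items()}
    lda = LdaModel(list(bows.values()), id2word=dictionary,
                   num_topics=2, passes=50, random_state=0)

    def word_weight(word, bow):
        # p(word | article) = sum_t p(word | topic t) * p(topic t | article)
        word_id = dictionary.token2id[word]
        topic_word = lda.get_topics()                  # num_topics x vocab
        doc_topics = lda.get_document_topics(bow, minimum_probability=0.0)
        return sum(p * topic_word[t, word_id] for t, p in doc_topics)

    # Rank articles by their weight for the query word "hippie".
    for title, bow in sorted(bows.items(),
                             key=lambda kv: -word_weight("hippie", kv[1])):
        print(title, round(float(word_weight("hippie", bow)), 4))

Restricting such a ranking to the top article's link neighbourhood gives the
"contributes to this article's hippieness" view described above.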
= Discussion / Conclusion =

* The results of the latent analysis are open to interpretation. What is interesting, though, is that the LDA features predict the WET ratings of quality just as well as the surface-level features do. Both feature sets (surface and latent) pull out almost all of the information that the rating system bears.
* The rating system devised by the WET is not distinctive. You can best tell the difference between Featured, A and Good articles (grouped together) vs B articles. Featured, A and Good articles are also quite distinctive as a group (Figure 1). Note that in this study we didn't look at Starts and Stubs, but in an earlier paper we did.
:* This is interesting when compared to a recent entry on the YouTube blog, "Five Stars Dominate Ratings":
http://youtube-global.blogspot.com/2009/09/five-stars-dominate-ratings.html…
* I think a sane, well-researched (with actual subjects) rating system is well within the purview of the Usability Initiative. Helping people find and create good content is what Wikipedia is all about. Having a solid rating system allows you to reorganize the user interface, the Wikipedia namespace, and the main namespace around good content and bad content as needed. If you don't have a solid, information-bearing rating system, you don't know what good content really is (really bad content is easy to spot).
:* My Wikimania talk was all about gathering data from people about articles and using that data to train machines to automatically pick out good content. You ask people questions along dimensions that make sense to people, and give the machine access to other surface features (such as a statistical measure of readability, or length) and latent features (such as can be derived from document word occurrence and encyclopedia link structure). I referenced page 262 of Zen and the Art of Motorcycle Maintenance to give an example of the kind of qualitative features I would ask people about. It really depends on which features end up bearing information, to be tested in "the lab". Each word is an example dimension of quality: we have "*unity, vividness, authority, economy, sensitivity, clarity, emphasis, flow, suspense, brilliance, precision, proportion, depth and so on.*" You then use surface and latent features to predict these values for all articles. You can also say that when a person rates an article as high on the x scale, they also mean that it has this much of these surface and these latent features.
= References =

- DeHoust, C., Mangalath, P., Mingus, B. (2008). *Improving search in Wikipedia through quality and concept discovery*. Technical Report. PDF<http://grey.colorado.edu/mediawiki/sites/mingus/images/6/68/DeHoustMangalat…>
- Rassbach, L., Mingus, B., Blackford, T. (2007). *Exploring the feasibility of automatically rating online article quality*. Technical Report. PDF<http://grey.colorado.edu/mediawiki/sites/mingus/images/d/d3/RassbachPincock…>
Hoi,
I have asked for and received permission to forward to you all this most
excellent bit of news.
The LINGUIST List is a most excellent resource for people interested in the
field of linguistics. As I mentioned some time ago, they held a funding drive
in which they asked for a certain amount of money within a given number of
days, promising that they would then run a project on Wikipedia to learn what
needs doing to get better coverage for the field of linguistics. What you
will read in this mail is that the whole community of linguists is being
asked to cooperate. I am really thrilled, as it will also get us more
linguists interested in what we do. My hope is that a fraction of them will
take an interest in the languages they care for and help make them more
relevant. As a member of the "language prevention committee", I would love to
get more knowledgeable people involved in our smaller projects. If it means
that we get more requests for new projects, we will gladly feel embarrassed
by all the new projects we will have to approve because of the quality of the
Incubator content and the quality of the linguistic arguments for why we
should approve yet another language :)
NB Is this not a really clever way of raising money: give us this much in
this time frame and we will then do this as a bonus...
Thanks,
GerardM
---------- Forwarded message ----------
From: LINGUIST Network <linguist(a)linguistlist.org>
Date: Jun 18, 2007 6:53 PM
Subject: 18.1831, All: Call for Participation: Wikipedia Volunteers
To: LINGUIST(a)listserv.linguistlist.org
LINGUIST List: Vol-18-1831. Mon Jun 18 2007. ISSN: 1068 - 4875.
Date: Mon, 18 Jun 2007 12:49:35
From: Hannah Morales < hannah(a)linguistlist.org >
Subject: Wikipedia Volunteers
Dear subscribers,
As you may recall, one of our Fund Drive 2007 campaigns was called the
"Wikipedia Update Vote." We asked our viewers to consider earmarking their
donations to organize an update project on linguistics entries in the
English-language Wikipedia. You can find more background information on this
at:
http://linguistlist.org/donation/fund-drive2007/wikipedia/index.cfm.
The speed with which we met our goal, thanks to the interest and generosity
of our readers, was a sure sign that the linguistics community was
enthusiastic about the idea. Now that summer is upon us, and some of you may
have a bit more leisure time, we are hoping that you will be able to help us
get started on the Wikipedia project. The LINGUIST List's role in this
project is a purely organizational one. We will:
*Help, with your input, to identify major gaps in the Wikipedia materials or
pages that need improvement;
*Compile a list of linguistics pages that Wikipedia editors have identified
as "in need of attention from an expert on the subject" or "does not cite any
references or sources," etc.;
*Send out periodic calls for volunteer contributors on specific topics or
articles;
*Provide simple instructions on how to upload your entries into Wikipedia;
*Keep track of our project Wikipedians;
*Keep track of revisions and new entries;
*Work with the Wikimedia Foundation to publicize the linguistics community's
efforts.
We hope you are as enthusiastic about this effort as we are. Just to help us
all get started looking at Wikipedia more critically, and to easily identify
an area needing improvement, we suggest that you take a look at the List of
Linguists page at:
http://en.wikipedia.org/wiki/List_of_linguists.
Many people are not listed there; others need to have more facts and
information added. If you would like to participate in this exciting update
effort, please respond by sending an email to LINGUIST Editor Hannah Morales
at hannah(a)linguistlist.org, suggesting what your role might be or which
linguistics entries you feel should be updated or added. Some linguists who
saw our campaign on the Internet have already written us with specific
suggestions, which we will share with you soon.
This update project will take major time and effort on all our parts. The end
result will be a much richer internet resource of information on the breadth
and depth of the field of linguistics. Our efforts should also stimulate
prospective students to consider studying linguistics and to educate a wider
public on what we do. Please consider participating.
Sincerely,
Hannah Morales
Editor, Wikipedia Update Project
Hello, everyone.
(This is an announcement in my capacity as a volunteer.)
Inspired by a lightning talk at the recent CEE Meeting[1] by our colleague
Lars Aronsson, I made a little command-line tool to automate batch recording
of pronunciations of words by native speakers, for uploading to Commons and
integration into Wiktionary etc. It is called *pronuncify*, is written in
Ruby and uses the sox(1) tool, and should work on any modern Linux (and
possibly OS X) machine. It is available here[2], with instructions.
I was then asked about a Windows version, and agreed to attempt one. This
version is called *pronuncify.net*, and is a .NET GUI version of the same
tool, with slightly different functions. It is available here[3], with
instructions.
Both tools require word-list files in plaintext, with one word (or phrase)
per line. Both tools name the files according to the standard established in
[[commons:Category:Pronunciation]], and convert them to Ogg Vorbis for you,
so they are ready to upload.
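For those curious how little is needed for the command-line workflow, here is
a minimal sketch of the general recipe in Python (not the actual pronuncify
code, which is in Ruby): record with sox's rec, trim to a fixed length,
convert to Ogg Vorbis, and prefix the file name with a language code. The
exact naming pattern should be checked against
[[commons:Category:Pronunciation]]; the "En-" prefix below is an assumption:

    import pathlib, subprocess, sys

    LANG = "en"  # assumed language-code prefix for the Commons file name

    def record_word(word, seconds=3):
        stem = f"{LANG.capitalize()}-{word.replace(' ', '_')}"
        wav, ogg = f"{stem}.wav", f"{stem}.ogg"
        input(f"Press Enter, then say: {word!r}")
        # 'rec' ships with sox and records from the default microphone;
        # the trim effect stops the recording after `seconds` seconds.
        subprocess.run(["rec", wav, "trim", "0", str(seconds)], check=True)
        subprocess.run(["sox", wav, ogg], check=True)  # convert to Ogg Vorbis
        pathlib.Path(wav).unlink()
        return ogg

    if __name__ == "__main__":
        words = pathlib.Path(sys.argv[1]).read_text().splitlines()
        for w in filter(None, words):
            print("wrote", record_word(w))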
In the future, I may add OAuth-based direct uploading to Commons. If you
run into difficulties, please file issues on GitHub for the appropriate
tool. Feedback is welcome.
A.
[1]
https://meta.wikimedia.org/wiki/Wikimedia_CEE_Meeting_2015/Programme/Lightn…
[2] https://github.com/abartov/pronuncify
[3] https://github.com/abartov/Pronuncify.net
--
Asaf Bartov
Hello everyone,
For a few months now, 15 French-speaking Wikipedia editors, supported by
Wikimédia France, have been working to design a Massive Open Online Course,
to teach how to contribute to Wikipedia and explain more about the way it
works.
The WikiMOOC lasts five weeks, with about 2.5 hours of work per week,
including the duration of the courses. You can check out the project page on
Wikipedia[1].
Registration for the WikiMOOC opens today on the FUN[2] platform (run by the
French Ministry of Education and Research).
The courses will start on February 22nd, 2016.
Do not hesitate to share this information with all the French-speaking
communities you might know of. Please note that you can stay tuned via the
WikiMOOC's Twitter[3] and Facebook[4] accounts.
Here is a short trailer about the WikiMOOC, in French[5]. Enjoy! :)
Please feel free to reach out to me if you have any questions,
Jules Xénard jules.xenard(a)wikimedia.fr
Wikimédia France
[1] https://fr.wikipedia.org/wiki/Wikip%C3%A9dia:WikiMOOC
[2]
https://www.france-universite-numerique-mooc.fr/courses/WMFr/86001/session0…
[3] https://twitter.com/wikimooc
[4] https://www.facebook.com/Wikimooc/
[5] https://www.youtube.com/watch?v=assiAnG3lv4
--
Myriam Berard
Wikimédia France
There’s an excellent profile of Magnus Manske in the Wikimedia blog today.
It’s hard to think of people more important to the movement than Magnus has
been since 2001.
Selected quotes: "...we have gone from slowdown to standstill; the
interface has changed little in the last ten years or so, and all the
recent changes have been fought teeth-and-claw by the communities,
especially the larger language editions. From the Media Viewer, the Visual
Editor, to Wikidata transclusion, all have been resisted by vocal groups of
editors, not because they are a problem, but because they represent
change... all websites, including Wikipedia must obey the Red Queen
hypothesis: you have to run just to stand still. This does not only affect
Wikipedia itself, but the entire Wikimedia ecosystem... if we wall our
garden against change, against new users, new technologies our work of 15
years is in danger of fading away... we are in an ideal position to try new
things. We have nothing to lose, except a little time.”
Link:
https://blog.wikimedia.org/2016/01/18/fifteen-years-wikipedia-magnus-manske/
Hi all,
In case you don't know, https://en.wikipedia.org/wiki/FreeCell is a
single-player card game that became popular after being included in some
versions of Microsoft Windows. The English Wikipedia entry about it has, at
least twice in the past, contained some relatively short sections about
several automated solvers that have been written for the game. However, those
sections were removed for being considered "non-notable" or
"non-Encyclopaedic".
Right now there's only this section -
https://en.wikipedia.org/wiki/FreeCell#Solver_complexity - which talks about
the fact that FreeCell was proved to be NP-complete.
I talked about it with a friend, and he told me I should try to get a
"reliable source" news outlet/newspaper to write about such solvers
(including, I should add, my own over at http://fc-solve.shlomifish.org/ ,
though the sections on the FreeCell Wikipedia entry did not cover it
exclusively).
Recently I stumbled upon this paper, written by three computer scientists
then at Ben-Gurion University of the Negev:
*
http://www.genetic-programming.org/hc2011/06-Elyasaf-Hauptmann-Sipper/Elyas…
* There's some analysis of this paper in this thread of the fc-solve-discuss
Yahoo group:
https://groups.yahoo.com/neo/groups/fc-solve-discuss/conversations/messages…
The solver described in the paper can solve 98% of the first 32,000 Microsoft
FreeCell deals. However, several hobbyist solvers (solvers written outside
academia, which may incorporate techniques that are less fashionable there,
and which were not submitted for academic peer review) that existed by the
time the paper was published have been able to solve all of the first 32,000
Microsoft deals except one (#11,982), which is widely believed to be
impossible, and which they fully traverse without finding a solution.
Finally, I should note that I've written a Perl 5/CPAN distribution to verify
that the FreeCell solutions generated by my solver (and, with some potential
future work, by other solvers) are correct, and I can run it on my solver's
output for the 32,000 Microsoft deals on my Core i3 machine in between 3 and
4 minutes.[Verification]
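The idea behind such verification is simple: replay each move of a purported
solution against the rules of the game and check that every card ends up on
the foundations. Here is a minimal sketch in Python (not my actual Perl
code); the pile names and the (source, destination) move format are
hypothetical simplifications, and it handles single-card moves only:

    RANKS = "A23456789TJQK"
    RED = set("HD")  # cards are two-char strings like "AH" or "TC"

    def rank(card):
        return RANKS.index(card[0])

    def legal(card, dst, piles):
        pile = piles[dst]
        if dst.startswith("f"):        # freecell: holds at most one card
            return not pile
        if dst.startswith("h"):        # foundation: builds up in suit from ace
            if not pile:
                return rank(card) == 0
            return card[1] == pile[-1][1] and rank(card) == rank(pile[-1]) + 1
        if not pile:                   # empty cascade accepts any card
            return True
        top = pile[-1]                 # cascade: builds down, alternating colors
        return rank(card) + 1 == rank(top) and (card[1] in RED) != (top[1] in RED)

    def verify(deal, solution):
        # deal maps pile names ("c0".."c7", "f0".."f3", "h0".."h3") to card lists
        piles = {name: list(cards) for name, cards in deal.items()}
        for src, dst in solution:
            assert piles[src], f"move from empty pile {src}"
            card = piles[src][-1]
            assert legal(card, dst, piles), f"illegal move {card}: {src}->{dst}"
            piles[dst].append(piles[src].pop())
        return all(len(piles[f"h{i}"]) == 13 for i in range(4))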
===========
Now my questions are:
1. Can this paper be considered a reliable, notable, and/or encyclopaedic
source that can hopefully deter and prevent future deletionism?
2. Can I cite the fc-solve-discuss thread mentioning the fact that there are
hobbyist solvers that perform better in this respect - just for
"encyclopaedic" completeness' sake, given that the scientific paper in
question does not mention them at all?
===========
Sorry this e-mail was quite long, but I wanted to present all the facts. As
you can tell, I've become quite frustrated with Wikipedia deletionism and the
hoops one has to jump through in order to cope with it.
Regards,
Shlomi Fish
[Verification] - one note is that none of these programs have been
verified/proved correct by a proof verifier such as
https://en.wikipedia.org/wiki/Coq , so there is a small possibility that they
have insurmountable bugs. Note that I did write some automated tests for
them.
--
-----------------------------------------------------------------
Shlomi Fish http://www.shlomifish.org/
What Makes Software Apps High Quality - http://shlom.in/sw-quality
The three principal virtues of a programmer are Laziness, Impatience, and
Hubris.
— http://perldoc.perl.org/perl.html
Please reply to list if it's a mailing list post - http://shlom.in/reply .
Dear friends,
Recent events have made me curious to learn more about the Wikimedia
Foundation's origins and history as a membership organization. The
revelations about the Wikimedia Foundation Board elections being a
recommendation for appointment rather than a direct vote seem to have been
a surprise to many of us, and almost ten years after membership was
eliminated, we see strongly suggestive "directly elected" language still
being fixed on the Foundation's own Board elections page.[1]
It turns out that this history is colorful: the Foundation was a membership
organization from 2003 to 2006, and Board seats were indeed originally
intended to be directly elected by member-Wikimedians. It seems that the
membership issue was never quite resolved. I've put some of my notes on
Meta-Wiki; please forward them to any wiki historians who might be interested
in throwing their weight on a shovel.
https://meta.wikimedia.org/wiki/Wikimedia_Foundation_membership_controversy
As a current WMF staff member, and having received a formal scolding two
weeks ago for expressing my professional and personal opinions on this
list--that a hierarchical corporate structure is completely inappropriate
and ineffectual for running the Foundation--I don't feel safe
editorializing about what membership could mean for the future of the
Wikimedia movement. But I would be thrilled to see this discussion take
place, and to contribute however I am able.
A note to fellow staff: Anything you can say about this history is most
likely protected speech under the Sarbanes-Oxley Act, since we're asking
whether state and federal laws were violated.
In solidarity,
Adam Wight
[[mw:User:Adamw]]
[1]
https://wikimediafoundation.org/w/index.php?title=Board_of_Trustees&diff=10…
Dear all,
Today the Wikimedia Foundation Board of Trustees voted to remove one of the
Trustees, Dr. James Heilman, from the Board. His term ended effective
immediately.
This was not a decision the Board took lightly. The Board has a
responsibility to the Wikimedia movement and the Wikimedia Foundation to
ensure that the Board functions with mutual confidence and provides effective
governance. Following serious consideration, the Board felt this removal
decision was a necessary step at this time. The resolution will be published
shortly.
This decision creates an open seat for a community-selected Trustee. The
Board is committed to filling this open community seat as quickly as
possible. We will reach out to the 2015 election committee
<https://meta.wikimedia.org/wiki/Wikimedia_Foundation_elections_2015/Committ…>
to discuss our options, and will keep you informed as we determine next
steps.
Patricio Lorente
Chair, Board of Trustees
Wikimedia Foundation
Just copying part of Andreas's comment from another thread:
"...can the board now please come to a decision on whether the Knight
Foundation grant letter and grant application documents will be posted on
Meta, and if not, provide an explanation to the community why they cannot
be made public?
"To recap, Jimmy Wales said over two weeks ago on his talk page[1] that in
his opinion the documentation should be posted on Meta, to clear the air
around this issue. However, nothing appears to have happened since then."
[1]
https://en.wikipedia.org/w/index.php?title=User_talk%3AJimbo_Wales&diff=698…
Anthony Cole