This paper (first reference) is the result of a class project I was part of
almost two years ago for CSCI 5417 Information Retrieval Systems. It builds
on a class project I did in CSCI 5832 Natural Language Processing, which I
presented at Wikimania '07. The project ran very late; we didn't send the
final paper in until the day before New Year's. This technical report was
never really announced, as far as I recall, so I thought it would be
interesting to look briefly at the results. The goal of this paper was to
break articles down into surface features and latent features and then use
those to study the rating system in use, predict article quality, and rank
results in a search engine. We used the [[random forests]] classifier, which
allowed us to analyze the contribution of each feature to performance by
looking directly at the weights it assigned. While the surface analysis was
performed on the whole English Wikipedia, the latent analysis was performed
on the Simple English Wikipedia (it is more expensive to compute).

= Surface features =

* Readability measures are the single best predictor of quality
that I have found, as defined by the Wikipedia Editorial Team (WET). The
[[Automated Readability Index]], [[Gunning Fog Index]] and [[Flesch-Kincaid
Grade Level]] were the strongest predictors, followed by length of article
HTML, number of paragraphs, [[Flesch Reading Ease]], [[SMOG Grading]], number
of internal links, [[Laesbarhedsindex Readability Formula]], number of words
and number of references.
* Weakly predictive were number of "to be"s, number of sentences,
[[Coleman-Liau Index]], number of templates, PageRank, number of external
links, and number of relative links.
* Not predictive (overall; see the end of section 2 for the per-rating score
breakdown): number of h2s or h3s, number of conjunctions, number of images*,
average word length, number of h4s, number of prepositions, number of
pronouns, number of interlanguage links, average syllables per word, number
of nominalizations, article age (based on page id), proportion of questions,
and average sentence length.
:* Number of images was actually by far the single strongest predictor of any
class, but only for Featured articles. Because it was so good at picking out
featured articles and somewhat good at picking out A and G articles the
classifier was confused in so many cases that the overall contribution of
this feature to classification performance is zero. :* Number of external
links is strongly predictive of Featured articles. :* The B class is highly
distinctive. It has a strong "signature," with high predictive value
assigned to many features. The Featured class is also very distinctive. F, B
and S (Start/Stub) contain the most information.
:* A is the least distinct class, not being very different from F or G.

= Latent features =

The algorithm used for latent analysis, which is an analysis of the
occurrence of words in every document with respect to the
link structure of the encyclopedia ("concepts"), is [[Latent Dirichlet
Allocation]]. This part of the analysis was done by CS PhD student Praful
Mangalath. An example of what can be done with the results of this analysis:
you provide a word (a search query) such as "hippie", then look at the
weight of every article for that word. You can pick the article with the
largest weight and look at its link network. You can pick out the articles
that this article links to, and/or which link to this article, that are also
weighted strongly for the word hippie while also contributing maximally to
this article's "hippieness". We tried this query in our system (LDA), Google
(site:en.wikipedia.org hippie), and the Simple English Wikipedia's Lucene
search engine. The breakdown of articles occurring in the top ten search
results for this word for those engines is:
* LDA only: [[Acid rock]], [[Aldeburgh Festival]], [[Anne Murray]], [[Carl
Radle]], [[Harry Nilsson]], [[Jack Kerouac]], [[Phil Spector]], [[Plastic
Ono Band]], [[Rock and Roll]], [[Salvador Allende]], [[Smothers brothers]],
[[Stanley Kubrick]].
* Google only: [[Glam Rock]], [[South Park]].
* Simple only: [[African Americans]], [[Charles Manson]],
[[Counterculture]], [[Drug use]], [[Flower Power]], [[Nuclear weapons]],
[[Phish]], [[Sexual liberation]], [[Summer of Love]]
* LDA & Google & Simple: [[Hippie]], [[Human Be-in]], [[Students for a
democratic society]], [[Woodstock festival]]
* LDA & Google: [[Psychedelic Pop]]
* Google & Simple: [[Lysergic acid diethylamide]], [[Summer of Love]]
(See the paper for the articles produced for the keywords philosophy and
economics.)

= Discussion / Conclusion =

* The results of the latent analysis are largely up to your
perception. But what is interesting is that the LDA features predict the WET
ratings of quality just as well as the surface-level features. Both feature
sets (surface and latent) pull out almost all of the information that
the rating system bears.
* The rating system devised by the WET is not very distinctive. You can best
tell the difference between Featured, A and Good articles (grouped together)
vs. B articles. Featured, A and Good articles are also quite distinctive
(Figure 1). Note that in this study we didn't look at Starts and Stubs, but
in an earlier paper we did.
:* This is
interesting when compared to this recent entry on the YouTube blog. "Five
Stars Dominate Ratings"
http://youtube-global.blogspot.com/2009/09/five-stars-dominate-ratings.html…
I think a sane, well-researched (with actual subjects) rating system is
well within the purview of the Usability Initiative. Helping people find and
create good content is what Wikipedia is all about. Having a solid rating
system allows you to reorganize the user interface, the Wikipedia
namespace, and the main namespace around good content and bad content as
needed. If you don't have a solid, information bearing rating system you
don't know what good content really is (really bad content is easy to spot).
:* My Wikimania talk was all about gathering data from people about articles
and using that to train machines to automatically pick out good content. You
ask people questions along dimensions that make sense to people, and give
the machine access to other surface features (such as a statistical measure
of readability, or length) and latent features (such as can be derived from
document word occurrence and encyclopedia link structure). I referenced page
262 of Zen and the Art of Motorcycle Maintenance to give an example of the
kind of qualitative features I would ask people. It really depends on what
features end up bearing information, to be tested in "the lab". Each word is
an example dimension of quality: We have "*unity, vividness, authority,
economy, sensitivity, clarity, emphasis, flow, suspense, brilliance,
precision, proportion, depth and so on.*" You then use surface and latent
features to predict these values for all articles. You can also say: when a
person rates this article as high on the x scale, they also mean that it
has this much of these surface and these latent features.
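The readability measures that did so well as surface features are simple to compute. Below is a minimal sketch of two of them, the [[Automated Readability Index]] and the [[Flesch-Kincaid Grade Level]]; note that the tokenization and the vowel-run syllable counter are crude stand-ins, not the preprocessing the paper actually used:

```python
import re

def surface_features(text):
    """Crude ARI and Flesch-Kincaid Grade Level for a text.

    Tokenization and syllable counting are rough approximations;
    the paper's exact preprocessing is not reproduced here.
    """
    words = re.findall(r"[A-Za-z']+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_chars = sum(len(w) for w in words)
    n_words = max(len(words), 1)
    n_sents = max(len(sentences), 1)
    # Approximate syllables as runs of vowels, at least one per word.
    n_syll = sum(max(len(re.findall(r"[aeiouy]+", w.lower())), 1)
                 for w in words)
    ari = 4.71 * n_chars / n_words + 0.5 * n_words / n_sents - 21.43
    fk = 0.39 * n_words / n_sents + 11.8 * n_syll / n_words - 15.59
    return {"ari": ari, "flesch_kincaid": fk,
            "n_words": n_words, "n_sentences": n_sents}

print(surface_features("The cat sat on the mat. It was a nice mat."))
```

In the paper, values like these (together with the other surface counts) are what the random forest consumes, and its per-feature weights give the predictiveness ranking discussed above.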
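The latent-feature query procedure (pick the article weighted most strongly for the query word, then rank its link neighbourhood by the same weights) can be sketched with toy data. All weights and links below are invented for illustration; in the real system they come from the LDA model and the encyclopedia's link graph:

```python
# Hypothetical per-article weights for the query word "hippie",
# standing in for what the LDA analysis would provide.
weight = {
    "Hippie": 0.92, "Summer of Love": 0.81, "Woodstock festival": 0.74,
    "Acid rock": 0.63, "Nuclear weapons": 0.05,
}

# Toy link neighbourhood: articles each article links to and/or is linked
# from. Invented for this example.
links = {
    "Hippie": ["Summer of Love", "Woodstock festival", "Acid rock",
               "Nuclear weapons"],
    "Summer of Love": ["Hippie"],
    "Woodstock festival": ["Hippie"],
    "Acid rock": ["Hippie"],
    "Nuclear weapons": ["Hippie"],
}

def related_articles(query_weight, link_graph, top_n=3):
    """Seed on the top-weighted article, then rank its neighbours by weight."""
    seed = max(query_weight, key=query_weight.get)
    neighbours = link_graph.get(seed, [])
    ranked = sorted(neighbours, key=lambda a: query_weight.get(a, 0.0),
                    reverse=True)
    return seed, ranked[:top_n]

seed, related = related_articles(weight, links)
print(seed, related)
# -> Hippie ['Summer of Love', 'Woodstock festival', 'Acid rock']
```

This is only the retrieval step; the paper's actual ranking also weighs how much each neighbour contributes to the seed article's "hippieness".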
= References =
- DeHoust, C., Mangalath, P., Mingus, B. (2008). *Improving search in
Wikipedia through quality and concept discovery*. Technical Report.
PDF<http://grey.colorado.edu/mediawiki/sites/mingus/images/6/68/DeHoustMangalat…>
- Rassbach, L., Mingus, B., Blackford, T. (2007). *Exploring the
feasibility of automatically rating online article quality*. Technical
Report. PDF<http://grey.colorado.edu/mediawiki/sites/mingus/images/d/d3/RassbachPincock…>
Hoi,
I have asked and received permission to forward to you all this most
excellent bit of news.
The LINGUIST List is a most excellent resource for people interested in the
field of linguistics. As I mentioned some time ago, they have had a funding
drive in which they asked for a certain amount of money within a given
number of days; in return, they would run a project on Wikipedia to learn
what needs doing to get better coverage for the field of linguistics. What
you will read in this mail is that the whole community of linguists is asked
to cooperate. I am really thrilled, as it will also get us more linguists
interested in what we do. My hope is that a fraction of them will be
interested in the languages that they care for and help them become more
relevant. As a member of the "language prevention committee", I would love
to get more knowledgeable people involved in our smaller projects. If it
means that we get more requests for more projects, we will really feel
embarrassed by all the new projects we will have to approve because of the
quality of the Incubator content and the quality of the linguistic arguments
for why we should approve yet another language :)
NB: Is this not a really clever way of raising money? Give us this much in
this time frame and we will then do this as a bonus...
Thanks,
GerardM
---------- Forwarded message ----------
From: LINGUIST Network <linguist(a)linguistlist.org>
Date: Jun 18, 2007 6:53 PM
Subject: 18.1831, All: Call for Participation: Wikipedia Volunteers
To: LINGUIST(a)listserv.linguistlist.org
LINGUIST List: Vol-18-1831. Mon Jun 18 2007. ISSN: 1068 - 4875.
Subject: 18.1831, All: Call for Participation: Wikipedia Volunteers
Moderators: Anthony Aristar, Eastern Michigan U <aristar(a)linguistlist.org>
Helen Aristar-Dry, Eastern Michigan U <hdry(a)linguistlist.org>
Reviews: Laura Welcher, Rosetta Project
<reviews(a)linguistlist.org>
Homepage: http://linguistlist.org/
The LINGUIST List is funded by Eastern Michigan University,
and donations from subscribers and publishers.
Editor for this issue: Ann Sawyer <sawyer(a)linguistlist.org>
================================================================
To post to LINGUIST, use our convenient web form at
http://linguistlist.org/LL/posttolinguist.html
===========================Directory==============================
1)
Date: 18-Jun-2007
From: Hannah Morales < hannah(a)linguistlist.org >
Subject: Wikipedia Volunteers
-------------------------Message 1 ----------------------------------
Date: Mon, 18 Jun 2007 12:49:35
From: Hannah Morales < hannah(a)linguistlist.org >
Subject: Wikipedia Volunteers
Dear subscribers,
As you may recall, one of our Fund Drive 2007 campaigns was called the
"Wikipedia Update Vote." We asked our viewers to consider earmarking their
donations to organize an update project on linguistics entries in the
English-language Wikipedia. You can find more background information on this
at:
http://linguistlist.org/donation/fund-drive2007/wikipedia/index.cfm.
The speed with which we met our goal, thanks to the interest and generosity
of our readers, was a sure sign that the linguistics community was
enthusiastic about the idea. Now that summer is upon us, and some of you may
have a bit more leisure time, we are hoping that you will be able to help us
get started on the Wikipedia project. The LINGUIST List's role in this
project is a purely organizational one. We will:
*Help, with your input, to identify major gaps in the Wikipedia materials or
pages that need improvement;
*Compile a list of linguistics pages that Wikipedia editors have identified
as "in need of attention from an expert on the subject" or "does not cite
any references or sources," etc.;
*Send out periodical calls for volunteer contributors on specific topics or
articles;
*Provide simple instructions on how to upload your entries into Wikipedia;
*Keep track of our project Wikipedians;
*Keep track of revisions and new entries;
*Work with Wikimedia Foundation to publicize the linguistics community's
efforts.
We hope you are as enthusiastic about this effort as we are. Just to help us
all get started looking at Wikipedia more critically, and to easily identify
an area needing improvement, we suggest that you take a look at the List of
Linguists page at:
http://en.wikipedia.org/wiki/List_of_linguists.
Many people are not listed there; others need to have more facts and
information added. If you would like to participate in this exciting update
effort, please respond by sending an email to LINGUIST Editor Hannah Morales
at hannah(a)linguistlist.org, suggesting what your role might be or which
linguistics entries you feel should be updated or added. Some linguists who
saw our campaign on the Internet have already written us with specific
suggestions, which we will share with you soon.
This update project will take major time and effort on all our parts. The
end result will be a much richer internet resource of information on the
breadth and depth of the field of linguistics. Our efforts should also
stimulate prospective students to consider studying linguistics and educate
a wider public on what we do. Please consider participating.
Sincerely,
Hannah Morales
Editor, Wikipedia Update Project
Linguistic Field(s): Not Applicable
-----------------------------------------------------------
LINGUIST List: Vol-18-1831
Hoi,
There is a request for a Wikipedia in Ancient Greek. This request has so far
been denied. A lot of words have been used about it. Many people maintain
their positions and do not for whatever reason consider the arguments of
others.
In my opinion there are a few roadblocks.
   - Ancient Greek is an ancient language; the policy does not allow for it.
   - Texts in Ancient Greek written today about contemporary subjects
     require the reconstruction of Ancient Greek:
     - it requires the use of existing words for concepts that did not
       exist at the time when the language was alive
     - neologisms will be needed to describe things that did not exist
       at the time when the language was alive
     - modern texts will not represent the language as it used to be
   - Constructed, and by inference reconstructed, languages are effectively
     not permitted.
We can change the policy if there are sufficient arguments, when we agree on
a need.
When a text is written in reconstructed ancient Greek, and when it is
clearly stated that it is NOT the ancient Greek of bygone days, it can be
obvious that it is a great tool to learn skills to read and write ancient
Greek but that it is in itself not Ancient Greek. Ancient Greek as a
language is ancient. I have had a word with people who are involved in the
working group that deals with the ISO-639, and with someone from SIL, and it
is clear that a proposal for a code for "Ancient Greek reconstructed" will
be considered for the ISO-639-3. For the ISO-639-6 a code is likely to be
given, because a clear use for this code can be given. We can apply for a
code, and as it has a use bigger than Wikipedia alone it
clearly has merit.
With modern texts clearly labelled as distinct from the original language,
it will be obvious that the innovations a writer needs for his writing are
legitimate.
This leaves the fact that constructed and reconstructed languages are not
permitted because of the notion that mother tongue users are required. In my
opinion, this has always been only a gesture to those people who are dead
set against any and all constructed languages. In the policies there is
something vague "*it must have a reasonable degree of recognition as
determined by discussion (this requirement is being discussed by the language
subcommittee <http://meta.wikimedia.org/wiki/Language_subcommittee>)."* It
is vague because even though the policy talks about a discussion, it is
killed off immediately by stating "The proposal has a sufficient number of
living native speakers to form a viable community and audience." In my
opinion, this discussion for criteria for the acceptance of constructed or
reconstructed languages has not happened. Proposals for objective criteria
have been ignored.
In essence, to be clear about it:
- We can get a code for reconstructed languages.
- We need to change the policy to allow for reconstructed and
constructed languages
We need to do both in order to move forward.
The proposal for objective criteria for constructed and reconstructed
languages is in a nutshell:
- The language must have an ISO-639-3 code
- We need full WMF localisation from the start
- The language must be sufficiently expressive for writing a modern
encyclopaedia
- The Incubator project must have sufficiently large articles that
demonstrate both the language and its ability to write about a wide range of
topics
- A sufficiently large group of editors must be part of the Incubator
project
Thanks,
GerardM
Hello all!
Next Thursday's office hours will feature Véronique Kessler, the
Foundation's Chief Financial Officer. If you don't know
Véronique, you can get to know her at
<http://wikimediafoundation.org/wiki/V%C3%A9ronique_Kessler>.
Office hours on Thursday are from 2100 to 2200 UTC (3:00 PM - 4:00 PM PDT).
If you do not have an IRC client, there are two ways you can come chat
using a web browser: First is using the Wikizine chat gateway at
<http://chatwikizine.memebot.com/cgi-bin/cgiirc/irc.cgi>. Type a
nickname, select irc.freenode.net from the top menu and
#wikimedia-office from the following menu, then login to join.
Also, you can access Freenode by going to http://webchat.freenode.net/,
typing in the nickname of your choice and choosing wikimedia-office as
the channel. You may be prompted to click through a security warning.
It should be all right.
Please feel free to forward (and translate!) this email to any other
relevant email lists you happen to be on. Also note, this is
Véronique's first foray into IRC, so let's show her how welcoming we can
be! :-)
--
Cary Bass
Volunteer Coordinator, Wikimedia Foundation
Support Free Knowledge: http://wikimediafoundation.org/wiki/Donate
Possibly of interest to Wikimedians: the U.S. Office of Science and
Technology Policy is requesting public comment on making federally
funded scientific research open access. The deadline is Jan. 7.
----- Forwarded Message -----
From: "Charles W. Bailey, Jr." <cwbailey(a)digital-scholarship.com>
To: sts-l(a)ala.org
Sent: Thursday, December 10, 2009 10:50:30 AM GMT -08:00 US/Canada Pacific
Subject: [STS-L] OSTP Request for Comment on Open Access to Federally
Funded Research
The Office of Science and Technology Policy is requesting
input regarding enhanced access to federally funded science
and technology research results, including the possibility
of open access to them. Comments can be e-mailed to
publicaccess(a)ostp.gov. The deadline for comments is January
7, 2010.
Here's an excerpt from the announcement
(http://bit.ly/5J1ZAp):
Input is welcome on any aspect of expanding public access to
peer reviewed publications arising from federal research.
Questions that individuals may wish to address include, but
are not limited to, the following (please respond to
questions individually):
1. How do authors, primary and secondary publishers,
libraries, universities, and the federal government
contribute to the development and dissemination of peer
reviewed papers arising from federal funds now, and how
might this change under a public access policy?
2. What characteristics of a public access policy would best
accommodate the needs and interests of authors, primary and
secondary publishers, libraries, universities, the federal
government, users of scientific literature, and the public?
3. Who are the users of peer-reviewed publications arising
from federal research? How do they access and use these
papers now, and how might they if these papers were more
accessible? Would others use these papers if they were more
accessible, and for what purpose?
4. How best could federal agencies enhance public access to
the peer-reviewed papers that arise from their research
funds? What measures could agencies use to gauge whether
there is increased return on federal investment gained by
expanded access?
5. What features does a public access policy need to have to
ensure compliance?
6. What version of the paper should be made public under a
public access policy (e.g., the author's peer reviewed
manuscript or the final published version)? What are the
relative advantages and disadvantages to different versions
of a scientific paper?
7. At what point in time should peer-reviewed papers be made
public via a public access policy relative to the date a
publisher releases the final version? Are there empirical
data to support an optimal length of time? Should the delay
period be the same or vary for levels of access (e.g., final
peer reviewed manuscript or final published article, access
under fair use versus alternative license), for federal
agencies and scientific disciplines?
8. How should peer-reviewed papers arising from federal
investment be made publicly available? In what format should
the data be submitted in order to make it easy to search,
find, and retrieve and to make it easy for others to link to
it? Are there existing digital standards for archiving and
interoperability to maximize public benefit? How are these
anticipated to change?
9. Access demands not only availability, but also meaningful
usability. How can the federal government make its
collections of peer-reviewed papers more useful to the
American public? By what metrics (e.g., number of articles
or visitors) should the Federal government measure success
of its public access collections? What are the best examples
of usability in the private sector (both domestic and
international)? And, what makes them exceptional? Should
those who access papers be given the opportunity to comment
or provide feedback?
In "The Obama Administration Wants OA for Federally-Funded
Research" (http://bit.ly/8fZ6Yh), Peter Suber says:
"This is big. We already have important momentum in Congress
for FRPAA. The question here is about separate action from
the White House. What OA policies should President Obama
direct funding agencies to adopt? This is the first major
opening to supplement legislative action with executive
action to advance public access to publicly-funded research.
It's also the first explicit sign that President Obama
supports the OA policy at the NIH and wants something
similar at other federal agencies."
In "Please Comment on Mandate Proposal by President Obama's
Office of Science and Technology Policy (OSTP)"
(http://bit.ly/8OQUEF), Stevan Harnad provides his answers
to the OSTP's questions.
--
Best Regards,
Charles
Charles W. Bailey, Jr.
Publisher, Digital Scholarship
http://bit.ly/Z6HFx
Hi folks, just curious - is the appeal from Jimmy going to be the
standard banner for the remainder of the fundraiser? Congratulations,
by the way, on the success of the drive thus far - it has raised 92%
of the annual goal, or $6.498M, according to the fundraiser statistics
page. Despite early hiccups with the banner content, this fundraiser
appears to be (by a wide margin) the most successful in Wikimedia's
history.
Nathan
Hi everyone,
The next strategic planning office hours are:
Tuesday from 20:00-21:00 UTC, which is:
Tuesday, 12-1pm PST
Tuesday, 3pm-4pm EST
There has been a lot of tremendous work on the strategy wiki the past
few months, and Task Forces are starting to finish up their work.
Office hours will be a great opportunity to discuss the work that's
happened as well as the work to come.
As always, you can access the chat by going to
https://webchat.freenode.net and filling in a username and the channel
name (#wikimedia-strategy). You may be prompted to click through a
security warning. It's fine. More details at:
http://strategy.wikimedia.org/wiki/IRC_office_hours
Thanks! Hope to see many of you there.
=Eugene
--
======================================================================
Eugene Eric Kim ................................ http://xri.net/=eekim
Blue Oxen Associates ........................ http://www.blueoxen.com/
======================================================================
A bit of housekeeping, but just to let everyone know: Jan-Bart de
Vreede, Jimmy Wales, Stu West, and Matt Halprin have all been appointed
to additional one-year terms for 2010, by unanimous votes of their
fellow board members. I look forward to working with all of them as we
continue with the strategic planning process and am grateful for their
willingness to serve on the board.
--Michael Snow
Hi,
I'm evaluating our legal options around commercially using wikipedia
content, if this is not the right forum, please let me know / forward
the question. It might be that the method I describe is not legally
possible, so if there is any similar situation that does or does not
work, please let me know as well. I'd like to play it safe in this field
and avoid potential issues.
For the sake of example we would like to automatically convert the
page content to a different text and different format (e.g.
automatically create text extracts and compile it into a pdf document)
and sell it as part of a subscription service or even better as a
standalone product. We include all the attributions / links wherever
possible, and mark that the source of the product is Wikipedia. What
else are we required to do before the sale can happen? Is there any
fee or percentage that should go back to the Wikimedia Foundation in such
cases? Can we restrict the copying or redistribution of such a product?
For the latter, I suppose there is nothing we can do; however, this
seems to ruin the whole business model, doesn't it?
Thank you for your help,
Istvan