This paper (first reference) grew out of a class project I was part of almost two years ago for CSCI 5417 Information Retrieval Systems. It builds on a class project I did in CSCI 5832 Natural Language Processing, which I presented at Wikimania '07. The project ran very late; we didn't send the final paper in until the day before New Year's. As far as I recall, this technical report was never really announced, so I thought it would be interesting to look briefly at the results. The goal of the paper was to break articles down into surface features and latent features, and then use those to study the rating system in use, predict article quality, and rank results in a search engine. We used the [[random forests]] classifier, which let us analyze the contribution of each feature to performance by looking directly at the weights it assigned. While the surface analysis was performed on the whole English Wikipedia, the latent analysis was performed on the Simple English Wikipedia (it is more expensive to compute).

= Surface features =

* Readability measures are the single best predictor of quality
that I have found, as defined by the Wikipedia Editorial Team (WET). The
[[Automated Readability Index]], [[Gunning Fog Index]] and [[Flesch-Kincaid
Grade Level]] were the strongest predictors, followed by length of article
HTML, number of paragraphs, [[Flesch Reading Ease]], [[SMOG]] grading, number of internal links, [[LIX|Läsbarhetsindex readability formula]], number of words and number of references. Weakly predictive were the number of "to be"s, the number
of sentences, [[Coleman-Liau Index]], number of templates, PageRank, number
of external links, number of relative links. Not predictive (overall - see
the end of section 2 for the per-rating score breakdown): Number of h2 or
h3's, number of conjunctions, number of images*, average word length, number
of h4's, number of prepositions, number of pronouns, number of interlanguage
links, average syllables per word, number of nominalizations, article age
(based on page id), proportion of questions, average sentence length.
:* Number of images was actually by far the single strongest predictor of any
class, but only for Featured articles. Because it was so good at picking out
featured articles and somewhat good at picking out A and G articles the
classifier was confused in so many cases that the overall contribution of
this feature to classification performance is zero.
:* Number of external links is strongly predictive of Featured articles.
:* The B class is highly
distinctive. It has a strong "signature," with high predictive value
assigned to many features. The Featured class is also very distinctive. F, B and S (Start/Stub) contain the most information.
:* A is the least distinct class, not being very different from F or G.
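To make this concrete, here is a minimal sketch of the surface-feature approach using scikit-learn's random forest. This is not our original pipeline: the feature set is truncated, and "articles"/"ratings" are stand-ins for a real labelled corpus.

    # Minimal sketch, not the original pipeline. 'articles' is a list of
    # raw wikitext strings and 'ratings' the corresponding WET classes.
    import re
    from sklearn.ensemble import RandomForestClassifier

    def ari(text):
        # Automated Readability Index:
        # 4.71*(chars/word) + 0.5*(words/sentence) - 21.43
        words = re.findall(r"[A-Za-z']+", text)
        sents = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        chars = sum(len(w) for w in words)
        return 4.71 * chars / len(words) + 0.5 * len(words) / len(sents) - 21.43

    def features(text):
        words = re.findall(r"[A-Za-z']+", text)
        return [ari(text),
                len(text),           # article length
                len(words),          # number of words
                text.count("[[")]    # crude internal-link count

    X = [features(a) for a in articles]
    clf = RandomForestClassifier(n_estimators=500)
    clf.fit(X, ratings)

    # Per-feature weights like the ones discussed above:
    for name, w in zip(["ARI", "length", "words", "links"],
                       clf.feature_importances_):
        print(name, round(w, 3))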
= Latent features =

The algorithm used for latent analysis, which is an analysis of the occurrence of words in every document with respect to the
link structure of the encyclopedia ("concepts"), is [[Latent Dirichlet
Allocation]]. This part of the analysis was done by CS PhD student Praful
Mangalath. An example of what can be done with the result of this analysis
is that you provide a word (a search query) such as "hippie". You can then
look at the weight of every article for the word hippie. You can pick the
article with the largest weight, and then look at its link network. You can
pick out the articles that this article links to and/or which link to this
article that are also weighted strongly for the word hippie, while also
contributing maximally to this article's "hippieness". We tried this query in our system (LDA), Google (site:en.wikipedia.org hippie), and the Simple English Wikipedia's Lucene search engine. The breakdown of articles occurring in the top ten search results for this word for those engines is:

* LDA only: [[Acid rock]], [[Aldeburgh Festival]], [[Anne Murray]], [[Carl Radle]], [[Harry Nilsson]], [[Jack Kerouac]], [[Phil Spector]], [[Plastic Ono Band]], [[Rock and Roll]], [[Salvador Allende]], [[Smothers Brothers]], [[Stanley Kubrick]]
* Google only: [[Glam Rock]], [[South Park]]
* Simple only: [[African Americans]], [[Charles Manson]], [[Counterculture]], [[Drug use]], [[Flower Power]], [[Nuclear weapons]], [[Phish]], [[Sexual liberation]], [[Summer of Love]]
* LDA & Google & Simple: [[Hippie]], [[Human Be-in]], [[Students for a Democratic Society]], [[Woodstock festival]]
* LDA & Google: [[Psychedelic Pop]]
* Google & Simple: [[Lysergic acid diethylamide]], [[Summer of Love]]

(See the paper for the articles produced for the keywords philosophy and economics.)
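The ranking mechanics described above can be approximated with any off-the-shelf LDA implementation. Below is a minimal sketch using gensim; it is not Praful's actual system, and "docs" (tokenized articles) and "links" (the article link graph) are stand-ins.

    # Minimal sketch: rank articles for a query word by their LDA weight,
    # then rank the top article's link neighbourhood the same way.
    # 'docs' and 'links' are stand-ins, not the system used in the paper.
    from gensim import corpora, models

    dictionary = corpora.Dictionary(docs)          # docs: list of token lists
    corpus = [dictionary.doc2bow(d) for d in docs]
    lda = models.LdaModel(corpus, id2word=dictionary, num_topics=200)
    topic_word = lda.get_topics()                  # num_topics x vocab_size

    def weight(doc_bow, word):
        # P(word|doc) under the model: sum over topics of P(t|doc)*P(word|t)
        wid = dictionary.token2id[word]
        return sum(p * topic_word[t, wid]
                   for t, p in lda.get_document_topics(doc_bow,
                                                       minimum_probability=0))

    scores = [(weight(bow, "hippie"), i) for i, bow in enumerate(corpus)]
    best = max(scores)[1]                          # most "hippie"-weighted article
    # links[best]: indices of articles linking to/from the top article
    neighbours = sorted(((weight(corpus[j], "hippie"), j) for j in links[best]),
                        reverse=True)
    print(neighbours[:10])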
= Discussion / Conclusion =

* The results of the latent analysis are open to interpretation, but what is interesting is that the LDA features predict the WET ratings of quality just as well as the surface-level features. Both feature sets (surface and latent) pull out almost all of the information that the rating system bears.
* The rating system devised by the WET is not very distinctive. The clearest distinction is between B articles on one side and Featured, A and Good articles grouped together on the other. Featured, A and Good articles are also quite distinctive (Figure 1). Note that in this study we didn't look at Starts and Stubs, but we did in an earlier paper.
:* This is interesting when compared to a recent entry on the YouTube blog, "Five Stars Dominate Ratings":
http://youtube-global.blogspot.com/2009/09/five-stars-dominate-ratings.html…
I think a sane, well-researched (with actual subjects) rating system is well within the purview of the Usability Initiative. Helping people find and create good content is what Wikipedia is all about. Having a solid rating system allows you to reorganize the user interface, the Wikipedia namespace, and the main namespace around good content and bad content as needed. If you don't have a solid, information-bearing rating system, you don't know what good content really is (really bad content is easy to spot).
:* My Wikimania talk was all about gathering data from people about articles and using that to train machines to automatically pick out good content. You ask people questions along dimensions that make sense to people, and give the machine access to other surface features (such as a statistical measure of readability, or length) and latent features (such as can be derived from document word occurrence and encyclopedia link structure). I referenced page 262 of Zen and the Art of Motorcycle Maintenance to give an example of the kind of qualitative features I would ask people about. It really depends on which features end up bearing information, to be tested in "the lab". Each word is an example dimension of quality: we have "*unity, vividness, authority, economy, sensitivity, clarity, emphasis, flow, suspense, brilliance, precision, proportion, depth and so on.*" You then use surface and latent features to predict these values for all articles. You can also say that when a person rates an article as high on the x scale, they also mean that it has this much of these surface and latent features.
= References =
- DeHoust, C., Mangalath, P., & Mingus, B. (2008). *Improving search in Wikipedia through quality and concept discovery*. Technical report. PDF<http://grey.colorado.edu/mediawiki/sites/mingus/images/6/68/DeHoustMangalat…>
- Rassbach, L., Mingus, B., & Blackford, T. (2007). *Exploring the feasibility of automatically rating online article quality*. Technical report. PDF<http://grey.colorado.edu/mediawiki/sites/mingus/images/d/d3/RassbachPincock…>
Hoi,
I have asked and received permission to forward to you all this most
excellent bit of news.
The LINGUIST List is a most excellent resource for people interested in the field of linguistics. As I mentioned some time ago, they held a funding drive in which they asked for a certain amount of money within a given number of days, promising in return to run a project on Wikipedia to learn what needs doing to get better coverage for the field of linguistics. What you will read in this mail is that the whole community of linguists is being asked to cooperate. I am really thrilled, as it will also get more linguists interested in what we do. My hope is that a fraction of them will be interested in the languages they care for and help those projects become more relevant. As a member of the "language prevention committee", I would love to get more knowledgeable people involved in our smaller projects. If it means we get more requests for new projects, we will really feel embarrassed by all the new projects we will have to approve because of the quality of the Incubator content and the quality of the linguistic arguments for approving yet another language :)
NB: Is this not a really clever way of raising money? Give us this much in this time frame and we will then do this as a bonus...
Thanks,
GerardM
---------- Forwarded message ----------
From: LINGUIST Network <linguist(a)linguistlist.org>
Date: Jun 18, 2007 6:53 PM
Subject: 18.1831, All: Call for Participation: Wikipedia Volunteers
To: LINGUIST(a)listserv.linguistlist.org
LINGUIST List: Vol-18-1831. Mon Jun 18 2007. ISSN: 1068 - 4875.
Subject: 18.1831, All: Call for Participation: Wikipedia Volunteers
Moderators: Anthony Aristar, Eastern Michigan U <aristar(a)linguistlist.org>
Helen Aristar-Dry, Eastern Michigan U <hdry(a)linguistlist.org>
Reviews: Laura Welcher, Rosetta Project
<reviews(a)linguistlist.org>
Homepage: http://linguistlist.org/
The LINGUIST List is funded by Eastern Michigan University,
and donations from subscribers and publishers.
Editor for this issue: Ann Sawyer <sawyer(a)linguistlist.org>
================================================================
To post to LINGUIST, use our convenient web form at
http://linguistlist.org/LL/posttolinguist.html
===========================Directory==============================
1)
Date: 18-Jun-2007
From: Hannah Morales < hannah(a)linguistlist.org >
Subject: Wikipedia Volunteers
-------------------------Message 1 ----------------------------------
Date: Mon, 18 Jun 2007 12:49:35
From: Hannah Morales < hannah(a)linguistlist.org >
Subject: Wikipedia Volunteers
Dear subscribers,
As you may recall, one of our Fund Drive 2007 campaigns was called the "Wikipedia Update Vote." We asked our viewers to consider earmarking their donations to organize an update project on linguistics entries in the English-language Wikipedia. You can find more background information on this at:
http://linguistlist.org/donation/fund-drive2007/wikipedia/index.cfm.
The speed with which we met our goal, thanks to the interest and generosity of our readers, was a sure sign that the linguistics community was enthusiastic about the idea. Now that summer is upon us, and some of you may have a bit more leisure time, we are hoping that you will be able to help us get started on the Wikipedia project. The LINGUIST List's role in this project is a purely organizational one. We will:
*Help, with your input, to identify major gaps in the Wikipedia materials or pages that need improvement;
*Compile a list of linguistics pages that Wikipedia editors have identified as "in need of attention from an expert on the subject" or "does not cite any references or sources," etc.;
*Send out periodic calls for volunteer contributors on specific topics or articles;
*Provide simple instructions on how to upload your entries into Wikipedia;
*Keep track of our project Wikipedians;
*Keep track of revisions and new entries;
*Work with the Wikimedia Foundation to publicize the linguistics community's efforts.
We hope you are as enthusiastic about this effort as we are. Just to help us all get started looking at Wikipedia more critically, and to easily identify an area needing improvement, we suggest that you take a look at the List of Linguists page at:
http://en.wikipedia.org/wiki/List_of_linguists.
Many people are not listed there; others need to have more facts and information added. If you would like to participate in this exciting update effort, please respond by sending an email to LINGUIST Editor Hannah Morales at hannah(a)linguistlist.org, suggesting what your role might be or which linguistics entries you feel should be updated or added. Some linguists who saw our campaign on the Internet have already written us with specific suggestions, which we will share with you soon.
This update project will take major time and effort on all our parts. The end result will be a much richer internet resource of information on the breadth and depth of the field of linguistics. Our efforts should also stimulate prospective students to consider studying linguistics and educate a wider public on what we do. Please consider participating.
Sincerely,
Hannah Morales
Editor, Wikipedia Update Project
Linguistic Field(s): Not Applicable
-----------------------------------------------------------
LINGUIST List: Vol-18-1831
Re: http://twkozlowski.net/the-pot-and-the-kettle-the-wikimedia-way/
Two questions:
1. Where can I find a response from either the WMF board or WMF funding/finance to the criticisms of a lack of transparency, or to the apparent failure of the project to deliver value for donors' money, as raised in this blog post?
2. Where can I read an officially recognized report for the outcomes
of this project in terms of value for Wikimedia projects? Obviously we
do not want to rely on second-hand analysis when reports to the WMF
are a requirement for such projects.
Thanks,
Fae
--
faewik(a)gmail.com https://commons.wikimedia.org/wiki/User:Fae
We know NSA wants Wikipedia data, as Wikipedia is listed in one of the
NSA slides:
https://commons.wikimedia.org/wiki/File:KS8-001.jpg
That slide is about HTTP, and the tech staff are moving the
user/reader base to HTTPS.
As we learn more about the NSA programs, we need to consider vectors
other than HTTP for the NSA to obtain the data they want. And the
userbase needs to be aware of the current risks.
One question from the "Dells are backdored"[sic] thread that is worth
separate consideration is:
Are the Wikimedia transit links encrypted, especially for database replication?
MySQL supports replication over SSL, so I assume the answer is yes.
If not, is it necessary, useful, and feasible?
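For what it's worth, whether a given replica is actually configured to replicate over SSL can be checked from SHOW SLAVE STATUS. A minimal sketch; the host and credentials are placeholders:

    # Minimal sketch: check whether a MySQL replica replicates over SSL.
    # Host and credentials are placeholders.
    import pymysql

    conn = pymysql.connect(host="db-replica.example.org", user="monitor",
                           password="...",
                           cursorclass=pymysql.cursors.DictCursor)
    with conn.cursor() as cur:
        cur.execute("SHOW SLAVE STATUS")
        status = cur.fetchone()
    if status is None:
        print("not a replica")
    else:
        # 'Yes' means the IO thread connects to the master using SSL
        print("Master_SSL_Allowed:", status["Master_SSL_Allowed"])
        print("Master_SSL_Cipher:", status["Master_SSL_Cipher"])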
However we also need to consider that SSL and other encryption may be
useless against NSA/etc, which means replicating non-public data
should be avoided wherever possible, as it becomes a single point of
failure.
Given how public our system is, we don't have a lot of non-public data, so we might be able to design the architecture so that such information isn't replicated, and also ensure it isn't accessed over insecure links. I think the only parts of the dataset that are private & valuable are:
* passwords/login cookies,
* checkuser info (IPs and user agents),
* WMF analytics, which includes readers IIRC,
* hidden/deleted edits, and
* private wikis and mailing lists.
Have I missed any?
Are passwords and/or checkuser info replicated?
Is there a data policy on WMF analytics data which prevents it from flowing over insecure links, limits what is collected, and ensures destruction of the data within reasonable timeframes? For example, how about not using cookies to track analytics of readers who are on HTTP instead of HTTPS?
The private wikis can be restricted to HTTPS, depending on the value of the data on those wikis in the wrong hands. The private mailing lists will be harder to secure, and at least the English Wikipedia arbcom list contains a lot of valuable data about contributors.
Regarding hidden/deleted edits, replication isn't the only source of this data. All edits are also exposed via Recent Changes (HTTPS/API/etc.) as they occur, and the value of these edits is determined by the fact that they are hidden afterwards (e.g. they don't appear in dumps). Is there any way to control who is effectively capturing all edits via Recent Changes?
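To illustrate how low the bar is, here is a minimal sketch of such a harvester against the public API. The parameters are real API parameters, but the loop is deliberately simplified (no deduplication, error handling or rate limiting):

    # Minimal sketch: capture edits from Recent Changes as they occur.
    # Deliberately simplified; no deduplication or error handling.
    import time
    import requests

    API = "https://en.wikipedia.org/w/api.php"
    params = {"action": "query", "list": "recentchanges", "format": "json",
              "rcprop": "title|ids|timestamp|user|comment", "rclimit": "500"}

    while True:
        data = requests.get(API, params=params).json()
        for rc in data["query"]["recentchanges"]:
            # Archive each edit before it can be hidden/deleted later
            print(rc["timestamp"], rc.get("user"), rc["title"])
        if "continue" in data:
            params.update(data["continue"])       # page through older changes
        else:
            params.pop("rccontinue", None)        # caught up; restart at newest
            time.sleep(10)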
--
John Vandenberg
Hi folks,
to increase accountability and create more opportunities for course
corrections and resourcing adjustments as necessary, Sue's asked me
and Howie Fung to set up a quarterly project evaluation process,
starting with our highest priority initiatives. These are, according
to Sue's narrowing focus recommendations which were approved by the
Board [1]:
- Visual Editor
- Mobile (mobile contributions + Wikipedia Zero)
- Editor Engagement (also known as the E2 and E3 teams)
- Funds Dissemination Committee and expanded grant-making capacity
I'm proposing the following initial schedule:
January:
- Editor Engagement Experiments
February:
- Visual Editor
- Mobile (Contribs + Zero)
March:
- Editor Engagement Features (Echo, Flow projects)
- Funds Dissemination Committee
We’ll try doing this on the same day or adjacent to the monthly
metrics meetings [2], since the team(s) will give a presentation on
their recent progress, which will help set some context that would
otherwise need to be covered in the quarterly review itself. This will
also create open opportunities for feedback and questions.
My goal is to do this in a manner where even though the quarterly
review meetings themselves are internal, the outcomes are captured as
meeting minutes and shared publicly, which is why I'm starting this
discussion on a public list as well. I've created a wiki page here
which we can use to discuss the concept further:
https://meta.wikimedia.org/wiki/Metrics_and_activities_meetings/Quarterly_r…
The internal review will, at minimum, include:
Sue Gardner
myself
Howie Fung
Team members and relevant director(s)
Designated minute-taker
So for example, for Visual Editor, the review team would be the Visual
Editor / Parsoid teams, Sue, me, Howie, Terry, and a minute-taker.
I imagine the structure of the review roughly as follows, with a
duration of about 2 1/2 hours divided into 25-30 minute blocks:
- Brief team intro and recap of team's activities through the quarter,
compared with goals
- Drill into goals and targets: Did we achieve what we said we would?
- Review of challenges, blockers and successes
- Discussion of proposed changes (e.g. resourcing, targets) and other
action items
- Buffer time, debriefing
Once again, the primary purpose of these reviews is to create improved
structures for internal accountability, escalation points in cases
where serious changes are necessary, and transparency to the world.
In addition to these priority initiatives, my recommendation would be
to conduct quarterly reviews for any activity that requires more than
a set amount of resources (people/dollars). These additional reviews
may however be conducted in a more lightweight manner and internally
to the departments. We’re slowly getting into that habit in
engineering.
As we pilot this process, the format of the high priority reviews can
help inform and support reviews across the organization.
Feedback and questions are appreciated.
All best,
Erik
[1] https://wikimediafoundation.org/wiki/Vote:Narrowing_Focus
[2] https://meta.wikimedia.org/wiki/Metrics_and_activities_meetings
--
Erik Möller
VP of Engineering and Product Development, Wikimedia Foundation
Support Free Knowledge: https://wikimediafoundation.org/wiki/Donate
Hi folks,
I'd be interested in hearing broader community opinions about the
extent to which WMF should sponsor non-profits purely to support work
that Wikimedia benefits from, even if it's not directed towards a
specific goal established in a grant agreement.
This comes up from time to time. One of the few historic precedents
I'm aware of is the $5,000 donation that WMF made to FreeNode in 2006
[1]. But there are of course many other organizations/communities that
the Wikimedia movement is indebted to.
On the software side, we have Ubuntu Linux (itself highly indebted to
Debian) / Apache / MariaDB / PHP / Varnish / ElasticSearch / memcached
/ Puppet / OpenStack / various libraries and many other dependencies [2],
infrastructure tools like ganglia, observium, icinga, etc. Some of
these projects have nonprofits that accept and seek sponsorship and
support, some don't.
One could easily expand well beyond the software we depend on
server-side to client-side open source applications used by our
community to create content: stuff like Inkscape, GIMP and LibreOffice
(used for diagrams). And there are other communities we depend on,
like OpenStreetMap.
So, should we steer clear of this type of sponsorship altogether
because it's a slippery slope, or should we try to come up with
evaluation criteria to consider it on a case-by-case basis (e.g. is
there a trustworthy non-profit that has a track record of
accomplishment and is in actual need of financial support)?
I could imagine a process with a fixed "giving back" annual budget
and a community nominations/review workflow. It'd be work to create
and I don't want to commit to that yet, but I would be interested to
hear opinions.
MariaDB specifically invited WMF to become a sponsor, and we're clearly highly dependent on them. But I don't think it makes sense for us to just write checks whenever someone asks for support and there's a justifiable need. However, if there's broad agreement that this is something Wikimedia should do more of, then I think it's worth developing more consistent sponsorship criteria.
Thanks,
Erik
[1] https://wikimediafoundation.org/wiki/Resolution:Freenode_Donation
[2] Cf. https://www.mediawiki.org/wiki/Upstream_projects
--
Erik Möller
VP of Engineering and Product Development, Wikimedia Foundation
I emailed mobile-l and wikitech-l about this, now I'm moving this
discussion to wikimedia-l. Here's the longer technical thread:
http://lists.wikimedia.org/pipermail/mobile-l/2014-April/006884.html
In summary, to show Wikipedia Zero banners for the correct mobile networks, we are planning, once per cellular-based app session, to log two pieces of data in a specialized logfile, deleting log entries older than 90 days:
1. MCC-MNC <http://en.wikipedia.org/wiki/Mobile_country_code> code (format
is ###-##), which denotes the mobile operator
2. Exit (gateway/proxy) IP address
* These data points would not be logged alongside the normal web access
logs.
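In code terms, the plan amounts to something like the sketch below. The field names and file path are hypothetical, for illustration only; this is not the actual implementation.

    # Hypothetical sketch: one record per cellular app session, kept apart
    # from the web access logs, pruned after 90 days. Names are made up.
    import json
    import time

    LOGFILE = "/var/log/zero/sessions.log"      # hypothetical path
    RETENTION = 90 * 24 * 3600                  # 90 days, in seconds

    def log_session(mcc_mnc, exit_ip):
        # mcc_mnc: "###-##" operator code; exit_ip: gateway/proxy IP
        with open(LOGFILE, "a") as f:
            f.write(json.dumps({"ts": int(time.time()),
                                "mcc_mnc": mcc_mnc,
                                "exit_ip": exit_ip}) + "\n")

    def prune():
        # Drop entries older than the 90-day retention window
        cutoff = time.time() - RETENTION
        with open(LOGFILE) as f:
            keep = [line for line in f if json.loads(line)["ts"] >= cutoff]
        with open(LOGFILE, "w") as f:
            f.writelines(keep)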
This information could be used to estimate rough demand for Wikipedia in potential Wikipedia Zero geos, although remediating the out-of-sync IP addresses on file for existing partners is the primary goal.
Internal review suggests this is in alignment with the privacy policy, and we wanted to see if there are other thoughts on this approach here on wikimedia-l.
-Adam
hi,
could wmf please extend the mediawiki software in the following way:
1. it should know "groups"
2. allow users to store an arbitrary number of groups with their profile
3. allow selecting one of the user's "groups" to attach to an edit when saving
4. add a checkbox "COI" to an edit, meaning "potential conflict of interest"
5. display and filter edits marked with COI in a different color in history
views
6. display and filter edits done for a group in a different color in
history views
7. allow members of a group to receive notifications for actions on the group page, or when the group is mentioned in an edit/comment/talk page.
reason:
currently it is quite cumbersome to participate as an organisation. it is also quite cumbersome for people to detect COI edits. the most prominent examples are employees of the wikimedia foundation, and GLAMs. users tend to create multiple accounts, and try to create "company accounts". the main reasons for this behaviour are (examples, but of course valid in general):
* have a feedback page / notification page for the swiss federal archive
for other users
* make clear that an edit is done private or as wmf employee
this then would allow the community to create new policies, e.g. the german community might cease using company accounts and switch over to this system. this proposal is purely technical. current policies can still be applied if people do not need anything else, e.g. wmf employees may continue to use "sue gardner (wmf)" accounts. points 4-6 could arguably be prototyped with today's change-tag machinery, as sketched below.
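as a sketch of what point 5 could look like with today's change tags: the "COI" tag below is hypothetical (an admin would first create it via Special:Tags, and edits can carry it via the "tags" parameter of action=edit), but the query parameters are real:

    # minimal sketch: list recent edits carrying a hypothetical "COI"
    # change tag, roughly point 5 of the proposal above.
    import requests

    API = "https://de.wikipedia.org/w/api.php"
    resp = requests.get(API, params={
        "action": "query", "list": "recentchanges", "format": "json",
        "rctag": "COI",                  # only edits tagged as COI
        "rcprop": "title|user|timestamp|comment",
        "rclimit": "50",
    }).json()

    for rc in resp["query"]["recentchanges"]:
        print(rc["timestamp"], rc["user"], rc["title"], rc.get("comment", ""))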
what do you think?
best regards,
rupert
-------------------
swissGLAMour, http://wikimedia.ch
Hey everyone :)
I'll be doing another Wikidata office hour on IRC. It will take place
on May 19th at 5PM UTC in #wikimedia-office. For your timezone please
see http://www.timeanddate.com/worldclock/fixedtime.html?msg=Wikidata+office+ho…
I'll give a status update and then answer whatever Wikidata-related questions you have. Hope to see many of you there.
Cheers
Lydia
--
Lydia Pintscher - http://about.me/lydia.pintscher
Product Manager for Wikidata
Wikimedia Deutschland e.V.
Tempelhofer Ufer 23-24
10963 Berlin
www.wikimedia.de
Wikimedia Deutschland - Society for the Promotion of Free Knowledge (Gesellschaft zur Förderung Freien Wissens) e. V. Registered in the register of associations of the Amtsgericht Berlin-Charlottenburg under number 23855 Nz. Recognized as a charitable organization by the Finanzamt für Körperschaften I Berlin, tax number 27/681/51985.
Dear all,
The next WMF metrics and activities meeting will take place on Thursday,
May 1, 2014 at 6 PM UTC (11 AM PDT). The IRC channel is #wikimedia-office
on irc.freenode.net and the meeting will be broadcast as a live YouTube
stream.
The current structure of the meeting is:
* Review of key metrics, including the monthly report card as well as specialized reports and analytics
* Review of financials
* Welcoming recent hires
* Brief presentations on recent projects, with a focus on highest priority
initiatives
* Update and Q&A with the Executive Director, if available
Please review
https://meta.wikimedia.org/wiki/Metrics_and_activities_meetings for further
information about how to participate.
We'll post the video recording publicly after the meeting.
Thank you,
Praveena
--
Praveena Maharaj
Executive Assistant to the VP of Engineering & Product Development
www.wikimedia.org