This paper (first reference) is the result of a class project I was part of
almost two years ago for CSCI 5417 Information Retrieval Systems. It builds
on a class project I did in CSCI 5832 Natural Language Processing, which
I presented at Wikimania '07. The project ran very late; we didn't send
the final paper in until the day before New Year's. As far as I recall, this
technical report was never really announced, so I thought it would be
interesting to look briefly at the results. The goal of the paper was to break
articles down into surface features and latent features and then use those to
study the rating system being used, predict article quality, and rank results
in a search engine. We used the [[random forests]] classifier, which allowed
us to analyze the contribution of each feature to performance by looking
directly at the weights that were assigned. While the surface analysis was
performed on the whole English Wikipedia, the latent analysis was performed
on the Simple English Wikipedia (it is more expensive to compute).

= Surface features =
* Readability measures are the single best predictor of quality that I have
found, as defined by the Wikipedia Editorial Team (WET). The
[[Automated Readability Index]], [[Gunning Fog Index]] and [[Flesch-Kincaid
Grade Level]] were the strongest predictors, followed by length of article
HTML, number of paragraphs, [[Flesch Reading Ease]], [[Smog Grading]], number
of internal links, [[Laesbarhedsindex Readability Formula]], number of words
and number of references. Weakly predictive were: number of "to be"s, number
of sentences, [[Coleman-Liau Index]], number of templates, PageRank, number
of external links and number of relative links. Not predictive (overall; see
the end of section 2 for the per-rating score breakdown): number of h2s or
h3s, number of conjunctions, number of images*, average word length, number
of h4s, number of prepositions, number of pronouns, number of interlanguage
links, average syllables per word, number of nominalizations, article age
(based on page id), proportion of questions, and average sentence length.
(A toy sketch of a few of these features, and of reading off the forest's
weights, follows at the end of this section.)
:* Number of images was actually by far the single strongest predictor of any
class, but only for Featured articles. Because it was so good at picking out
Featured articles, and somewhat good at picking out A and G articles, the
classifier was confused in so many cases that the overall contribution of
this feature to classification performance is zero.
:* Number of external links is strongly predictive of Featured articles.
:* The B class is highly distinctive. It has a strong "signature," with high
predictive value assigned to many features. The Featured class is also very
distinctive. F, B and S (Stop/Stub) contain the most information.
:* A is the least distinct class, not being very different from F or G.
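As promised above, here is a minimal sketch of a couple of the readability
features and of reading per-feature weights off a random forest. This is
purely illustrative: scikit-learn's RandomForestClassifier stands in for the
classifier we used, the syllable counter is a crude vowel-group heuristic,
and the articles and ratings are toy stand-ins, not our data.

import re
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def surface_features(text):
    """Compute a few of the surface features discussed above."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text) or ["x"]
    chars = sum(len(w) for w in words)
    # Crude syllable heuristic: count vowel groups, at least one per word.
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower()))) for w in words)
    w, s = len(words), sentences
    return [
        4.71 * (chars / w) + 0.5 * (w / s) - 21.43,          # Automated Readability Index
        0.39 * (w / s) + 11.8 * (syllables / w) - 15.59,     # Flesch-Kincaid Grade Level
        206.835 - 1.015 * (w / s) - 84.6 * (syllables / w),  # Flesch Reading Ease
        w,                                                   # number of words
        s,                                                   # number of sentences
    ]

names = ["ARI", "FK grade", "Flesch ease", "words", "sentences"]

# Toy stand-ins for (article text, WET rating) pairs.
articles = ["Short stub.", "A longer article. It has several sentences. They vary."] * 10
ratings = ["Stub", "B"] * 10

X = np.array([surface_features(t) for t in articles])
forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, ratings)

# Per-feature contribution, analogous to the weights discussed above.
for name, importance in sorted(zip(names, forest.feature_importances_), key=lambda p: -p[1]):
    print(f"{name:12s} {importance:.3f}")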
= Latent features =

The algorithm used for latent analysis is [[Latent Dirichlet Allocation]]
(LDA), an analysis of the occurrence of words in every document with respect
to the link structure of the encyclopedia ("concepts"). This part of the
analysis was done by CS PhD student Praful Mangalath. As an example of what
can be done with the result of this analysis, you provide a word (a search
query) such as "hippie". You can then look at the weight of every article for
the word hippie, pick the article with the largest weight, and look at its
link network. You can pick out the articles that this article links to and/or
which link to this article that are also weighted strongly for the word
hippie, while also contributing maximally to this article's "hippieness".
(A toy sketch of this ranking step appears after the list below.) We tried
this query in our system (LDA), in Google (site:en.wikipedia.org hippie), and
in the Simple English Wikipedia's Lucene search engine. The breakdown of
articles occurring in the top ten search results for this word for those
engines is:
* LDA only: [[Acid rock]], [[Aldeburgh Festival]], [[Anne Murray]], [[Carl
Radle]], [[Harry Nilsson]], [[Jack Kerouac]], [[Phil Spector]], [[Plastic
Ono Band]], [[Rock and Roll]], [[Salvador Allende]], [[Smothers Brothers]],
[[Stanley Kubrick]].
* Google only: [[Glam Rock]], [[South Park]].
* Simple only: [[African Americans]], [[Charles Manson]], [[Counterculture]],
[[Drug use]], [[Flower Power]], [[Nuclear weapons]], [[Phish]], [[Sexual
liberation]], [[Summer of Love]].
* LDA & Google & Simple: [[Hippie]], [[Human Be-in]], [[Students for a
Democratic Society]], [[Woodstock festival]].
* LDA & Google: [[Psychedelic Pop]].
* Google & Simple: [[Lysergic acid diethylamide]], [[Summer of Love]].

(See the paper for the articles produced for the keywords philosophy and
economics.)
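As a toy illustration of the ranking step described above (a simplification:
plain word-based LDA with gensim over a bag-of-words corpus, ignoring the
link-structure "concepts" used in the actual paper):

from gensim import corpora, models

# Toy corpus: each document is a bag of words; real articles would be tokenized text.
docs = [
    ["hippie", "counterculture", "psychedelic", "rock", "festival"],
    ["acid", "rock", "psychedelic", "music", "band"],
    ["nuclear", "weapons", "war", "treaty"],
    ["summer", "love", "hippie", "festival", "woodstock"],
]

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]
lda = models.LdaModel(corpus, id2word=dictionary, num_topics=2, random_state=0)

word_id = dictionary.token2id["hippie"]
topic_word = lda.get_topics()  # shape (num_topics, vocab_size): p(word | topic)

def query_weight(bow):
    # p(word | doc) = sum over topics of p(topic | doc) * p(word | topic)
    return sum(p * topic_word[t, word_id]
               for t, p in lda.get_document_topics(bow, minimum_probability=0.0))

ranking = sorted(range(len(docs)), key=lambda i: -query_weight(corpus[i]))
print("documents ranked for 'hippie':", ranking)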
= Discussion / Conclusion =

* The results of the latent analysis are largely a matter of interpretation.
But what is interesting is that the LDA features predict the WET
ratings of quality just as well as the surface-level features. Both feature
sets (surface and latent) pull out almost all of the information that the
rating system bears.
* The rating system devised by the WET is not distinctive. You can best tell
the difference between Featured, A and Good articles (grouped together) vs.
B articles. Featured, A and Good articles are also quite distinctive
(Figure 1). Note that in this study we didn't look at Starts and Stubs, but
in the earlier paper we did.
:* This is interesting when compared to this recent entry on the YouTube
blog, "Five Stars Dominate Ratings":
http://youtube-global.blogspot.com/2009/09/five-stars-dominate-ratings.html…
I think a sane, well-researched (with actual subjects) rating system is well
within the purview of the Usability Initiative. Helping people find and
create good content is what Wikipedia is all about. Having a solid rating
system allows you to reorganize the user interface, the Wikipedia namespace,
and the main namespace around good content and bad content as needed. If you
don't have a solid, information-bearing rating system you don't know what
good content really is (really bad content is easy to spot).
:* My Wikimania talk was all about gathering data from people about articles
and using it to train machines to automatically pick out good content. You
ask people questions along dimensions that make sense to people, and give
the machine access to other surface features (such as a statistical measure
of readability, or length) and latent features (such as can be derived from
document word occurrence and encyclopedia link structure). I referenced page
262 of Zen and the Art of Motorcycle Maintenance to give an example of the
kind of qualitative features I would ask people about. Which features end up
bearing information really remains to be tested in "the lab". Each word here
is an example dimension of quality: we have "*unity, vividness, authority,
economy, sensitivity, clarity, emphasis, flow, suspense, brilliance,
precision, proportion, depth and so on.*" You then use surface and latent
features to predict these values for all articles. You can also say that
when a person rates an article as high on the x scale, they also mean that
it has this much of these surface and these latent features.
= References =
- DeHoust, C., Mangalath, P., & Mingus, B. (2008). *Improving search in
Wikipedia through quality and concept discovery*. Technical Report.
PDF: <http://grey.colorado.edu/mediawiki/sites/mingus/images/6/68/DeHoustMangalat…>
- Rassbach, L., Mingus, B., & Blackford, T. (2007). *Exploring the
feasibility of automatically rating online article quality*. Technical
Report. PDF: <http://grey.colorado.edu/mediawiki/sites/mingus/images/d/d3/RassbachPincock…>
Hoi,
I have asked and received permission to forward to you all this most
excellent bit of news.
The LINGUIST List is a most excellent resource for people interested in the
field of linguistics. As I mentioned some time ago, they have had a funding
drive in which they asked for a certain amount of money within a given number
of days; in return they would run a project on Wikipedia to learn what needs
doing to get better coverage for the field of linguistics. What you will read
in this mail is that the whole community of linguists is being asked to
cooperate. I am really thrilled, as it will also get more linguists
interested in what we do. My hope is that a fraction of them will be
interested in the languages they care for and will help make those more
relevant. As a member of the "language prevention committee", I would love to
get more knowledgeable people involved in our smaller projects. If it means
that we get more requests for new projects, we will happily feel embarrassed
by all the new projects we have to approve because of the quality of the
Incubator content and the quality of the linguistic arguments for why we
should approve yet another language :)
NB: Is this not a really clever way of raising money? Give us this much in
this time frame and we will then do this as a bonus...
Thanks,
GerardM
---------- Forwarded message ----------
From: LINGUIST Network <linguist(a)linguistlist.org>
Date: Jun 18, 2007 6:53 PM
Subject: 18.1831, All: Call for Participation: Wikipedia Volunteers
To: LINGUIST(a)listserv.linguistlist.org
LINGUIST List: Vol-18-1831. Mon Jun 18 2007. ISSN: 1068 - 4875.
Subject: 18.1831, All: Call for Participation: Wikipedia Volunteers
Moderators: Anthony Aristar, Eastern Michigan U <aristar(a)linguistlist.org>
Helen Aristar-Dry, Eastern Michigan U <hdry(a)linguistlist.org>
Reviews: Laura Welcher, Rosetta Project
<reviews(a)linguistlist.org>
Homepage: http://linguistlist.org/
The LINGUIST List is funded by Eastern Michigan University,
and donations from subscribers and publishers.
Editor for this issue: Ann Sawyer <sawyer(a)linguistlist.org>
================================================================
To post to LINGUIST, use our convenient web form at
http://linguistlist.org/LL/posttolinguist.html
===========================Directory==============================
1)
Date: 18-Jun-2007
From: Hannah Morales < hannah(a)linguistlist.org >
Subject: Wikipedia Volunteers
-------------------------Message 1 ----------------------------------
Date: Mon, 18 Jun 2007 12:49:35
From: Hannah Morales < hannah(a)linguistlist.org >
Subject: Wikipedia Volunteers
Dear subscribers,
As you may recall, one of our Fund Drive 2007 campaigns was called the
"Wikipedia Update Vote." We asked our viewers to consider earmarking their
donations to organize an update project on linguistics entries in the
English-language Wikipedia. You can find more background information on this
at:
http://linguistlist.org/donation/fund-drive2007/wikipedia/index.cfm.
The speed with which we met our goal, thanks to the interest and generosity
of our readers, was a sure sign that the linguistics community was
enthusiastic about the idea. Now that summer is upon us, and some of you may
have a bit more leisure time, we are hoping that you will be able to help us
get started on the Wikipedia project. The LINGUIST List's role in this
project is a purely organizational one. We will:
*Help, with your input, to identify major gaps in the Wikipedia materials or
pages that need improvement;
*Compile a list of linguistics pages that Wikipedia editors have identified
as "in need of attention from an expert on the subject" or "does not cite
any references or sources," etc.;
*Send out periodic calls for volunteer contributors on specific topics or
articles;
*Provide simple instructions on how to upload your entries into Wikipedia;
*Keep track of our project Wikipedians;
*Keep track of revisions and new entries;
*Work with the Wikimedia Foundation to publicize the linguistics community's
efforts.
We hope you are as enthusiastic about this effort as we are. Just to help us
all get started looking at Wikipedia more critically, and to easily identify
an area needing improvement, we suggest that you take a look at the List of
Linguists page at:
http://en.wikipedia.org/wiki/List_of_linguists.
Many people are not listed there; others need to have more facts and
information added. If you would like to participate in this exciting update
effort, please respond by sending an email to LINGUIST Editor Hannah Morales
at hannah(a)linguistlist.org, suggesting what your role might be or which
linguistics entries you feel should be updated or added. Some linguists who
saw our campaign on the Internet have already written us with specific
suggestions, which we will share with you soon.
This update project will take major time and effort on all our parts. The
end result will be a much richer internet resource of information on the
breadth and depth of the field of linguistics. Our efforts should also
stimulate prospective students to consider studying linguistics, and educate
a wider public on what we do. Please consider participating.
Sincerely,
Hannah Morales
Editor, Wikipedia Update Project
Linguistic Field(s): Not Applicable
-----------------------------------------------------------
LINGUIST List: Vol-18-1831
Hullo everyone.
I was asked by a volunteer for help getting stats on the gender gap in
content on a certain Wikipedia, and came up with simple Wikidata Query
Service[1] queries that pulled the total number of articles on a given
Wikipedia about men and about women, to calculate *the proportion of
articles about women out of all articles about humans*.
Then I was curious about how that wiki compared to other wikis, so I ran
the queries on a bunch of languages, and gathered the results into a table,
here:
https://meta.wikimedia.org/wiki/User:Ijon/Content_gap
(please see the *caveat* there.)
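For those curious, a query of roughly this shape, wrapped in Python, can
reproduce one row of the table (this is a sketch of the kind of query I
mean, not necessarily the exact one; the Hebrew Wikipedia URL is just an
example):

import requests

# Counts biographies by gender with a sitelink to one Wikipedia.
# (May be slow or time out on the very largest wikis.)
QUERY = """
SELECT ?gender (COUNT(DISTINCT ?person) AS ?count) WHERE {
  ?person wdt:P31 wd:Q5 ;      # instance of: human
          wdt:P21 ?gender .    # sex or gender
  ?article schema:about ?person ;
           schema:isPartOf <https://he.wikipedia.org/> .
}
GROUP BY ?gender
"""

r = requests.get(
    "https://query.wikidata.org/sparql",
    params={"query": QUERY, "format": "json"},
    headers={"User-Agent": "content-gap-stats-sketch/0.1"},
)
for row in r.json()["results"]["bindings"]:
    print(row["gender"]["value"], row["count"]["value"])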
I don't have time to fully write up everything I find interesting in those
results, but I will quickly point out the following:
1. The Nepali statistic is simply astonishing! There must be a story
there. I'm keen on learning more about this, if anyone can shed light.
2. Evidently, ~13%-17% is a robust average for the proportion of
articles about women among all biographies.
3. Among the top 10 largest wikis, Japanese is the least imbalanced. Good
job, Japanese Wikipedians! I wonder if you have a good sense of what
drives this relatively better balance. (My instinctive guess is pop culture
coverage.)
4. Among the top 10 largest wikis, Russian is the most imbalanced.
5. I intend to re-generate these stats every two months or so, to
eventually have some sense of trends and changes.
6. Your efforts, particularly on small-to-medium wikis, can really make a
dent in these numbers! For example, it seems I am personally
responsible[2] for almost 1% of the coverage of women on Hebrew Wikipedia!
:)
7. I encourage you to share these numbers with your communities. Perhaps
you'd like to overtake the wiki just above yours? :)
8. I'm happy to add additional languages to the table, by request. Or you
can do it yourself, too. :)
A.
[1] https://query.wikidata.org/
[2] Yay #100wikidays :) https://meta.wikimedia.org/wiki/100wikidays
--
Asaf Bartov
Wikimedia Foundation <http://www.wikimediafoundation.org>
Imagine a world in which every single human being can freely share in the
sum of all knowledge. Help us make it a reality!
https://donate.wikimedia.org
Being put together by Eliezer Yudkowsky of LessWrong. Content is
CC BY-SA 3.0; I don't know about the software.
https://arbital.com/p/arbital_ambitions/
Rather than the "encyclopedia" approach, it tries to be more
pedagogical, teaching the reader at their level.
Analysis from a sometime Yudkowsky critic on Tumblr:
http://nostalgebraist.tumblr.com/post/140995096534/a-year-ago-i-remember-be…
(there's a pile more comments linked from the notes on that post,
mostly from quasi-fans; I have an acerbic comment in there, but you
should look at the site yourself first.)
No idea if this will go anywhere, but it might be of interest; new
approaches generally are. They started in December, first publicised
it a week ago and have been scaling up. On the first day it collapsed
under load from a Facebook post announcement ... so maybe hold off before
announcing it everywhere :-)
- d.
Hello, everyone.
(this is an announcement in my capacity as a volunteer.)
Inspired by a lightning talk at the recent CEE Meeting[1] by our colleague
Lars Aronsson, I made a little command-line tool to automate batch
recording of pronunciations of words by native speakers, for uploading to
Commons and integration into Wiktionary etc. It is called *pronuncify*, is
written in Ruby and uses the sox(1) tool, and should work on any modern
Linux (and possibly OS X) machine. It is available here[2], with
instructions.
I was then asked about a Windows version, and agreed to attempt one. This
version is called *pronuncify.net <http://pronuncify.net>*, and is a .NET
GUI ("gooey") version of the same tool, with slightly different functions.
It is available here[3], with instructions.
Both tools require word-list files in plaintext, with one word (or phrase)
per line. Both tools name the files according to the standard established
in [[commons:Category:Pronunciation]], and convert them to Ogg Vorbis for
you, so they are ready to upload.
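For the curious, the core loop both tools automate looks roughly like the
following sketch (an illustration under assumptions, not the actual
pronuncify code; it assumes sox's rec(1) is installed, a fixed three-second
recording window, and an "xx-word.ogg" naming pattern):

import pathlib
import subprocess

LANG = "he"  # assumed language prefix for the Commons naming convention

words = [w.strip()
         for w in pathlib.Path("wordlist.txt").read_text(encoding="utf-8").splitlines()
         if w.strip()]

for word in words:
    wav = f"{LANG}-{word}.wav"
    ogg = f"{LANG}-{word}.ogg"
    input(f"Press Enter, then pronounce: {word}")
    subprocess.run(["rec", wav, "trim", "0", "3"], check=True)  # sox: record 3 seconds
    subprocess.run(["sox", wav, ogg], check=True)               # convert to Ogg Vorbis
    print(f"wrote {ogg}")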
In the future, I may add OAuth-based direct uploading to Commons. If you
run into difficulties, please file issues on GitHub, for the appropriate
tool. Feedback is welcome.
A.
[1]
https://meta.wikimedia.org/wiki/Wikimedia_CEE_Meeting_2015/Programme/Lightn…
[2] https://github.com/abartov/pronuncify
[3] https://github.com/abartov/Pronuncify.net
--
Asaf Bartov
Hi, how about a Wikipedia about objects?
Instead of generic articles on, for example, "Ballpoint pen" or "Bic
Cristal", there would be "Ballpoint pen Bic Cristal 2014".
Doing this for millions of objects would give people an open,
free, universal and central place to refer to specific objects.
*Some possible applications:*
   - Creating neutral and standard lists:
   Nowadays if anyone creates, for example, a tutorial for building
   something (DIY projects, recipes, ...) they have to link every item to a
   commercial or non-neutral website, which could change its URL in the
   future or redirect it to ads or whatever.
   Lists could be created on external webpages linking to the Wikimedia
   objects page, and/or could be created as category pages in Wikipedia. For
   example, currently the https://en.wikipedia.org/wiki/Car_of_the_Year
   article lists cars which won the COTY award, but it links not to the
   specific car (Audi A3 Hatchback 2012 - Present) but to the generic series
   (Audi A3).
   The good thing at this point is that to start creating object
   lists only the item name is necessary; no infoboxes or descriptions needed.
   - Universal repository for inventories:
   Lots of businesses fill their inventories again and again with the same
   data ("cardboard box 50x30x15", "stepper motor NEMA 17", ...). They
   should be able to import this data from an open website, with the
   corresponding info like GTIN, SKU, barcode, and, in the future, weight,
   size, ...
   - Encouraging recycling and reuse:
   Imagine if we used Wikidata properties (
   https://m.wikidata.org/wiki/Wikidata:List_of_properties/Generic) like
   "has part" and "part of"; people would find other uses for objects, or
   discover where to find them.
   - Social activism and Corporate Social Responsibility (CSR):
   Companies have info and metrics about their customers (habits, location,
   ...); why shouldn't customers have info about companies' products? Who
   manufactures what? Which products have a good carbon footprint
   <https://en.wikipedia.org/wiki/Carbon_footprint>? Which products have
   been recalled because of some problem? Which are Fair Trade? This could
   also move companies to do better.
*Very rough roadmap*:
   1. At the very beginning, using Wikidata infrastructure, objects would
   only have common info like "name", "image", "related links"
   (datasheets?), GTIN.
   First use cases could be making lists or grouping objects by categories.
   2. Step by step, new fields could be added, like "manufacturer", "tags",
   ...
   3. A separate website could be created. wikiobject.org isn't available,
   so the URL could be something like objects.wikipedia.org.
   4. In the long term, in order to exploit all the possibilities of this
   project, more complex fields and relations would have to be managed, for
   example "fridges with energy class A+++ and width less than 80 cm". This
   would be easy if everything were always similar, but nothing is further
   from reality (see the hypothetical sketch below).
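To make item 4 concrete, here is a hypothetical sketch of how such a query
could look if objects were items with Wikidata-style properties. The class
and energy-class identifiers below are invented placeholders; only P2049
(width) is a real Wikidata property:

import requests

QUERY = """
SELECT ?item ?itemLabel WHERE {
  ?item wdt:P31 wd:Q999999999 .   # placeholder: instance of "refrigerator model"
  ?item wdt:P9999999 "A+++" .     # placeholder: energy efficiency class
  ?item wdt:P2049 ?width .        # width (real property; amount assumed in cm)
  FILTER(?width < 80)
  SERVICE wikibase:label { bd:serviceParam wikibase:language "en". }
}
"""
r = requests.get("https://query.wikidata.org/sparql",
                 params={"query": QUERY, "format": "json"})
print(r.json()["results"]["bindings"])  # empty today; the point is the query shape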
A friend and I tried to build a demo version on a home-made
Apache Cassandra cluster four years ago, but we didn't have enough resources
and knowledge for that.
*Funding*
In my humble opinion, the problem with Wikipedia funding is that most of
its users don't see culture as a need (sorry for that, I am a sporadic
donor). In the Wikiobjects case I think it could be different.
If part of companies' business relied on this project, companies would be
very inclined to donate to improve performance, usability, etc., maybe
similar to what happens with Linux.
Where would this need come from? Data needed for some software to run,
product visibility, customer requests, etc. No advertisement needed; it
could become a need and a standard.
I truly believe that the world needs something like this, and that the right
people to do it, to guarantee openness and independence, are you.
Thanks for your time and attention,
greetings
All,
TL;DR: The Editing Department is working to make the content editing
software better. The big work areas are improving the visual editor and
editing wikitext. We will bring in a wikitext mode inside the visual editor
for simpler, faster switching. We will experiment with prompts to give
users ideas for what they might want to make as they edit. We will do other
things as well. Your feedback is welcome.
I thought it would be helpful to send an update about editing software.
It's been over a year since my last, and things change (and it's easy to
lose track). We set out some higher-level objectives for Editing in the
Wikimedia Foundation's annual plan for the coming financial year.[0] This
gives a little more detail on that, with particular emphasis on the team
working on content editing tools directly. There's also a brief, more
feature-focussed roadmap available on MediaWiki.org if you are
interested.[1]
Status
In Editing, we're continuing to work on our commission from the 2010
community strategy[2] to create a rich visual editor which makes it
possible to edit all our content, and participate in our workflows, without
knowing or having to learn wikitext. This is a work in progress; as with
all our improvements to the software, we will never be "done", and
hopefully you notice improvements over time. Each week, new features,
improvements, and bug fixes are released, often led, altered or supported
by our volunteer developers and community pioneers; my thanks to you all.
We are now roughly five years into this visual editor work, and have made
good progress on a credible content editor for many users' workflows,
helping editors spend more time on what they're editing instead of how.
First and foremost, not having to think about the vagaries of wikitext, and
instead being able to focus on the content of their writing, is something
that many new and experienced volunteers alike have mentioned they
appreciate. The automatic citations tool makes adding new references to
websites or DOIs much quicker and more thorough, improving the quality of
the content. The visual media searching tool makes it simple to find more of
the great images and other media on Commons and add them to a page. Visual
table editing makes changes to tables, like moving columns or parts of
tables around, much easier than in wikitext, freeing our volunteers' time to
focus on making the wikis better.
The visual editor supports many (but not yet all) of our content languages,
and thanks to community support and engagement the editor is available by
default on over 235 Wikipedias (and for opt-in use on the remaining 55),
including almost all of our largest Wikipedias. It is on by default for
logged-out users and new accounts on 233 of these, and on for new accounts
(but not yet for logged-out users) on two, English and Spanish. As of this
week, this now includes representatives from each of the "CJK" language
group, with four different Chinese script languages (Classical, Cantonese
and Wu, as well as Min Nan), Korean and Japanese. We're currently working
our way through each of the remaining communities asking them if it's OK to
switch; the next groups will be the thirteen Arabic script Wikipedias and
the twenty-three Indic Wikipedias. You can see specific details at the
rollout grid if you're interested.[3]
We have recently been working with the non-Wikipedia sister projects. As
you might imagine, each project has different needs, workflows and
concerns, and it's important to us that we ensure the tools we provide are
tweaked as appropriate to support, not undermine, those requirements to the
extent justifiable by demand. Per community request, the visual editor is
already available to all users on several different sister projects, but we
think there is more to do for some before we encourage this more widely.
Recently, we have been working with the communities on the Wikivoyages,
which are quite similar to the Wikipedias in their needs from the visual
editor; our thanks for the patience and assistance of the Wikivoyagers.
We're also
working with User:tpt and other volunteers who create and maintain the
software used by Wikisources to adapt the visual editor to work with those
features; our thanks to them, and to Wikisourcerers more widely.
Core and maintenance work
Despite this progress, there are still several areas in which the core
functionality of the editing software needs extensions, improvements and
fixes. In many places within the visual editor software we have to work
around browsers' bugs, missing features and idiosyncrasies, and nowhere is
that more problematic than in the critical areas of typing, cursoring, and
related language support. There continue to be irritating, occasionally
serious bugs related to these, for which we continue to partner with
browser vendors and experts around the Web to try to develop workarounds
and improvements.
Another important area related to language support is coming up with a
solution for the nine languages in the Wikimedia family which use content
language variants, the biggest amongst them Chinese, which poses some very
large challenges as it is fundamentally incompatible with a visual editing
method. If you're interested in discussing how this might work, we would
love to hear what possible options you think would work out, even more so if
you wish to work on support for this.
The performance of the software is not yet as good as we would wish, in
terms of speed, network use, and load on users' browsers. This is a
usability issue for all users, but it is especially critical for users of
lower-powered devices (like older machines) and more powerful but
resource-limited ones (like most mobile phones and tablets), where in some
cases the software can be not merely awkward to use, which is disrespectful
of volunteers' time, but prohibitive, excluding community members from
volunteering at all. We have several strategies lined up to tackle this
basket of issues, from editing only small parts of a document at once –
sometimes called "sentence-level editing" – to loading smaller bits of the
editor at first and then larger, less-used bits as needed, whilst retaining
a consistent interface free of the changes that can be confusing and
distracting. More widely, working to let the software include as many
volunteers as possible in the community if they wish to join also covers
accessibility in all its forms, making sure editors who have learning
impairments or physical disabilities are supported as much as possible.
Many of our communities have put in significant effort over the past
fifteen years to come up with specialised workflows on their wikis.
Sometimes these efforts have involved complicated extensions and gadgets,
like the use of the "inputbox" button to start a new article based on a
template, as used on several wikis. Others provide additional tools inside
the wikitext editor, like the English Wikipedia's tool to automatically
create references based on a link. Some of these we provide inside the
visual editor now, but many are not yet there, and we at the Foundation can
never scale to provide that individual attention for each of our hundreds of
wikis. For the visual editor to be successful, pleasant and
as un-confusing as possible, it is vital that we help communities provide
gadgets as appropriate, and duplicate or extend the integration with the
various other editors. We look forward to helping you help others.
A big technical change we're hoping to achieve this year, as we set out
directly in the annual plan,[0] is to re-engineer how MediaWiki supports
content. We want to allow multiple "parts" of content, of different types,
to be stored as revisions of a single page. This is a much-needed feature
already, most obviously with file pages – each file's upload history is
shown separately from its description page, and videos' subtitles are stored
in a different namespace rather than shown on the page. This is also an
issue in other areas, making workflows more complicated; for example,
templates keep their documentation on a separate sub-page, so improving a
template and documenting how it now works takes two edits instead of one on
a combined page. With Wikimedia Deutschland's work on moving Commons' file
meta-data into a proper structure linked to Wikidata, addressing this need
is now acute. We look forward to driving the technical discussion and
implementation of multi-part content revisions in the back-end,[4] and we
have some hopes for how it can be used to do new things, which we discuss
below.
Finally with regards to core work, our intent right from the beginning of
our work on the visual editor has been to operate as the core 'platform'
for all kinds of editing in MediaWiki, and not just to be another single
editor. Depending on how you count them, there are currently six pieces of
editor software beyond the visual editor installed on most of our wikis,
which gives us six different interfaces by which to confuse users, six
different sets of bugs to track down, and six different places where
features can interact in unexpected ways and which need to be tested.[5]
Our goal is to gradually reduce the number of pieces of software, replacing
them with equivalents based on the single platform. This has already been
done, for example, in Flow, which uses the visual editor for rich content
editing rather than re-inventing its own, and we are planning to work with our
colleagues in the Language Engineering team to do the same for the Content
Translation tool. We are experimenting with providing a more modern
wikitext editor which can provide a consistent experience between the
visual and wikitext editors, and between desktop and mobile; there's a
video of our work to date on this, still incomplete, which some of you may
have seen.[6] Naturally, any new wikitext editor would have to be better for
users, not just change for its own sake, to be worth switching to, so
we're cautious about how quickly we would introduce this; certainly, a beta
feature test of the initial version for the intrepid will be necessary
before we make any plans as to wider availability.
Feature work
As well as our core work, it is important to us that we also spend some of
our time exploring ways in which new features can improve the experience
of the site for users, helping them improve quality, breadth and depth of
content more effectively and efficiently. Not all of the ideas below are
ones on which we're actively working right now, but we should have some
progress this coming year on at least most of them.
An idea I'm quite excited about in terms of possibilities is providing a
system in the visual and wikitext editors that can prompt users as they
edit. The range of different kinds of edit, from copy editing and improving
references to a full re-working of whole sets of pages, means that
newbies can get lost in knowing where to start. There are lots of different
kinds of improvements that we could provide, from simple static ones like
"this article isn't illustrated yet" to very complex and specific ones like
"this article's main wikiproject is about the USA, so wants you to write in
American English". This work is aimed at reducing the burden on experienced
users when they review new editors' changes, letting each wiki configure
hints appropriate to that community. We also intend for these experiments
to improve the "on-boarding" experience for new users, helping them learn
what is wanted and valued by their wiki's community, and what makes for
more constructive edits.
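Purely to make the idea concrete, per-wiki hint configuration could look
something like the sketch below (a hypothetical illustration only; none of
these names reflect a real or planned MediaWiki format):

# Hypothetical sketch of per-wiki edit hints; not a real or planned format.
HINTS = [
    # (predicate over page properties, hint text shown to the editor)
    (lambda page: page["image_count"] == 0,
     "This article isn't illustrated yet."),
    (lambda page: page["reference_count"] < 3,
     "Consider adding references to support the article's claims."),
]

def hints_for(page):
    """Return the hints whose predicate matches the page being edited."""
    return [text for matches, text in HINTS if matches(page)]

print(hints_for({"image_count": 0, "reference_count": 5}))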
Once we have multi-part content streams (which I mentioned above as core
work), there are several possible feature areas we think are worth
considering.
A big one is that we think that there's a lot of potential in storing edits
in HTML DOM as well as in wikitext. Firstly, this should allow us to help
MediaWiki understand changes in edits more like the way that humans do.
This would allow us to provide neat features like visual diffs and animated
histories of pages. Showing clearly who wrote which parts of an article, or
what parts of the article have been changed most recently, is not a new
idea, but it hasn't been practical to implement at scale. It would be
fascinating to see if this tool could help readers better understand the
staleness or volatility of articles in practice.
More importantly, it should allow much better automatic handling of edit
conflict situations, and so reduce the occurrence of edit conflicts that
need manual correction. Theoretically, it could let us remove edit
conflicts entirely, though this would mean making some decisions about how
edits work which we may decide are worse long-term than having manual
resolution of edit conflicts; we're not planning to make a decision on that
until we know more.
Storing pages in DOM could also allow smart partial document saving,
splitting up your bigger edits into different chunks, each of which you can
save as you go, making smaller, simpler edits. This could also let us reduce
edit conflicts by prompting people to save bits as they edit, and pushing
those new versions "live" into the editors of others editing at the same
time.
The final thing I'll mention that DOM edits could do is allow DOM-based
annotations on pages. With these, citations could be 'applied' to bits of
the article showing which statements are (and aren't) backed up with a
reference. Discussions could refer to a specific image, sentence or word
choice to let editors have deeper, faster conversations, and understand
when they're editing a potentially divisive section. Illustrations like
diagrams and maps could highlight an area.
Another thing we're keen to explore with content streams is improving how
page meta-data is edited, centralising the data about a page's name,
protection level, whether it should show a table of contents, what pages
redirect to it, and so on. Each of these examples is currently edited in a
different place and with a different tool. We think it could help a lot to
provide these controls together, editing a new "part" of the page alongside
the wikitext block. Note that we're not planning on removing the existing
mechanisms, which each work well enough – this would be an additional tool,
at least at first.
A final item worth mentioning, because it comes up a lot as a
technical/editor wishlist item from some editors and developers alike, is
real-time collaborative editing. I wrote some details a couple of years ago
about how this, especially the "full-throttle" collaboration system (like
Etherpad or Google Docs, where there can be multiple users at the same time
each with their own cursor) is a huge problem, not just a technical one but
critically a social one.[7] Despite this, I do hear quite often from people
about how this would be very helpful, for mentoring new users and those
doing something with which they're unfamiliar, and for content editing
collaborating, like for edit-a-thons, breaking news articles where lots of
changes are wanted at the same time, and themed collaborations of the day
or similar. I'm keeping an open mind as to whether we will ever do this,
but it's not something we're worrying about right now.
Summary
As you can see if you have made it this far, there are a lot of different
things we're working on in the department. I'm hopeful that the improvements
we have made, and will continue to make, are making the editing lives of
those reading this a bit easier.
I'm thankful for all the support we get from across the communities, be it
in the form of clear suggestions, complaints and proposals, technical
advice and volunteering, or anything else. If you're technical, and a
current or prospective volunteer developer interested in working on some of
these areas, we would love to help you.
I'll be at Wikimania this year. As always, I'll be happy to talk about
anything in this update — or missing from it — in person there, online on
Phabricator or IRC, on-wiki, or wherever else. Your thoughts and responses
are what guide us, and what makes it worth doing. Hope this was interesting!
Links
[0] —
https://meta.wikimedia.org/wiki/Wikimedia_Foundation_Annual_Plan/2016-2017/…
[1] — https://www.mediawiki.org/wiki/VisualEditor/Roadmap
[2] —
https://strategy.wikimedia.org/wiki/Wikimedia_Movement_Strategic_Plan_Summa…
which has been gradually developed into
https://www.mediawiki.org/wiki/Feature_map for possible Product work.
[3] — https://www.mediawiki.org/wiki/VisualEditor/Rollouts
[4] — https://phabricator.wikimedia.org/T107595
[5] — The plain wikitext editor (with the dark blue buttons toolbar),
WikiEditor (with the light blue/grey toolbar), CodeEditor (with the syntax
highlighting, based on WikiEditor), Flow's discussion editor, and
LiquidThreads's editor (mostly not seen now).
[6] — https://www.youtube.com/watch?v=jgd2ZHOZGBE
[7] —
https://lists.wikimedia.org/pipermail/wiki-research-l/2014-September/003807…
J.
--
James D. Forrester
Lead Product Manager, Editing
Wikimedia Foundation, Inc.
jforrester(a)wikimedia.org | @jdforrester
Hoi,
At Wikimania, two Wikipedians of the Year were named. The article for one
of them, and the data at Wikidata, are pathetic.
The article is a one-liner stub. The Wikidata item had no statements, and I
added the few that were minimally needed.
I find it incredible that we take no care of our own, even when they are
obviously notable.
Thanks,
GerardM
PS: I have not looked at any of the others, and I would welcome it if they
got some proper attention.
http://ultimategerardm.blogspot.nl/2016/06/wikipedia-wikipedian-of-year-ros…