I am doing a PhD on an online civic participation project
(e-participation). Within my research, I have carried out a user
survey in which I asked how many people had ever edited or created a
page on a wiki. Now I would like to compare the results with the
overall rate of wiki editing/creation at the country level.
I've found some country-level statistics on Wikipedia Statistics (e.g.
3,000 editors of Wikipedia articles in Italy), but data for the UK and
France are not available, since Wikipedia provides statistics by
language, not by country. I'm thus looking for statistics on the UK and
France (but am also interested in alternative ways of measuring wiki
editing/creation in Sweden and Italy).
I would be grateful for any tips!
Sunny regards, Alina
European University Institute
For the last week or so I have been getting the following error when
trying to use the http://wikidashboard.appspot.com/ tool: "403: User account
expired. The page you requested is hosted by the Toolserver user
wiki_researcher, whose account has expired. Toolserver user accounts are
automatically expired if the user is inactive for over six months. To
prevent stale pages remaining accessible, we automatically block
requests to expired content. If you think you are receiving this page in
error, or you have a question, please contact the owner of this
document: wiki_researcher [at] toolserver [dot] org. (Please do not
contact Toolserver administrators about this problem, as we cannot fix
it---only the Toolserver account owner may renew their account.)"
I've tried contacting the owner, and sent an email to PARC
<http://en.wikipedia.org/wiki/PARC_%28company%29> (it's their project,
per the logo on the project page) through their web form, but so
far, nothing. Can anyone help me contact them?
The tool is useful not only for research (I've used it, and I am sure
others here have too); it is also one of the tools used by Good Article
reviewers (and linked from
Why we allow toolserver tools used by the community to expire in such a
confusing way is beyond me.
Piotr Konieczny, PhD
WMF researchers have agreed to participate in an office hour about WMF research projects and methodologies.
The currently scheduled participants are:
* Aaron Halfaker, Research Analyst (contractor)
* Jonathan Morgan, Research Strategist (contractor)
* Evan Rosen, Data Analytics Manager, Global Development
* Haitham Shammaa, Contribution Research Manager
* Dario Taraborelli, Senior Research Analyst, Strategy
We'll meet on IRC in #wikimedia-office on April 22 at 1800 UTC. Please join us.
I'm starting a new project, a wiki search engine. It uses MediaWiki,
Semantic MediaWiki and other minor extensions, and some tricky templates.
I remember Wikia Search and how it failed. It had the mini-article thingy
for the introduction, then a lot of links compiled by a crawler, and also
something similar to a social network.
My project idea (which still needs a cool name) is different. Although it
uses an introduction and images copied from Wikipedia, and some links from
the "External links" sections, those are only a starting point. The idea is
that the community adds, removes and orders the results for each term, and
creates redirects for similar terms to avoid duplicates.
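In MediaWiki, such deduplication redirects are just ordinary redirect
pages; as an illustration (the page titles here are made up), a results
page for a variant spelling would contain only:

```
#REDIRECT [[Shakira]]
```

so searches for the variant land on the one community-curated results page.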
Why this? I think that Google PageRank isn't enough. It is frequently
abused by link farms, SEOs and other people trying to push their websites
up the rankings. Search "Shakira" in Google, for example. You see 1) the
official site, 2) Wikipedia, 3) Twitter, 4) Facebook, then some videos,
some news, some images, and Myspace. That wastes three or more results on
obvious sites (WP, TW, FB).
The wiki search engine puts these sites at the top, together with an
introduction and related terms, leaving all the space below for less
obvious but interesting websites. Also, if you search for "semantic
queries" like "right-wing newspapers" in Google, you won't find actual
newspapers but people and sites discussing right-wing newspapers. Or
latex and LaTeX are shown in the same results pages. These issues can be
resolved with disambiguation result pages.
How do we choose which results go above or below? The rules are not fully
designed yet, but we could put official sites first, then .gov or .edu
domains, which are important ones, and then unofficial websites and
blogs, giving priority to the local language, etc., reaching consensus
along the way.
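As a rough sketch of those tiering rules (the function names, tiers and
the official-sites set are all hypothetical, not part of the actual
project), the ordering could look like:

```python
# Hypothetical sketch of the tiered ranking described above:
# official sites first, then .gov/.edu domains, then everything else.
from urllib.parse import urlparse

def tier(url, official_sites):
    """Return a sort key: lower tiers appear higher on the results page."""
    host = urlparse(url).netloc.lower()
    if host in official_sites:
        return 0                      # official site pinned to the top
    if host.endswith((".gov", ".edu")):
        return 1                      # institutional domains next
    return 2                          # unofficial websites and blogs last

def rank(urls, official_sites):
    # A stable sort keeps the community's hand-made order within each tier.
    return sorted(urls, key=lambda u: tier(u, official_sites))
```

Because Python's sort is stable, editors' hand-made ordering survives
within each tier, which fits the consensus-driven curation described above.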
We can control aggressive spam with spam blacklists, semi-protect or
protect highly visible pages, and use bots or tools to check changes.
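MediaWiki's SpamBlacklist extension, which a project like this could
reuse, takes one regular-expression fragment per line and rejects edits
that add matching external links; the domains below are invented for
illustration:

```
# Hypothetical entries; each line is a regex fragment
# matched against URLs added to pages.
\bcheap-pills-online\.example\b
\bbuy-followers\.example\b
```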
It obviously has a CC BY-SA license, and results can be exported. I
think this approach is the opposite of Google's today.
For weird queries like "Albert Einstein birthplace" we can redirect to
the most obvious results page (in this case Albert Einstein) using a
hand-made redirect or via software (a small change to MediaWiki).
You can check a rough alpha version here: http://www.todogratix.es (only
in Spanish for now, sorry), which I'm feeding with some bots.
I think that it is an interesting experiment. I'm open to your questions.
Emilio J. Rodríguez-Posada. E-mail: emijrp AT gmail DOT com
Pre-doctoral student at the University of Cádiz (Spain)
Projects: AVBOT <http://code.google.com/p/avbot/> |
| WikiEvidens <http://code.google.com/p/wikievidens/> |
| WikiTeam <http://code.google.com/p/wikiteam/>
Personal website: https://sites.google.com/site/emijrp/
Dearest research list!
1) I am looking for anything and everything about counting Wikipedia
contributions for attribution & tenure/promotion purposes and/or C.V.
enhancement, especially for academic faculty. This includes blog posts,
anecdotes, research, case studies...
2) I'm just starting a review on the subject, which is also going to
involve interviewing academics involved in Wikipedia about their thoughts,
hopes and dreams on the subject of getting 'credit' for their
contributions: so let me know if you're interested in being interviewed.
If there's interest maybe we can get together a little informal discussion
at WikiSym/Wikimania as well.
* I use this address for lists; send personal messages to phoebe.ayers <at>
IdeaLab is an incubator for people to share ideas to improve Wikimedia
projects and collaboratively develop them into plans and grant proposals.
I'm cross-posting to the developer and researcher lists because I
could imagine some of you following this path:
idea for research -> IdeaLab -> learning to use publicly available
data sources -> quick prototyping via User Metrics, replicated
databases in Labs, stats.wikimedia.org, and Limn -> idea for a bigger
project with research & editor engagement implications -> idea
refinement in IdeaLab -> grant proposal
Now is a good time to start so you can get a grant proposal in by the
30 September deadline, requesting up to USD 30,000. More information:
Hope this is helpful!
Engineering Community Manager