Pursuant to prior discussions about the need for a research
policy on Wikipedia, WikiProject Research is drafting a
policy regarding the recruitment of Wikipedia users to
participate in studies.
At this time, we have a proposed policy, and an accompanying
group that would facilitate recruitment of subjects in much
the same way that the Bot Approvals Group approves bots.
The policy proposal can be found at:
The Subject Recruitment Approvals Group mentioned in the proposal
is being described at:
Before we move forward with seeking approval from the Wikipedia
community, we would like additional input about the proposal,
and would welcome additional help improving it.
Also, please consider participating in WikiProject Research at:
University of Minnesota
I am doing a PhD on online civic participation
(e-participation). As part of my research, I carried out a user
survey in which I asked how many people had ever edited or created a
page on a wiki. Now I would like to compare the results with the
overall rate of wiki editing/creation at the country level.
I've found some country-level statistics on Wikipedia Statistics (e.g.
3,000 editors of Wikipedia articles in Italy), but data for the UK and
France are not available, since Wikipedia provides statistics by
language, not by country. I'm thus looking for statistics on the UK and
France (but am also interested in alternative ways of measuring wiki
editing/creation in Sweden and Italy).
I would be grateful for any tips!
Sunny regards, Alina
European University Institute
I'm starting a new project, a wiki search engine. It uses MediaWiki,
Semantic MediaWiki and other minor extensions, and some tricky templates.
I remember Wikia Search and how it failed. It had the mini-article thingy
for the introduction, and then a lot of links compiled by a crawler. Also
something similar to a social network.
My project idea (which still needs a cool name) is different. Although it
uses an introduction and images copied from Wikipedia, and some links from
the "External links" sections, that is only a start. The purpose is for the
community to add, remove and order the results for each term, and to create
redirects for similar terms to avoid duplicates.
Why this? I think that Google PageRank isn't enough. It is frequently
abused by link farms, SEO spammers and other people trying to push their
websites to the top.
Search "Shakira" in Google, for example. You see 1) the official site, 2)
Wikipedia, 3) Twitter, 4) Facebook, then some videos, some news, some images,
Myspace. That wastes three or more results on obviously good sites (WP, TW, FB).
The wiki search engine puts these sites at the top, along with an
introduction and related terms, leaving all the space below to
not-so-obvious but interesting websites. Also, if you search for "semantic
queries" like "right-wing newspapers" in Google, you won't find actual
newspapers but "people and sites discussing right-wing newspapers". Or
latex and LaTeX are shown on the same results page. These issues can be
resolved with disambiguation result pages.
How do we choose which results go above or below? The rules are not fully
designed yet, but we can put official sites first, then .gov
or .edu domains, which are important ones, and later unofficial websites
and blogs, giving priority to the local language, etc., and reaching consensus.
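Those ordering rules could be sketched as a simple tiered priority function. This is only an illustration of the idea, not the project's actual code; the tier numbers and the `official_domains` parameter are assumptions for the sketch, since the real rules are still being designed:

```python
from urllib.parse import urlparse

def result_priority(url: str, official_domains: set) -> int:
    """Hypothetical tiers: official site, then .gov/.edu, then the rest."""
    host = urlparse(url).netloc.lower()
    if host in official_domains:
        return 0  # official site goes first
    if host.endswith(".gov") or host.endswith(".edu"):
        return 1  # institutional domains next
    return 2      # unofficial websites, blogs, etc.

def order_results(urls: list, official_domains: set) -> list:
    # Stable sort: within each tier, the community's hand ordering survives.
    return sorted(urls, key=lambda u: result_priority(u, official_domains))
```

A stable sort is the point here: the automatic tiers only group results, while the order inside each tier stays whatever the community decided by consensus.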
We can control aggressive spam with spam blacklists, semi-protect or protect
highly visible pages, and use bots or tools to check changes.
It obviously has a CC BY-SA license and the results can be exported. I think
this approach is the opposite of Google's today.
For weird queries like "Albert Einstein birthplace" we can redirect to the
most obvious results page (in this case, Albert Einstein) using a hand-made
redirect or via software (a small change in MediaWiki).
You can check a pretty alpha version here: http://www.todogratix.es (only in
Spanish for now, sorry), which I'm feeding with some bots.
I think that it is an interesting experiment. I'm open to your questions.
Emilio J. Rodríguez-Posada. E-mail: emijrp AT gmail DOT com
Pre-doctoral student at the University of Cádiz (Spain)
Projects: AVBOT <http://code.google.com/p/avbot/> |
| WikiEvidens <http://code.google.com/p/wikievidens/> |
| WikiTeam <http://code.google.com/p/wikiteam/>
Personal website: https://sites.google.com/site/emijrp/
I'm wondering if anyone has done any research into identifying which
articles in Wikipedia have associated video.
There is this category, which only has around 280 articles:
It seems far from complete. I'd appreciate any advice or pointers to
previous work in this area.
The background: I'm working with some grad students on staging a Wiki Makes
Video contest in April, and we'd like to do some measurement of the current
state of video in Wikipedia.
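For measuring the current state, one common approach is to page through the category's members with the MediaWiki API (`action=query&list=categorymembers`), following the API's continuation tokens. A minimal sketch, with the fetch function injectable so it can be tried offline; the exact category to count is an assumption, since the thread's category link isn't shown here:

```python
import json
from urllib.parse import urlencode
from urllib.request import urlopen

API = "https://en.wikipedia.org/w/api.php"

def category_members(category, fetch=None):
    """Yield page titles in a category, following API continuation."""
    if fetch is None:
        fetch = lambda url: json.load(urlopen(url))
    params = {
        "action": "query",
        "list": "categorymembers",
        "cmtitle": category,
        "cmlimit": "500",
        "format": "json",
    }
    while True:
        data = fetch(API + "?" + urlencode(params))
        for page in data["query"]["categorymembers"]:
            yield page["title"]
        if "continue" not in data:
            break
        # The API hands back opaque continuation params; merge them in.
        params.update(data["continue"])
```

Calling `len(list(category_members("Category:...")))` with the category in question would give the count to compare against the ~280 figure; the loop handles categories larger than one API page.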
Thanks, and email me if you'd like to know more about the video project for
"This gem will perform a whitepaper lookup on major scholarly databases.
Its purpose is to easily find related papers and organize your paper
collection. With this application, you can easily download PDFs or use
it as a library to automatically assign metadata.
"Currently, CiteSeerX, ACM and IEEE are the only databases it uses, along
with a Google pdf/ps search to find other pdf or ps links to download."
The author says it is just for personal use.
Engineering Community Manager
do you have any idea how to unify date formats across the various Wikipedias
via URL?
My aim is to compare revision dates/times from different Wikipedia language
versions, and it would be great to have the same date format for every
version of WP that I am looking at.
Does anyone know a solution for the Wikipedias that do not offer the format I
consider most useful, namely the ISO format starting with 2013-...?
I am seeking a solution via URL, i.e. one that can be used (and replicated) by
any user who has no extra rights or any particular database-query expertise for
the WP universe.
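One thing worth noting: whatever format the web interface shows, the MediaWiki API (api.php) returns revision timestamps in ISO 8601 form (e.g. "2013-02-28T17:04:05Z") on every language version, so a URL against the API already gives a uniform format. If the goal is then to reduce those to the 2013-... date style, a small sketch with the standard library (the function names are mine, just for illustration):

```python
from datetime import datetime, timezone

def parse_mw_timestamp(ts: str) -> datetime:
    """Parse an API timestamp like '2013-02-28T17:04:05Z' as UTC."""
    return datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ").replace(tzinfo=timezone.utc)

def iso_date(ts: str) -> str:
    """Reduce an API timestamp to the 2013-... date format."""
    return parse_mw_timestamp(ts).strftime("%Y-%m-%d")
```

Since the API timestamps are identical across language versions, this normalization only has to be written once, with no extra rights or database access needed.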
for a previous exchange on this topic see
thanks & cheers,
Recently, there's been a thread discussing an inventory of articles
with video in Wikipedia.
At my university, we are starting a project to upload animations to Commons.
I wonder if there is a category of "animations needed" or a place to offer
our work, so we can make something interesting for Wikipedia and Commons.
Of course, personal proposals are also welcome...
Prof. Manuel Palomo Duarte, PhD
Software Process Improvement and Formal Methods group (SPI&FM).
Degree Coordinator for Computer Science.
Department of Computer Science.
Escuela Superior de Ingenieria.
C/ Chile, 1
11002 - Cadiz (Spain)
University of Cadiz
Tlf: (+34) 956 015483
Mobile phone: (+34) 649 280080
Mobile phone from University network: 45483
Fax: (+34) 956 015139
Legal Notice: This message (including the attached files) may contain
confidential information, directed to a specific addressee and purpose.
If you are not the intended addressee, I apologize and ask you to delete
this email and not to forward, copy or distribute its content, nor to take
any action based on it.