Hi!
I am doing a PhD on online civic participation (e-participation). As part
of my research, I have carried out a user survey in which I asked how many
people have ever edited or created a page on a wiki. Now I would like to
compare the results with the overall rate of wiki editing/creation at the
country level.
I've found some country-level statistics on Wikipedia Statistics (e.g.
3,000 editors of Wikipedia articles in Italy), but data for the UK and
France are not available, since Wikipedia provides statistics by language,
not by country. I'm thus looking for statistics on the UK and France (but
am also interested in alternative ways of measuring wiki editing/creation
in Sweden and Italy).
I would be grateful for any tips!
Sunny regards, Alina
--
Alina ÖSTLING
PhD Candidate
European University Institute
www.eui.eu
For the last week or so I have been getting the following error when trying to
use the http://wikidashboard.appspot.com/ tool: "403: User account
expired. The page you requested is hosted by the Toolserver user
wiki_researcher, whose account has expired. Toolserver user accounts are
automatically expired if the user is inactive for over six months. To
prevent stale pages remaining accessible, we automatically block
requests to expired content. If you think you are receiving this page in
error, or you have a question, please contact the owner of this
document: wiki_researcher [at] toolserver [dot] org. (Please do not
contact Toolserver administrators about this problem, as we cannot fix
it---only the Toolserver account owner may renew their account.)"
I've tried contacting the owner, and I sent an email to PARC
<http://en.wikipedia.org/wiki/PARC_%28company%29> (it's their project, per
the logo on the project page) through their web form, but so far, nothing.
Can anyone help me contact them?
The tool is useful not only for research (I've used it, and I am sure
others here have too); it is also one of the tools used by Good Article
reviewers (and is linked from
http://en.wikipedia.org/wiki/Template:Good_article_tools).
Why we allow Toolserver tools used by the community to expire in such a
confusing way is beyond me.
--
Piotr Konieczny, PhD
http://hanyang.academia.edu/PiotrKonieczny
http://scholar.google.com/citations?user=gdV8_AEAAAAJ
http://en.wikipedia.org/wiki/User:Piotrus
Hi everyone,
WMF researchers have agreed to participate in an office hour about WMF research projects and methodologies.
The currently scheduled participants are:
* Aaron Halfaker, Research Analyst (contractor)
* Jonathan Morgan, Research Strategist (contractor)
* Evan Rosen, Data Analytics Manager, Global Development
* Haitham Shammaa, Contribution Research Manager
* Dario Taraborelli, Senior Research Analyst, Strategy
We'll meet on IRC in #wikimedia-office on April 22 at 1800 UTC. Please join us.
Pine
Hi all,
I'm starting a new project, a wiki search engine. It uses MediaWiki,
Semantic MediaWiki and other minor extensions, and some tricky templates
and bots.
I remember Wikia Search and how it failed. It had the mini-article thingy
for the introduction, and then a lot of links compiled by a crawler. Also
something similar to a social network.
My project idea (which still needs a cool name) is different. Although it
uses an introduction and images copied from Wikipedia, and some links from
the "External links" sections, that is only a starting point. The purpose
is that the community adds, removes, and orders the results for each term,
and creates redirects for similar terms to avoid duplicates.
Why this? I think that Google PageRank isn't enough. It is frequently
abused by link farms, SEO operators and other people trying to push their
websites to the top.
Search "Shakira" in Google for example. You see 1) Official site, 2)
Wikipedia 3) Twitter 4) Facebook, then some videos, some news, some images,
Myspace. It wastes 3 or more results in obvious nice sites (WP, TW, FB).
The wiki search engine puts these sites at the top, together with an
introduction and related terms, leaving all the space below for less
obvious but interesting websites. Also, if you search in Google for
"semantic" queries like "right-wing newspapers", you won't find actual
newspapers but people and sites discussing right-wing newspapers; or latex
(the material) and LaTeX (the typesetting system) are shown on the same
results pages. These issues can be resolved with disambiguation result
pages.
How do we choose which results go above or below? The rules are not fully
designed yet, but we could put official sites first, then .gov or .edu
domains (which tend to be important), and then unofficial websites and
blogs, giving priority to the local language, etc., with the community
reaching consensus; a rough sketch of such a tiering rule follows.
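To make the idea concrete, here is a minimal sketch in Python of what a
first-pass ordering could look like before the community reorders results
by hand. The tier values, function names and example domains are all made
up for illustration, not part of the running site:

    from urllib.parse import urlparse

    # Coarse tiers following the rules above: official sites first,
    # then .gov/.edu domains, then everything else.
    TIER_OFFICIAL = 0
    TIER_INSTITUTIONAL = 1
    TIER_OTHER = 2

    def tier(url, official_domains=frozenset()):
        """Return a coarse tier for one result URL."""
        host = urlparse(url).hostname or ""
        if host in official_domains:
            return TIER_OFFICIAL
        if host.endswith((".gov", ".edu")):
            return TIER_INSTITUTIONAL
        return TIER_OTHER

    def order_results(urls, official_domains=frozenset()):
        """Stable sort: keeps the community's manual order within each tier."""
        return sorted(urls, key=lambda u: tier(u, official_domains))

    # For a "Shakira" results page, with the official domain agreed by consensus:
    print(order_results(
        ["http://fansite.example.org", "http://music.example.edu",
         "http://www.shakira.com"],
        official_domains={"www.shakira.com"}))

Everything finer than such crude tiers (local language first, blogs below
official material, and so on) would stay in the hands of the editors.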
We can control aggressive spam with spam blacklists, semi-protect or
protect highly visible pages, and use bots or tools to check changes.
It obviously has a CC BY-SA license and the results can be exported. I
think that this approach is the opposite of Google's today.
For weird queries like "Albert Einstein birthplace" we can redirect to the
most obvious results page (in this case Albert Einstein) using a hand-made
redirect or in software (a small change to MediaWiki); see the sketch below.
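A minimal sketch of that software fallback, again purely illustrative: it
assumes some page_exists lookup (an API call or a local title index), which
is a placeholder here, not an existing function:

    # Drop trailing words from the query until a results page exists.
    def resolve_query(query, page_exists):
        words = query.split()
        while words:
            title = " ".join(words)
            if page_exists(title):
                return title   # "Albert Einstein birthplace" -> "Albert Einstein"
            words.pop()        # drop the last word and retry
        return None            # nothing matched; offer to create a new page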
You can check out an early alpha version here: http://www.todogratix.es
(only in Spanish for now, sorry), which I'm feeding with some bots.
I think that it is an interesting experiment. I'm open to your questions
and feedback.
Regards,
emijrp
--
Emilio J. Rodríguez-Posada. E-mail: emijrp AT gmail DOT com
Pre-doctoral student at the University of Cádiz (Spain)
Projects: AVBOT <http://code.google.com/p/avbot/> |
StatMediaWiki <http://statmediawiki.forja.rediris.es> |
WikiEvidens <http://code.google.com/p/wikievidens/> |
WikiPapers <http://wikipapers.referata.com> |
WikiTeam <http://code.google.com/p/wikiteam/>
Personal website: https://sites.google.com/site/emijrp/
The May 2013 issue of the Wikimedia Research Newsletter is out:
https://meta.wikimedia.org/wiki/Research:Newsletter/2013/May
In this issue:
• 1 Motivations to contribute to the Persian Wikipedia
• 2 Science eight times more popular on the Spanish Wikipedia than on the English Wikipedia?
• 3 In brief
• Winning and losing argument patterns in deletion debates
• Why English Wikinews rejects submissions
• Wikipedia as a discussion forum for Malaysian students
• Using Wikipedia to predict the stock market
• Main NPOV concerns in articles about corporations: Promotional language and inclusion of criticism
• "Gangnam Style" pageview trends
• 4 References
••• 9 publications were covered in this issue •••
Thanks to Piotr Konieczny, Aaron Halfaker, Taha Yasseri and Daniel Mietchen for contributing.
Dario Taraborelli and Tilman Bayer
--
Wikimedia Research Newsletter
https://meta.wikimedia.org/wiki/Research:Newsletter/
* Follow us on Twitter/Identi.ca: @WikiResearch
* Receive this newsletter by mail: https://lists.wikimedia.org/mailman/listinfo/research-newsletter
* Subscribe to the RSS feed: http://blog.wikimedia.org/c/research-2/wikimedia-research-newsletter/feed/
Dear all,
I am trying to gather some data for a new paper, but I wonder if there
is a more efficient way of doing so than by using Wikipedia's
Special:Contributions.
I have a list of editors whose edits I'd like to analyze, getting numbers
on their contributions by namespace and to specific groups of pages (such
as Wikipedia:Arbitration and its subpages, for example). In other words,
for a defined group of users, I would like to know whether they have ever
contributed to an arbitration page and, if they did, how many edits they
made.
I am assuming this wouldn't be that difficult for somebody who knows how
to run queries on the Wikipedia database, but I have never been able to
develop enough coding skill to do so. Still, if people could direct me to
a page with instructions on how to run a database query, perhaps I can try
to learn. That is, if they have been made more friendly to non-CS people;
two or so years ago, when I last researched this topic, they were, IMHO,
still beyond the means of a non-coder.
Alternatively, I can consider paying someone to run a number of such
queries for me, since I now even have a real research budget :)
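For what it's worth, my understanding is that something along the following
lines against the public API behind Special:Contributions
(list=usercontribs) might produce counts of this kind without any database
access. It is only a sketch I have not been able to verify: the user names
are placeholders, and the namespace and title prefix are guesses at what
should be counted.

    import requests

    API = "https://en.wikipedia.org/w/api.php"

    def count_edits(user, prefix="Wikipedia:Arbitration"):
        """Count a user's edits to pages whose titles start with `prefix`."""
        total = 0
        params = {
            "action": "query",
            "list": "usercontribs",   # the data behind Special:Contributions
            "ucuser": user,
            "ucnamespace": 4,         # the Wikipedia: (project) namespace
            "ucprop": "title",
            "uclimit": "max",
            "format": "json",
        }
        while True:
            data = requests.get(API, params=params).json()
            total += sum(1 for c in data["query"]["usercontribs"]
                         if c["title"].startswith(prefix))
            if "continue" not in data:
                return total
            params.update(data["continue"])  # fetch the next batch

    # Placeholder list of editors; the real list would go here.
    for user in ["ExampleUser1", "ExampleUser2"]:
        print(user, count_edits(user))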
--
Piotr Konieczny, PhD
http://hanyang.academia.edu/PiotrKonieczny
http://scholar.google.com/citations?user=gdV8_AEAAAAJ
http://en.wikipedia.org/wiki/User:Piotrus
Dear all,
this is to remind you that the deadline for submitting an abstract for
oral communications at the next NETTAB 2013 workshop is July 5, 2013.
Make your decision now and prepare to submit your most recent research on
semantic, social, and mobile tools and applications in the Life Sciences
in time.
I'm looking forward to meeting you in Venice.
Paolo Romano
====
LAST CALL FOR ABSTRACTS FOR ORAL COMMUNICATIONS
NETTAB 2013 Workshop on
"Semantic, Social, and Mobile Applications for Bioinformatics and
Biomedical Laboratories"
October 16-18, 2013, Lido of Venice, Italy
http://www.nettab.org/2013/
NETTAB 2013 will explore mobile, social, and semantic solutions for
bioinformatics and laboratory informatics.
A savvy combination of these technologies could enhance the research
outcome of life scientists and simplify workflows in biomedical
laboratories.
KEYNOTE SPEAKERS
+ Barend Mons, Leiden University Medical Center, and Netherlands
Bioinformatics Center, The Netherlands
+ Antony Williams, Royal Society of Chemistry, USA
+ Ross D. King, University of Manchester, Manchester, United Kingdom
TUTORIAL PRESENTERS (confirmed only, more will be announced soon)
+ Andrea Splendiani, IntelliLeaf, United Kingdom, and Digital Enterprise
Research Institute, Ireland
+ Dominique Hazael-Massieux, W3C/ERCIM, Sophia Antipolis, Biot, France
+ Alex Clark, Molecular Materials Informatics, Inc
VENUE
The workshop will be held at the Congress Center “Palazzo ex Casino del
Lido” at the Lido of Venice.
DEADLINES
• July 5, 2013: Abstract submission deadline for Oral communications
• July 31, 2013: Abstract submission deadline for Posters
• September 13, 2013: Early registration deadline
TOPICS
We are looking for abstracts on all aspects of the focus theme,
including issues, methods, algorithms, and technologies for the design
and development of tools and platforms able to provide Semantic, Social,
and Mobile (SeSaMo) applications supporting bioinformatics and the
activities carried out in a biomedical laboratory.
An extended list of topics is available at
http://www.nettab.org/2013/call.php .
INSTRUCTIONS
We welcome structured abstracts for oral communications,
industrial-technological communications, and posters.
Abstracts for oral communications and Industrial-technological
communications should be between 3 and 4 pages, including no more than
TWO tables / figures.
Abstracts for posters should be between 2 and 3 pages, including no more
than ONE table or figure.
All abstracts should include the following sections: Motivation and
Objectives, Methods, Results and Discussion, Acknowledgements, References.
Accepted abstracts will be included in the Proceedings of the workshop,
which will be published in a Supplement of EMBnet.journal.
Full papers from abstracts presented at NETTAB 2013 will be published in a
peer-reviewed, indexed, international journal that will be announced soon.
SUBMISSION
Structured abstracts must be submitted at
http://conference.embnet.org/index.php/NETTAB/ .
Authors must first register at
http://conference.embnet.org/index.php/NETTAB/NETTAB2013/user/account .
The abstract must be prepared by using the template
http://www.nettab.org/2013/docs/Nettab_abstractTemplate.doc .
Full instructions for the preparation of the abstract are available at
http://conference.embnet.org/index.php/NETTAB/NETTAB2013/about/submissions#…
.