I am doing a PhD on an online civic participation project
(e-participation). As part of my research, I have carried out a user
survey in which I asked how many people have ever edited or created a
page on a wiki. Now I would like to compare the results with the overall
rate of wiki editing/creation at the country level.
I've found some country-level statistics on Wikipedia Statistics (e.g.
3,000 editors of Wikipedia articles in Italy), but data for the UK and
France are not available, since Wikipedia provides statistics by
language, not by country. I'm therefore looking for statistics on the UK
and France (but I am also interested in alternative ways of measuring
wiki editing/creation in Sweden and Italy).
I would be grateful for any tips!
Sunny regards, Alina
European University Institute
Hi all! I've been looking for studies about the editing dynamics of Featured
Articles. Does anyone know of papers *in languages other than
English* on this theme?
Is it just me, or did something go wrong with the report?
From my browser, the country name, population, and internet-user fields
are all missing from the report. I tried to send an email to the
address listed, but the message fails to get through.
---------- Forwarded message ----------
From: Mail Delivery Subsystem <mailer-daemon(a)googlemail.com>
Date: 2014-04-28 9:30 GMT+08:00
Subject: Delivery Status Notification (Failure)
Delivery to the following recipient failed permanently:
Technical details of permanent failure:
Google tried to deliver your message, but it was rejected by the server for
the recipient domain wikimedia.org by mchenry.wikimedia.org.
The error that the other server returned was:
550 Address aengels(a)wikimedia.org does not exist
----- Original message -----
Date: Mon, 28 Apr 2014 09:29:47 +0800
Subject: Country name and other fields missing?
From: h <hanteng(a)gmail.com>
I am not sure what went wrong.
Cross-posting from Kaitlin Thaney (Mozilla Science Lab)
Begin forwarded message:
> From: Kaitlin Thaney <kaitlin(a)mozillafoundation.org>
> Subject: Reminder: "Code as a Research Object" office hours - today, 11 am + 4 pm ET
> Date: April 24, 2014 at 7:44:43 AM PDT
> Hi all -
> Looking to learn more about how to archive your code and assign it a Digital Object Identifier? Curious about how we linked GitHub (a code hosting service) and figshare (an open data repository) - and how you can do the same?
> We'll be running two open office hour sessions today (to be kinder to those in other timezones) - one at 11 am ET, the other at 4pm ET. Our collaborators will be walking participants through the technical build of the project.
> Call in details here in the etherpad: https://etherpad.mozilla.org/sciencelab-coderesobject-officehours
> And more on the project can be found here: http://mozillascience.org/code-as-a-research-object-updates-prototypes-next…
> All the best,
> Kaitlin Thaney
> Director, Mozilla Science Lab
> @kaythaney ; @MozillaScience
> skype / IRC: kaythaney
There was a good article criticising the Google Flu approach: “The Parable of
Google Flu: Traps in Big Data Analysis”.
It seems too easy to predict anything using Google or Wikipedia data. Just
follow this plan: a) choose what to predict; b) keep looking until you find
search terms that give you a positive result.
This unscientific approach produces articles like this one: “Predicting
Recessions in Real-Time: Mining Google Trends and Electronic Payments Data
for Clues.” http://www.cdhowe.org/pdf/Commentary_387.pdf
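The trap described above can be demonstrated on pure noise. The following is a minimal Python sketch with entirely made-up data (no real search volumes or economic indicators): given enough candidate "search volume" series, one of them always correlates with the target by chance alone.

```python
# All data here is random noise: the point is that hunting through many
# candidate series for a correlation with a chosen target always "works".
import numpy as np

rng = np.random.default_rng(0)
n_months = 24        # two years of monthly observations
n_candidates = 500   # pretend each row is a different query's search volume

target = rng.standard_normal(n_months)            # e.g. a recession index
candidates = rng.standard_normal((n_candidates, n_months))

def pearson(x, y):
    """Pearson correlation coefficient of two 1-D arrays."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    return float((x * y).mean())

corrs = [pearson(c, target) for c in candidates]
best = max(corrs, key=abs)
print(f"best |r| among {n_candidates} random series: {abs(best):.2f}")
```

With hundreds of candidates and only two years of monthly data, the best in-sample correlation is sizeable even though every series is independent noise; out-of-sample it would of course collapse.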
Thank you, James; I am not criticising the article you mentioned.
I am just ready to bet that I will be able to predict anything on Wikipedia
I was wondering whether there is any work addressing how "up to date" Wikipedia is.
For some high-impact news, such as Snowden's revelation of the PRISM program, articles are written within mere hours. For others, e.g. business people taking on important posts in companies and thus becoming Wikipedia-relevant, it sometimes takes weeks until an article is written (Ian Robertson of BMW is an example).
Is there any work that tries to answer the question of how long it takes for a Wikipedia article to be created after its subject becomes newsworthy?
Max-Planck-Institut fuer Informatik
Databases and Information Systems
McIver, David J. and John S. Brownstein (April 17, 2014) "Wikipedia
Usage Estimates Prevalence of Influenza-Like Illness in the United
States in Near Real-Time" PLOS Computational Biology:
"Wikipedia article view data has been demonstrated to be effective at
estimating the level of Influenza-Like Illness [ILI] activity in the
US, when compared to CDC data.... and can provide a reliable estimate
of ILI activity up to 2 weeks in advance of traditional ILI..."
Many of you have done research on the gender gap in Wikipedia articles. As
a result you must have associated articles with people and those people
with their gender.
It would be awesome if you would do the following:
- provide us with files that include at least that information;
- better, add the pertinent information to Wikidata, at least the fact
that the subjects are human, and their sex;
- it would be stellar if you could identify differences between what you
know and what is known in Wikidata.
The point is very much that a lot of information is added to Wikidata all
the time, and when your baseline information is known to Wikidata, it will
cover Wikipedia that much better.
In your research you may want to look into the current difference in
coverage between men and women; you can query it at any time, in near real
time. Currently there are 150,801 females to 755,747 males known to Wikidata.
Yes, you can change the queries to find only female painters, or females
with India as their nationality, or males, obviously.
And if you would like your own copy of the database, you can have one.
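As an illustration of the kind of query described above, here is a minimal sketch against the public Wikidata SPARQL endpoint (query.wikidata.org/sparql). The endpoint, its response format, and the counts it returns are today's, not necessarily what was available when this message was written.

```python
# Hedged sketch: count humans by sex/gender in Wikidata via the public
# SPARQL endpoint. Wikidata identifiers used:
#   P31 = instance of, Q5 = human, P21 = sex or gender,
#   Q6581072 = female, Q6581097 = male.
import json
import urllib.parse
import urllib.request

SPARQL_ENDPOINT = "https://query.wikidata.org/sparql"

QUERY = """
SELECT ?gender (COUNT(?person) AS ?count) WHERE {
  VALUES ?gender { wd:Q6581072 wd:Q6581097 }  # female, male
  ?person wdt:P31 wd:Q5 ;                     # instance of: human
          wdt:P21 ?gender .                   # sex or gender
}
GROUP BY ?gender
"""

def run_query(sparql: str) -> list:
    """Run a SPARQL query via GET and return the JSON result bindings."""
    url = SPARQL_ENDPOINT + "?" + urllib.parse.urlencode(
        {"query": sparql, "format": "json"})
    req = urllib.request.Request(
        url, headers={"User-Agent": "gender-gap-sketch/0.1 (research demo)"})
    with urllib.request.urlopen(req, timeout=60) as resp:
        return json.load(resp)["results"]["bindings"]

if __name__ == "__main__":
    try:
        for row in run_query(QUERY):
            print(row["gender"]["value"], row["count"]["value"])
    except OSError as err:  # no network access, endpoint down, etc.
        print("query skipped:", err)
```

Narrowing to, say, female painters would mean adding a triple such as `?person wdt:P106 wd:Q1028181 .` (P106 = occupation, Q1028181 = painter) inside the WHERE clause.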