On Thu, Sep 18, 2014 at 3:49 PM, Pine W <wiki.pine(a)gmail.com> wrote:
> Yes, but supposedly phone survey companies are able to get representative
> samples of broad populations despite many people refusing to respond to
> phone surveys. If opt-in users were chosen using similar methods, could
> arguably representative data be obtained?
In theory, sure, but that's a high bar. Responsible phone survey firms that
generate high quality data generally work very hard to draw random samples
of the population under consideration, follow up with non-respondents
numerous times to maximize the response rate, develop nuanced survey
weights for their data in order to adjust the responses relative to known
parameters of larger populations (when possible) and - at least recently -
often conduct ongoing studies to ensure that their data quality remains
high (e.g., in response to the transition away from land-lines toward
cell-phone only users among some demographic groups).
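The survey-weight adjustment described above can be sketched in a few lines. This is a minimal illustration of post-stratification weighting, not any particular firm's method; the strata, sample counts, and population shares below are invented for illustration.

```python
# Sketch of post-stratification weighting: give each stratum a weight so the
# weighted sample shares match known population shares.

def poststratification_weights(sample_counts, population_shares):
    """Return a per-stratum weight that aligns the sample with the population."""
    n = sum(sample_counts.values())
    return {
        stratum: population_shares[stratum] / (count / n)
        for stratum, count in sample_counts.items()
    }

# Invented example: a phone sample that over-represents landline users.
sample = {"landline": 700, "cell_only": 300}        # respondents per stratum
population = {"landline": 0.55, "cell_only": 0.45}  # known population shares

weights = poststratification_weights(sample, population)
# After weighting, each stratum's weighted share equals its population share.
```

Under-sampled strata (here, cell-only users) receive weights above 1, over-sampled strata below 1.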
Many of these practices are very difficult to map into contexts like
Wikipedia, WMF projects, or online communities more broadly. Even the most
sophisticated web-metrics data providers (e.g., ComScore, Quantcast)
struggle with the issues of non-response and data quality. Those firms do
not publish much about their methodologies and do not share their data with
non-paying members of the public.
Mako and I have written about some of these issues in a PLoS ONE article
where we also attempt to correct some existing Wikipedia survey data using
an interesting technique that draws on overlapping questions in an opt-in
survey and a nationally-representative phone survey of US adults. I've also
talked with a few communities about conducting surveys in a manner that
would be more likely to generate high quality data along these lines, but
without much to show for it yet. It would be great to see more people
(scholars/communities/observers) move in this direction.
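One common way to exploit overlapping questions like this is raking (iterative proportional fitting): reweight the opt-in respondents until their weighted marginal distributions on the shared questions match the representative survey. The sketch below is a generic illustration of that idea, not the procedure from the PLoS ONE paper; the respondents and target margins are invented.

```python
# Minimal raking (iterative proportional fitting) sketch: adjust opt-in survey
# weights so weighted margins match targets from a representative survey.

def rake(respondents, targets, iterations=50):
    """respondents: list of dicts mapping variable -> category.
    targets: {variable: {category: population share}}.
    Returns one weight per respondent."""
    weights = [1.0] * len(respondents)
    for _ in range(iterations):
        for var, shares in targets.items():
            total = sum(weights)
            # current weighted share of each category of this variable
            current = {c: 0.0 for c in shares}
            for w, r in zip(weights, respondents):
                current[r[var]] += w / total
            # scale each respondent's weight toward the target share
            weights = [
                w * shares[r[var]] / current[r[var]]
                for w, r in zip(weights, respondents)
            ]
    return weights

# Invented opt-in sample skewed toward young male respondents.
people = (
    [{"gender": "m", "age": "young"}] * 40
    + [{"gender": "m", "age": "old"}] * 30
    + [{"gender": "f", "age": "young"}] * 20
    + [{"gender": "f", "age": "old"}] * 10
)
targets = {
    "gender": {"m": 0.5, "f": 0.5},
    "age": {"young": 0.4, "old": 0.6},
}
w = rake(people, targets)
```

Cycling through the variables repeatedly converges to weights that satisfy every margin at once (when the cell structure allows it).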
all the best,
> On Sep 18, 2014 1:32 PM, "Benj. Mako Hill" <mako(a)atdot.cc> wrote:
>> <quote who="Pine W" date="Thu, Sep 18, 2014 at 12:07:53PM -0700">
>> > I suppose you could get more granular data by conducting an opt-in
>> > study of some kind, and you would need to be careful that users who
>> > haven't opted in are not accidentally included or indirectly have
>> > their privacy affected. I agree that collection at intervals shorter
>> > than an hour is going to raise a lot of privacy considerations for
>> > users who have not opted in.
>> That would certainly work for some research questions and that's more
>> or less what most toolbar data is.
>> The problem is that often the questions answered with view data are
>> about the overall popularity or visibility of pages, which requires
>> data that is representative. There are lots of reasons to believe that
>> people who opt in aren't going to be representative of all Wikipedia
>> readers.
>> Benjamin Mako Hill
>> Creativity can be a social contribution, but only in so far
>> as society is free to use the results. --GNU Manifesto
>> Wiki-research-l mailing list
Hello research colleagues,
When I look at the WMF Report Card, it appears to me that the global active
editor stats and the number of new accounts registered per month have been
relatively flat since at least 2011.
To those of you who work in EE research and analytics: is there a summary of
techniques that you have found to produce statistically significant
improvements in editor retention? I know that some of you write tools,
design projects, or pull and analyze data about editors. It looks to me
like WMF is investing significant effort in research and tool creation, but
we're not moving the needle to create the results that we had hoped to
achieve. So I'd like to ask what we have learned from all of our time
working on editor engagement about techniques and programs that improve the
EE stats in significant ways, so that we can hopefully accelerate the
implementation of programs and techniques with demonstrated success.
I'd also like to ask what barriers you think prevent us from becoming more
effective at improving the number of users who register and the number of
active editors. For example, are users who go through GettingStarted often
being deterred by quickly being confronted by experienced editors in ways
that make the newbies want to leave? If that is a significant problem, how
do you suggest addressing this?
One of my concerns about investing further in developing Flow, analytics
tools like Wikimetrics, and further complex editor engagement research
projects is that the most important challenges related to editor engagement
may be problems that can only be solved primarily through interpersonal and
social means, rather than through software tools and mass communications. I
like Wikimetrics and I use it, and I think there's an important place for
analytics and tool development in EE work, but I wonder if WMF should scale
up its emphasis on grassroots social and interpersonal efforts, particularly
in the context of the 2015+ Strategic Plan and Jimmy's speech at the 2014
Wikimania. What do you think? And if your answer is yes, how do you think
WMF can do this while respecting the autonomy and social processes of the
volunteer projects?
Oliver Keyes wrote:
> Mobile now makes up 30% of our page views and its
> users display divergent behavioural patterns; you
> don't think a group that makes up 30% of pageviews
> is a user group that is a 'big deal' for engagement?
For the English Wikipedia:
Date        Editors  Change  Pageviews  Change
July 2009     3,795     -7%
July 2010     3,517     -7%        278
July 2011     3,374     -4%        571    105%
July 2012     3,360      0%      1,210    112%
July 2013     3,135     -7%      1,880     55%
July 2014     3,037     -3%      3,010     60%
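The year-over-year changes in the table can be recomputed directly from the raw figures (a quick sketch; the values are taken from the table above, and the 2009 editor change refers to a 2008 figure not shown):

```python
# Recompute the year-over-year percentage changes shown in the table.
editors = [3795, 3517, 3374, 3360, 3135, 3037]  # July 2009 - July 2014
pageviews = [278, 571, 1210, 1880, 3010]        # July 2010 - July 2014

def pct_changes(series):
    """Rounded percent change between consecutive entries."""
    return [round((b - a) / a * 100) for a, b in zip(series, series[1:])]

print(pct_changes(editors))    # -> [-7, -4, 0, -7, -3]
print(pct_changes(pageviews))  # -> [105, 112, 55, 60]
```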
Where is the evidence that mobile use has any influence on editor engagement?
If you want to predict how long editors will stay, compare how many
new articles they were successfully creating in their first 500 edits
in 2004-2006 versus 2008-present.
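The cohort comparison suggested above could be sketched as follows. This is only an illustration of the proposed analysis; the editor records are invented, and a real study would pull them from the edit history.

```python
# Sketch of the suggested comparison: new articles created within each
# editor's first 500 edits, averaged by joining cohort.

def creation_rate_by_cohort(editor_records):
    """editor_records: list of (cohort, new_articles_in_first_500_edits).
    Returns {cohort: mean new articles per editor}."""
    totals, counts = {}, {}
    for cohort, articles in editor_records:
        totals[cohort] = totals.get(cohort, 0) + articles
        counts[cohort] = counts.get(cohort, 0) + 1
    return {c: totals[c] / counts[c] for c in totals}

# Invented records for two cohorts.
records = [("2004-2006", 40), ("2004-2006", 25), ("2008+", 3), ("2008+", 5)]
rates = creation_rate_by_cohort(records)
```

A large gap between the two cohort means would support the hypothesis that the shrinking supply of articles left to create drives attrition.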
> I agree that the shift to mobile is a big deal;
I do not agree: active editor attrition began on its present trend in
2007, well before mobile use was significant.
> I remain concerned that tech-centric approaches
> to editor engagement like VE and Flow, while
> perhaps having a modest positive impact, do little
> to fix the incivility problem that is so frequently
> cited as a reason for people to leave.
I agree that VE has already proven that it is ineffective in
significantly increasing editor engagement. And I agree that Flow has
no hope of achieving any substantial improvements. There are good
reasons to believe that Flow will make things worse. For example, using
wikitext on talk pages acts as a pervasive sandbox in which editors
practice the wikitext they need for article editing, and Flow would take
that practice ground away.
And I do not agree that civility issues have any substantial
correlation with editor attrition. There have been huge civility
problems affecting most editors on controversial subjects since 2002,
and I do not see any evidence that they have become any worse or
better on a per-editor basis since.
My opinion is that the transition from the need to create new articles
to maintaining the accuracy and quality of existing articles has been
the primary cause of editor attrition, and my studies of Short Popular
Vital Articles (WP:SPVA) have supported this hypothesis.
Therefore, I strongly urge the implementation of accuracy review systems.
Thanks very much, Valentin. I am forwarding your email to Wiki-research-l.
I am interested in the topic of editor engagement in general, and I am very
happy that you included work done by LauraHale, Hawkeye7, and me in your
literature review. Thank you!
On Tue, Aug 5, 2014 at 4:58 AM, Valentin Münscher <
> Hello everyone,
> Last year Wikimedia Deutschland cooperated with Beuth University of
> Applied Sciences on diversity in Wikipedia. We have now published the key
> conclusions and recommendations and translated the report into English, so
> that you and other interested people can take a closer look at our work
> on diversity and perhaps get input for your own work.
> The English title is “Charting Diversity - Working together towards
> diversity in Wikipedia”. The report starts off with a review of the current
> situation in (the German) Wikipedia and then goes on to offer different
> concepts and possibilities for improving diversity in Wikipedia.
> You can find the document on Commons via this link
> Sebastian Horndasch
> and I, Valentin,
> from the Education & Knowledge department of Wikimedia Deutschland will
> also bring paper copies to Wikimania in London, and we look
> forward to answering your questions and discussing your comments with you.
> Check our Wikimania user pages for our Wikimania timetables, or write us
> an email if you want to get in touch with us:
> Sebastian: sebastian.horndasch(a)wikimedia.de
> Valentin: valentin.muenscher(a)wikimedia.de
> All the best,
> Valentin Münscher
> Education & Knowledge department (Bereich Bildung & Wissen)
> Wikimedia Deutschland e.V. | Tempelhofer Ufer 23-24 | 10963 Berlin
> Tel. (030) 219 158 260
> Imagine a world in which every single person has free access to the
> sum of all human knowledge. Help us make that happen!
> ****Support free knowledge with an SMS. Simply text WIKI to 81190.
> With 5 euros you secure the availability and further development
> of Wikipedia.****
> Gendergap mailing list
Oliver Keyes wrote:
> the reason Mobile is going to have an impact is not that it will
> have an impact on the delta, but because there are additional
> factors to juggle when working on solutions to said delta.
Are you saying that the mobile skin will affect editor attrition once
it is able to display and edit templates?
If so, why do you think that template display will be added to the
mobile skin when there was a conscious decision to remove it which has
stood for years? Why do you think template editing will ever be part
of the mobile skin?
And even if the mobile skin could edit templates, what data is there
supporting the idea that it would have any impact on editor
engagement? Pageviews, even with mobile views included, have remained on
their current trend for longer than the editor attrition rate has held
steady. I don't see a shred of evidence that platform proportions have
had, or ever will have, any influence on editor engagement.
Gerard Meijssen wrote:
> Please define "just worked fine"... Really ?? !!
> Try editing a page that starts with a template..
Editing pages with or without templates works under the Vector skin on both
iOS and Android, although scrolling in the textarea can be difficult if you
aren't used to it.
Are you referring to the fact that the mobile skin silently omits many if
not most templates, and prevents users from editing them? The thought that
active editors will ever take seriously a skin which does that is absurd.
I am doing a PhD on online civic participation projects
(e-participation). As part of my research, I have carried out a user
survey in which I asked how many people had ever edited or created a page
on a wiki. Now I would like to compare the results with the overall rate
of wiki editing/creation at the country level.
I've found some country-level statistics on Wikipedia Statistics (e.g.
3,000 editors of Wikipedia articles in Italy), but data for the UK and
France are not available, since Wikipedia provides statistics by
language, not by country. I'm thus looking for statistics on the UK and
France (but am also interested in alternative ways of measuring wiki
editing/creation in Sweden and Italy).
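Whatever country-level figure you settle on, a survey-based editing rate can at least carry a confidence interval for the comparison. Here is a minimal sketch using the Wilson score interval; the respondent counts are invented.

```python
import math

def wilson_interval(k, n, z=1.96):
    """95% Wilson score interval for a proportion k/n, e.g. the share of
    survey respondents in a country who have ever edited a wiki."""
    p = k / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# Invented example: 30 of 1,000 respondents report having edited a wiki.
low, high = wilson_interval(30, 1000)
# The interval brackets the observed 3% rate.
```

If a country's interval excludes the rate implied by the aggregate Wikipedia statistics, the difference is unlikely to be sampling noise alone (though non-response bias remains a separate concern).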
I would be grateful for any tips!
Sunny regards, Alina
European University Institute