Frankly, that sounds like quite a high participation rate to me. I'd be surprised if a random call to English Wikipedians produced that level of response. But, as Jonathan has already said, the participation rate is going to depend on a lot of variables.
I don't think anyone has conducted the "meta-research" on Wikipedians' willingness to be surveyed and under what circumstances. Indeed, I doubt it can be studied. Even if you had access to every study ever done on Wikipedians, it is unlikely that each study occurred independently of the others. Almost certainly, some studies started with enquiries to other researchers about how best to solicit participation, so I suspect many studies adopted recruitment methods influenced by the experience of previous ones. This means that surveys may be recruiting for the greatest likelihood of finding respondents rather than for whether those respondents are a representative sample of the desired cohort of subjects.

And, let's face it, for a researcher who has to produce a result to get a PhD, tenure, promotion, or their next grant, a large number of non-representative respondents at least gives you enough data to draw some conclusions, whereas a very small number of representative respondents might not. "We surveyed 2 people and they said ..." (ouch!). Let's never forget that we don't do research just to make the world a better place.
Indeed, when it comes to Wikipedians, I don't think we know what a representative sample should look like in any case. Even the WMF's own editor surveys had relatively low participation (about 5,000 respondents for the 2011 editor survey), which is a drop in the bucket of the millions of registered user accounts (and the almost unknowable number of editors contributing anonymously). In contrast, when you run a random population study or survey the staff of an organisation, you generally have other data (e.g. census figures, HR records) to tell you how representative your sample is on a number of the standard demographic variables. No wonder we like to study university students so much (known demographics and such a convenient sample!).
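To make that check concrete, here is a minimal Python sketch of the kind of comparison I mean, using a chi-square goodness-of-fit test from scipy; all the counts and census proportions below are hypothetical, purely for illustration:

# A minimal sketch (hypothetical numbers) of the representativeness check
# described above: compare a sample's demographic breakdown against known
# population figures (e.g. census data) with a chi-square goodness-of-fit test.
from scipy.stats import chisquare

sample_counts = [120, 60, 20]          # hypothetical: respondents per age band
population_props = [0.50, 0.35, 0.15]  # hypothetical: census proportions

n = sum(sample_counts)
expected = [p * n for p in population_props]  # counts expected if representative

stat, p_value = chisquare(f_obs=sample_counts, f_exp=expected)
print(f"chi2 = {stat:.2f}, p = {p_value:.4f}")
# A small p-value suggests the sample's demographics differ from the census.
# For Wikipedia editors, of course, there is no census column to put in the
# expected row, which is exactly the problem.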
Kerry
-----Original Message-----
From: Wiki-research-l [mailto:wiki-research-l-bounces@lists.wikimedia.org] On Behalf Of Juliana Bastos Marques
Sent: Wednesday, 1 November 2017 7:38 AM
To: Research into Wikimedia content and communities wiki-research-l@lists.wikimedia.org
Subject: Re: [Wiki-research-l] Editor participation rates in surveys
Thanks for your reply, Jonathan. I was wondering if anybody has ever conducted systematic research on the variables that you listed.
I had a sample of the 200 editors with the most edits (over 30 days in Sept/Oct) on Portuguese Wikipedia, and 11 of them participated. I could have adopted other criteria (for instance, only 34 of them are admins, and 2 are bots), but for my purposes I just wanted a small sample with an objective selection. Indeed, the participation rate of 5.5% was expected, but I was wondering if there are any studies that can corroborate this.
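For reference, here is a minimal Python sketch of that arithmetic, with a Wilson 95% interval added to show how wide the uncertainty around a single small survey's rate is (the interval is illustrative, not part of the original study design):

# 11 respondents out of 200 invited editors, as described above, plus a
# Wilson 95% confidence interval for the response rate. The 11/200 figures
# are from the email; the interval is an added illustration.
import math

responses, invited = 11, 200
p_hat = responses / invited
print(f"participation rate: {p_hat:.1%}")   # 5.5%

z = 1.96  # 95% confidence
denom = 1 + z**2 / invited
centre = (p_hat + z**2 / (2 * invited)) / denom
half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / invited + z**2 / (4 * invited**2))
print(f"Wilson 95% CI: {centre - half:.1%} to {centre + half:.1%}")  # ~3.1% to ~9.6%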
Thanks, Juliana
On Tue, Oct 31, 2017 at 5:27 PM, Jonathan Morgan jmorgan@wikimedia.org wrote:
Hi Juliana,
Can you give a little more info about what you're looking for, and a little context about why you're asking?
I don't know of any research that has specifically asked whether there is a difference in response rate per target group. Anecdotally (I've run a lot of editor surveys), I can say that in my experience:
- very new editors often don't respond to surveys at a high rate, probably because they're less committed to/invested in Wikipedia and/or they have already lost interest (or stopped participating for other reasons) by the time they get the survey
- how you deliver the survey matters a lot: for example, direct email vs. talkpage message vs. newsletter/mailing list message vs. invitation at a live f2f event
- the topic and goal of your survey matters a lot: if it's something that people care about, they're more likely to respond. If people feel that it's important or personally useful to tell you what they know or what they think, they're more likely to respond. If you're asking for very personal information, or information that is not clearly relevant to your stated goals, they're often less likely to respond.
- who you are and why you're asking matters a lot: do the editors trust you? do they have preconceived notions (correct or not) about who you are, what the data will be used for, how it will be stored and published, how privacy and anonymity will be ensured (if applicable)... these all matter a whole lot.
- in general, smaller-scale surveys targeted at a very specific group, which are clearly relevant to the expertise and goals of that group and follow scientific best practices for open and ethical research, seem to work pretty well (with all the above caveats)
Hope that helps, Jonathan
On Tue, Oct 31, 2017 at 1:36 PM, Juliana Bastos Marques <domusaurea@gmail.com> wrote:
Hi all! I am looking for any discussions/data about participation rates in research surveys directed towards editors. I'd like to see if there's a consistent rate, or not, in responses per target group. Can anybody help me with this?
Thank you, Juliana
--
www.domusaurea.org
--
Jonathan T. Morgan
Senior Design Researcher
Wikimedia Foundation
User:Jmorgan (WMF)
https://meta.wikimedia.org/wiki/User:Jmorgan_(WMF)
--
www.domusaurea.org
_______________________________________________
Wiki-research-l mailing list
Wiki-research-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wiki-research-l