I just completed the survey at http://survey47.wikipediastudy.org/survey.php. I'm sorry to be harsh and blunt. It's terrible. You can't use my results accurately -- they're wrong. I doubt you can use anyone's results accurately.
I don't know if I'd go that far, but it's certainly not good.
This survey could only be completed accurately by someone:
- with nothing to do / too much time on their hands
True
- who never makes mistakes
True
- who can anticipate future questions before they're asked
That depends on how they intend to interpret the questions. That this wasn't immediately clear is a serious problem.
- who can be bothered to search for his country and language (several times) in strictly-alphabetical lists of every single country and language in the world
Indeed - how difficult is it to put the most common languages at the top? I expect at least 90% of respondents were from the top 10 languages. (A sketch of how that might look follows this list.)
- who knows the 2-character ISO code for the languages he knows, even when they're not obvious (e.g. DE for German)
How is DE not obvious? It's the first two letters of the language's name in that language (Deutsch)...
- who knows the 3-character ISO code for the currency he uses
I struggled to find my currency (even knowing the code); it took me a while to work out what they were actually sorting by.
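On the ordering point above: floating a few common choices above the full alphabetical list is a standard trick for long dropdowns. Here's a minimal sketch of it in TypeScript; LanguageOption and COMMON_CODES are hypothetical names chosen for illustration, not anything from the actual survey.

```typescript
// Minimal sketch: float a handful of high-traffic languages to the top of a
// <select> list, keeping the full alphabetical list underneath.
// COMMON_CODES is a hypothetical hand-picked list, not the survey's.
interface LanguageOption {
  code: string; // ISO 639-1 code, e.g. "de"
  name: string; // display name, e.g. "German (Deutsch)"
}

const COMMON_CODES = ["en", "de", "fr", "ja", "es", "it", "nl", "pl", "pt", "zh"];

function orderForDropdown(all: LanguageOption[]): LanguageOption[] {
  // Pull the common entries out in the hand-picked order...
  const common = COMMON_CODES
    .map(code => all.find(lang => lang.code === code))
    .filter((lang): lang is LanguageOption => lang !== undefined);
  // ...and keep the complete alphabetical list below them.
  const rest = [...all].sort((a, b) => a.name.localeCompare(b.name));
  return [...common, ...rest]; // common entries appear twice: on top and in place
}
```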
The survey told me I couldn't use my browser's Back and Forward buttons, but had to use its own. That's rude.
That's a technical issue - it's certainly possible to build it in such a way that the browser's Back and Forward buttons work, but it's not as easy.
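For what it's worth, the usual technique is the browser History API: push one history entry per survey page and re-render on popstate, so the native Back and Forward buttons just work. A minimal sketch, where renderPage and saveAnswers are hypothetical stand-ins for the survey's own logic:

```typescript
// Minimal sketch of making Back/Forward work in a one-URL, multi-page survey.
function goToPage(page: number): void {
  saveAnswers();                                    // persist the current page's inputs
  history.pushState({ page }, "", `#page-${page}`); // one history entry per survey page
  renderPage(page);
}

// Back/Forward fire "popstate"; re-render whichever page the entry records.
window.addEventListener("popstate", (event: PopStateEvent) => {
  const page = (event.state as { page?: number } | null)?.page ?? 1;
  renderPage(page);
});

declare function renderPage(page: number): void; // hypothetical: draws that page's questions
declare function saveAnswers(): void;            // hypothetical: stores the current answers
```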
The survey then failed to provide Back buttons on all pages. That's incompetent.
True
The survey then asked me questions like "How many hours do you spend contributing to Wikipedia, per week?", followed by "How many hours do you spend administering Wikipedia?", followed by "How many hours do you spend supporting Wikipedia in technical ways?" And that ended up being profoundly insulting. Here's why.
The administrative and technical work I do on Wikipedia feels like "contributions" to me, so (not knowing the next questions were coming up) I included those hours in my first answer. And the technical work I do feels like "administration", so (not knowing the next question was coming up) I included that in my second answer. Therefore, if (as I suspect) you're assuming those three categories are disjoint, and since my major contributions lately have all been technical, I've inadvertently overstated my overall contributions in this survey by a factor of three.
I assumed they intended the first question to be a total, and so answered the same as you. If that assumption was incorrect, then my response is also overstated.
Also, the survey took *way* too long. And there was no information given up-front about how long it might take. The progress bar in the upper right-hand corner was a clue and a nice touch, but it came too late.
Absolutely. I did finish it, but only because I'd got so far through before realising how long it was taking. When it said it could take 30 mins to complete (or whatever it said), I assumed it was giving an absolute maximum and it would actually be far shorter - it wasn't.
The survey also took too long in relation to the value of the data likely to be gleaned from it. Short, tightly-focused surveys give the surveyee the impression that some well-thought-out, concise questions are being addressed by the surveyor. Long, scattershot surveys give the impression that the surveyors aren't quite sure what they're looking for, are trying to ask everything they can think of, and are imagining that they'll mine the data for interesting results later. But, with poorly-defined surveys, that task often ends up being difficult or impossible. So I'm left begrudging the time I spent filling out the survey, because it feels like the ratio of time investment (by me) to useful information which can be gleaned (by you) is not good.
Indeed - the first thing you need to work out when writing a survey is what you want to learn from it. I'm not sure they did that...
The survey asked me to specify things like "approximate number of articles edited" and "percentage of time spent translating" using drop-down selection boxes -- and with an increment of 1 between the available choices! That's just silly. (I dreaded how long I was going to have to scroll down to find my article edit count -- 1196 -- and was both relieved and annoyed to discover that, after 500 entries, the drop-down list ended with "more than 500".)
I have no idea how many articles I've edited, so I guessed. I imagine most other people guessed as well, which means having the numbers accurate to 1 article is meaningless. They should have had groups (0-10, 11-50, 51-100, 101-200, etc); the data would be just as useful and it would be far quicker to fill out.
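Mapping a raw count onto such buckets is trivial; here's a minimal sketch using the boundaries suggested above (the bucket edges are this thread's suggestion, not the survey's):

```typescript
// Minimal sketch: map a raw edit count onto coarse buckets
// (0-10, 11-50, 51-100, 101-200, 201-500, >500).
const BUCKETS: Array<[number, string]> = [
  [10, "0-10"],
  [50, "11-50"],
  [100, "51-100"],
  [200, "101-200"],
  [500, "201-500"],
];

function bucketEditCount(count: number): string {
  for (const [upper, label] of BUCKETS) {
    if (count <= upper) return label; // first bucket whose upper bound fits
  }
  return "more than 500";
}

// bucketEditCount(1196) === "more than 500"
```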
The survey's categories were too bluntly taken from existing lists. For example, the list I had to choose my employment from was apparently taken from one of those dreadful Department of Commerce categorizations that I have just as much trouble finding my job in when I fill out my tax forms.
It was the attempt to categorise what kind of articles you edit that annoyed me. What does "General information" mean?
At the very end, the survey asked if I wanted to submit my results, or fix any mistakes. But the provided way to fix mistakes was to use the Back button -- perhaps several dozen times -- which I wouldn't have felt like doing even if the chain of Back buttons were complete.
A list of the questions (without the answer options, since those would take up far too much space) would have been good.
The survey was clearly designed by someone who was thinking about the data they wanted to collect, and in a scattershot way. The survey was clearly not designed with the person completing it in mind. The survey was clearly not designed or vetted by anyone who knew anything about designing good surveys.
You don't need someone who's good at designing surveys (well, you do, but not to spot most of these problems); you just need to try the survey out on a few people first.
I probably had more complaints to list, but I shouldn't waste as much time on this letter as I already wasted taking the survey, so I'll stop here.
Bottom line: Please use the results of this survey with extreme care, if at all. The results are going to be heavily, heavily biased by the inadvertent selection criteria involved in the survey's hostility towards its participants. If you conduct a survey like this again, please find someone to assist in the process who knows something about real-world survey work.
I was under the impression it was done with the support of experts - if that's the case, pick better experts next time!