Anybody know where on-wiki the current survey is being discussed? I've got a thing or two to say. (Message I just sent to info@wikipediastudy.org appended.)
* * *
From: Steve Summit scs@eskimo.com Date: Sat, 01 Nov 2008 10:16:41 -0400 To: info@wikipediastudy.org Subject: your survey has problems
I just completed the survey at http://survey47.wikipediastudy.org/survey.php. I'm sorry to be harsh and blunt. It's terrible. You can't use my results accurately -- they're wrong. I doubt you can use anyone's results accurately.
This survey could only be completed accurately by someone:
- with nothing to do / too much time on their hands
- who never makes mistakes
- who can anticipate future questions before they're asked
- who can be bothered to search for his country and language (several times) in strictly-alphabetical lists of every single country and language in the world
- who knows the 2-character ISO code for the languages he knows, even when they're not obvious (e.g. DE for German)
- who knows the 3-character ISO code for the currency he uses
The survey told me I couldn't use my browser's Back and Forward buttons, but had to use its own. That's rude.
The survey then failed to provide Back buttons on all pages. That's incompetent.
The survey then asked me questions like "How many hours do you spend contributing to Wikipedia, per week?", followed by "How many hours do you spend administering Wikipedia?", followed by "How many hours do you spend supporting Wikipedia in technical ways?" And that ended up being profoundly insulting. Here's why.
The administrative and technical work I do on Wikipedia feels like "contributions" to me, so (not knowing the next questions were coming up) I included those hours in my first answer. And the technical work I do feels like "administration", so (not knowing the next question was coming up) I included that in my second answer. Therefore, if (as I suspect) you're assuming those three categories are disjoint, and since my major contributions lately have all been technical, I've inadvertently overstated my overall contributions in this survey by a factor of three.
And those particular survey pages were among those without Back buttons, so I couldn't fix my mistake. Do you know how incredibly frustrating that is, to have wanted to spend time contributing to a survey, to know I've contributed false information, and to not be able to fix it?
Also, the survey took *way* too long. And there was no information given up-front about how long it might take. The progress bar in the upper right-hand corner was a clue and a nice touch, but it came too late.
The survey also took too long in relation to the amount of useful data likely to be gleaned from it. Short, tightly-focused surveys give the surveyee the impression that some well-thought-out, concise questions are being addressed by the surveyor. Long, scattershot surveys give the impression that the surveyors aren't quite sure what they're looking for, are trying to ask everything they can think of, and are imagining that they'll mine the data for interesting results later. But, with poorly-defined surveys, that task often ends up being difficult or impossible. So I'm left begrudging the time I spent filling out the survey, because it feels like the ratio of time investment (by me) to useful information which can be gleaned (by you) is not good.
The survey asked me to specify things like "approximate number of articles edited" and "percentage of time spent translating" using drop-down selection boxes -- and with an increment of 1 between the available choices! That's just silly. (I dreaded how long I was going to have to scroll down to find my article edit count -- 1196 -- and was both relieved and annoyed to discover that, after 500 entries, the drop-down list ended with "more than 500".)
The survey's categories were too bluntly taken from existing lists. For example, the list I had to choose my employment from was apparently taken from one of those dreadful Department of Commerce categorizations that I have just as much trouble finding my job in when I fill out my tax forms.
At the very end, the survey asked if I wanted to submit my results, or fix any mistakes. But the provided way to fix mistakes was to use the Back button -- perhaps several dozen times -- which I wouldn't have felt like doing even if the chain of Back buttons were complete.
The survey was clearly designed by someone who was thinking about the data they wanted to collect, and in a scattershot way. The survey was clearly not designed with the person completing it in mind. The survey was clearly not designed or vetted by anyone who knew anything about designing good surveys.
I probably had more complaints to list, but I shouldn't waste as much time on this letter as I already wasted taking the survey, so I'll stop here.
Bottom line: Please use the results of this survey with extreme care, if at all. The results are going to be heavily, heavily biased by the inadvertent selection criteria involved in the survey's hostility towards its participants. If you conduct a survey like this again, please find someone to assist in the process who knows something about real-world survey work.
On Sat, Nov 1, 2008 at 2:21 PM, Steve Summit scs@eskimo.com wrote:
Anybody know where on-wiki the current survey is being discussed? I've got a thing or two to say. (Message I just sent to info@wikipediastudy.org appended.)
* * *
I haven't done the survey, and don't intend to. I had a brief look at it, and decided not to bother when it said it would take half an hour. That's half an hour that I could be spending working on an article. I also heard surveys could be submitted more than once by one person, which effectively makes it worthless.
Afraid I gave up too after 5 minutes. Waste of time - no-one ever tells the truth in surveys anyway. Everyone always wants to appear more sophisticated and clever than they really are.
Giano
On Sat, Nov 1, 2008 at 2:36 PM, Al Tally majorly.wiki@googlemail.com wrote:
On Sat, Nov 1, 2008 at 2:21 PM, Steve Summit scs@eskimo.com wrote:
Anybody know where on-wiki the current survey is being discussed? I've got a thing or two to say. (Message I just sent to info@wikipediastudy.org appended.)
* * *
I haven't done the survey, and don't intend to. I had a brief look at it, and decided not to bother when it said it would take half an hour. That's half an hour that I could be spending working on an article. I also heard surveys could be submitted more than once by one person, which effectively makes it worthless.
-- Alex (User:Majorly)
Three points - the survey took me 10 minutes to complete, I didn't happen to notice missing back buttons, and I don't think the Wikimedia Foundation (or the English Wikipedia, as implied by where the complaints were posted) designed the survey in detail. If you notice, it is an UNU-MERIT survey. UNU-MERIT is a research institute of the United Nations University and Maastricht University.
Nathan
on 11/1/08 11:16 AM, Nathan at nawrich@gmail.com wrote:
Three points - the survey took me 10 minutes to complete, I didn't happen to notice missing back buttons, and I don't think the Wikimedia Foundation (or the English Wikipedia, as implied by where the complaints were posted) designed the survey in detail. If you notice, it is an UNU-MERIT survey. UNU-MERIT is a research institute of the United Nations University and Maastricht University.
Nathan, the survey is a disgrace in both form and content. And it is not worthy of anyone's time or effort.
Marc Riddell
I agree that the "unanticipated follow-ups" are a serious design problem that will distort some of the results, and in general, that the technical implementation leaves a lot to be desired, especially in terms of usability. That said, there are many questions in the survey which will yield useful data, or data that is useful with caveats. See my summary here: http://en.wikipedia.org/wiki/Wikipedia_talk:Survey_2008#Some_interim_comment...
I just completed the survey at http://survey47.wikipediastudy.org/survey.php. I'm sorry to be harsh and blunt. It's terrible. You can't use my results accurately -- they're wrong. I doubt you can use anyone's results accurately.
I don't know if I'd go that far, but it's certainly not good.
This survey could only be completed accurately by someone:
- with nothing to do / too much time on their hands
True
- who never makes mistakes
True
- who can anticipate future questions before they're asked
That depends on how they intend to interpret the questions. That that wasn't immediately clear is a serious problem.
- who can be bothered to search for his country and language
(several times) in strictly-alphabetical lists of every single country and language in the world
Indeed - how difficult is it to put the most common languages at the top? I expect at least 90% of respondents were from the top 10 languages.
- who knows the 2-character ISO code for the languages he knows,
even when they're not obvious (e.g. DE for German)
How is DE not obvious? It's the first two letters of the language's name in that language...
- who knows the 3-character ISO code for the currency he uses
I struggled to find my currency (even knowing the code); it took me a while to work out what they were actually sorting by.
The survey told me I couldn't use my browser's Back and Forward buttons, but had to use its own. That's rude.
That's a technical issue - it's certainly possible to do it in such a way that the Back and Forward buttons work, but it's not as easy.
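For what it's worth, a minimal sketch of one common way to keep the browser's own Back and Forward buttons working in a multi-page survey: give every step its own URL and keep the answers in server-side session state, so revisiting an earlier page simply re-renders it with the saved answer. The Python/Flask code and the step names below are illustrative assumptions only, not how this survey was actually built:

    # Minimal sketch: one URL per survey step, answers kept in the session,
    # so the browser's Back/Forward buttons just re-render earlier steps.
    # Flask and the step names are illustrative only.
    from flask import Flask, request, session, redirect, url_for

    app = Flask(__name__)
    app.secret_key = "replace-me"

    STEPS = ["hours_contributing", "hours_admin", "hours_technical"]

    @app.route("/step/<int:n>", methods=["GET", "POST"])
    def step(n):
        if n < 0 or n >= len(STEPS):
            return redirect(url_for("step", n=0))
        key = STEPS[n]
        if request.method == "POST":
            # Save (or overwrite) this step's answer, then move on.
            session[key] = request.form.get("answer", "")
            if n + 1 < len(STEPS):
                return redirect(url_for("step", n=n + 1))
            return "Thanks! Answers: %s" % {k: session.get(k) for k in STEPS}
        # GET: re-render the form pre-filled with any previously saved answer,
        # so going Back and resubmitting fixes an earlier mistake.
        saved = session.get(key, "")
        return ('<form method="post">%s: <input name="answer" value="%s">'
                '<button>Next</button></form>' % (key, saved))

Because each answer is stored under its own key and each step has its own address, going Back doesn't break anything; the cost is extra server-side bookkeeping, which is presumably the "not as easy" part.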
The survey then failed to provide Back buttons on all pages. That's incompetent.
True
The survey then asked me questions like "How many hours do you spend contributing to Wikipedia, per week?", followed by "How many hours do you spend administering Wikipedia?", followed by "How many hours do you spend supporting Wikipedia in technical ways?" And that ended up being profoundly insulting. Here's why.
The administrative and technical work I do on Wikipedia feels like "contributions" to me, so (not knowing the next questions were coming up) I included those hours in my first answer. And the technical work I do feels like "administration", so (not knowing the next question was coming up) I included that in my second answer. Therefore, if (as I suspect) you're assuming those three categories are disjoint, and since my major contributions lately have all been technical, I've inadvertently overstated my overall contributions in this survey by a factor of three.
I assumed they intended the first question to be a total and so answered the same as you. If that assumption was incorrect then my response is also overstated.
Also, the survey took *way* too long. And there was no information given up-front about how long it might take. The progress bar in the upper right-hand corner was a clue and a nice touch, but it came too late.
Absolutely. I did finish it, but only because I'd got so far through before realising how long it was taking. When it said it could take 30 mins to complete (or whatever it said), I assumed it was giving an absolute maximum and it would actually be far shorter - it wasn't.
The survey also took too long in relation to the amount of useful data likely to be gleaned from it. Short, tightly-focused surveys give the surveyee the impression that some well-thought-out, concise questions are being addressed by the surveyor. Long, scattershot surveys give the impression that the surveyors aren't quite sure what they're looking for, are trying to ask everything they can think of, and are imagining that they'll mine the data for interesting results later. But, with poorly-defined surveys, that task often ends up being difficult or impossible. So I'm left begrudging the time I spent filling out the survey, because it feels like the ratio of time investment (by me) to useful information which can be gleaned (by you) is not good.
Indeed - the first thing you need to work out when writing a survey is what you want to learn from it. I'm not sure they did that...
The survey asked me to specify things like "approximate number of articles edited" and "percentage of time spent translating" using drop-down selection boxes -- and with an increment of 1 between the available choices! That's just silly. (I dreaded how long I was going to have to scroll down to find my article edit count -- 1196 -- and was both relieved and annoyed to discover that, after 500 entries, the drop-down list ended with "more than 500".)
I have no idea how many articles I've edited and guessed. I imagine most other people guessed as well, which means having the numbers accurate to 1 article is meaningless. They should have had groups (0-10, 11-50, 51-100, 101-200, etc); the data would be just as useful and it would be far quicker to fill out.
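Binning like that is also trivial to implement; a small sketch in Python, with the bin boundaries taken from the ranges suggested above (they are illustrative, not anything the survey used):

    # Map an exact edit count onto the coarse ranges proposed above.
    import bisect

    UPPER_BOUNDS = [10, 50, 100, 200, 500]   # upper edge of each bin
    LABELS = ["0-10", "11-50", "51-100", "101-200", "201-500", "more than 500"]

    def edit_count_bin(count):
        """Return the label of the range containing `count`."""
        return LABELS[bisect.bisect_left(UPPER_BOUNDS, count)]

    assert edit_count_bin(7) == "0-10"
    assert edit_count_bin(11) == "11-50"
    assert edit_count_bin(1196) == "more than 500"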
The survey's categories were too bluntly taken from existing lists. For example, the list I had to choose my employment from was apparently taken from one of those dreadful Department of Commerce categorizations that I have just as much trouble finding my job in when I fill out my tax forms.
It was the attempt to categorise what kind of articles you edit that annoyed me. What does "General information" mean?
At the very end, the survey asked if I wanted to submit my results, or fix any mistakes. But the provided way to fix mistakes was to use the Back button -- perhaps several dozen times -- which I wouldn't have felt like doing even if the chain of Back buttons were complete.
A list of questions (without responses, since that would take up far too much space) would have been good.
The survey was clearly designed by someone who was thinking about the data they wanted to collect, and in a scattershot way. The survey was clearly not designed with the person completing it in mind. The survey was clearly not designed or vetted by anyone who knew anything about designing good surveys.
You don't need someone that's good at designing surveys (well, you do, but not to spot most of these problems); you just need to try the survey out on a few people first.
I probably had more complaints to list, but I shouldn't waste as much time on this letter as I already wasted taking the survey, so I'll stop here.
Bottom line: Please use the results of this survey with extreme care, if at all. The results are going to be heavily, heavily biased by the inadvertent selection criteria involved in the survey's hostility towards its participants. If you conduct a survey like this again, please find someone to assist in the process who knows something about real-world survey work.
I was under the impression it was done with the support of experts - if that's the case, pick better experts next time!
2008/11/1 Thomas Dalton thomas.dalton@gmail.com:
You don't need someone that's good at designing surveys (well you do, but not to spot most of these problems), you just need to try the survey out on a few people first.
The survey was tried out on a group of testers and translators. You only get so much useful feedback - the feedback that we're getting from actually running the survey is much more detailed and valuable for future surveys.
I was under the impression it was done with the support of experts - if that's the case, pick better experts next time!
It was developed by the UNU-Merit Collaborative Creativity Group, who have developed and run in-depth, multilingual surveys on the free software movement, probably one of the most comparable specialized communities. It's a first run, and the results will be imperfect and need to be interpreted very carefully -- but we'll get some basic, useful data, and we have a huge amount of feedback that will help with the design of future surveys. I don't think we could have done much better, especially given that the only resources we spent on this project are staff time to shepherd it.
On Sat, Nov 1, 2008 at 10:45 AM, Erik Moeller erik@wikimedia.org wrote:
I don't think we could have done much better, especially given that the only resources we spent on this project are staff time to shepherd it. --
It's probably fine as a university study that Wikimedia is helping with. Since we didn't spend much (if anything) on it, I wouldn't be too hard on it. If one question has good results that may be worth it. Even if we only get a relative ratio of people willing to take surveys or something.
Having said that, I'm not going to take it, and I actually quite enjoy taking and creating surveys, but this one sounds terrible from Steve's description. I wonder if we can get the data about how many people clicked on the "take the survey" link. That might be the only actually good data available. (Which I still consider a success, since the foundation didn't actually spend many resources on it.)
Judson Dunn wrote:
On Sat, Nov 1, 2008 at 10:45 AM, Erik Moeller erik@wikimedia.org wrote:
I don't think we could have done much better, especially given that the only resources we spent on this project are staff time to shepherd it.
It's probably fine as a university study that Wikimedia is helping with. Since we didn't spend much (if anything) on it, I wouldn't be too hard on it.
I wouldn't have been so hard on it, except that I was led to it from a link appearing at the top of every Wikipedia page, a spot that's usually reserved for things that are really significant and really important.
On Sat, Nov 1, 2008 at 4:09 PM, Steve Summit scs@eskimo.com wrote:
I wouldn't have been so hard on it, except that I was led to it from a link appearing at the top of every Wikipedia page, a spot that's usually reserved for things that are really significant and really important.
This link was added at the top of every project afaik, by Cary Bass, without any discussion with the project's active editors, and no edit summary was even used.
2008/11/1 Judson Dunn cohesion@sleepyhead.org:
It's probably fine as a university study that Wikimedia is helping with. Since we didn't spend much (if anything) on it, I wouldn't be too hard on it. If one question has good results that may be worth it. Even if we only get a relative ratio of people willing to take surveys or something.
Lol, I would expect some more useful data than that. So far the statistics indicate that there are almost 80,000 submitted questionnaires, out of a total of 130,000 who at least took the first question. That's a pretty high submission rate.
Looking again through the questionnaire, here are some of the questions which I think will yield useful data:
* basic composition of sample (readers vs. contributors)
* basic demographics (gender, age, nationality, language, education level, etc.) - exception: the "years of formal education" question will probably be of limited usefulness; the occupation breakdown will have some gaps
* what contributors do - exceptions: the detailed hours breakdown and category breakdown will probably be of limited usefulness
* the "why contribute" reasons
* why non-contributors stopped contributing
* the "what purpose" question for readers
* the quality questions for readers
* the project and organization awareness questions
Sure, virtually every multiple choice question could have benefited from additional choices, but that's always going to be the case -- you can either try to process thousands of write-ins, or live with the fact that some reasons will not be represented.
In general, there are some "numbers" questions which are dubious, but we'll see what kind of data we get from those.
We won't get a representative selection of readers, but we wouldn't get that anyway through a sitenotice survey. It's possible to just interpret subsamples of the data which you want to examine to understand e.g. differences between casual readers and frequent readers.
The anonymized data will be CC-BY, so we'll all be able to get out of it what's useful, and flag what's not.
What type of data substantiation are they planning on doing using the username they ask for? A lot of the questions are moot once they have the username - you can just look up simple data points like number of articles edited etc.
Nathan
2008/11/1 Nathan nawrich@gmail.com:
What type of data substantiation are they planning on doing using the username they ask for? A lot of the questions are moot once they have the username - you can just look up simple data points like number of articles edited etc.
Yes, for users who provide the name, they're planning to validate at least the basic edit counts and such -- I'm not sure what additional validation they'll do, but you can ask them at info(at)wikipediastudy(dot)org.
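For the curious, looking up a volunteered username's edit count is a one-call job against MediaWiki's web API; a sketch of the kind of check that would make the self-reported numbers verifiable (this is only an illustration of what is possible, not necessarily the researchers' actual method):

    # Sketch: fetch an account's edit count from the MediaWiki API.
    import json
    import urllib.parse
    import urllib.request

    def edit_count(username, api="https://en.wikipedia.org/w/api.php"):
        params = urllib.parse.urlencode({
            "action": "query",
            "list": "users",
            "ususers": username,
            "usprop": "editcount",
            "format": "json",
        })
        with urllib.request.urlopen("%s?%s" % (api, params)) as resp:
            data = json.load(resp)
        return data["query"]["users"][0].get("editcount")

    print(edit_count("Jimbo Wales"))   # total recorded edits for that account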
Erik Moeller wrote:
Sure, virtually every multiple choice question could have benefited from additional choices, but that's always going to be the case -- you can either try to process thousands of write-ins, or live with the fact that some reasons will not be represented.
Although true, most of the complaints seem to be about one or two omissions on particular questions. The ones that have come up most often on the English Wikipedia are:
1. The question about how you use talk pages is missing something like "to discuss how to improve the article", which is actually what I thought their main reason for existing was. =]
2. The question about why you don't donate to the Wikimedia Foundation is lacking the standard "I prioritize giving to other charities more highly" answer, which ends up making it seem like a vaguely accusatory question (i.e. if you don't donate, it must be because you're either poor, or dislike Wikimedia).
I thought most of the rest was reasonable.
-Mark
Erik, in my opinion this survey highlights one of the issues that I have with the Wikimedia Foundation: __a failure to collaborate - to utilize Wikipedians and Wikipedia__
By this I mean a failure to use the talented people that are part of the community, and a failure to use Wikipedia as a resource to find those people. I would expect before taking on a survey like this one (assuming it was WMF driven), or permitting a survey to be taken with its blessing (assuming it was UNU driven), that several questions would be asked:
* What is the purpose of this survey?
* Look inward - how can Wikipedia and Wikipedians be used as a resource on this project?
* Are there contributors/volunteers who have surveying expertise?
* How do we identify those volunteers without skewing the survey results?
* Are we using open source technology? In less than 5 minutes I found LimeSurvey, which appears to be well written (although the forward/back buttons don't work), has buttons on every page and a resume-later option - and is superior to the software used for the UNU survey.
* Is the open source technology well done? If not, what are other options?
The same could be said of the recent donation banner - there are many wikipedians - people invested in the success of wikipedia that have non-profit and fund-raising expertise that could have been tapped to help design, share best practices etc. The comments by the guy from soschildren.org seem to be things we should have known beforehand.
I think this survey - at least for me - hurt the goodwill I feel for the WMF because it was disrespectful of my time, and showed serious technology defects that will obviously make the results less than accurate. In other words my level of trust in the WMF has deteriorated.
Using the collaborative/expertise process is less difficult than it would seem. I have had times where I have found Wikipedians who could answer esoteric questions by reading the Wikipedia article, looking at the history and emailing a few of them to see if they could help me interpret or understand a difficult concept. My feeling is that if I can demonstrate a rudimentary understanding of the subject matter (as gleaned from Wikipedia) and can ask intelligent questions, I will probably find someone who can help me get my specific question answered.
This would be even more effective if used by the WMF - I know that if I got an email from anyone with a wikimedia.org email address (especially if it was a name I recognized - like Erik, Sue, Brion, Cary, Jay, etc) telling me they noticed I contributed to page widget, that they needed a few people with widget expertise, and asking whether I could help, I'd help in a heartbeat, because I would see that contribution as being valuable, just like I see my edits as being a valuable contribution.
Jim
On Sat, Nov 1, 2008 at 10:45 AM, Erik Moeller erik@wikimedia.org wrote:
2008/11/1 Thomas Dalton thomas.dalton@gmail.com:
You don't need someone that's good at designing surveys (well you do, but not to spot most of these problems), you just need to try the survey out on a few people first.
The survey was tried out on a group of testers and translators. You only get so much useful feedback - the feedback that we're getting from actually running the survey is much more detailed and valuable for future surveys.
I was under the impression it was done with the support of experts - if that's the case, pick better experts next time!
It was developed by the UNU-Merit Collaborative Creativity Group, who have developed and run in-depth, multilingual surveys on the free software movement, probably one of the most comparable specialized communities. It's a first run, and the results will be imperfect and need to be interpreted very carefully -- but we'll get some basic, useful data, and we have a huge amount of feedback that will help with the design of future surveys. I don't think we could have done much better, especially given that the only resources we spent on this project are staff time to shepherd it. -- Erik Möller Deputy Director, Wikimedia Foundation
Support Free Knowledge: http://wikimediafoundation.org/wiki/Donate
on 11/6/08 10:34 AM, Jim at trodel@gmail.com wrote:
Erik, in my opinion this survey highlights one of the issues that I have with the Wikimedia Foundation: __a failure to collaborate - to utilize Wikipedians and Wikipedia__
By this I mean a failure to use the talented people that are part of the community, and a failure to use Wikipedia as a resource to find those people.
Jim,
What an incredibly astute and very accurate observation. I will be very interested in the response.
Marc Riddell
2008/11/6 Marc Riddell michaeldavid86@comcast.net:
on 11/6/08 10:34 AM, Jim at trodel@gmail.com wrote:
Erik, in my opinion this survey highlights one of the issues that I have with the Wikimedia Foundation: __a failure to collaborate - to utilize Wikipedians and Wikipedia__
By this I mean a failure to use the talented people that are part of the community, and a failure to use Wikipedia as a resource to find those people.
Jim,
What an incredibly astute and very accurate observation. I will be very interested in the response.
Likewise. It does, indeed, seem to be a recurring feature of the WMF's mistakes. (I should clarify, I think most of what they do is great, but there are things that aren't so good and most of them seem to boil down to not consulting the community so that problems could be spotted and fixed before they actually become problematic.)
On Thu, Nov 6, 2008 at 11:16 AM, Thomas Dalton thomas.dalton@gmail.com wrote:
2008/11/6 Marc Riddell michaeldavid86@comcast.net:
on 11/6/08 10:34 AM, Jim at trodel@gmail.com wrote:
Erik, in my opinion this survey highlights one of the issues that I have with the Wikimedia Foundation: __a failure to collaborate - to utilize Wikipedians and Wikipedia__
By this I mean a failure to use the talented people that are part of the community, and a failure to use Wikipedia as a resource to find those people.
Jim,
What an incredibly astute and very accurate observation. I will be very interested in the response.
Likewise. It does, indeed, seem to be a recurring feature of the WMF's mistakes. (I should clarify, I think most of what they do is great, but there are things that aren't so good and most of them seem to boil down to not consulting the community so that problems could be spotted and fixed before they actually become problematic.)
I should also clarify that I think the WMF is awesome! The good far outweighs the bad, and, in the past, I have felt that most of the criticisms of the foundation (executive leadership, transparency, etc.) were impatient, as the problems were the result of a fast-growing organization that was doing well considering the difficulties and challenges that it faced.
I would just like to see the foundation leverage the efforts and willingness of its volunteers to give of their time and talents in new ways, i.e. in ways other than 1) helping new users and solving problems (OTRS), 2) building documents and content (Wikipedia and its progeny) and 3) programming code (MediaWiki and its extensions).
This is especially important as the foundation develops a professional staff. Before, if something was to get done, it was done by volunteers (and usually quite a few of them) who had contributed to the project and were deeply vested in its success. Now things can be done as a job assignment, and it will become increasingly possible to have projects taken on by a smaller group of people (school projects, dissertations/theses, in addition to employees) who now have a way (through responsive foundation employees) to communicate and get permission/access to the appropriate information, but who may not have had the shared collaborative giving process of actually editing the encyclopedia in their spare time (between work/family/school/other responsibilities). Nothing can substitute for the experience of making a few hundred edits on Wikipedia for understanding the perspective, the diversity of ideas, the shared values, etc.
Jim
On 11/1/08, Thomas Dalton thomas.dalton@gmail.com wrote:
The survey told me I couldn't use my browser's Back and Forward buttons, but had to use its own. That's rude.
That's a technical issue - it's certainly possible to do it in such a way that the Back and Forward buttons work, but it's not as easy.
I missed the part that said not to use these buttons, so I used them. I hope this didn't make my responses any more corrupted than they already were.
—C.W.
I tend to agree with many of your comments on the survey and would just like to pick out some of the points I disagree with:
On Sat, Nov 1, 2008 at 3:21 PM, Steve Summit scs@eskimo.com wrote:
- who can be bothered to search for his country and language
(several times) in strictly-alphabetical lists of every single country and language in the world
Well, on the one hand I am quite happy to have a list where it doesn't say "United States", "United Kingdom" at the top, then two dashes, and then all the "less important countries".
What I do agree with, though, is that this could have been made more language-specific, if there had been more preparation time. E.g., if someone chooses the German version of the survey, Germany, Austria and Switzerland could have been at the top, etc.
- who knows the 2-character ISO code for the languages he knows,
even when they're not obvious (e.g. DE for German)
See, I think you should choose an example you know about next time ;-) All German URLs end in .de, the German Wikipedia says "de.wikipedia.org", etc. etc. "DE" for German might not be obvious to an English speaker, but it is obvious to a German speaker, and that's the whole point of it.
- who knows the 3-character ISO code for the currency he uses
Come on, every bank statement of yours will tell you the ISO code of the currency your account is in, you will probably find it on every magazine that you read and so on and so on. Please don't tell me that this is such an academic thing...
But these are minor points; again, I agree with the general direction of your argument.
Michael
2008/11/1 Michael Bimmler mbimmler@gmail.com:
Well, on the one hand I am quite happy to have a list where it doesn't say "United States", "United Kingdom" at the top, then two dashes, and then all the "less important countries".
What I do agree with, though, is that this could have been made more language-specific, if there had been more preparation time. E.g., if someone chooses the German version of the survey, Germany, Austria and Switzerland could have been at the top, etc.
Yep, I agree - this was one of our requests, but it was not to be. These are usability quirks that make it more cumbersome than necessary to complete the survey, but, other than potentially causing more people not to answer these questions, they should not influence the results.
2008/11/1 Erik Moeller erik@wikimedia.org:
2008/11/1 Michael Bimmler mbimmler@gmail.com:
Well, on the one hand I am quite happy to have a list where it doesn't say "United States", "United Kingdom" at the top, then two dashes, and then all the "less important countries".
What I do agree with, though, is that this could have been made more language-specific, if there had been more preparation time. E.g., if someone chooses the German version of the survey, Germany, Austria and Switzerland could have been at the top, etc.
Yep, I agree - this was one of our requests, but it was not to be. These are usability quirks that make it more cumbersome than necessary to complete the survey, but, other than potentially causing more people not to answer these questions, they should not influence the results.
Well, you may end up with a disproportionate representation of people speaking languages near the beginning of the alphabet. The data about what speakers of each language do should be fine, but the data about how many people speak each language will be unreliable.
2008/11/1 Thomas Dalton thomas.dalton@gmail.com:
Well, you may end up with a disproportionate representation of people speaking languages near the beginning of the alphabet.
Putting some countries first protects against such selection bias in common cases, though it could potentially introduce other biases (countries not in the top list may be underrepresented). The only way to truly protect against selection biases of any kind is to randomize the list, which obviously is much more cumbersome.
We'll have to see the actual data to assess how large these potential distortions might be. For example, if 95% of respondents completed the country/languages questions, then the selection bias of not finding your country is probably relatively small.
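To make the randomization point concrete: shuffling the option order once per respondent (and keeping that order stable for them) spreads any position effect evenly across the whole list. A short sketch, with an abbreviated, made-up country list:

    # Sketch: a per-respondent random ordering of the options, seeded by a
    # respondent id so the order stays stable if the page is reloaded.
    import random

    COUNTRIES = ["Argentina", "Germany", "Japan", "Netherlands",
                 "United Kingdom", "United States"]

    def options_for(respondent_id):
        """Return the country options in a per-respondent random order."""
        rng = random.Random(respondent_id)   # deterministic per respondent
        shuffled = COUNTRIES[:]
        rng.shuffle(shuffled)
        return shuffled

    # Different respondents see different orders; the same respondent
    # always sees the same order.
    assert options_for("user-42") == options_for("user-42")

The trade-off Erik mentions is real: randomization removes the alphabet bias but makes the list harder to scan, which is why the usual compromise is a short "most common" block at the top followed by the full alphabetical list.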
2008/11/1 Erik Moeller erik@wikimedia.org:
2008/11/1 Thomas Dalton thomas.dalton@gmail.com:
Well, you may end up with a disproportionate representation of people speaking languages near the beginning of the alphabet.
Putting some countries first protects against such selection bias in common cases, though it could potentially introduce other biases (countries not in the top list may be underrepresented). The only way to truly protect against selection biases of any kind is to randomize the list, which obviously is much more cumbersome.
Indeed, there is no ideal solution.
We'll have to see the actual data to assess how large these potential distortions might be. For example, if 95% of respondents completed the country/languages questions, then the selection bias of not finding your country is probably relatively small.
It's the people that stopped answering questions completely just before the language questions that are the problem - there is no way to know if they gave up because they couldn't find their language or because they'd just had enough. Obviously, if very few people stopped at that point then it doesn't matter, but chances are a significant number would have stopped at that point by random chance anyway, which makes it difficult to interpret the data.
2008/11/1 Thomas Dalton thomas.dalton@gmail.com:
It's the people that stopped answering questions completely just before the language questions that are the problem
Sure. But before we actually look at the data (which will tell us that, too), I would be careful speculating about the extent of bias in each question. In any event, any survey is always an approximation of the truth -- just ask the US election pollsters. :-)
Michael Bimmler wrote:
On Sat, Nov 1, 2008 at 3:21 PM, Steve Summit scs@eskimo.com wrote:
- who can be bothered to search for his country and language
(several times) in strictly-alphabetical lists of every single country and language in the world
Well, on the one hand I am quite happy to have a list where it doesn't say "United States", "United Kingdom" at the top, then two dashes, and then all the "less important countries".
*Shrug* I wouldn't really care about that. But my nationality is Dutch, and that couldn't be found at the time when I took the survey. At least, not under the "D". And neither under the "N" for Netherlands. It was, of course, under the "T" of "The Netherlands".
That suggests to me that they really didn't test that question, seeing that it is a Dutch university and I would expect at least one of their internal testers to have been Dutch.
Eugene
2008/11/1 Eugene van der Pijll eugene@vanderpijll.nl:
Michael Bimmler wrote:
On Sat, Nov 1, 2008 at 3:21 PM, Steve Summit scs@eskimo.com wrote:
- who can be bothered to search for his country and language
(several times) in strictly-alphabetical lists of every single country and language in the world
Well, on the one hand I am quite happy to have a list where it doesn't say "United States", "United Kingdom" at the top, then two dashes, and then all the "less important countries".
*Shrug* I wouldn't really care about that. But my nationality is Dutch, and that couldn't be found at the time when I took the survey. At least, not under the "D". And neither under the "N" for Netherlands. It was, of course, under the "T" of "The Netherlands".
That suggests to me that they really didn't test that question, seeing that it is a Dutch university, and that I would expect at least one of their internal testers to be Dutch.
I found a similar problem. My passport says my nationality is "British Citizen"; the closest fit I could find on their list was "United Kingdom". They clearly don't understand what "nationality" means. I have no idea what they actually mean by "United Kingdom" - there is no such nationality. Is it a catch-all for all British Nationals? Just British Citizens? Just British Nationals resident in the UK?
On Sat, Nov 1, 2008 at 08:47, Michael Bimmler mbimmler@gmail.com wrote:
On Sat, Nov 1, 2008 at 3:21 PM, Steve Summit scs@eskimo.com wrote:
- who knows the 3-character ISO code for the currency he uses
Come on, every bank statement of yours will tell you the ISO code of the currency your account is in, you will probably find it on every magazine that you read and so on and so on. Please don't tell me that this is such an academic thing...
Maybe where you live. In my country, the only symbol commonly used to indicate quantities of currency is "$".
On Mon, Nov 3, 2008 at 6:34 PM, Mark Wagner carnildo@gmail.com wrote:
On Sat, Nov 1, 2008 at 08:47, Michael Bimmler mbimmler@gmail.com wrote:
On Sat, Nov 1, 2008 at 3:21 PM, Steve Summit scs@eskimo.com wrote:
- who knows the 3-character ISO code for the currency he uses
Come on, every bank statement of yours will tell you the ISO code of the currency your account is in, you will probably find it on every magazine that you read and so on and so on. Please don't tell me that this is such an academic thing...
Maybe where you live. In my country, the only symbol commonly used to indicate quantities of currency is "$".
-- Mark Wagner
You've never gotten anything from Canada and had to compare Canadian dollars (CAD) to United States dollars (USD)? Also, the "$" symbol stands for pesos as well as dollars - and a few other currencies too (check the [[$]] article). The survey is intended for a worldwide audience - you can't expect them to cater to just us from the US.
On Mon, Nov 3, 2008 at 16:02, Elias Friedman elipongo@gmail.com wrote:
On Mon, Nov 3, 2008 at 6:34 PM, Mark Wagner carnildo@gmail.com wrote:
On Sat, Nov 1, 2008 at 08:47, Michael Bimmler mbimmler@gmail.com wrote:
On Sat, Nov 1, 2008 at 3:21 PM, Steve Summit scs@eskimo.com wrote:
- who knows the 3-character ISO code for the currency he uses
Come on, every bank statement of yours will tell you the ISO code of the currency your account is in, you will probably find it on every magazine that you read and so on and so on. Please don't tell me that this is such an academic thing...
Maybe where you live. In my country, the only symbol commonly used to indicate quantities of currency is "$".
-- Mark Wagner
You've never gotten anything from Canada and had to compare Canadian dollars (CAD) to United States dollars (USD)?
No, but I have had to compare C$ with $, and occasionally with AU$.
On Wed, Nov 5, 2008 at 8:27 AM, Mark Wagner carnildo@gmail.com wrote:
On Mon, Nov 3, 2008 at 16:02, Elias Friedman elipongo@gmail.com wrote:
On Mon, Nov 3, 2008 at 6:34 PM, Mark Wagner carnildo@gmail.com wrote:
On Sat, Nov 1, 2008 at 08:47, Michael Bimmler mbimmler@gmail.com wrote:
On Sat, Nov 1, 2008 at 3:21 PM, Steve Summit scs@eskimo.com wrote:
- who knows the 3-character ISO code for the currency he uses
Come on, every bank statement of yours will tell you the ISO code of the currency your account is in, you will probably find it on every magazine that you read and so on and so on. Please don't tell me that this is such an academic thing...
Maybe where you live. In my country, the only symbol commonly used to indicate quantities of currency is "$".
-- Mark Wagner
You've never gotten anything from Canada and had to compare Canadian dollars (CAD) to United States dollars (USD)?
No, but I have had to compare C$ with $, and occasionally with AU$.
-- Mark Wagner
I don't know about the creators of the survey, but to tell you the truth, I had never expected that people don't know the ISO code of their own currency. I assume they never expected it either, however common or uncommon that might be.
On Wed, Nov 05, 2008 at 11:41:38PM +0100, Martijn Hoekstra wrote:
I don't know about the creators of the survey, but to tell you the truth, I had never expected that people don't know the ISO code of their own currency. I assume they never expected it either, however common or uncommon that might be.
My theory is that it's related to international travel. I'm sure that the main place I've seen ISO currency codes is in lists of exchange rates (see http://www.xe.com/ucc/). Editors who never travel internationally are less likely to need to check exchange rates.
- Carl
2008/11/6 Carl Beckhorn cbeckhorn@fastmail.fm:
On Wed, Nov 05, 2008 at 11:41:38PM +0100, Martijn Hoekstra wrote:
I don't know about the creators of the survey, but to tell you the truth, I had never expected that people don't know the ISO code of their own currency. I assume they never expected it either, however common or uncommon that might be.
My theory is that it's related to international travel. I'm sure that the main place I've seen ISO currency codes is in lists of exchange rates (see http://www.xe.com/ucc/). Editors who never travel internationally are less likely to need to check exchange rates.
Exchange rates and plane tickets. You're probably right that people who don't travel or pay close attention to financial news won't see ISO currency codes very often.
On Sat, Nov 1, 2008 at 11:47 AM, Michael Bimmler mbimmler@gmail.com wrote:
Come on, every bank statement of yours will tell you the ISO code of the currency your account is in, you will probably find it on every magazine that you read and so on and so on. Please don't tell me that this is such an academic thing...
On Mon, Nov 3, 2008 at 7:02 PM, Elias Friedman elipongo@gmail.com wrote:
You've never gotten anything from Canada and had to compare Canadian dollars (CAD) to United States dollars (USD)? Also the "$" symbol stands for pesos as well as dollars- and a few other currencies too (check the [[$]] article). The survey is intended for a worldwide audience- you can't expect them to cater to just us from the US.
On Wed, Nov 5, 2008 at 5:41 PM, Martijn Hoekstra martijnhoekstra@gmail.com wrote:
I don't know about the creators of the survey, but to tell you the truth, I had never expected that people don't know the ISO code of their own currency. I assume they never expected it either, however common or uncommon that might be.
People often complain about Americocentrism, but Eurocentrism can be just as much of a problem. America has close to four times the population of the most populous European nation, and 18 times the land area of the largest Western European nation. We're a lot more self-sufficient than any European nation, and *definitely* more than a small place like the Netherlands. Most Americans do not have frequent contact with anything relating to other countries. Most rarely buy things from Canada or Mexico. Most rarely travel to Canada or Mexico. Most rarely speak to people from Canada or Mexico, except immigrants and tourists who typically do their best to accommodate themselves to us.
Most Americans have little to no *reason* to know about non-American currency, units of measurement, languages, systems of writing, ideas, cultural concepts, or anything else. This is not because we're arrogant and think we're better than everyone else (although some of us do, of course). It's because of simple geographic and demographic reality. Most Europeans are living with foreigners close by on all sides. Americans are not. This fact needs to be respected by Europeans who are trying to make an international product, just as much as Americans doing the same need to do things as Europeans expect.
Put simply, if you're going to conduct a survey that includes a lot of Americans, you'd better make sure you have plenty of American input in the survey creation. You can't test it on Dutch people and expect that a simple translation will work fine for Americans. Not that I'm saying that's what happened here, but "I wouldn't have expected them to think of it" isn't an excuse for a serious survey. They're not supposed to *think* of it, they're supposed to *observe* it during usability testing.
I definitely know my currency code -- but only because I've read a lot of Wikipedia articles and it's come up in a few, probably due to British or other non-American influence. I've never seen it anywhere within America. It might have been on plane tickets, but I think I've taken four plane flights in my life, and only two were international. I wouldn't expect almost any American to be able to easily come up with "USD". Likewise I doubt almost any American would be sure of his language code, and the country code might pose no problem only because it's so obvious.
(Disclaimer: Yes, some Americans have plenty of contact with people from other countries, and obviously there are many exceptions to all of the above. If you're reading this, ipso facto you're most likely accustomed to reading a fairly international mailing list and using a fairly international encyclopedia, so the above might not apply to you personally. It doesn't apply to me, in fact: I had no problem with the Eurocentrism issues in the survey, although all the other problems did really annoy me.)
Come on, every bank statement of yours will tell you the ISO code of the currency your account is in, you will probably find it on every magazine that you read and so on and so on. Please don't tell me that this is such an academic thing...
This is completely untrue in England too. I have two bank accounts with two different large high street banks and have just spent five minutes looking at statements from them both. There is definitely no ISO code. I have also tried two newspapers, a utility bill, half a dozen invoices and I am none the wiser. If I was given an hour to find it offline I think I would fail (and I still have no clue what it is). I guess Google or Wikipedia would work but I have never heard of an ISO code for currency even though I have traveled to 48 countries etc etc...
On Mon, Nov 3, 2008 at 11:34 PM, Mark Wagner carnildo@gmail.com wrote:
On Sat, Nov 1, 2008 at 08:47, Michael Bimmler mbimmler@gmail.com wrote:
On Sat, Nov 1, 2008 at 3:21 PM, Steve Summit scs@eskimo.com wrote:
- who knows the 3-character ISO code for the currency he uses
Come on, every bank statement of yours will tell you the ISO code of the currency your account is in, you will probably find it on every magazine that you read and so on and so on. Please don't tell me that this is such an academic thing...
Maybe where you live. In my country, the only symbol commonly used to indicate quantities of currency is "$".
-- Mark Wagner
Even I had no idea what the ISO code was for mine (the Australian dollar) till I looked it up and noticed it was AUD. If it had asked for something like the currency shorthand or something similar, I would have known.
On Tue, Nov 4, 2008 at 7:51 AM, Andrew Cates Andrew@soschildren.org wrote:
Come on, every bank statement of yours will tell you the ISO code of the currency your account is in, you will probably find it on every magazine that you read and so on and so on. Please don't tell me that this is such an academic thing...
This is completely untrue in England too. I have two bank accounts with two different large high street banks and have just spent five minutes looking at statements from them both. There is definitely no ISO code. I have also tried two newspapers, a utility bill, half a dozen invoices and I am none the wiser. If I was given an hour to find it offline I think I would fail (and I still have no clue what it is). I guess Google or Wikipedia would work but I have never heard of an ISO code for currency even though I have traveled to 48 countries etc etc...
The discussion is getting a bit off-topic... but I just found an old bus ticket from the UK in my jacket and it clearly said "GBP" on it. But never mind; I take it that it is obviously much more common and usual in Switzerland to read "CHF" than it is in other countries of the world for people to read their own currency in shorthand. We might be a bit banking-obsessed here...
To answer K. Peachey's question: Yes, obviously "currency shorthand" or whatever would have been better (did the survey really just say "ISO currency code"? I'm too lazy to check but this would really be a usability mistake...)
Michael
Michael Bimmler wrote:
To answer K. Peachey's question: Yes, obviously "currency shorthand" or whatever would have been better (did the survey really just say "ISO currency code"? I'm too lazy to check but this would really be a usability mistake...)
Or it could have just said "currency" (which is probably what it did say) but presented to the user a drop-down list of the *names* of the currencies to select from, taken, perhaps, from the fourth column of the table at [[ISO 4217]].
(But the image at the top of that page suggests another way that even an American might find a currency code in common use. I think I've seen "USD" on airline tickets, too.)
(But the image at the top of that page suggests another way that even an American might find a currency code in common use. I think I've seen "USD" on airline tickets, too.)
Yep, for anything international (e.g. plane tickets) you will see USD instead of the dollar symbol, so it can't be confused with other dollars elsewhere (e.g. Australia [AUD]).
On Tue, Nov 4, 2008 at 2:23 AM, Michael Bimmler mbimmler@gmail.com wrote:
To answer K. Peachey's question: Yes, obviously "currency shorthand" or whatever would have been better (did the survey really just say "ISO currency code"? I'm too lazy to check but this would really be a usability mistake...)
It was a drop-down list of all currencies, with their name, sorted by ISO code, so like
AED United Arab Emirates dirham
AFN Afghani
ALL Lek
AMD Armenian dram
ANG Netherlands Antillean guilder
AOA Kwanza
ARS Argentine peso
AUD Australian dollar
... etc
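As suggested a few messages up, the same data could just as easily have been offered sorted by the currency's plain name, with the code shown as a suffix. A small sketch, limited to the entries quoted above:

    # Sketch: present the same list sorted by human-readable name, not code.
    CURRENCIES = [
        ("AED", "United Arab Emirates dirham"),
        ("AFN", "Afghani"),
        ("ALL", "Lek"),
        ("AMD", "Armenian dram"),
        ("ANG", "Netherlands Antillean guilder"),
        ("AOA", "Kwanza"),
        ("ARS", "Argentine peso"),
        ("AUD", "Australian dollar"),
    ]

    def dropdown_labels(currencies):
        """Sort by name (case-insensitively) and label as 'Name (CODE)'."""
        ordered = sorted(currencies, key=lambda c: c[1].lower())
        return ["%s (%s)" % (name, code) for code, name in ordered]

    for label in dropdown_labels(CURRENCIES):
        print(label)
    # Afghani (AFN), Argentine peso (ARS), Armenian dram (AMD), ...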
2008/11/4 Andrew Cates Andrew@soschildren.org:
Come on, every bank statement of yours will tell you the ISO code of the currency your account is in, you will probably find it on every magazine that you read and so on and so on. Please don't tell me that this is such an academic thing...
This is completely untrue in England too. I have two bank accounts with two different large high street banks and have just spent five minutes looking at statements from them both. There is definitely no ISO code. I have also tried two newspapers, a utility bill, half a dozen invoices and I am none the wiser. If I was given an hour to find it offline I think I would fail (and I still have no clue what it is). I guess Google or Wikipedia would work but I have never heard of an ISO code for currency even though I have traveled to 48 countries etc etc...
I have to concur - I've just discovered that my bank statement, quite remarkably, doesn't even have the word "pounds" on it, much less a code or the £ symbol. (I hope they haven't redenominated it in ZWD when I wasn't looking)
It's intuitive when you see the code written down; I would be comfortable guessing that most people would look at 57.43 GBP and recognise it as "£57.43". But it's intuitive to go from the code to the currency, not the other way around. In the case of the UK, I suspect most people would look at U-- and then B-- before ending up at G--...
This is a particularly confusing case for the UK, though! Most countries have it a lot simpler.
On Sat, Nov 1, 2008 at 10:21 AM, Steve Summit scs@eskimo.com wrote:
This survey could only be completed accurately by someone:
- with nothing to do / too much time on their hands
- who never makes mistakes
- who can anticipate future questions before they're asked
- who can be bothered to search for his country and language
(several times) in strictly-alphabetical lists of every single country and language in the world
- who knows the 2-character ISO code for the languages he knows,
even when they're not obvious (e.g. DE for German)
- who knows the 3-character ISO code for the currency he uses
[snip]
While I would not use your harsh language, I did encounter many of the same frustrations you did.
Un-anticipated follow-ups made me reconsider my answers to prior questions, which I couldn't change without a back button. (For example, my 'time spent' allocations will look screwy, because I binned things together which shouldn't have been.)
There were questions which I couldn't realistically provide precise and reliable answers to, such as "How many unique articles have you started" and "How many unique articles have you edited" ... though at least it didn't expect me to provide answers with 1-unit granularity over 500. (I still wasted a lot of time actually looking up the correct answers, though I'm sure almost no one else would, and I ended up having 'over 500' anyway.)
There were several cases where I was frustrated by the answer I would have ranked highest being unavailable. For example, many of my content contributions to English Wikipedia are photographs. But that was never an offered option, though write-ins helped.
It allowed you to tell it about contributions in multiple projects and languages, but didn't really provide a facility to express that your contributions were different in different languages. (For example, being an admin on some projects and not others will result in vastly different distributions of time).
I hope the question tree for people who are only readers is somewhat better.
In the future it might be helpful if the questions were made available in advance to more than just translators. I specifically tried to find the question sheet in advance for this one. I doubt I would have caught the unanticipated follow-ups without actually taking it, but I would have pointed out a couple of things which could have been improved.
All that said I don't share Steve's pessimism: I expect the results of the survey to be interesting regardless of the survey's shortcomings.
Unfortunate, but I agree
I stopped at 50%. Too long. Not friendly enough.
Immediately after it asked me how many hours per week I spent on the project, there followed a looooooonnnnnnng list of figures, from 0 to 168. One by one. Honestly... that's a bit ridiculous. Why not ranges?
How many wikipedians do I meet every week? 1, 2, 3, 4 ... 150.
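The "why not ranges?" suggestion amounts to bucketing the answer space instead of enumerating it. A rough sketch of what that could look like -- the break points are invented purely for illustration, not taken from the survey:

    # Hypothetical bucketed options for "hours per week", instead of one
    # drop-down entry per integer from 0 to 168.
    BUCKETS = [(0, 0), (1, 5), (6, 10), (11, 20), (21, 40), (41, 168)]

    def label(lo, hi):
        return str(lo) if lo == hi else f"{lo}-{hi}"

    def bucket_for(hours):
        """Return the label of the bucket containing `hours`."""
        for lo, hi in BUCKETS:
            if lo <= hours <= hi:
                return label(lo, hi)
        raise ValueError("hours must be between 0 and 168")

    print([label(lo, hi) for lo, hi in BUCKETS])   # six options, not 169
    print(bucket_for(47))                          # 41-168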
I also made a mistake in entering my number of years of study. Usually, that means higher education, so I put 5. Later I learned that it meant total years of study. It should have been... around 20, then. Provided that lifelong learning doesn't count as studying.
Anyway, I wanted to go back to fix my number of years of study. Well, no such luck: some Back buttons are missing.
Gnannnnaaaa.
Okay, suggestion for next time, Erik: a much shorter survey. And separate "contribution to" and "use of" surveys.
Ant
Steve Summit wrote:
[snip]
The survey's categories were too bluntly taken from existing lists. For example, the list I had to choose my employment from was apparently taken from one of those dreadful Department of Commerce categorizations that I have just as much trouble finding my job in when I fill out my tax forms.
At the very end, the survey asked if I wanted to submit my results, or fix any mistakes. But the provided way to fix mistakes was to use the Back button -- perhaps several dozen times -- which I wouldn't have felt like doing even if the chain of Back buttons were complete.
The survey was clearly designed by someone who was thinking about the data they wanted to collect, and in a scattershot way. The survey was clearly not designed with the person completing it in mind. The survey was clearly not designed or vetted by anyone who knew anything about designing good surveys.
I probably had more complaints to list, but I shouldn't waste as much time on this letter as I already wasted taking the survey, so I'll stop here.
Bottom line: Please use the results of this survey with extreme care, if at all. The results are going to be heavily, heavily biased by the inadvertent selection criteria involved in the survey's hostility towards its participants. If you conduct a survey like this again, please find someone to assist in the process who knows something about real-world survey work.
All I can say is that most of you must not take too many on-line surveys. I really found this one to be pretty well designed and easy to use. The fact is that there isn't always a perfect answer in a multiple-choice survey -- you just have to pick the best one (just like on any other multiple-choice test).
Also, a hint for those who didn't like the lengthy drop-down lists: just type the first letter of the listing you want and the list will advance to that letter, and it will usually advance one position each additional time you hit the same key. For example, I live in Connecticut, which is always the third "C" listing in an alphabetical list of the US states, whether it's spelled out or abbreviated. When I get to a list where I have to enter my state, all I do is press "c" three times and I'm done!
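That shortcut is just the usual type-ahead behaviour of list widgets: each press of the same letter advances to the next entry starting with it. A toy model of the behaviour being described -- an assumption about how typical drop-downs work, not anything from the survey's own code:

    # Toy model of drop-down type-ahead: each keypress advances to the next
    # option starting with that letter, wrapping around if necessary.
    def type_ahead(options, keys):
        index = -1                      # nothing selected yet
        for key in keys:
            matches = [i for i, opt in enumerate(options)
                       if opt.lower().startswith(key.lower())]
            if not matches:
                continue
            later = [i for i in matches if i > index]
            index = later[0] if later else matches[0]
        return options[index] if index >= 0 else None

    states = ["Alabama", "Alaska", "Arizona", "Arkansas",
              "California", "Colorado", "Connecticut", "Delaware"]
    print(type_ahead(states, "ccc"))    # Connecticut -- the third "C" entry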
On Sun, Nov 2, 2008 at 1:21 AM, Steve Summit scs@eskimo.com wrote:
This survey could only be completed accurately by someone:
- with nothing to do / too much time on their hands
- who never makes mistakes
- who can anticipate future questions before they're asked
- who can be bothered to search for his country and language (several times) in strictly-alphabetical lists of every single country and language in the world
- who knows the 2-character ISO code for the languages he knows, even when they're not obvious (e.g. DE for German)
- who knows the 3-character ISO code for the currency he uses
Such hate! I took the survey, and while it was imperfect, I didn't find that it provoked any kind of existential crisis.
My main complaints were about questions like "How many hours do you spend doing X: 47 hours? 48 hours? 49 hours? 50 hours?..." -- far too much precision.
And of course there's the standard problem with most surveys: you're forced into either/or choices, and the choice you really want isn't there, or you think two answers are correct, or something.
The survey's categories were too bluntly taken from existing lists. For example, the list I had to choose my employment from was apparently taken from one of those dreadful Department of Commerce categorizations that I have just as much trouble finding my job in when I fill out my tax forms.
Agree here -- given how many editors are likely to be in the IT field, you'd expect smarter, more precise options.
Similarly, it was pretty stupid how the most common option for "which language Wikipedia do you read/write/translate to" -- English -- was buried in the list. That was tedious.
The survey was clearly designed by someone who was thinking about the data they wanted to collect,
Erm...yes? Logical, don't you think?
Bottom line: Please use the results of this survey with extreme care, if at all. The results are going to be heavily, heavily biased by the inadvertent selection criteria involved in the survey's hostility towards its participants.
IMHO, leave the survey operators to deal with such issues. It's not up to us to concern ourselves with the accuracy of their data. They probably have ways of working out how meaningful the answers to each question were.
Steve