[Foundation-l] Attribution survey, first results
Nathan
nawrich at gmail.com
Wed Mar 4 20:53:46 UTC 2009
As a non-statistician (and, from this list, you'd think there are lots of
professional statisticians participating...), can one of the experts explain
the practical implications of the bias of this survey? It seems fairly
informal, intended perhaps to be food for thought but not a definitive
answer. Is this survey sufficiently accurate (i.e., accurate in a very broad
way) to serve its purpose? How much will problems with methodology (which
I'm sure Erik knew would be pointed out immediately) distort the results?
Nathan
On Wed, Mar 4, 2009 at 3:47 PM, Brian <Brian.Mingus at colorado.edu> wrote:
> This entire field has been formalized, but in my experience the key
> things to worry about are experimenter bias and subject bias.
>
> Experimenter bias in a survey context means that the survey writer
> (Erik) has expectations about the likely community answers. This has
> been clearly demonstrated: he already has a feeling about what the
> German survey results will be even though that survey hasn't been
> written yet. Writing an unbiased survey requires very careful wording
> and is a tough job. If the entire point of the survey is to find out
> what the community thinks, then the survey should be unbiased.
>
> A variety of types of subject bias can be overcome by taking a random
> sample. The concern that the survey takers are self-selected can be
> addressed by also recording various demographic information and
> normalizing the number of responses across demographic groups, or by
> applying some other kind of filter.
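
(For illustration: normalization of that kind is commonly done with
post-stratification weights. Below is a minimal sketch in Python; the
editor categories and population shares are hypothetical, not taken
from any actual Wikimedia data.)

    import pandas as pd

    # Survey responses, one row per respondent (columns are hypothetical).
    responses = pd.DataFrame({
        "editor_type": ["admin", "casual", "casual", "reader", "casual"],
        "supports_migration": [1, 0, 1, 1, 0],
    })

    # Assumed shares of each group in the target population (sum to 1).
    population_share = {"admin": 0.05, "casual": 0.45, "reader": 0.50}

    # Shares of each group among the people who actually responded.
    sample_share = responses["editor_type"].value_counts(normalize=True)

    # Weight = population share / sample share: over-represented groups
    # count for less, under-represented groups count for more.
    responses["weight"] = responses["editor_type"].map(
        lambda g: population_share[g] / sample_share[g]
    )

    # Weighted estimate of support versus the naive unweighted mean.
    weighted = ((responses["supports_migration"] * responses["weight"]).sum()
                / responses["weight"].sum())
    unweighted = responses["supports_migration"].mean()
    print(f"unweighted: {unweighted:.2f}  weighted: {weighted:.2f}")

The weighted mean simply re-balances the answers toward the assumed
population mix, so it is only as good as the population shares fed in.
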
>
> You essentially need to employ psychometric techniques in order to
> verify the construct validity of the survey (that is, to verify that
> you can really draw the intended inferences from those questions).
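
(For illustration: one routine psychometric check is internal-consistency
reliability, usually reported as Cronbach's alpha; it is a prerequisite
for, not proof of, construct validity. A small Python sketch with made-up
item scores:)

    import numpy as np

    # Rows = respondents, columns = survey items scored 1-5 (made-up data).
    items = np.array([
        [4, 5, 4, 4],
        [2, 1, 2, 3],
        [5, 5, 4, 5],
        [3, 2, 3, 3],
        [1, 2, 1, 2],
    ])

    k = items.shape[1]                         # number of items
    item_var = items.var(axis=0, ddof=1)       # per-item sample variance
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale

    # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / total variance).
    alpha = (k / (k - 1)) * (1 - item_var.sum() / total_var)
    print(f"Cronbach's alpha: {alpha:.2f}")    # ~0.7+ is conventionally acceptable
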
>
> Erik's survey, in my opinion, is likely to have low construct validity
> and should have been created by a blind, relatively unbiased third
> party instead. Creating a survey in which the subjects are not
> self-selected is a practical impossibility. I can think of some
> software methods that might help, but the better solution is to gather
> rich demographics and then filter.
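
(For illustration: a minimal sketch of the "gather rich demographics and
then filter" idea in Python, with hypothetical fields and thresholds; the
point is to restrict analysis to a slice of respondents whose size in the
real population can at least be estimated.)

    import pandas as pd

    # One row per respondent; fields and thresholds are hypothetical.
    responses = pd.DataFrame({
        "edit_count":         [12000, 40, 3, 900, 0, 250],
        "account_age_days":   [2100, 300, 15, 1400, 5, 700],
        "supports_migration": [1, 0, 1, 1, 0, 1],
    })

    # Filter down to registered, reasonably active editors -- a group whose
    # true size can at least be estimated from the project databases.
    active = responses[(responses["edit_count"] >= 100)
                       & (responses["account_age_days"] >= 180)]

    print("active editors in sample:", len(active))
    print("support among active editors:", active["supports_migration"].mean())
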
>
> On Wed, Mar 4, 2009 at 1:32 PM, Gregory Kohs <thekohser at gmail.com> wrote:
> > *phoebe ayers* phoebe.wiki at gmail.com writes:
> >
> > ++++++
> > I'm not sure there's any way to get a non-self-selected survey about
> > anything on the projects due to anonymity concerns.
> > ++++++
> >
> > I'm a 17-year veteran of implementing professional quantitative survey
> > research. Self-selection bias is a very complicated subject, but there
> > are some fairly accessible and intuitive techniques one may implement
> > to create a thoughtful survey of a target population that minimizes
> > self-selection bias concerns. This allows the stakeholders to focus on
> > the challenge of deriving meaning from the response data rather than
> > feeling nauseated over the sampling methodology.
> >
> > I am willing to give, pro bono, 45 minutes of telephone consulting time
> > to any Wikimedia Foundation staff member who is attached to this
> > particular survey project, on the condition that they will be open and
> > attentive to the possibility that a properly designed and fairly
> > executed survey may not return results that foster their preconceived
> > desire to railroad through a license migration (which, unfortunately,
> > is my key takeaway from observing this discussion).
> >
> > --
> > Gregory Kohs
> > Cell: 302.463.1354
>
--
Your donations keep Wikipedia running! Support the Wikimedia Foundation
today: http://www.wikimediafoundation.org/wiki/Donate