[Foundation-l] Attribution survey, first results
phoebe.wiki at gmail.com
Wed Mar 4 21:18:50 UTC 2009
For what it's worth, what Nathan says basically sums up my concerns as
well. I think for a (relatively informal, community-opinion) survey
it's less important to have an absolutely rigorous methodology (not
what I was asking for) than it is to ask the question: is this good
enough for our purposes? (and indeed, what *are* our purposes, and how
does that influence what we ask?)
Saying that community opinion should be taken into account on this
question is wonderful, and crucial -- but as we all know it's damn
hard to determine community opinion with any degree of reliability.
Devoting some thought to this non-trivial matter has useful
implications for determining *all sorts* of controversial, broad-scale
questions, however, and getting it right means that we are one step
closer to better community governance. Or if we can't get it "right",
let's acknowledge what the biases are, and be very clear on the kinds
of input that did go into this conversation. For instance, many of the
people who have participated in the GFDL rewrite and the discussion so
far are some of the preeminent free-content, free-culture,
open-knowledge experts in the world: that should be acknowledged.
There are many more potential constituencies that haven't had a say.
For instance, a while back I polled a handful of librarian colleagues
who are occasional Wikipedia contributors about their thoughts on
attribution, just for my own edification. Obviously, the plural of
anecdote is not data, but I still found their anecdotes interesting.
These are all people who know something about copyright and quite a
bit about 'attribution' in the academic world (our job, as librarians,
is often to advise people on how to provide proper credit to sources).
They were all firmly against the list-all-authors method of
attribution. One said:
"I expect no personal attribution whatsoever for work on WP. The point
of WP is that it is a communal/communitarian encyclopedia. To give
credit to individual author defeats that aim. Further, pages evolve,
even if some given selection of articles wind up printed. To identify
authors as of 2009 ignores the work that will almost certainly come
later, and it implicitly devalues that later work by giving primacy to
the people who got the ball rolling on an article."
This is a strong and interesting opinion that, as far as I know, hasn't
even been expressed in quite that way on this mailing list. Part of my
reason for questioning the survey is that its design explicitly excludes
the opinions of people like my friend, who edits under an IP afaik.
On Wed, Mar 4, 2009 at 12:53 PM, Nathan <nawrich at gmail.com> wrote:
> As a non-statistician (and, from this list, you'd think there are lots of
> professional statisticians participating...), can one of the experts explain
> the practical implications of the bias of this survey? It seems fairly
> informal, intended perhaps to be food for thought but not a definitive
> answer. Is this survey sufficiently accurate (i.e., accurate in a very broad
> way) to serve its purpose? How much will problems with methodology (which
> I'm sure Erik knew would be pointed out immediately) distort the results?