[Foundation-l] WMF Development and Memes
Mike Godwin
mnemonic at gmail.com
Wed Jan 9 14:25:53 UTC 2008
Gregory Maxwell writes:
> It is almost universal that when something is becoming worse the
> people involved think they are really fixing things. Agent
> provocateurs excluded, people don't set out to break their
> organizations.
Okay, but don't people who really are fixing things also believe
they are fixing things?
> I do think it is informative to contrast Erik's response to me with
> yours. The impression I get from Erik's "I don't feel that there's
> been significantly less or more sharing of information over the last
> few months than over the last couple of years." is "nothing has
> changed", while your response appears to be much further to the
> "Things are changing because they had to.. and for the better!".
Well, assuming that Erik and I would agree with your paraphrasing, I
don't see anything logically contradictory between the two
propositions you restate here.
> And, for the avoidance of doubt, I do not think that it has failed. I
> am, however, concerned that *Wikimedia* is closer to substantial
> failure than it has been previously. And as you may be aware I am, by
> far, not the most important person around the organization to express
> this view in the last month or so.
The thing about viral memes is that they infect indiscriminately --
even the wisest can catch them.
See for example the recent meme that Hillary Clinton's primary
campaign had collapsed. Everybody, even the experts, "knew" it was true.
And then it turned out not to be.
> But let me suggest a different 'meme' for you: "The Emperor's New
> Clothes".
Sure, it's a great meme, but the thing to remember about the story is
that it's not the great crowd of people who point out the truth --
it's the one observer who challenges the received wisdom. If there
were only one person who was critical about the Foundation while
everyone else was cheerleading about it, I'd see the more obvious
parallel. But what I see on the lists (and, like you, in private
conversations and other media as well) is frequently a kind of crowd-
sourcing of conspiracy theories.
> If our fear of self-fulfilling negative outcomes or bad press causes
> us to perform self-censorship or denial we will not only fail but we
> will very likely cause harm to the outside world in the process.
This reminds me of recent discussions I've had elsewhere about
parenting. I'm a parent myself. Obviously, any parent wants to
protect his or her child, but, equally obviously, you can't protect a
child from every conceivable thing. There's such a thing as
overprotectiveness, but that's not in itself an argument against
protectiveness.
Similarly, an argument against too much self-censorship (and I agree
there can be too much) is not the same thing as an argument against
prudence in what we choose to say.
> As the projects and Wikimedias grow there will be more things wrong,
> and so it will take more eyes to see them, and more mouths to speak up
> about them. Not less.
Of course.
> Transparency is something which is objectively measurable, so we
> should be able to escape the trap of inferences.
I am of the view that very few human enterprises and products lend
themselves to objective measurability. Back before I went to law
school, I was for a while a graduate student in experimental
psychology. What convinced me to leave the program was my feeling that
attempts to quantify and characterize human cognitive processes were
grounded irreducibly in subjectivity. You remind me of that period of
my life when you say that "Transparency is something which is
objectively measurable." I'm unaware of studies that demonstrate
irrefutably that this is the case, and I'm not sure there's even an
objective definition of "transparency." You may of course know more
about experimental studies on the subject than I do -- my experience
is a quarter-century out of date.
Nevertheless, I find that, when dealing with human beings and their
enterprises and when dealing with something as philosophically
freighted as "transparency," it's more effective to insist on trying
to act ethically rather than to insist on objectively measurable
criteria.
> There is a fine line between actual constraints and artificially
> created confidentiality suicide pacts.
There's less of a fine line between studying labor law and inferring
labor law from, say, a nondisparagement agreement. For example, when
you research the history of nondisparagement agreements, you find that
they occur because, before they occurred, there was lots of
disparagement going on, leading to costly litigation.
For this reason, labor-law experts routinely advise that non-
disparagement be the rule, even in the absence of a formal agreement,
although they also advise formal agreements. That's why you may find
that some companies, as a matter of policy, never say anything bad
about a former employee unless required to by law.
Certainly that is what I would advise, regardless of whether a
"suicide pact" existed, and it is what I have advised when working at
other organizations.
> While I don't want the foundation to be subject to a transparency
> suicide pact either, if transparency isn't one of the most 'expensive'
> soft-costs of doing business for Wikimedia then something is probably
> wrong.
I don't know that we disagree here.
> Dishonesty is a question of motivations, such questions are often hard
> to resolve.
That's precisely why one should be careful, it seems to me, about
implying dishonesty in the Foundation, in the community, or
elsewhere. More below.
> Hanlon's razor is an often applied tool when someone is suspected
> being dishonest. My own view is that Hanlon's razor isn't all that
> useful: Stupidity, Malice. If the effect is the same we still need to
> fix it.
Well, here you discount the benefit of Hanlon's Razor. Tell me I've
done something stupid, and my reaction, after reflection, may well be
to agree with you. Tell me I'm dishonest, and I'll feel the impulse
to resist anything you say after that. (You may be different -- you
might find being called "stupid" just as offensive as being called
"dishonest.") I've generally found that, when trying to offer
constructive criticism, implying things about people's motives (which
you necessarily do when implying dishonesty) is less helpful than
focusing on what is a better course.
>> I'll close by saying something I've noted before, which is that we
>> have created a culture of *editors* here -- people who look at
>> everything anyone says or writes with a critical eye -- and so we've
>
> This sounds nice, but I don't see anything objective to support it.
It's a hypothesis, not data. For my take on the meaning of
"hypothesis," see Karl Popper, The Logic of Scientific Discovery
(1959). Because there is much about the world that cannot be known
with certainty, it is important to favor what Popper calls "critical
rationalism" rather than "justificationism." We are compelled to work
with hypotheses all the time.
> Here is a counter hypothesis
It doesn't seem to me to be "counter." That is to say, both hypotheses
could logically be true.
> We have created a culture of personal ownership here -- people who
> feel responsible for the product of their collective labors, or at
> least the parts they pick and choose -- and so they feel entitled to a
> fairly high degree of control, or at least visibility into the actions
> of those who are exerting influence over it, just as you might ask
> questions as a doctor works on your body or a plumber works on your
> home.
I agree with all this. I generally would not support any policy
choices that, in my view, undercut the sense of personal investment in
the projects. That said, the work of professionalizing the
organizational infrastructure has needed to be done, and is still
being done. And this means that the Projects can't be operated like
hobbies anymore.
> This concern is heightened when the actors do not appear to
> have the same level of investment and when they do not have the
> credibility that doctors or plumbers have.
I want to suggest that geographical relocation is a pretty good sign
of individual investment. Maybe not irrefutable, but still hard
evidence.
> While I am disappointed in the level of hostility in our forums, I
> can't personally agree with the position that we have become too
> critical.
Funnily enough, if you *agreed* with me, that might be taken as
evidence against my hypothesis! ;)
> While some areas may have become too critical at times, I
> think there are a lot of areas where more criticism is needed.
I don't think we disagree about this.
> When the time comes that a Wikimedia representative can't make a
> clearly incorrect claim in the media without Wikimedians calling the
> person out about it (politely, of course) in our forums, come back to
> me.. and at that time I'll be willing to consider the idea that we're
> critical enough.
I'm not sure what you're referring to here, and it may not be helpful
to ask you to be more specific, but I want to refer you again to Karl
Popper. I don't think the problem in our memetic culture is that we're
too critical -- indeed, criticism fuels the whole enterprise, and
hooray about that. I do think that *reflexive* criticism, conspiracy-
mongering, and hostility is destructive, and I think we all ought to
be as self-aware as possible about whether we're saying things that
promote destructive memes.
--Mike