[Wikipedia-l] Wikipedia moderators and moral authority (was Re: Repost: clear guidelines and the power to enforce)

erik_moeller at gmx.de
Sat Nov 9 22:43:00 UTC 2002


> That's probably one main point of disagreement, then.  Suffice it to say
> I've been with Wikipedia from the beginning and I do think that things are
> very bad right now, far worse than they have been in the beginning.  I
> think it's becoming nearly intolerable for polite and well-meaning people
> to participate, because they're constantly having to deal with people who
> simply don't respect the rules.

It would be nice to get some specific examples. Perhaps Ed's "Annoying  
users" page (if renamed) isn't such a bad idea. I have certainly observed  
some people like Lir to be quite persistent and sometimes silly, but I  
have also noticed quite a bit of hysteria from the other side (see the  
recent "Lir again" thread).

So maybe we should start collecting individual case histories and examine  
them in more detail, instead of relying on our personal observations  
entirely. This might allow us to come up with better policies, and  
quantify the need for stronger enforcement.

> In that case, we can
> always collect a list of people who have been driven away or who have
> quietly stopped editing so much out of disgust with having to deal with
> people who just don't get it.

That's not the kind of list I'm talking about, because it only tells us  
about the reactions, not the actual actions. You may say that these people  
were driven away by silly eedjots, but I cannot tell whether this is true  
without looking at the actual conflicts. Often I've seen so-called experts  
on Wikipedia try to stop reasonable debate by simple assertion of their  
authority. This doesn't work, and it shouldn't work on Wikipedia; if  
they can't handle that fact, then I would say they had better leave.

> I'm talking about a body of trusted members, not an "elite."  Tarring the
> proposal with that word isn't an argument.

Sorry, but I do not really see much of a difference. A group with superior  
powers is an elite, trusted or not. The word "elite" requires a certain  
stability of that position, though, so it might not apply to an approach  
of random moderation privileges.

> Well, the times I'm concerned about aren't necessarily times when people
> are shouting against the majority, but when they write nonsense, brazen
> political propaganda, crankish unsupported stuff, and so forth--in other
> words, violating community standards.

Do you mean nonsense in the sense of "something that just isn't true" or  
in the sense of simple noise, like crapflooders? How do you plan to define  
/ recognize "crankish unsupported stuff"?

Yes, I know, there are egregious cases where we would all agree that they  
are not tolerable. I'm just worried that such terms might mean different  
things to different people, and if we adopt them, we risk suppression of  
non-mainstream opinions. Is Wikipedia's page about MKULTRA, a CIA mind  
control project, nonsense, crankish? No, it's not; it really happened, but  
the large majority of Americans would never believe that.

>> I am, like many others, a big believer in the concept of "soft
>> security".

> Why don't you explain exactly what that means here on the list, and why
> you and others think it's such a good thing?

The idea of soft security has evolved in wikis, and it is only fair to  
point you to the respective page at MeatballWiki for the social and  
technical components of soft security:
http://www.usemod.com/cgi-bin/mb.pl?SoftSecurity

As to why I, personally, think it's a good idea, that's simple: Once you  
introduce hard security mechanisms like banning, deletion etc., you create  
an imbalance of power, which in turn creates a risk of abuse of said power  
against those who do not have it. Abuse of power can have many different  
results, it can encourage groupthink, drive newbies away, censor  
legitimate material, ban legitimate users etc. We already *have* a  
situation where we occasionally ban legitimate users and delete legitimate  
material. We need to get away from this. I am absolutely disgusted by the  
thought that we are already banning completely innocent users.

My underlying philosophy here is that it's worse to punish an innocent man  
than to let a guilty man go free.

> It seems to me that in growing numbers people refuse to bow to "peer
> pressure" or to be "educated" about anything regarding Wikipedia.

I'd like to see evidence of those growing numbers. Wikipedia's overall  
number of users has been growing constantly; are we talking about absolute  
growth or relative growth? Again, we should try to collect empirical  
evidence.

> Without generally-accepted standards and moral authority and the shame
> culture that accompanies them, peer pressure is impossible.

Yes, but in my opinion, the worst way to attain authority is through the  
exercise of superior power. The best way is through respect. With the  
trusted user groups that are part of my certification scheme, it might be  
easier to build a reputation.

> Peer pressure seemed to work relatively well in the past.  It is working
> less and less well as the project has grown and since I left a position of
> official authority.

Well, you know that I disagree about the effect of your departure on the  
project, so let's not go into that again.

> They do certainly cool down conflicts if the person receiving the
> statement knows that the person issuing the statement has the authority to
> do something about it.  They also let the recipient know that there are
> some lines that just can't be crossed without the community taking a
> forthright stand against it.

I think this kind of last resort authority should not be concentrated but  
distributed. If a poll shows that many members think that member X has  
"crossed the line", this sends a much stronger message than any  
non-totalitarian scheme of concentrated authority. Especially if last resort  
measures like banning *can* be approved by the majority.
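
The distributed approval idea above could be sketched roughly as follows.
This is purely illustrative: the function name, quorum, and majority
threshold are invented, not an actual Wikipedia mechanism.

```python
# Hypothetical sketch: a last-resort measure (e.g. a ban) takes effect
# only if enough members vote and a majority of them approve it.
# Quorum and threshold values are made up for illustration.

def ban_approved(votes, quorum=10, majority=0.5):
    """votes: dict mapping member name -> True (ban) / False (don't ban)."""
    if len(votes) < quorum:      # too few participants: take no action
        return False
    in_favour = sum(votes.values())
    return in_favour / len(votes) > majority

poll = {"a": True, "b": True, "c": True, "d": False, "e": True,
        "f": True, "g": True, "h": False, "i": True, "j": True}
print(ban_approved(poll))  # 8 of 10 in favour -> True
```

The quorum requirement matters: without it, a handful of early voters
could trigger the very concentration of power the scheme is meant to avoid.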

It appears that Jimbo opposes voting, though, so you might be able to  
convince him of your stance.

> When the disagreement concerns Wikipedia policies and obvious
> interpretations of them, when the violator of those policies does so
> brazenly, knowingly, and mockingly--and surely you've been around long
> enough to know that this happens not infrequently now--then it's not
> particularly important that we "express respect for the other person's
> view."  By then, it's clear that diplomacy will not solve the problem.

Yes, I agree that those cases exist. But I also believe that we need to  
have a lot of patience when dealing with newbies. Not an infinite amount,  
but a lot. And I think everyone should be given the opportunity to  
rehabilitate themselves.

> Please do acknowledge that some newbies (and a few not-so-newbies) really
> *are* destructive, at least sometimes.  And that's a really *serious*
> problem, that we must not ignore simply because it violates our righteous
> liberal sensibilities (I have 'em too; I am a libertarian but not an
> anarchist).

True.

> You seem to be implying that, if we simply were nice to people, cranks,
> trolls, vandals, and other destructive elements would be adequately
> manageable.

No, I just consider hard security a last resort, to be used carefully and  
only when all else fails (as to when we know that is the case, we might  
actually develop a timeframe of conflict resolution, based on data about  
previous conflicts).

> First, Wikipedia has grown a lot.  It's the biggest wiki project in the
> world.  We *can't* make people nice as you suggest.

Allow me to psychoanalyze a bit. My experience is that most people just  
want to be respected, to be part of the "club", but some people have  
failed in their life to learn the necessary behaviors to do so. Sometimes  
we are dealing with years of problematic experiences, and often we cannot  
really help these people, I agree. But I've also seen the opposite cases,  
especially on Kuro5hin, where the combination of peer pressure and  
voting/rating has driven many trolls away or made them serious (although often  
somewhat unskilled) contributors. Why? Because they learned which  
behaviors worked and which didn't.

The best example I can think of is a troll called OOG THE CAVEMAN. At  
first he would troll and post crap in all upper case. A large number of  
his comments were hidden by majority vote, and OOG suddenly started  
posting on-topic comments. They still weren't rated highly or of high  
quality, but he stopped his behavior. I've seen (but not recorded) similar  
turnarounds.
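
The rating mechanism described above can be reduced to a toy model:
comments that have received enough votes and whose average rating falls
below some threshold are hidden. The thresholds here are invented for
illustration; Kuro5hin's actual rules differed.

```python
# Toy version of vote-based comment hiding, loosely in the K5 style.
# min_votes and hide_below are illustrative assumptions, not K5's values.

def is_hidden(ratings, min_votes=5, hide_below=1.0):
    """ratings: list of scores (e.g. 0-5) a comment has received."""
    if len(ratings) < min_votes:
        return False                 # too few votes to judge fairly
    return sum(ratings) / len(ratings) < hide_below

print(is_hidden([0, 0, 1, 0, 0]))    # trollish comment: True
print(is_hidden([4, 5, 3, 4, 5]))    # well-rated comment: False
```

The point of the minimum-vote requirement is the same as above: a single
hostile rater should never be able to hide a comment on their own.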

To make this work, we probably need both the carrot and the stick. But I  
disagree with the Christian philosophy of "Spare the rod, spoil the  
child". Force should not be used as a training mechanism but strictly for  
self protection. We should try to *always* be friendly and courteous, even  
if we ban people.

> Third, your proposal requires that the best members of Wikipedia follow
> around and politely educate an ever-growing group of destructive members.
> We've tried that.  We've lost a number of members as a result, and I
> personally am tempted, every so often, to completely forget about
> Wikipedia, and resign it to the dogs.  But I don't want to do that.  I
> still feel some responsibility for it, and I think I helped build it to
> where it is now.  I don't want to see something that I've helped build
> wear away into something awful.

Again, I'm not seeing that happen. I've seen many articles improve  
rapidly, though. Simple vandalism is a growing problem and really putting  
the wiki model to the test. We might consider a rather simple  
countermeasure: allowing edits only for logged-in members. This drastically limits the  
accessibility of the wiki, but we do have a core of contributors, and if  
we can't grow without losing some of them, maybe we should slow our  
growth.

(regarding user-to-user communication)
> We do this automatically, of necessity, on talk pages.  No internal
> messaging system would be better than direct constructive criticism on
> offending pages.

Talk pages are nice, but they have the disadvantage of being not very  
personal. The mentor idea truly centers around building a social bond.  
Although you might not want to "bond" with strangers, I can tell you with  
authority that some people are very, very good at this, and actually enjoy  
it a lot. Within a sufficiently large community, which Wikipedia is, you  
will have such mentors.

> But one reason I'm worried about the current state of Wikipedia is that we
> might have some expert reviewers coming in to do some good work here, only
> to be attacked by some eedjit who gets his jollies out of attacking an
> expert precisely because she's an expert.  That *will* happen, almost
> certainly, if the Wikipedia peer review project gets going.

Well, I can predict that I'm going to "attack" experts myself if they add  
non-NPOV content, fail to cite sources properly, insist on their authority  
to make their point etc. As you want me to acknowledge the problem of  
vandals and cranks (which I do), it would be nice if you would acknowledge  
the fallibility of experts more often. My view on experts and what makes  
an expert is very different from yours, but I believe both views can  
coexist in a good certification system.

> The whole reason behind a random sample is precisely to forestall the sort
> of "elitism" and abuse of power that you fear.

I know, and I respect this good intention. I just don't think it's the  
right approach; it will only lead to less informed decisions. Better to  
keep the decision process open to (almost) everybody (we also need to  
prevent vote flooding); that is the most effective way to reduce abuse  
of power. That's why K5's moderation system works and Slashdot's  
doesn't.
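
One simple guard against vote flooding, in the spirit of the paragraph
above, is to count at most one ballot per eligible account. Everything
here (the function, the eligibility set, the example accounts) is a
hypothetical sketch, not a description of any real system.

```python
# Sketch: open voting, but with flood prevention. Only eligible accounts
# may vote (eligibility criteria, e.g. account age, are assumed to be
# decided elsewhere), and only each account's first ballot is counted.

def tally(ballots, eligible):
    """ballots: list of (account, choice) pairs, in submission order.
    eligible: set of account names allowed to vote."""
    seen, counts = set(), {}
    for account, choice in ballots:
        if account in eligible and account not in seen:
            seen.add(account)
            counts[choice] = counts.get(choice, 0) + 1
    return counts

ballots = [("alice", "yes"), ("bob", "no"),
           ("alice", "yes"),   # duplicate ballot, ignored
           ("sock1", "no")]    # ineligible account, ignored
print(tally(ballots, eligible={"alice", "bob"}))  # {'yes': 1, 'no': 1}
```

This keeps the process open to almost everybody while making sockpuppet
flooding unprofitable, which is the balance the paragraph argues for.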

Regards,

Erik


