Quotes from me are preceded by >, from Erik by >> and followed by <<.
That's probably one main point of disagreement, then. Suffice it to say I've been with Wikipedia from the beginning, and I do think that things are very bad right now, far worse than they were at the beginning. I think it's becoming nearly intolerable for polite and well-meaning people to participate, because they're constantly having to deal with people who simply don't respect the rules.
It would be nice to get some specific examples. <<
How many specific examples would it take to convince you, I wonder? I doubt I could produce enough, because our difference is philosophical.
Perhaps Ed's "Annoying users" page (if renamed) isn't such a bad idea. <<
Due respect to Ed, but I seriously doubt that.
I have certainly observed some people like Lir to be quite persistent
and sometimes silly, but I have also noticed quite a bit of hysteria from the other side (see the recent "Lir again" thread). <<
Hysteria? I have to support Zoe here; Lir is a disruptive child, and she should probably be banned. In saying this I am not aware of being "hysterical." When you accuse people of "hysteria" with such seeming coolness, Erik, the word implies that the people responding to Lir are motivated purely by irrational emotion and, like the literally hysterical neurotics Freud studied, merely have emotional problems.
So, here you come out more or less in favor of Lir, because others have banned her and/or are suggesting that she should be banned; and you accuse others, who are trying to keep Wikipedia a ***PRODUCTIVE*** project, of being "hysterical."
So maybe we should start collecting individual case histories and
examine them in more detail, instead of relying on our personal observations entirely. This might allow us to come up with better policies, and quantify the need for stronger enforcement. <<
No. Our collective experience is more than enough proof that we need *consistent* enforcement, Erik, not *stronger* enforcement. Right now we have very strong enforcement. Your interest is obviously not in reducing power but in keeping power distributed among a lot of different people who use it in totally different, inconsistent ways, none of whom commands any particular respect among other users. Virtually anyone can, for the asking, get sysop privileges and start banning IPs and locking pages. That certainly appears to be mob rule, and by golly, in my experience on Wikipedia lately I have to say it certainly *feels* like mob rule.
If we had power concentrated in the hands of a rotating group of trusted individuals, to whom one could appeal to enforce the *actual rules* that we now have on the project--we do have rules on the project, but the enforcement mechanism for them is no longer working, I think--then there would be, as there are not now, *clear consequences* for breaking the rules. These people would have a moral authority and respect that *no one* can command right now.
In that case, we can always collect a list of people who have been driven away or who have quietly stopped editing so much out of disgust with having to deal with people who just don't get it.
That's not the kind of list I'm talking about, because it only tells us
about the reactions, not the actual actions. You may say that these people were driven away by silly eedjots, but I cannot tell whether this is true without looking at the actual conflicts. <<
Be serious--look at what you just wrote. Does anyone other than you really need it to be proven? It's *obvious* to anyone who has observed very many of the people who have left the project in disgust. It's also quite obviously the fact that we have to tolerate a bunch of people who just don't want to play by the rules--even after being told what the rules are and that those rules are indeed not going to be changed--that leads a lot of highly qualified people to look at the website and decide not to participate.
Often I've seen so-called experts on Wikipedia try to stop reasonable
debate by simple assertion of their authority. This doesn't work, and this shouldn't work on Wikipedia, and if they can't handle that fact, it's my turn to say that they had better leave. <<
Wait a second. Take a step back and put this exchange into context. I said that Wikipedia is descending into a sort of mob rule, and that this has driven away, and will continue to drive away, some of our best contributors. You reply, here, by defending the mob against the experts. This seems to me to imply that you simply value the radical freedom and openness in Wikipedia--which I wholeheartedly agree is one main key to its success--above retaining the people who can write and have written some of the best articles in the project.
I think your priorities are seriously askew.
I'm talking about a body of trusted members, not an "elite." Tarring the proposal with that word isn't an argument.
Sorry, but I do not really see much of a difference. A group with
superior powers is an elite, trusted or not. The word "elite" requires a certain stability of that position, though, so it might not apply to an approach of random moderation privileges. <<
That shows that you're viewing this ideologically, and that you're expecting the rest of us to buy an essentially anarchistic ideology: *any* group of trusted members that has powers others don't have is *by definition* an "elite." But in the mouth of any libertarian or anarchist, "elites" (and "cabals") are necessarily evil. Hence anyone distinguished by special powers is an evil elite, to be opposed.
The result of this ideology is that *anyone* who is distinguished in any way that results in their having more authority--whether officially or unofficially--will be opposed as an evil "elite" or "cabal" by you and people like you. Thus we have a small but **far** too vocal band of Wikipedians who oppose virtually everything that implies any distinctions: banning, neutrality, prising off meta-discussion from article creation, and any community standards generally.
They, like every committed Wikipedian, can't help but derive pleasure from the fact that we have built up a huge structure of knowledge. But they have a woefully incomplete idea what makes it possible. What makes it possible is precisely the *combination* of freedom, which makes it easy to contribute, *and* enforced standards, which define and guide our mission.
But in the context of a wiki, the only way to enforce standards is by a great enough proportion of contributors *respecting other contributors* when those other contributors do try to enforce the rules. When that breaks down--when a large enough quorum of parasites decides the rules do not apply to them, or that nobody ought to be paid any special respect--it naturally follows that rules will be openly flouted. (And then there are, emboldened, people like yourself, Cunctator, and a few others who defend the parasites because you also innately feel that nobody ought to be paid any special respect and that there should not be any rules. How elitist.)
And then it becomes a waste of time for people who *do* want to contribute to a project defined by rules, with a particular purpose.
Well, the times I'm concerned about aren't necessarily times when people are shouting against the majority, but when they write nonsense, brazen political propaganda, crankish unsupported stuff, and so forth--in other words, when they violate community standards.
Do you mean nonsense in the sense of "something that just isn't true"
or in the sense of simple noise, like crapflooders? How do you plan to define/recognize "crankish unsupported stuff"? <<
Do you really think it would resolve anything in our discussion if I were to supply you with an answer to these questions? No, you seem to want to ask rhetorical questions, and the point of those questions is to suggest that there are no clear standards whereby we can determine when community standards have been violated.
Some standards are explicitly stated and have been vigorously debated and shaped into something well understood and agreed upon by Wikipedia's old guard and best contributors; for example, NPOV, having lower-cased titles, and not signing articles. Other standards are specific to a field, and some of them are known (and indeed perhaps knowable) only to people who have devoted adequate time to studying the subject. There are certainly clear standards of both sorts, and the fact that there are borderline cases, where we're not sure what to say, hardly impugns the idea that there are such clear standards. Moreover, if it should turn out that I were unable to answer your rhetorical questions in general, should I interpret them as nonrhetorical, that would prove nothing. We proceed by practice and experience, and these, as is well understood by philosophers, engineers, and many others, often produce excellent results even when overarching principles describing the practice and experience are not forthcoming.
Yes, I know, there are egregious cases where we would all agree that
they are not tolerable. I'm just worried that such terms might mean different things to different people, and if we adopt them, we risk suppression of non-mainstream opinions. Is Wikipedia's page about MKULTRA, a CIA mind control project, nonsense, crankish? No, it's not, it really happened, but the large majority of Americans would never believe that. <<
OK, so you're worried that "crankish unsupported stuff" and other words we'd use to describe undesirable material and behavior would be such that we'd disagree about cases. Of course we would. But if we're reasonable people and understand what an encyclopedia *generally* requires, and we have much experience actually working on an encyclopedia, then we can certainly agree on a lot of cases.
The lack of absolute unanimity in every case does not--just to give one example--provide people an excuse to write unsupportable or provably false stuff in articles. It doesn't mean we can't forthrightly eject (or completely rewrite) material that is, on any reasonable person's view, a violation of our neutrality policy.
(As an aside, [[Neutral point of view]] has specific implications for the case that you raise; where the implications are unclear, we use our judgment and engage in debate.)
I am, like many others, a big believer in the concept of "soft
security".
Why don't you explain exactly what that means here on the list, and why you and others think it's such a good thing?
The idea of soft security has evolved in wikis, and it is only fair to
point you to the respective page at MeatballWiki for the social and technical components of soft security: http://www.usemod.com/cgi-bin/mb.pl?SoftSecurity <<
I'm not particularly interested in going to the website to find out what you mean. If you want to introduce an unfamiliar term into a debate, it is polite to define it. Use the information from that Usemod page to make your case, but don't expect me to go there and then supply you with arguments against it here on Wikipedia-L.
As to why I, personally, think it's a good idea, that's simple: Once
you introduce hard security mechanisms like banning, deletion, etc., you create an imbalance of power, which in turn creates a risk of abuse of said power against those who do not have it. Abuse of power can have many different results: it can encourage groupthink, drive newbies away, censor legitimate material, ban legitimate users, etc. We already *have* a situation where we occasionally ban legitimate users and delete legitimate material. We need to get away from this. I am absolutely disgusted by the thought that we are already banning completely innocent users. <<
Erik, two things. First, we already have banning and deletion. We aren't debating whether to have those, but what you write above makes it sound as if we were. From what you say, I gather that Wikipedia decided LONG AGO against relying on SoftSecurity; those of you who promote it apparently are trying to change that.
Second, the mere *existence* of sanctions hardly implies that those sanctions will be abused to any degree at all. If your point is simply "Power can be abused," I'm totally unconvinced, and I don't think anyone else will be either. Surely you must have more of a reason for your position than that.
After all, as for example under my rough proposal, we can have strong and multiple safeguards against abuse. We can have not just one moderator, but several who are empowered to check on each other. We can make sure that the moderators are selected on a random and rotating basis, so that no one becomes particularly power-hungry and so that we can take power-abusers out of the loop.
You're also forgetting that this all happens in the wide-open context of a wiki. It's very hard to abuse power in an environment where a sizable minority of the contributors is virtually drooling over the opportunity to catch someone in an act of abuse of power. (I oughta know.)
You say that you're disgusted by the thought that we are already banning innocent users. The best way we have of ensuring against that is by adopting my proposal; it would provide for a totally open, regular, rational method of imposing sanctions, quite unlike the present system.
We aren't going to stop banning people, Erik. So we might as well find a rational method to do it. One that will reinvigorate a sense of seriousness about our mission and lend moral authority to those vested with the power to issue sanctions, something that people lack right now, but which they SORELY need.
My underlying philosophy here is that it's worse to punish an innocent
man than to let a guilty man go free. <<
But that's hardly a reason never to punish anyone for anything, is it?
It seems to me that in growing numbers people refuse to bow to "peer pressure" or to be "educated" about anything regarding Wikipedia.
I'd like to see evidence of those growing numbers. Wikipedia's overall
number of users has been growing constantly; are we talking about absolute growth or relative growth? Again, we should try to collect empirical evidence. <<
Since that will not be forthcoming, I am hoping that others will voice their opinions; ultimately, in any case, that's all we'll have. On the other hand, you know as well as I do (if you've been paying attention on Wikipedia-L) that there is a growing number of protests over the increasing anarchy that we're seeing on Wikipedia.
Again, remember, we're not talking about adopting sanctions like banning and deleting articles; we're talking about adopting a new system whereby they can be more consistently (and, as I would favor, **leniently**) imposed.
Without generally-accepted standards and moral authority and the shame culture that accompanies them, peer pressure is impossible.
Yes, but in my opinion, the worst way to attain authority is through
the exercise of superior power. The best way is through respect. With the trusted user groups that are part of my certification scheme, it might be easier to build a reputation. <<
I agree: the best way is through respect. However, we don't have the luxury of being able to depend on respect alone, because respect is an increasingly rare commodity. The situation in that regard is getting worse, as we've discussed above.
Respect is already a factor, though, coupled with the present messy sysop system. The people we most want to rein in are precisely the people who *lack* respect for anybody.
Peer pressure seemed to work relatively well in the past. It has been working less and less well as the project has grown and since I left a position of official authority.
Well, you know that I disagree about the effect of your departure on
the project, so let's not go into that again. <<
No, Erik, I didn't know that, but I don't really care about your opinion about that, either: that wasn't my point. My point was that peer pressure *did* indeed work pretty well under my tenure. It seems to be working considerably less well now. Whether my official presence had anything to do with it, I don't know or care; I do know that the problem is far worse than it was.
Again, you could ask the many people who have left or who have stopped contributing as much.
They do certainly cool down conflicts if the person receiving the statement knows that the person issuing the statement has the authority to do something about it. They also let the recipient know that there are some lines that just can't be crossed without the community taking a forthright stand against it.
I think this kind of last resort authority should not be concentrated
but distributed. If a poll shows that many members think that member X has "crossed the line", this sends a much stronger message than any non-totalitarian scheme of concentrated authority. Especially if last resort measures like banning *can* be approved by the majority. <<
You favor a democracy, susceptible to mob rule as at present; I favor a republic, where the representatives are in the hot glare of the public gaze but as a result have the moral authority that individuals in mob rule do not have.
When the disagreement concerns Wikipedia policies and obvious interpretations of them, when the violator of those policies does so brazenly, knowingly, and mockingly--and surely you've been around long enough to know that this happens not infrequently now--then it's not particularly important that we "express respect for the other person's view." By then, it's clear that diplomacy will not solve the problem.
Yes, I agree that those cases exist. But I also believe that we need to
have a lot of patience when dealing with newbies. Not an infinite amount, but a lot. And I think everyone should be given the opportunity to rehabilitate themselves. <<
Well, I do agree with that, and it's not at all inconsistent with my proposal.
Please do acknowledge that some newbies (and a few not-so-newbies) really *are* destructive, at least sometimes. And that's a really *serious* problem, that we must not ignore simply because it violates our righteous liberal sensibilities (I have 'em too; I am a libertarian but not an anarchist).
True. <<
It's nice to know that you agree with that much at least.
You seem to be implying that, if we simply were nice to people, cranks, trolls, vandals, and other destructive elements would be adequately manageable.
No, I just consider hard security a last resort, to be used carefully
and only when all else fails (as to when we know that is the case, we might actually develop a timeframe of conflict resolution, based on data about previous conflicts). <<
If by "hard security" you mean what non-Usemod readers would express by the ordinary English word "sanctions," I tend to agree with you: actual sanctions should be doled out carefully and only in relatively extreme cases. But the threat of sanctions must be there and must be clear, and the process must not be so *unnecessarily* slow and painful as to put off valuable contributors.
I'm surprised that it turns out you can accept any "hard security" at all. That's also nice to know.
First, Wikipedia has grown a lot. It's the biggest wiki project in the world. We *can't* make people nice as you suggest.
Allow me to psychoanalyze a bit. <<
This is all very nice, but theorizing is no match for plain experience. I think our experience clearly indicates that we can't make people nice if they don't want to be. It might be different if the trolls were simply tuned out by an efficient process. However, as I've observed before, trolls in the context of Wikipedia can't simply be tuned out; we've got to go around and clean up after them, in addition. And Wikipedia's trolls seemingly take great delight in seeing others try to go around and clean up after them.
Third, your proposal requires that the best members of Wikipedia follow around and politely educate an ever-growing group of destructive members. We've tried that. We've lost a number of members as a result, and I personally am tempted, every so often, to forget about Wikipedia completely and consign it to the dogs. But I don't want to do that. I still feel some responsibility for it, and I think I helped build it to where it is now. I don't want to see something that I've helped build degenerate into something awful.
Again, I'm not seeing that happen. <<
Then, frankly, Erik, you haven't been paying attention, or your ideology is blinding you to facts that seem obvious to the many others who have commented on them on Wikipedia-L.
Simple vandalism is a growing problem and is really putting the wiki model
to the test. We might consider a rather simple solution: edits only for logged in members. This drastically limits the accessibility of the wiki, but we do have a core of contributors, and if we can't grow without losing some of them, maybe we should slow our growth. <<
Vandalism is relatively easy to deal with. I strongly dislike your proposed solution, and I'm amazed, given what you've said above, that you actually support it. Anyway, the real problem is the nearly worthless contributors who, on balance, damage the project with the dross they shovel in (and the resulting controversies and wasted time).
But one reason I'm worried about the current state of Wikipedia is that we might have some expert reviewers coming in to do some good work here, only to be attacked by some eedjit who gets his jollies out of attacking an expert precisely because she's an expert. That *will* happen, almost certainly, if the Wikipedia peer review project gets going.
Well, I can predict that I'm going to "attack" experts myself if they
add non-NPOV content, fail to cite sources properly, insist on their authority to make their point etc. <<
Good luck. Remember, experts know more about their areas than you do. That's why we call them experts. So don't embarrass yourself too badly.
My view on experts and what makes an expert is very different from
yours, but I believe both views can coexist in a good certification system. <<
I'm a Ph.D. epistemologist. My dissertation adviser was (and still is) an expert on the concept of expertise, and I've read several papers in this area of social epistemology as part of a graduate course. In addition, I gave careful thought to this subject while working on Nupedia. Now, what is it that you think my view is of what experts are and of what makes an expert?
The whole reason behind a random sample is precisely to forestall the sort of "elitism" and abuse of power that you fear.
I know, and I respect this good intention. I just don't think it's the
right approach; it will only lead to less informed decisions. Better to keep the decision process open to (almost) everybody (we need to prevent vote flooding as well); that way we can reduce abuse of power most effectively. That's why K5's moderation system works and Slashdot's doesn't. <<
I just have no idea why you say the system I proposed would "lead to less informed decisions." Perhaps you should reread the proposal (even though it was, as I said, just a rough outline); I even went so far as to suggest that a body of Wikipedia "case law" might be developed, which moderators could consult. Would this really lead to *less* informed decisions than the present situation, in which virtually anyone can have the power to ban and delete?
Larry