Quotes from me are preceded by >, from Erik by >> and followed by <<.
That's probably one main point of disagreement, then. Suffice it to say I've been with Wikipedia from the beginning and I do think that things are very bad right now, far worse than they were at the beginning. I think it's becoming nearly intolerable for polite and well-meaning people to participate, because they're constantly having to deal with people who simply don't respect the rules.
It would be nice to get some specific examples. <<
How many specific examples would it take to convince you, I wonder? I doubt I could produce enough, because our difference is philosophical.
Perhaps Ed's "Annoying users" page (if renamed) isn't such a bad idea. <<
Due respect to Ed, but I seriously doubt that.
I have certainly observed some people like Lir to be quite persistent
and sometimes silly, but I have also noticed quite a bit of hysteria from the other side (see the recent "Lir again" thread). <<
Hysteria? I have to support Zoe here; Lir is a disruptive child, and she should probably be banned. In saying this I am not aware of being "hysterical." When you seemingly coolly accuse people of "hysteria," Erik, the word implies that the people responding to Lir are purely motivated by irrational emotions and, like the literally hysterical neurotics Freud studied, merely have emotional problems.
So, here you come out more or less in favor of Lir, because others have banned her and/or are suggesting that she should be banned; and you accuse others, who are trying to keep Wikipedia a ***PRODUCTIVE*** project, of being "hysterical."
So maybe we should start collecting individual case histories and
examine them in more detail, instead of relying on our personal observations entirely. This might allow us to come up with better policies, and quantify the need for stronger enforcement. <<
No. Our collective experience is more than enough proof that we need *consistent* enforcement, Erik, not *stronger* enforcement. Right now we have very strong enforcement. Your interest is obviously not in reducing power but in keeping power distributed among a lot of different people who use it in totally different, inconsistent ways, and none of whom has any particular respect among other users. Virtually anyone can, for the asking, get sysop privileges and start banning IPs and locking pages. That certainly appears to be mob rule and by golly, in my experience on Wikipedia lately I have to say it certainly *feels* like mob rule.
If we had power concentrated in the hands of a rotating group of trusted individuals who could be appealed to to enforce the *actual rules* that we now have on the project--we do have rules on the project, but the enforcement mechanism for them is no longer working, I think--then there would be, as there are not now, *clear consequences* for breaking the rules. These people would have moral authority and respect that *no one* can command right now.
In that case, we can always collect a list of people who have been driven away or who have quietly stopped editing so much out of disgust with having to deal with people who just don't get it.
That's not the kind of list I'm talking about, because it only tells us
about the reactions, not the actual actions. You may say that these people were driven away by silly eedjots, but I cannot tell whether this is true without looking at the actual conflicts. <<
Be serious--look at what you just wrote. Does anyone other than you really need it to be proven? It's *obvious* to anyone who has observed very many of the people who have left the project in disgust. It's also quite obviously because we have to tolerate a bunch of people who just don't want to play by the rules--even after being told what the rules are and that those rules are indeed not going to be changed--that a lot of highly qualified people see the website and decide not to participate.
Often I've seen so-called experts on Wikipedia try to stop reasonable
debate by simple assertion of their authority. This doesn't work, and this shouldn't work on Wikipedia, and if they can't handle that fact, it's my turn to say they had better leave. <<
Wait a second. Take a step back and put this exchange into context. I said that Wikipedia is descending into a sort of mob rule, and that this has driven away, and will continue to drive away, some of our best contributors. You reply, here, by defending the mob against the experts. This seems to me to imply that you simply value the radical freedom and openness in Wikipedia--which I wholeheartedly agree is one main key to its success--above retaining the people who can write and have written some of the best articles in the project.
I think your priorities are seriously askew.
I'm talking about a body of trusted members, not an "elite." Tarring the proposal with that word isn't an argument.
Sorry, but I do not really see much of a difference. A group with
superior powers is an elite, trusted or not. The word "elite" requires a certain stability of that position, though, so it might not apply to an approach of random moderation privileges. <<
That shows that you're essentially viewing this ideologically, and that you're expecting the rest of us to buy an essentially anarchistic ideology: *any* group of trusted members who has powers others don't have is *by definition* an "elite." But in the mouths of any libertarian or anarchist, "elites" (and "cabals") are necessarily evil. Hence anyone distinguished by special powers is an evil elite and to be opposed.
The result of this ideology is that *anyone* who is distinguished in any way that results in their having more authority--whether officially or unofficially--will be opposed as an evil "elite" or "cabal" by you and people like you. Thus we have a small but **far** too vocal band of Wikipedians who oppose virtually everything that implies any distinctions: banning, neutrality, prising off meta-discussion from article creation, and any community standards generally.
They, like every committed Wikipedian, can't help but derive pleasure from the fact that we have built up a huge structure of knowledge. But they have a woefully incomplete idea what makes it possible. What makes it possible is precisely the *combination* of freedom, which makes it easy to contribute, *and* enforced standards, which define and guide our mission.
But in the context of a wiki, the only way to enforce standards is by a great enough proportion of contributors *respecting other contributors* when those other contributors do try to enforce the rules. When that breaks down--when a large enough quorum of parasites decides the rules do not apply to them, or that nobody ought to be paid any special respect--it naturally follows that rules will be openly flouted. (And then there are, emboldened, people like yourself, Cunctator, and a few others who defend the parasites because you also innately feel that nobody ought to be paid any special respect and that there should not be any rules. How elitist.)
And then it becomes a waste of time for people who *do* want to contribute to a project defined by rules, with a particular purpose.
Well, the times I'm concerned about aren't necessarily times when people are shouting against the majority, but when they write nonsense, brazen political propaganda, crankish unsupported stuff, and so forth--in other words, violating community standards.
Do you mean nonsense in the sense of "something that just isn't true"
or in the sense of simple noise, like crapflooders? How do you plan to define / recognize "crankish unsupported stuff"? <<
Do you really think it would resolve anything in our discussion if I were to supply you with an answer to these questions? No, you seem to want to ask rhetorical questions, and the point of the questions is: there are no clear standards whereby we can determine when community standards are violated.
Some standards are explicitly stated and have been vigorously debated and shaped into something well understood and agreed upon by Wikipedia's old guard and best contributors; for example, NPOV, having lower-cased titles, and not signing articles. Other standards are specific to a field and some of them are known (and indeed perhaps knowable) only to people who have given adequate time to studying the subject. There are certainly clear standards of both sorts, and the fact that there are borderline cases, where we're not sure what to say, hardly impugns the idea that there are such clear standards. Moreover, if it should turn out that I would be unable to answer your rhetorical questions in general, if I should interpret them as nonrhetorical, that would prove nothing. We proceed by practice and experience and these, as is well understood by philosophers, engineers, and many others, often produce excellent results even when overarching principles describing the practice and experience are not forthcoming.
Yes, I know, there are egregious cases where we would all agree that
they are not tolerable. I'm just worried that such terms might mean different things to different people, and if we adopt them, we risk suppression of non-mainstream opinions. Is Wikipedia's page about MKULTRA, a CIA mind control project, nonsense, crankish? No, it's not, it really happened, but the large majority of Americans would never believe that. <<
OK, so you're worried that "crankish unsupported stuff" and other words we'd use to describe undesirable material and behavior would be such that we'd disagree about cases. Of course we would. But if we're reasonable people and understand what an encyclopedia *generally* requires, and we have much experience actually working on an encyclopedia, then we can certainly agree on a lot of cases.
The lack of absolute unanimity in every case does not--just for one example--provide people an excuse to write unsupportable or provably false stuff in articles. It doesn't mean we can't forthrightly eject (or completely rewrite) material that is, on any reasonable person's view, a violation of our neutrality policy.
(As an aside, [[Neutral point of view]] has specific implications for the case that you raise; where the implications are unclear, we use our judgment and engage in debate.)
I am, like many others, a big believer in the concept of "soft
security".
Why don't you explain exactly what that means here on the list, and why you and others think it's such a good thing?
The idea of soft security has evolved in wikis, and it is only fair to
point you to the respective page at MeatballWiki for the social and technical components of soft security: http://www.usemod.com/cgi-bin/mb.pl?SoftSecurity <<
I'm not particularly interested in going to the website to find out what you mean. If you want to introduce an unfamiliar term into a debate, it is polite to define it. Use information from that Usemod page to make your case, but don't expect me to go there and provide you arguments against it here on Wikipedia-L.
As to why I, personally, think it's a good idea, that's simple: Once
you introduce hard security mechanisms like banning, deletion etc., you create an imbalance of power, which in turn creates a risk of abuse of said power against those who do not have it. Abuse of power can have many different results, it can encourage groupthink, drive newbies away, censor legitimate material, ban legitimate users etc. We already *have* a situation where we occasionally ban legitimate users and delete legitimate material. We need to get away from this. I am absolutely disgusted by the thought that we are already banning completely innocent users. <<
Erik, two things. First, we already have banning and deletion. We aren't debating about those, but what you write above makes it sound as if we were. From what you say I infer that Wikipedia has LONG AGO decided against using SoftSecurity. Those of you who promote it apparently are trying to change it.
Second, the mere *existence* of sanctions hardly implies that those sanctions will be abused to any degree at all. If your point is simply "Power can be abused," I'm totally unconvinced and I don't think anyone else will be either. Surely you must have more reason to support it than that.
After all, as for example under my rough proposal, we can have strong and multiple safeguards against abuse. We can have not just one moderator, but several who are empowered to check on each other. We can make sure that the moderators are selected on a random and rotating basis, so that no one becomes particularly power-hungry and so that we can take power-abusers out of the loop.
You're also forgetting that this all happens in the wide-open context of a wiki. It's very hard to abuse power in an environment where a sizable minority of the contributors are virtually drooling with the opportunity to catch someone in an act of abuse of power. (I oughta know.)
You say that you're disgusted by the thought that we are already banning innocent users. The best way we have of ensuring against that is by adopting my proposal; it would provide for a totally open, regular, rational method of imposing sanctions, quite unlike the present system.
We aren't going to stop banning people, Erik. So we might as well find a rational method to do it. One that will reinvigorate a sense of seriousness about our mission and lend moral authority to those vested with the power to issue sanctions, something that people lack right now, but which they SORELY need.
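A minimal, purely illustrative sketch of the kind of mechanism described above, with random selection, rotation, and several moderators checking one another. The Python names, the panel size of three, and the simple-majority rule are assumptions made for illustration, not details of the actual proposal.

```python
import random

# Hypothetical sketch only: the pool of trusted users, the panel size,
# and the majority rule are illustrative assumptions, not details of
# the rough proposal discussed in this thread.

def draw_panel(trusted_users, panel_size=3, exclude=()):
    """Randomly pick a rotating panel of moderators, skipping the outgoing panel."""
    pool = [u for u in trusted_users if u not in exclude]
    return random.sample(pool, panel_size)

def sanction_approved(panel_votes):
    """A sanction takes effect only if a majority of the panel agrees,
    so no single moderator can act alone."""
    yes = sum(1 for v in panel_votes.values() if v)
    return yes > len(panel_votes) / 2

trusted = ["A", "B", "C", "D", "E", "F", "G"]
outgoing = ["A", "B", "C"]
panel = draw_panel(trusted, exclude=outgoing)   # a fresh panel each term
print(panel, sanction_approved({m: True for m in panel}))
```

The point of the random rotation in such a sketch is simply that no one holds the power long enough to become attached to it, and that any single moderator's decision can be overruled by the rest of the panel.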
My underlying philosophy here is that it's worse to punish an innocent
man than to let a guilty man go free. <<
But that's hardly a reason never to punish anyone for anything, is it?
It seems to me that in growing numbers people refuse to bow to "peer pressure" or to be "educated" about anything regarding Wikipedia.
I'd like to see evidence of those growing numbers. Wikipedia's overall
number of users has been growing constantly; are we talking about absolute growth or relative growth? Again, we should try to collect empirical evidence. <<
Since that will not be forthcoming, I am hoping that others will voice their opinions; ultimately in any case, that's all we'll have. On the other hand, you know as well as I do (if you've been paying attention on Wikipedia-L) that there is a growing number of protests over the growing anarchy that we're seeing on Wikipedia.
Again, remember, we're not talking about adopting sanctions like banning and deleting articles; we're talking about adopting a new system whereby they can be more consistently (and, as I would favor, **leniently**) imposed.
Without generally-accepted standards and moral authority and the shame culture that accompanies them, peer pressure is impossible.
Yes, but in my opinion, the worst way to attain authority is through
the exercise of superior power. The best way is through respect. With the trusted user groups that are part of my certification scheme, it might be easier to build a reputation. <<
I agree: the best way is through respect. However, we don't have the luxury of being able to depend on respect alone, because respect is an increasingly rare commodity. The situation in that regard is getting worse, as we've discussed above.
Respect is already a factor, though, coupled with the present messy sysop system. The people we most want to rein in are precisely the people who *lack* respect for anybody.
Peer pressure seemed to work relatively well in the past. It is working less and less well as the project has grown and since I left a position of official authority.
Well, you know that I disagree about the effect of your departure on
the project, so let's not go into that again. <<
No, Erik, I didn't know that, but I don't really care about your opinion about that, either: that wasn't my point. My point was that peer pressure *did* indeed work pretty well under my tenure. It seems to be working considerably less well now. Whether my official presence had anything to do with it, I don't know or care; I do know that the problem is far worse than it was.
Again, you could ask the many people who have left or who have stopped contributing as much.
They do certainly cool down conflicts if the person receiving the statement knows that the person issuing the statement has the authority to do something about it. They also let the recipient know that there are some lines that just can't be crossed without the community taking a forthright stand against it.
I think this kind of last resort authority should not be concentrated
but distributed. If a poll shows that many members think that member X has "crossed the line", this sends a much stronger message than any non-totalitarian scheme of concentrated authority. Especially if last resort measures like banning *can* be approved by the majority. <<
You favor a democracy, susceptible to mob rule as at present; I favor a republic, where the representatives are in the hot glare of the public gaze but as a result have the moral authority that individuals in mob rule do not have.
When the disagreement concerns Wikipedia policies and obvious interpretations of them, when the violator of those policies does so brazenly, knowingly, and mockingly--and surely you've been around long enough to know that this happens not infrequently now--then it's not particularly important that we "express respect for the other person's view." By then, it's clear that diplomacy will not solve the problem.
Yes, I agree that those cases exist. But I also believe that we need to
have a lot of patience when dealing with newbies. Not an infinite amount, but a lot. And I think everyone should be given the opportunity to rehabilitate themselves. <<
Well, I do agree with that, and it's not at all inconsistent with my proposal.
Please do acknowledge that some newbies (and a few not-so-newbies) really *are* destructive, at least sometimes. And that's a really *serious* problem, that we must not ignore simply because it violates our righteous liberal sensibilities (I have 'em too; I am a libertarian but not an anarchist).
True. <<
It's nice to know that you agree with that much at least.
You seem to be implying that, if we simply were nice to people, cranks, trolls, vandals, and other destructive elements would be adequately manageable.
No, I just consider hard security a last resort, to be used carefully
and only when all else fails (as to when we know that is the case, we might actually develop a timeframe of conflict resolution, based on data about previous conflicts). <<
If by "hard security" you mean what non-Usemod readers would express by the ordinary English word "sanctions," I tend to agree with you: actual sanctions should be doled out carefully and only in relatively extreme cases. But the threat of sanctions must be there and must be clear, and the process must not be so *unnecessarily* slow and painful as to put off valuable contributors.
I'm surprised that it turns out you can accept any "hard security" at all. That's also nice to know.
First, Wikipedia has grown a lot. It's the biggest wiki project in the world. We *can't* make people nice as you suggest.
Allow me to psychoanalyze a bit. <<
This is all very nice, but theorizing is no match for plain experience. I think our experience clearly indicates that we can't make people nice if they don't want to be. It might be different when the trolls are simply tuned out by an efficient process. However, as I've observed before, trolls in the context of Wikipedia can't simply be tuned out; we've got to go around and clean up after them, in addition. And Wikipedia's trolls seemingly take great delight in seeing others try to go around and clean up after them.
Third, your proposal requires that the best members of Wikipedia follow around and politely educate an ever-growing group of destructive members. We've tried that. We've lost a number of members as a result, and I personally am tempted, every so often, to completely forget about Wikipedia, and consign it to the dogs. But I don't want to do that. I still feel some responsibility for it, and I think I helped build it to where it is now. I don't want to see something that I've helped build wear away into something awful.
Again, I'm not seeing that happen. <<
Then, frankly, Erik, you haven't been paying attention, or your ideology is blinding you to facts that seem obvious to the many others who have commented on them on Wikipedia-l.
Simple vandalism is a growing problem and really putting the wiki model
to the test. We might consider a rather simple solution: edits only for logged in members. This drastically limits the accessibility of the wiki, but we do have a core of contributors, and if we can't grow without losing some of them, maybe we should slow our growth. <<
Vandalism is relatively easy to deal with. I strongly dislike your proposed solution, and I'm amazed, given what you've said above, that you actually support it. Anyway, it's the nearly-worthless contributors who on balance damage the project by the dross they shovel in (and the resulting controversies and wasted time) that are the real problem.
But one reason I'm worried about the current state of Wikipedia is that we might have some expert reviewers coming in to do some good work here, only to be attacked by some eedjit who gets his jollies out of attacking an expert precisely because she's an expert. That *will* happen, almost certainly, if the Wikipedia peer review project gets going.
Well, I can predict that I'm going to "attack" experts myself if they
add non-NPOV content, fail to cite sources properly, insist on their authority to make their point etc. <<
Good luck. Remember, experts know more about their areas than you do. That's why we call them experts. So don't embarrass yourself too badly.
My view on experts and what makes an expert is very different from
yours, but I believe both views can coexist in a good certification system. <<
I'm a Ph.D. epistemologist. My dissertation adviser was (still is) an expert on the concept of expertise, and I've read several papers on this area of social epistemology as part of a graduate course. In addition, I gave careful thought to this subject while working on Nupedia. Now, what is that you think my view of what experts are and what makes an expert?
The whole reason behind a random sample is precisely to forestall the sort of "elitism" and abuse of power that you fear.
I know, and I respect this good intention. I just don't think it's the
right approach; it will only lead to less informed decisions. Better keep the decision process open to (almost) everybody (we need to prevent vote flooding as well); that way we can reduce abuse of power most effectively. That's why K5's moderation system works and Slashdot's doesn't. <<
I just have no idea why you say the system I proposed would "lead to less informed decisions." Perhaps you should reread the proposal (even though it was, as I said, just a rough outline); I even went so far as to suggest that perhaps there would be a body of Wikipedia "case law" developed, that moderators could consult. This would lead to *less* informed decisions than in the present case, when virtually anyone can have the power to ban and delete?
Larry
Hello Larry!
It would be nice to get some specific examples. <<
How many specific examples would it take to convince you, I wonder? I doubt I could produce enough, because our difference is philosophical.
I don't think so. I respect the current system of sysops. I think it can be improved, but I'm not against authority. Unlike what you may have assumed at this point (I get the feeling you read the mail while you replied), I'm not an anarchist. I believe in direct democracy and therefore in the necessity of enforcement.
The main problem I have is that right now, we are not a democracy. You may call it mob rule, an elite, whatever -- decisions are generally made by individuals, though, and not by a collective. I want to create more informed decisions by involving more people in a well-defined voting process with a discussion phase and a decision phase (where these phases could be very short in emergencies -- user definable).
While I believe such a process could be quite streamlined, I think it should generally be used only when other, "softer" means don't work. This is certainly the case with all vandals: here we have no choice but to intervene. I am more lenient (and that is indeed a difference of philosophy) towards "cranks" and the like.
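A minimal sketch of the two-phase poll just described, assuming a fixed discussion window, a fixed decision window, and a minimum-contribution threshold against vote flooding. The class name, phase lengths, threshold, and simple-majority rule are illustrative assumptions, not part of the proposal.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of a two-phase community poll: discussion first,
# then voting, with a contribution threshold to discourage vote flooding.
# All parameters here are assumptions for illustration.

class Poll:
    def __init__(self, question, discussion_days=3, decision_days=2, min_edits=50):
        start = datetime.utcnow()
        self.question = question
        self.discussion_ends = start + timedelta(days=discussion_days)
        self.decision_ends = self.discussion_ends + timedelta(days=decision_days)
        self.min_edits = min_edits
        self.votes = {}

    def cast_vote(self, user, edit_count, in_favor, now=None):
        now = now or datetime.utcnow()
        if now < self.discussion_ends:
            raise ValueError("still in the discussion phase")
        if now > self.decision_ends:
            raise ValueError("poll is closed")
        if edit_count < self.min_edits:
            raise ValueError("too few contributions to vote")
        self.votes[user] = in_favor      # one vote per logged-in user

    def passes(self):
        in_favor = sum(1 for v in self.votes.values() if v)
        return in_favor > len(self.votes) / 2
```

In such a scheme the phase lengths could be shortened for emergencies, which is what "user definable" seems to gesture at above.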
By asking you whether the problem you perceive is growing or not I did not mean to imply that there is no problem. The question whether it is growing, however, is important to determine the urgency of the solution. How much time do we have to implement something meaningful?
Working on Wikipedia today, I agree more with you than I did yesterday. About half of all the anonymous edits were vandalism. It is my *perception* that this is significantly more than when I started. I'd still like to quantify this, though. It's quite possible that we are now primarily getting new users via Google, and the percentage of those who are valuable contributors is possibly smaller than with our earlier contributors, with a higher number of vandals.
Perhaps Ed's "Annoying users" page (if renamed) isn't such a bad idea. <<
Hysteria? I have to support Zoe here; Lir is a disruptive child, and she should probably be banned.
I disagree here, but I'm not familiar with all the evidence (which is why a "Problematic users" page, as a record of evidence, might be helpful). I followed much of Lir's actions today and saw nothing too problematic. She's antagonistic, sometimes silly and certainly not as smart as she thinks, but I believe we can deal with her as long as she doesn't vandalize pages.
The "hysteria" remark referred to a specific incident and was probably an overstatement, I apologized to Zoe on her user page.
No. Our collective experience is more than enough proof that we need *consistent* enforcement, Erik, not *stronger* enforcement. Right now we have very strong enforcement.
I don't think so; IP banning is quite ineffective, and we clearly need to work on this. Unlike Jimbo, I'm not so sure we're dealing with the same vandal.
But I also agree that enforcement needs to be applied consistently. I'm worried that we may have missed vandalism when sysops weren't available. But (more below) I don't think that the problem really comes from our logged in userbase.
Your interest is obviously not in reducing power but in keeping power distributed among a lot of different people who use it in totally different, inconsistent ways, and none of whom has any particular respect among other users.
Actually, no, I believe we need to quantify the generally agreed upon standards using open voting. I have described such a system already on wikitech, and have explained why I believe voting is important here on wikipedia-l.
I am for enforcement in what you rightly call "relatively extreme" cases.
If we had power concentrated in the hands of a rotating group of trusted individuals
There are many problems with this idea vs. open voting, such as the definition of "trust", the randomness of assignment (Slashdot-style voting), etc.; I pointed these out already.
That's not the kind of list I'm talking about, because it only tells us
about the reactions, not the actual actions. You may say that these people were driven away by silly eedjots, but I cannot tell whether this is true without looking at the actual conflicts. <<
Be serious--look at what you just wrote. Does anyone other than you really need it to be proven?
Perhaps, perhaps not. You're talking to me now, though, so either present to me the knowledge I do not have, or you cannot presume that I have it.
Often I've seen so-called experts on Wikipedia try to stop reasonable
debate by simple assertion of their authority. This doesn't work, and this shouldn't work on Wikipedia, and if they can't handle that fact, it's my turn to say they had better leave. <<
Wait a second. Take a step back and put this exchange into context. I said that Wikipedia is descending into a sort of mob rule, and that this has driven away, and will continue to drive away, some of our best contributors. You reply, here, by defending the mob against the experts.
No, that's a grossly incorrect characterization of my statement. You see, I asked my question above for examples of the conflicts because I myself have had conflicts with so-called experts, and I did not accept their assertion of authority as an argument, which offended them.
Now I would hardly call myself part of a "mob", and I would find it demeaning and insulting to be called so. I can draw primarily from my own experience when it comes to conflicts, so what I can say is this: If these are the kinds of conflicts you are talking about, where you think the so-called experts should be able to enforce their POV just because of their status as experts, then we are clearly of extremely opposite opinions. I do not support an expertocracy, and I don't think you do either. In the above paragraph, it sounds like it, though.
I'm talking about a body of trusted members, not an "elite." Tarring the proposal with that word isn't an argument.
Sorry, but I do not really see much of a difference. A group with
superior powers is an elite, trusted or not. The word "elite" requires a certain stability of that position, though, so it might not apply to an approach of random moderation privileges. <<
That shows that you're essentially viewing this ideologically, and that you're expecting the rest of us to buy an essentially anarchistic ideology: *any* group of trusted members who has powers others don't have is *by definition* an "elite." But in the mouths of any libertarian or anarchist, "elites" (and "cabals") are necessarily evil.
I'm not a libertarian or anarchist, and I do not consider elites necessarily evil (I do not believe in a concept of evil). I just don't think they are necessary either.
The result of this ideology is...
Please, don't assume so much.
They, like every committed Wikipedian, can't help but derive pleasure from the fact that we have built up a huge structure of knowledge. But they have a woefully incomplete idea what makes it possible. What makes it possible is precisely the *combination* of freedom, which makes it easy to contribute, *and* enforced standards, which define and guide our mission.
As I said repeatedly, I believe in enforcement, but only as a last resort, and only democratically justified. I also find enforcement more important in cases of vandalism than in cases of "cranks".
to a project defined by rules, with a particular purpose.
Well, the times I'm concerned about aren't necessarily times when people are shouting against the majority, but when they write nonsense, brazen political propaganda, crankish unsupported stuff, and so forth--in other words, violating community standards.
Do you mean nonsense in the sense of "something that just isn't true"
or in the sense of simple noise, like crapflooders? How do you plan to define / recognize "crankish unsupported stuff"? <<
Do you really think it would resolve anything in our discussion if I were to supply you with an answer to these questions? No, you seem to want to ask rhetorical questions, and the point of the questions is: there are no clear standards whereby we can determine when community standards are violated.
That's not true. Ed has proposed some good standards, although I agree with Cunctator that it is not always easy to identify partisanship (but there are egregious cases). Our policy should be to argue with people, to revert changes a few times to help them understand the wiki principle, and if they don't agree with our minimum community standards, we enforce them by community vote.
I also think, however, that your phrases above are much more vague, and much less apt for inclusion in a set of minimum standards.
OK, so you're worried that "crankish unsupported stuff" and other words we'd use to describe undesirable material and behavior would be such that we'd disagree about cases. Of course we would. But if we're reasonable people and understand what an encyclopedia *generally* requires, and we have much experience actually working on an encyclopedia, then we can certainly agree on a lot of cases.
Hopefully so, but because of the potential for disagreement, I think the decision making process should remain open.
The lack of absolute unanimity in every case does not--just to give an example--provide people an excuse to write unsupportable or provably false stuff in articles, just for one example.
True. However, it's perfectly OK to attribute provably false stuff. Ed Poor does this all the time when he writes about homosexuality :-)
It doesn't mean we can't forthrightly eject (or completely rewrite) material that is, on any reasonable person's view, a violation of our neutrality policy.
I agree.
The idea of soft security has evolved in wikis, and it is only fair to
point you to the respective page at MeatballWiki for the social and technical components of soft security: http://www.usemod.com/cgi-bin/mb.pl?SoftSecurity <<
I'm not particularly interested in going to the website to find out what you mean. If you want to introduce an unfamiliar term into a debate, it is polite to define it.
It is also polite to visit the links provided in someone else's mail in a discussion. Are you afraid of MeatballWiki because you associate it with TheCunctator? Rest assured, he's not much more popular around there than he is here. MeatballWiki is a pretty good resource for many questions we are debating, much richer in content than the meta-wikipedia, and there are many smart people over there. SoftSecurity as defined by the Meatball people involves a large number of different components, usually social concepts relying on existing wiki technology as opposed to extending it. We have talked about some aspects already. I suggest you familiarize yourself with the concept.
We already *have* a
situation where we occasionally ban legitimate users and delete legitimate material. We need to get away from this. I am absolutely disgusted by the thought that we are already banning completely innocent users. <<
Erik, two things. First, we already have banning and deletion. We aren't debating about those, but what you write above makes it sound as if we were.
No, no, no. I'm not against banning or deleting. I'm for making the process less error-prone by involving more people in it. We are banning innocent people behind multi-user proxies, and sometimes overzealous sysops delete allegedly infringing material which is completely harmless. This is, in my opinion, a very serious issue that needs to be addressed. You believe that your trusted moderators could make better decisions, and it's quite possible that in clear-cut cases, they would be better than what we have now (though in less clear-cut cases, they might be worse). I think open voting would be the best solution.
Second, the mere *existence* of sanctions hardly implies that those sanctions will be abused to any degree at all. If your point is simply "Power can be abused," I'm totally unconvinced and I don't think anyone else will be either. Surely you must have more reason to support it than that.
It's quite possible that abuse will be minimal in a scheme like yours. I'm afraid, however, that if broad policies like "crankish unsupported stuff" are adopted, a high degree of arbitrariness will be introduced. We need to be very precise here to avoid the Everything2/Slashdot dilemma.
After all, as for example under my rough proposal, we can have strong and multiple safeguards against abuse.
I like that about your proposal. It makes it closer to mine :-)
My underlying philosophy here is that it's worse to punish an innocent
man than to let a guilty man go free. <<
But that's hardly a reason never to punish anyone for anything, is it?
No, it is not. Repeat violators must be "punished" as a matter of self-defense.
I think this kind of last resort authority should not be concentrated
but distributed. If a poll shows that many members think that member X has "crossed the line", this sends a much stronger message than any non-totalitarian scheme of concentrated authority. Especially if last resort measures like banning *can* be approved by the majority. <<
You favor a democracy, susceptible to mob rule as at present
Please explain "suceptible to mob rule as at present". Currently we have a situation where decisions are made by individual sysops, they are not verified by anyone else. In my proposal, they would be verified by anyone interested in doing so (logged in users, possibly only users with >n contributions to avoid vote flooding).
I favor a republic,
It was my understanding that you are a libertarian. How is that compatible with this view? (Yes, I know you're only using a metaphor -- but is real-life group decision making that much different from Wikipedia?)
Yes, I agree that those cases exist. But I also believe that we need to
have a lot of patience when dealing with newbies. Not an infinite amount, but a lot. And I think everyone should be given the opportunity to rehabilitate themselves. <<
Well, I do agree with that, and it's not at all inconsistent with my proposal.
I think most of the supposed inconsistencies are simply misunderstandings, but the main difference in opinion is that you regard cranks as the most serious problem and I (and probably most sysops) see vandals as much more annoying and serious.
I think our experience clearly indicates that we can't make people nice if they don't want to be.
Well, no offense, but I don't think you're particularly good at making people nice, so that may have something to do with the experience :-)
Anyway, it's the nearly-worthless contributors who on balance damage the project by the dross they shovel in (and the resulting controversies and wasted time) that are the real problem.
I disagree here. I don't see a high number of "nearly-worthless contributors" (that you characterize human beings as worthless is another story). You named Lir (even there I don't agree entirely), who else?
Well, I can predict that I'm going to "attack" experts myself if they
add non-NPOV content, fail to cite sources properly, insist on their authority to make their point etc. <<
Good luck. Remember, experts know more about their areas than you do. That's why we call them experts.
That's incorrect, they're *supposed* to know more about their areas than non-experts. Often they don't. Sometimes they're even paid to lie, especially when commercial interests are involved.
I'm a Ph.D. epistemologist. My dissertation adviser was (still is) an expert on the concept of expertise, and I've read several papers on this area of social epistemology as part of a graduate course. In addition, I gave careful thought to this subject while working on Nupedia. Now, what is that you think my view of what experts are and what makes an expert?
I haven't read your adviser's papers; I base my opinions solely on your mails. It appears that you value credentials such as a degree or publications very highly, as opposed to reputation building through verifiable expertise, regardless of background. An expert, in my view, can be pseudonymous; they can be a 13-year-old kid or a motivated housewife. A degree and other credentials are only one way of measuring reputation, and not a particularly good one, because many institutions are highly biased. We had two certified medievalists on Wikipedia, who both added highly biased POV material from a Christian-apologetic perspective, for example. You yourself added quite a bit of POV material (often, but not always, marked as such).
I just have no idea why you say the system I proposed would "lead to less informed decisions." Perhaps you should reread the proposal (even though it was, as I said, just a rough outline); I even went so far as to suggest that perhaps there would be a body of Wikipedia "case law" developed, that moderators could consult. This would lead to *less* informed decisions than in the present case, when virtually anyone can have the power to ban and delete?
Yes, in some cases, because the people who make the decisions are no longer the ones who care about the matter, but instead the ones who were assigned moderation duty by your random number generator. The voting system I propose would be similar to the Recent_Changes page: you would look at the polls you find interesting, read the arguments, then vote. In your system, however, the people who vote/make the decision are simply assigned, and they may not care at all about the debate at hand and just throw in a quick "yes or no". Because of your safeguards, this is better than an individual sysop in the clear-cut cases, but possibly worse in less clear-cut cases. And I consider this especially problematic because you seem to see the less clear-cut cases as the more serious problem.
Regards,
Erik
Larry Sanger wrote:
Our collective experience is more than enough proof that we need *consistent* enforcement, Erik, not *stronger* enforcement.
You're talking about consistency regarding people like Helga and Lir, right? (I think that we're already pretty consistent about vandalism.) I agree with you and Ed (if you're saying what I think you are) that we need to design policies for these situations so that they can begin to be enforced consistently. I don't think that this requires a new class of administrator, since the situations come up rarely enough that ordinary administrators can deal with them (or could if given the power to ban logged in users, a change that would be useful even if only for vandals). But a consistent policy on what the authority to ban should be would be a very good idea.
Your interest is obviously not in reducing power but in keeping power distributed among a lot of different people who use it in totally different, inconsistent ways, and none of whom has any particular respect among other users. Virtually anyone can, for the asking, get sysop privileges and start banning IPs and locking pages. That certainly appears to be mob rule and by golly, in my experience on Wikipedia lately I have to say it certainly *feels* like mob rule.
It doesn't feel like mob rule to me, because administrators don't abuse their power. When we have mistakes, it's because the policies either aren't clear (and in that case we need a change: we need to write clear policies) or aren't understood (and in that case we don't need a change, since the mistake can be undone by another administrator).
Erik Moeller wrote:
Larry Sanger wrote:
In that case, we can always collect a list of people who have been driven away or who have quietly stopped editing so much out of disgust with having to deal with people who just don't get it.
That's not the kind of list I'm talking about, because it only tells us about the reactions, not the actual actions. You may say that these people were driven away by silly eedjots, but I cannot tell whether this is true without looking at the actual conflicts.
Be serious--look at what you just wrote. Does anyone other than you really need it to be proven? It's *obvious* to anyone who has observed very many of the people who have left the project in disgust.
It's obvious to me only because I've looked at the conflicts, and then only in those cases where I *have* looked at the conflicts (which is all of the cases that I know about, which is very few). If Erik hasn't seen the conflicts, then they should be pointed out to him (this is where viewing deleted pages without undeleting them would be useful) so that it can become obvious to him.
It's also quite obviously because we have to tolerate a bunch of people who just don't want to play by the rules--even after being told what the rules are and that those rules are indeed not going to be changed--that a lot of highly qualified people see the website and decide not to participate.
And it's not obvious to me that this actually happens at all. Who are the "highly qualified" people that decide not to participate? You know them, I suppose; Erik and I don't.
But in the mouths of any libertarian or anarchist
(quoted for context of the word "ideology" below)
The result of this ideology is that *anyone* who is distinguished in any way that results in their having more authority--whether officially or unofficially--will be opposed as an evil "elite" or "cabal" by you and people like you.
Although Erik claims not to be one of them (and I believe him), I suspect that we do have a lot of libertarians and anarchists here, and if you want people to have moral authority among these users, then it won't be helpful to give them extraordinary powers. (Of course, the anarchists will prefer to say "community trust" rather than "moral authority", but you don't have to let them in on that -_^.)
Well, the times I'm concerned about aren't necessarily times when people are shouting against the majority, but when they write nonsense, brazen political propaganda, crankish unsupported stuff, and so forth--in other words, violating community standards.
Do you mean nonsense in the sense of "something that just isn't true" or in the sense of simple noise, like crapflooders? How do you plan to define / recognize "crankish unsupported stuff"?
Do you really think it would resolve anything in our discussion if I were to supply you with an answer to these questions? No, you seem to want to ask rhetorical questions, and the point of the questions is: there are no clear standards whereby we can determine when community standards are violated.
I think that it's an important practical question if your system of moderators is to be used. What will the standards be by which moderators decide whether or not community standards have been violated? Specifically (for Erik's question), how will they decide when something is "crankish unsupported stuff"? You answer this practical question in general below:
Some standards are explicitly stated and have been vigorously debated and shaped into something well understood and agreed upon by Wikipedia's old guard and best contributors; for example, NPOV, having lower-cased titles, and not signing articles. Other standards are specific to a field and some of them are known (and indeed perhaps knowable) only to people who have given adequate time to studying the subject. There are certainly clear standards of both sorts, and the fact that there are borderline cases, where we're not sure what to say, hardly impugns the idea that there are such clear standards.
...except for the borderline cases, which is a pretty good start. And if the borderline cases are rare, then the answer to that could be case law, as you mentioned in your original moderator proposal, so now the general question is completely answered. But you didn't answer the specific question, about detecting "crankish unsupported stuff", which IMO is still a reasonable practical question.
(As an aside, [[Neutral point of view]] has specific implications for the case that you raise; where the implications are unclear, we use our judgment and engage in debate.)
Well, we do now. What will the moderators do, when there are only 3 of them at one time, and nobody besides Jimbo can overrule the trifecta until their term of office is over? (after which they still leave their precedent in case law). It might be better to clarify the implications now, where possible. We don't have to figure this out immediately, but I think that it's important to Erik, so if you have ideas, then it will help you to mention them. (It will also help Erik to read [[Wikipedia:NPOV]] and come up with specific examples, hypothetical or not, of the sort of thing that still worries him after reading that.)
Erik Moeller wrote:
I am, like many others, a big believer in the concept of "soft security".
Why don't you explain exactly what that means here on the list, and why you and others think it's such a good thing?
The idea of soft security has evolved in wikis, and it is only fair to point you to the respective page at MeatballWiki for the social and technical components of soft security: http://www.usemod.com/cgi-bin/mb.pl?SoftSecurity
I'm not particularly interested in going to the website to find out what you mean.
Then you're not particularly interested in engaging in discussion with him. That's your right.
If you want to introduce an unfamiliar term into a debate, it is polite to define it. Use information from that Usemod page to make your case, but don't expect me to go there and provide you arguments against it here on Wikipedia-L.
I went to the web page, and it means pretty much what I thought that it did (I had previously just figured out the meaning from context on this list). I'd copy down the definition for you, but there is no definition; it's a vague concept, so instead I'll list examples and nonexamples. And of course, I'll have to define what each of the examples is.
Actually, rather than put all that text into this post, let me just point you to two web pages that have it already: http://www.usemod.com/cgi-bin/mb.pl?SoftSecurity (for the examples) and http://www.usemod.com/cgi-bin/mb.pl?HardSecurity (for the nonexamples). Both pages have some stuff after the lists of examples, but you can ignore all of that (although you might find it interesting).
I think that this is an entirely reasonable way to carry on a discussion. It's not like I've put down a dozen links that you need to hunt through; I've put down two links that clearly and up front say what I want to say. If you don't want to continue the discussion under these terms, then that's your choice.
Erik, two things. First, we already have banning and deletion. From what you say I infer that Wikipedia has LONG AGO decided against using SoftSecurity. Those of you who promote it apparently are trying to change it.
Rather, Wikipedia has decided against using *only* soft security. It still uses many soft security measures, the most obvious being the ability of any user to undo the changes done by any other user (on the same level of Ed's hierarchy of power).
It's very hard to abuse power in an environment where a sizable minority of the contributors are virtually drooling with the opportunity to catch someone in an act of abuse of power. (I oughta know.)
Well, but you never really *tried*, now did you? ^_^
You say that you're disgusted by the thought that we are already banning innocent users. The best way we have of ensuring against that is by adopting my proposal; it would provide for a totally open, regular, rational method of imposing sanctions, quite unlike the present system.
But your proposal isn't about banning vandals, is it? Surely you're not saying that 3 moderators at a time is better than every administrator going after the vandals! And the banning of innocent users is primarily occurring as a side effect of the war against vandalism. This is an important, but I think different, issue. One possible solution is to automatically unban after, say, a week. Erik (or maybe it was somebody else) has suggested a technical fix that would improve this (but possibly worsen other things), on a different thread.
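A minimal sketch of the automatic-unban idea just mentioned, assuming an in-memory ban table and a one-week default; both are illustrative assumptions rather than anything actually implemented or proposed in detail here.

```python
from datetime import datetime, timedelta

# Hypothetical sketch of "automatically unban after, say, a week".
# The table and the one-week default are assumptions for illustration.

bans = {}  # address -> expiry time

def ban(address, days=7, now=None):
    now = now or datetime.utcnow()
    bans[address] = now + timedelta(days=days)

def is_banned(address, now=None):
    now = now or datetime.utcnow()
    expiry = bans.get(address)
    if expiry is None:
        return False
    if now >= expiry:        # the ban has expired, so lift it automatically
        del bans[address]
        return False
    return True
```

The appeal of expiring bans in this context is that an innocent user caught behind a banned multi-user proxy is only inconvenienced temporarily, without anyone having to notice and undo the mistake.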
My point was that peer pressure *did* indeed work pretty well under my tenure. It seems to be working considerably less well now. Whether my official presence had anything to do with it, I don't know or care; I do know that the problem is far worse than it was.
Again, you could ask the many people who have left or who have stopped contributing as much.
We should also ask the people that *haven't* done this. (Not me; I wasn't around when Larry was official.)
I don't want to see something that I've helped build wear away into something awful.
Again, I'm not seeing that happen.
Then, frankly, Erik, you haven't been paying attention, or your ideology is blinding you to facts that seem obvious to the many others who have commented on them on Wikipedia-l.
I've been paying attention, but I'm not seeing it happen either. Perhaps I'm blinded by my ideology; perhaps you're blinded by yours. Since I can't tell in either case, I'm just going to have to give each of our impressions equal weight.
But one reason I'm worried about the current state of Wikipedia is that we might have some expert reviewers coming in to do some good work here, only to be attacked by some eedjit who gets his jollies out of attacking an expert precisely because she's an expert. That *will* happen, almost certainly, if the Wikipedia peer review project gets going.
These experts will be in a much better position than Julie was to ignore their attacks. They make their edits, approve the article, and if the eedjit messes it up, Recyclopediasifter will still be OK. So while they'll no doubt be just as annoyed (since they'll have to undo the eedjit every time that the article comes back around on their schedule for review, assuming that nobody else has dealt with it in the meantime), they won't have to leave the project, since Wikipedia isn't the project.
Well, I can predict that I'm going to "attack" experts myself if they add non-NPOV content, fail to cite sources properly, insist on their authority to make their point etc.
Good luck. Remember, experts know more about their areas than you do. That's why we call them experts. So don't embarrass yourself too badly.
First, the people on Recyclopediasifter will be PhDs, not experts. While these characteristics tend to go together, they're not the same thing.
More importantly, if Erik attacks an expert (not just a PhD, but an expert) for not citing sources and arguing fallaciously (argument by authority), and if the expert comes back and cites sources and argues validly and proves her point beyond any doubt, leaving her critics in the dust, then I for one will not consider Erik to have lost any face. (Nor the expert, because while she was in danger for a moment there, in the end she made good.)
OTOH, if Erik tells an expert that she isn't being NPOV, and the withering force of her vast knowledge proves that she is, then, yeah, he should be embarrassed. That's the risk of debate.
My view on experts and what makes an expert is very different from yours, but I believe both views can coexist in a good certification system.
I'm a Ph.D. epistemologist. My dissertation adviser was (still is) an expert on the concept of expertise, and I've read several papers on this area of social epistemology as part of a graduate course. In addition, I gave careful thought to this subject while working on Nupedia. Now, what is [it] that you think [is] my view of what experts are and what makes an expert?
(I'm not sure if I parsed the last sentence correctly, so please check it.) I get the strong impression from what I've read of your opinions, here and on Nupedia, that you think that experts are usually correct and that academic degrees are strong evidence of expertise. I get the (weaker) impression that Erik doesn't believe these things. But rather than go by my impressions (or Erik's, since you asked him), we'd probably get further if each of you just came out and said *what* your views on experts and what makes an expert *are*.
The whole reason behind a random sample is precisely to forestall the sort of "elitism" and abuse of power that you fear.
I know, and I respect this good intention. I just don't think it's the right approach; it will only lead to less informed decisions. Better keep the decision process open to (almost) everybody (we need to prevent vote flooding as well); that way we can reduce abuse of power most effectively. That's why K5's moderation system works and Slashdot's doesn't.
I just have no idea why you say the system I proposed would "lead to less informed decisions." Perhaps you should reread the proposal (even though it was, as I said, just a rough outline); I even went so far as to suggest that perhaps there would be a body of Wikipedia "case law" developed, that moderators could consult. This would lead to *less* informed decisions than in the present case, when virtually anyone can have the power to ban and delete?
Why can't we start developing the case law anyway, either under the present system of discussion and consensus, or under Erik's voting proposal? In fact, I claim that we *have* been developing such case law, in the form of the precedents and customs that we bring up every time discussion of a new content-based banning takes place, and also bring up in the talk discussions that precede such efforts.
-- Toby