On 3/13/06, Anthony DiPierro wikilegal@inbox.org wrote:
On 3/13/06, The Cunctator cunctator@gmail.com wrote:
On 3/13/06, Anthony DiPierro wikilegal@inbox.org wrote:
Just allowing people to report errors isn't a problem. The problems are acting on those reports without first verifying the true facts, and removing entire articles simply because some of the facts in that article are inaccurate. Then of course there's the problem of protecting articles, though that one's probably arguable (now that semi-protection exists I can't personally think of a scenario where full protection is *ever* a good idea).
The argument is that since any form of protection is an unwanted state, it's in certain senses better when it bothers more people -- it motivates people to fix the underlying problems.
[....]
If this doesn't make sense I can try to do a better job of explaining.
No, that does make sense. Though the way I see it, especially since the advent of the three revert rule, page protection only makes sense when dealing with sockpuppets, and semi-protection guards against that while still allowing established editors to edit.
And if page protection is only used in that way -- in the face of a distributed sockpuppet attack -- I really don't see how semi-protection hinders solving the underlying problems.
But I suppose this presumes that page protection is only used in this limited sense, which doesn't reflect how it is actually used in practice.
To my mind, a fully protected page is the absolute worst state a page can be in. Even a vandalized but editable page is better, in my opinion.
Do you see how a semi-protected page could be worse than a fully protected page?
Or, rather, do you see how having significant numbers of semi-protected pages could be worse than having significant numbers of fully protected pages?
The argument hinges on assumptions such as a) that it's important to Wikipedia to encourage first-time editing, and b) that creating different classes of users is a bad thing in the long term.