In 2002, before the software change, IP information was visible even when users were identified under a pseudonym. The information could be seen simply by moving the mouse pointer over a user's name.
When the software was upgraded, this option disappeared and the IP data of registered users became *private* data.
I would like to know whether that disappearance was discussed at the time, and whether it was deliberate that the IP data of logged-in users became *private* data.
Ant
Hi!
In 2002, before the software change, IP information was visible
Ha! I wonder how many people can answer about software in 2002 :) Magnus? :)
Florence Devouard wrote:
In 2002, before the software change, IP information was visible even when users were identified under a pseudonym. The information could be seen simply by moving the mouse pointer over a user's name.
When the software was upgraded, this option disappeared and the IP data of registered users became *private* data.
I would like to know whether that disappearance was discussed at the time, and whether it was deliberate that the IP data of logged-in users became *private* data.
Ant
I have a feeling this might have been done to make non-logged-in pages cacheable in an attempt to improve performance, rather than out of any privacy concerns.
-- Neil
On 12/11/07, Neil Harris usenet@tonal.clara.co.uk wrote:
I have a feeling this might have been done to make non-logged-in pages cacheable in an attempt to improve performance, rather than out of any privacy concerns.
I don't understand how this would affect cacheability at all? It just affects what's displayed in the page history.
It's true that principles like "use real names" have fallen by the wayside over the past few years. It would be interesting to see them revived in a fashion like this. It's not very difficult to get the IP address of a typical user anyway... you just need to get them to follow one link (say, by planting a dubious link on an article they're watching that they have to follow to see if it's legit) to get a shortlist.
I doubt anyone's going to remember anything from 2003, unless Magnus does. Maybe it's best to hunt through archives to see if this was discussed. (Although if it was just Magnus making the change in his new software, maybe nobody did discuss it.)
Simetrical wrote:
On 12/11/07, Neil Harris usenet@tonal.clara.co.uk wrote:
I have a feeling this might have been done to make non-logged-in pages cacheable in an attempt to improve performance, rather than out of any privacy concerns.
I don't understand how this would affect cacheability at all? It just affects what's displayed in the page history.
Re-reading Florence's original comment, I can now see that she was talking about something else: I was referring to the removal of IP addresses from the served article pages themselves, rather than the information in the history.
Mea culpa.
-- Neil
Florence Devouard wrote:
In 2002, before the software change, IP information was visible even when users were identified under a pseudonym. The information could be seen simply by moving the mouse pointer over a user's name.
When the software was upgraded, this option disappeared and the IP data of registered users became *private* data.
I would like to know whether that disappearance was discussed at the time, and whether it was deliberate that the IP data of logged-in users became *private* data.
I'm pretty sure there was a bug report from Lee Daniel Crocker on SourceForge indicating that the behaviour was accidental and that he intended to fix it by reimplementing the mouseover feature. He was shouted down on that bug report by Brion and various users. The expectation that IP addresses are private data was already well established by that time. I can't find the bug report now.
Once something is private, it's very hard to make it public again. There was a similar outcry when I proposed making watchlists public. If watchlists had been public from the outset, I doubt anyone would have even requested that they be made private, and their usefulness would have been pretty much the same.
-- Tim Starling
Tim Starling wrote:
I'm pretty sure there was a bug report from Lee Daniel Crocker on SourceForge indicating that the behaviour was accidental and that he intended to fix it by reimplementing the mouseover feature. He was shouted down on that bug report by Brion and various users. The expectation that IP addresses are private data was already well established by that time. I can't find the bug report now.
Once something is private, it's very hard to make it public again. There was a similar outcry when I proposed making watchlists public. If watchlists had been public from the outset, I doubt anyone would have even requested that they be made private, and their usefulness would have been pretty much the same.
-- Tim Starling
Well,
I remember the previous feature quite well, as the French Wikipedia was switched to the new MediaWiki eight months after my arrival. For those eight months, we could see the "private" data.
I wonder if we might not argue that making this data private did more damage to the fabric of the community than keeping the data public would have. Legally speaking, it weakens our case. It goes against the principles of transparency and responsibility that we like to put up front. Keeping it public would simplify our defense strategy against vandals and sockpuppets, and would avoid power grabs (or the perception thereof) by the few members who manage to get access to the data.
I am looking for some arguments to keep it private, other than "well, this is the default behavior".
I could not find the past discussion. What happened?
ant
I'm not sure about the past discussion, but Wikipedia currently has a privacy policy reading "When using a pseudonym, your IP address will not be available to the public except in cases of abuse, including vandalism of a wiki page by you or by another user with the same IP address." It is therefore part of the encouragement to create an account.
Personally I think restoring IP addresses would be good for transparency, but there would be objectors. I think we have enough quality issues not to worry overly about contributors at the margin.
BozMo
Wikitech-l mailing list Wikitech-l@lists.wikimedia.org http://lists.wikimedia.org/mailman/listinfo/wikitech-l
On 12/12/07, Florence Devouard Anthere9@yahoo.com wrote:
I am looking for some arguments to keep it private, other than "well, this is the default behavior".
If you have a fixed IP address that can be relatively easily linked to your personal identity or your employer, I could see why you wouldn't want it to be publicly known that you've been pushing the article about gay midget pornography to featured article status. Or the one about Scientology.
Yes, there are anonymizer tools, but
- most of them are quite slow
- they are not always built to be used on a per-page basis
- they may cost money, or contain spyware, etc.
Basically - it would mean that quite a few users would have to go through substantial pain in order to have any degree of privacy at all.
-----Original Message-----
From: wikitech-l-bounces@lists.wikimedia.org [mailto:wikitech-l-bounces@lists.wikimedia.org] On Behalf Of Erik Moeller
Sent: 12 December 2007 08:41
To: Wikimedia developers
Subject: Re: [Wikitech-l] ip address
On 12/12/07, Florence Devouard Anthere9@yahoo.com wrote:
I am looking for some arguments to keep it private, other than "well, this is the default behavior".
If you have a fixed IP address that can be relatively easily linked to your personal identity or your employer, I could see why you wouldn't want it to be publicly known that you've been pushing the article about gay midget pornography to featured article status. Or the one about Scientology.
Yes, there are anonymizer tools, but
- most of them are quite slow
- they are not always built to be used on a per-page basis
- they may cost money, or contain spyware, etc.
Basically - it would mean that quite a few users would have to go through substantial pain in order to have any degree of privacy at all.
We could always hash the IP address and display that. It would be completely meaningless outside of Wikipedia, so it couldn't be traced back to an ISP or company owner. But the hash would always remain the same for each IP.
Jared
Jared Williams wrote:
We could always hash the IP address and display that. It would be completely meaningless outside of Wikipedia, so it couldn't be traced back to an ISP or company owner. But the hash would always remain the same for each IP.
It would be meaningless inside Wikipedia as well, as has been repeatedly discussed. Very few of our users have a non-shared static IP.
-- Tim Starling
On Dec 12, 2007 7:59 AM, Jared Williams jared.williams1@ntlworld.com wrote:
We could always hash the IP address and display that. It would be completely meaningless outside of Wikipedia, so it couldn't be traced back to an ISP or company owner. But the hash would always remain the same for each IP.
The hashing mechanism would have to be kept incredibly secret, and I'm not sure that'd be possible. There are only 4 billion IP addresses. A dictionary attack could probably crack every single address in less than a year, if the hashing algorithm was discovered and took less than 8 milliseconds to run.
And you'd still be linkable to your employer (for instance). Your employer need only make an edit using your work IP to find out what that IP hashes to.
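The hashing proposal and the dictionary-attack objection can be sketched in a few lines of Python. The hash scheme and function names below are invented for illustration; nothing like this exists in MediaWiki. The point is that the IPv4 keyspace is only 2^32, so once the (unsalted) hash function is known, every address can simply be tried:

```python
import hashlib
import ipaddress

def ip_hash(ip):
    """Hypothetical display hash for an IP address (unsalted)."""
    return hashlib.sha256(ip.encode()).hexdigest()[:12]

def crack(target_hash, network):
    """Brute-force the preimage of ip_hash over a candidate network.
    Extending this loop to all of IPv4 is about 4.3 billion hashes,
    comfortably within the "less than a year" estimate above."""
    for addr in ipaddress.ip_network(network):
        if ip_hash(str(addr)) == target_hash:
            return str(addr)
    return None

# The employer scenario: someone who knows their own address range
# can match a published hash against that range immediately.
h = ip_hash("192.0.2.77")
print(crack(h, "192.0.2.0/24"))  # -> 192.0.2.77
```

Salting the hash only moves the problem: anyone who can submit edits from a known IP (as noted above) can still learn what that IP hashes to.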
On 12/12/07, Erik Moeller erik@wikimedia.org wrote:
Yes, there are anonymizer tools, but
- most of them are quite slow
- they are not always built to be used on a per-page basis
- they may cost money, or contain spyware, etc.
And:
* they're all anonymous proxies and therefore banned from editing (generally).
The counterargument is that to gain transparency for the project, participants should be willing to sacrifice some privacy, and that if they aren't willing to publicly admit to their efforts, they're immediately suspect. Needless to say, this counterargument becomes more questionable when it gets to sensitive issues like gay midget pornography and Scientology. I like the idea of transparency, but on some topics at least we need some anonymity to get people to participate, I suspect.
On 12/12/07, Florence Devouard Anthere9@yahoo.com wrote:
There are several possibilities to fix that.
Either the use of the tool is made much more widely possible, increasing the checks and balances (and thus reducing the risk of abuse). E.g., giving the tool to all admins.
I think this would be a nice compromise, with one proviso: the log is made public as far as is possible without posting IP addresses. One straightforward proposition would be to blank out IP addresses in the log for non-admins, but leave usernames intact, and leave a record of the IP address examination (just not which IP was examined).
This would give reasonable but not total protection to advocates of gay midget pornography. Any admin could get their IP addresses on a pretext, but if the pretext isn't good enough they risk getting sacked (hopefully, although enwiki at least has a pretty poor track record here).
The downside is it would require superficially radical privacy-policy changes, and might alienate some users. I suspect the latter effect would be temporary, though: as Tim says, if anything had been that way all along, it would probably cause little complaint. As for the former, I say "superficially" because in practice, it's not like the privacy policy protects most editors anyway, given that they're anonymous.
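A minimal sketch of this blanked-IP public log, in Python: the record format and field names are entirely hypothetical (no such MediaWiki feature exists), but it shows the idea of publicly recording that a check happened while hiding the IP result from non-admins:

```python
def render_log_entry(entry, viewer_is_admin):
    """Render one checkuser log line. Everyone sees who checked whom
    and why; only admins see the resulting IP address."""
    ip = entry["ip_result"] if viewer_is_admin else "[redacted]"
    return (f"{entry['when']} checkuser {entry['checker']} examined "
            f"{entry['target']} -> {ip} (reason: {entry['reason']})")

entry = {"when": "2007-12-12", "checker": "ExampleAdmin",
         "target": "SomeUser", "ip_result": "203.0.113.5",
         "reason": "suspected sockpuppetry"}

print(render_log_entry(entry, viewer_is_admin=False))
# -> 2007-12-12 checkuser ExampleAdmin examined SomeUser -> [redacted] (reason: suspected sockpuppetry)
```

Publishing the redacted rendering would leave a verifiable record of every examination, so misuse on a flimsy pretext is at least visible, without exposing the IP itself.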
Or on the contrary, limiting the use of the tool by reducing the number of people with access, strengthening the rules, and applying the rules strictly (in short, in case of abuse, removing access rather than simply whining).
Then people start banning sockpuppets on random suspicion when the response time for checkuser gets too long. Transparency is the right direction to head in.
On Dec 12, 2007 1:45 AM, Florence Devouard Anthere9@yahoo.com wrote:
I wonder if we might not argue that making this data private did more damage to the fabric of the community than keeping the data public would have. Legally speaking, it weakens our case. It goes against the principles of transparency and responsibility that we like to put up front. Keeping it public would simplify our defense strategy against vandals and sockpuppets, and would avoid power grabs (or the perception thereof) by the few members who manage to get access to the data.
I am looking for some arguments to keep it private, other than "well, this is the default behavior".
As long as it's made tremendously clear to everyone, before they post, I'm all for it. But it needs to be made really really clear. Not "oh, we mentioned it in our privacy policy" or "there's a message at the bottom of the edit page". More like spelling it out in BIG BOLD LETTERS and making people check a box saying "yes, I understand" before they can ever post again.
The argument against it would be, as Erik suggested, but as I'll rephrase, that it would mean that it'd be incredibly difficult to post with any significant degree of anonymity. But personally I'd see that as a good thing. I'd also support a policy requiring real names.
Do you want to force contributors to Wikipedia to use their real names? If not, then I really don't think you want this.
Anthony wrote:
Do you want to force contributors to Wikipedia to use their real names? If not, then I really don't think you want this.
I would not support that. However, I cannot help thinking that the rather ugly atmosphere that developed on enwiki is largely due to the very large and uncontrolled use of the checkuser tool by a minority. When one gives specific tools to a person, that creates a power lever which may be used to grab bits of power. Which is more or less what is happening, much to the dismay of those who do not have that power.
There are several possibilities to fix that.
Either the use of the tool is made much more widely possible, increasing the checks and balances (and thus reducing the risk of abuse). E.g., giving the tool to all admins.
Or on the contrary, limiting the use of the tool by reducing the number of people with access, strengthening the rules, and applying the rules strictly (in short, in case of abuse, removing access rather than simply whining).
Or a dividing strategy (which seems a good idea anyway), to flatten the roles and responsibilities (e.g., a checkuser cannot be an oversighter; an arbcom member cannot be a steward; or even a checkuser cannot be on the arbcom).
Removing the tool entirely and making the IPs of registered users public info.
Making it mandatory to publicly log checkuser actions.
Other options ?
Ant
On Dec 12, 2007 8:59 AM, Florence Devouard Anthere9@yahoo.com wrote:
Anthony wrote:
Do you want to force contributors to Wikipedia to use their real names? If not, then I really don't think you want this.
I would not support that. However, I cannot help thinking that the rather ugly atmosphere that developed on enwiki is largely due to the very large and uncontrolled use of the checkuser tool by a minority. When one gives specific tools to a person, that creates a power lever which may be used to grab bits of power. Which is more or less what is happening, much to the dismay of those who do not have that power.
I very much agree with this. Though maybe we're talking about it on the wrong list.
There are several possibilities to fix that.
Either the use of the tool is made much more widely possible, increasing the checks and balances (and thus reducing the risk of abuse). E.g., giving the tool to all admins.
Or on the contrary, limiting the use of the tool by reducing the number of people with access, strengthening the rules, and applying the rules strictly (in short, in case of abuse, removing access rather than simply whining).
In the limit of this idea, only giving access to WMF employees, and only then giving them access from within the office.
Or a dividing strategy (which seems a good idea anyway), to flatten the roles and responsibilities (e.g., a checkuser cannot be an oversighter; an arbcom member cannot be a steward; or even a checkuser cannot be on the arbcom).
I doubt that would help, as you can't stop people in the different roles from talking to each other.
Removing the tool entirely and making the IPs of registered users public info.
Making it mandatory to publicly log checkuser actions.
Not sure how that would work, as the actions of checkusers often reveal the results.
Other options ?
Unblocking Tor and anonymizing proxies, thereby making checkuser relatively useless.
Anthony wrote:
On Dec 12, 2007 8:59 AM, Florence Devouard Anthere9@yahoo.com wrote:
Anthony wrote:
Do you want to force contributors to Wikipedia to use their real names? If not, then I really don't think you want this.
I would not support that. However, I cannot help thinking that the rather ugly atmosphere that developed on enwiki is largely due to the very large and uncontrolled use of the checkuser tool by a minority. When one gives specific tools to a person, that creates a power lever which may be used to grab bits of power. Which is more or less what is happening, much to the dismay of those who do not have that power.
I very much agree with this. Though maybe we're talking about it on the wrong list.
I am tired of long trolls on other lists :-)
There are several possibilities to fix that.
Either the use of the tool is made much more widely possible, increasing the checks and balances (and thus reducing the risk of abuse). E.g., giving the tool to all admins.
Or on the contrary, limiting the use of the tool by reducing the number of people with access, strengthening the rules, and applying the rules strictly (in short, in case of abuse, removing access rather than simply whining).
In the limit of this idea, only giving access to WMF employees, and only then giving them access from within the office.
Yes, but very impractical due to language limitations.
Or a dividing strategy (which seems a good idea anyway), to flatten the roles and responsibilities (e.g., a checkuser cannot be an oversighter; an arbcom member cannot be a steward; or even a checkuser cannot be on the arbcom).
I doubt that would help, as you can't stop people in the different roles from talking to each other.
True... but look: in real life, the law is decided by some, judgement using legal information and past history is rendered by others, and the judgement is carried out by a third group. For example: parliament, judges and cops. Right now, it seems the community has partly left the parliament role in the hands of the arbcom. The investigators are the checkusers. The judges are the arbcom. Arbcom members are also checkusers, so investigation and judgement are done by the same people. The cops are the admins (for bans) or the oversighters (for clean-up). The arbcom frequently plays the admin and oversight roles as well.
In short, the principles of separation are only weakly implemented. There ought to be a reason why most democracies decided to separate those powers, don't you think?
Removing the tool entirely and making the IPs of registered users public info.
Making it mandatory to publicly log checkuser actions.
Not sure how that would work, as the actions of checkusers often reveal the results.
True. The German Wikipedia does that. I am not sure what the impact is with regard to the privacy policy.
Other options ?
Unblocking Tor and anonymizing proxies, thereby making checkuser relatively useless.
ant
2007/12/12, Florence Devouard Anthere9@yahoo.com:
Making it mandatory to publicly log checkuser actions.
Not sure how that would work, as the actions of checkusers often reveal the results.
True. The German Wikipedia does that. I am not sure what the impact is with regard to the privacy policy.
Maybe I should clarify here: on the German Wikipedia, all checkuser requests are made in public (other Wikipedias have a similar request page). On this page the requester has to provide two things: a) evidence of why a sockpuppet suspicion seems likely, and b) an explanation of why it would be an abuse of sockpuppets.
The checkusers accept or deny the request and, if a check was made, say whether the suspicion was correct (and list any additional sockpuppets found).
By (so far unwritten) policy, checkusers don't do checks in private on their own initiative. If someone has legitimate cause (such as being stalked), they can request a check by email. If such a check is made, a record of it is added to the public request page, giving as much detail as possible without violating the privacy of the requester.
So much for German checkuser practices; I suppose other projects may be similar...
greetings, elian
On Dec 12, 2007 9:55 AM, elisabeth bauer eflebeth@googlemail.com wrote:
2007/12/12, Florence Devouard Anthere9@yahoo.com:
Making it mandatory to publicly log checkuser actions.
Not sure how that would work, as the actions of checkusers often reveal the results.
True. The German Wikipedia does that. I am not sure what the impact is with regard to the privacy policy.
Maybe I should clarify here: on the German Wikipedia, all checkuser requests are made in public (other Wikipedias have a similar request page). On this page the requester has to provide two things: a) evidence of why a sockpuppet suspicion seems likely, and b) an explanation of why it would be an abuse of sockpuppets.
The checkusers accept or deny the request and, if a check was made, say whether the suspicion was correct (and list any additional sockpuppets found).
Ah, I misunderstood. I thought the suggestion was to make the raw logs public. Doing something like this, where the requests and results had to be made public, and anyone caught making a checkuser request outside this procedure had her privileges removed, wouldn't reveal the particular private information. OTOH, I don't think promoting public accusations of sockpuppetry is a good thing, at least not on a Wikipedia like the English Wikipedia where mean-spirited accusations run rampant. Maybe the Germans can play nicely, but I don't think it'd be pretty on en.wp.
On Dec 12, 2007 4:22 PM, Anthony wikimail@inbox.org wrote:
Ah, I misunderstood. I thought the suggestion was to make the raw logs public. Doing something like this, where the requests and results had to be made public, and anyone caught making a checkuser request outside this procedure had her privileges removed, wouldn't reveal the particular private information. OTOH, I don't think promoting public accusations of sockpuppetry is a good thing, at least not on a Wikipedia like the English Wikipedia where mean-spirited accusations run rampant. Maybe the Germans can play nicely, but I don't think it'd be pretty on en.wp.
Well, the accusations always have to include diff links showing why sockpuppetry seems likely. If what Americans know as "probable cause" does not exist, no check is performed. Repeatedly making unfounded CU requests or acting viciously is a good way to get yourself banned, temporarily or permanently. Our experience with this system has, at least in my opinion, been a good one.
Sebastian
On 12/12/2007, elisabeth bauer eflebeth@googlemail.com wrote:
Maybe I should clarify here: on the German Wikipedia, all checkuser requests are made in public (other Wikipedias have a similar request page). On this page the requester has to provide two things: a) evidence of why a sockpuppet suspicion seems likely, and b) an explanation of why it would be an abuse of sockpuppets. By (so far unwritten) policy, checkusers don't do checks in private on their own initiative. If someone has legitimate cause (such as being stalked), they can request a check by email. If such a check is made, a record of it is added to the public request page, giving as much detail as possible without violating the privacy of the requester.
On en:wp, there was no public request mechanism for checkuser at all other than the checkers' user talk pages. Kelly Martin created WP:RFCU specifically to get the noise off our talk pages to somewhere we could ignore it. WP:RFCU tends to be a troll-feeding exercise in practice.
- d.
David Gerard wrote:
On 12/12/2007, elisabeth bauer eflebeth@googlemail.com wrote:
Maybe I should clarify here: on the German Wikipedia, all checkuser requests are made in public (other Wikipedias have a similar request page). On this page the requester has to provide two things: a) evidence of why a sockpuppet suspicion seems likely, and b) an explanation of why it would be an abuse of sockpuppets. By (so far unwritten) policy, checkusers don't do checks in private on their own initiative. If someone has legitimate cause (such as being stalked), they can request a check by email. If such a check is made, a record of it is added to the public request page, giving as much detail as possible without violating the privacy of the requester.
On en:wp, there was no public request mechanism for checkuser at all other than the checkers' user talk pages. Kelly Martin created WP:RFCU specifically to get the noise off our talk pages to somewhere we could ignore it. WP:RFCU tends to be a troll-feeding exercise in practice.
- d.
Why is it a troll-fest on enwiki and not on dewiki? What explains the difference?
Ant
On 12/12/2007, Florence Devouard Anthere9@yahoo.com wrote:
Why is it a troll-fest on enwiki and not on dewiki? What explains the difference?
I understand governance is a lot tighter on de:wp. I don't know this from personal experience, of course.
en:wp suffers from being the biggest wiki and, to a large extent, the international edition of Wikipedia (English being the current lingua franca). I suspect it's a case of English being first to get problems.
- d.
On Dec 12, 2007 9:30 AM, Florence Devouard Anthere9@yahoo.com wrote:
Anthony wrote:
On Dec 12, 2007 8:59 AM, Florence Devouard Anthere9@yahoo.com wrote:
Or on the contrary, limiting the use of the tool by reducing the number of people with access, strengthening the rules, and applying the rules strictly (in short, in case of abuse, removing access rather than simply whining).
In the limit of this idea, only giving access to WMF employees, and only then giving them access from within the office.
Yes, but very impractical due to language limitations.
Hmm, I don't see how language would be an issue. Aren't communications between admins and checkusers generally very simple? User X is User Y. User Z is from Germany. User A and User B are using the same ISP. User C is using a Tor proxy.
Or a dividing strategy (which seems a good idea anyway), to flatten the roles and responsibilities (e.g., a checkuser cannot be an oversighter; an arbcom member cannot be a steward; or even a checkuser cannot be on the arbcom).
I doubt that would help, as you can't stop people in the different roles from talking to each other.
True... but look: in real life, the law is decided by some, judgement using legal information and past history is rendered by others, and the judgement is carried out by a third group. For example: parliament, judges and cops. Right now, it seems the community has partly left the parliament role in the hands of the arbcom. The investigators are the checkusers. The judges are the arbcom. Arbcom members are also checkusers, so investigation and judgement are done by the same people. The cops are the admins (for bans) or the oversighters (for clean-up). The arbcom frequently plays the admin and oversight roles as well.
In short, the principles of separation are only weakly implemented. There ought to be a reason why most democracies decided to separate those powers, don't you think?
Well, it is commonly pointed out that "Wikipedia is not a democracy". I guess I like where you're going, but until there is some mechanism in place to encourage everyone to follow a consistent set of rules I don't think separating roles is going to help. Arb com currently makes its own rules and essentially chooses its own jurisdiction. Just taking away their checkuser wouldn't stop them from doing investigations. You'd have to make a rule against arb com doing investigations, and then find a way to enforce it.
On 12/12/07, Monahon, Peter B. Peter.Monahon@uspto.gov wrote:
why banning/blocking et cetera be banned, and a moderation system be developed to handle the transom between the community on one side, and spam/vandalism/off-topic-contributions on the other, instead of banning.
I'm sure that's often been suggested before, but you know, that's the best damn idea I've heard all month (although it's only the twelfth, to be fair). Eliminate blocking, and replace it with moderation; eliminate protection, and replace it with flagged revisions; then you just have to find something to replace deletion with, and adminship becomes meaningless and can all but be tossed out, since all this stuff can be given to autoconfirmed users. It's just a matter of coming up with a good interface and mechanism for the various rights -- that's the rub.
Given the opposition to giving even rollback to autoconfirmed users -- which is far more benign than any kind of "moderation", since such moderation would have to be able to prevent users editing somehow in order to fully replace blocking -- I think that's unlikely.