... but -if we want to reach consensus[1]- what we really need to be discussing is: screwdrivers.
sincerely, Kim Bruning
No, we need to harden the wall against all attacks by hammers, screwdrivers and drills. We have consensus: Wikipedia should not be censored.
Scattered pieces of the puzzle globe.
The WMF is still trying to scatter it in favour of ???
On Tue, Nov 29, 2011 at 08:09, Möller, Carsten c.moeller@wmco.de wrote:
No, we need to harden the wall against all attacks by hammers, screwdrivers and drills. We have consensus: Wikipedia should not be censored.
You hold strong on that principle. Wikipedia should not be censored!
Even if that censorship is something the user initiates, desires, and can turn off at any time, like AdBlock.
Glad to see that Sue Gardner's warning earlier in the debate, that people should not get entrenched and fundamentalist but should try to honestly and charitably see other people's points of view, has been so well heeded.
Am 29.11.2011 10:32, schrieb Tom Morris:
On Tue, Nov 29, 2011 at 08:09, Möller, Carsten c.moeller@wmco.de wrote:
No, we need to harden the wall against all attacks by hammers, screwdrivers and drills. We have consensus: Wikipedia should not be censored.
You hold strong on that principle. Wikipedia should not be censored!
Even if that censorship is something the user initiates, desires, and can turn off at any time, like AdBlock.
Glad to see that Sue Gardner's warning earlier in the debate, that people should not get entrenched and fundamentalist but should try to honestly and charitably see other people's points of view, has been so well heeded.
There is a simple way to see that this wording is actually correct. There is not a single filter that can meet personal preferences, is easy to use, and does not violate NPOV, apart from two extremes: the all and nothing options. We already discussed that in detail on the discussion page of the referendum.
If the filter is user-initiated, then it will meet the personal preference and is not in violation of NPOV. But it isn't easy to use: the user will have to do all the work himself. That is good, but practically impossible.
If the filter is predefined, then it might meet the personal preference and can be easy to use. But it will be a violation of NPOV, since someone else (a group of readers/users) would have to define it. That isn't user-initiated censorship anymore.
The comparison with AdBlock sucks, because you didn't look at the goals of the two tools. AdBlock and its predefined lists try to hide _any_ advertisement, while the filter is meant to hide _only_ controversial content. This comes down to the two extremes noted above, which are the only two neutral options.
nya~
I agree that the main obstacle at the moment is that any form of "filter list" proposal is very controversial as many editors feel that this would be a way of "enabling" POV censorship that users may not want.
One thing I would like to know, which has not been clear to me in the discussions, is whether there is such a strong objection to any form of filter whose core design requires that it can be trivially overridden on a particular image via asynchronous loading (i.e. images matching a predefined criterion are not shown outright; instead the image is replaced by a grey square with the image description and a "show this image" button), so that a user who thinks they might want to see an image that has been blocked by their filter can do so very easily.
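To make that concrete, here is a minimal client-side sketch of such a placeholder. The data-filtered marker, the class name and the deferred loading are assumptions for illustration only, not an existing MediaWiki feature.

    // Sketch only: replace a filtered image with a grey placeholder showing the
    // image description and a "show this image" button; the file itself is only
    // fetched when the reader clicks. `data-filtered` is a hypothetical marker.
    function installPlaceholders(): void {
      document.querySelectorAll<HTMLImageElement>('#content img[data-filtered="true"]')
        .forEach((img) => {
          const originalSrc = img.src;
          img.removeAttribute('src'); // defer loading the actual file

          const box = document.createElement('div');
          box.className = 'image-filter-placeholder'; // styled elsewhere as a grey square
          box.textContent = img.alt || 'Image hidden by your filter';

          const show = document.createElement('button');
          show.textContent = 'Show this image';
          show.addEventListener('click', () => {
            img.src = originalSrc; // load the image only when the reader asks for it
            box.replaceWith(img);
          });

          box.appendChild(show);
          img.replaceWith(box);
        });
    }

    document.addEventListener('DOMContentLoaded', installPlaceholders);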
If the feeling is that such a "weak" filter would (regardless of how the pre-populated "filter lists" are created) still attract significant opposition on many projects then I personally don't see how there can be any filter created that is likely to gain consensus support and still be useful - except for one that gives users the option to hide "all" images by default and then click on the greyed out images to load them if they want to see them.
-- Alasdair (User:ajbpearce)
On Tuesday, 29 November 2011 at 11:37, Tobias Oelgarte wrote:
Am 29.11.2011 10:32, schrieb Tom Morris:
On Tue, Nov 29, 2011 at 08:09, Möller, Carsten <c.moeller@wmco.de> wrote:
No, we need to harden the wall against all attacks by hammers, screwdrivers and drills. We have consensus: Wikipedia should not be censored.
You hold strong on that principle. Wikipedia should not be censored!
Even if that censorship is something the user initiates, desires, and can turn off at any time, like AdBlock.
Glad to see that Sue Gardner's warning earlier in the debate, that people should not get entrenched and fundamentalist but should try to honestly and charitably see other people's points of view, has been so well heeded.
There is a simple way to see that this wording is actually correct. There is not a single filter that can meet personal preferences, is easy to use, and does not violate NPOV, apart from two extremes: the all and nothing options. We already discussed that in detail on the discussion page of the referendum.
If the filter is user-initiated, then it will meet the personal preference and is not in violation of NPOV. But it isn't easy to use: the user will have to do all the work himself. That is good, but practically impossible.
If the filter is predefined, then it might meet the personal preference and can be easy to use. But it will be a violation of NPOV, since someone else (a group of readers/users) would have to define it. That isn't user-initiated censorship anymore.
The comparison with AdBlock sucks, because you didn't look at the goals of the two tools. AdBlock and its predefined lists try to hide _any_ advertisement, while the filter is meant to hide _only_ controversial content. This comes down to the two extremes noted above, which are the only two neutral options.
nya~
Alasdair wrote:
If the feeling is that such a "weak" filter would (regardless of how the pre-populated "filter lists" are created) still attract significant opposition on many projects then I personally don't see how there can be any filter created that is likely to gain consensus support and still be useful - except for one that gives users the option to hide "all" images by default and then click on the greyed out images to load them if they want to see them.
You're confusing the opinions of a few extremists on foundation-l with general consensus. It's unclear what percent of users actually want this feature, particularly as the feature's implementation hasn't been fully developed. A few people on this list have been trying very hard to make it seem as though they're capable of accepting some magical invisible pink unicorn-equivalent media filter, but the truth is that they're realistically and pragmatically opposed to any media filter, full stop. This is an extremist opinion (it's not as though extremist opinions are particularly uncommon around here).
Personally, I want to believe that if the Wikimedia Board is making such a strong push for this feature to be implemented, there are very good reasons for doing so. Whether or not that's the case, I wouldn't look (closely or broadly) at the comments on this mailing list and try to divine community-wide views.
MZMcBride
Am 29.11.2011 13:03, schrieb MZMcBride:
Alasdair wrote:
If the feeling is that such a "weak" filter would (regardless of how the pre-populated "filter lists" are created) still attract significant opposition on many projects then I personally don't see how there can be any filter created that is likely to gain consensus support and still be useful - except for one that gives users the option to hide "all" images by default and then click on the greyed out images to load them if they want to see them.
You're confusing the opinions of a few extremists on foundation-l with general consensus. It's unclear what percent of users actually want this feature, particularly as the feature's implementation hasn't been fully developed. A few people on this list have been trying very hard to make it seem as though they're capable of accepting some magical invisible pink unicorn-equivalent media filter, but the truth is that they're realistically and pragmatically opposed to any media filter, full stop. This is an extremist opinion (it's not as though extremist opinions are particularly uncommon around here).
Personally, I want to believe that if the Wikimedia Board is making such a strong push for this feature to be implemented, there are very good reasons for doing so. Whether or not that's the case, I wouldn't look (closely or broadly) at the comments on this mailing list and try to divine community-wide views.
MZMcBride
... And I still want to see the "good reason for doing so". So far I could not find a single reason that would be worth implementing such a filter, considering all the drawbacks it causes. That doesn't mean that I'm opposed to any kind of filter. It is just that we currently have three models:
* The very simple, clean solutions (all/nothing/blurred/...), which the filter lovers don't find intuitive.
* The category/labeling-based solutions, which require an immense (and constant) effort and provide data for censors.
* The user-based solutions, which are most likely unusable, since they require a lot of work from the user himself.
What I'm missing is option four. But as long as option four isn't present, I'm strongly in favor of options 0 and 1, where 0 would be: do nothing.
On 29 November 2011 12:56, Tobias Oelgarte tobias.oelgarte@googlemail.com wrote:
... And I still want to see the "good reason for doing so". So far I could not find a single reason that would be worth implementing such a filter, considering all the drawbacks it causes. That doesn't mean that
Yes.
The Board voted unanimously *twice* for the filter. They need to individually reveal their reasoning and what convinced them so strongly - the second time in the face of the threat of the second-largest project forking.
Really. You just haven't told us what you each personally find so compelling about the idea, and we can't see it. So people presume there's financial influence or some other reason going on.
Board, if you want this problem to go away, you need to explain yourselves, in a way that actually answers detractors. Your reasoning is really not obvious.
- d.
on 11/29/11 8:01 AM, David Gerard at dgerard@gmail.com wrote:
On 29 November 2011 12:56, Tobias Oelgarte tobias.oelgarte@googlemail.com wrote:
... And I still want to see the "good reason for doing so". So far I could not find a single reason that would be worth implementing such a filter, considering all the drawbacks it causes. That doesn't mean that
Yes.
The Board voted unanimously *twice* for the filter. They need to individually reveal their reasoning and what convinced them so strongly - the second time in the face of the threat of the second-largest project forking.
Really. You just haven't told us what you each personally find so compelling about the idea, and we can't see it. So people presume there's financial influence or some other reason going on.
Board, if you want this problem to go away, you need to explain yourselves, in a way that actually answers detractors. Your reasoning is really not obvious.
- d.
I agree with you completely, David. Wikipedia is supposed to be a collaborative effort. And the board should not be the law-enforcement part of that collaboration. This parental "We know what's best for you, and don't have to explain our decisions to you" attitude makes a farce (or worse) of any claim of such collaboration. And the more silent they remain about the reasoning behind their decisions, the louder the suspicions become about that silence - and the motives behind it.
Marc Riddell
The problem starts at the point where the user does not choose the image(s) for himself but uses a predefined set of what should not be shown. Someone will have to create these sets, and that will unavoidably be a violation of NPOV in the first place. If the user would choose for himself the images that shouldn't be shown, or even (existing) categories of images that he wants to hide, then it would be his personal preference. But do we want to exchange these lists or make them public? I guess not, since these lists would become predefined sets themselves.
What I found to be the best solution so far is the "blurred images filter". You can opt in to enable it, and all images will be blurred by default. Since they are only blurred, you get a rough impression of what to expect (something a hidden image can't do), and a blurred image can be viewed by just hovering the mouse cursor over it. While you browse, not a single click is needed. On top of that, it is awfully easy to implement, we already have a running version of it (see the brainstorming page), it doesn't feed any information to actual censors, and it is in no way a violation of NPOV. So far I haven't heard any constructive criticism of why this wouldn't be a very good solution.
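As a rough illustration of how little is needed, here is a minimal client-side sketch of the blur behaviour. The imageBlurOptIn flag and the '#content img' selector are assumptions for this sketch, not the code on the brainstorming page.

    // Sketch only: blur every content image by default and un-blur on hover.
    // `imageBlurOptIn` stands in for a hypothetical per-user preference flag;
    // nothing here is an existing MediaWiki setting.
    const imageBlurOptIn = true; // would come from the user's preferences

    function applyBlurFilter(): void {
      if (!imageBlurOptIn) {
        return; // strictly opt-in: do nothing unless the reader enabled it
      }
      document.querySelectorAll<HTMLImageElement>('#content img').forEach((img) => {
        img.style.filter = 'blur(12px)'; // blurred by default: a rough impression stays visible
        img.addEventListener('mouseenter', () => { img.style.filter = 'none'; });       // hovering reveals the image
        img.addEventListener('mouseleave', () => { img.style.filter = 'blur(12px)'; }); // leaving re-blurs it
      });
    }

    document.addEventListener('DOMContentLoaded', applyBlurFilter);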
nya~
Am 29.11.2011 12:08, schrieb Alasdair:
I agree that the main obstacle at the moment is that any form of "filter list" proposal is very controversial as many editors feel that this would be a way of "enabling" POV censorship that users may not want.
One thing I would like to know, which has not been clear to me in the discussions, is whether there is such a strong objection to any form of filter whose core design requires that it can be trivially overridden on a particular image via asynchronous loading (i.e. images matching a predefined criterion are not shown outright; instead the image is replaced by a grey square with the image description and a "show this image" button), so that a user who thinks they might want to see an image that has been blocked by their filter can do so very easily.
If the feeling is that such a "weak" filter would (regardless of how the pre-populated "filter lists" are created) still attract significant opposition on many projects then I personally don't see how there can be any filter created that is likely to gain consensus support and still be useful - except for one that gives users the option to hide "all" images by default and then click on the greyed out images to load them if they want to see them.
-- Alasdair (User:ajbpearce)
On Tuesday, 29 November 2011 at 11:37, Tobias Oelgarte wrote:
Am 29.11.2011 10:32, schrieb Tom Morris:
On Tue, Nov 29, 2011 at 08:09, Möller, Carsten <c.moeller@wmco.de> wrote:
No, we need to harden the wall against all attacks by hammers, screwdrivers and drills. We have consensus: Wikipedia should not be censored.
You hold strong on that principle. Wikipedia should not be censored!
Even if that censorship is something the user initiates, desires, and can turn off at any time, like AdBlock.
Glad to see that Sue Gardner's warning earlier in the debate, that people should not get entrenched and fundamentalist but should try to honestly and charitably see other people's points of view, has been so well heeded.
There is a simple way to see that this wording is actually correct. There is not a single filter that can meet personal preferences, is easy to use, and does not violate NPOV, apart from two extremes: the all and nothing options. We already discussed that in detail on the discussion page of the referendum.
If the filter is user-initiated, then it will meet the personal preference and is not in violation of NPOV. But it isn't easy to use: the user will have to do all the work himself. That is good, but practically impossible.
If the filter is predefined, then it might meet the personal preference and can be easy to use. But it will be a violation of NPOV, since someone else (a group of readers/users) would have to define it. That isn't user-initiated censorship anymore.
The comparison with AdBlock sucks, because you didn't look at the goals of the two tools. AdBlock and its predefined lists try to hide _any_ advertisement, while the filter is meant to hide _only_ controversial content. This comes down to the two extremes noted above, which are the only two neutral options.
nya~
On 29 November 2011 12:03, Tobias Oelgarte tobias.oelgarte@googlemail.com wrote:
What I found to be the best solution so far is the "blurred images filter". You can opt in to enable it, and all images will be blurred by default. Since they are only blurred, you get a rough impression of what to expect (something a hidden image can't do), and a blurred image can be viewed by just hovering the mouse cursor over it. While you browse, not a single click is needed. On top of that, it is awfully easy to implement, we already have a running version of it (see the brainstorming page), it doesn't feed any information to actual censors, and it is in no way a violation of NPOV. So far I haven't heard any constructive criticism of why this wouldn't be a very good solution.
I gave one before:
From the far side of the office, a blurred penis on your screen looks
like a blurred penis on your screen.
For this reason, I suggest a blank grey square instead.
- d.
Am 29.11.2011 13:45, schrieb David Gerard:
On 29 November 2011 12:03, Tobias Oelgarte tobias.oelgarte@googlemail.com wrote:
What I found to be the best solution so far is the "blurred images filter". You can opt in to enable it, and all images will be blurred by default. Since they are only blurred, you get a rough impression of what to expect (something a hidden image can't do), and a blurred image can be viewed by just hovering the mouse cursor over it. While you browse, not a single click is needed. On top of that, it is awfully easy to implement, we already have a running version of it (see the brainstorming page), it doesn't feed any information to actual censors, and it is in no way a violation of NPOV. So far I haven't heard any constructive criticism of why this wouldn't be a very good solution.
I gave one before:
From the far side of the office, a blurred penis on your screen looks
like a blurred penis on your screen.
For this reason, I suggest a blank grey square instead.
- d.
Just use another image-processing filter and it will not look like a blurred penis, but maybe like a distorted penis or an arm.
On Tue, Nov 29, 2011 at 1:03 PM, Tobias Oelgarte tobias.oelgarte@googlemail.com wrote:
The problem starts at the point where the user does not choose the image(s) for himself but uses a predefined set of what should not be shown. Someone will have to create these sets, and that will unavoidably be a violation of NPOV in the first place.
No, why would it? What does it say if someone created such a set? "These are pictures of such-and-so, and there might be people who do not want to see pictures of such-and-so." I don't see the NPOV problem here. Nobody is saying "These pictures should not be seen". They are saying, "some people would not like to see these pictures". That's not POV.
Am 29.11.2011 14:40, schrieb Andre Engels:
On Tue, Nov 29, 2011 at 1:03 PM, Tobias Oelgarte tobias.oelgarte@googlemail.com wrote:
The problem starts at the point where the user does not choose the image(s) for himself but uses a predefined set of what should not be shown. Someone will have to create these sets, and that will unavoidably be a violation of NPOV in the first place.
No, why would it? What does it say if someone created such a set? "These are pictures of such-and-so, and there might be people who do not want to see pictures of such-and-so." I don't see the NPOV problem here. Nobody is saying "These pictures should not be seen". They are saying, "some people would not like to see these pictures". That's not POV.
You missed the previous question: "Why would some people not like to see these pictures?" The answer to this question is the motivation to create such a list and to spread it. But this answer is in any case not NPOV.
On Tue, Nov 29, 2011 at 2:52 PM, Tobias Oelgarte tobias.oelgarte@googlemail.com wrote:
Am 29.11.2011 14:40, schrieb Andre Engels:
On Tue, Nov 29, 2011 at 1:03 PM, Tobias Oelgarte tobias.oelgarte@googlemail.com wrote:
The problem starts at the point where the user does not choose the image(s) for himself but uses a predefined set of what should not be shown. Someone will have to create these sets, and that will unavoidably be a violation of NPOV in the first place.
No, why would it? What does it say if someone created such a set? "These are pictures of such-and-so, and there might be people who do not want to see pictures of such-and-so." I don't see the NPOV problem here. Nobody is saying "These pictures should not be seen". They are saying, "some people would not like to see these pictures". That's not POV.
You missed the previous question: "Why would some people not like to see these pictures?" The answer to this question is the motivation to create such a list and to spread it. But this answer is in any case not NPOV.
Sure, it's not NPOV, it's not POV either, it has nothing to do with POV or NPOV.
Let's go to another parallel: there are lists of 'good articles', 'featured articles', 'featured images' and so forth on various projects. POV too? And if I make a list of "interesting articles", am I allowed to put that on Wikipedia? What about a tool that lets you make such a list and share it with others? Would that also make you as mightily angry?
On Tue, Nov 29, 2011 at 02:40:15PM +0100, Andre Engels wrote:
On Tue, Nov 29, 2011 at 1:03 PM, Tobias Oelgarte tobias.oelgarte@googlemail.com wrote:
The problem starts at the point where the user does not choose the image(s) for himself but uses a predefined set of what should not be shown. Someone will have to create these sets, and that will unavoidably be a violation of NPOV in the first place.
No, why would it? What does it say if someone created such a set? "These are pictures of such-and-so, and there might be people who do not want to see pictures of such-and-so." I don't see the NPOV problem here. Nobody is saying "These pictures should not be seen". They are saying, "some people would not like to see these pictures". That's not POV.
I thought we were past this point in the discussion, and working towards common consensus.
Here's the key argument from a "fellow traveller"[1] kind of organisation, to help you catch up. :-)
http://www.ala.org/Template.cfm?Section=interpretations&Template=/Conten...
sincerely, Kim Bruning
[1] Am I using this term right?
On Tue, Nov 29, 2011 at 11:37 AM, Tobias Oelgarte tobias.oelgarte@googlemail.com wrote:
If the filter is predefined, then it might meet the personal preference and can be easy to use. But it will be a violation of NPOV, since someone else (a group of readers/users) would have to define it. That isn't user-initiated censorship anymore.
It is still the user who chooses whether or not to remove images, and if so, which list, although of course their choice is restricted. I guess that's not user initiated, but it is still user chosen.
The comparison with AdBlock sucks, because you didn't look at the goals of the two tools. AdBlock and its predefined lists try to hide _any_ advertisement, while the filter is meant to hide _only_ controversial content. This comes down to the two extremes noted above, which are the only two neutral options.
I don't agree. We are not deciding which content is controversial and which is not; we are giving users the option to decide not to see such-and-such content if they don't want to. That's not necessarily labeling it as controversial; it is even less labeling other content as non-controversial.
Even more importantly, your options are not neutral at all, in my opinion. "Either everything is controversial or nothing is". That's not a neutral statement. "It's controversial to you if you consider it controversial to you" - that's much closer to being NPOV, and that's what the proposal is trying to do. NPOV is not about treating every _subject_ as equal, but about treating every _opinion_ as equal. If I have a set of images I consider controversial, and you have a different, perhaps non-intersecting set that you consider controversial, the NPOV method is to consider both distinctions as valid, not to say that it means that everything is controversial, or nothing is. And -surprise- that seems to be exactly what this proposal is trying to achieve. It is probably not ideal, there might even be reasons to drop it completely, but NPOV is much better served by this proposal than it is by yours.
Am 29.11.2011 12:09, schrieb Andre Engels:
On Tue, Nov 29, 2011 at 11:37 AM, Tobias Oelgarte tobias.oelgarte@googlemail.com wrote:
If the filter is predefined, then it might meet the personal preference and can be easy to use. But it will be a violation of NPOV, since someone else (a group of readers/users) would have to define it. That isn't user-initiated censorship anymore.
It is still the user who chooses whether or not to remove images, and if so, which list, although of course their choice is restricted. I guess that's not user initiated, but it is still user chosen.
With the tiny (actually big) problem that such lists are public and can be fed directly into the filters of not-so-people-loving or extremely "caring" ISPs. This removes the freedom of choice from the users: not from those who want this feature, but from those who don't want it, or who don't want it every time. In this case you trade a convenience for some of our readers against the ability of others to access all the knowledge that we could provide.
The comparison with AdBlock sucks, because you didn't look at the goals of the two tools. AdBlock and its predefined lists try to hide _any_ advertisement, while the filter is meant to hide _only_ controversial content. This comes down to the two extremes noted above, which are the only two neutral options.
I don't agree. We are not deciding which content is controversial and which is not; we are giving users the option to decide not to see such-and-such content if they don't want to. That's not necessarily labeling it as controversial; it is even less labeling other content as non-controversial.
I don't agree either. We decide what belongs to which preset (or who will do it?), and it is meant to filter out controversial content. Therefore we define what controversial content is, or at least we tell people what we think might be controversial, while we also tell them (by exclusion) that other things aren't controversial.
Even more importantly, your options are not neutral at all, in my opinion. "Either everything is controversial or nothing is". That's not a neutral statement. "It's controversial to you if you consider it controversial to you" - that's much closer to being NPOV, and that's what the proposal is trying to do.
No. These options are meant to say "you have to define for yourself what is controversial". They take the extreme stances of equal judgment: either everything is guilty or nothing is guilty, and both stances provide no information at all. Neither gives a definition. It is not an answer to the question "What is controversial?" under the assumption that neither nothing nor everything is controversial. If you agree that neither nothing nor everything is controversial, then this simple rule has to apply, since both extremes are untrue. That is very simple logic, and it forces you to define it for yourself.
Back to the statement: "It's controversial to you if you consider it controversial to you". That's right, but it's not related to the initial problem. In this case you will only find a "you" and a "you". There is no "we", "them" or anything like that. You could have written: "If my leg hurts, then my leg hurts". Always true, but useless when applied to something that involves anything not done by you in the first part of the sentence.
NPOV is not about treating every _subject_ as equal, but about treating every _opinion_ as equal.
This is a nice sentence. I hope that you will remember it. I also hope that you remember that images are subjects and not opinions.
If I have a set of images I consider controversial, and you have a different, perhaps non-intersecting set that you consider controversial, the NPOV method is to consider both distinctions as valid, not to say that it means that everything is controversial, or nothing is.
A filter with presets considers only one opinion as valid: it shows an image or it hides it. Stating different opinions inside an article is a very different thing. You represent both opinions, but you don't apply them. On top of that, they are the opinions of people who don't write the article.
And -surprise- that seems to be exactly what this proposal is trying to achieve. It is probably not ideal, there might even be reasons to drop it completely, but NPOV is much better served by this proposal than it is by yours.
Actually you misused or misunderstood the core of NPOV in combination with these two stances. That's why I can't agree with or follow your conclusion.
NPOV is meant in the sense that we don't say what is right or wrong. We represent the opinions, and we let the user decide what to do with them. Additionally, NPOV implies that we don't write down our own opinions; instead we cite them.
On Tuesday, 29 November 2011 at 13:42, Tobias Oelgarte wrote:
With the tiny (actually big) problem that such lists are public and can be fed directly into the filters of not-so-people-loving or extremely "caring" ISPs.
I think this is a point that I was missing about the objections to the filter system.
So a big objection to any "sets" of filters is not so much about the "weak" filtering on Wikipedia itself, but that such "sets" would enable other censors to more easily impose a form of "strong" censorship of Wikipedia, where some images were not available (at all) to readers, regardless of whether or not they want to see them?
I am not sure I agree with this concern as a practical matter, but I can understand it as a theoretical concern. Has the board or WMF talked about or addressed this issue anywhere with regard to "set"-based filter systems?
-- Alasdair (User:Ajbpearce)
On Tue, Nov 29, 2011 at 13:28, Alasdair web@ajbpearce.co.uk wrote:
On Tuesday, 29 November 2011 at 13:42, Tobias Oelgarte wrote:
With the tiny (actually big) problem that such lists are public and can be fed directly into the filters of not-so-people-loving or extremely "caring" ISPs.
I think this is a point that I was missing about the objections to the filter system.
So a big objection to any "sets" of filters is not so much about the "weak" filtering on Wikipedia itself, but that such "sets" would enable other censors to more easily impose a form of "strong" censorship of Wikipedia, where some images were not available (at all) to readers, regardless of whether or not they want to see them?
I am not sure I agree with this concern as a practical matter, but I can understand it as a theoretical concern. Has the board or WMF talked about or addressed this issue anywhere with regard to "set"-based filter systems?
I find it highly unconvincing and wrote an extended blog post on the topic a while back: http://blog.tommorris.org/post/11286767288/opt-in-image-filter-enabling-cens...
On Tue, Nov 29, 2011 at 01:30:12PM +0000, Tom Morris wrote:
I find it highly unconvincing and wrote an extended blog post on the topic a while back: http://blog.tommorris.org/post/11286767288/opt-in-image-filter-enabling-cens...
Yes, but that blog post attacks a straw man. The actual library argument is a bit different. (Notably, you don't address the ALA's concepts of "prejudicial label" or "censorship tool")
sincerely, Kim Bruning
On Tue, Nov 29, 2011 at 1:30 PM, Tom Morris tom@tommorris.org wrote:
On Tue, Nov 29, 2011 at 13:28, Alasdair web@ajbpearce.co.uk wrote:
On Tuesday, 29 November 2011 at 13:42, Tobias Oelgarte wrote:
With the tiny (actually big) problem that such lists are public and can be fed directly into the filters of not-so-people-loving or extremely "caring" ISPs.
I think this is a point that I was missing about the objections to the filter system.
So a big objection to any "sets" of filters is not so much about the "weak" filtering on Wikipedia itself, but that such "sets" would enable other censors to more easily impose a form of "strong" censorship of Wikipedia, where some images were not available (at all) to readers, regardless of whether or not they want to see them?
I am not sure I agree with this concern as a practical matter, but I can understand it as a theoretical concern. Has the board or WMF talked about or addressed this issue anywhere with regard to "set"-based filter systems?
I find it highly unconvincing and wrote an extended blog post on the topic a while back:
http://blog.tommorris.org/post/11286767288/opt-in-image-filter-enabling-cens...
I found that a very entertaining, well-argued, and salient post.
Andreas
On Wed, Nov 30, 2011 at 6:26 PM, Andreas K. jayen466@gmail.com wrote:
On Tue, Nov 29, 2011 at 1:30 PM, Tom Morris tom@tommorris.org wrote:
I find it highly unconvincing and wrote an extended blog post on the topic a while back:
http://blog.tommorris.org/post/11286767288/opt-in-image-filter-enabling-cens...
I found that a very entertaining, well-argued, and salient post.
While I don't find that line of argument to be a fully fledged straw-horse argument, it does appear to me to be a cherry-picked argument to *attempt* to refute. There are much stronger arguments, both practical and philosophical, against any attempt to elide controversial content. Even so, I am not convinced by the argumentation, but I would prefer not to rebut an argument that does not address the strongest reasons for opposing elision of controversial content, by choice or otherwise.
On Thu, Dec 1, 2011 at 03:34, Jussi-Ville Heiskanen cimonavaro@gmail.com wrote:
While I don't find that line of argument to be a fully fledged straw-horse argument, it does appear to me to be a cherry-picked argument to *attempt* to refute. There are much stronger arguments, both practical and philosophical, against any attempt to elide controversial content. Even so, I am not convinced by the argumentation, but I would prefer not to rebut an argument that does not address the strongest reasons for opposing elision of controversial content, by choice or otherwise.
My point was not to provide an argument for or against any particular implementation. It was a response to one particularly god-awful argument.
On Thu, Dec 1, 2011 at 8:33 AM, Tom Morris tom@tommorris.org wrote:
On Thu, Dec 1, 2011 at 03:34, Jussi-Ville Heiskanen cimonavaro@gmail.com wrote:
While I don't find that line of argument to be a fully fledged straw-horse argument, it does appear to me to be a cherry-picked argument to *attempt* to refute. There are much stronger arguments, both practical and philosophical, against any attempt to elide controversial content. Even so, I am not convinced by the argumentation, but I would prefer not to rebut an argument that does not address the strongest reasons for opposing elision of controversial content, by choice or otherwise.
My point was not to provide an argument for or against any particular implementation. It was a response to one particularly god-awful argument.
I honestly didn't intend to make a full rebuttal of your line of reasoning, but I do feel you are forcing my hand a bit. So here goes.
"People who create photos or music or anything else and license it [under a free licence] and the risk that someone they don’t like ends up using “their” content. I wouldn’t be too pleased if I found that one of the articles I’d written for Wikinews or one of the photos I’d put on Commons turned up on websites affiliated with, say, the British National Party. But that’s a risk I run from licensing stuff freely."
This is not a theoretical risk. This has happened, most famously in the case of Virgin using pictures of persons that were licenced under a free licence in their advertising campaign. I hesitate to call this argument fatuous, but its relevance is certainly highly questionable. Nobody has raised this as a serious argument, except that you assume it has been. This is the bit that truly is a straw horse. The "downstream use" objection was *never* about downstream use of _content_ but downstream use of _labels_ and the structuring of the semantic data. That is a real horse of a different colour, and not of straw.
Here is the first installment of my rebuttal. I don't want to go the tl;dr route, so I'll chop it into easy chunks.
On Thu, Dec 1, 2011 at 8:11 PM, Jussi-Ville Heiskanen cimonavaro@gmail.com wrote:
... The "downstream use" objection was *never* about downstream use of _content_ but downstream use of _labels_ and the structuring of the semantic data. That is a real horse of a different colour, and not of straw.
Tom thinks that this horse is real, but it has bolted. I agree with Tom that it is very simple for a commercial filter provider, or anyone else who is sufficiently motivated, to find most naughty content on WP and filter it. Risker said she had experienced something like this. Universities and schools have this too.
I would prefer that we do build good metadata/labels, but that we (Wikimedia) do not incorporate any general-purpose use of them for filtering content from readers. Hiding content is the easy way out. The inappropriate content on our projects is of one of two types:
1. inappropriate content that is quickly addressed, but it is seen by some people as it works its way through our processes. Sometimes it is the public that sees the content; sometimes it is only the community members who *choose* to patrol new pages/files while on the train.
2. content which is appropriate for certain contexts and is known to be problematic, but consensus is that the content stays; however, readers stumble on it unawares.
The former can't be solved.
The latter can be solved by labelling but not filtering. If you are on the train and a link is annotated with a tag "nsfw", you can choose not to click it, or be wary about the destination page.
-- John Vandenberg
On Thu, Dec 01, 2011 at 08:53:09PM +1100, John Vandenberg wrote:
The latter can be solved by labelling but not filtering. If you are on the train and a link is annotated with a tag "nsfw", you can choose not to click it, or be wary about the destination page.
Dude, no. That's prejudicial labelling.
Filtering: meh. Prejudicial labelling: evil. Widely considered a Bad Idea (tm) since at least the '50s.
The reason filtering is 'meh' (as opposed to 'mostly harmless') is because it is hard to do without prejudicial labelling.
sincerely, Kim Bruning
Am 01.12.2011 10:53, schrieb John Vandenberg:
On Thu, Dec 1, 2011 at 8:11 PM, Jussi-Ville Heiskanen cimonavaro@gmail.com wrote:
... The "downstream use" objection was *never* about downstream use of _content_ but downstream use of _labels_ and the structuring of the semantic data. That is a real horse of a different colour, and not of straw.
Tom thinks that this horse is real, but it has bolted. I agree with Tom that it is very simple for a commercial filter provider, or anyone else who is sufficiently motivated, to find most naughty content on WP and filter it. Risker said she had experienced something like this. Universities and schools have this too.
I would prefer that we do build good metadata/labels, but that we (Wikimedia) do not incorporate any general-purpose use of them for filtering content from readers. Hiding content is the easy way out. The inappropriate content on our projects is of one of two types:
- inappropriate content that is quickly addressed, but it is seen by some people as it works its way through our processes. Sometimes it is the public that sees the content; sometimes it is only the community members who *choose* to patrol new pages/files while on the train.
- content which is appropriate for certain contexts and is known to be problematic, but consensus is that the content stays; however, readers stumble on it unawares.
The former can't be solved.
The latter can be solved by labelling but not filtering. If you are on the train and a link is annotated with a tag "nsfw", you can choose not to click it, or be wary about the destination page.
-- John Vandenberg
That's exactly the kind of prejudicial labeling the ALA speaks about [1] and that can be misused by third parties (ISPs in the general sense). This kind of labeling has nothing to do with an encyclopedia. Either we include such content or we don't. If we include it, then we don't label it. Such labeling would be prejudicial, and someone would have to do it for others. That someone will break with NPOV, since it is _his_ opinion and not only that of the reader. I thought category-based filtering ('nsfw' is a category) was off the table?
[1] http://www.ala.org/Template.cfm?Section=interpretations&Template=/Conten...
nya~
On Thu, Dec 1, 2011 at 09:11, Jussi-Ville Heiskanen cimonavaro@gmail.com wrote:
This is not a theoretical risk. This has happened, most famously in the case of Virgin using pictures of persons that were licenced under a free licence in their advertising campaign. I hesitate to call this argument fatuous, but its relevance is certainly highly questionable. Nobody has raised this as a serious argument, except that you assume it has been. This is the bit that truly is a straw horse. The "downstream use" objection was *never* about downstream use of _content_ but downstream use of _labels_ and the structuring of the semantic data. That is a real horse of a different colour, and not of straw.
I was drawing an analogy: the point I was making is very simple - the general principle of "we shouldn't do X because someone else might reuse it for bad thing Y" is a pretty lousy argument, given that we do quite a lot of things in the free culture/open source software world that have the same problem. Should the developers of Hadoop worry that (your repressive regime of choice) might use their tools to more efficiently sort through surveillance data of their citizens?
I'm not at all sure how you concluded that I was suggesting filtering groups would be reusing the content? Net Nanny doesn't generally need to include copies of Autofellatio6.jpg in their software. The reuse of the filtering category tree, or even the unstructured user data, is something anti-filter folk have been concerned about. But for the most part, if a category tree were built for filtering, it wouldn't require much more than identifying clusters of categories within Commons. That is the point of my post. If you want to find adult content to filter, it's pretty damn easy to do: you can co-opt the existing extremely detailed category system on Commons ("Nude images including Muppets", anybody?).
Worrying that filtering companies will co-opt a new system when the existing system gets them 99% of the way anyway seems just a little overblown.
It isn't one incident, it isn't a class of incidents. Take it on board that the community is against the *principle* of censorship. Please.
As I said in the post, there may still be good arguments against filtering. The issue of principle may be very strong - and Kim Bruning made the point about the ALA definition, for instance, which is a principled rather than consequentialist objection.
Generally, though, I don't particularly care *what* people think, I care *why* they think it. This is why the debate over this has been so unenlightening, because the arguments haven't actually flowed, just lots of emotion and anger.
On Thu, Dec 1, 2011 at 9:06 PM, Tom Morris tom@tommorris.org wrote:
On Thu, Dec 1, 2011 at 09:11, Jussi-Ville Heiskanen cimonavaro@gmail.com wrote:
This is not a theoretical risk. This has happened, most famously in the case of Virgin using pictures of persons that were licenced under a free licence in their advertising campaign. I hesitate to call this argument fatuous, but its relevance is certainly highly questionable. Nobody has raised this as a serious argument, except that you assume it has been. This is the bit that truly is a straw horse. The "downstream use" objection was *never* about downstream use of _content_ but downstream use of _labels_ and the structuring of the semantic data. That is a real horse of a different colour, and not of straw.
I was drawing an analogy: the point I was making is very simple - the general principle of "we shouldn't do X because someone else might reuse it for bad thing Y" is a pretty lousy argument, given that we do quite a lot of things in the free culture/open source software world that have the same problem. Should the developers of Hadoop worry that (your repressive regime of choice) might use their tools to more efficiently sort through surveillance data of their citizens?
No, just to keep the facts straight, you were not making an analogy, you were comparing apples with oranges. Even the sheerest imitation of an analogy in your argument breaks down when you consider that, for it to stand up to scrutiny, you would have to believe that looking at something, and looking at something and making your mind up about what it was you were looking at, are the same thing. Which of course they are *not*.
On Thu, Dec 1, 2011 at 9:06 PM, Tom Morris tom@tommorris.org wrote:
I was drawing an analogy: the point I was making is very simple - the general principle of "we shouldn't do X because someone else might reuse it for bad thing Y" is a pretty lousy argument, given that we do quite a lot of things in the free culture/open source software world that have the same problem. Should the developers of Hadoop worry that (your repressive regime of choice) might use their tools to more efficiently sort through surveillance data of their citizens?
If you were interested in making a well-formed analogy, you might go about it by thinking about what the reaction would be if the street maps Google makes began to be tagged in such a fashion that people could plan their routes so they wouldn't have to look at advertising billboards with risque themes, such as lingerie or perfume advertisements, or could plan their route so they wouldn't have to pass through neighbourhoods where certain ethnic groups live while travelling. The reason that will never happen, of course, is that Google has this principle of not being evil, which the WMF could usefully emulate.
Am 01.12.2011 20:06, schrieb Tom Morris:
On Thu, Dec 1, 2011 at 09:11, Jussi-Ville Heiskanen cimonavaro@gmail.com wrote:
This is not a theoretical risk. This has happened, most famously in the case of Virgin using pictures of persons that were licenced under a free licence in their advertising campaign. I hesitate to call this argument fatuous, but its relevance is certainly highly questionable. Nobody has raised this as a serious argument, except that you assume it has been. This is the bit that truly is a straw horse. The "downstream use" objection was *never* about downstream use of _content_ but downstream use of _labels_ and the structuring of the semantic data. That is a real horse of a different colour, and not of straw.
I was drawing an analogy: the point I was making is very simple - the general principle of "we shouldn't do X because someone else might reuse it for bad thing Y" is a pretty lousy argument, given that we do quite a lot of things in the free culture/open source software world that have the same problem. Should the developers of Hadoop worry that (your repressive regime of choice) might use their tools to more efficiently sort through surveillance data of their citizens?
If they provide a piece of software that can be used for evil things, then it is OK, as long as they don't support the use of the software for such purposes. Otherwise we would have to stop the development of Windows, Linux and Mac OS in the first place. What we do is different. We provide a weak tool, but we provide strong support for the evil part. I called it weak since everyone should be able to disable it at any point he wants (if it is even enabled). But I also called it strong, because we provide the actual data for misuse through our effort to label content as inappropriate to some.
I'm not at all sure how you concluded that I was suggesting filtering groups would be reusing the content? Net Nanny doesn't generally need to include copies of Autofellatio6.jpg in their software. The reuse of the filtering category tree, or even the unstructured user data, is something anti-filter folk have been concerned about. But for the most part, if a category tree were built for filtering, it wouldn't require much more than identifying clusters of categories within Commons. That is the point of my post. If you want to find adult content to filter, it's pretty damn easy to do: you can co-opt the existing extremely detailed category system on Commons ("Nude images including Muppets", anybody?).
I had a nice conversation with Jimbo about these categories, and I guess we came to the conclusion that it would not work the way you used it in your argument. At some point we will have to provide the user with some kind of interface in which he can easily select what should be filtered and what not. Giving users a choice from a list containing hundreds of categories wouldn't work, because even Jimbo rejects that as too complicated and unsuited to be used. What would need to be done is to group these close-to-neutral (existing) category clusters up into more general terms to reduce the number of choices. But these clusters can then easily be misused. That essentially means, for a category/label-based filter:
The more user-friendly it is, the more likely it is to be abused.
Worrying that filtering companies will co-opt a new system when the existing system gets them 99% of the way anyway seems just a little overblown.
Adapting a new source of inexpensive filter data was never a problem and is usually quickly done. It costs a lot of work time (money) to maintain filter lists, but it is really cheap to set up automated filtering. That's why many filters based on Google's filtering tools exist, even though Google makes a lot of mistakes.
It isn't one incident, it isn't a class of incidents. Take it on board that the community is against the *principle* of censorship. Please.
As I said in the post, there may still be good arguments against filtering. The issue of principle may be very strong - and Kim Bruning made the point about the ALA definition, for instance, which is a principled rather than consequentialist objection.
Generally, though, I don't particularly care *what* people think, I care *why* they think it. This is why the debate over this has been so unenlightening, because the arguments haven't actually flowed, just lots of emotion and anger.
nya~
On Thu, Dec 1, 2011 at 8:33 AM, Tom Morris tom@tommorris.org wrote:
On Thu, Dec 1, 2011 at 03:34, Jussi-Ville Heiskanen cimonavaro@gmail.com wrote:
While I don't find that line of argument to be a fully fledged straw-horse argument, it does appear to me to be a cherry-picked argument to *attempt* to refute. There are much stronger arguments, both practical and philosophical, against any attempt to elide controversial content. Even so, I am not convinced by the argumentation, but I would prefer not to rebut an argument that does not address the strongest reasons for opposing elision of controversial content, by choice or otherwise.
My point was not to provide an argument for or against any particular implementation. It was a response to one particularly god-awful argument.
"English Wikipedia already has the “bad image list”: a list of shocking images that can only be included in the article it is listed for on the list. If you want to use it elsewhere, an admin has to update the list. It’s basically to prevent that delightful image “Autofellatio6.jpg” from being inserted into My Little Pony articles and other amusing bits of vandalism. Does the bad image list enable censorware? Yes. But it has kind of an important and useful function: preventing vandalism. Similarly, the doctrine of double effect can be called into play here: yes, we may be building up a list of categories that could be reused by censorware sellers, but that’s not our primary intention."
If you had started with the last sentence, rather than concluded with it, I doubt even the most moronic reader would have been strung along by the rhetoric. Let me emphasize the last phrase of that paragraph for rhetorical effect:
"[...] yes, we may be building up a list of categories that could be reused by censorware sellers, but that’s not our primary intention."
On 01/12/2011 7:58 AM, Jussi-Ville Heiskanen wrote:
"[...] yes, we may be building up a list of categories that could be reused by censorware sellers, but that’s not our primary intention."
I'm sorry, but who the fsck cares about intentions? The road to hell is paved with the best ones. The net effect is the only thing that counts.
A personal filter that allows individual editors to hide things they don't want to see is okay-ish, and the concept is something I can get behind. I still think we're over-engineering this - a simple "hide all images everywhere/on this page" button with a trivial "show/hide that specific image" toggle is more than adequate; but I'm not fundamentally opposed to a more elaborate system *iff* it can be demonstrated to not be usable by third parties to find out -- let alone use or impose -- what those settings may be.
Building a system that can (and /will/) be used for censorship is a fork-level nonstarter, and "but we didn't intend it for that" is not a justification. Prejudicial labeling is already *known* to be usable (and used) for censorship; why do you think librarians oppose any form of it as a matter of principle?
Surely nobody here has the hubris to believe that we, amazingly, know better than what over half a century of experience has taught our predecessors?
-- Coren / Marc
On Mon, Dec 5, 2011 at 6:59 PM, Marc A. Pelletier marc@uberbox.org wrote:
On 01/12/2011 7:58 AM, Jussi-Ville Heiskanen wrote:
"[...] yes, we may be building up a list of categories that could be reused by censorware sellers, but that’s not our primary intention."
I'm sorry, but who the fsck cares about intentions? The road to hell is paved with the best ones. The net effect is the only thing that counts.
A personal filter that allows individual editors to hide things they don't want to see is okay-ish, and the concept is something I can get behind. I still think we're over-engineering this - a simple "hide all images everywhere/on this page" button with a trivial "show/hide that specific image" toggle is more than adequate; but I'm not fundamentally opposed to a more elaborate system *iff* it can be demonstrated to not be usable by third parties to find out -- let alone use or impose -- what those settings may be.
Building a system that can (and /will/) be used for censorship is a fork-level nonstarter, and "but we didn't intend it for that" is not a justification. Prejudicial labeling is already *known* to be usable (and used) for censorship; why do you think librarians oppose any form of it as a matter of principle?
Surely nobody here has the hubris to believe that we, amazingly, know better than what over half a century of experience has taught our predecessors?
Uhm, that was not actually what I wrote, but what I was rebutting....
On Thu, Dec 1, 2011 at 8:33 AM, Tom Morris tom@tommorris.org wrote:
On Thu, Dec 1, 2011 at 03:34, Jussi-Ville Heiskanen cimonavaro@gmail.com wrote:
While I don't find that line of argument to be a fully fledged straw-horse argument, it does appear to me to be a cherry-picked argument to *attempt* to refute. There are much stronger arguments, both practical and philosophical, against any attempt to elide controversial content. Even so, I am not convinced by the argumentation, but I would prefer not to rebut an argument that does not address the strongest reasons for opposing elision of controversial content, by choice or otherwise.
My point was not to provide an argument for or against any particular implementation. It was a response to one particularly god-awful argument.
"So, if you wanna make censorware, it’s gotta be pretty damn strict. And you’ve also got to keep the false negatives down for PR purposes because otherwise snarky people will relentlessly mock you. Oh, and you’ve got to keep your lists secret because this is capitalism and competition requires secrecy. And if you leak the list, people will start poking around on those websites."
Happily enough, if you want to use faulty rhetoric, people will also make fun of you.
You just jumped the shark about half a dozen times.
It isn't one incident, it isn't a class of incidents. Take it on board that the community is against the *principle* of censorship. Please.
On 29.11.2011 14:28, Alasdair wrote:
On Tuesday, 29 November 2011 at 13:42, Tobias Oelgarte wrote:
With the tiny (actually big) problem that such lists are public and can be fed directly into the filters of not-so-people-loving or overly "caring" ISPs.
I think this is a point that I was missing about the objections to the filter system.
So a big objection to any "sets" of filters is not so much the "weak" filtering on Wikipedia itself, but that such "sets" would enable other censors to more easily impose a form of "strong" censorship of Wikipedia, where some images would not be available (at all) to readers - regardless of whether or not they want to see them?
I am not sure I agree with this concern as a practical matter, but I can understand it as a theoretical concern. Has the board or WMF talked about / addressed this issue anywhere with regard to "set"-based filter systems?
-- Alasdair (User:Ajbpearce)
So far this thought has been widely ignored. I can't remember a board member, aside from Arne Klempert, talking about it. Instead I heard the argument that some censors would unban Wikipedia if we implemented such a feature as pre-emptive obedience. But who is really naive enough to believe that? Censors aren't happy with an opt-in solution. They prefer solutions that cannot be opted out of, and they are interested in textual content as well.
nya~
On Tue, Nov 29, 2011 at 2:28 PM, Alasdair web@ajbpearce.co.uk wrote:
So a big objection to any "sets" of filters is not so much the "weak" filtering on Wikipedia itself, but that such "sets" would enable other censors to more easily impose a form of "strong" censorship of Wikipedia, where some images would not be available (at all) to readers - regardless of whether or not they want to see them?
I am not sure I agree with this concern as a practical matter, but I can understand it as a theoretical concern. Has the board or WMF talked about / addressed this issue anywhere with regard to "set"-based filter systems?
I don't know if they have, but it should be solvable within this system - something like creating a hash of the image name and using the original name in some places and the hash in others. The list of images in a filter would contain one, and the HTML created when a page is viewed would contain the other. I don't have the details all fleshed out, but it doesn't look too hard to do once one has decided that it's necessary.
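[Editorial aside: a rough sketch of one way to read that idea - the published filter list carries only keyed hashes of file names, the generated HTML carries the same hash per image, and the client matches the two. All names here are invented, and whether this actually prevents reuse of the list by outside censors depends on details that are explicitly not fleshed out above.]

    // Sketch only: hashes instead of file names in the published filter list.
    import { createHmac } from 'crypto';

    // Server-side key; without it, a third party cannot turn the published
    // list back into file names except by guessing names and re-hashing.
    const LIST_KEY = process.env.FILTER_LIST_KEY ?? 'example-only-key';

    function imageToken(fileName: string): string {
      return createHmac('sha256', LIST_KEY).update(fileName).digest('hex');
    }

    // 1. The published filter list contains only tokens, never file names.
    const exampleFilterList = new Set(
      ['File:Example_one.jpg', 'File:Example_two.jpg'].map(imageToken),
    );

    // 2. The generated HTML carries the same token for each image, e.g.
    //    <img src="..." data-filter-token="3f2a...">.
    // 3. The client hides an image exactly when its token is on the user's list.
    function shouldHide(tokenFromHtml: string, list: Set<string>): boolean {
      return list.has(tokenFromHtml);
    }

    console.log(shouldHide(imageToken('File:Example_one.jpg'), exampleFilterList)); // true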
On Tue, Nov 29, 2011 at 02:28:13PM +0100, Alasdair wrote:
On Tuesday, 29 November 2011 at 13:42, Tobias Oelgarte wrote:
With the tiny (actually big) problem that such lists are public and can be fed directly into the filters of not-so-people-loving or overly "caring" ISPs.
I think this is a point that I was missing about the objections to the filter system.
So a big objection to any "sets" of filters is not so much the "weak" filtering on Wikipedia itself, but that such "sets" would enable other censors to more easily impose a form of "strong" censorship of Wikipedia, where some images would not be available (at all) to readers - regardless of whether or not they want to see them?
I am not sure I agree with this concern as a practical matter but I can understand it as a theoretical concern.
This is an old objection, one that various library organisations have been dealing with in their practice for at least half a century. They call such sets of prejudicial labels "Censorship Tools", and they are opposed to them.
eg. http://www.ala.org/Template.cfm?Section=interpretations&Template=/Conten...
See elsewhere for further sources. (they get brought up regularly)
sincerely, Kim Bruning
On Tue, Nov 29, 2011 at 1:42 PM, Tobias Oelgarte tobias.oelgarte@googlemail.com wrote:
I don't agree either. We decide what belongs to which preset (or who else will do it?), and it is meant to filter out controversial content. Therefore we define what controversial content is - or at least we tell people what we think might be controversial, while we also tell them (by exclusion) that other things aren't controversial.
No, we don't tell them that other things aren't controversial. I consider that a ridiculous conclusion to draw. It's just that we have not yet found that it falls under one of the categories we specified as blockable. There are other categories that might be specified, but alas, we don't have them yet.
Even more importantly, your options are not neutral at all, in my opinion. "Either everything is controversial or nothing is". That's not a neutral statement. "It's controversial to you if you consider it controversial to you" - that's much closer to being NPOV, and that's what the proposal is trying to do.
No. These options are meant to say "you have to define for yourself what is controversial". They take the extreme stances of equal judgement: either everything is controversial or nothing is, and both stances provide no information at all. Neither gives a definition. They are not an answer to the question "What is controversial?" under the assumption that neither "everything is controversial" nor "nothing is controversial" holds. If you agree that both extremes are untrue, then this simple rule has to apply. That is very simple logic, and it forces you to define it for yourself.
Yet you are against any means that would make this choice easier. If I say "I don't want to see pictures of XXX", why not give me the possibility to download a list of pictures of XXX and use that? Why do I have to specify each and every picture I do or do not want to see myself?
Back to the statement "It's controversial to you if you consider it controversial to you". That's right. But it's not related to the initial problem. In this case you will only find a "you" and a "you". There is no "we", "them" or anything like that. You could have written: "If my leg hurts, then my leg hurts". Always true, but useless when applied to something that involves anything not done by you in the first part of the sentence.
No, not useless. If I say that I don't want to see pictures of XXX, why not let someone else make a list of pictures of XXX? Say I believe that every time a chainsaw touches my leg, it is going to hurt. Wouldn't it be good, then, to have a rule that anyone needs my permission before they touch my leg with a chainsaw? What you are saying is "only you can decide when your leg is hurting, so you have to choose: either we let everything touch your leg unless you forbid it, or we let nothing touch your leg unless you allow it."
NPOV is not about treating every _subject_ as equal, but about treating every _opinion_ as equal.
This is a nice sentence. I hope that you will remember it. I also hope that you remember that images are subjects and not opinions.
If I have a set of images I consider controversial, and you have a different, perhaps non-intersecting set that you consider controversial, the NPOV method is to consider both distinctions as valid, not to say that it means that everything is controversial, or nothing is.
A filter with presets considers only one opinion as valid: it either shows an image or hides it. Stating different opinions inside an article is a very different thing; you represent both opinions, but you don't apply them. On top of that, they are the opinions of people who don't write the article.
But one can choose the filter oneself, or no filter at all.
And -surprise- that seems to be exactly what this proposal is trying to achieve. It is probably not ideal, there might even be reasons to drop it completely, but NPOV is much better served by this proposal than it is by yours.
Actually, you misused or misunderstood the core of NPOV in combination with these two stances. That's why I can't agree with or follow your conclusion.
NPOV is meant in the sense that we don't say what is right or wrong. We represent the opinions and we let the reader decide what to do with them. Additionally, NPOV implies that we don't write down our own opinions; instead we cite them.
And what does this have to do with image filters at all?
On 29.11.2011 14:48, Andre Engels wrote:
On Tue, Nov 29, 2011 at 1:42 PM, Tobias Oelgarte tobias.oelgarte@googlemail.com wrote:
I don't agree either. We decide what belongs to which preset (or who else will do it?), and it is meant to filter out controversial content. Therefore we define what controversial content is - or at least we tell people what we think might be controversial, while we also tell them (by exclusion) that other things aren't controversial.
No, we don't tell them that other things aren't controversial. I consider that a ridiculous conclusion to draw. It's just that we have not yet found that it falls under one of the categories we specified as blockable. There are other categories that might be specified, but alas, we don't have them yet.
Do you remember your last mail, in which you said that my viewpoints are extreme? I was writing that considering either everything or nothing controversial are the only neutral positions to take. You opposed that strongly. Now you start your claim with the presupposition that we will eventually find categories such that anything could be seen as controversial? That's a 180° turn from one mail to the next. Just to find new arguments?
I will read the rest of your answers later on. For now I have some work to do. Maybe you want to enlighten me as to how that is possible.
nya~
On Tue, Nov 29, 2011 at 3:01 PM, Tobias Oelgarte tobias.oelgarte@googlemail.com wrote:
Do you remember your last mail, in which you said that my viewpoints are extreme? I was writing that considering either everything or nothing controversial are the only neutral positions to take. You opposed that strongly. Now you start your claim with the presupposition that we will eventually find categories such that anything could be seen as controversial? That's a 180° turn from one mail to the next. Just to find new arguments?
I don't say we _will_, I say we _might_. But if you want your 180-degree turn, then I will happily agree that any image might be controversial. That still does not convince me that the only 'neutral' ways of blocking are blocking nothing or blocking everything. Perhaps every image is objected to by someone. That does not mean that anyone who objects to some image should have every image taken away from them.
On Tue, Nov 29, 2011 at 3:02 PM, Tom Morris tom@tommorris.org wrote:
On Tue, Nov 29, 2011 at 08:09, Möller, Carsten c.moeller@wmco.de wrote:
No, we need to harden the wall agaist all attacks by hammers,
screwdrivers and drills.
We have consensus: Wikipedia should not be censored.
You hold strong on that principle. Wikipedia should not be censored!
Even if that censorship is something the user initiates, desires, and can turn off at any time, like AdBlock.
Glad to see that Sue Gardner's warnings earlier in the debate that people don't get entrenched and fundamentalist but try to honestly and charitably see other people's points of view has been so well heeded.
My question is: is this really something that the WMF should be spending its time and resources on?
In the case of AdBlock, it's a third-party browser extension, designed to fill a need. Most people here can't seem to find the need that compelled the Board to enact this. Why has no third-party solution, or anything close to this filter, been developed independently?
Why should we spend donor money to develop tools to censor our own content? I thought the goal was gathering the sum of all human knowledge, not all knowledge minus controversial content.
Regards Theo
On Tue, Nov 29, 2011 at 11:32 AM, Tom Morris tom@tommorris.org wrote:
On Tue, Nov 29, 2011 at 08:09, Möller, Carsten c.moeller@wmco.de wrote:
No, we need to harden the wall agaist all attacks by hammers, screwdrivers and drills. We have consensus: Wikipedia should not be censored.
You hold strong on that principle. Wikipedia should not be censored!
Even if that censorship is something the user initiates, desires, and can turn off at any time, like AdBlock.
Glad to see that Sue Gardner's warnings earlier in the debate that people don't get entrenched and fundamentalist but try to honestly and charitably see other people's points of view has been so well heeded.
The nub of the matter is that such an approach should be a two-way street. There is no evidence that the filter-pushing lobby is making even the most rudimentary good-faith effort at listening to what the other side is telling them - just a lot of hand-waving and misdirection. Case in point: ditching the idea of a "category-based filtering scheme" as if that particular bit were what people were opposing. Not even close. There is still an echo-chamber aspect to the people who are driving filters.
On Tue, Nov 29, 2011 at 09:09:04AM +0100, Möller, Carsten wrote:
... but -if we want to reach consensus[1]- what we really need to be discussing is: screwdrivers.
sincerely, Kim Bruning
No, we need to harden the wall agaist all attacks by hammers, screwdrivers and drills. We have consensus: Wikipedia should not be censored.
Right, hammering ourselves on the thumb is a bad idea :-P
However, there's nothing wrong with making sure that people don't get odd images when they don't expect it (something wikipedia is good at, but commons admittedly perhaps slightly less so). This is the screw.
I don't think a filter (the hammer) will be very successful at doing so, because filters have simply never been very good at keeping away unexpected content, and can easily lead to censorship and other unwanted side effects (hitting ourselves on the thumb). However, perhaps some other tool might be useful for fixing the screw. Some people have come up with some interesting proposals.
But shouting at each other about filters is probably counter-productive at this point. ;-)
sincerely, Kim Bruning
On 29.11.2011 23:47, Kim Bruning wrote:
On Tue, Nov 29, 2011 at 09:09:04AM +0100, Möller, Carsten wrote:
... but -if we want to reach consensus[1]- what we really need to be discussing is: screwdrivers.
sincerely, Kim Bruning
No, we need to harden the wall agaist all attacks by hammers, screwdrivers and drills. We have consensus: Wikipedia should not be censored.
Right, hammering ourselves on the thumb is a bad idea :-P
However, there's nothing wrong with making sure that people don't get odd images when they don't expect it (something wikipedia is good at, but commons admittedly perhaps slightly less so). This is the screw.
That is more or less a search and time issue. If you search for a cucumber and a sexually related image ranks first instead of an actual cucumber, then it is time to improve the search function. If we don't have enough people categorizing images the right way, we might start to recruit more helpers.
If we are careful enough, we might be able to recycle the hammer into two or more small screwdrivers, namely an argument against the image filter that reads like this: "Put more effort into ideas for improving search functionality and helping with categorization. It will actually help everyone and would get clear referendum results." ;P
I don't think a filter (the hammer) will be very successful at doing so, because filters have simply never been very good at keeping away unexpected content, and can easily lead to censorship and other unwanted side effects (hitting ourselves on the thumb). However, perhaps some other tool might be useful for fixing the screw. Some people have come up with some interesting proposals.
But shouting at each other about filters is probably counter-productive at this point. ;-)
sincerely, Kim Bruning
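[Editorial aside: to make the "improve the search function" point above (the cucumber example) concrete, one very crude, purely illustrative ranking heuristic would be to score title and category matches above description-only matches, so that an image literally categorised under the query term outranks one that merely mentions it. The field names here are invented and are not Commons' actual search schema.]

    // Illustrative only; not how Commons search actually works.
    interface ImageRecord {
      title: string;        // e.g. "Cucumber on a market stall.jpg"
      categories: string[]; // e.g. ["Cucumbers", "Vegetables"]
      description: string;
    }

    // Title and category matches count for more than description-only matches.
    function score(query: string, img: ImageRecord): number {
      const q = query.toLowerCase();
      let s = 0;
      if (img.title.toLowerCase().includes(q)) s += 3;
      if (img.categories.some((c) => c.toLowerCase().includes(q))) s += 2;
      if (img.description.toLowerCase().includes(q)) s += 1;
      return s;
    }

    function rank(query: string, images: ImageRecord[]): ImageRecord[] {
      return [...images].sort((a, b) => score(query, b) - score(query, a));
    }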
On Wed, Nov 30, 2011 at 12:51:04AM +0100, Tobias Oelgarte wrote:
If we are careful enough, we might be able to recycle the hammer into two or more small screwdrivers, namely an argument against the image filter that reads like this: "Put more effort into ideas for improving search functionality and helping with categorization. It will actually help everyone and would get clear referendum results." ;P
That's going in the right direction. And perhaps we can easily do more, within the given constraints. If so, there's no reason not to. :-)
sincerely, Kim Bruning
On Wed, Nov 30, 2011 at 1:51 AM, Tobias Oelgarte tobias.oelgarte@googlemail.com wrote:
That is more or less a search and time issue. If you search for a cucumber and a sexually related image ranks first instead of an actual cucumber, then it is time to improve the search function. If we don't have enough people categorizing images the right way, we might start to recruit more helpers.
Equally, it is mildly amusing to get a picture of an actual gherkin if one uses the search term "London Gherkin". The age-old answer to this problem is to scroll down the search results. We didn't like it when Google was prejudicially privileging other results above ones from Wikipedia (a practice they no longer adhere to). I don't see much of a difference here.