Dear Wikimedia community,
First, I want to thank the 24,000 editors who participated in the Wikimedia Foundation's referendum on the proposed personal image hiding feature. We are particularly grateful to the nearly seven thousand people who took the time to write in detailed and thoughtful comments. Thank you.
Although the Board did not commission the referendum (it was commissioned by our Executive Director), we have read the results and followed the discussions afterwards with great interest. We discussed them at our Board meeting in San Francisco, in October. We are listening, and we are hearing you.
The referendum results show that there is significant division inside the Wikimedia community about the potential value and impact of an image hiding feature.
The majority of editors who responded to the referendum are not opposed to the feature. However, a significant minority is opposed. Some of those people say there is no problem, and that anyone who is offended is wrong and should be ignored. Some say that regardless of whether there is a problem, it's not ours to solve: our job is to make knowledge available to everyone, not to participate in screening or filtering it. And some say that even if there is a problem, a category-based image hiding feature is the wrong solution, because it would enable censorship by third parties, and would also create significant new work for editors in creating and maintaining categories. Some of you say these are editorial issues, and the Wikimedia Foundation has no business being involved with them.
I, and the other Board members, and Sue, are paying attention to what you've told us.
We believe there is a problem. The purpose of the Wikimedia movement is to make information freely available to people all around the world, and when material on the projects causes grave offence, those offended don't benefit from our work. We believe that exercising editorial judgment to mitigate that offence is not censorship. We believe we need, and should want, to treat readers with respect. Their opinions and preferences are as legitimate as our own, and deliberately offending or provoking them is not respectful, and is not okay.
We are not going to revisit the resolution from May for the moment: we are letting that resolution stand unchanged.
But, we are asking Sue and the staff to continue the conversation with editors, and to find a solution that strikes the best balance between serving our readers, empowering and supporting editors, and dedicating an appropriate amount of effort to the problem. I believe that is possible within the language of the resolution the Board already passed, which leaves open most details of how implementation should be achieved.
We realize this is an important issue for the Wikimedia movement, and in many ways it goes to the heart of who we are. I think church.of.emacs expressed this fairly well on foundation-l, when he described this as a conflict between two visions of our work: “a project of pure enlightenment, which ignores the biased/prejudiced reader and accepts the resulting limited distribution” versus “a project of praxis, which seeks a balance between the goals of enlightenment and the reader's interests, aiming at a high distribution.” I would quibble with some of his choice of words, but I agree with the general gist of what he said.
I believe we can find an answer that is right for us. I ask you to work with us to do that.
Sincerely, Ting Chen
On 9 October 2011 13:55, Ting Chen tchen@wikimedia.org wrote:
The majority of editors who responded to the referendum are not opposed to the feature. However, a significant minority is opposed.
How do you know? The "referendum" didn't ask whether people were opposed or not.
We are not going to revisit the resolution from May for the moment: we are letting that resolution stand unchanged.
But, we are asking Sue and the staff to continue the conversation with editors, and to find a solution that strikes the best balance between serving our readers, empowering and supporting editors, and dedicating an appropriate amount of effort to the problem. I believe that is possible within the language of the resolution the Board already passed, which leaves open most details of how implementation should be achieved.
You haven't commented on the votes that have taken place on the German and French Wikipedias, which show a very large majority opposed to the feature on those projects (I believe the German one creates binding policy on that project, although the French one doesn't). Your original resolution doesn't go into any details about whether the feature should be forced upon individual projects that clearly don't want it. What are your views, and the views of the board, on that issue?
On 9 October 2011 14:18, Thomas Dalton thomas.dalton@gmail.com wrote:
On 9 October 2011 13:55, Ting Chen tchen@wikimedia.org wrote:
The majority of editors who responded to the referendum are not opposed to the feature. However, a significant minority is opposed.
How do you know? The "referendum" didn't ask whether people were opposed or not.
I fear this point will need restating every time someone claims the "referendum" shows support.
David Gerard wrote:
On 9 October 2011 14:18, Thomas Dalton thomas.dalton@gmail.com wrote:
On 9 October 2011 13:55, Ting Chen tchen@wikimedia.org wrote:
The majority of editors who responded to the referendum are not opposed to the feature. However, a significant minority is opposed.
How do you know? The "referendum" didn't ask whether people were opposed or not.
I fear this point will need restating every time someone claims the "referendum" shows support.
I wonder what the image filter referendum results would have had to look like in order to get anything other than a rambling "we march forward, unabated!" letter from the Board.
MZMcBride
I could probably look this up and find out, but can anyone tell me when the next Board election will be?
Nathan
On 9 October 2011 12:18, Nathan nawrich@gmail.com wrote:
I could probably look this up and find out, but can anyone tell me when the next Board election will be?
Nathan
Two board members are selected by chapters; however, the board has certain rights to refuse the selected candidates. Chapter-selected candidates will be appointed in 2012.
The WMF-wide community holds an election in odd-numbered years to nominate three candidates. Again, the board has certain rights to refuse the candidates with the most votes.
The remainder of the board members are selected for their expertise, with the exception of the "Founder" seat which is approved on a regular basis.
The primary responsibility of Board members is to the Foundation, not to the community or the chapters or to any other external agent.
This is all available for review in the Bylaws.[1]
Risker/Anne
[1] http://wikimediafoundation.org/wiki/Bylaws#Section_3._Selection.
Risker, 09/10/2011 18:40:
Two board members are selected by chapters; however, the board has certain rights to refuse the selected candidates. Chapter-selected candidates will be appointed in 2012.
The WMF-wide community holds an election in odd-numbered years to nominate three candidates. Again, the board has certain rights to refuse the candidates with the most votes.
The remainder of the board members are selected for their expertise, with the exception of the "Founder" seat which is approved on a regular basis.
The primary responsibility of Board members is to the Foundation, not to the community or the chapters or to any other external agent.
I find this response a bit odd. ;-) It almost seems to assume that the community (or Nathan?) is likely to want to elect someone the WMF couldn't accept, or that "responsibility to the community" is a bad thing, while we used to say only that there's no imperative mandate and that chapter-elected trustees are not chapter representatives, etc.
Nemo
On 9 October 2011 12:48, Federico Leva (Nemo) nemowiki@gmail.com wrote:
Risker, 09/10/2011 18:40:
Two board members are selected by chapters; however, the board has certain rights to refuse the selected candidates. Chapter-selected candidates will be appointed in 2012.
The WMF-wide community holds an election in odd-numbered years to nominate three candidates. Again, the board has certain rights to refuse the candidates with the most votes.
The remainder of the board members are selected for their expertise, with the exception of the "Founder" seat which is approved on a regular basis.
The primary responsibility of Board members is to the Foundation, not to the community or the chapters or to any other external agent.
I find this response a bit odd. ;-) It almost seems to assume that the community (or Nathan?) is likely to want to elect someone the WMF couldn't accept, or that "responsibility to the community" is a bad thing, while we used to say only that there's no imperative mandate and that chapter-elected trustees are not chapter representatives, etc.
I'm not sure what you find odd about it, but it is factual.
The key point is that board members must work on behalf of the Foundation, and must not act as representatives of a particular constituency, and those constituencies cannot direct board members elected/nominated by them to act in certain ways.
I agree that it is not entirely relevant to this discussion: the board's statement on controversial content was issued in May, and all three community-nominated board members who signed off on that statement were re-elected subsequent to that.
Risker/Anne
On 10/09/11 9:58 AM, Risker wrote:
On 9 October 2011 12:48, Federico Leva (Nemo) nemowiki@gmail.com wrote:
Risker, 09/10/2011 18:40:
The primary responsibility of Board members is to the Foundation, not to the community or the chapters or to any other external agent.
I find this response a bit odd. ;-) It almost seems to assume that the community (or Nathan?) is likely to want to elect someone the WMF couldn't accept, or that "responsibility to the community" is a bad thing, while we used to say only that there's no imperative mandate and that chapter-elected trustees are not chapter representatives, etc.
I'm not sure what you find odd about it, but it is factual.
The key point is that board members must work on behalf of the Foundation, and must not act as representatives of a particular constituency, and those constituencies cannot direct board members elected/nominated by them to act in certain ways.
It's not the factuality of the statement that is odd. What is odd is the Hong Kong style of democracy that ensures that the elected members can never form a majority.
In a fully democratic country all elected representatives work on behalf of the country, but they still represent particular constituencies and/or parties, to which they are accountable. Without that the entire notion of constituencies is a sham. When they fail to represent the interests of their constituencies they should be voted out.
Ray
On Sun, Oct 9, 2011 at 12:40 PM, Risker risker.wp@gmail.com wrote:
Two board members are selected by chapters; however, the board has certain rights to refuse the selected candidates. Chapter-selected candidates will be appointed in 2012.
The WMF-wide community holds an election in odd-numbered years to nominate three candidates. Again, the board has certain rights to refuse the candidates with the most votes.
The remainder of the board members are selected for their expertise, with the exception of the "Founder" seat which is approved on a regular basis.
The primary responsibility of Board members is to the Foundation, not to the community or the chapters or to any other external agent.
This is all available for review in the Bylaws.[1]
Risker/Anne
Thanks!
To your last point: that's of course true for any corporation. Yet, it seems clear and obvious in this case that the Board can't serve the Foundation without also serving the Wikimedia community. If the Board loses the support of the community, not only will that have election repercussions (despite the ability of the Board to determine its own membership), it will also be strongly detrimental to the interests of the corporation.
I'm sure the Board understands that you can't please the readers at the expense of the editors, particularly when we're at a point in project development where editors are not so easy to replace. Just like editorial decisions happen in the real world and have real world consequences, so also will Board decisions have consequences.
Now all this is not to say that the Board has already lost the confidence of the community, or that any specific members should be turned out or anything like that. But it's worth remembering, for folks on both sides of this issue, that there are methods of addressing any truly schismatic decisions on the part of the Board in the hopefully very unlikely case that any are taken.
Nathan
Discussing 'what if' scenarios in public rarely does any good if those same people have full power to avoid that scenario in the first place. Both the community and the board can avoid the situation where we don't reach agreement. Therefore, discussing 'what if we don't, what will you do' will most likely not improve the arguments, discussion or outcome for anyone, but only makes that very scenario more likely to happen. Let's cross that bridge when we get there.
The same goes for the very theoretical 'the board might not accept a board member nomination'. No such situation has ever happened in the history of the foundation; quite the contrary - they have sometimes also appointed people who ended up lower on the nomination list than required (for example Oscar). I don't see any reason why that should happen any time soon, so discussing it would be a theoretical exercise - very interesting but hardly productive for this specific discussion.
What would be very constructive for me is getting more hard data which we can use to have the discussion we need to have. Getting more data about how our readers think about the topic, for example, or on whether the difference in opinion is mainly geographical, related to education/background or to hair color - whether the community (as has been suggested by some) consists of a biased group of authors or is actually quite representative of its regions. No conclusions can be drawn automatically from that, but it would help us in getting to the core of the discussion, and also in figuring out whether there would be a system (filter or not) that would both help resolve the issues people see and not obstruct others.
The civil war scenario sounds horrible, but when I read some discussions, it seems some people are all too eager to steer in that direction, hoping that 'the others' will steer away first. Perhaps we should just slow down a bit and map the situation a bit better.
Best regards,
Lodewijk
On 9 October 2011 19:05, Nathan nawrich@gmail.com wrote:
[...]
On 9 October 2011 18:16, Lodewijk lodewijk@effeietsanders.org wrote:
Discussing 'what if' scenarios in public rarely does any good if those same people have full power to avoid that scenario in the first place. Both the community and the board can avoid the situation where we don't reach agreement. Therefore, discussing 'what if we don't, what will you do' will most likely not improve the arguments, discussion or outcome for anyone, but only makes that very scenario more likely to happen. Let's cross that bridge when we get there.
I don't think the community really can avoid it, since it isn't a coherent body. An individual member of the community can't really achieve anything. The WMF has a hierarchy and structured decision making mechanisms, so it can take deliberate action. The community can't.
The situation we are in is not dissimilar to that faced in national politics all the time - a political party wanting to do something that a large portion of the electorate is opposed to. It's all very nice to say that both sides should have a mature discussion and reach a mutually acceptable conclusion, but it can't actually be done. The electorate can't take that kind of deliberate action. The only way forward is for the political party to listen to individual members of the electorate (through public forums, referenda, polls of a small sample of the population, etc.) and then decide on a route forward that will not annoy too many people.
If, at the end of this, no resolution is reached and you say that the community is partly to blame, who are you actually blaming?
On Sun, Oct 09, 2011 at 06:32:31PM +0100, Thomas Dalton wrote:
I don't think the community really can avoid it, since it isn't a coherent body. An individual member of the community can't really achieve anything. The WMF has a hierarchy and structured decision making mechanisms, so it can take deliberate action. The community can't.
Actually, the community is quite capable of generating coherent action, thank you. If you don't know how, there's folks around who can teach you. If you can't find any, I'll show you how. :-)
sincerely, Kim Bruning
On 9 October 2011 17:49, Kim Bruning kim@bruning.xs4all.nl wrote:
On Sun, Oct 09, 2011 at 06:32:31PM +0100, Thomas Dalton wrote:
I don't think the community really can avoid it, since it isn't a coherent body. An individual member of the community can't really achieve anything. The WMF has a hierarchy and structured decision making mechanisms, so it can take deliberate action. The community can't.
Actually, the community is quite capable of generating coherent action, thank you. If you don't know how, there's folks around who can teach you. If you can't find any, I'll show you how. :-)
I didn't say it can't take coherent action. Writing an encyclopaedia is a coherent action, after all. I said it can't take deliberate action. By deliberate action, I mean deciding to do something and then doing it. The way we work is that some people say they want to do something and then the community decides whether to let them or not. That works for a lot of things, but not for what Lodewijk is talking about. We can't decide to discuss this with the WMF and reach a compromise and then do so. Everyone will do whatever they want to do. Everyone agreeing that it would be great to reach a compromise won't actually change what individuals do.
On Sun, Oct 09, 2011 at 06:51:24PM +0100, Thomas Dalton wrote:
I didn't say it can't take coherent action. Writing an encyclopaedia is a coherent action, after all. I said it can't take deliberate action. By deliberate action, I mean deciding to do something and then doing it.
That's right.
The way we work is that some people say they want to do something and then the community decides whether to let them or not.
That's not entirely right.
That works for a lot of things, but not for what Lodewijk is talking about. We can't decide to discuss this with the WMF and reach a compromise and then do so.
That's neither here nor there. There's a way to make that work. (A little more complex than fits into this margin, but it's essentially what I'm up to all the time, or when I'm up to things at any rate. :-)
sincerely, Kim Bruning
On Sun, Oct 9, 2011 at 7:49 PM, Kim Bruning kim@bruning.xs4all.nl wrote:
On Sun, Oct 09, 2011 at 06:32:31PM +0100, Thomas Dalton wrote:
I don't think the community really can avoid it, since it isn't a coherent body. An individual member of the community can't really achieve anything. The WMF has a hierarchy and structured decision making mechanisms, so it can take deliberate action. The community can't.
Actually, the community is quite capable of generating coherent action, thank you. If you don't know how, there's folks around who can teach you. If you can't find any, I'll show you how. :-)
sincerely, Kim Bruning
--
The offer to mortgage my newly inherited house to bankroll a fork of wikipedia is still very much live, if the community shows enough resolve. :-) only serious.
On Sun, Oct 9, 2011 at 7:49 PM, Kim Bruning kim@bruning.xs4all.nl wrote:
[...]
The offer to mortgage my newly inherited house to bankroll a fork of wikipedia is still very much live, if the community shows enough resolve. :-) only serious.
Wikinfo is not available to you:
http://www.wikinfo.org/index.php/Wikinfo:Offensive_material
Fred
Mid-2013.
Last ones were in June.
Tom
On 9 October 2011 17:18, Nathan nawrich@gmail.com wrote:
I could probably look this up and find out, but can anyone tell me when the next Board election will be?
Nathan
On Sun, Oct 9, 2011 at 9:10 AM, MZMcBride z@mzmcbride.com wrote:
David Gerard wrote:
On 9 October 2011 14:18, Thomas Dalton thomas.dalton@gmail.com wrote:
On 9 October 2011 13:55, Ting Chen tchen@wikimedia.org wrote:
The majority of editors who responded to the referendum are not opposed to the feature. However, a significant minority is opposed.
How do you know? The "referendum" didn't ask whether people were opposed or not.
I fear this point will need restating every time someone claims the "referendum" shows support.
I wonder what the image filter referendum results would have had to look like in order to get anything other than a rambling "we march forward, unabated!" letter from the Board.
MZMcBride
Hi MZM and all! Greetings from the end of a long -- but productive and inspiring -- meeting weekend.
"Marching forward unabated" is not, in fact, what we are saying. The board, and individual members of the board, are quite aware of all of the criticisms from the vote and from the conversations on and off list -- believe me. This is not an official report on behalf of the board, but here is what we discussed doing:
* not going ahead with the category-based design that was proposed in the mockups; it is clear there are too many substantive problems that have been raised with this. Although this design (or any other) was actually not specified in the resolution, it is obvious that many of the critical comments were about using categorization in particular, and we hear that.
* we are asking the staff to explore alternative designs, e.g. for a way for readers to flag images for themselves, and collapse individual images. This isn't fixed yet because it shouldn't be: we need to have a further period of iterative community & technical design.
* not changing or revoking the Board resolution, because we do still think that there is a problem with our handling of potentially controversial content that needs to be addressed. We don't want to ignore the criticism, and we *also* don't want to ignore the positive comments from those who identified a problem and thought such a tool would be helpful and useful in addressing it. Our view is holistic. The Board discussed amending the resolution (we think, in particular, that the word 'filter' has led to many assumptions about design), but decided that for now the language of the resolution is broad enough that it leaves room for alternative solutions. And we also do not want to ignore the rest of the resolution -- the parts that call for better tools for commons, and that lay out that we respect the principle of least astonishment.
The speculation on this list the last few weeks about what individual board members think and want has generally been wildly, hilariously off base -- I have seen many statements about board member motivations that couldn't have been more wrong -- but so has the speculation that we don't care and have not been paying attention. My own views on whether a filter as proposed is workable have changed over the past couple of months. I appreciate especially the reasoned comments I have seen from people who have taken the time to think it through and who have wondered if a design as proposed would even work for readers, or would be implementable. And I have been gratified to see people dig up things like library statements of principle; as foundational documents these are a good place to start from (as someone who has always seen herself as a free speech advocate inside and outside of the library world, this tactic has made me glad, even if we may differ on interpretation). I also am glad for those comments that took the time to look critically at the vote process -- we did make a lot of mistakes, but we did learn a lot, and I hope with the help of all of this input we can do a better job next time we have a broad-scale vote (did you know that this was the single largest participatory exercise in wikimedia's history? I could not have imagined that at the beginning of this summer).
None of us on the board have any intention of being censors; that is no one's desire and within no one's tolerance. I do think the resolution principles (neutrality, principle of least astonishment) that we laid out as guidelines for the tool are still good, strong principles; and I wouldn't have voted for the resolution in the first place if I thought what we were proposing encompassed or enabled censorship. And what hasn't changed for me is the impetus behind the resolution: a desire to work on behalf of *both* the editing community and our broad (up to 7 billion!) community of readers, and a desire to get perspectives from outside our own sometimes narrow conversational community on the mailing lists and wikis.
We know there are a lot of questions that have been raised over the last few weeks about releasing vote data and so on that aren't addressed in this letter; we did not address everything in our board meeting either. As a board, we trust Sue to continue to implement the resolution; that means both managing the vote and its results, and design issues as well. And while we all of course are coming from different backgrounds and have different opinions, I think we are all on the same page about wanting to build helpful things for both our readers and our editors, and in wanting to treat minority views in our community as well as we treat majority ones.
best, phoebe
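As a rough illustration of the "flag images for themselves" alternative mentioned in the second point of phoebe's letter above: a purely personal hide list could live entirely in the reader's own browser, with no shared categories and no server-side state. The following TypeScript sketch is hypothetical; none of these names come from MediaWiki or any WMF proposal.

    // A personal, per-reader hide list kept in the browser's
    // localStorage. Collapsing an image affects only this reader;
    // nothing is shared, so no category scheme is needed.
    // All names here are hypothetical illustrations, not real APIs.
    const HIDE_KEY = "personalImageHideList";

    function loadHideList(): Set<string> {
      return new Set<string>(JSON.parse(localStorage.getItem(HIDE_KEY) ?? "[]"));
    }

    function saveHideList(list: Set<string>): void {
      localStorage.setItem(HIDE_KEY, JSON.stringify([...list]));
    }

    // Toggle one image in or out of the reader's own hide list.
    function toggleImage(fileName: string): void {
      const list = loadHideList();
      if (list.has(fileName)) {
        list.delete(fileName);
      } else {
        list.add(fileName);
      }
      saveHideList(list);
    }

    // Decide at render time whether to show a given image collapsed.
    function isCollapsed(fileName: string): boolean {
      return loadHideList().has(fileName);
    }

One appeal of this kind of design, under these assumptions, is that it sidesteps the censorship objection entirely: there is no shared labeling for a third party to reuse.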
On Mon, Oct 10, 2011 at 6:47 AM, phoebe ayers phoebe.wiki@gmail.com wrote:
[...]
I would like to spread a wide expanse of blue water between the view that there are no trolls at all opposing the filter, and the view that the history of this issue seems to support: historically, trolls have been driving the filter issue. They never gained any traction. Now it seems (only talking about appearances) the trolls are running the asylum. I know that is not accurate, and you do too. But it is a perception we have to address, head on. This issue is a Perennial Proposals Elephant Graveyard. That is the main thrust. What people in favour of "doing something" are still trying to achieve is "no, we are not doing what we promised not to do, nudge nudge, wink wink."
Taking a step back, to look at the bigger picture -- one thing that has always struck me as odd is how different our approach to text and illustrations is.
For text, we are incredibly "censorious", insisting that any material presented to the reader must reflect what is found in reliable sources. Anything unsourceable is deleted. No one in the community has a problem with that. The occasional newbie who complains that their original research has been "censored" generally gets very short shrift.
But when it comes to discussing whether a specific illustration or media file should be added to an article, the one criterion nobody seems to raise is whether this is the type of image or video a reliably published educational source would include. Instead, we often hear that because Wikipedia is not censored, we *must* keep an image or media file in the article, *especially so* if it is controversial.
The underlying assumption seems to be that reliable sources somehow *are* censored when it comes to illustrations, and we are not. But if we assume that about illustrations, why don't we assume it about text? It doesn't make sense.
The whole of Wikipedia is built on the premise that its text should reflect the editorial judgment of reliable sources. It's not built on the premise of forging ahead of reliable sources, of breaking new ground, or of being a subversive force in society (beyond the arguably subversive idea of presenting a free summary of the world's knowledge, as collected in reliable sources).
The logical thing to do would be to take more of a lead from reliable sources in choosing a style of illustration. And given that reliable sources differ in their editorial standards depending on region, philosophical stance, intended audience, etc., an optional image filter, used or not used at the discretion of the reader, would be a useful complement to adjust to these differences.
Andreas
From: phoebe ayers phoebe.wiki@gmail.com, Sent: Monday, 10 October 2011, 4:47, Subject: Re: [Foundation-l] Letter to the community on Controversial Content
[...]
On 10 October 2011 12:16, Andreas Kolbe jayen466@yahoo.com wrote:
Taking a step back, to look at the bigger picture
I would; but someone added it to this pesky image filter...
(too soon? sorry :P)
Tom
phoebe ayers wrote:
[...]
I found this e-mail very helpful and insightful. Thank you for writing it, Phoebe.
I think the issue of "I'll put down my gun when you put down yours" is still being a bit side-stepped, but it isn't really the responsibility of a single Board member (or even the Board) to make agreements not to impose this feature on a particular wiki community. That has to come from the Executive Director in this case, I think. As others have said, it might go a long way toward more open and honest dialogue if people don't feel as though their efforts will inevitably be futile. (And, it isn't as though this is without precedent. Even less controversial new features like the Vector skin were made optional on a per-wiki basis.)
With the categorization scheme now being re-thought, I'm curious if there are any central brainstorming pages about an image filter, either on Meta-Wiki or mediawiki.org. If not, I'd be happy to start one. I've had some ideas about filtering based on thresholds and percentages. For example, if 90% of viewers in your country have hidden a particular image and you've set your personal threshold at 50%, an image might be automatically obscured. This isn't a perfect idea, but discussing and debating the merits of each idea might reveal a solution that's tenable.
MZMcBride
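For concreteness, the threshold idea MZMcBride describes above can be sketched in a few lines of TypeScript. This is only an illustration of the arithmetic, under the assumption that some aggregation service could supply the share of viewers who have hidden an image; the names and numbers are hypothetical, not any real MediaWiki interface.

    // Threshold-based obscuring: collapse an image when the share of
    // viewers (e.g. in the reader's country) who have hidden it meets
    // the reader's own threshold. Hypothetical sketch, not a real API.
    function shouldObscure(hideFraction: number, threshold: number): boolean {
      return hideFraction >= threshold;
    }

    // The example from the message: 90% of viewers hid the image and
    // the reader's threshold is 50%, so it starts out obscured.
    console.assert(shouldObscure(0.9, 0.5) === true);
    // A less-hidden image stays visible for the same reader.
    console.assert(shouldObscure(0.3, 0.5) === false);

Note that any scheme like this inherits the aggregation problems Kim raises below: the inputs are reader-supplied and can be gamed.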
On 10 October 2011 15:27, MZMcBride z@mzmcbride.com wrote:
I think the issue of "I'll put down my gun when you put down yours" is still being a bit side-stepped, but it isn't really the responsibility of a single Board member (or even the Board) to make agreements not to impose this feature on a particular wiki community. That has to come from the Executive Director in this case, I think.
The bit of the board resolution that Ting quoted (and confirmed as still standing) says "all" projects. That means the Board has already made that decision and Sue has no choice in the matter. The resolution leaves Sue a lot of discretion in terms of how the feature will work, but not on the subject of whether the feature will be implemented.
On Mon, Oct 10, 2011 at 10:27:59AM -0400, MZMcBride wrote:
I found this e-mail very helpful and insightful. Thank you for writing it, Phoebe.
Idem. Phoebe++ :-)
I'm curious if there are any central brainstorming pages about an image filter, either on Meta-Wiki or mediawiki.org. If not, I'd be happy to start one.
Me 3! I'll definitely help. :-)
For example, if 90% of viewers in your country have hidden a particular image and you've set your personal threshold at 50%, an image might be automatically obscured. This isn't a perfect idea, but discussing and debating the merits of each idea might reveal a solution that's tenable.
There are well-known standard attacks against pretty much any current user-facing data-collection system [1]. :-/ But perhaps there are other ideas as well! :-)
sincerely, Kim Bruning [1] http://musicmachinery.com/2009/04/27/moot-wins-time-inc-loses/
Hi Ting,
one simple question: Is the Wikimedia Foundation going to enable the image filter on _all_ projects, disregarding local communities' consensus rejecting the image filter (e.g. the German Wikipedia)?
We are currently in a very unpleasant situation of uncertainty. Tensions in the community are extremely high (too high, if you ask me, but Wikimedians are emotional people), and speculations and rumors about what the WMF is going to do prevail. A clear statement would help our discussion process.
Regards, Tobias / User:Church of emacs
I was thinking about that too. So what? --Ebe123
On 11-10-09 10:43 AM, "church.of.emacs.ml" church.of.emacs.ml@googlemail.com wrote:
Hi Ting,
one simple question: Is the Wikimedia Foundation going to enable the image filter on _all_ projects, disregarding consensus by local communities of rejecting the image filter? (E.g. German Wikipedia)
We are currently in a very unpleasant situation of uncertainty. Tensions in the community are extremely high (too high, if you ask me, but Wikimedians are emotional people), speculations and rumors about what WMF is going to do prevail. A clear statement would help our discussion process.
Regards, Tobias / User:Church of emacs
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
Hello Tobias,
the text of the May resolution on this question is "... and that the feature be visible, clear and usable on all Wikimedia projects for both logged-in and logged-out readers", and at the current board meeting we decided not to amend the original resolution.
Greetings Ting
On 09.10.2011 15:43, church.of.emacs.ml wrote:
[...]
http://de.wikipedia.org/wiki/Gewalt
Anneke
On 09.10.2011 at 16:12, Ting Chen wrote:
[...]
Dear Anneke,
+1
And note the basic difference and the disagreement in the understanding and meaning of violence:
http://en.wikipedia.org/wiki/Violence
hubertl.
On 09.10.2011 16:35, Anneke Wolf wrote:
[...]
On 10 October 2011 11:17, Hubert hubert.laska@gmx.at wrote:
On 09.10.2011 16:35, Anneke Wolf wrote:
Dear Anneke, +1. And note the basic difference and the disagreement in the understanding and meaning of violence: http://en.wikipedia.org/wiki/Violence
I don't understand what point is being made here.
- d.
David, did you read the German article completely? Have you compared which parts of the concept of violence receive more attention, and which parts of the term do not occur in en:wp at all?
Gewalt is not necessarily Gewalt in the same form.
To put it simply: on en:WP, hitting someone over the head or shooting someone is the primary focus of the article (this is very simplifying, indeed). The German article deals to a far greater degree with the philosophical and sociological aspects of violence.
This difference alone makes it clear that a single definition of what violence is or may be, and of how it manifests itself in images, is not even possible.
Quite frankly, I do not want people who are socialized to a far greater degree in a culture of violence than other cultures to impose their concept of violence on tens of thousands of images through categorization.
Even though we are all Wikipedians, there are significant cultural differences, even within the German Wikipedia.
And violence is - contrary to religion and sexuality - just the smaller problem. h
On 10.10.2011 12:22, David Gerard wrote:
[...]
On 10 October 2011 18:58, Hubert hubert.laska@gmx.at wrote:
David, did you read the German article completely? Have you compared which parts of the concept of violence receive more attention, and which parts of the term do not occur in en:wp at all? Gewalt is not necessarily Gewalt in the same form. To put it simply: on en:WP, hitting someone over the head or shooting someone is the primary focus of the article (this is very simplifying, indeed). The German article deals to a far greater degree with the philosophical and sociological aspects of violence.
Ah, I only read German badly, via babelfish :-)
- d.
Hubert,
The fact is that the English word "violence" has a quite different etymology, and a much narrower meaning, than the German word "Gewalt", which historically also means "control", or even "administrative competence".
The scope of the English article is indeed appropriate to the English word "violence", because that word lacks several shades of meaning that the German word "Gewalt" has.
Andreas
From: Hubert hubert.laska@gmx.at, Sent: Monday, 10 October 2011, 18:58, Subject: Re: [Foundation-l] Letter to the community on Controversial Content
[...]
Dear Andreas,
This is what I wanted to express. But it is not only a quite different etymology; it is also a matter of gender-related positions in how the term is interpreted and applied.
But what do we care about minorities - as I always say - as long as they represent only women, children, homosexuals and cyclists.
When it comes to the needs of minorities, the majority always knows better what is good for them.
In our case it is a little different: it seems that an ivory-tower conference knows exactly how the community will solve the problem - what should be categorized as violence-, sexuality- and religion-related.
Meanwhile, I prefer the following solution:
Everyone who will not understand and perceive the world as it is should cancel his internet connection - along with his newspaper subscription, radio and television and, of course, any street advertising. And these individuals should keep their children away from public schools.
And I do not mean this in a deprecatory way. Maybe that would be a better world.
I just hope they will then throw out the Bible as well.
h.
On 13.10.2011 09:54, Hubert wrote:
Meanwhile, I prefer the following solution:
Everyone who will not understand and perceive the world as it is should cancel his internet connection - along with his newspaper subscription, radio and television and, of course, any street advertising. And these individuals should keep their children away from public schools.
Some do.
Regarding the filter, I always remember a story Asaf Bartov wrote in the German Wikipedia book: a young man, born into an ultra-orthodox Jewish movement named "Gur", was denied access to books and to any education beyond elementary school level. He got the chance to access the internet, found Wikipedia, and this encounter changed his life completely.
Two years later he is not only an active Wikipedian, but also a computer science student and a well-educated citizen. He experienced a shift in mindset. He found out what power knowledge has, and made use of it, because he dared to step into it.
I doubt that a Wikipedia containing a system to exclude content by individual taste or belief will still have this sheer breathtaking energy. The loss of this energy would be a pretty high price for not seeing a cucumber used in an unconventional manner in the wrong place.
Maybe we should consider promoting daring.
Regards, Denis
On Sun, Oct 9, 2011 at 7:42 PM, Ting Chen wing.philopp@gmx.de wrote:
Hello Tobias,
the text of the May resolution to this question is "... and that the feature be visible, clear and usable on all Wikimedia projects for both logged-in and logged-out readers", and at the current board meeting we decided not to amend the original resolution.
So nothing has changed since before the reaction and the referendum?
I really don't see the point of the board making statements in complete or partial ambiguity. If I read the last email correctly, the board acknowledged some of the things that were said, but nothing will change from the original resolution?
Regards Theo
On 9 October 2011 15:12, Ting Chen wing.philopp@gmx.de wrote:
the text of the May resolution to this question is "... and that the feature be visible, clear and usable on all Wikimedia projects for both logged-in and logged-out readers", and at the current board meeting we decided not to amend the original resolution.
So you do intend to force this on projects that don't want it? Do you really think that's going to work? If the WMF picks a fight with the community on something the community feels very strongly about (which this certainly seems to be), the WMF will lose horribly and the fall-out for the whole movement will be very bad indeed.
On 10/09/2011 04:56 PM, Thomas Dalton wrote:
If the WMF picks a fight with the community on something the community feels very strongly about (which this certainly seems to be), the WMF will lose horribly and the fall-out for the whole movement will be very bad indeed.
+1.
(And I say that, not being opposed to the image filter itself)
-- Tobias
On 9 October 2011 16:31, church.of.emacs.ml church.of.emacs.ml@googlemail.com wrote:
On 10/09/2011 04:56 PM, Thomas Dalton wrote:
If the WMF picks a fight with the community on something the community feels very strongly about (which this certainly seems to be), the WMF will lose horribly and the fall-out for the whole movement will be very bad indeed.
+1.
(And I say that, not being opposed to the image filter itself)
Indeed. I'm not against the filter. In fact, I'm very much in favour of it. I am, however, very much against civil war.
On 9 October 2011 08:50, Thomas Dalton thomas.dalton@gmail.com wrote:
Indeed. I'm not against the filter. In fact, I'm very much in favour of it. I am, however, very much against civil war.
Nobody wants civil war.
Please read Ting's note carefully. The Board is asking me to work with the community to develop a solution that meets the original requirements as laid out in its resolution. It is asking me to do something. But it is not asking me to do the specific thing that has been discussed over the past several months, and which the Germans voted against.
The Board is hoping there is a solution that will 1) enable readers to easily hide images they don't want to see, as laid out in the Board's resolution [1], while 2) being generally acceptable to editors. Maybe this will not be possible, but it's the goal. The Board definitely does not want a war with the community, and it does not want people to fork or leave the projects. The goal is a solution that's acceptable for everyone.
Thanks, Sue
[1] http://wikimediafoundation.org/wiki/Resolution:Controversial_content --
Sue Gardner Executive Director Wikimedia Foundation
415 839 6885 office 415 816 9967 cell
Imagine a world in which every single human being can freely share in the sum of all knowledge. Help us make it a reality!
On 9 October 2011 17:19, Sue Gardner sgardner@wikimedia.org wrote:
Nobody wants civil war.
I'm sure they don't actively want one, but it seems the board do consider one an acceptable cost.
Please read Ting's note carefully. The Board is asking me to work with the community to develop a solution that meets the original requirements as laid out in its resolution. It is asking me to do something. But it is not asking me to do the specific thing that has been discussed over the past several months, and which the Germans voted against.
The Board is hoping there is a solution that will 1) enable readers to easily hide images they don't want to see, as laid out in the Board's resolution [1], while 2) being generally acceptable to editors. Maybe this will not be possible, but it's the goal. The Board definitely does not want a war with the community, and it does not want people to fork or leave the projects. The goal is a solution that's acceptable for everyone.
But what happens in the event that such a goal cannot be achieved? Ting has made it very clear that they intend some kind of image filter to be implemented on all projects, regardless of community wishes. I hope the community will come around and accept some kind of filter, but if they don't then the WMF needs to accept that it has failed, do so gracefully, and not try to start a war that it cannot possibly win and will cause a great deal of damage.
I think that if the WMF made it clear that they will not implement any kind of image filter on a project if there is overwhelming opposition to it, the relevant communities would be much more willing to engage in constructive dialogue.
On 9 October 2011 09:31, Thomas Dalton thomas.dalton@gmail.com wrote:
On 9 October 2011 17:19, Sue Gardner sgardner@wikimedia.org wrote:
Nobody wants civil war.
I'm sure they don't actively want one, but it seems the board do consider one an acceptable cost.
It may seem that way, but it's not actually true. The Board's conversation yesterday was thoughtful and serious: the Board members take very seriously the concerns expressed by editors, and they don't want to alienate them. We discussed Achim Raschka specifically, for example: he's a 70K-edit editor on the German Wikipedia with, I think, 100+ good and featured articles. The last thing the Board wants is for people like Achim to leave the projects.
But what happens in the event that such a goal cannot be achieved? Ting has made it very clear that they intend some kind of image filter to be implemented on all projects, regardless of community wishes. I hope the community will come around and accept some kind of filter, but if they don't then the WMF needs to accept that it has failed, do so gracefully, and not try to start a war that it cannot possibly win and will cause a great deal of damage.
I think that if the WMF made it clear that they will not implement any kind of image filter on a project if there is overwhelming opposition to it, the relevant communities would be much more willing to engage in constructive dialogue.
Yes, I hear you. The Board didn't specifically discuss yesterday what to do if there is no acceptable solution. So I don't think they can make a statement like this: it hasn't been discussed. I hear what you're saying here, but my hope is that even in the absence of such a statement, people will be willing to join with the Wikimedia Foundation to engage seriously on the topic and figure out a solution that works.
I need to run -- I've got a meeting in the office with Ting, JB and Kat. But thank you, Thomas, for your comments here -- I think they're constructive. I would love for people on this list to help others understand what's happening here. The Wikimedia Foundation does not want a war: it is hoping for a solution here that is acceptable for everyone. If the folks here can help editors understand that, that would be a service to everyone, I think.
Thanks, Sue
-- Sue Gardner Executive Director Wikimedia Foundation
415 839 6885 office 415 816 9967 cell
Imagine a world in which every single human being can freely share in the sum of all knowledge. Help us make it a reality!
On 9 October 2011 17:46, Sue Gardner sgardner@wikimedia.org wrote:
It may seem that way, but it's not actually true. The Board's conversation yesterday was thoughtful and serious: the Board members take very seriously the concerns expressed by editors, and they don't want to alienate them. We discussed Achim Raschka specifically, for example: he's a 70K-edit editor on the German Wikipedia with, I think, 100+ good and featured articles. The last thing the Board wants is for people like Achim to leave the projects.
If it's the last thing they want to happen, that would mean the board would rather abandon the image filter than have people like Achim leave. Is that the case? The rest of your email says they haven't actually discussed that scenario.
Yes, I hear you. The Board didn't specifically discuss yesterday what to do if there is no acceptable solution. So I don't think they can make a statement like this: it hasn't been discussed. I hear what you're saying here, but my hope is that even in the absence of such a statement, people will be willing to join with the Wikimedia Foundation to engage seriously on the topic and figure out a solution that works.
That would be a good discussion for the board to have, and sooner rather than later.
I would also hope that people will engage in constructive discussions with the WMF, but I can understand why they would be reluctant to when all the evidence so far is that the WMF doesn't really intend to listen.
We had a "referendum" that didn't really ask any useful questions and, unsurprisingly, the result we now have from it is that the WMF intends to carry on exactly as it had intended to before the "referendum" was held (you always said you intended to consult with the community on the details of how it would work, so that isn't responding to the community's views).
The community doesn't trust the WMF at the moment. A firm commitment not to go against an overwhelming community opinion would go a long way towards fixing that.
On 10/09/2011 07:20 PM, Thomas Dalton wrote:
The community doesn't trust the WMF at the moment. A firm commitment not to go against an overwhelming community opinion would go a long way towards fixing that.
That's exactly the situation. Right now, we're in a deadlock: WMF is waiting for the community to engage in a constructive dialog, forming new ideas and consensus. The community members* are waiting for a signal of trust by the WMF, a real recognition of their opposition, a clear statement that WMF and the community are on a par in this discussion and neither will do anything deemed unacceptable by the other, before they will rethink their own position.
You know, it's hard to lead a constructive discussion on controversial content when half of the people are thinking about forking. Believe me, I've tried.**
--Tobias
* that is, most of the opposing community members ** http://commons.wikimedia.org/wiki/File:2011-09-11_Podiumsdiskussion_Bildfilt...
Since no one has explicitly come out and said exactly what the issue is here, I'll ask:
*What exactly is harmful about an opt-in filter?* If it's opt-in, then you have the choice to not even enable it if you so choose. You don't have to use it; it'd just be an option in the preferences page or maybe even a link on the margin similar to the WikiBooks link I never use. Can another option or link you never click really hurt the world, or even an individual for that matter?
Also, *an idea for how this could be implemented*:
Anyway, back in 2008 I attempted to popularize such an "add-on" style filter; the means for operating it were very simple. Here's how it worked:
The client installs Ad-Block Plus in their Firefox browser. They subscribe to any of a set of Wikimedia-image-targeted filters. The beauty of Ad-Block Plus is you can turn it off as desired, block an image yourself, specify not to block something blocked by your subscription filter, and receive updates on the filter.
I dropped the project when I realized there were no free places to host a text file that had links to hundreds of nude images-- my accounts kept getting banned! :-D
This structure would allow for several different types of filters; a disadvantage, however, would be that each filter would need to be maintained exclusively by a different individual (for instance, the person who maintains a human nudity filter and the person who maintains a human torment filter would need to be using different computers), though any individual could subscribe to as many filters as he or she chooses. Another disadvantage is that each filter can be filled only by a single user, so suggestions for the filter would need to be sent to them.
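To give a flavour of it, a subscription file along those lines might have looked roughly like the sketch below (Adblock Plus 2.0 filter syntax; the list title and file names are invented for illustration, not taken from the actual 2008 list). Everything here runs client-side in the subscriber's own browser, entirely outside Wikimedia's servers:

    [Adblock Plus 2.0]
    ! Title: Hypothetical Wikimedia nudity filter (example only)
    ! Expires: 7 days
    ! Hide two specific Commons files wherever they are embedded,
    ! including their thumbnails:
    ||upload.wikimedia.org/wikipedia/commons/*/Example_photo_1.jpg
    ||upload.wikimedia.org/wikipedia/commons/thumb/*/Example_photo_1.jpg*
    ||upload.wikimedia.org/wikipedia/commons/*/Example_photo_2.jpg
    ! A subscriber's own exception rule overrides the list:
    @@||upload.wikimedia.org/wikipedia/commons/*/Example_photo_2.jpg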
Bob
Since no one has explicitly come out and said exactly what the issue is here, I'll ask:
*What exactly is harmful about an opt-in filter?* If it's opt-in, then you have the choice to not even enable it if you so choose. You don't have to use it; it'd just be an option in the preferences page or maybe even a link on the margin similar to the WikiBooks link I never use. Can another option or link you never click really hurt the world, or even an individual for that matter?
You don't get to grind someone's nose into your shit.
Fred
Well we can't have that... lol. Bob
Objection to the WMF implementing an image filter would in fact be removed by such a project--if, like AdBlock, it were run outside and independently of the WMF. If I believe in individual freedom, I must believe in the ability of individuals to choose in what manner they access information, even or especially when it is in a different manner than I would choose. In fact, neither we nor the board could prevent it without changing our license to forbid such a derivative use. I am aware there is some discussion of trying to adjust our category system in order to deliberately frustrate such a use, but I would regard that as showing an equal lack of devotion to intellectual freedom as would be adjusting our categories to facilitate such a use.
I believe there are some members of the board who positively approve of a filter, rather than merely regard it as a lesser evil. I call upon them to form an organization to accomplish what they think is needed; I can think of many organizations in the US that would gladly fund them.
On Sun, Oct 9, 2011 at 10:12 PM, Bob the Wikipedian bobthewikipedian@gmail.com wrote:
Since no one has explicitly come out and said exactly what the issue is here, I'll ask:
*What exactly is harmful about an opt-in filter?* If it's opt-in, then you have the choice to not even enable it if you so choose. You don't have to use it; it'd just be an option in the preferences page or maybe even a link on the margin similar to the WikiBooks link I never use. Can another option or link you never click really hurt the world, or even an individual for that matter?
Also, *an idea for how this could be implemented*:
The big one is that there just isn't one, and all the people on the board and the executives are trying their hardest to persuade the community that they are acting in good faith, while being *viciously* in bad faith with respect to the community. Backing off and licking your wounds would be a nice option. There are worse ones.
From: Sue Gardner sgardner@wikimedia.org
Yes, I hear you. The Board didn't specifically discuss yesterday what to do if there is no acceptable solution. So I don't think they can make a statement like this: it hasn't been discussed. I hear what you're saying here, but my hope is that even in the absence of such a statement, people will be willing to join with the Wikimedia Foundation to engage seriously on the topic and figure out a solution that works.
Quite. We have a responsibility to the thousands of people who voiced the opinion that it was important for Wikimedia to offer this function to readers, as well as a responsibility to those editors who are unhappy with the proposals so far put forward.
Two valid objections that have been brought forward to date, and have stuck in my mind, are (1) the use to which any categories or tagging systems could be put by censors wishing to block *all* access to media files within the scope of the filter, and (2) the amount of work involved.
The first objection is something the Harris study actually addressed in its recommendations: "10. That, by and large, Wikimedians make the decisions about what is filterable or not on Wikimedia sites, and consequently, that tagging regimes that would allow third-parties to filter Wikimedia content be restricted in their use."
I suggest we could profitably give that matter some thought, and try to think of technical solutions that would address this specific concern.
The second objection, the amount of work involved, could also benefit from some thought. It should be possible to make the work easier by creating gadgets that automatically present likely, and as yet unassessed, candidates to an editor for assessment.
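To illustrate, the core query of such a gadget would not be much code. A rough sketch follows (TypeScript against the public MediaWiki web API; the candidate and assessment category names are placeholders I have made up, not a proposal for an actual scheme):

    // Sketch: list files in a "likely candidate" category that do not yet
    // appear in a (hypothetical) assessment category, so an editor can
    // review them one by one. Category names are placeholders.
    const API = "https://commons.wikimedia.org/w/api.php";

    async function apiGet(params: Record<string, string>): Promise<any> {
      const query = new URLSearchParams({ ...params, format: "json", origin: "*" });
      return (await fetch(`${API}?${query}`)).json();
    }

    // First batch of files in the candidate category.
    async function candidateFiles(category: string): Promise<string[]> {
      const data = await apiGet({
        action: "query",
        list: "categorymembers",
        cmtitle: `Category:${category}`,
        cmtype: "file",
        cmlimit: "50",
      });
      return data.query.categorymembers.map((m: { title: string }) => m.title);
    }

    // Which of those files are already in the assessment category?
    async function alreadyAssessed(files: string[], assessedCat: string): Promise<Set<string>> {
      const data = await apiGet({
        action: "query",
        prop: "categories",
        titles: files.join("|"),
        clcategories: `Category:${assessedCat}`,
      });
      const done = new Set<string>();
      for (const page of Object.values(data.query.pages) as any[]) {
        if (page.categories) done.add(page.title); // carries the assessment category
      }
      return done;
    }

    async function reviewQueue(): Promise<string[]> {
      const candidates = await candidateFiles("Nudity"); // placeholder
      const done = await alreadyAssessed(candidates, "Filter-assessed files"); // placeholder
      return candidates.filter((f) => !done.has(f));
    }

    reviewQueue().then((queue) => console.log(queue.slice(0, 10)));

The expensive part, of course, remains the human judgment applied to each file; a gadget like this only removes the drudgery of finding the queue.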
I still feel that *readers* – i.e. the wider public – should be asked as well whether they would like to see a function like that implemented, or not. If it turns out that the population of readers differs in its views from the population of editors in statistically significant ways, that could lead to an interesting discussion. In general, I would like to see more reader surveys – giving readers an opportunity to tell us what they like and dislike about the projects we provide, so we have some feedback informing our internal discussions.
Andreas
* Sue Gardner wrote:
Please read Ting's note carefully. The Board is asking me to work with the community to develop a solution that meets the original requirements as laid out in its resolution. It is asking me to do something. But it is not asking me to do the specific thing that has been discussed over the past several months, and which the Germans voted against.
There is nothing useful to be learned from the Letter to the Community. What we can assume is that someone on the Board raised the issue of people complaining about images, someone suggested that if there are images people don't like, they should have the option to have them hidden, and then they agreed that someone should figure that out. Board members do not think they have to contribute to the solution, and they don't think the community should have any say in whether the feature is actually wanted by the community. Whoever is tasked with figuring this out isn't actually taking useful steps towards solving the problem.
Instead we are burning goodwill by arguing the finer points of what is, exactly, censorship, how there are provocateurs in our midst, how important, relative to not, it is that users have this feature whether they are logged in or not, and any number of other things. This is not an issue where you can hope to get everyone on board by appealing to people's empathy and understanding; people do not know whether they are to board the Titanic or the QE2, so you get a lot of talk about how the ship will sink if you build it incorrectly or steer it badly.
It would be easy for the Board to resolve that at this point they expect whoever they tasked with it to come up with a technical proposal in coordination with the community, which might then be implemented on projects that volunteer to test it, with an evaluation, also in coordination with the community, before any further steps are taken, for instance. But the Chair has chosen instead to inform the community that it's far too late to argue about this feature, and there is no reason for the Board to do as little as hint at the possibility that this feature will not be imposed on projects by force.
We can read the Letter to the Community carefully if you want. I note, e.g., "deliberately offending or provoking them is not respectful, and is not okay". This insinuates that a notable group of people is taking the opposite position, which is not true. That part starts "We believe we need, and should want, to treat readers with respect. Their opinions and preferences are as legitimate as our own". The list of opinions and preferences humans have held throughout history that today "we" would find abhorrent is very, very long. "The majority of editors who responded to the referendum are not opposed to the feature." I do not see how one can have followed the discussion without running across the fact that this statement is regarded as an invalid inference from the poll.
Like I said, it does not really matter what he wrote; the people who've expressed concern about the filter do not care about random claims of how the Board is listening and hearing and paying attention and wants us to work with "you", despite the Board being openly hostile towards the community, whether it means to be or is just exceptionally bad at dealing with the community in a manner that is well received. What they want is for this issue to go away, whether by abandoning the project, or a brilliant idea that nobody has thought of so far, or whatever.
Clearly an image filter can be developed and maintained. Having one has costs and benefits. It may well be that no filter can be developed such that the benefits outweigh the costs. Without knowing that, it is not reasonable to command implementation of the filter. If this had been framed as some explorative feasibility and requirements-gathering study with an open outcome and proposals sought, we would have a different kind of discussion.
The Board is hoping there is a solution that will 1) enable readers to easily hide images they don't want to see, as laid out in the Board's resolution [1], while 2) being generally acceptable to editors. Maybe this will not be possible, but it's the goal. The Board definitely does not want a war with the community, and it does not want people to fork or leave the projects. The goal is a solution that's acceptable for everyone.
Well, then the Board should not have commanded implementation before an idea of what to implement had been developed, the development should not have happened way out of reach of the community, there should not have been a referendum without a proposal that enjoys some meaningful level of community support, the referendum should have asked more meaningful questions, and the whole thing should have been very clearly branded as an experiment, participation in which would be genuinely optional. It will be necessary to move a few steps back to re-synchronize with the rest of the community and move together from there on. There would need to be a forum and process to discuss and agree on questions like whether it is sufficient if there is a feature to hide, blur, or otherwise obscure all images, and if not, what kind of process should be used to decide how to classify images, and all manner of questions like these.
If the people who wish to participate in this come up with something they anticipate overwhelming support for, let them double-check with the larger community informally, for example by calling for comments in the Signpost or whatever may be suitable, ideally allowing people to test this in a test wiki with reasonable sample images, then ask the community if they want this feature on their wikis and enable it as appropriate. This is not rocket science: if you want community support, make it easy for the community to be informed about and participate in all steps in the process, make decisions only after people could voice their well-informed opinions, and improve documentation based on what you learn in discussions.
Gather more data; conduct a representative poll among German-speaking people, ask them how often they use Wikipedia and how often they encounter images in some general category that they would rather not see. If 50% of regular users find such images on 1 of 10 articles they see, German-speaking editors would most likely be surprised and re-evaluate their position - not necessarily with respect to the filter, but they would pay more attention to their image selections; perhaps there are areas where image selection is a problem because we lack better images, and then new images could be made, and so on. Conduct a global poll; demonstrate that this is an issue for so and so many people. The "referendum" is of no use here since, when the Board decides we must have a filter, people assume it has carefully studied the need and the feasibility, and are all the more likely to agree than if it's a matter of "some people complain, someone suggested maybe some sort of filter would help, should we spend donors' money on this problem rather than on better tools to combat vandalism?". Having studies would allow you to cite your sources when making claims about whether there is a problem; a familiar concept. As it is, we do not even have the full results of the "referendum" ...
On 10/9/11 11:57 PM, Bjoern Hoehrmann wrote:
There is nothing useful to be learned from the Letter to the Community.
The problem is that what is usually called "the Board" on this list is not a single entity. It is actually a group of persons.
And right now, the situation is that there is no real agreement within "the Board" about what to exactly do or not do.
Accordingly, it is probably tough for "the Board" as an entity to issue statements or letters or recommendations without bumping into the fact that they do not have a single common position.
Consequently, there is nothing really useful in any statements they can issue.
Florence
I have been following the discussion without ever giving my own opinion, and my impression is that we are going nowhere.
Imagine we hold another poll, properly prepared, and the poll shows, say, that 65% support the filter and 35% oppose. So what? Concluding then that the community accepts the filter would not be a good way of looking at things. We should not decide such things by majority vote, since majority vote leaves underrepresented groups out. We should be looking for a consensus solution. The poll could still be an important indication of what solutions are clearly outside the consensus, but we already have many indications on this point.
I think what should come next is that one of the filter proponents comes up with a suggestion for a workable scheme. (I guess the opponents of the filter would not be so much interested.) We have already had a number of schemes proposed on this list over the last couple of weeks; what should happen now is that somebody summarizes the main suggestions and the criticism of those suggestions. This will be a good starting point for thinking in further directions and seeing what is doable and what is acceptable.
I believe continuing to discuss whether the board should have used different wording in the statement, or whether the poll should have gone differently, is not really constructive. In the end, if we come to the conclusion that any kind of filter is incompatible with the Wikimedia movement's mission - let it be like this. Then we can discuss pro-filter and anti-filter forks. But my impression is that we are not yet at this point.
Cheers Yaroslav
On Mon, Oct 10, 2011 at 01:44:09PM +0400, Yaroslav M. Blanter wrote:
I have been following the discussion without ever giving my own opinion, and my impression is that we are going nowhere.
I think what should come next is that one of the filter proponents comes up with a suggestion for a workable scheme. (I guess the opponents of the filter would not be so much interested.)
I think that having the image blurring system, combined with an option to unblur, would get us very far towards the stated board directive, and I don't think many in the community would object, and we could reach consensus fairly quickly.
While we're on the business of perhaps putting these options in a right-click menu, we could also make other image tools available (more explicitly), like adding the option to add image-maps, and making explicit the options to view license details and full-size versions of the image.
If there are but few additions, we can go straight to bugzilla. If people feel that adjustment is needed, we can take this to meta first, before moving to bugzilla.
sincerely, Kim Bruning
On 10 October 2011 18:37, Kim Bruning kim@bruning.xs4all.nl wrote:
I think that having the image blurring system, combined with an option to unblur, would get us very far towards the stated board directive, and I don't think many in the community would object, and we could reach consensus fairly quickly.
Not sure the blurring system would do the job for a workplace. At a distance, the blurred penis still looks exactly like a penis ...
- d.
On Mon, Oct 10, 2011 at 07:49:04PM +0100, David Gerard wrote:
Not sure the blurring system would do the job for a workplace. At a distance, the blurred penis still looks exactly like a penis ...
Fair dinkum. We could have a "blur or black-out" option for different occasions. For further discussion, I think MZMcBride was suggesting centralising discussion at mediawiki.org or meta. :-)
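Roughly, the client side of either variant is only a few lines. Here is a sketch (assuming a gadget that already knows which images to touch - that selection being the actual open question; for illustration, every image in the content area is affected):

    // Sketch: obscure content images until the reader clicks to reveal them.
    type Mode = "blur" | "blackout";

    function obscureImages(mode: Mode): void {
      document.querySelectorAll<HTMLImageElement>("#mw-content-text img").forEach((img) => {
        // blur keeps a recognizable shape at a distance; black-out does not
        img.style.filter = mode === "blur" ? "blur(16px)" : "brightness(0)";
        img.style.cursor = "pointer";
        img.title = "Click to show this image";
        img.addEventListener("click", () => {
          img.style.filter = ""; // reveal on demand
          img.title = "";
        }, { once: true });
      });
    }

    obscureImages("blackout"); // e.g. driven by a per-reader preference

The black-out variant also answers the at-a-distance objection above: with brightness turned to zero, there is no silhouette left to recognize.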
sincerely, Kim Bruning
From: David Gerard dgerard@gmail.com
Not sure the blurring system would do the job for a workplace. At a distance, the blurred penis still looks exactly like a penis ...
- d.
Indeed.
* David Gerard wrote:
Not sure the blurring system would do the job for a workplace. At a distance, the blurred penis still looks exactly like a penis ...
There are many alternatives to a blur effect. A much simpler effect would be a Small Images option that shrinks all images to icon size. The information you get is about the same as with a blur effect, but the images would be even easier to ignore and couldn't be recognized at a distance. There would be problems with maps, as the point overlay depends on the size, but that should not be that hard to fix.
It would also match what I do when I am unsure whether I am about to load some web page which I am not sure I want to see the images on, I tell my browser to zoom out, load the page, and then decide whether it's okay to zoom in, or if I should go View -> Images -> No Images, or close the page or whatever.
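As a sketch (with image selection again left open, and the icon size picked arbitrarily), the client side could be as small as:

    // Sketch: a "Small Images" mode that shrinks content images to icon
    // size, with a click to restore the original dimensions.
    function smallImages(iconSize: number = 24): void {
      document.querySelectorAll<HTMLImageElement>("#mw-content-text img").forEach((img) => {
        const originalWidth = img.width; // remember for restoring
        img.style.width = `${iconSize}px`;
        img.style.height = "auto";       // keep the aspect ratio
        img.style.cursor = "zoom-in";
        img.addEventListener("click", () => {
          img.style.width = `${originalWidth}px`;
          img.style.height = "";
          img.style.cursor = "";
        }, { once: true });
      });
    }

    smallImages();

Map overlays would need the fix mentioned above, since their markers are positioned against the full-size image.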
It's interesting to note that advocates of discriminatory schemes do not discuss, as far as I am aware, how to communicate the tagging of some images as somehow controversial to users who do not filter. I'd wonder how they feel about adding some notice like "Seeing this image makes some people feel bad" to the image caption for all images that would be filtered by one of the discriminatory filter options.
Zooming out may work for individuals like you, but for folks like me it's actually a distraction, and I try to see what the tiny picture is, staring at it until it makes sense. Yay for ADHD... :-\
Bob
* Bob the Wikipedian wrote:
Zooming out may work for individuals like you, but for folks like me, it's actually a distraction, and I try to see what the tiny picture is, staring at it until it makes sense. Yay for ADHD....:-\
Zooming out is something that works for me pretty much everywhere, with no changes needed to any web site. I did not suggest telling Wikipedia users that's their only option and putting this issue to rest. Rather, I suggested that shrinking all images would be an option, so that users who regularly encounter images on Wikipedia they have issues with would be less affected by them. You cannot have a personal image filter without expending considerable effort on selecting the images you personally do or do not want to see; it is this very effort that makes it personal.
On Wed, Oct 12, 2011 at 5:08 AM, Bjoern Hoehrmann derhoermi@gmx.net wrote:
You cannot have a personal image filter without expending considerable effort on selecting the images you personally do or do not want to see; it is this very effort that makes it personal.
Not really. China is doing just fine with Big Mamas supplied by countries that pretend to be critical about the uses of same. Any system that pretends to rely on a consistent mechanism can be taken over and abused. If it is completely independent of choice, it is not culturally neutral. If it pretends to be "personal", it is inherently "middle-man reliant" and just won't prevent abuse.
Sigh. There appear to be people here arguing about abstruse theoretical issues and ignoring the big white elephant of it having to work in the real world.
Ideally, this would be as transparent as possible, so that should not be an issue if all goes well. Bob
On 10/11/2011 8:17 PM, Bjoern Hoehrmann wrote:
I'd wonder how they feel about adding some notice like "Seeing this image makes some people feel bad" to the image caption for all images that would be filtered by one of the discriminatory filter options.
On Mon, 10 Oct 2011 19:37:05 +0200, Kim Bruning kim@bruning.xs4all.nl wrote:
If there are but few additions, we can go straight to bugzilla. If people feel that adjustment is needed, we can take this to meta first, before moving to bugzilla.
sincerely, Kim Bruning
We definitely must have a page on Meta discussing it. There have been some objections raised already, and there have been other solutions suggested earlier, with other objections raised. Bringing it straight to Bugzilla makes no sense at this point.
Cheers Yaroslav
On Mon, Oct 10, 2011 at 11:18:23PM +0400, Yaroslav M. Blanter wrote:
We definitely must have a page on Meta discussing it. There have been some objections raised already, and there have been other solutions suggested earlier, with other objections raised. Bringing it straight to Bugzilla makes no sense at this point.
Roger that,
I'll (re)join the community discussion. Which page(s) are being used atm?
sincerely, Kim Bruning
On Mon, 10 Oct 2011 20:32:57 +0200, Kim Bruning kim@bruning.xs4all.nl wrote:
I'll (re)join the community discussion. Which page(s) are being used atm?
sincerely, Kim Bruning
None I know of.
Cheers Yaroslav
On Mon, Oct 10, 2011 at 11:39:43PM +0400, Yaroslav M. Blanter wrote:
On Mon, 10 Oct 2011 20:32:57 +0200, Kim Bruning kim@bruning.xs4all.nl wrote:
I'll (re)join the community discussion. Which page(s) are being used atm?
None I know of.
That's ok. I'll leave the initiative to MzMcbride && join there.
sincerely, Kim Bruning
On 10 October 2011 10:19, Florence Devouard anthere9@yahoo.com wrote:
On 10/9/11 11:57 PM, Bjoern Hoehrmann wrote:
- Sue Gardner wrote:
Please read Ting's note carefully. The Board is asking me to work with the community to develop a solution that meets the original requirements as laid out in its resolution. It is asking me to do something. But it is not asking me to do the specific thing that has been discussed over the past several months, and which the Germans voted against.
There is nothing useful to be learned from the Letter to the Community.
The problem is that what is usually called "the Board" on this list is not a single entity. It is actually a group of persons.
And right now, the situation is that there is no real agreement within "the Board" about what to exactly do or not do.
Accordingly, it is probably tough for "the Board" as an entity to issue statements or letters or recommendations without bumping into the fact that they do not have a single common position.
Consequently, there is nothing really useful in any statements they can issue.
That may well be the case but since it was the WMF board that decided we should have this feature, they need to come to a clear decision on how they want to proceed. If they can't find a solution that satisfies all of them and the decision has to be made by a vote with a slim majority, then so be it.
If you are right that the board is split on this (and I expect you are), then what seems to be happening is that they can't make a decision so they are telling the staff to make it for them. That is really not the way a board of trustees should work.
On Mon, Oct 10, 2011 at 4:48 PM, Thomas Dalton thomas.dalton@gmail.com wrote:
If you are right that the board is split on this (and I expect you are), then what seems to be happening is that they can't make a decision so they are telling the staff to make it for them. That is really not the way a board of trustees should work.
From my experience, this is not how we work.
There is no question of the staff making decisions for the Board, because we cannot do so!
Best Bishakha
On Tue, Oct 11, 2011 at 8:22 AM, Bishakha Datta bishakhadatta@gmail.com wrote:
On Mon, Oct 10, 2011 at 4:48 PM, Thomas Dalton thomas.dalton@gmail.com wrote:
If you are right that the board is split on this (and I expect you are), then what seems to be happening is that they can't make a decision so they are telling the staff to make it for them. That is really not the way a board of trustees should work.
From my experience, this is not how we work.
There is no question of the staff making decisions for the Board, because we cannot do so!
Best Bishakha
I don't want to single out *anything*. However, in all fairness, much of the acrimony about how WMF is doing "business as usual" could have been avoided with even just that little bit of transparency. It is rather grotesque for people on the board of trustees to complain that folks in the community aren't aware of their real views when they don't air those views in public.
On Mon, Oct 10, 2011 at 2:49 PM, Florence Devouard anthere9@yahoo.com wrote:
The problem is that what is usually called "the Board" on this list is not a single entity. It is actually a group of persons.
And right now, the situation is that there is no real agreement within "the Board" about what to exactly do or not do.
While I totally agree that each of us, as an individual board member, has our own individual take on this issue (as we do on many other issues), that does not mean we are incapable of making a collective decision on this as a Board.
I think Ting, Phoebe and Sue have accurately summarized this in their emails to this list.
Cheers Bishakha
On Sun, Oct 09, 2011 at 09:19:40AM -0700, Sue Gardner wrote:
The Board is hoping there is a solution that will 1) enable readers to easily hide images they don't want to see, as laid out in the Board's resolution [1], while 2) being generally acceptable to editors. Maybe this will not be possible, but it's the goal.
Perfect. That's exactly what I was hoping for.
It's a deal! :-)
sincerely, Kim Bruning
I'm all for it, too.
Bob
On 10/9/2011 6:31 PM, Kim Bruning wrote:
On Sun, Oct 09, 2011 at 09:19:40AM -0700, Sue Gardner wrote:
The Board is hoping there is a solution that will 1) enable readers to easily hide images they don't want to see, as laid out in the Board's resolution [1], while 2) being generally acceptable to editors. Maybe this will not be possible, but it's the goal.
Perfect. That's exactly what I was hoping for.
It's a deal! :-)
sincerely, Kim Bruning
On Mon, Oct 10, 2011 at 2:31 AM, Kim Bruning kim@bruning.xs4all.nl wrote:
On Sun, Oct 09, 2011 at 09:19:40AM -0700, Sue Gardner wrote:
The Board is hoping there is a solution that will 1) enable readers to easily hide images they don't want to see, as laid out in the Board's resolution [1], while 2) being generally acceptable to editors. Maybe this will not be possible, but it's the goal.
Perfect. That's exactly what I was hoping for.
It's a deal! :-)
sincerely, Kim Bruning
And while they are in such good form, WMF might try out their skills at a compass-and-straightedge construction of a circle with precisely the area of a square...
Sue Gardner wrote:
The Board is hoping there is a solution that will 1) enable readers to easily hide images they don't want to see, as laid out in the Board's resolution [1], while 2) being generally acceptable to editors. Maybe this will not be possible, but it's the goal.
As I've noted in other threads, Board member Samuel Klein has publicly expressed support for the type of implementation discussed here:
http://meta.wikimedia.org/wiki/Talk:Image_filter_referendum/en/Categories#ge... or http://goo.gl/t6ly5
Given the absence of elements widely cited as problematic, I believe that something along these lines would be both feasible and "generally acceptable to editors."
David Levy
On Mon, Oct 10, 2011 at 6:13 AM, David Levy lifeisunfair@gmail.com wrote:
Sue Gardner wrote:
The Board is hoping there is a solution that will 1) enable readers to easily hide images they don't want to see, as laid out in the Board's resolution [1], while 2) being generally acceptable to editors. Maybe this will not be possible, but it's the goal.
As I've noted in other threads, Board member Samuel Klein has publicly expressed support for the type of implementation discussed here:
http://meta.wikimedia.org/wiki/Talk:Image_filter_referendum/en/Categories#ge... or http://goo.gl/t6ly5
Given the absence of elements widely cited as problematic, I believe that something along these lines would be both feasible and "generally acceptable to editors."
David Levy
Given comments like this, it seems the contingent in support of filters is utterly and completely delusional. That proposal mitigates none of the valid objections about enabling other forces to simply take what we would be foolish enough to supply, and to abuse the system to their heart's delight. Please come up with something more realistic.
Jussi-Ville Heiskanen wrote:
Given comments like this, it seems the contingent in support of filters is utterly and completely delusional. That proposal mitigates none of the valid objections about enabling other forces to simply take what we would be foolish enough to supply, and to abuse the system to their heart's delight. Please come up with something more realistic.
Please elaborate (ideally without hurling insults).
David Levy
On Mon, Oct 10, 2011 at 2:35 PM, David Levy lifeisunfair@gmail.com wrote:
Jussi-Ville Heiskanen wrote:
Given comments like this, it seems the contingent in support of filters is utterly and completely delusional. That proposal mitigates none of the valid objections about enabling other forces to simply take what we would be foolish enough to supply, and to abuse the system to their heart's delight. Please come up with something more realistic.
Please elaborate (ideally without hurling insults).
Gladly. If you sense a little frustration on my part, it is purely because most of us have been round this track more than a few times... Any (and I stress *any*) tagging system is very nicely vulnerable to being hijacked by downstream users. So from the perspective of not helping censorship by our own actions, it is a strict no-go. I am being succinct and to the point here. The fact that some people have been offered this quite clear explanation and still keep acting as if they had not even heard it... without hurling any insults, their behaviour does make some of us frustrated.
On 10 October 2011 16:47, Jussi-Ville Heiskanen cimonavaro@gmail.com wrote:
On Mon, Oct 10, 2011 at 2:35 PM, David Levy lifeisunfair@gmail.com wrote:
Jussi-Ville Heiskanen wrote:
Given comments like this, it seems the contingent in support of filters is utterly and completely delusional. That proposal mitigates none of the valid objections about enabling other forces to simply take what we would be foolish enough to supply, and to abuse the system to their heart's delight. Please come up with something more realistic.
Please elaborate (ideally without hurling insults).
Gladly. If you sense a little frustration on my part, it is purely because most of us have been round this track more than a few times... Any (and I stress *any*) tagging system is very nicely vulnerable to being hijacked by downstream users. So from the perspective of not helping censorship by our own actions, it is a strict no-go. I am being succinct and to the point here. The fact that some people have been offered this quite clear explanation and still keep acting as if they had not even heard it... without hurling any insults, their behaviour does make some of us frustrated.
So does the current categorization system lend itself to being hijacked by downstream users?
Given the number of people who insist that any categorization system is vulnerable, I'd like to hear the reasons why the current system, which is obviously necessary in order for people to find types of images, does not have the same effect. I'm not trying to be provocative here, but I am rather concerned that this does not seem to have been discussed.
Risker/Anne
On Mon, Oct 10, 2011 at 04:52:48PM -0400, Risker wrote:
Given the number of people who insist that any categorization system is vulnerable, I'd like to hear the reasons why the current system, which is obviously necessary in order for people to find types of images, does not have the same effect. I'm not trying to be provocative here, but I am rather concerned that this does not seem to have been discussed.
Been discussed to death, raised from the dead, chopped up with a chainsaw, re-resurrected, taken out with a sawn-off shotgun, stood back up missing an arm... "they just keep on coming!"
The current category system is not as vulnerable to being abused because it is not a prejudicial labelling system.
In plain English:
Computers are sort of stupid. They can't infer intent.
A. If we want a computer program to offer something to be blocked, it needs a label that essentially says "This Is Something People Might Want To Block"
B. A computer program cannot really safely determine what to do with "licking" or "exposed breasts" (especially as there are different norms on what is appropriate in different parts of the world)
Our current category system conforms to B. We would need some sort of mapping to A to make a category-based filter work.
Social problem: Mapping B to A is evil, according to the ALA. ;-)
sincerely, Kim Bruning
Patient: "Doctor Doctor, it hurts when I map B to A!" Doctor: "So Don't Do That Then"
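To make Kim's A/B distinction concrete, here is a minimal TypeScript sketch of the contested mapping step; every category and label name in it is hypothetical, invented purely for illustration:

    // Hypothetical mapping from descriptive, Commons-style categories (B)
    // to blockable filter labels (A). Compiling and maintaining this table
    // is the extra, value-laden work described above.
    const filterLabels: Record<string, string[]> = {
      nudity: ["Category:Example nude photographs"],   // assumed name
      violence: ["Category:Example graphic violence"], // assumed name
    };

    // A file is hidden when any of its categories maps to an enabled label.
    function isFiltered(fileCategories: string[], enabledLabels: string[]): boolean {
      const blocked = new Set(enabledLabels.flatMap((l) => filterLabels[l] ?? []));
      return fileCategories.some((c) => blocked.has(c));
    }

The code itself is trivial; the dispute in this thread is entirely about who curates the mapping table, and by what criteria.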
On 10 October 2011 18:08, Kim Bruning kim@bruning.xs4all.nl wrote:
On Mon, Oct 10, 2011 at 04:52:48PM -0400, Risker wrote:
Given the number of people who insist that any categorization system is vulnerable, I'd like to hear the reasons why the current system, which is obviously necessary in order for people to find types of images, does not have the same effect. I'm not trying to be provocative here, but I am rather concerned that this does not seem to have been discussed.
Been discussed to death, raised from the dead, chopped up with a chainsaw, re-resurrected, taken out with a sawn-off shotgun, stood back up missing an arm... "they just keep on coming!"
The current category system is not as vulnerable to being abused because it is not a prejudicial labelling system.
In plain English:
Computers are sort of stupid. They can't infer intent.
A. If we want a computer program to offer something to be blocked, it needs a label that essentially says "This Is Something People Might Want To Block"
B. A computer program cannot really safely determine what to do with "licking" or "exposed breasts" (especially as there are different norms on what is appropriate in different parts of the world)
Our current category system conforms to B. We would need some sort of mapping to A to make a category-based filter work.
Social problem: Mapping B to A is evil, according to the ALA. ;-)
sincerely, Kim Bruning
Patient: "Doctor Doctor, it hurts when I map B to A!" Doctor: "So Don't Do That Then"
Oh please, Kim; this is nonsense. Commercially available software is, even right now, blocking certain content areas by category and/or keywords for (at minimum) Commons and English Wikipedia; I've seen it in operation. So there's no reason to believe that the current category system, which we use legitimately for content-finding, is not amenable to use in exactly the same way that an image-filter-specific category would be.
Risker
On Mon, Oct 10, 2011 at 07:12:04PM -0400, Risker wrote:
Oh please, Kim; this is nonsense.
Be careful with what you call nonsense. :-)
Commercially available software is, even right now, blocking certain content areas by category and/or keywords for (at minimum) Commons and English Wikipedia;
Yes. These tools also have a category system. That category system is structured very differently from the Commons category system.
Just because MediaWiki uses a database and WordPress uses a database, it doesn't mean that the two databases are interchangeable. That's just silly! (try it and see if you don't believe me)
The same is true for categories (which are just a particular way to structure a database anyway). Just because MediaWiki uses categories, and ACME CensorThemAll(tm) uses categories, doesn't mean that they are necessarily interchangeable in any way.
I've seen it in operation.
Let me check: have you seen your image filter software actually directly use categories from Commons? Are you sure?
So there's no reason to believe that the current category system, which we use legitimately for content-finding, is not amenable to use in exactly the same way that an image-filter-specific category would be.
It would require some amount of remapping before it could be practically used in that manner.
sincerely, Kim Bruning
On 10 October 2011 18:45, Kim Bruning kim@bruning.xs4all.nl wrote:
On Mon, Oct 10, 2011 at 07:12:04PM -0400, Risker wrote:
I've seen it in operation.
Let me check: have you seen your image filter software actually directly use categories from Commons? Are you sure?
Yes, I have seen net-nanny software directly block entire Commons categories.
Risker
On Mon, Oct 10, 2011 at 07:43:22PM -0400, Risker wrote:
On 10 October 2011 18:45, Kim Bruning kim@bruning.xs4all.nl wrote:
On Mon, Oct 10, 2011 at 07:12:04PM -0400, Risker wrote:
I've seen it in operation.
Let me check: have you seen your image filter software actually directly use categories from Commons? Are you sure?
Yes, I have seen net-nanny software directly block entire Commons categories.
Ok, so you can ask net nanny to obtain categories from Commons, and use them to block images on Wikipedia? Or is it blocking by keyword, and only on Commons?
Can you arrange a demonstration for me?
sincerely, Kim Bruning
On 10 October 2011 19:12, Kim Bruning kim@bruning.xs4all.nl wrote:
On Mon, Oct 10, 2011 at 07:43:22PM -0400, Risker wrote:
On 10 October 2011 18:45, Kim Bruning kim@bruning.xs4all.nl wrote:
On Mon, Oct 10, 2011 at 07:12:04PM -0400, Risker wrote:
I've seen it in operation.
Let me check: have you seen your image filter software actually directly use categories from Commons? Are you sure?
Yes, I have seen net-nanny software directly block entire Commons categories.
Ok, so you can ask net nanny to obtain categories from Commons, and use them to block images on Wikipedia? Or is it blocking by keyword, and only on Commons?
Can you arrange a demonstration for me?
sincerely, Kim Bruning
No, I can't arrange a demonstration, Kim. I do not have net nannies on any system that I control. The systems on which I have encountered them are not publicly accessible. They have prevented access to all articles I tested within a given category on English Wikipedia and all images within a given category that I tested on Commons.
Risker
On Mon, Oct 10, 2011 at 08:49:13PM -0400, Risker wrote:
No, I can't arrange a demonstration, Kim. I do not have net nannies on any system that I control. The systems on which I have encountered them are not publicly accessible. They have prevented access to all articles I tested within a given category on English Wikipedia and all images within a given category that I tested on Commons.
That sounds like it works on the basis of keywords, perhaps.
How thoroughly have you tested it, and when did you do this test?
Can we check?
Can it block those images from the given category on commons, if viewed on the actual pages they are used for on wikipedia?
And will it also block images from the subcategory - if used on wikipedia?
I might investigate or even buy this software (if not exceptionally expensive) and test it extensively if this is the case.
sincerely, Kim Bruning
On 10 October 2011 20:03, Kim Bruning kim@bruning.xs4all.nl wrote:
On Mon, Oct 10, 2011 at 08:49:13PM -0400, Risker wrote:
No, I can't arrange a demonstration, Kim. I do not have net nannies on any system that I control. The systems on which I have encountered them are not publicly accessible. They have prevented access to all articles I tested within a given category on English Wikipedia and all images within a given category that I tested on Commons.
That sounds like it works on the basis of keywords, perhaps.
How thoroughly have you tested it, and when did you do this test?
Can we check?
Can it block those images from the given category on commons, if viewed on the actual pages they are used for on wikipedia?
And will it also block images from the subcategory - if used on wikipedia?
I might investigate or even buy this software (if not exceptionally expensive) and test it extensively if this is the case.
sincerely, Kim Bruning
I cannot answer your questions, Kim, as these are generally systems to which I do not have long-term access; and on those to which I do have long-term access, I'm not going to risk my accounts for your experiments. I cannot think of a legitimate reason why I would be investing a large amount of time checking all the articles in [[:Category:Sexual positions]] (or the equivalent Commons category) on those accounts, for example. You may be in a different situation.
Risker
On Mon, Oct 10, 2011 at 09:22:09PM -0400, Risker wrote:
all the articles in [[:Category:Sexual positions]]
<looks extremely puzzled>
What are you trying to ...
Let's try a question like:
...Can you block [[:Category:Demolished windmills]] (and all subcats?) for yourself?
sincerely, Kim Bruning
On 10 October 2011 20:52, Kim Bruning kim@bruning.xs4all.nl wrote:
On Mon, Oct 10, 2011 at 09:22:09PM -0400, Risker wrote:
all the articles in [[:Category:Sexual positions]]
<looks extremely puzzled>
What are you trying to ...
Let's try a question like:
...Can you block [[:Category:Demolished windmills]] (and all subcats?) for yourself?
sincerely, Kim Bruning
Kim, I am getting the impression you are being deliberately obtuse. These are systems that I have encountered where I have no controls whatsoever; I don't even know what the software is because all I get is a screen that says "this page is blocked" or words to that effect.
I cannot decide what is being blocked, as a bottom level user. Those decisions have been made at a sysadmin or software level. I can tell you my experiences as a user on those systems, but I do not have the information you seek, nor am I in a position to obtain it.
Risker
On Mon, Oct 10, 2011 at 09:53:55PM -0400, Risker wrote:
Kim, I am getting the impression you are being deliberately obtuse.
No, I'm being exhaustive. I wanted to ensure that there was not a hair's breadth of a possibility that I might have missed a good-faith avenue.
(I wouldn't have asked this question if you hadn't said I was stating nonsense)
I cannot decide what is being blocked, as a bottom level user. Those decisions have been made at a sysadmin or software level. I can tell you my experiences as a user on those systems, but I do not have the information you seek, nor am I in a position to obtain it.
<flame on> Therefore you cannot claim that I am stating nonsense. The inverse is true: you do not possess the information to support your position, as you now admit.
In future, before you set out to make claims of bad faith in others, it would be wise to ensure that your own information is impeccable first. </flame>
sincerely, Kim Bruning
On 11/10/2011 15:33, Kim Bruning wrote:
<flame on> Therefore you cannot claim that I am stating nonsense. The inverse is true: you do not possess the information to support your position, as you now admit. In future, before you set out to make claims of bad faith in others, it would be wise to ensure that your own information is impeccable first. </flame> sincerely, Kim Bruning
I claim that you are talking total crap. It is not *that* difficult to get the categories of an image and reject it based on which categories the image is in. There are enough people out there busily categorizing all the images already that any org that wished to could block images that are in disapproved categories.
The problem, and it is a genuine problem, is that the fucking stupid images leak out across Commons in unexpected ways. Let's assume that a 6th-grade class is asked to write a report on Queen Victoria, and a child searches Commons for prince albert:
http://commons.wikimedia.org/w/index.php?title=Special:Search&search=pri...
If you are at work you probably do not want to click the above link at all.
On 16.10.2011 12:53, ??? wrote:
On 11/10/2011 15:33, Kim Bruning wrote:
<flame on> Therefore you cannot claim that I am stating nonsense. The inverse is true: you do not possess the information to support your position, as you now admit. In future, before you set out to make claims of bad faith in others, it would be wise to ensure that your own information is impeccable first.</flame> sincerely, Kim Bruning
I claim that you are talking total crap. It is not *that* difficult to get the categories of an image and reject it based on which categories the image is in. There are enough people out there busily categorizing all the images already that any org that wished to could block images that are in disapproved categories.
I have to throw that kind of wording back at you. It isn't merely difficult to judge what is offensive and what isn't: it is impossible to do so, if you want to stay neutral and to respect every, or even just every major, opinion out there. Wikipedia and Commons are projects that gather knowledge or media. Wikipedia has an editorial system that watches over the content to keep it accurate and representative. Commons is a media library with a categorization system that guides the reader to what he wants to find. The category system in itself is (or should be) built upon directional labels. Anything else is contradictory to current practice and unacceptable:
* Wikipedia authors do not pass judgment on topics. They also do not claim for themselves that something is controversial, ugly, bad, ...
* Commons contributors respect these terms as well. They do not pass judgment on the content. They gather and categorize it. But they will not append prejudicial labels.
The problem, and it is a genuine problem, is that the fucking stupid images leak out across Commons in unexpected ways. Let's assume that a 6th-grade class is asked to write a report on Queen Victoria, and a child searches Commons for prince albert:
http://commons.wikimedia.org/w/index.php?title=Special:Search&search=pri...
If you are at work you probably do not want to click the above link at all.
Worst-case scenarios will always happen. With or without a filter, you will always find such examples. They are rare, and might happen from time to time. But they aren't the rule, and they aren't the high-water mark by which to measure a flood.
To give a simple example of the opposite: enable "strict filtering" on Google and search for images with the term "futanari". Don't say that I did not warn you...
A last word: rightly categorizing content as "good" and "evil" is impossible for the human beings that we are. And categorizing content as "good" and "evil" has always led to destructive consequences when human beings are involved, as we are.
On 16/10/2011 12:37, Tobias Oelgarte wrote:
On 16.10.2011 12:53, ??? wrote:
On 11/10/2011 15:33, Kim Bruning wrote:
<flame on> Therefore you cannot claim that I am stating nonsense. The inverse is true: you do not possess the information to support your position, as you now admit. In future, before you set out to make claims of bad faith in others, it would be wise to ensure that your own information is impeccable first.</flame> sincerely, Kim Bruning
I claim that you are talking total crap. It is not *that* difficult to get the categories of an image and reject it based on which categories the image is in. There are enough people out there busily categorizing all the images already that any org that wished to could block images that are in disapproved categories.
I have to throw that kind of wording back at you. It isn't merely difficult to judge what is offensive and what isn't: it is impossible to do so, if you want to stay neutral and to respect every, or even just every major, opinion out there. Wikipedia and Commons are projects that gather knowledge or media. Wikipedia has an editorial system that watches over the content to keep it accurate and representative. Commons is a media library with a categorization system that guides the reader to what he wants to find. The category system in itself is (or should be) built upon directional labels. Anything else is contradictory to current practice and unacceptable:
It is incredibly easy. One just says any image within Category:Sex is not acceptable. It's not hard to do. An organisation can run a script once a week or so to delve down through the category hierarchy to pick up any changes.
You already categorize the images for anyone with enough processing power, or the will to censor the content. I doubt that anyone doing so is going to be too bothered about whether they've falsely censored an image in Category:Sex that isn't 'controversial'.
- Wikipedia authors do not pass judgment on topics. They also do not claim for themselves that something is controversial, ugly, bad, ...
- Commons contributors respect these terms as well. They do not pass judgment on the content. They gather and categorize it. But they will not append prejudicial labels.
Of course they do: they add categories. Someone else applies the value judgment as to whether images in that category are controversial or not. The job of WMF editors is just to categorise them. If Arachnids are 'controversial' then anything under that category goes. Just label the damn things and shut the fuck up.
The problem, and it is a genuine problem, is that the fucking stupid images leak out across Commons in unexpected ways. Let's assume that a 6th-grade class is asked to write a report on Queen Victoria, and a child searches Commons for prince albert:
http://commons.wikimedia.org/w/index.php?title=Special:Search&search=pri...
If you are at work you probably do not want to click the above link at all.
Worst-case scenarios will always happen. With or without a filter, you will always find such examples. They are rare, and might happen from time to time. But they aren't the rule, and they aren't the high-water mark by which to measure a flood.
They are not rare; they leak all over the place. You can get porn on Commons by searching for 'furniture'. Porn images are everywhere on Commons.
To give a simple example of the opposite: enable "strict filtering" on Google and search for images with the term "futanari". Don't say that I did not warn you...
Don't be an arsehole; you get the same sort of stuff if you search for cumshot. We aren't talking about terms that have primarily a sexual context, but phrases like 'furniture' or 'prince albert' which do not.
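For context, the weekly crawl described above is indeed straightforward to write. A rough TypeScript sketch against the public MediaWiki web API (the starting category is only an example, and error handling is omitted):

    // Sketch: recursively collect every file under a Commons category,
    // following subcategories, via the public API. A downstream censor
    // could diff successive runs to "pick up any changes".
    const API = "https://commons.wikimedia.org/w/api.php";

    async function collectFiles(cat: string, seen = new Set<string>()): Promise<string[]> {
      if (seen.has(cat)) return []; // guard against category cycles
      seen.add(cat);
      let files: string[] = [];
      let cont: string | undefined;
      do {
        const params = new URLSearchParams({
          action: "query", list: "categorymembers", cmtitle: cat,
          cmtype: "file|subcat", cmlimit: "500", format: "json", origin: "*",
        });
        if (cont) params.set("cmcontinue", cont);
        const data = await (await fetch(`${API}?${params}`)).json();
        for (const m of data.query.categorymembers) {
          if (m.title.startsWith("File:")) files.push(m.title);
          else files = files.concat(await collectFiles(m.title, seen));
        }
        cont = data.continue?.cmcontinue;
      } while (cont);
      return files;
    }

    collectFiles("Category:Sex").then((f) => console.log(f.length, "files found"));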
On 16 October 2011 14:40, ??? wiki-list@phizz.demon.co.uk wrote:
Don't be an arsehole you get the same sort of stuff if you search for
Presumably this is the sort of quality of discourse Sue was complaining about from filter advocates: provocateurs lacking in empathy.
- d.
On 16/10/2011 14:50, David Gerard wrote:
On 16 October 2011 14:40, ??? wiki-list@phizz.demon.co.uk wrote:
Don't be an arsehole you get the same sort of stuff if you search for
Presumably this is the sort of quality of discourse Sue was complaining about from filter advocates: provocateurs lacking in empathy.
Trolling much eh David?
But thanks for showing once again your incapacity to acknowledge that searching for sexual images and seeing such images is somewhat different from searching for non-sexual imagery and getting sexual images.
On 16.10.2011 16:17, ??? wrote:
On 16/10/2011 14:50, David Gerard wrote:
On 16 October 2011 14:40, ??? wiki-list@phizz.demon.co.uk wrote:
Don't be an arsehole you get the same sort of stuff if you search for
Presumably this is the sort of quality of discourse Sue was complaining about from filter advocates: provocateurs lacking in empathy.
Trolling much eh David?
But thanks for showing once again your incapacity to acknowledge that searching for sexual images and seeing such images is somewhat different from searching for non-sexual imagery and getting sexual images.
I have to agree with David. Your behavior is provocative and unproductive. I don't feel the need to respond to your arguments at all if you write in this tone. You can either apologize for this kind of wording, or we are done.
nya~
On 16/10/2011 19:36, Tobias Oelgarte wrote:
On 16.10.2011 16:17, ??? wrote:
On 16/10/2011 14:50, David Gerard wrote:
On 16 October 2011 14:40, ??? wiki-list@phizz.demon.co.uk wrote:
Don't be an arsehole you get the same sort of stuff if you search for
Presumably this is the sort of quality of discourse Sue was complaining about from filter advocates: provocateurs lacking in empathy.
Trolling much eh David?
But thanks for showing once again your incapacity to acknowledge that searching for sexual images and seeing such images is somewhat different from searching for non-sexual imagery and getting sexual images.
I have to agree with David. Your behavior is provocative and unproductive. I don't feel the need to respond to your arguments at all if you write in this tone. You can either apologize for this kind of wording, or we are done.
Now you wouldn't be complaining about seeing content not to your liking, would you. What are you going to do, filter out the posts? Bet you're glad your email provider added that option for you.
Yet another censorship hypocrite.
If the entire premise of an email comes down to "I'm taunting you", that's an indication it probably shouldn't be sent.
Dan Rosenthal
On Sun, Oct 16, 2011 at 10:27 PM, ??? wiki-list@phizz.demon.co.uk wrote:
On 16/10/2011 19:36, Tobias Oelgarte wrote:
On 16.10.2011 16:17, ??? wrote:
On 16/10/2011 14:50, David Gerard wrote:
On 16 October 2011 14:40, ??? wiki-list@phizz.demon.co.uk wrote:
Don't be an arsehole you get the same sort of stuff if you search for
Presumably this is the sort of quality of discourse Sue was complaining about from filter advocates: provocateurs lacking in empathy.
Trolling much eh David?
But thanks for showing once again your incapacity to acknowledge that searching for sexual images and seeing such images is somewhat different from searching for non-sexual imagery and getting sexual images.
I have to agree with David. Your behavior is provocative and unproductive. I don't feel the need to respond to your arguments at all if you write in this tone. You can either apologize for this kind of wording, or we are done.
Now you wouldn't be complaining about seeing content not to your liking, would you. What are you going to do, filter out the posts? Bet you're glad your email provider added that option for you.
Yet another censorship hypocrite.
Personality conflicts aside, we're noting that non-sexual search terms in Commons can prominently return sexual images of varying explicitness, from mild nudity to hardcore, and that this is different from entering a sexual search term and finding that Google fails to filter some results.
I posted some more Commons search terms where this happens on Meta; they include
Black, Caucasian, Asian;
Male, Female, Teenage, Woman, Man;
Vegetables;
Drawing, Drawing style;
Barbie, Doll;
Demonstration, Slideshow;
Drinking, Custard, Tan;
Hand, Forefinger, Backhand, Hair;
Bell tolling, Shower, Furniture, Crate, Scaffold;
Galipette – French for "somersault"; this leads to a collection of 1920s pornographic films which are undoubtedly of significant historical interest, but are also pretty much as explicit as any modern representative of the genre.
Andreas
From: Dan Rosenthal swatjester@gmail.com To: Wikimedia Foundation Mailing List foundation-l@lists.wikimedia.org Sent: Sunday, 16 October 2011, 20:31 Subject: Re: [Foundation-l] Letter to the community on Controversial Content
If the entire premise of an email comes down to "I'm taunting you", that's an indication it probably shouldn't be sent.
Dan Rosenthal
On Sun, Oct 16, 2011 at 10:27 PM, ??? wiki-list@phizz.demon.co.uk wrote:
On 16/10/2011 19:36, Tobias Oelgarte wrote:
On 16.10.2011 16:17, ??? wrote:
On 16/10/2011 14:50, David Gerard wrote:
On 16 October 2011 14:40, ??? wiki-list@phizz.demon.co.uk wrote:
Don't be an arsehole you get the same sort of stuff if you search for
Presumably this is the sort of quality of discourse Sue was complaining about from filter advocates: provocateurs lacking in empathy.
Trolling much eh David?
But thanks for showing once again your incapacity to acknowledge that searching for sexual images and seeing such images is somewhat different from searching for non-sexual imagery and getting sexual images.
I have to agree with David. Your behavior is provocative and unproductive. I don't feel the need to respond to your arguments at all if you write in this tone. You can either apologize for this kind of wording, or we are done.
Now you wouldn't be complaining about seeing content not to your liking, would you. What are you going to do, filter out the posts? Bet you're glad your email provider added that option for you.
Yet another censorship hypocrite.
* Andreas Kolbe wrote:
Personality conflicts aside, we're noting that non-sexual search terms in Commons can prominently return sexual images of varying explicitness, from mild nudity to hardcore, and that this is different from entering a sexual search term and finding that Google fails to filter some results.
That is normal and expected with a full text search that does not filter out "sexual" images. Even if you search for sexual content on Commons it is normal and expected that you get results you would rather not get. It is possible to largely avoid this by using, say, Google Image Search and a site:commons.wikimedia.org constraint and the right SafeSearch setting if you want for the simpler cases, but I would not want to search for, say, "penis", on either site when unprepared for shock. I do not think Commons is relevant to the Image Filter discussion, the image filter is for things editors largely agree should be included in context, while on Commons you lack context and editorial control. If there was a MediaWiki extension that is good at emulating Google's SafeSearch, installing that on Commons might be an acceptable idea, but there is not, and making one would be rather expensive.
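For instance, a query along these lines (the parameters are Google's own; the search term is arbitrary) restricts an image search to Commons with SafeSearch enabled:

    https://www.google.com/search?tbm=isch&safe=active&q=site:commons.wikimedia.org+shower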
Commons featured prominently in the Harris study, as well as in the Board resolution on controversial content.
http://meta.wikimedia.org/wiki/2010_Wikimedia_Study_of_Controversial_Content...
http://wikimediafoundation.org/wiki/Resolution:Controversial_content
Andreas
From: Bjoern Hoehrmann derhoermi@gmx.net To: Andreas Kolbe jayen466@yahoo.com; Wikimedia Foundation Mailing List foundation-l@lists.wikimedia.org Sent: Monday, 17 October 2011, 2:15 Subject: Re: [Foundation-l] Letter to the community on Controversial Content
- Andreas Kolbe wrote:
Personality conflicts aside, we're noting that non-sexual search terms in Commons can prominently return sexual images of varying explicitness, from mild nudity to hardcore, and that this is different from entering a sexual search term and finding that Google fails to filter some results.
That is normal and expected with a full text search that does not filter out "sexual" images. Even if you search for sexual content on Commons it is normal and expected that you get results you would rather not get. It is possible to largely avoid this by using, say, Google Image Search and a site:commons.wikimedia.org constraint and the right SafeSearch setting if you want for the simpler cases, but I would not want to search for, say, "penis", on either site when unprepared for shock. I do not think Commons is relevant to the Image Filter discussion, the image filter is for things editors largely agree should be included in context, while on Commons you lack context and editorial control. If there was a MediaWiki extension that is good at emulating Google's SafeSearch, installing that on Commons might be an acceptable idea, but there is not, and making one would be rather expensive. -- Björn Höhrmann · mailto:bjoern@hoehrmann.de · http://bjoern.hoehrmann.de Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de 25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/
It is an in-house-made problem, as I explained at brainstorming [1]. To put it short: it is a self-made problem, based on the fact that these images got more attention than others. Thanks to failed deletion requests, they had many people caring about them. This results in more exact descriptions and file names than average images have. That's what search engines prefer; and now we have them at a top spot. Thanks for caring so much about these images and not treating them like anything else.
Andreas, you currently represent exactly the kind of argumentation that leads anywhere but to a solution. I already described in the post "Controversial Content vs Only-Image-Filter" [2] that single examples don't represent the overall theme. It also isn't an addition to the discussion as an argument. It would be an argument if we knew the effects that occur. We have to clear up these questions:
* Is it a problem that the search function displays sexual content? (A search should find anything related, by definition.)
* Is sexual content overrepresented by the search?
* If that is the case, why is it that way?
* Can we do something about it without drastic changes, like blocking/excluding categories?
[1] http://meta.wikimedia.org/w/index.php?title=Controversial_content%2FBrainsto... [2] http://lists.wikimedia.org/pipermail/foundation-l/2011-October/069699.html
On 17.10.2011 02:56, Andreas Kolbe wrote:
Personality conflicts aside, we're noting that non-sexual search terms in Commons can prominently return sexual images of varying explicitness, from mild nudity to hardcore, and that this is different from entering a sexual search term and finding that Google fails to filter some results.
I posted some more Commons search terms where this happens on Meta; they include
Black, Caucasian, Asian;
Male, Female, Teenage, Woman, Man;
Vegetables;
Drawing, Drawing style;
Barbie, Doll;
Demonstration, Slideshow;
Drinking, Custard, Tan;
Hand, Forefinger, Backhand, Hair;
Bell tolling, Shower, Furniture, Crate, Scaffold;
Galipette – French for "somersault"; this leads to a collection of 1920s pornographic films which are undoubtedly of significant historical interest, but are also pretty much as explicit as any modern representative of the genre.
Andreas
From: Dan Rosenthal swatjester@gmail.com To: Wikimedia Foundation Mailing List foundation-l@lists.wikimedia.org Sent: Sunday, 16 October 2011, 20:31 Subject: Re: [Foundation-l] Letter to the community on Controversial Content
If the entire premise of an email comes down to "I'm taunting you", that's an indication it probably shouldn't be sent.
Dan Rosenthal
On Sun, Oct 16, 2011 at 10:27 PM, ??? wiki-list@phizz.demon.co.uk wrote:
On 16/10/2011 19:36, Tobias Oelgarte wrote:
On 16.10.2011 16:17, ??? wrote:
On 16/10/2011 14:50, David Gerard wrote:
On 16 October 2011 14:40, ??? wiki-list@phizz.demon.co.uk wrote:
> Don't be an arsehole you get the same sort of stuff if you search for
Presumably this is the sort of quality of discourse Sue was complaining about from filter advocates: provocateurs lacking in empathy.
Trolling much eh David?
But thanks for showing once again your incapacity to acknowledge that searching for sexual images and seeing such images is somewhat different from searching for non-sexual imagery and getting sexual images.
I have to agree with David. Your behavior is provocative and unproductive. I don't feel the need to respond to your arguments at all if you write in this tone. You can either apologize for this kind of wording, or we are done.
Now you wouldn't be complaining about seeing content not to your liking, would you. What are you going to do, filter out the posts? Bet you're glad your email provider added that option for you.
Yet another censorship hypocrite.
Note: This foundation-l post is cross-posted to commons-l, since this discussion may be of interest there as well.
From: Tobias Oelgarte tobias.oelgarte@googlemail.com
It is an in-house-made problem, as I explained at brainstorming [1]. To put it short: it is a self-made problem, based on the fact that these images got more attention than others. Thanks to failed deletion requests, they had many people caring about them. This results in more exact descriptions and file names than average images have. That's what search engines prefer; and now we have them at a top spot. Thanks for caring so much about these images and not treating them like anything else.
I don't think that is the case, actually. Brandon described how the search function works here:
http://www.quora.com/Why-is-the-second-image-returned-on-Wikimedia-Commons-w...
To take an example, the file
http://commons.wikimedia.org/w/index.php?title=File:Golden_Shower.jpg&ac...
(a prominent search result in searches for "shower") has never had its name or description changed since it was uploaded from Flickr. My impression is that refinement of file names and descriptions following discussions has little to do with sexual or pornography-related media appearing prominently in search listings. The material is simply there, and the search function finds it, as it is designed to do.
Andreas, you currently represent exactly the kind of argumentation that leads anywhere but to a solution. I already described in the post "Controversial Content vs Only-Image-Filter" [2] that single examples don't represent the overall theme. It also isn't an addition to the discussion as an argument. It would be an argument if we knew the effects that occur. We have to clear up these questions:
It is hard to say how else to provide evidence of a problem, other than by giving multiple (not single) examples of it.
You could also search for blond, blonde, red hair, strawberry, or peach ...
What is striking is the crass sexism of some of the filenames and image descriptions: "blonde bombshell", "Blonde teenie sucking", "so, so sexy", "These two had a blast showing off" etc.
http://commons.wikimedia.org/w/index.php?title=Special%3ASearch&search=b...
One of the images shows a young woman in the bathroom, urinating:
http://commons.wikimedia.org/wiki/File:Blonde_woman_urinating.jpg
Her face is fully shown, and the image, displayed in the Czech Wikipedia, carries no personality rights warning, nor is there evidence that she has consented to or is even aware of the upload.
And I am surprised how often images of porn actresses are found in search results, even for searches like "Barbie". Commons has 917 files in Category:Unidentified porn actresses alone. There is no corresponding Category:Unidentified porn actors (although there is of course a wealth of categories and media for gay porn actors).
- Is it a problem that the search function displays sexual content? (A search should find anything related, by definition.)
I think the search function works as designed, looking for matches in file names and descriptions.
- Is sexual content overrepresented by the search?
I don't think so. The search function simply shows what is there. However, the sexual content that comes up for innocuous searches sometimes violates the principle of least astonishment, and thus may turn some users off using, contributing to, or recommending Commons as an educational resource.
- If that is the case, why is it that way?
- Can we do something about it without drastic changes, like blocking/excluding categories?
One thing that might help would be for the search function to privilege files that are shown in top-level categories containing the search term: e.g. for "cucumber", first display all files that are in category "cucumber", rather than those contained in subcategories, like "sexual penetrative use of cucumbers", regardless of the file name (which may not have the English word "cucumber" in it).
A second step would be to make sure that sexual content is not housed in the top categories, but in appropriately named subcategories. This is generally already established practice. Doing both would reduce the problem somewhat, at least in cases where there is a category that matches the search term.
Regards, Andreas
[1]
http://meta.wikimedia.org/w/index.php?title=Controversial_content%2FBrainsto... [2] http://lists.wikimedia.org/pipermail/foundation-l/2011-October/069699.html
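A minimal TypeScript sketch of the first step proposed above (the data shapes and the boost weight are invented for illustration; Commons search exposes no such hook today):

    // Sketch: boost files whose *direct* category matches the search term,
    // so files in Category:Cucumber outrank files that only match via a
    // subcategory such as "sexual penetrative use of cucumbers".
    interface FileHit {
      title: string;
      directCategories: string[]; // e.g. ["Category:Cucumber"]
      textScore: number;          // relevance from the full-text match
    }

    function rankResults(hits: FileHit[], query: string): FileHit[] {
      const wanted = `category:${query.trim().toLowerCase()}`;
      const score = (h: FileHit): number => {
        const direct = h.directCategories.some((c) => c.toLowerCase() === wanted);
        return h.textScore + (direct ? 1000 : 0); // assumed boost weight
      };
      return [...hits].sort((a, b) => score(b) - score(a));
    }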
On 17.10.2011 02:56, Andreas Kolbe wrote:
Personality conflicts aside, we're noting that non-sexual search terms in Commons can prominently return sexual images of varying explicitness, from mild nudity to hardcore, and that this is different from entering a sexual search term and finding that Google fails to filter some results.
I posted some more Commons search terms where this happens on Meta; they include
Black, Caucasian, Asian;
Male, Female, Teenage, Woman, Man;
Vegetables;
Drawing, Drawing style;
Barbie, Doll;
Demonstration, Slideshow;
Drinking, Custard, Tan;
Hand, Forefinger, Backhand, Hair;
Bell tolling, Shower, Furniture, Crate, Scaffold;
Galipette – French for "somersault"; this leads to a collection of 1920s pornographic films which are undoubtedly of significant historical interest, but are also pretty much as explicit as any modern representative of the genre.
Andreas
On Mon, Oct 17, 2011 at 12:57 AM, ??? wiki-list@phizz.demon.co.uk wrote:
On 16/10/2011 19:36, Tobias Oelgarte wrote:
On 16.10.2011 16:17, ??? wrote:
On 16/10/2011 14:50, David Gerard wrote:
On 16 October 2011 14:40, ??? wiki-list@phizz.demon.co.uk wrote:
Don't be an arsehole you get the same sort of stuff if you search for
Presumably this is the sort of quality of discourse Sue was complaining about from filter advocates: provocateurs lacking in empathy.
Trolling much eh David?
But thanks for showing once again your incapacity to acknowledge that searching for sexual images and seeing such images is somewhat different from searching for non-sexual imagery and getting sexual images.
I have to agree with David. Your behavior is provocative and unproductive. I don't feel the need to respond to your arguments at all if you write in this tone. You can either apologize for this kind of wording, or we are done.
Now you wouldn't be complaining about seeing content not to your liking, would you. What are you going to do, filter out the posts? Bet you're glad your email provider added that option for you.
Yet another censorship hypocrite.
I think he meant ignoring you ("I don't feel the need to respond to your arguments at all"), which has also been an argument against the filter. I don't see anything hypocritical in that.
Regards Theo
On 16.10.2011 21:27, ??? wrote:
On 16/10/2011 19:36, Tobias Oelgarte wrote:
On 16.10.2011 16:17, ??? wrote:
On 16/10/2011 14:50, David Gerard wrote:
On 16 October 2011 14:40, ??? wiki-list@phizz.demon.co.uk wrote:
Don't be an arsehole you get the same sort of stuff if you search for
Presumably this is the sort of quality of discourse Sue was complaining about from filter advocates: provocateurs lacking in empathy.
Trolling much eh David?
But thanks for showing once again your incapacity to acknowledge that searching for sexual images and seeing such images is somewhat different from searching for non-sexual imagery and getting sexual images.
I have to agree with David. Your behavior is provocative and unproductive. I don't feel the need to respond to your arguments at all if you write in this tone. You can either apologize for this kind of wording, or we are done.
Now you wouldn't be complaining about seeing content not to your liking, would you. What are you going to do, filter out the posts? Bet you're glad your email provider added that option for you.
Yet another censorship hypocrite.
I guess you did not understand my answer. That's why I feel free to respond one more time.
I have no problem with any kind of controversial content. Showing the progress of fisting on the main page? No problem for me. Reading your comments? No problem for me. Reading your insults? Also no problem. The only thing I did was the following: I told you that I will no longer react to your comments if they are worded the way they currently are.
Figuratively speaking: I feel free to open your book and start to read. If it is interesting and constructive, I will continue to read it, and I will respond to you to share my thoughts. If it is purely meant to insult, without any other meaning, then I will get bored and skim over the lines, reading only half or less. I also have no intention of sharing my thoughts with the author of such a book. Why? I have nothing to talk about. Should I complain about its content? Which content, anyway?
Give it a try. Make constructive arguments and explain your thoughts. There is no need for strong wording if the construction of the words itself is strong.
nya~
On 17 Oct 2011, at 09:19, Tobias Oelgarte tobias.oelgarte@googlemail.com wrote:
On 16.10.2011 21:27, ??? wrote:
On 16/10/2011 19:36, Tobias Oelgarte wrote:
On 16.10.2011 16:17, ??? wrote:
On 16/10/2011 14:50, David Gerard wrote:
On 16 October 2011 14:40, ??? wiki-list@phizz.demon.co.uk wrote:
Don't be an arsehole you get the same sort of stuff if you search for
Presumably this is the sort of quality of discourse Sue was complaining about from filter advocates: provocateurs lacking in empathy.
Trolling much eh David?
But thanks for showing once again your incapacity to acknowledge that searching for sexual images and seeing such images is somewhat different from searching for non-sexual imagery and getting sexual images.
I have to agree with David. Your behavior is provocative and unproductive. I don't feel the need to respond to your arguments at all if you write in this tone. You can either apologize for this kind of wording, or we are done.
Now you wouldn't be complaining about seeing content not to your liking, would you. What are you going to do, filter out the posts? Bet you're glad your email provider added that option for you.
Yet another censorship hypocrite.
I guess you did not understand my answer. That's why I feel free to respond one more time.
I have no problem with any kind of controversial content. Showing the progress of fisting on the main page? No problem for me. Reading your comments? No problem for me. Reading your insults? Also no problem. The only thing I did was the following: I told you that I will no longer react to your comments if they are worded the way they currently are.
Figuratively speaking: I feel free to open your book and start to read. If it is interesting and constructive, I will continue to read it, and I will respond to you to share my thoughts. If it is purely meant to insult, without any other meaning, then I will get bored and skim over the lines, reading only half or less. I also have no intention of sharing my thoughts with the author of such a book. Why? I have nothing to talk about. Should I complain about its content? Which content, anyway?
Give it a try. Make constructive arguments and explain your thoughts. There is no need for strong wording if the construction of the words itself is strong.
nya~
And that is a mature and sensible attitude.
Some people do not share your view and are unable to ignore what to them are rude or offensive things.
Are they wrong?
Should they be doing what you (and I) do?
Tom
On Tuesday, October 18, 2011, Thomas Morton wrote:
On 17 Oct 2011, at 09:19, Tobias Oelgarte <tobias.oelgarte@googlemail.com> wrote:
I have no problem with any kind of controversial content. Showing the progress of fisting on the main page? No problem for me. Reading your comments? No problem for me. Reading your insults? Also no problem. The only thing I did was the following: I told you that I will no longer react to your comments if they are worded the way they currently are.
Figuratively speaking: I feel free to open your book and start to read. If it is interesting and constructive, I will continue to read it, and I will respond to you to share my thoughts. If it is purely meant to insult, without any other meaning, then I will get bored and skim over the lines, reading only half or less. I also have no intention of sharing my thoughts with the author of such a book. Why? I have nothing to talk about. Should I complain about its content? Which content, anyway?
Give it a try. Make constructive arguments and explain your thoughts. There is no need for strong wording if the construction of the words itself is strong.
nya~
And that is a mature and sensible attitude.
Some people do not share your view and are unable to ignore what to them are rude or offensive things.
Are they wrong?
Should they be doing what you (and I) do?
I share the same attitude. I'm pretty much immune to almost anything you can throw at me in terms of potentially offensive content.
But, despite this enlightenment, I am not an island. I use my computer in public places: at the workplace, in the university library, on the train, at conferences, and in cafes.
I may have been inured to 'Autofellatio6.jpg', but I'm not sure the random person sitting next to me on the train needs to see it. Being able to read, edit and patrol Wikipedia in public without offending the moral sensibilities of people who catch a glance at my laptop screen would be a feature. Being able to click 'Random page' without the chance of a public order offence flowing from it would also be pretty nifty.
Sorry to take a tangential point from Tom's email, but is the random article tool truly random, or does it direct only to stable articles or some other subset of article space?
Thanks Fae
On 18.10.2011 09:57, Tom Morris wrote:
I share the same attitude. I'm pretty much immune to almost anything you can throw at me in terms of potentially offensive content.
But, despite this enlightenment, I am not an island. I use my computer in public places: at the workplace, in the university library, on the train, at conferences, and in cafes.
I may have been inured to 'Autofellatio6.jpg', but I'm not sure the random person sitting next to me on the train needs to see it. Being able to read, edit and patrol Wikipedia in public without offending the moral sensibilities of people who catch a glance at my laptop screen would be a feature. Being able to click 'Random page' without the chance of a public order offence flowing from it would also be pretty nifty.
But that is exactly the typical scenario that does not need a category-based filtering system. There are many other proposed solutions that would handle exactly this case without the need for any categorization. The "hide all images" feature would already be a good option. An improved version is the "blurred/pixelated images" feature, where you enter the "hide/distort/..." mode and no image is visible in detail as long as you don't hover over it or click on it.
Still, we are discussing filter categories and whether they are needed. In your example no categorization is needed at all to provide a well-working solution.
nya~
On 18.10.2011 01:54, Thomas Morton wrote:
On 17 Oct 2011, at 09:19, Tobias Oelgarte tobias.oelgarte@googlemail.com wrote:
On 16.10.2011 21:27, ??? wrote:
On 16/10/2011 19:36, Tobias Oelgarte wrote:
On 16.10.2011 16:17, ??? wrote:
On 16/10/2011 14:50, David Gerard wrote:
On 16 October 2011 14:40, ??? wiki-list@phizz.demon.co.uk wrote:
> Don't be an arsehole you get the same sort of stuff if you search for
Presumably this is the sort of quality of discourse Sue was complaining about from filter advocates: provocateurs lacking in empathy.
Trolling much, eh David?
But thanks for showing once again your incapacity to acknowledge that searching for sexual images and seeing such images is somewhat different from searching for non-sexual imagery and getting sexual images.
I have to agree with David. Your behavior is provocative and unproductive. I don't feel the need to respond to your arguments at all if you write in this tone. You can either apologize for this kind of wording, or we are done.
And that is a mature and sensible attitude.
Some people do not share your view and are unable to ignore what to them are rude or offensive things.
Are they wrong?
Should they be doing what you (and I) do?
Tom
The question is whether we should support "them" in not even trying to start this learning process. It's like saying: "That is all you have to know. Don't bother with the rest; it is not good for you."
nya~
Which assumes that they want to, or should, change - and that our approach is better and we are right. These are arrogant assumptions, not at all in keeping with our mission.
It is this fallacious logic that underpins our crazy politics of "neutrality" which we attempt to enforce on people (when in practice we lack neutrality almost as much as the next man!).
It's like religion; I am not religious, and if a religious person wants to discuss their beliefs against my lack of them, then great; I find that refreshing and will take the opportunity to try and show them my argument. If they don't want to, I'm not going to force the issue :)
It's like saying: "That is all you have to know. Don't bother with the
rest, it is not good for you."
Actually, no, it's exactly not that. Because we are talking about user-choice filtering. In that context, providing individual filtering tools for each user should not be controversial.
I understand that this becomes a problem when we look at offering pre-built "block lists", so that our readers don't have to manually construct their own preferences but can click a few buttons and largely have the experience they desire. So we have this issue of trading usability against potential for abuse; I don't have an immediate solution there, but I think we can come up with one. Although we do quite poorly at handling abuse of process and undermining of content on-wiki at the moment, this could be a unique opportunity to brainstorm wider solutions that have a positive impact everywhere.
If an individual expresses a preference to hide certain content, it is reasonable for us to provide that option for use at their discretion.
Anything else is like saying "No, your views on acceptability are wrong and we insist you must see this".[1]
*That* is censorship.
Tom
1. I appreciate that this is the status quo at the moment, I still think it is censorship, and this is why we must address it as a problem.
On 18 October 2011 10:43, Thomas Morton morton.thomas@googlemail.com wrote:
If an individual expresses a preference to hide certain content, it is reasonable for us to provide that option for use at their discretion. Anything else is like saying "No, your views on acceptability are wrong and we insist you must see this".[1] *That* is censorship.
This argument appears to be of the form "black is *actually* just a very dark shade of white, so offering a choice between beige or cream to replace black is entirely acceptable."
- I appreciate that this is the status quo at the moment, I still think it is censorship, and this is why we must address it as a problem.
This is a combination of argument by assertion and the politician's syllogism.
- d.
On 18 October 2011 11:08, David Gerard dgerard@gmail.com wrote:
On 18 October 2011 10:43, Thomas Morton morton.thomas@googlemail.com wrote:
If an individual expresses a preference to hide certain content, it is reasonable for us to provide that option for use at their discretion. Anything else is like saying "No, your views on acceptability are wrong and we insist you must see this".[1] *That* is censorship.
This argument appears to be of the form "black is *actually* just a very dark shade of white, so offering a choice between beige or cream to replace black is entirely acceptable."
Care to expand on that; I've turned it over in my head and don't see the connection :)
What I am saying is that if someone prefers black, let them have black.
- I appreciate that this is the status quo at the moment, I still think it is censorship, and this is why we must address it as a problem.
This is a combination of argument by assertion and the politician's syllogism.
Not really; it is simply staving off the whole argument of "but this is how we currently do things", which isn't always a good viewpoint.
Tom
On 18.10.2011 11:43, Thomas Morton wrote:
Which assumes that they want to, or should, change - and that our approach is better and we are right. These are arrogant assumptions, not at all in keeping with our mission.
I don't assume that. I say that they should have the opportunity to change if they would like to. Hiding controversial content, or providing a button to hide controversial content, is prejudicial. It deepens the viewpoint that this content is objectionable and that this is generally accepted, even if it is not. That means we would be patronizing the readers who have a tendency to enable a filter (not even particularly an image filter).
It is this fallacious logic that underpins our crazy politics of "neutrality" which we attempt to enforce on people (when in practice we lack neutrality almost as much as the next man!).
... and that is exactly what makes me curious about this approach. You assume that we aren't neutral, and Sue described us as on average a little bit geeky, which goes in the same direction. But if we aren't neutral at all, how can we believe that a controversial-content filter system based upon our views would be neutral in judgment or, as proposed in the referendum, "culturally neutral"? (Question: Is there even such a thing as cultural neutrality?)
It's like religion; I am not religious, and if a religious person wants to discuss their beliefs against my lack of them, then great; I find that refreshing and will take the opportunity to try and show them my argument. If they don't want to, I'm not going to force the issue :)
We also don't force anyone to read Wikipedia. If someone does not like it, he has multiple options. He could close it; he could still read it, even if he doesn't like parts of it; he could participate to change it; or he could start his own project.
Sue and Phoebe didn't like the idea of comparing Wikipedia with a library, since we create our own content. But if we can't agree on that, then we should at least be able to agree that Wikipedia is like a huge book, an encyclopedia. If we apply this logic, then you will see the same words repeated over and over again. Pullman said it, and many other authors before his lifetime did as well. Have a look at his short and direct answer to this question:
* http://www.youtube.com/watch?v=HQ3VcbAfd4w
It's like saying: "That is all you have to know. Don't bother with the
rest, it is not good for you."
Actually, no, it's exactly not that. Because we are talking about user-choice filtering. In that context, providing individual filtering tools for each user should not be controversial.
I indirectly answered this already at the top. If we can't be neutral in judgment, and if hiding particular content results in strengthening the positions already present, then even a user-choice filtering system fails at being neutral and non-prejudicial.
I understand that this becomes a problem when we look at offering pre-built "block lists", so that our readers don't have to manually construct their own preferences but can click a few buttons and largely have the experience they desire. So we have this issue of trading usability against potential for abuse; I don't have an immediate solution there, but I think we can come up with one. Although we do quite poorly at handling abuse of process and undermining of content on-wiki at the moment, this could be a unique opportunity to brainstorm wider solutions that have a positive impact everywhere.
We can definitely think about possible solutions. But first I have to insist on an answer to the question: Is there a problem big and worthy enough to make it a main priority?
After that comes the question of (non-neutral) categorization of content. That means: Do we need to label offensive content, or could the same goal be reached without doing this?
In the other mail you said that you have a problem with reading/working on Wikipedia in public, because some things might be offensive to bystanders. One typical, widespread argument. But this problem can easily be solved without the need for categorization. The brainstorming sections are full of simple, non-intrusive solutions for exactly this potential problem.
If an individual expresses a preference to hide certain content, it is reasonable for us to provide that option for use at their discretion.
Anything else is like saying "No, your views on acceptability are wrong and we insist you must see this".[1]
*That* is censorship.
Tom
- I appreciate that this is the status quo at the moment, I still think it is censorship, and this is why we must address it as a problem.
Nobody ever forced him to read the article. He could just choose something else. That isn't censorship at all. Censorship is the hiding of information, not the showing of information someone might not like.
The only way this could be seen as censorship by some is when they don't want to let others see content that "they should not see". This matches exactly the case noted by Joseph Henry Jackson:
“Did you ever hear anyone say, 'That work had better be banned because I might read it and it might be very damaging to me'?”
The only group claiming that not being able to hide something is censorship are the people that want to hide content from others.
nya~
On 18 October 2011 11:56, Tobias Oelgarte tobias.oelgarte@googlemail.com wrote:
I don't assume that. I say that they should have the opportunity to change if they like to.
Absolutely - we do not disagree on this.
Hiding controversial content, or providing a button to hide controversial content, is prejudicial.
I disagree on this, though. There is a balance between encouraging people to question their views (and, yes, even our own!) and giving them no option but to accept our view.
This problem can be addressed via wording related to the filter and avoidance of phrases like "controversial", "problematic" etc.
I disagree very strongly with the notion that providing a button to hide material is prejudicial.
It deepens the viewpoint that this content is objectionable and that this is generally accepted, even if it is not. That means we would be patronizing the readers who have a tendency to enable a filter (not even particularly an image filter).
This is a reasonable objection; and again it goes back to this idea of how far we enforce our world view on readers. I think there are ways a filter could be enabled that improve Wikipedia for our readers (helping neutrality), and equally there are ways it could be enabled that adversely affect this goal.
So if it is done, it needs to be done right.
... and that is exactly what makes me curious about this approach. You assume that we aren't neutral, and Sue described us as on average a little bit geeky, which goes in the same direction.
We are not; over time it is fairly clear that we reflect certain world views. To pluck an example out of thin air - in the 9/11 article there is extremely strong resistance to adding a "see also" link to the article on 9/11 conspiracies. This reflects a certain bias/world view we are imposing. That is an obvious example - there are many more.
The bias is not uniform; we have various biases depending on the subject - and over time those biases can swing back and forth depending on the prevalent group of editors at that time. Many of our articles have distinctly different tone/content/slant to foreign language ones (which is a big giveaway IMO).
Another example: English Wikipedia has a pretty strong policy on BLP material that restricts a lot of what we record - other-language wikis do not have the same restrictions, and things we would not consider noting (such as non-notable children's names) are not considered a problem on other wikis.
But if we aren't neutral at all, how can we believe that a controversial-content filter system based upon our views would be neutral in judgment or, as proposed in the referendum, "culturally neutral"? (Question: Is there even such a thing as cultural neutrality?)
No; this is the underlying problem I mentioned with implementing a filter that offers pre-built lists.
It is a problem to address, but not one that kills the idea stone dead IMO.
We also don't force anyone to read Wikipedia.
Oh come on :) we are a highly visible source of information with millions of inbound links/pointers. No, we don't force anyone to read, but this is not an argument against accommodating as many people as possible.
If he does not like it, he has multiple options. He could close it, he could still read it, even if he don't like any part of it, he could participate to change it or he could start his own project.
And most of those options belie our primary purpose.
We can definitely think about possible solutions. But first I have to insist on an answer to the question: Is there a problem big and worthy enough to make it a main priority?
Absolutely - and the first question I asked in this debate (weeks ago) was when we were going to poll readers for their opinion. This devolved slightly into an argument over whether our readers should have a say in Wikipedia... but the issue still stands - no clear picture has been built.
We are still stuck in our little house....
I doubt it will ever be done; which is why if it comes to a "vote", despite my advocacy here, I will staunchly oppose any filter on grounds of process and poor planning.
I am willing to be pleasantly surprised.
After that comes the question of (non-neutral) categorization of content. That means: Do we need to label offensive content, or could the same goal be reached without doing this?
Well from a practical perspective a self-managed filter is the sensible option.
I think we can do an objective categorisation of things people might not like to see, though. Take nudity: we could have an entirely objective classification for it... just thinking off-hand and imperfectly:
- Incidental nudity (background etc.)
- Partial nudity
- Full nudity
- Full frontal nudity / Close ups
- Sexual acts
And then independently classify articles as "sexuality topic", "physiology topic" & "neither" (with neither being the default). By combining the two classifications you can build a dynamic score of how likely it is that an image of nudity in a given article should be shown, based on the user preference (on the basis that nudity is more expected in sexuality topics).
Then the user would have the option to filter all nudity, or some level of nudity - defaulting to no filter. Plus they could include/remove images from the filter at will.
I hope this rough idea shows how you could approach the difficult topic of classification under objective criteria without allowing article-specific abuse.
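To make the arithmetic concrete, here is a minimal sketch in Python of how such a score could be combined; every name, level and number in it is an illustrative assumption of mine, not a worked-out design:

# Sketch of a graded nudity filter: an image's nudity level is offset
# by the topic of the article it appears in, then compared against the
# reader's own threshold. All values are illustrative assumptions.

NUDITY_LEVELS = {
    "incidental": 1,    # background nudity etc.
    "partial": 2,
    "full": 3,
    "full_frontal": 4,  # incl. close-ups
    "sexual_act": 5,
}

TOPIC_DISCOUNT = {
    "sexuality": 2,   # nudity is more expected here
    "physiology": 1,
    "neither": 0,     # the default classification
}

def effective_score(image_level, article_topic):
    """Dynamic score for an image shown in a given article."""
    return NUDITY_LEVELS[image_level] - TOPIC_DISCOUNT[article_topic]

def show_image(image_level, article_topic, user_threshold=6):
    """Show the image unless its score reaches the reader's threshold.

    The default threshold of 6 exceeds every possible score, so
    'no filter' is the default; a threshold of 1 would collapse
    even incidental nudity outside sexuality/physiology articles.
    """
    return effective_score(image_level, article_topic) < user_threshold

# A reader hiding only explicit material (threshold 4) still sees
# full nudity in a physiology article (score 3 - 1 = 2) ...
assert show_image("full", "physiology", 4)
# ... but a sexual-act image is collapsed in an unrelated article
# (score 5 - 0 = 5).
assert not show_image("sexual_act", "neither", 4)

On top of that, the reader could still add or remove individual images from the filter at will, as suggested above.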
I think part of the problem is constant re-use of the term "offensive" - because this biases us to approach content from the perspective of controversy. But I go back to a previous point that I made; which is that this is about filtering things users don't want to see. This may not be offensive content (although obviously that is the most common example).
Many people have fear of clowns; allowing them to filter out clown images is a good thing and helps make their experience better.
In the other mail you said that you have a problem with reading/working on Wikipedia in public, because some things might be offensive to bystanders. One typical, widespread argument. But this problem can easily be solved without the need for categorization. The brainstorming sections are full of simple, non-intrusive solutions for exactly this potential problem.
That wasn't me; and it was perhaps not a good example anyway.
Although I would love to see a "public" mode I could enable that hides nudity etc. for use in public - nice and simple, without totally degrading my browsing experience.
The only group claiming that not being able to hide something is censorship are the people that want to hide content from others.
Strong accusation. Total nonsense I'm afraid - if you mean it in the way it comes across (i.e. we are pursuing an agenda to try and hide content from people).
You'll notice I have not offered a counter accusation (say, that people arguing the filter would be censoring are people trying to enforce their politics/world view on everyone); perhaps we could be afforded the same courtesy?
Tom
On 18.10.2011 14:00, Thomas Morton wrote:
On 18 October 2011 11:56, Tobias Oelgarte tobias.oelgarte@googlemail.com wrote:
Hiding controversial content, or providing a button to hide controversial content, is prejudicial.
I disagree on this, though. There is a balance between encouraging people to question their views (and, yes, even our own!) and giving them no option but to accept our view.
This problem can be addressed via wording related to the filter and avoidance of phrases like "controversial", "problematic" etc.
I disagree very strongly with the notion that providing a button to hide material is prejudicial.
That comes down to the two layers of judgment involved in this proposal. First we give readers the option to view everything, and we give them the option not to view everything. The problem is that we have to define what "not everything" is. This imposes our judgment on the reader. That means that even if the reader decides to hide some content, it was our (and not his) decision what is hidden.
This leads to two cases:
1. If he does not use the filter, then - as you say - we impose our judgment on the reader.
2. If he does use the filter, then - as I say - we impose our judgment on the reader as well.
Both cases seem to be equal: no win or loss, with or without the filter. But there is a slight difference.
If we treat nothing as objectionable (no filter), then we don't need to play the judge. We say: "We accept anything; it's up to you to judge". If we start to add a "category-based" filter, then we play the judge over our own content. We say: "We accept anything, but this might not be good to look at. Now it is up to you to trust our opinion or not".
The latter imposes our judgment on the reader, while the former makes no judgment at all and leaves everything to the free mind of the reader. ("Free mind" means that the reader has to find his own answer to this question. He might object or he might agree.)
It deepens the viewpoint that this content is objectionable and that this is generally accepted, even if it is not. That means we would be patronizing the readers who have a tendency to enable a filter (not even particularly an image filter).
This is a reasonable objection; and again it goes back to this idea of how far we enforce our world view on readers. I think there are ways a filter could be enabled that improve Wikipedia for our readers (helping neutrality), and equally there are ways it could be enabled that adversely affect this goal.
So if it is done, it needs to be done right.
The big question is: Can it be done right?
A filter that knows only a "yes" or "no" to questions that are influenced by different cultural views seems to fail right away. It draws a sharp line through everything, ignoring the fact that even within one culture there are lots of borderline cases. I did not want to use examples, but I will still give one. Say we have a photograph of a young woman at the beach. How would we handle the case that her swimsuit shows a lot of "naked flesh"? I'm sure more than 90% of Western-country citizens would have no objection to this image if it is inside a corresponding article. But as soon as we go to other cultures, let's say Turkey, we might find very different viewpoints on whether this should be hidden by the filter or not. I remember the question in the referendum about whether the filter should be culturally neutral. Many agreed on this point. But how in God's name should this be done? Especially: How can this be done right?
... and that is exactly what makes me curious about this approach. You assume that we aren't neutral, and Sue described us as on average a little bit geeky, which goes in the same direction.
We are not; over time it is fairly clear that we reflect certain world views. To pluck an example out of thin air - in the 9/11 article there is extremely strong resistance to adding a "see also" link to the article on 9/11 conspiracies. This reflects a certain bias/world view we are imposing. That is an obvious example - there are many more.
The bias is not uniform; we have various biases depending on the subject - and over time those biases can swing back and forth depending on the prevalent group of editors at that time. Many of our articles have distinctly different tone/content/slant to foreign language ones (which is a big giveaway IMO).
Another example: English Wikipedia has a pretty strong policy on BLP material that restricts a lot of what we record - other-language wikis do not have the same restrictions, and things we would not consider noting (such as non-notable children's names) are not considered a problem on other wikis.
But if we aren't neutral at all, how can we believe that a controversial-content filter system based upon our views would be neutral in judgment or, as proposed in the referendum, "culturally neutral"? (Question: Is there even such a thing as cultural neutrality?)
No; this is the underlying problem I mentioned with implementing a filter that offers pre-built lists.
It is a problem to address, but not one that kills the idea stone dead IMO.
I believe that the idea dies the moment we assume that we can achieve neutrality through filtering. Theoretically speaking, there are only three types of neutral filters. The first lets everything through, the second blocks everything, and the third is totally random, resulting in an equal 50:50 chance over large numbers. Currently we would ideally have the first filter. Your examples show that this isn't always true, but at least it is the goal. Filter two would amount to showing nothing, or shutting down Wikipedia - not a real option, I know. The third option is a theoretical construct that would not work, since it delivers an infinite amount of information, but also nothing at all.
Considering these cases, we can assume that Wikipedia isn't neutral, but that it aims for option 1. But we can also see that no other solution could be neutral. It is an impossible task to begin with. No filter could fix such a problem.
We also don't force anyone to read Wikipedia.
Oh come on :) we are a highly visible source of information with millions of inbound links/pointers. No we don't force anyone to read, but this is not an argument against accommodating as many people as possible.
No, it isn't an argument against this. Accommodating as many people as possible was never the goal of the project. The goal was to create and present free knowledge to everyone. That we have so many readers can make us proud. But did they come to us because we had the goal of accommodating them? They come to us to take and share the knowledge that is presented. If we had really wanted to accommodate people in the first place, then we should have created something more entertaining than an encyclopedia.
The whole problem starts with the intention to spread our knowledge to more people than we currently reach, faster than necessary. The trouble is that we leave behind what made this project what it is. We have a mission, but it is not the mission to entertain as many people as possible. It is not to gain as much money through donors as possible.
If he does not like it, he has multiple options. He could close it, he could still read it, even if he don't like any part of it, he could participate to change it or he could start his own project.
And most of those options belie our primary purpose.
It isn't our purpose to please readers by presenting only the knowledge they would like to hear about. We hold the knowledge that someone might want to read/see; he has to play the active part. Jimbo's typical expansion dreams ("world invasion", a reference to Ika) are quite contradictory to that.
We can definitely think about possible solutions. But first I have to insist on an answer to the question: Is there a problem big and worthy enough to make it a main priority?
Absolutely - and the first question I asked in this debate (weeks ago) was when we were going to poll readers for their opinion. This devolved slightly into an argument over whether our readers should have a say in Wikipedia... but the issue still stands - no clear picture has been built.
We are still stuck in our little house....
I doubt it will ever be done; which is why if it comes to a "vote", despite my advocacy here, I will staunchly oppose any filter on grounds of process and poor planning.
I am willing to be pleasantly surprised.
That's why I was so interested in the raw data from the referendum (votes/results per language). It could at least have answered some basic questions (Who sees the need? ...). Now more than two months have passed since I was promised multiple times that this data would be released. Nothing ever happened.
* http://meta.wikimedia.org/wiki/User_talk:Philippe#Personal_image_filter
After that comes the question of (non-neutral) categorization of content. That means: Do we need to label offensive content, or could the same goal be reached without doing this?
Well from a practical perspective a self-managed filter is the sensible option.
I think we can do an objective categorisation of things people might not like to see, though. Take nudity: we could have an entirely objective classification for it... just thinking off-hand and imperfectly:
- Incidental nudity (background etc.)
- Partial nudity
- Full nudity
- Full frontal nudity / Close ups
- Sexual acts
And then independently classify articles as "sexuality topic", "physiology topic" & "neither" (with neither being the default). By combining the two classifications you can build a dynamic score of how likely it is that an image of nudity in a given article should be shown, based on the user preference (on the basis that nudity is more expected in sexuality topics).
Then the user would have the option to filter all nudity, or some level of nudity - defaulting to no filter. Plus they could include/remove images from the filter at will.
I hope this rough idea shows how you could approach the difficult topic of classification under an objective criteria without allowing article-specific abuse.
I think part of the problem is constant re-use of the term "offensive" - because this biases us to approach content from the perspective of controversy. But I go back to a previous point that I made; which is that this is about filtering things users don't want to see. This may not be offensive content (although obviously that is the most common example).
Many people have fear of clowns; allowing them to filter out clown images is a good thing and helps make their experience better.
We already discussed this distinction at length. Either it was going to be hundreds of categories, which are manageable neither by us nor by the reader, or it would come down to a few very vague categories with no sharp lines at all. That means: if we want to do it right, it will result in an effort we can't afford.
In the other mail you said that you have a problem with reading/working on Wikipedia in public, because some things might be offensive to bystanders. One typical, widespread argument. But this problem can easily be solved without the need for categorization. The brainstorming sections are full of simple, non-intrusive solutions for exactly this potential problem.
That wasn't me; and it was perhaps not a good example anyway.
Although I would love to see a "public" mode I could enable that hides nudity etc. for use in public - nice and simple, without totally degrading my browsing experience.
The only group claiming that not being able to hide something is censorship are the people that want to hide content from others.
Strong accusation. Total nonsense I'm afraid - if you mean it in the way it comes across (i.e. we are pursuing an agenda to try and hide content from people).
You'll notice I have not offered a counter accusation (say, that people arguing the filter would be censoring are people trying to enforce their politics/world view on everyone); perhaps we could be afforded the same courtesy?
Tom
This is a very old story. I guess we don't need to argue over it again. My viewpoint on this matter is also very fixed, which would only make things harder than they already are.
nya~
That comes down to the two layers of judgment involved in this proposal. First we give readers the option to view everything, and we give them the option not to view everything. The problem is that we have to define what "not everything" is. This imposes our judgment on the reader. That means that even if the reader decides to hide some content, it was our (and not his) decision what is hidden.
No; because the core functionality of a filter should always present the choice "do you want to see this image or not". Which is specifically not imposing our judgement on the reader :) Whether we then place some optional preset filters for the readers to use is certainly a matter of discussion - but nothing I have seen argues against this core idea.
If we treat nothing as objectionable (no filter), then we don't need to play the judge. We say: "We accept anything; it's up to you to judge". If we start to add a "category-based" filter, then we play the judge over our own content. We say: "We accept anything, but this might not be good to look at. Now it is up to you to trust our opinion or not".
Implementing a graded filter - one which lets you set grades of visibility rather than a simple off/on - addresses this concern, because once again it gives the reader ultimate control over the question of what they want to see. If they are seeing "too much" for their preference they can tighten the setting, and vice versa.
The latter imposes our judgment on the reader, while the former makes no judgment at all and leaves everything to the free mind of the reader. ("Free mind" means that the reader has to find his own answer to this question. He might object or he might agree.)
And if he objects, we are then just ignoring him?
I disagree with your argument; both points are imposing our judgement on the reader.
A filter that knows only a "yes" or "no" to questions that are influenced by different cultural views seems to fail right away. It draws a sharp line through everything, ignoring the fact that even within one culture there are lots of borderline cases. I did not want to use examples, but I will still give one. Say we have a photograph of a young woman at the beach. How would we handle the case that her swimsuit shows a lot of "naked flesh"? I'm sure more than 90% of Western-country citizens would have no objection to this image if it is inside a corresponding article. But as soon as we go to other cultures, let's say Turkey, we might find very different viewpoints on whether this should be hidden by the filter or not.
Agreed; which is why we allow people to filter based on a sliding scale, rather than a discrete yes or no. So someone who has no objection to such an image, but wants to hide people having sex can do so. And someone who wants to hide that image can have a stricter grade on the filter.
If nothing else the latter case is the more important one to address, because sexual images are largely tied to sexual subjects, and any reasonable person should expect those images to appear there. But if culturally you object to seeing people in swimwear, such images could be found in almost any article.
We shouldn't judge those cultural objections as invalid. Equally we shouldn't endorse them as valid. There is a balance somewhere between those two extremes.
I remember the question in the referendum about whether the filter should be culturally neutral. Many agreed on this point. But how in God's name should this be done? Especially: How can this be done right?
I suggested a way in which we could cover a broad spectrum of views on one key subject without setting discrete categories of visibility.
I believe that the idea dies the moment we assume that we can achieve neutrality through filtering. Theoretically speaking, there are only three types of neutral filters. The first lets everything through, the second blocks everything, and the third is totally random, resulting in an equal 50:50 chance over large numbers. Currently we would ideally have the first filter. Your examples show that this isn't always true, but at least it is the goal. Filter two would amount to showing nothing, or shutting down Wikipedia - not a real option, I know. The third option is a theoretical construct that would not work, since it delivers an infinite amount of information, but also nothing at all.
What about a fourth type: one that gives you extensive options to filter out (or, better described, to collapse) content from the initial view, according to your specific preferences?
This is a technical challenge, but in no way unachievable.
I made an analogy before that some people might prefer to surf Wikipedia with plot summaries collapsed (I would be one of them!). In a perfect world we would have the option to collapse *any* section in a Wikipedia article and have that option stored. Over time the software would notice I was collapsing plot summaries and so intelligently collapse summaries on newly visited pages for me. Plus there might even be an option in preferences saying "collapse plot summaries", because it's recognised as a common desire.
In this scenario we keep all of the knowledge present, but optionally hide some aspects of it until the reader pro-actively accesses it. Good stuff.
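As a very rough sketch of that learning behaviour - every name and the threshold below are assumptions of mine, since nothing like this exists in the software today - the core logic could be as small as this:

# Sketch of the "learn my collapsing habits" idea: count manual
# collapses per section type and pre-collapse once a habit is clear.
# The content stays on the page; it is only collapsed until clicked.
from collections import Counter

class CollapsePreferences:
    def __init__(self, threshold=5):
        self.collapse_counts = Counter()  # e.g. {"plot_summary": 6}
        self.threshold = threshold        # manual collapses before we act

    def record_collapse(self, section_type):
        """Called whenever the reader manually collapses a section."""
        self.collapse_counts[section_type] += 1

    def should_precollapse(self, section_type):
        """Pre-collapse this section type on newly visited pages?"""
        return self.collapse_counts[section_type] >= self.threshold

prefs = CollapsePreferences()
for _ in range(5):
    prefs.record_collapse("plot_summary")

assert prefs.should_precollapse("plot_summary")    # habit learned
assert not prefs.should_precollapse("references")  # everything else stays open

An explicit "collapse plot summaries" checkbox in preferences would then just be a direct way of switching the same behaviour on.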
Considering these cases, we can assume that Wikipedia isn't neutral, but that it aims for option 1.
That's a somewhat rudimentary way of putting it... it's not so much about showing/hiding information, but again about grades of how information is presented. You can take a fact and present it in many different ways in prose, depending on the bias being exhibited. This is demonstrated across the language Wikipedias.
But we can also see that no other solution could be neutral. It is an impossible task to begin with. No filter could fix such a problem.
Well, there could be... merge language/subject Wiki content together intelligently to represent all biases and filter against each other.
Again, a technical challenge.
No, it isn't an argument against this. Accommodating as many people as possible was never the goal of the project. The goal was to create and present free knowledge to everyone.
Agreed; and if we are inhibiting that by showing images that put people off reading the content.... That is against our goals surely :)
Of course - this has not been examined... so while I make this argument I can't support it (and it can't really be discarded either). Hence we need to ask.
The whole problem starts with the intention to spread our knowledge to more people than we currently reach, faster than necessary.
That we might not be reaching certain people due to a potentially fixable problem is certainly something we can/should address :)
We have a mission, but it is not the mission to entertain as many people as possible. It is not to gain as much money through donors as possible.
Is this a language barrier? Do you mean entertain in the context of having them visit us, or in the context of them having a fun and enjoyable time?
Because in the latter case - of course you are right. I don't see the relevance though because this isn't about entertaining people, just making material accessible.
It isn't our purpose to please readers by presenting only the knowledge they would like to hear about.
Yeah, this is a finicky area to think about... because although we ostensibly report facts, we also record opinions on those facts. Conceivably a conservative reading a topic would prefer to see more conservative opinion on that topic, and a liberal more liberal opinion.
OK, so we have forks that cover this situation - but often they are of poor quality and present the facts in a biased way. In an ideal future world I see us maintaining a core, neutral and broad article that could be extended per reader preference with more commentary from their political/religious/career/interest spectrum.
The point is to inform, after all.
Tom
On 18.10.2011 17:23, Thomas Morton wrote:
That comes down to the two layers of judgment involved in this proposal. First we give readers the option to view everything, and we give them the option not to view everything. The problem is that we have to define what "not everything" is. This imposes our judgment on the reader. That means that even if the reader decides to hide some content, it was our (and not his) decision what is hidden.
No; because the core functionality of a filter should always present the choice "do you want to see this image or not". Which is specifically not imposing our judgement on the reader :) Whether we then place some optional preset filters for the readers to use is certainly a matter of discussion - but nothing I have seen argues against this core idea.
Yes; because even the provision of a filter implies that some content is seen as objectionable and is treated differently from other content. This is only unproblematic as long as we don't provide default settings, a.k.a. categories, which introduce our judgment to the readership. The mere fact that our judgment is visible is already enough to manipulate the reader in what to see as objectionable or not. This scenario is very much comparable to the unknown man who sits behind you, glancing randomly at your screen while you want to inform yourself. Just the thought that someone else could be upset is already an issue. Having us directly show/indicate what we think of as objectionable "to others" is even stronger.
If we treat nothing as objectionable (no filter), then we don't need to play the judge. We say: "We accept anything; it's up to you to judge". If we start to add a "category-based" filter, then we play the judge over our own content. We say: "We accept anything, but this might not be good to look at. Now it is up to you to trust our opinion or not".
Implementing a graded filter - one which lets you set grades of visibility rather than a simple off/on - addresses this concern, because once again it gives the reader ultimate control over the question of what they want to see. If they are seeing "too much" for their preference they can tighten the setting, and vice versa.
This would imply that we, the ones who are unable to handle content neutrally, would be perfect at categorizing images by fine degrees of nudity. But even having multiple steps would not be a satisfying solution. There are many cultural regions which differentiate strongly between men and women. While they would have no problem seeing a man in just his boxer shorts, it would be seen as offensive to show a woman with open hair. I wonder what effort it would take to accomplish this goal (if it is even possible), compared to the benefits.
The latter imposes our judgment on the reader, while the former makes no judgment at all and leaves everything to the free mind of the reader. ("Free mind" means that the reader has to find his own answer to this question. He might object or he might agree.)
And if he objects, we are then just ignoring him?
I disagree with your argument; both points are imposing our judgement on the reader.
If _we_ do the categorization, then we impose our judgment, since it was us who made the decision. It is not a customized filter where the user decides what is best for himself. Showing everything might not be ideal for all readers. Hiding more than preferred might also not be ideal for all readers. Hiding less than preferred is just another non-ideal case. We can't meet everyone's taste, just as no book can meet everyone's taste. While Harry Potter seems to be fine in many cultures, in some there might be parts that are seen as offensive. Would you hide/rewrite parts of Harry Potter to make them all happy, or would you go after the majority of the market and ignore the rest?
There is one simple way to deal with it. If someone does not like our content, then he doesn't need to use it. If someone does not like the content of a book, he does not need to buy it. He can complain about it. That's what Philip Pullman meant with: "No one has the right to life without being shocked".
Agreed; which is why we allow people to filter based on a sliding scale, rather than a discrete yes or no. So someone who has no objection to such an image, but wants to hide people having sex can do so. And someone who wants to hide that image can have a stricter grade on the filter.
If nothing else the latter case is the more important one to address, because sexual images are largely tied to sexual subjects, and any reasonable person should expect those images to appear there. But if culturally you object to seeing people in swimwear, such images could be found in almost any article.
We shouldn't judge those cultural objections as invalid. Equally we shouldn't endorse them as valid. There is a balance somewhere between those two extremes.
Yes, there is a balance between the two extremes. But whoever said that the center between two opinions is seen as a valid option by both parties? If that were the case, and if it worked in practice, then we wouldn't have problems like the one in Israel. There, everyone has a viewpoint, but neither party is willing to agree on a middle ground. Both have very different perspectives on what a middle ground should look like. This applies at large scale to situations like Israel, and it also applies to small things like a single line of text or an image.
The result is simple: neither side is happy with a balance. Every side has its point of view, and they won't back down. As a result we get the so-called second battlefield, alongside the articles themselves. As soon as we start to categorize, it will happen, and I'm sure that even you would shake your head as you see those differences colliding with each other. The battles inside articles can be described as the mild ones; there we have arguments and sources. How many sources do our images have? What would you cite as the basis for your argumentation?
I suggested a way in which we could cover a broad spectrum of views on one key subject without setting discrete categories of visibility.
As explained above, this will be a very, very hard job. Even for the simplest subject, "sexuality", you will need more than one scale to measure content against. Other topics, like religious or cultural ones, will be an even harder job.
I believe that the idea dies the moment we assume that we can achieve neutrality through filtering. Theoretically speaking, there are only three types of neutral filters. The first lets everything through, the second blocks everything, and the third is totally random, resulting in an equal 50:50 chance over large numbers. Currently we would ideally have the first filter. Your examples show that this isn't always true, but at least it is the goal. Filter two would amount to showing nothing, or shutting down Wikipedia - not a real option, I know. The third option is a theoretical construct that would not work, since it delivers an infinite amount of information, but also nothing at all.
What about a fourth type: one that gives you extensive options to filter out (or, better described, to collapse) content from the initial view, according to your specific preferences?
This is a technical challenge, but in no way unachievable.
This is not so much a technical challenge as a new, additional challenge for the authors. A new burden, if you will. The finer the categorization, the more effort you will need to put into it, and the more exceptions have to be made. Technically you could support thousands of categories with different degrees. But what can be managed by our authors, and what by the readers? At which point is the error made by us (we are humans, and computers can't judge images) bigger than the thin lines we draw?
In technical theory it sounds nice and handy, but in practice we also have to consider effort vs. result. I'm strongly confident that the effort would not justify the result, even if we ignore side effects like third-party filtering based upon our categories, which would remove the options from the user.
I made an analogy before that some people might prefer to surf Wikipedia with plot summaries collapsed (I would be one of them!). In a perfect world we would have the option to collapse *any* section in a Wikipedia article and have that option stored. Over time the software would notice I was collapsing plot summaries and, so, intelligently collapse summaries on newly visited pages for me. Plus there might even be an option in preferences saying "collapse plot summaries" because it's recognised as a common desire.
In this scenario we keep all of the knowledge present, but optionally hide some aspects of it until the reader pro-actively accesses it. Good stuff.
That would be a solution. But it would not imply any categorization by ourselves, since the software on the servers would find out what to do. This already works pretty well for simple things like text. Images are a much bigger problem, which can't simply be handed down to a program, since no program at the current time would be able to do this. So we are back again at: effort vs. result + gathering of private user data + works only opt-in with an account.
I removed some paragraphs below, since they all come down to the effort-vs.-result problem. Additionally, we have no way to implement a system like this at the moment. That is something for the future.
The whole problem starts with the intention to spread our knowledge to more people than we currently reach, faster than necessary.
That we might not be reaching certain people due to a potentially fixable problem is certainly something we can/should address :)
Yes, we should address it. But we should also start to think about options other than hiding content. There are definitely better and more effective solutions than this quick fix called the "image filter".
We have a mission, but it is not the mission to entertain as many people as possible. It is not to gain as much money through donors as possible.
Is this a language barrier? Do you mean entertain in the context of having them visit us, or in the context of them having a fun and enjoyable time?
Because in the latter case - of course you are right. I don't see the relevance though because this isn't about entertaining people, just making material accessible.
With "entertain" i meant this: Providing them only with content that will please their mind, causing no bad thoughts or surprise to learn something new or very different.
It isn't our purpose to please readers by presenting only the knowledge they would like to hear about.
Yeah, this is a finicky area to think about... because although we ostensibly report facts, we also record opinions on those facts. Conceivably a conservative reading a topic would prefer to see more conservative opinion on that topic, and a liberal more liberal opinion.
OK, so we have forks that cover this situation - but often they are of poor quality and present the facts in a biased way. In an ideal future world I see us maintaining a core, neutral and broad article that could be extended per reader preference with more commentary from their political/religious/career/interest spectrum.
The point is to inform, after all.
Tom
That is kind of another "drawing the line" case. To be neutral we should represent both (or more) points of view. But showing the reader only what he wants to read is not real knowledge. Real knowledge is gained by looking across borders, not by building our own temples/territories with huge walls around them to trap or bait an otherwise free mind. It is always good to have an opposition that thinks differently. Making them all happy by dividing them into two (or more) territories, while differences remain or grow, will not help either of them, especially if you can't draw a clean line.
nya~
This is only unproblematic as long as we don't provide default settings, a.k.a. categories, which introduce our judgment to the readership. The mere fact that our judgment is visible is already enough to manipulate the reader in what to see as objectionable or not. This scenario is very much comparable to the unknown man who sits behind you, glancing randomly at your screen while you want to inform yourself. Just the thought that someone else could be upset is already an issue. Having us directly show/indicate what we think of as objectionable "to others" is even stronger.
I guess we just sit on opposite sides on this point; I think that a broad but clear categorisation, with a slider control to set how much or how little you wish to see, is perfectly fine.
It is uncontroversial that some people find nudity inconvenient or objectionable. I see no issue in considering that a filter area.
This would imply that we, the ones who are unable to handle content neutrally, would be perfect at categorizing images by fine degrees of nudity. But even having multiple steps would not be a satisfying solution. There are many cultural regions which differentiate strongly between men and women.
By using broad strokes that disregard gender we address this concern - sure, it may be somewhat imperfect for people who specifically don't want to see bare-armed women, because it would end up blocking similarly attired men. But it is better than the situation we have.
We can't meet everyone's taste, just as no book can meet everyone's taste.
True; but we can try to improve things.
While Harry Potter seems to be fine in many cultures, in some there might be parts that are seen as offensive. Would you hide/rewrite parts of Harry Potter to make them all happy, or would you go after the majority of the market and ignore the rest?
I'm not sure of the relevance; HP is a commercial product with a distinctly different aim and market from ours. They go after the core market because it makes commercial sense; we are not limited in this way.
There is one simple way to deal with it. If someone does not like our content, then he doesn't need to use it. If someone does not like the content of a book, he does not need to buy it.
I find this a non-optimal and very bad solution.
I suggested a way in which we could cover a broad spectrum of views on one key subject without setting discrete categories of visibility.
As explained above, this will be a very, very hard job to do. Even for the simplest subject, "sexuality", you will need more than one scale to measure content against. Other topics, like religious or cultural ones, will be even harder.
Not really; one scale would do nicely and cover most of the use cases.
That is kind of another "drawing the line" case. To be neutral we should represent both (or more) points of view.
No; this is not neutrality (this is my bugbear, because it is the underlying reason we are not neutral, and have trouble appraising neutrality in content).
But showing the reader only what he wants to read is not real knowledge.
This really comes back to my argument about our views and biases. If you read a topic, you obviously want to know the view of it held by people you agree with.
Now, I agree that throwing differing views into the mix can be useful, or give you another viewpoint. But you are still predominantly interested in a viewpoint that you consider accurate and compelling.
*There is nothing wrong with this.*
Presenting two parallel views with the aim of bouncing them off each other to impart knowledge is also not neutral.
Tom
From: Tobias Oelgarte tobias.oelgarte@googlemail.com
Am 18.10.2011 11:43, schrieb Thomas Morton:
It is this fallacious logic that underpins our crazy politics of "neutrality" which we attempt to enforce on people (when in practice we lack neutrality almost as much as the next man!).
... and that is exactly what makes me curious about this approach. You assume that we aren't neutral, and Sue described us as, on median, a little bit geeky, which goes in the same direction. But if we aren't neutral at all, how can we even believe that a controversial-content filter system based upon our views would be neutral in judgment or, as proposed in the referendum, "culturally neutral"? (Question: Is there even such a thing as cultural neutrality?)
Who said that the personal image filter function should be based on *our* judgment? It shouldn't.
As Wikipedians, we are used to working from sources. In deciding what content to include, we look at high-quality, educational sources, and try to reflect them fairly.
Now, given that we are a top-10 website, why should it not make sense to look at what other large websites like Google, Bing, and Yahoo allow the user to filter, and what media Flickr and YouTube require opt-ins for? Why should we not take our cues from them? The situation seems quite analogous.
As the only major website *not* to offer users a filter, we have more in common with 4chan than the mainstream. Any abstract discussion of neutrality that neglects to address this fundamental point misses the mark. Our present approach is not neutral by our own definition of neutrality; it owes more to Internet culture than to the sources we cite.
Another important point that Thomas made is that any filter set-up should use objective criteria, rather than criteria based on offensiveness. We should not make a value judgment, we should simply offer users the browsing choices they are used to in mainstream sites.
Best, Andreas
You said that we should learn from Google and other top websites, but at the same time you want to introduce objective criteria, which none of these websites use? You also compare Wikipedia with an image board like 4chan? You want the readers to define what they want to see. That means they would play the judge, and the majority would win. But this is in contrast to the proposal that the filter should work with objective criteria.
Could you please crosscheck your own comment and tell me what kind of solution you have in mind? Currently it is a mix of very different approaches that don't fit together.
nya~
On Tue, Oct 18, 2011 at 8:09 PM, Tobias Oelgarte <tobias.oelgarte@googlemail.com> wrote:
You said that we should learn from Google and other top websites, but at the same time you want to introduce objective criteria, which none of these websites use?
What I mean is that we should not classify media as offensive, but in terms such as "photographic depictions of real-life sex and masturbation", "images of Muhammad". If someone feels strongly that they do not want to see these by default, they should not have to. In terms of what areas to cover, we can look at what people like Google do (e.g. by comparing "moderate safe search" and "safe search off" results), and at what our readers request.
You also compare Wikipedia with an image board like 4chan? You want the readers to define what they want to see. That means they would play the judge, and the majority would win. But this is in contrast to the proposal that the filter should work with objective criteria.
I do not see this as the majority winning, and a minority losing. I see it as everyone winning -- those who do not want to be confronted with whatever media don't have to be, and those who want to see them can.
Could you please crosscheck your own comment and tell me what kind of solution you have in mind? Currently it is a mix of very different approaches that don't fit together.
My mind is not made up; we are still in a brainstorming phase. Of the alternatives presented so far, I like the opt-in version of Neitram's proposal best:
http://meta.wikimedia.org/wiki/Controversial_content/Brainstorming#thumb.2Fh...
If something better were proposed, my views might change.
Best, Andreas
Am 18.10.2011 23:20, schrieb Andreas K.:
On Tue, Oct 18, 2011 at 8:09 PM, Tobias Oelgarte <tobias.oelgarte@googlemail.com> wrote:
You said that we should learn from Google and other top websites, but at the same time you want to introduce objective criteria, which none of these websites use?
What I mean is that we should not classify media as offensive, but in terms such as "photographic depictions of real-life sex and masturbation", "images of Muhammad". If someone feels strongly that they do not want to see these by default, they should not have to. In terms of what areas to cover, we can look at what people like Google do (e.g. by comparing "moderate safe search" and "safe search off" results), and at what our readers request.
The problem is that we never asked our readers before the whole thing was already running wild. It is really time to ask how the readers feel. That would mean asking readers in very different regions, to get a good overview of this topic. What Google and other commercial groups do shouldn't be a reference for us. They serve their core audience and ignore the rest, since their aim is profit, and only profit, no matter what "good reasons" they present. We are quite an exception to them, not in popularity but in concept. If we take the example of "futanari", then we can surely agree that quite a lot of people would be surprised, especially if "safe search" is on. But now we have to ask: why is that? Why does it work so well for other terms, more common to a western audience?
You also compare Wikipedia with an image board like 4chan? You want the readers to define what they want to see. That means they would play the judge, and the majority would win. But this is in contrast to the proposal that the filter should work with objective criteria.
I do not see this as the majority winning, and a minority losing. I see it as everyone winning -- those who do not want to be confronted with whatever media don't have to be, and those who want to see them can.
I guess you missed the point that a minority of offended people would just be ignored. Looking at the goal and Ting's examples, we would just strengthen the current position (the western majority and its point of view) while doing little to nothing in the areas that were the main concern, or at least the strong argument for starting the process. If it really comes down to the point that a majority does not find Muhammad caricatures offensive and it "wins", then we have no solution.
Could you please crosscheck your own comment and tell me what kind of solution you have in mind? Currently it is a mix of very different approaches that don't fit together.
My mind is not made up; we are still in a brainstorming phase. Of the alternatives presented so far, I like the opt-in version of Neitram's proposal best:
http://meta.wikimedia.org/wiki/Controversial_content/Brainstorming#thumb.2Fh...
If something better were proposed, my views might change.
Best, Andreas
I read this proposal, and on second thought I can't see a real difference. It is good that the decision stays tied to the topic and is not separated out, as in the first proposals. But it leaves a bad taste nonetheless. We would directly deliver the tags that third parties (ISPs, local networks, institutions) need to remove content, no matter whether the reader chooses to view the image or not, and we would still be in charge of declaring what might be or is offensive to others, forcing our judgment onto the users of the feature.
Overall it follows a good intention, but I'm very concerned about the side effects, which just make me say "no way" to this proposal as it is.
nya~
On Tue, Oct 18, 2011 at 11:10 PM, Tobias Oelgarte <tobias.oelgarte@googlemail.com> wrote:
The problem is that we never asked our readers before the whole thing was already running wild. It is really time to ask how the readers feel. That would mean asking readers in very different regions, to get a good overview of this topic.
I agree with you here, and in fact said so months ago. We should have surveyed our readership (as well), rather than (just) our editorship.
What Google and other commercial groups do shouldn't be a reference for us. They serve their core audience and ignore the rest, since their aim is profit, and only profit, no matter what "good reasons" they present. We are quite an exception to them, not in popularity but in concept. If we take the example of "futanari", then we can surely agree that quite a lot of people would be surprised, especially if "safe search" is on. But now we have to ask: why is that? Why does it work so well for other terms, more common to a western audience?
I think we addressed this example previously.
I do not see this as the majority winning, and a minority losing. I see it as everyone winning -- those who do not want to be confronted with whatever media don't have to be, and those who want to see them can.
I guess you missed the point that a minority of offended people would just be ignored. Looking at the goal and Ting's examples, we would just strengthen the current position (the western majority and its point of view) while doing little to nothing in the areas that were the main concern, or at least the strong argument for starting the process. If it really comes down to the point that a majority does not find Muhammad caricatures offensive and it "wins", then we have no solution.
I am all in favour of taking minority concerns on board. Specifically that Muhammad images should be filterable; no question. The point is that the more disparate filter wishes we accommodate, the more filter attributes will be necessary, which is something that worries other editors. I haven't really made my mind up on this one.
My mind is not made up; we are still in a brainstorming phase. Of the alternatives presented so far, I like the opt-in version of Neitram's proposal best:
http://meta.wikimedia.org/wiki/Controversial_content/Brainstorming#thumb.2Fh...
I read this proposal, and on second thought I can't see a real difference. It is good that the decision stays tied to the topic and is not separated out, as in the first proposals. But it leaves a bad taste nonetheless. We would directly deliver the tags that third parties (ISPs, local networks, institutions) need to remove content, no matter whether the reader chooses to view the image or not, and we would still be in charge of declaring what might be or is offensive to others, forcing our judgment onto the users of the feature.
Overall it follows a good intention, but I'm very concerned about the side effects, which just make me say "no way" to this proposal as it is.
The community will always be in charge, one way or the other. I think that's unavoidable. And to many people, this is the exact opposite of a bad thing: it's an absolute *must* that the community should be in charge, and understandably so, as they are the ones doing the work. I disagree though that it necessarily must mean that the community declares what is offensive to others.
We, as a community, can *listen* to what people are telling us, and take their concerns on board.
I have no problem adding a filter attribute to a file that a reader tells me offends him, and which he wishes to be able to filter out, even if I think it is a perfectly fine image.
Nor would I feel the need to impose my view on them the other way round, telling them they should just grow a thicker skin and get used to the image.
Andreas
Andreas Kolbe wrote:
Now, given that we are a top-10 website, why should it not make sense to look at what other large websites like Google, Bing, and Yahoo allow the user to filter, and what media Flickr and YouTube require opt-ins for? Why should we not take our cues from them? The situation seems quite analogous.
Again, those websites are commercial endeavors whose decisions are based on profitability, not an obligation to maintain neutrality (a core element of most WMF projects). These services can cater to the revenue-driving majorities (with geographic segregation, if need be) and ignore minorities whose beliefs fall outside the "mainstream" for a given country.
This probably works fairly well for them; most users are satisfied, with the rest too fragmented to be accommodated in a cost-effective manner. Revenues are maximized. Mission accomplished.
The WMF projects' missions are dramatically different. For most, neutrality is a nonnegotiable principle. To provide an optional filter for one image type and not another is to formally validate the former objection and not the latter. That's unacceptable.
David Levy
On Tue, Oct 18, 2011 at 9:17 PM, David Levy lifeisunfair@gmail.com wrote:
Again, those websites are commercial endeavors whose decisions are based on profitability, not an obligation to maintain neutrality (a core element of most WMF projects). These services can cater to the revenue-driving majorities (with geographic segregation, if need be) and ignore minorities whose beliefs fall outside the "mainstream" for a given country.
This probably works fairly well for them; most users are satisfied, with the rest too fragmented to be accommodated in a cost-effective manner. Revenues are maximized. Mission accomplished.
Satisfying most users is a laudable aim for any service provider, whether revenue is involved or not. Why should we not aim to satisfy most of our users, or appeal to as many potential users as possible?
The WMF projects' missions are dramatically different. For most, neutrality is a nonnegotiable principle. To provide an optional filter for one image type and not another is to formally validate the former objection and not the latter. That's unacceptable.
This goes back to our fundamental disagreement about what neutrality means. You give it your own definition, which, as I understand you, means refraining from making judgments. But that is not how we work. We constantly apply judgment, based on the judgment of reliable sources.
We constantly discriminate.
We say, This is unsourced; it may be true, but you can't have it in the article.
We say, This is interesting, but it is synthesis, or original research, and you can't have it in the article.
We say, This is a self-published source, it does not have an editorial staff, therefore it is not reliable.
By doing so, we are constantly empowering the judgment of the professional, commercial outfits who produce what we term reliable sources.
If this is unacceptable to you, do you also object to our sourcing policies and guidelines?
Andreas
* Andreas K. wrote:
Satisfying most users is a laudable aim for any service provider, whether revenue is involved or not. Why should we not aim to satisfy most of our users, or appeal to as many potential users as possible?
Many Wikipedians would disagree that they or Wikipedia as a whole is a "service provider". The first sentence on the German-language version, for instance, is "Wikipedia ist ein Projekt zum Aufbau einer Enzyklopädie aus freien Inhalten in allen Sprachen der Welt" ("Wikipedia is a project to build an encyclopedia of free content in all the languages of the world"). That's about creating something, not about providing some service to others, much less trying to satisfy most people who might wish to be served.
I invite you to have a look at http://katograph.appspot.com/ which shows the category system of the German Wikipedia at the end of 2009, with information about how many articles can be found under them and the number of views of articles in each category over a three-day period. You will find, for instance, that there are many more articles on buildings than on movies, many times more, but articles on movies get more views in total.
On Wed, Oct 19, 2011 at 1:17 AM, Bjoern Hoehrmann derhoermi@gmx.net wrote:
Many Wikipedians would disagree that they or Wikipedia as a whole is a "service provider". The first sentence on the German-language version, for instance, is "Wikipedia ist ein Projekt zum Aufbau einer Enzyklopädie aus freien Inhalten in allen Sprachen der Welt" ("Wikipedia is a project to build an encyclopedia of free content in all the languages of the world"). That's about creating something, not about providing some service to others, much less trying to satisfy most people who might wish to be served.
I see our vision and mission as entirely service-focused. We are not doing this for our own amusement:
Vision: Imagine a world in which every single human being can freely share in the sum of all knowledge. That's our commitment.
Mission: The mission of the Wikimedia Foundation is to empower and engage people around the world to collect and develop educational content under a free license or in the public domain, and to disseminate it effectively and globally.
Values: An essential part of the Wikimedia Foundation's mission is encouraging the development of free-content educational resources that may be created, used, and reused by the entire human community.
It's about providing a service to the entire human community. Quality is defined by the recipient of a service, not the producer.
I invite you to have a look at http://katograph.appspot.com/ which shows the category system of the German Wikipedia at the end of 2009, with information about how many articles can be found under them and the number of views of articles in each category over a three-day period. You will find, for instance, that there are many more articles on buildings than on movies, many times more, but articles on movies get more views in total.
That's a fascinating piece of work. :) If I understand it correctly, the colour of each rectangle reflects the average number of page views per article in this category (blue = low, orange = high), and the area of each rectangle reflects the number of articles in that category. What do the dropdown menus do? I can't figure them out. Do you have an FAQ for this application?
Best, Andreas
* Andreas K. wrote:
I see our vision and mission as entirely service-focused. We are not doing this for our own amusement:
You are talking about the Wikimedia Foundation while I was talking about Wikipedians. I certainly "do this" for my own amusement, not to satisfy.
That's a fascinating piece of work. :) If I understand it correctly, the colour of each rectangle reflects the average number of page views per article in this category (blue = low, orange = high), and the area of each rectangle reflects the number of articles in that category. What do the dropdown menus do? I can't figure them out. Do you have an FAQ for this application?
http://lists.wikimedia.org/pipermail/wikide-l/2010-January/022758.html has some additional information. By default, the rectangles are sized according to the number of articles in the category and coloured by the median number of requests per article in the category. So a very big rectangle with a cold colour indicates there are many articles under it that nobody reads, while small rectangles with a warm colour indicate categories with few articles that draw a lot of traffic. If you set the colour to "Anzahl" and the size to "(inv) Anzahl Artikel", the smallest category will be in the top left and the colours get warmer towards the bottom-right corner. The third dropdown specifies the layout algorithm.
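Read as data, the default mapping described here is simply: rectangle area from the article count, rectangle colour from the median of per-article view counts. A minimal Python sketch of only that mapping, with made-up numbers (katograph's actual implementation is not shown here):

    # Sketch of the described size/colour mapping; the figures are made up.
    from statistics import median

    categories = {
        "Bauwerk": {"articles": 12000, "views": [3, 5, 2, 4]},
        "Film":    {"articles": 2500,  "views": [40, 80, 55, 60]},
    }

    for name, stats in categories.items():
        area = stats["articles"]               # rectangle size
        colour_value = median(stats["views"])  # warm = heavily read
        print(f"{name}: area={area}, colour value={colour_value}")

    # "Bauwerk" would get a big, cold rectangle (many rarely read
    # articles); "Film" a small, warm one (few articles, much traffic).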
Andreas Kolbe wrote:
Satisfying most users is a laudable aim for any service provider, whether revenue is involved or not. Why should we not aim to satisfy most of our users, or appeal to as many potential users as possible?
It depends on the context. There's nothing inherently bad about satisfying as many users as possible. It's doing so in a discriminatory, non-neutral manner that's problematic.
We probably could satisfy most users by detecting their locations and displaying information intended to reflect the beliefs prevalent there (e.g. by favoring the majority religions). But that, like the creation of special categories for images deemed "potentially objectionable," is incompatible with most WMF projects' missions.
We constantly discriminate.
We say, This is unsourced; it may be true, but you can't have it in the article.
We say, This is interesting, but it is synthesis, or original research, and you can't have it in the article.
We say, This is a self-published source, it does not have an editorial staff, therefore it is not reliable.
By doing so, we are constantly empowering the judgment of the professional, commercial outfits who produce what we term reliable sources.
If this is unacceptable to you, do you also object to our sourcing policies and guidelines?
You're still conflating disparate concepts. (I've elaborated on this point several times.)
David Levy
Risker,
The net nanny software could have been doing a keyword filter on the word "sex", which would reject every page and image in [[Category:Sexual positions]] because it contains the word "sex". That is not a category-based filter. If you believe it was a category-based filter, I would definitely like to know the name of the software in order to verify your assertion.
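A minimal sketch of the distinction John draws, assuming a purely keyword-based blocker (the snippet and all names in it are invented for illustration; no specific net-nanny product is implied). The blocker never consults MediaWiki's category system, yet it still catches any page whose URL or text contains a blocked word, which includes category names:

    # Hypothetical keyword-based blocker; it knows nothing about
    # MediaWiki categories and simply scans the URL and page text.
    BLOCKED_KEYWORDS = {"sex"}

    def is_blocked(page_url, page_text):
        # Category names such as "Category:Sexual positions" appear
        # verbatim in the URL and page body, so a plain substring
        # match blocks them as a side effect of keyword filtering.
        haystack = (page_url + " " + page_text).lower()
        return any(keyword in haystack for keyword in BLOCKED_KEYWORDS)

    # A page that merely sits in the category is caught:
    print(is_blocked(
        "https://commons.wikimedia.org/wiki/Category:Sexual_positions",
        'Pages in category "Sexual positions" ...'))  # True

Either behaviour, keyword matching or a true category lookup, produces the same blocking from the outside, which is why the two are hard to tell apart without knowing the software.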
On 10 October 2011 21:26, John Vandenberg jayvdb@gmail.com wrote:
Risker,
The net nanny software could have been doing a keyword filter on the word "sex", which would reject every page and image in [[Category:Sexual positions]] because it contains the word "sex". That is not a category-based filter. If you believe it was a category-based filter, I would definitely like to know the name of the software in order to verify your assertion.
I don't have the foggiest notion what the software is; these are systems over which I have no control and no rights above first-level user, and they are not open systems.
It may be that they are using keywords, but many obvious keywords are legitimately used as category names on our projects. Therefore, it makes no difference whether they're using keywords that match our categories, or the categories themselves: the effect is exactly the same.
Risker
As long as we're brainstorming, I added this to the page on Meta.
"...,a viable alternative to not relying blindly on the categorization system, would be implementing a new "image reviewer" flag on en.wp and maybe in commons. This method would create a list of reviewed images that can be considered objectionable, that could be filtered/black-listed. The difference is, 1) this system already works "article reviewer", 2) does not rely on the existing categorization system and would create 3) a new process that won't be fool-proof but probably harder to exploit for vandals. The technical implementation of this would probably be easier too, and the community can decide on the offensive-ness on its own through a request for review or something similar, in case of contentious decisions. Whether other projects can have this should of course remain their decision, they can choose to completely opt-out of this flag similar to "article reviewer", and for that very reason, enwp community should vote on this itself- not random readers but a straight forward vote on wiki."
It's an alternative, albeit a slower process for marking offensive images, without relying on the current categorization system.
http://meta.wikimedia.org/wiki/Controversial_content/Brainstorming#A_new_gro...
Regards Theo
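A minimal sketch of the flag-plus-list mechanism Theo outlines, in Python, with all names hypothetical (no such MediaWiki user right exists today): reviewers holding an "image reviewer" right add files to a list, and only readers who opt in have listed files hidden:

    # Hypothetical reviewer-flag filter; names invented for illustration.
    reviewed_list = set()  # files flagged by holders of the reviewer right

    def flag_image(filename, reviewer_has_right):
        # Only holders of the hypothetical image-reviewer right may flag.
        if reviewer_has_right:
            reviewed_list.add(filename)

    def should_hide(filename, user_opted_in):
        # Files are hidden only for readers who opted in to the filter.
        return user_opted_in and filename in reviewed_list

    flag_image("File:Example.jpg", reviewer_has_right=True)
    print(should_hide("File:Example.jpg", user_opted_in=True))   # True
    print(should_hide("File:Example.jpg", user_opted_in=False))  # False

The design choice mirrors the proposal: the list is maintained by a vetted group rather than derived from the open category system, so ordinary category edits cannot change what gets filtered.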
On Tue, Oct 11, 2011 at 07:19:00AM +0530, Theo10011 wrote:
"...,a viable alternative to not relying blindly on the categorization system, would be implementing a new "image reviewer" flag on en.wp and maybe in commons. This method would create a list of reviewed images that can be considered objectionable, that could be filtered/black-listed.
We could also just delete them, unless someone actually uses them in a sensible way in an article. :-)
sincerely, Kim Bruning
"...,a viable alternative to not relying blindly on the categorization system, would be implementing a new "image reviewer" flag on en.wp and maybe in commons. This method would create a list of reviewed images that can be considered objectionable, that could be filtered/black-listed.
We could also just delete them, unless someone actually uses them in a sensible way in an article. :-)
sincerely, Kim Bruning
Not on Commons; being "objectionable" to some viewers and not being currently in use does not make a potentially educational image out of scope. I have seen many poorly worded deletion requests on Commons based on a potentially usable image being "orphaned", rather than on it being unrealistic to expect it ever to be used for an educational purpose.
Fae
Agree with Fae; Commons is a general image repository in its own right, serving a bigger audience than just the other Wikimedia projects.
So the fact is that Commons will contain controversial images – and that we have to curate them responsibly.
Someone on Meta has pointed out that Commons seems to list sexual image results for search terms like cucumber, electric toothbrushes or pearl necklace way higher than a corresponding Google search. See http://lists.wikimedia.org/pipermail/commons-l/2011-October/006290.html
Andreas
Am 11.10.2011 17:42, schrieb Andreas Kolbe:
Someone on Meta has pointed out that Commons seems to list sexual image results for search terms like cucumber, electric toothbrushes or pearl necklace way higher than a corresponding Google search. See http://lists.wikimedia.org/pipermail/commons-l/2011-October/006290.html
This might just be coincidence in special cases. I'm sure if you search long enough you will find opposite examples as well. But wouldn't it run against the intention of a search engine to rank down content as "possibly offensive"? If you search for a cucumber you should expect to find one. If the description is correct, you should find the most suitable images first. But that should be based on a ranking algorithm that works on the description, not on the fact that content is/might be/could be controversial.
Implementing such a restriction in a search engine (by default) would go against any principle and would be discrimination against content. We should not do this.
nya~
MediaWiki serves more than the Wikimedia Foundation too. ~~Ebe123
From: Tobias Oelgarte tobias.oelgarte@googlemail.com
This might just be coincidence in special cases. I'm sure if you search long enough you will find opposite examples as well.
Tobias,
If you can find counterexamples, I'll gladly look at them. These were the only three we checked this afternoon, and the difference was striking.
Here is another search, "underwater":
http://commons.wikimedia.org/w/index.php?title=Special%3ASearch&search=u...
The third search result in Commons is a bondage image:
http://commons.wikimedia.org/wiki/File:Underwater_bondage.jpg
On Google, with safe search off, the same image is the 58th result:
http://www.google.co.uk/search?gcx=w&q=underwater+site:commons.wikimedia...
But wouldn't it run against the intention of a search engine to rank down content as "possibly offensive"? If you search for a cucumber you should expect to find one. If the description is correct, you should find the most suitable images first. But that should be based on a ranking algorithm that works on the description, not on the fact that content is/might be/could be controversial.
Implementing such a restriction in a search engine (by default) would go against any principle and would be discrimination against content. We should not do this.
You are not being realistic. If someone searches for "cucumber", "toothbrush" or "necklace" on Commons, they will not generally be looking for sexual images, and it is no use saying, "Well, you looked for a cucumber, and here you have one. Stuck up a woman's vagina."
Similarly, users entering "jumping ball" in the search field are unlikely to be looking for this image:
http://commons.wikimedia.org/wiki/File:Jumping_ball_01.jpg
Yet that is the first one the Commons search for "jumping ball" displays:
http://commons.wikimedia.org/w/index.php?title=Special%3ASearch&search=j...
We are offering an image service, and the principle of least astonishment should apply. By having these images come at the top of our search results, we are alienating at least some of our readers who were simply looking for an image of a toothbrush, cucumber, or whatever.
On the other hand, if these images don't show up among our top results, we are not alienating users who look for images of the penetrative use of cucumbers or toothbrushes, because they can easily narrow their search if that is the image they're after.
Are you really saying that this is how Commons should work, bringing up sexual images for the most innocuous searches, and that this is how you would design the user experience for Commons users?
Andreas
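One way to read the least-astonishment argument in ranking terms: leave relevance scoring alone, but demote results carrying an explicit-content tag unless the query itself asks for such content. A rough Python sketch; the tag, the term list and the demotion factor are all assumptions for illustration, not an existing Commons feature:

    # Rough "least astonishment" reranking sketch (all names invented).
    EXPLICIT_QUERY_TERMS = {"sex", "masturbation", "bondage", "nude"}

    def rerank(results, query):
        # results: list of (filename, relevance, is_explicit) tuples.
        query_is_explicit = bool(
            EXPLICIT_QUERY_TERMS & set(query.lower().split()))

        def adjusted(item):
            filename, relevance, is_explicit = item
            if is_explicit and not query_is_explicit:
                relevance *= 0.1  # push off-topic explicit hits down
            return relevance

        return sorted(results, key=adjusted, reverse=True)

    hits = [("File:Cucumber_salad.jpg", 0.8, False),
            ("File:Cucumber_insertion.jpg", 0.9, True)]
    print(rerank(hits, "cucumber"))      # the salad image comes first
    print(rerank(hits, "cucumber sex"))  # the explicit hit stays on top

Nothing is hidden under this scheme; explicit material simply stops outranking on-topic material for innocuous queries, and a refined query restores it.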
On Wed, Oct 12, 2011 at 12:23 AM, Andreas Kolbe jayen466@yahoo.com wrote:
'There may be a middle ground on this whole issue, but I don't really see where it is at, because so few people seem to occupy it. Does that encapsulate the conundrum we are at?'
Hello,
To me, this shows that the search engine is badly configured, or has a major problem. So fix it instead of creating a filter, which would have unwanted side effects. Having a good search engine would be within the WMF mission; creating a filter is not.
Regards,
Yann
On Tue, Oct 11, 2011 at 6:42 PM, Andreas Kolbe jayen466@yahoo.com wrote:
Concur strenuously. Jimbo tried deleting things he thought had no useful purpose but mere titillation from Commons, and crashed and burned. Not the way to go, folks! The Finnish Wikipedia uses a Victorian or pre-Victorian era, mildly pedophilic, suggestive copperplate drawing as an illustration of the "Pedophilia" article. By modern-day standards the image is more comical than titillating *by our Finnish standards* --- but it would be highly suspect in the US, at least if the deletion debate for that image at Commons is to be given credence...
By modern-day standards the image is more comical than titillating *by our Finnish standards* --- but it would be highly suspect in the US, at least if the deletion debate for that image at Commons is to be given credence...
It is a horrendously useless illustration of Pedophilia (from the context of illustrating an article), but I don't think the US would consider it "suspect" or problematic (at least in my experience).
Where's the deletion discussion? I couldn't find it.
Tom
On Wed, Oct 12, 2011 at 1:29 AM, Thomas Morton morton.thomas@googlemail.com wrote:
Where's the deletion discussion? I couldn't find it.
Sorry, my mistake, my bad. The image removal discussion from the English article... ;D yikes
On Tue, Oct 11, 2011 at 10:43 AM, Risker risker.wp@gmail.com wrote:
On 10 October 2011 18:45, Kim Bruning kim@bruning.xs4all.nl wrote:
On Mon, Oct 10, 2011 at 07:12:04PM -0400, Risker wrote:
I've seen it in operation.
Let me check: Have you seen your image filter software actually directly use categories from Commons? Are you sure?
Yes, I have seen net-nanny software directly block entire Commons categories.
What was the name of the software? Or where was it installed where you saw it?
Risker wrote:
So does the current categorization system lend itself to being hijacked by downstream users?
Yes, but not nearly to the same extent.
Given the number of people who insist that any categorization system seems to be vulnerable, I'd like to hear the reasons why the current system, which is obviously necessary in order for people to find types of images, does not have the same effect. I'm not trying to be provocative here, but I am rather concerned that this does not seem to have been discussed.
The current system doesn't involve categorizing images based on criteria under which they're considered "potentially objectionable," so someone wishing to censor images of x is far less likely to find Category:x.
David Levy
Risker wrote:
Given the number of people who insist that any categorization system seems to be vulnerable, I'd like to hear the reasons why the current system, which is obviously necessary in order for people to find types of images, does not have the same effect. I'm not trying to be provocative here, but I am rather concerned that this does not seem to have been discussed.
Personally, from the technical side, I don't think there's any way to make per-category filtering work. What happens when a category is deleted? Or a category is renamed (which is effectively deleting the old category name currently)? And are we really expecting individual users to go through millions of categories and find the ones that may be offensive to them? Surely users don't want to do that. The whole point is that they want to limit their exposure to such images, not dig into the millions of categories that may exist looking for ones that largely contain content they find objectionable. Surely.
So that leaves you with much broader categorization, I guess? "Violence", "Gore", etc. And then that leaves you with people debating which images belong to which broad category?
Not trying to be provocative, I've just never understood how the category-based system is supposed to work in practice. In (abstract) theory, it seems magical.
MZMcBride
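MZMcBride's deletion/rename worry can be made concrete. If user preferences store category names as bare strings (a hypothetical storage scheme, sketched in Python below), an on-wiki rename recategorises the pages but leaves the stale string in every affected user's preferences, and the filter fails silently:

    # Toy illustration of the rename problem (storage scheme invented).
    user_filter = {"Category:Gore"}  # preference stored as a bare string

    page_categories = {"Category:Gore", "Category:Medicine"}
    print(bool(user_filter & page_categories))   # True: image is filtered

    # The category is renamed on-wiki and pages are recategorised,
    # but nothing rewrites the string sitting in user preferences.
    page_categories = {"Category:Graphic_violence", "Category:Medicine"}
    print(bool(user_filter & page_categories))   # False: filter fails silently

Keeping such preferences valid would mean hooking every category rename and deletion and updating every stored preference row, which is part of why per-category filtering looks so fragile.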
From: MZMcBride z@mzmcbride.com
Not trying to be provocative, I've just never understood how the category-based system is supposed to work in practice. In (abstract) theory, it seems magical.
The way it is supposed to work is by creating categories that simply describe media content. A bit like alt texts, I guess. Examples might be:
Images of people engaged in sexual intercourse.
Videos of people masturbating.
Images of genitals.
Pictures of the prophet Muhammad.
Images of open wounds.
In other words, the idea is to give the user objective definitions of media content (not a subjective assessment of any likely offence).
Working out good category definitions would be an important task. There is little potential for arguments, provided the definitions are clear. A media file either shows genitals, or it doesn't. It either shows people having sexual intercourse, or it doesn't. If there is any doubt (say, visibility is largely obscured, or you can't tell), then the basic rule should be "leave it out" (unless and until filter users start complaining).
Andreas
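Mechanically, these "pedestrian descriptions" reduce to a small set of boolean attributes per file, matched against per-user preferences, with doubtful cases left untagged. A minimal Python sketch with invented attribute names (no such tags exist on Commons):

    # Minimal sketch of objective content attributes (names invented).
    # Each attribute is a factual either/or claim about the file; when
    # in doubt, the tag is simply omitted ("leave it out").
    FILE_ATTRIBUTES = {
        "File:Example_anatomy.jpg":   {"shows_genitals"},
        "File:Example_miniature.jpg": {"depicts_muhammad"},
        "File:Example_surgery.jpg":   {"shows_open_wound"},
    }

    def hide_for_user(filename, user_hides):
        # Hide a file only if it carries an attribute this user opted out of.
        return bool(FILE_ATTRIBUTES.get(filename, set()) & user_hides)

    prefs = {"shows_genitals", "depicts_muhammad"}
    print(hide_for_user("File:Example_anatomy.jpg", prefs))  # True
    print(hide_for_user("File:Example_surgery.jpg", prefs))  # False

The attribute names state what is depicted, not whether it is offensive; the value judgment lives entirely in each user's own preference set.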
The question arises, however, of where to draw the rather thick gray line. If you're not sure what I'm talking about, take for instance the famous Renaissance paintings: often innocent at first glance, but perhaps one of the subjects is nude. Perhaps in the background there is a nude individual. Maybe that individual is too tiny to see clearly. Or perhaps the work is adorned with nude cherubim around the corners. Or maybe there's a photo of something where in the background you can see a nude sculpture. And that's just the topic of nudity within the scope of Renaissance art -- it gets worse.
This is precisely the thing that makes it difficult to decide whether to block an image or not.
Whatever system is used, it needs to be a bit more intricate than just "either / or".
Bob
On 10/10/2011 7:17 PM, Andreas Kolbe wrote:
A media file either shows genitals, or it doesn't. It either shows people having sexual intercourse, or it doesn't. If there is any doubt (say, visibility is largely obscured, or you can't tell), then the basic rule should be "leave it out" (unless and until filter users start complaining).
From: Bob the Wikipedian bobthewikipedian@gmail.com
Bob,
I agree it needs a large upfront investment in defining categories sensibly.
Photos of genitals attached to live human beings are different from historical paintings, or photos of Greek sculptures. Few if any would want to filter the latter.
But the idea is to use pedestrian descriptions, telling the user exactly what sort of media files are meant.
Andreas
Andreas Kolbe wrote:
The way it is supposed to work is by creating categories that simply describe media content. A bit like alt texts, I guess. Examples might be:
Images of people engaged in sexual intercourse.
Videos of people masturbating.
Images of genitals.
Pictures of the prophet Muhammad.
Images of open wounds.
In other words, the idea is to give the user objective definitions of media content (not a subjective assessment of any likely offence).
As has been mentioned numerous times, deeming certain subjects (and not others) "potentially objectionable" is inherently subjective and non-neutral.
Unveiled women, pork consumption, miscegenation and homosexuality are considered objectionable by many people. Will they be assigned categories? If not, why not? If so, who's going to analyze millions of images (with thousands more uploaded on a daily basis) to tag them?
And what if the barefaced, bacon-eating, interracial lesbians are visible only in the image's background? Does that count?
David Levy
From: David Levy lifeisunfair@gmail.com
As has been mentioned numerous times, deeming certain subjects (and not others) "potentially objectionable" is inherently subjective and non-neutral.
Unveiled women, pork consumption, miscegenation and homosexuality are considered objectionable by many people. Will they be assigned categories? If not, why not? If so, who's going to analyze millions of images (with thousands more uploaded on a daily basis) to tag them?
And what if the barefaced, bacon-eating, interracial lesbians are visible only in the image's background? Does that count?
David,
Please let's get real.
If I search Commons for "electric toothbrushes", the second search result is an image of a woman masturbating with an electric toothbrush:
http://commons.wikimedia.org/w/index.php?title=Special:Search&search=ele...
If I search Commons for "pearl necklace", the first search result is an image of a woman with sperm on her throat:
http://commons.wikimedia.org/w/index.php?title=Special%3ASearch&search=p...
If I search Commons for "cucumber", the first page of search results shows a woman with a cucumber up her vagina:
http://commons.wikimedia.org/w/index.php?title=Special%3ASearch&search=c...
Please accept that people who are looking for images of electric toothbrushes, cucumbers and pearl necklaces in Commons may be somewhat taken aback by this. Surely your vision of neutrality does not include that we have to force people interested in personal hygiene, vegetables and fashion to look at graphic sex images? There is theory and practice. Philosophically, I agree with you. But looking at the results of trying to find an image of a cucumber or pearl necklace in Commons is a pragmatic question. Users should be able to tailor their user experience to their needs.
Cheers, Andreas
Andreas Kolbe wrote:
If I search Commons for "electric toothbrushes", the second search result is an image of a woman masturbating with an electric toothbrush:
http://commons.wikimedia.org/w/index.php?title=Special:Search&search=ele...toothbrushes&fulltext=Search&redirs=1&ns0=1&ns6=1&ns9=1&ns12=1&ns14=1&ns100=1&ns106=1
If I search Commons for "pearl necklace", the first search result is an image of a woman with sperm on her throat:
http://commons.wikimedia.org/w/index.php?title=Special%3ASearch&search=p...ecklace&fulltext=Search
If I search Commons for "cucumber", the first page of search results shows a woman with a cucumber up her vagina:
http://commons.wikimedia.org/w/index.php?title=Special%3ASearch&search=c...r&fulltext=Search
Please accept that people who are looking for images of electric toothbrushes, cucumbers and pearl necklaces in Commons may be somewhat taken aback by this. Surely your vision of neutrality does not include that we have to force people interested in personal hygiene, vegetables and fashion to look at graphic sex images? There is theory and practice. Philosophically, I agree with you. But looking at the results of trying to find an image of a cucumber or pearl necklace in Commons is a pragmatic question. Users should be able to tailor their user experience to their needs.
Brainstorming for a workable solution, now ongoing: https://meta.wikimedia.org/wiki/Controversial_content/Brainstorming.
MZMcBride
Andreas Kolbe wrote:
If I search Commons for "electric toothbrushes", the second search result is an image of a woman masturbating with an electric toothbrush:
http://commons.wikimedia.org/w/index.php?title=Special:Search&search=ele...
If I search Commons for "pearl necklace", the first search result is an image of a woman with sperm on her throat:
http://commons.wikimedia.org/w/index.php?title=Special%3ASearch&search=p...
If I search Commons for "cucumber", the first page of search results shows a woman with a cucumber up her vagina:
http://commons.wikimedia.org/w/index.php?title=Special%3ASearch&search=c...
Please accept that people who are looking for images of electric toothbrushes, cucumbers and pearl necklaces in Commons may be somewhat taken aback by this.
I agree that this is a problem. I disagree that the proposed image-tagging system is a viable solution.
Please answer the questions from my previous message.
Surely your vision of neutrality does not include that we have to force people interested in personal hygiene, vegetables and fashion to look at graphic sex images?
I don't wish to force *any* off-topic images on people (including those who haven't activated an optional feature intended to filter "objectionable" ones). Methods of preventing this should be pursued.
There is theory and practice. Philosophically, I agree with you. But looking at the results of trying to find an image of a cucumber or pearl necklace in Commons is a pragmatic question. Users should be able to tailor their user experience to their needs.
I agree, provided that we seek to accommodate all users equally. That's why I support the type of implementation discussed here: http://meta.wikimedia.org/wiki/Talk:Image_filter_referendum/en/Categories#ge... or http://goo.gl/t6ly5
David Levy
What you are all missing here is that commons is a service site, not a repository for the public to go into without knowing it caters to different cultures than their own. Period.
From: Jussi-Ville Heiskanen cimonavaro@gmail.com To: Wikimedia Foundation Mailing List foundation-l@lists.wikimedia.org Sent: Tuesday, 11 October 2011, 22:40 Subject: Re: [Foundation-l] Letter to the community on Controversial Content - Commons searches
What you are all missing here is that commons is a service site, not a repository for the public to go into without knowing it caters to different cultures than their own. Period.
Google is a service site too, and a better-designed one than Commons.
If I search for cream pie, and I want to find cakes, I leave safe search on:
http://www.google.co.uk/search?q=creampie&um=1&hl=en&sa=N&tb...
If I search for cream pie and want to find porn, I switch safe search off:
http://www.google.co.uk/search?um=1&hl=en&tbm=isch&q=cream+pie&a...
I know Google caters to both "cultures", but I can decide beforehand what it will show me.
That is good service.
Andreas
David,
You asked for a reply to your earlier questions.
As has been mentioned numerous times, deeming certain subjects (and not others) "potentially objectionable" is inherently subjective and non-neutral.
Unveiled women, pork consumption, miscegenation and homosexuality are considered objectionable by many people. Will they be assigned categories? If not, why not? If so, who's going to analyze millions of images (with thousands more uploaded on a daily basis) to tag them?
And what if the barefaced, bacon-eating, interracial lesbians are visible only in the image's background? Does that count?
If we provide a filter, we have to be pragmatic, and restrict its application to media that significant demographics really might want to filter.
We should take our lead from real-world media.
Real-world media show images of lesbians, pork, mixed-race couples, and unveiled women (even Al-Jazeera).
There is absolutely no need to filter them, as there is no significant target group among our readers who would want such a filter.
Images of Muhammad, or masturbation with cucumbers, are different. There is a significant demographic of users who might not want to see such images, especially if they come across them unprepared.
If there is doubt whether or not an image should belong in a category (because the potentially controversial content is mostly covered, far in the background etc.), it should be left out, until there are complaints from filter users.
You mentioned a discussion about category-based filter systems in your other post. One other avenue I would like to explore is whether the existing Commons category system could, with a bit of work, be used as a basis for the filter. I've made a corresponding post here:
http://meta.wikimedia.org/wiki/Controversial_content/Brainstorming#Refine_th...
This would reduce the need to tag thousands and thousands of images in Commons to a need to tag a few hundred categories. Some clean-up and recategorisation might be required for categories with mixed content. (Images stored on the projects themselves, rather than on Commons, would need to be addressed separately.)
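(As a concrete illustration, here is a minimal sketch in Python of how a filter built on grouped Commons categories might decide whether to hide a file. Every name below, the group labels, the category names and the function alike, is invented for illustration; nothing like this exists in MediaWiki, and a real implementation would also have to expand each group to cover subcategories.)

# Sketch only: hypothetical grouped-category filter.
# FILTER_GROUPS maps a user-facing filter group to existing Commons
# categories (all names invented here for illustration).
FILTER_GROUPS = {
    "explicit sexual content": {"Category:Sex acts", "Category:Masturbation"},
    "images of Muhammad": {"Category:Depictions of Muhammad"},
    "graphic violence": {"Category:Open wounds"},
}

def should_hide(image_categories, enabled_groups):
    """Hide an image if the user enabled a group covering one of its categories."""
    for group in enabled_groups:
        if FILTER_GROUPS.get(group, set()) & set(image_categories):
            return True
    return False

# A reader who enabled only the sexual-content group:
print(should_hide({"Category:Sex acts"}, {"explicit sexual content"}))   # True
print(should_hide({"Category:Cucumbers"}, {"explicit sexual content"}))  # False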
I understand you are more in favour of users being able to switch all images off, depending on the page they are on. This has some attractive aspects, but it would not help e.g. the Commons user searching for an image of a pearl necklace. To see the images Commons contains, they have to have image display on, and then the first image they see is the image of the woman with sperm on her throat.
It also does not necessarily prepare users for the media they might find in WP articles like the ones on fisting, ejaculation and many others; there are always users who are genuinely shocked to see that we have the kind of media we have on those pages, and are unprepared for them.
Andreas
On Tue, Oct 11, 2011 at 18:19, Andreas Kolbe jayen466@yahoo.com wrote:
If we provide a filter, we have to be pragmatic, and restrict its application to media that significant demographics really might want to filter.
That should be designed well and maintained, too. I am really frustrated by Google's insisting that my interface should be in Serbian (or in Spanish while I was in Spain), even though I am a logged-in user.
IPv4 address blocks are being sold from one company to another now, which means that the GeoIP method *requires* an up-to-date database.
Andreas Kolbe wrote:
If we provide a filter, we have to be pragmatic, and restrict its application to media that significant demographics really might want to filter.
Define "significant demographics." Do you have a numerical cut-off point in mind (below which we're to convey "you're a small minority, so we've deemed you insignificant")?
We should take our lead from real-world media.
WMF websites display many types of images that most media don't. That's because our mission materially differs. We seek to spread knowledge, not to cater to majorities in a manner that maximizes revenues.
For most WMF projects, neutrality is a core principle. Designating certain subjects (and not others) "potentially objectionable" is inherently non-neutral.
Real-world media show images of lesbians, pork, mixed-race couples, and unveiled women (even Al-Jazeera).
There is absolutely no need to filter them, as there is no significant target group among our readers who would want such a filter.
So only insignificant target groups would want that?
Many ultra-Orthodox Jewish newspapers and magazines maintain an editorial policy forbidding the publication of photographs depicting women. Some have even performed digital alterations to remove them from both the foreground and background.
http://en.wikipedia.org/wiki/The_Situation_Room_(photograph)
These publications (which routinely run photographs of deceased women's husbands when publishing obituaries) obviously have large enough readerships to be profitable and remain in business.
"As of 2011, there are approximately 1.3 million Haredi Jews. The Haredi Jewish population is growing very rapidly, doubling every 17 to 20 years."
http://en.wikipedia.org/wiki/Haredi_Judaism
Are we to tag every image containing a woman, or are we to deem this religious group insignificant?
You mentioned a discussion about category-based filter systems in your other post.
The ability to blacklist categories is only one element of the proposal (and a secondary one, in my view).
One other avenue I would like to explore is whether the existing Commons category system could, with a bit of work, be used as a basis for the filter. I've made a corresponding post here:
http://meta.wikimedia.org/wiki/Controversial_content/Brainstorming#Refine_th...
This was discussed at length on the talk pages accompanying the "referendum" and on this list.
Our current categorization is based primarily on what images are about, *not* what they contain. For example, a photograph depicting a protest rally might include nudity in the crowd, but its categorization probably won't specify that. Of course, if we were to introduce a filter system reliant upon the current categories, it's likely that some users would seek to change that (resulting in harmful dilution).
Many "potentially objectionable" subjects lack categories entirely (though as discussed above, you evidently have deemed them insignificant).
On the brainstorming page, you suggest that "[defining] a small number of categories (each containing a group of existing Commons categories) that users might want to filter" would "alleviate the concern that we are creating a special infrastructure that censors could exploit." I don't understand how. What would stop censors from utilizing the categories of categories in precisely the same manner?
I understand you are more in favour of users being able to switch all images off, depending on the page they are on.
The proposal that I support includes both blacklisting and whitelisting.
This has some attractive aspects, but it would not help e.g. the Commons user searching for an image of a pearl necklace. To see the images Commons contains, they have to have image display on, and then the first image they see is the image of the woman with sperm on her throat.
This problem extends far beyond the issue of "objectionable" images, and I believe that we should pursue solutions separately.
It also does not necessarily prepare users for the media they might find in WP articles like the ones on fisting, ejaculation and many others; there are always users who are genuinely shocked to see that we have the kind of media we have on those pages, and are unprepared for them.
Such users could opt to block images by default, whitelisting only the articles or specific images whose captions indicate content that they wish to view.
David Levy
From: David Levy lifeisunfair@gmail.com
Andreas Kolbe wrote:
If we provide a filter, we have to be pragmatic, and restrict its application to media that significant demographics really might want to filter.
Define "significant demographics." Do you have a numerical cut-off point in mind (below which we're to convey "you're a small minority, so we've deemed you insignificant")?
I would use indicators like the number and intensity of complaints received.
That is one of the indicators the Foundation has used as well.
WMF websites display many types of images that most media don't. That's because our mission materially differs. We seek to spread knowledge, not to cater to majorities in a manner that maximizes revenues.
Generally, what we display in Wikipedia should match what reputable educational sources in the field display. Just like Wikipedia text reflects the text in reliable sources. Anything that goes beyond that should be accessible via a Commons link, rather than displayed on the article page.
Commons, however, is different, and has a wider scope. It has an important role in its own right, and its media categories should be linked from Wikipedia.
For most WMF projects, neutrality is a core principle. Designating certain subjects (and not others) "potentially objectionable" is inherently non-neutral.
What we present should be neutral (where neutrality is, as always, defined by reliable sources, rather than editor preference).
That does not mean that we should not listen to users who tell us that they don't want to see certain media because they find them upsetting, or unappealing.
So only insignificant target groups would want that?
Many ultra-Orthodox Jewish newspapers and magazines maintain an editorial policy forbidding the publication of photographs depicting women. Some have even performed digital alterations to remove them from both the foreground and background.
http://en.wikipedia.org/wiki/The_Situation_Room_(photograph)
These publications (which routinely run photographs of deceased women's husbands when publishing obituaries) obviously have large enough readerships to be profitable and remain in business.
"As of 2011, there are approximately 1.3 million Haredi Jews. The Haredi Jewish population is growing very rapidly, doubling every 17 to 20 years."
Are we to tag every image containing a woman, or are we to deem this religious group insignificant?
I would deem them insignificant for the purposes of the image filter. They are faced with images of women everywhere in modern life, and we cannot cater for every fringe group. At some point, there are diminishing returns, especially when it amounts to filtering images of more than half the human race.
We need to look at mainstream issues (including Muhammad images).
You mentioned a discussion about category-based filter systems in your other post.
The ability to blacklist categories is only one element of the proposal (and a secondary one, in my view).
One other avenue I would like to explore is whether the existing Commons category system could, with a bit of work, be used as a basis for the filter. I've made a corresponding post here:
http://meta.wikimedia.org/wiki/Controversial_content/Brainstorming#Refine_th...
This was discussed at length on the talk pages accompanying the "referendum" and on this list.
Our current categorization is based primarily on what images are about, *not* what they contain. For example, a photograph depicting a protest rally might include nudity in the crowd, but its categorization probably won't specify that. Of course, if we were to introduce a filter system reliant upon the current categories, it's likely that some users would seek to change that (resulting in harmful dilution).
Many "potentially objectionable" subjects lack categories entirely (though as discussed above, you evidently have deemed them insignificant).
I believe the most important content is identifiable by categories.
On the brainstorming page, you suggest that "[defining] a small number of categories (each containing a group of existing Commons categories) that users might want to filter" would "alleviate the concern that we are creating a special infrastructure that censors could exploit." I don't understand how. What would stop censors from utilizing the categories of categories in precisely the same manner?
What I meant is that compiling a collection of a few hundred categories would not be saving censors an awful lot of work. They could -- and can -- achieve the same thing in an afternoon now, based on our existing category system.
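(For what it's worth, the existing category graph is already machine-readable through the standard MediaWiki web API, so "an afternoon" is no exaggeration. A sketch in Python: the categorymembers query is real, but the starting category name is a placeholder.)

# Sketch: listing the subcategories of a Commons category via the API.
# The query itself (action=query&list=categorymembers) is the standard
# MediaWiki API; the starting category below is a placeholder.
import requests

API = "https://commons.wikimedia.org/w/api.php"

def subcategories(root):
    """Yield the direct subcategories of the given category."""
    params = {"action": "query", "list": "categorymembers",
              "cmtitle": root, "cmtype": "subcat",
              "cmlimit": "500", "format": "json"}
    while True:
        data = requests.get(API, params=params).json()
        for member in data["query"]["categorymembers"]:
            yield member["title"]
        if "continue" not in data:   # no further pages of results
            break
        params.update(data["continue"])

for title in subcategories("Category:Example"):  # placeholder name
    print(title)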
I understand you are more in favour of users being able to switch all images off, depending on the page they are on.
The proposal that I support includes both blacklisting and whitelisting.
That would involve a user switching all images off, and then whitelisting those they wish to see; is that correct? Or blacklisting individual categories?
This would be better from the point of view of project neutrality, but would seem to involve a *lot* more work for the individual user.
It would also be equally likely to aid censorship, as the software would have to recognise the user's blacklists, and a country or ISP could then equally generate its own blacklists and apply them across the board to all users.
It also does not necessarily prepare users for the media they might find in WP articles like the ones on fisting, ejaculation and many others; there are always users who are genuinely shocked to see that we have the kind of media we have on those pages, and are unprepared for them.
Such users could opt to block images by default, whitelisting only the articles or specific images whose captions indicate content that they wish to view.
David Levy
Again, requiring these users to do without *any pictures at all*, except those they individually whitelist, doesn't seem like a feasible proposition. It's not user-friendly.
Regards, Andreas
Andreas Kolbe wrote:
I would use indicators like the number and intensity of complaints received.
For profit-making organizations seeking to maximize revenues by catering to majorities, this is a sensible approach. For most WMF projects, conversely, neutrality is a fundamental, non-negotiable principle.
Generally, what we display in Wikipedia should match what reputable educational sources in the field display. Just like Wikipedia text reflects the text in reliable sources.
This is a tangential matter, but you're comparing apples to oranges.
We look to reliable sources to determine factual information and the extent of coverage thereof. We do *not* emulate their value judgements.
A reputable publication might include textual documentation of a subject, omitting useful illustrations to avoid upsetting its readers. That's non-neutral.
That does not mean that we should not listen to users who tell us that they don't want to see certain media because they find them upsetting, or unappealing.
Agreed. That's why I support the introduction of a system enabling users (including those belonging to "insignificant" groups) to filter images to which they object.
I would deem them insignificant for the purposes of the image filter. They are faced with images of women everywhere in modern life, and we cannot cater for every fringe group.
The setup that I support would accommodate all groups, despite being *far* simpler and easier to implement/maintain than one based on tagging would be.
At some point, there are diminishing returns, especially when it amounts to filtering images of more than half the human race.
That such an endeavor is infeasible is my point.
We need to look at mainstream issues (including Muhammad images).
We needn't focus on *any* "objectionable" content in particular.
That would involve a user switching all images off, and then whitelisting those they wish to see; is that correct? Or blacklisting individual categories?
Those would be two options. The inverse options (blacklisting images and whitelisting entire categories) also should be included.
And it should be possible to black/whitelist every image appearing in a particular page revision (either permanently or on a one-off basis).
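(To make the shape of that concrete, here is one plausible reading of the precedence rules, sketched in Python. The field names and the ordering, with the most specific entry winning, are my own assumptions, not part of any agreed proposal.)

# Sketch: resolving a user's personal black/whitelists (hypothetical).
# Precedence assumed here: image > category > page > global default.
def visible(image, categories, page, prefs):
    """Decide whether to display an image under a user's personal lists."""
    if image in prefs["whitelist_images"]:
        return True
    if image in prefs["blacklist_images"]:
        return False
    if set(categories) & prefs["whitelist_categories"]:
        return True
    if set(categories) & prefs["blacklist_categories"]:
        return False
    if page in prefs["whitelist_pages"]:
        return True
    if page in prefs["blacklist_pages"]:
        return False
    return prefs["show_by_default"]

prefs = {
    "show_by_default": False,         # reader switched images off wholesale...
    "whitelist_pages": {"Cucumber"},  # ...but whitelisted this article
    "blacklist_pages": set(),
    "whitelist_categories": set(),
    "blacklist_categories": set(),
    "whitelist_images": set(),
    "blacklist_images": set(),
}
print(visible("File:Cucumber.jpg", ["Category:Cucumbers"], "Cucumber", prefs))  # True
print(visible("File:Other.jpg", [], "Pearl necklace", prefs))                   # False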
This would be better from the point of view of project neutrality, but would seem to involve a *lot* more work for the individual user.
Please keep in mind that I don't regard a category-based approach as feasible, let alone neutral. The amount of work for editors (and related conflicts among them) would be downright nightmarish.
It would also be equally likely to aid censorship, as the software would have to recognise the user's blacklists, and a country or ISP could then equally generate its own blacklists and apply them across the board to all users.
They'd have to identify specific images/categories to block, which they can do *now* (and simply intercept and suppress the data themselves).
David Levy
From: David Levy lifeisunfair@gmail.com Andreas Kolbe wrote:
I would use indicators like the number and intensity of complaints received.
For profit-making organizations seeking to maximize revenues by catering to majorities, this is a sensible approach. For most WMF projects, conversely, neutrality is a fundamental, non-negotiable principle.
Neutrality applies to content. I don't think it applies in the same way to *display options* or other gadget infrastructure.
Generally, what we display in Wikipedia should match what reputable educational sources in the field display. Just like Wikipedia text reflects the text in reliable sources.
This is a tangential matter, but you're comparing apples to oranges.
We look to reliable sources to determine factual information and the extent of coverage thereof. We do *not* emulate their value judgements.
A reputable publication might include textual documentation of a subject, omitting useful illustrations to avoid upsetting its readers. That's non-neutral.
Thanks for mentioning it, because it's a really important point.
Neutrality is defined as following reliable sources, not following editors' opinions. NPOV "means representing fairly, proportionately, and as far as possible without bias, all significant views that have been published by reliable sources."
Editors can (and sometimes do) argue in just the same way that reliable sources might be omitting certain theories they subscribe to, because of non-neutral value judgments (or at least value judgments they disagree with) – in short, arguing that reliable sources are all biased.
I see this as no different. I really wonder where this idea crept in that when it comes to text, reliable sources' judgment is sacrosanct, while when it comes to illustrations, reliable sources' judgment is suspect, and editors' judgment is better.
If we reflected reliable sources in our approach to illustration, unbiased, we wouldn't be having half the problems we are having.
<snip>
The setup that I support would accommodate all groups, despite being *far* simpler and easier to implement/maintain than one based on tagging would be.
I agree the principle is laudable. Would you like to flesh it out in more detail on http://meta.wikimedia.org/wiki/Controversial_content/Brainstorming ?
It can then benefit from further discussion.
<snip>
It would also be equally likely to aid censorship, as the software would have to recognise the user's blacklists, and a country or ISP could then equally generate its own blacklists and apply them across the board to all users.
They'd have to identify specific images/categories to block, which they can do *now* (and simply intercept and suppress the data themselves).
Probably true, and I am beginning to wonder if the concern that censors could abuse any filter infrastructure isn't somewhat overstated. After all, as WereSpielChequers pointed out to me on Meta, we have public "bad image" lists now, and hundreds of categories that could be (and maybe are) used that way.
Cheers, Andreas
Andreas Kolbe wrote:
Neutrality applies to content. I don't think it applies in the same way to *display options* or other gadget infrastructure.
Category tags = content.
Setting aside the matter of category tags, I disagree with the premise that the neutrality principle is inapplicable to display options. When an on-wiki gadget is used to selectively suppress material deemed "objectionable," that's a content issue (despite not affecting pages by default).
Neutrality is defined as following reliable sources, not following editors' opinions. NPOV "means representing fairly, proportionately, and as far as possible without bias, all significant views that have been published by reliable sources."
Editors can (and sometimes do) argue in just the same way that reliable sources might be omitting certain theories they subscribe to, because of non-neutral value judgments (or at least value judgments they disagree with) – in short, arguing that reliable sources are all biased.
I see this as no different. I really wonder where this idea crept in that when it comes to text, reliable sources' judgment is sacrosanct, while when it comes to illustrations, reliable sources' judgment is suspect, and editors' judgment is better.
Again, you're conflating separate issues.
We consult reliable sources to obtain factual information and gauge the manner in which topics receive coverage. It's quite true that the latter often reflects biases, but we seek to neutrally convey the real-world balance (which _includes_ those biases).
Conversely, we don't take on subjective views — no matter how widespread — as our own. For example, if most mainstream media outlets publish the opinion that x is bad, we simply relay the fact that said information was published. We don't adopt "x is bad" as *our* position.
Likewise, if most publications decide that it would be bad to publish illustrations alongside their coverage of a subject (on the basis that such images are likely to offend), we might address this determination via prose, but won't adopt it as *our* position.
I agree the principle is laudable. Would you like to flesh it out in more detail on http://meta.wikimedia.org/wiki/Controversial_content/Brainstorming ?
It can then benefit from further discussion.
I intend to edit that page when I have more free time. But note that the idea isn't mine.
Probably true, and I am beginning to wonder if the concern that censors could abuse any filter infrastructure isn't somewhat overstated.
I regard such concerns as valid, but other elements of the proposed setup strike me as significantly more problematic.
David Levy
From:David Levy lifeisunfair@gmail.com
Setting aside the matter of category tags, I disagree with the premise that the neutrality principle is inapplicable to display options. When an on-wiki gadget is used to selectively suppress material deemed "objectionable," that's a content issue (despite not affecting pages by default).
Again, I think you are being too philosophical, and lack pragmatism.
We already have bad image lists like
http://en.wikipedia.org/wiki/MediaWiki:Bad_image_list
If you remain wedded to an abstract philosophical approach, such lists are not neutral. But they answer a real need.
I see this as no different. I really wonder where this idea crept in that when it comes to text, reliable sources' judgment is sacrosanct, while when it comes to illustrations, reliable sources' judgment is suspect, and editors' judgment is better.
Again, you're conflating separate issues.
We consult reliable sources to obtain factual information and gauge the manner in which topics receive coverage. It's quite true that the latter often reflects biases, but we seek to neutrally convey the real-world balance (which _includes_ those biases).
Conversely, we don't take on subjective views — no matter how widespread — as our own. For example, if most mainstream media outlets publish the opinion that x is bad, we simply relay the fact that said information was published. We don't adopt "x is bad" as *our* position.
I would invite you to think some more about this, and view it from a different angle. You said earlier,
A reputable publication might include textual documentation of a subject, omitting useful illustrations to avoid upsetting its readers. That's non-neutral.
You assume here that there is any kind of neutrality in Wikipedia that is not defined by reliable sources.
There isn't. The very definition of neutrality in our projects is tied to the editorial judgment of reliable sources.
If I go along with your statement that reliable sources avoid upsetting their readers, why would we be more "neutral" by deciding to depart from reliable sources' judgment, and consciously upsetting our readers in a way reliable sources do not?
It seems to me we do not become more neutral by doing so, but are implementing a clear bias – a departure from, as you put it, the "real-world balance". And I think this is a fact. Wikipedia departs from reliable sources in its approach to illustration, and has a clear bias in favour of showing offensive content that sets its editorial policy apart from real-world publishing standards.
Likewise, if most publications decide that it would be bad to publish illustrations alongside their coverage of a subject (on the basis that such images are likely to offend), we might address this determination via prose, but won't adopt it as *our* position.
That exact same argument could be made about text as well:
"Likewise, if most publications decide that it would be bad to publish that X is a scoundrel (on the basis that it would be likely to offend), we might address this determination via prose, but won't adopt it as *our* position."
So then we would have articles saying, "No newspaper has reported that X is a scoundrel, but he is, because –."
And then you can throw in NOTCENSORED for good measure as a riposte to anyone wishing to delete such original research.
Seen from this perspective, your judgment that illustrations which reliable sources have not found "useful" for their readers should be "useful" for readers of Wikipedia is directly analogous to this kind of original research.
In my view, our articles should show what a reliable educational source would show, no more and no less. Anything beyond that should be left to a prominent Commons link.
That would be neutral.
Probably true, and I am beginning to wonder if the concern that censors could abuse any filter infrastructure isn't somewhat overstated.
I regard such concerns as valid, but other elements of the proposed setup strike me as significantly more problematic.
I regard them as valid too, and if it can be avoided I am all for it, but the fact is that bad image lists and categories exist already, and those who would censor us do so already. We don't forbid cars because some people drive drunk.
Andreas
Andreas Kolbe wrote:
Again, I think you are being too philosophical, and lack pragmatism.
We already have bad image lists like
http://en.wikipedia.org/wiki/MediaWiki:Bad_image_list
If you remain wedded to an abstract philosophical approach, such lists are not neutral. But they answer a real need.
Apart from the name (which the MediaWiki developers inexplicably refused to change), the bad image list is entirely compliant with the principle of neutrality (barring abuse by a particular project, which I haven't observed). It's used to prevent a type of vandalism, which typically involves the insertion of images among the most likely to offend/disgust large numbers of people. But if the need arises (for example, in the case of a "let's post harmless pictures of x everywhere" meme), it can be applied to *any* image (including one that practically no one regards as inherently objectionable).
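(For readers who haven't looked at it: the page is an ordinary bulleted wiki list, where the first link on a line names the file and any later links name pages that are exempt from the block. Below is a loose Python sketch of that structure, with invented entry names; it mirrors the parsing only approximately and is not the actual MediaWiki code.)

import re

# Schematic bad-image-list entries (file and article names invented).
SAMPLE = """
* [[:File:Example-explicit.jpg]] except on [[Some relevant article]]
* [[:File:Another-example.png]]
"""

def parse_bad_image_list(text):
    """Map each listed file to its set of exception pages."""
    entries = {}
    for line in text.splitlines():
        if not line.lstrip().startswith("*"):
            continue
        links = re.findall(r"\[\[:?([^\]|]+)", line)
        if links:
            entries[links[0]] = set(links[1:])  # file -> exception pages
    return entries

print(parse_bad_image_list(SAMPLE))
# {'File:Example-explicit.jpg': {'Some relevant article'}, 'File:Another-example.png': set()}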
I would invite you to think some more about this, and view it from a different angle. You said earlier,
A reputable publication might include textual documentation of a subject, omitting useful illustrations to avoid upsetting its readers. That's non-neutral.
You assume here that there is any kind of neutrality in Wikipedia that is not defined by reliable sources.
There isn't.
Again, you're conflating two separate concepts.
In most cases, we can objectively determine, based on information from reliable sources, that an image depicts x. This is comparable to confirming written facts via the same sources.
If a reliable source declines to include an image because it's considered "offensive," that's analogous to censoring a word (e.g. replacing "fuck" with "f**k") for the same reason.
"Include suitably licensed images illustrating their subjects" is a neutral pursuit. (Debates arise regarding images' utility — just as they do regarding text — but the goal is neutral.) "Include suitably licensed images illustrating their subjects, provided that they aren't upsetting" is *not* neutral, just as "include properly sourced information, provided that it isn't upsetting" is not.
If I go along with your statement that reliable sources avoid upsetting their readers, why would we be more "neutral" by deciding to depart from reliable sources' judgment, and consciously upsetting our readers in a way reliable sources do not?
"This is an image of x" (corroborated by information from reliable sources) is a neutral statement. "This image is upsetting" is not.
In an earlier reply, I cited ultra-Orthodox Jewish newspapers and magazines that refuse to publish photographs of women. If this were a mainstream policy, would that make it "neutral"?
As I noted previously, to emulate such a standard would be to adopt a value judgement as our own (analogous to stating as fact that "x is bad" because most reputable sources agree that it is).
Likewise, if most publications decide that it would be bad to publish illustrations alongside their coverage of a subject (on the basis that such images are likely to offend), we might address this determination via prose, but won't adopt it as *our* position.
That exact same argument could be made about text as well:
"Likewise, if most publications decide that it would be bad to publish that X is a scoundrel (on the basis that it would be likely to offend), we might address this determination via prose, but won't adopt it as *our* position."
So then we would have articles saying, "No newspaper has reported that X is a scoundrel, but he is, because –."
"X is a scoundrel" is a statement of opinion. "X is a photograph of y" (corroborated by information from reliable sources) is a statement of fact.
And as noted earlier, this is tangential to the image filter discussion.
David Levy
On 12 October 2011 14:09, David Levy lifeisunfair@gmail.com wrote:
Andreas Kolbe wrote:
We already have bad image lists like http://en.wikipedia.org/wiki/MediaWiki:Bad_image_list If you remain wedded to an abstract philosophical approach, such lists are not neutral. But they answer a real need.
Apart from the name (which the MediaWiki developers inexplicably refused to change), the bad image list is entirely compliant with the principle of neutrality (barring abuse by a particular project, which I haven't observed). It's used to prevent a type of vandalism, which typically involves the insertion of images among the most likely to offend/disgust large numbers of people. But if the need arises (for example, in the case of a "let's post harmless pictures of x everywhere" meme), it can be applied to *any* image (including one that practically no one regards as inherently objectionable).
In fact, it is specifically a measure used only in case of vandalism, and only after an image is actually being used for vandalism, per:
http://en.wikipedia.org/wiki/MediaWiki_talk:Bad_image_list
"The images listed on MediaWiki:Bad image list are prohibited by technical means from being displayed inline on pages, besides specified exceptions. Images on the list have normally been used for widespread vandalism where user blocks and page protections are impractical."
Using it as justification for the image filter under discussion shows a misunderstanding of its purpose and scope.
- d.
On Sun, Oct 9, 2011 at 9:55 AM, Ting Chen tchen@wikimedia.org wrote:
Their opinions and preferences are as legitimate as our own
This is a problematic statement. Although as a bland truism it initially seems unexceptional and obvious, it is in fact flatly untrue. It is greatly troubling to think that this statement might represent the level of discussion going on among board members.
Firstly, it ignores the basic problem that the "opinion and preference" of a large group is not merely that they should not have to see a particular class of image, but rather that no one at all should be able to see them. By showing those images by default, as we clearly plan to do, we are deliberately and knowingly privileging our "opinions and preferences" over theirs. Of course, this is the right thing to do - but it directly contradicts Ting's statement.
Secondly, it ignores the fact that an encyclopedia, at least in intention, does not deal in opinions at all, but rather in facts - and while everyone is entitled to their own opinions, no one is entitled to their own facts. The Earth is spherical, and we will show a picture illustrating that. Person A's opinion that it is actually flat is not legitimate, and we will disregard it. Millions of Jews were killed by the Nazi regime, and we will show a picture of a mass grave, because it is the truth, and we greatly prefer it to person B's opinion that the Holocaust is a mere propaganda conspiracy.
In these and a thousand other topics, there are groups who have opinions which are directly contrary to known truth, and to pretend that we regard those opinions as "equally valid" is utter nonsense, and completely contrary to the spirit that drives Wikipedians.
Cheers,
Andrew (Thparkth)
Secondly, it ignores the fact that an encyclopedia, at least in intention, does not deal in opinions at all, but rather in facts
Not at all!
You've confused "a fact" with factual. What we record is factual - but it might be a fact, or it might be an opinion. When relating opinions we reflect (or ostensibly try to) the global opinion, and occasionally some of the more significant alternative views.
Consider:
*Abby killed Betty.*
compared to
*The judge convicted Abby of killing Betty, saying that the overwhelming evidence indicated manslaughter.*
The latter is factual, and contains facts & opinions.
But this is really irrelevant to the problem at hand - because we are not talking about presenting a factually different piece of prose to suit an individual's preference (that is what the forks are for...!). Although it could be argued that we could handle alternate viewpoints better.
What we are talking about is hiding illustrative images per the sensibilities of the person viewing the page. This is an editorial rather than a content matter; related to choice of presentation & illustration. Akin to deciding on how to word a sentence. Rather than "Freddy thought frogs were fucking stupid" we might choose "Freddy did not have a high opinion of frog intelligence", because the former isn't a particularly polite expression of the material. Most people would probably wish to learn Freddy's view of frogs without the bad language!
Removal of, say, a nude image on the Vagina article does not bias or detract from the information. The image is there to provide illustration, and a visual cue to accompany the text. Hiding the image for optional viewing for people who would prefer it that way *doesn't seem controversial*.
Tom
On Wed, Oct 12, 2011 at 10:44 AM, Thomas Morton < morton.thomas@googlemail.com> wrote:
You've confused "a fact" with factual.
I've confused the adjective form with the noun form of "fact"? I'm quite sure that I have.
*The judge convicted Abby of killing Betty, saying that the overwhelming evidence indicated manslaughter.*
The latter is factual, and contains facts & opinions.
It contains facts about opinions - it does not itself express an opinion. It is both factual, and a fact.
But this is really irrelevant to the problem at hand
Definitely!
- because we are not talking about presenting a factually different piece of prose to suit an individual's preference
Although that is true, it doesn't make any difference. There is information content in an image - if there wasn't, we wouldn't need any. Making a decision to use or not to use an image is an editorial decision, and in some cases it could enhance or detract from the neutrality of the article.
Removal of, say, a nude image on the Vagina article does not bias or detract from the information.
Then we can solve the problem by removing the image completely, since the article would be completely unaffected by it.
Cheers,
Andrew (Thparkth)
It contains facts about opinions - it does not itself express an opinion. It is both factual, and a fact.
It expresses the *opinion* of the judge that Abby killed Betty :) We include it because the global *opinion* is that judges are in a position to make such statements with authority. And the fact is that Abby is convicted of killing Betty.
My point was that opinion influences both our content and our choice of material (just not our opinion, theoretically).
Perhaps I was confused by your original:
*an encyclopedia, at least in intention, does not deal in opinions at all, but rather in facts*
Which suggested we were uninterested in opinion (not true, of course).
There is information content in an image - if there wasn't, we wouldn't need any.
We regularly (and rightly) use images in a purely illustrative context - this is fine. Images look nice. They can also express the same concepts as the prose in a different way (which might connect with different people). But in the vast majority of cases images are supplementary to the prose.
Yes; in some cases an image may contain information not in the prose - this is a legitimate problem to consider (although if we are just hiding images & leaving them accessible then there doesn't seem to be an issue to me).
Making a decision to use or not to use an image is an editorial decision, and in some cases it could enhance or detract from the neutrality of the article.
Yes, it could. But this is where we get to the finicky part of the situation - because if we get the filtering right this won't matter, because it is an individual choice about what to see/not see.
What you are talking about there is abusing any filter to bias or detract from the neutrality of an article for readers.
When putting together a product for a user base, you have to look at what users want, and also at what they need. They want a filter to hide X, and they need one that does so properly and without abuse.
So, yes, I agree that a filter has potential for abuse - and any technical solution should take that into consideration and prevent it.
Removal of, say, a nude image on the Vagina article does not bias or detract from the information.
Then we can solve the problem by removing the image completely, since the article would be completely unaffected by it.
Not really; the image certainly has value for some. Hiding it on page load for those who do not wish it to appear is also good. We don't have to have a binary solution....
So long as the image
a) Appears for people who use and appreciate it
b) Is initially hidden for those who do not wish to see it
c) Appears for those apathetic to its appearance
Then this is surely a nice improvement to the current situation of "Appears for everyone", one which does not remove *any* information from the reader and provides them with the experience they wish.
Here's a similar point; if we had a setting that said "do not show plots initially" that collapsed plots on movies, books, etc., this would effect the same thing. The reader would have expressed a preference in viewing material; none of that material is removed from his access, but he is able to browse Wikipedia in a format he prefers. Win!
If a reader wanted to read Wikipedia with the words "damn" and "crap" substituted for every (non-quoted) "fuck" and "shit", why is this a problem? It alters presentation of the content to suit their sensibilities, but without necessarily detracting from the content.
Another thought; the mobile interface collapses all sections by default on page load (apart from the lead). Hiding material in this format (where the reader has expressed an implicit preference to use Wikipedia on a mobile device) doesn't seem to be controversial.
Hiding an image to suit individual preference is a good thing. It's just a technical challenge to make sure the preference is well reflected, the system is not abused and the content remains accessible.
Tom
From: David Levy lifeisunfair@gmail.com
You assume here that there is any kind of neutrality in Wikipedia that is not defined by reliable sources.
There isn't.
Again, you're conflating two separate concepts.
In most cases, we can objectively determine, based on information from reliable sources, that an image depicts x. This is comparable to confirming written facts via the same sources.
If a reliable source declines to include an image because it's considered "offensive," that's analogous to censoring a word (e.g. replacing "fuck" with "f**k") for the same reason.
"Include suitably licensed images illustrating their subjects" is a neutral pursuit.
Well, you need to be clear that you're using the word "neutral" here with a different meaning than the one ascribed to it in NPOV policy.
Neutrality is not abstractly defined: like notability or verifiability, it has a very specific meaning within Wikipedia policy. That meaning is irrevocably tied to reliable sources.
Neutrality consists in our reflecting fairly, proportionately, and without bias, how reliable sources treat a subject.
"Including suitably licensed images illustrating their subjects" can easily *not* be a neutral pursuit. For example, if we end up featuring more female nudity than reliable sources do, and in places where reliable sources would eschew it, we are not being neutral, even if each image illustrates its subject.
(Debates arise regarding images' utility — just as they do regarding text — but the goal is neutral.) "Include suitably licensed images illustrating their subjects, provided that they aren't upsetting" is *not* neutral, just as "include properly sourced information, provided that it isn't upsetting" is not.
Your assumption that reliably published sources do not publish the images you have in mind here because they do not wish to upset people is unexamined, and disregards other considerations – of aesthetics, didactics, psychology, professionalism, educational value, quality of execution, and others.
It also disregards the possibility that Wikipedians may wish to include images for other reasons than simply to educate the reader – because they like the images, find them attractive, wish to shock, and so forth.
Basically, you are positing that whatever you like, or the community likes, is neutral. :) That is an approach that would not fly for text, and it disregards our demographic imbalance.
If I go along with your statement that reliable sources avoid upsetting their readers, why would we be more "neutral" by deciding to depart from reliable sources' judgment, and consciously upsetting our readers in a way reliable sources do not?
"This is an image of x" (corroborated by information from reliable sources) is a neutral statement. "This image is upsetting" is not.
Here I need to remind you that it was you who expressed the belief that reliable sources choose not to publish imagery because it might "upset" people. As I said above, this is an unexamined assumption that discards other considerations.
Our approach to illustration should follow that embraced by reliable sources, and just as we should not second-guess why sources say what they do, and whether they omitted to say important things for fear of upsetting readers, we should not second-guess their approach to illustration either, but simply follow it.
What we *should* second-guess is the motivation of Wikipedians who wish to depart from that approach.
In an earlier reply, I cited ultra-Orthodox Jewish newspapers and magazines that refuse to publish photographs of women. If this were a mainstream policy, would that make it "neutral"?
As I noted previously, to emulate such a standard would be to adopt a value judgement as our own (analogous to stating as fact that "x is bad" because most reputable sources agree that it is).
You said in an earlier mail that in writing our texts, our job is to neutrally reflect the real-world balance, *including* any presumed biases. I agree with that.
My argument is that the same applies to illustration, for exactly the same reasons.
We seem to be agreeing on one thing: that Wikipedia's approach to illustration differs from that in our sources. You seem to be saying that is a good thing; I say it isn't, or at least that we may have a little too much of a good thing.
Andreas
Andreas Kolbe wrote:
Well, you need to be clear that you're using the word "neutral" here with a different meaning than the one ascribed to it in NPOV policy.
Neutrality is not abstractly defined: like notability or verifiability, it has a very specific meaning within Wikipedia policy. That meaning is irrevocably tied to reliable sources.
Neutrality consists in our reflecting fairly, proportionately, and without bias, how reliable sources treat a subject.
Again, reflecting views != adopting views as our own.
We're going around in circles, so I don't care to elaborate again.
Your assumption that reliably published sources do not publish the images you have in mind here because they do not wish to upset people is unexamined, and disregards other considerations – of aesthetics, didactics, psychology, professionalism, educational value, quality of execution, and others.
I referred to a scenario in which an illustration is omitted because of a belief that its inclusion would upset people, but I do *not* assume that this is the only possible rationale.
I also don't advocate that every relevant image be shoehorned into an article. (Many are of relatively low quality and/or redundant to others.) My point is merely that "it upsets people" isn't a valid reason for us to omit an image.
As our image availability differs from that of most publications (i.e. we can't simply duplicate the pictures that they run), we *always* must evaluate — using the most objective criteria possible — how well an image illustrates its subject. It's impossible to eliminate all subjectivity, but we do our best.
It also disregards the possibility that Wikipedians may wish to include images for other reasons than simply to educate the reader – because they like the images, find them attractive, wish to shock, and so forth.
No, I don't disregard that possibility. Such problems arise with text too.
Basically, you are positing that whatever you like, or the community likes, is neutral. :)
If you were familiar with my on-wiki rants, you wouldn't have written that.
In an earlier reply, I cited ultra-Orthodox Jewish newspapers and magazines that refuse to publish photographs of women. If this were a mainstream policy, would that make it "neutral"?
Please answer the above question.
You said in an earlier mail that in writing our texts, our job is to neutrally reflect the real-world balance, *including* any presumed biases. I agree with that.
Yes, our content reflects the biases' existence. It does *not* affirm their correctness.
David Levy
From: David Levy lifeisunfair@gmail.com
In an earlier reply, I cited ultra-Orthodox Jewish newspapers and magazines that refuse to publish photographs of women. If this were a mainstream policy, would that make it "neutral"?
Please answer the above question.
NPOV policy as written would require us to do the same, yes. In the same way, if no reliable sources were written about women, we would not be able to have articles on them.
You said in an earlier mail that in writing our texts, our job is to neutrally reflect the real-world balance, *including* any presumed biases. I agree with that.
Yes, our content reflects the biases' existence. It does *not* affirm their correctness.
By following sources, and describing points of view with which you personally do not agree, you are not affirming the correctness of these views. You are simply writing neutrally. Do you see the difference?
Images are content too, just like text. By following sources' illustration conventions, you are not affirming that you agree with those conventions, or consider them neutral yourself, but you *are* editing neutrally, i.e. in line with reliable sources.
Just as an idea, if we want to gather data on what readers think of our use of illustrations, we should add a point about image use to the article feedback template.
Andreas
I wrote:
In an earlier reply, I cited ultra-Orthodox Jewish newspapers and magazines that refuse to publish photographs of women. If this were a mainstream policy, would that make it "neutral"?
Andreas Kolbe replied:
NPOV policy as written would require us to do the same, yes.
The community obviously doesn't share your interpretation of said policy.
In the same way, if no reliable sources were written about women, we would not be able to have articles on them.
The images in question depict subjects documented by reliable sources (through which the images' accuracy and relevance are verifiable).
Essentially, you're arguing that we're required to present information only in the *form* published by reliable sources.
By following sources, and describing points of view with which you personally do not agree, you are not affirming the correctness of these views. You are simply writing neutrally.
Agreed. And that's what we do. We describe views. We don't adopt them as our own.
If reliable sources deem a word objectionable and routinely censor it (e.g. when referring to the Twitter feed "Shit My Dad Says"), we don't follow suit.
The same principle applies to imagery deemed objectionable. We might cover the controversy in our articles (depending on the context), but we won't suppress such content on the basis that others do.
As previously discussed, this is one of many reasons why reliable sources might decline to include images. Fortunately, we needn't read their minds. As I noted, we *always* must evaluate our available images (the pool of which differs substantially from those of most publications) to gauge their illustrative value. We simply apply the same criteria (intended to be as objective as possible) across the board.
Images are content too, just like text.
Precisely. And unless an image introduces information that isn't verifiable via our reliable sources' text, there's no material distinction.
David Levy
bla
From: David Levy lifeisunfair@gmail.com To: foundation-l@lists.wikimedia.org Sent: Friday, 14 October 2011, 3:52 Subject: Re: [Foundation-l] Letter to the community on Controversial Content
I wrote:
In an earlier reply, I cited ultra-Orthodox Jewish newspapers and magazines that refuse to publish photographs of women. If this were a mainstream policy, would that make it "neutral"?
Andreas Kolbe replied:
NPOV policy as written would require us to do the same, yes.
The community obviously doesn't share your interpretation of said policy.
It's not a question of interpretation; it is the very letter of the policy. Due weight and neutrality are established by reliable sources.
Now, let's look at your example: if you and I lived in a society that did not produce reliable sources about women, and refused to publish pictures of them, then I guess we would be unlikely to work on a wiki that
- defines neutrality as fairly representing reliable sources without bias,
- derives its definition of due weight from the weight any topic (incl. women) is given in reliable sources,
- requires verifiability in reliable sources for every statement made in our wiki,
- and disallows original research.
Instead, we would start a revolutionary wiki with a political agenda that
- denounces the status quo,
- criticises the inhuman and pervasive bias against women,
- refuses to be bound by it,
- sets out to start a new tradition of writing about, and depicting, women,
- and vows to subvert the established system in order to create a new world.
We would set out to be *different* from the existing sources.
However, in our world, that is not how Wikipedia views reliable sources. Wikipedia is not set up to be in antagonism to its sources; it is set up to be in agreement with them.
Andreas
David,
I just noticed that I left a "bla" at the top of my reply to you. That wasn't a comment on your post: my e-mail editor often doesn't allow me to break the indent of the post I'm replying to. My work-around is to type some random unindented text at the top of my editor window, and then copy that down to the place where I want to insert a reply, so I can start an unindented line. That's what I did here; I just forgot to delete it before I posted.
Cheers, Andreas
Andreas Kolbe wrote:
NPOV policy as written would require us to do the same, yes.
The community obviously doesn't share your interpretation of said policy.
It's not a question of interpretation; it is the very letter of the policy.
It most certainly is a matter of interpretation. If the English Wikipedia community shared yours, we wouldn't be having this discussion.
In this context, you view images as entities independent from the people and things depicted therein (and believe that our use of illustrations not included in other publications constitutes undue weight).
Conversely, the community doesn't treat "images of x" as a subject separate from "x" (unless the topic "images of x" is sufficiently noteworthy in its own right). If an image illustrates x in a manner consistent with what reliable sources tell us about x, it clears the pertinent hurdle. (There are, of course, other inclusion criteria.)
Due weight and neutrality are established by reliable sources.
And these are the sources through which the images' accuracy and relevance are verified.
David Levy
From: David Levy lifeisunfair@gmail.com
It most certainly is a matter of interpretation. If the English Wikipedia community shared yours, we wouldn't be having this discussion.
In this context, you view images as entities independent from the people and things depicted therein
I view images as *content*, subject to the same fundamental policies and principles as any other content.
(and believe that our use of illustrations not included in other publications constitutes undue weight).
For the avoidance of doubt, I am not saying that we should use the *very same* illustrations that reliable sources use – we can't, for obvious copyright reasons, and there is no need to follow sources that slavishly anyway.
But as we are writing an encyclopedia, it would be good to strive for images equivalent to those found in educational standard works. We could also look at good educational websites (bearing in mind that some specialist scholarly works do without colour images to keep printing costs low).
So I view it as important, before we use an illustration, to consider whether reliable sources in the field use the same kind of illustration. For example, the German vulva image that has been discussed several times conforms in style to the illustrations used in scholarly (e.g. medical) works, and even educational works for minors (at least in Germany). So, good image. The anal fisting image included in the English Wikipedia I would not have used, because I don't think it's the type of image we would find in a reputably published illustrated source, even an uncensored one, on sexology (which would be the model to follow in this topic area). It just looks too amateurish and home-made, and home-made + sexually explicit is a poor combination. (The image in the frotting article is another example.)
I agree by the way that we should never write F*** or s***. Some newspapers do that, but it is not a practice that the best and most reliable sources (scholarly, educational sources as opposed to popular press) use. We should be guided by the best, most encyclopedic sources. YMMV.
Cheers, Andreas
I wrote:
In this context, you view images as entities independent from the people and things depicted therein (and believe that our use of illustrations not included in other publications constitutes undue weight).
Andreas Kolbe replied:
I view images as *content*, subject to the same fundamental policies and principles as any other content.
You view them as standalone pieces of information, entirely distinct from those conveyed textually. You believe that their inclusion constitutes undue weight unless reliable sources utilize the same or similar illustrations (despite their publication of text establishing the images' accuracy and relevance).
The English Wikipedia community disagrees with you.
For the avoidance of doubt, I am not saying that we should use the *very same* illustrations that reliable sources use – we can't, for obvious copyright reasons, and there is no need to follow sources that slavishly anyway.
I realize that you advocate the use of comparable illustrations, but in my view, "slavish" is a good description of the extent to which you want us to emulate our sources' presentational styles.
I agree by the way that we should never write F*** or s***. Some newspapers do that, but it is not a practice that the best and most reliable sources (scholarly, educational sources as opposed to popular press) use. We should be guided by the best, most encyclopedic sources. YMMV.
I previously mentioned "Shit My Dad Says." Have you seen the sources cited in the English Wikipedia's article?
Time (the world's largest weekly news magazine) refers to it as "Sh*t My Dad Says."
http://www.time.com/time/arts/article/0,8599,1990838,00.html
The New York Times (recipient of more Pulitzer Prizes than any other news organization) uses "Stuff My Dad Says." So does the Los Angeles Times, which states that the subject's actual name is "unsuitable for a family publication."
http://www.nytimes.com/2010/05/23/books/review/InsideList-t.html http://latimesblogs.latimes.com/technology/2009/09/mydadsays-twitter.html
You might dismiss those sources as the "popular press," but they're the most reputable ones available on the subject. Should we deem their censorship sacrosanct and adopt it as our own?
David Levy
You view them as standalone pieces of information, entirely distinct from those conveyed textually. You believe that their inclusion constitutes undue weight unless reliable sources utilize the same or similar illustrations (despite their publication of text establishing the images' accuracy and relevance).
The English Wikipedia community disagrees with you.
The English Wikipedia community, like any other, has always contained a wide spectrum of opinion on such matters. We have seen this in the past, with long discussions about contentious cases like the goatse image, or the Katzouras photos. That is unlikely to ever change.
But we do also subscribe to the principle of least astonishment. If the average reader finds our image choices odd, or unexpectedly and needlessly offensive, then we alienate a large part of our target audience, and may indeed only attract an unnecessarily limited demographic as contributors.
The New York Times (recipient of more Pulitzer Prizes than any other news organization) uses "Stuff My Dad Says." So does the Los Angeles Times, which states that the subject's actual name is "unsuitable for a family publication."
http://www.nytimes.com/2010/05/23/books/review/InsideList-t.html http://latimesblogs.latimes.com/technology/2009/09/mydadsays-twitter.html
You might dismiss those sources as the "popular press," but they're the most reputable ones available on the subject. Should we deem their censorship sacrosanct and adopt it as our own?
No. :)
Best, Andreas
P.S. It's been pointed out to me that my e-mail client (yahoo) does a poor job with formatting and threading. That's true, and I'm not happy with it either. I'll have a look at alternatives.
On Tue, Oct 18, 2011 at 2:44 AM, Andreas Kolbe jayen466@yahoo.com wrote:
The English Wikipedia community, like any other, has always contained a wide spectrum of opinion on such matters. We have seen this in the past, with long discussions about contentious cases like the goatse image, or the Katzouras photos. That is unlikely to ever change.
But we do also subscribe to the principle of least astonishment. If the average reader finds our image choices odd, or unexpectedly and needlessly offensive, then we alienate a large part of our target audience, and may indeed only attract an unnecessarily limited demographic as contributors.
You completely and utterly misrepresent what the principle of least astonishment is supposed to address. It is a matter of where people should be directed when there are conflicting disambiguation issues. It doesn't refer to content issues in the slightest. Period. We don't say you can read an article about X and not see pictures of X. That is ridiculous.
On Tue, Oct 18, 2011 at 7:00 PM, Jussi-Ville Heiskanen <cimonavaro@gmail.com> wrote:
You completely and utterly misrepresent what the principle of least astonishment is supposed to address. It is a matter of where people should be directed when there are conflicting disambiguation issues. It doesn't refer to content issues in the slightest. Period. We don't say you can read an article about X and not see pictures of X. That is ridiculous.
The principle of least astonishment is mentioned thrice in the board resolution on controversial content:
http://wikimediafoundation.org/wiki/Resolution:Controversial_content
"We support the principle of least astonishment: content on Wikimedia projects should be presented to readers in such a way as to respect their expectations of what any page or feature might contain"
Signpost coverage: http://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2011-06-06/News_an...
Andreas
On Tue, Oct 18, 2011 at 9:41 PM, Andreas K. jayen466@gmail.com wrote:
The principle of least astonishment is mentioned thrice in the board resolution on controversial content:
http://wikimediafoundation.org/wiki/Resolution:Controversial_content
"We support the principle of least astonishment: content on Wikimedia projects should be presented to readers in such a way as to respect their expectations of what any page or feature might contain"
Signpost coverage: http://en.wikipedia.org/wiki/Wikipedia:Wikipedia_Signpost/2011-06-06/News_an...
Yes, but that is not proof of what we as a community understand the principle to mean, it means the board is on crack.
On Wed, Oct 19, 2011 at 7:59 PM, Jussi-Ville Heiskanen cimonavaro@gmail.com wrote:
Yes, but that is not proof of what we as a community understand the principle to mean, it means the board is on crack.
That's not a helpful contribution to this discussion.
On Wed, Oct 19, 2011 at 12:07 PM, Andrew Garrett agarrett@wikimedia.org wrote:
On Wed, Oct 19, 2011 at 7:59 PM, Jussi-Ville Heiskanen cimonavaro@gmail.com wrote:
Yes, but that is not proof of what we as a community understand the principle to mean, it means the board is on crack.
That's not a helpful contribution to this discussion.
Stating the obvious should never be necessary, but there are people here living in denial, so it has to be stated, no matter how obviously true it is.
On 19 October 2011 10:07, Andrew Garrett agarrett@wikimedia.org wrote:
On Wed, Oct 19, 2011 at 7:59 PM, Jussi-Ville Heiskanen cimonavaro@gmail.com wrote:
Yes, but that is not proof of what we as a community understand the principle to mean, it means the board is on crack.
That's not a helpful contribution to this discussion.
I can quite understand you don't want to hear it (the tone argument), but it remains both true and relevant.
- d.
On 19.10.2011 11:07, Andrew Garrett wrote:
On Wed, Oct 19, 2011 at 7:59 PM, Jussi-Ville Heiskanen cimonavaro@gmail.com wrote:
Yes, but that is not proof of what we as a community understand the principle to mean, it means the board is on crack.
That's not a helpful contribution to this discussion.
But if I look at the current reactions, some might agree with this point of view. So far I have not seen any response that provides sufficient information to strengthen the argumentation of the WMF or the Board. All we are presented with are assumptions about what the problem might be and whether it exists at all. There was not a single study directed at the readers, particularly not a single one directed at a diverse, multicultural audience. All we got is the worthless result of the referendum.
I ask Sue and Philippe again: WHERE ARE THE PROMISED RESULTS - BY PROJECT?!
I asked for this shit many months ago. I repeated my request on a daily/weekly basis. All I got wasn't even a T-shirt; it was nothing. That makes people like me very angry and leads me to believe that the WMF is either trying to hide the facts to push its own point of view, or that it is entirely incompetent. Alternatively, they are just busy counting the money...
I have lost all trust in the Foundation, and I believe that it would sell out the basic idea of the project whenever possible. Knowledge + the principle of least astonishment, applied to everything, no matter what the facts are? Then you truly have not understood the foundation of knowledge. Knowledge is interesting because it is shocking. It destroys your own sand-castle world on a daily basis.
Hard words? Yes, these are hard words, based upon the current situation and reactions. All we get are messages telling us to calm down, while nothing changes. Now we read on some back pages (discussions are spread out everywhere) that there will be a test run inviting readers to flag images. Another measure to improve acceptance if the filter is enabled, another study based on an English-speaking-only community/audience that will be made the rule of thumb for every project? It seems to be the case. But where does all this will to implement a filter come from? No one has said it clearly, no one has published a reliable source (the "Harris report", a true insider joke), and you expect us to believe this shit?
The referendum was a farce, the new approach is again a farce. The only way left to assume good faith is to claim that they are on crack. Anything else would be worse.
nya~
On Wed, Oct 19, 2011 at 5:07 AM, Tobias Oelgarte <tobias.oelgarte@googlemail.com> wrote:
I ask Sue and Philippe again: WHERE ARE THE PROMISED RESULTS - BY PROJECT?!
First, there's a bit of a framing difference here. We did not initially promise results by project. Even now, I've never promised that. What I've said is that we would attempt to do so. But it's not solely in the WMF's purview - the election had a team of folks in charge of it who came from the community and it's not the WMF's role to dictate to them how to do their job.
I (finally) have the full results parsed in such a way as to make it *potentially* possible to release them for discussion by project. However, I'm still waiting for the committee to approve that release. I'll re-ping on that, because, frankly, it's been a week or so. That will be my next email. :)
pb
___________________ Philippe Beaudette Head of Reader Relations Wikimedia Foundation, Inc.
415-839-6885, x 6643
philippe@wikimedia.org
On 19.10.2011 23:19, Philippe Beaudette wrote:
Don't get me wrong, but this should have been part of the results in the first place. The first calls for such results go back to before the referendum even started. [1] That leaves a very bad impression, and so far the WMF has done nothing to regain any trust. Instead, you have started to lose even more. [2]
[1] http://meta.wikimedia.org/wiki/Talk:Image_filter_referendum/Archive1#Quantif... [2] http://meta.wikimedia.org/wiki/User_talk:WereSpielChequers/filter#Thanks_for...
nya~
Andreas Kolbe wrote:
The English Wikipedia community, like any other, has always contained a wide spectrum of opinion on such matters.
Of course. But consensus != unanimity.
Your interpretation of the English Wikipedia's neutrality policy contradicts that under which the site operates.
The New York Times (recipient of more Pulitzer Prizes than any other news organization) uses "Stuff My Dad Says." So does the Los Angeles Times, which states that the subject's actual name is "unsuitable for a family publication."
http://www.nytimes.com/2010/05/23/books/review/InsideList-t.html http://latimesblogs.latimes.com/technology/2009/09/mydadsays-twitter.html
You might dismiss those sources as the "popular press," but they're the most reputable ones available on the subject. Should we deem their censorship sacrosanct and adopt it as our own?
No. :)
Please elaborate. Why shouldn't we follow the example set by the most reliable sources?
David Levy
From: David Levy lifeisunfair@gmail.com
The New York Times (recipient of more Pulitzer Prizes than any other news organization) uses "Stuff My Dad Says." So does the Los Angeles Times, which states that the subject's actual name is "unsuitable for a family publication."
http://www.nytimes.com/2010/05/23/books/review/InsideList-t.html http://latimesblogs.latimes.com/technology/2009/09/mydadsays-twitter.html
You might dismiss those sources as the "popular press," but they're the most reputable ones available on the subject. Should we deem their censorship sacrosanct and adopt it as our own?
No. :)
Please elaborate. Why shouldn't we follow the example set by the most reliable sources?
I don't consider press sources the most reliable sources, or in general a good model to follow. Even among press sources, there are many (incl. Reuters) who call the Twitter feed by its proper name, "Shit my dad says".
Scholars don't write f*ck when they mean fuck. As an educational resource, we should follow the best practices adopted by educational and scholarly sources.
Best, Andreas
Andreas Kolbe wrote:
I don't consider press sources the most reliable sources, or in general a good model to follow. Even among press sources, there are many (incl. Reuters) who call the Twitter feed by its proper name, "Shit my dad says".
The sources to which I referred are the most reputable ones cited in the English Wikipedia's article.
Of course, I agree that we needn't emulate the style in which they present information. That's my point.
David Levy
On Tue, Oct 18, 2011 at 10:30 PM, David Levy lifeisunfair@gmail.com wrote:
I understand that. But if we use a *different* style, it should still be traceable to an educational or scholarly standard, rather than one we have made up, or inherited from 4chan. Would you agree?
Andreas
Andreas Kolbe wrote:
But if we use a *different* style, it should still be traceable to an educational or scholarly standard, rather than one we have made up, or inherited from 4chan. Would you agree?
Yes, and I dispute the premise that the English Wikipedia has failed in this respect.
As I've noted, we always must gauge available images' illustrative value on an individual basis. We do so by applying criteria intended to be as objective as possible, thereby reflecting (as closely as we can, given the relatively small pool of libre images) the quality standards upheld by reputable publications. We also reject images inconsistent with reliable sources' information on the subjects depicted therein.
We don't, however, exclude images on the basis that others declined to publish the same or similar illustrations.
Images widely regarded as "objectionable" commonly are omitted for this reason (which is no more relevant to Wikipedia than the censorship of "objectionable" words is). But again, we needn't seek to determine when this has occurred. We can simply apply our normal assessment criteria across the board (irrespective of whether an image depicts a sexual act or a pine tree).
David Levy
On Wed, Oct 19, 2011 at 4:11 AM, David Levy lifeisunfair@gmail.com wrote:
Andreas Kolbe wrote:
But if we use a *different* style, it should still be traceable to an educational or scholarly standard, rather than one we have made up, or inherited from 4chan. Would you agree?
Yes, and I dispute the premise that the English Wikipedia has failed in this respect.
I think we have already agreed that our standards for inclusion differ from those used by reliable sources.
As I've noted, we always must gauge available images' illustrative value on an individual basis. We do so by applying criteria intended to be as objective as possible, thereby reflecting (as closely as we can, given the relatively small pool of libre images) the quality standards upheld by reputable publications. We also reject images inconsistent with reliable sources' information on the subjects depicted therein.
We don't, however, exclude images on the basis that others declined to publish the same or similar illustrations.
Again, on this point you advocate that we should differ from the standards upheld by reputable publications.
Images widely regarded as "objectionable" commonly are omitted for this reason (which is no more relevant to Wikipedia than the censorship of "objectionable" words is). But again, we needn't seek to determine when this has occurred. We can simply apply our normal assessment criteria across the board (irrespective of whether an image depicts a sexual act or a pine tree).
We're coming back to the same sticking point: you're assuming that reputable sources omit media because they are "objectionable", rather than for any valid reason, and you think they are wrong to do so. You are putting your judgment above that of the sources, something that I presume you would never do in matters of text.
On Wed, Oct 19, 2011 at 4:12 AM, David Levy lifeisunfair@gmail.com wrote:
Andreas Kolbe wrote:
Satisfying most users is a laudable aim for any service provider, whether revenue is involved or not. Why should we not aim to satisfy most of our users, or appeal to as many potential users as possible?
It depends on the context. There's nothing inherently bad about satisfying as many users as possible. It's doing so in a discriminatory, non-neutral manner that's problematic.
In my view, the best we can do is follow the standards of international scholarship. I trust the international body of scholarship (as a whole, not necessarily each individual representative of it) to be as non-discriminatory and neutral as is humanly possible.
Best, Andreas
Andreas Kolbe wrote:
But if we use a *different* style, it should still be traceable to an educational or scholarly standard, rather than one we have made up, or inherited from 4chan. Would you agree?
Yes, and I dispute the premise that the English Wikipedia has failed in this respect.
I think we have already agreed that our standards for inclusion differ from those used by reliable sources.
Yes, in part. You wrote "traceable to," not "identical to."
I elaborated in the text quoted below.
As I've noted, we always must gauge available images' illustrative value on an individual basis. We do so by applying criteria intended to be as objective as possible, thereby reflecting (as closely as we can, given the relatively small pool of libre images) the quality standards upheld by reputable publications. We also reject images inconsistent with reliable sources' information on the subjects depicted therein.
We don't, however, exclude images on the basis that others declined to publish the same or similar illustrations.
Again, on this point you advocate that we should differ from the standards upheld by reputable publications.
Indeed, but *not* when it comes to images' basic illustrative properties. Again, I elaborated in the text quoted below.
Images widely regarded as "objectionable" commonly are omitted for this reason (which is no more relevant to Wikipedia than the censorship of "objectionable" words is). But again, we needn't seek to determine when this has occurred. We can simply apply our normal assessment criteria across the board (irrespective of whether an image depicts a sexual act or a pine tree).
For the "Pine" article, we examine the available images of pine trees (and related entities, such as needles, cones and seeds) and assess their illustrative properties as objectively as possible. Our goal is to include the images that best enhance readers' understanding of the subject.
This is exactly what reputable publications do. (The specific images available to them differ and sometimes exceed the quality of those available to us, of course.)
This process can be applied to images depicting almost any subject, even if others decline to do so. I don't insist that we automatically include lawful, suitably licensed images or shout "WIKIPEDIA IS NOT CENSORED!" when we don't. I merely advocate that we apply the same assessment criteria across the board. Inferior images (whether they depict pine trees, sexual acts or anything else) should be omitted.
We're coming back to the same sticking point: you're assuming that reputable sources omit media because they are "objectionable", rather than for any valid reason, and you think they are wrong to do so.
No, I'm *not* assuming that this is the only reason, nor am I claiming that this is "wrong" for them to do.
We *always* must independently determine whether a valid reason to omit media exists. We might share some such reasons (e.g. low illustrative value, inferiority to other available media, copyright issues) with reliable sources. Other reasons (e.g. non-free licensing) might apply to us and not to reliable sources. Still other reasons (e.g. "upsetting"/"offensive" nature, noncompliance with local print/broadcast regulations, incompatibility with paper, space/time constraints) might apply to reliable sources and not to us.
Again, we needn't ponder why a particular illustration was omitted or what was available to a publication by its deadline. We need only determine whether the images currently available to us meet the standards that we apply across the board.
David Levy
On Wed, Oct 19, 2011 at 10:29 PM, David Levy lifeisunfair@gmail.com wrote:
Indeed, but *not* when it comes to images' basic illustrative properties. Again, I elaborated in the text quoted below.
This process can be applied to images depicting almost any subject, even if others decline to do so.
I mentioned before that a video of rape would have basic illustrative properties in the article on rape, yet still be deeply inappropriate. Rather than enhancing the educational value of the article, it would completely destroy it. Whether to add a media file to an article or not is always a cost/benefit question. It does not make sense to argue that any benefit, however small and superficial, outweighs any cost, however large and substantive.
We're coming back to the same sticking point: you're assuming that reputable sources omit media because they are "objectionable", rather than for any valid reason, and you think they are wrong to do so.
No, I'm *not* assuming that this is the only reason, nor am I claiming that this is "wrong" for them to do.
We *always* must independently determine whether a valid reason to omit media exists. We might share some such reasons (e.g. low illustrative value, inferiority to other available media, copyright issues) with reliable sources. Other reasons (e.g. non-free licensing) might apply to us and not to reliable sources. Still other reasons (e.g. "upsetting"/"offensive" nature, noncompliance with local print/broadcast regulations, incompatibility with paper, space/time constraints) might apply to reliable sources and not to us.
Again, we needn't ponder why a particular illustration was omitted or what was available to a publication by its deadline. We need only determine whether the images currently available to us meet the standards that we apply across the board.
I would rather apply the standards of reputable publications in our articles, and leave the rest to a Commons link. YMMV.
Andreas
Andreas Kolbe wrote:
Whether to add a media file to an article or not is always a cost/benefit question. It does not make sense to argue that any benefit, however small and superficial, outweighs any cost, however large and substantive.
Agreed. I'm not arguing that.
Your replies seem indicative of a belief that my position is "Let's include every illustrative image, no matter what." That isn't so. My point is merely that we aren't bound by others' decisions.
David Levy
On Thu, Oct 20, 2011 at 7:19 PM, David Levy lifeisunfair@gmail.com wrote:
David,
I think we've reached about as much agreement in this stimulating exchange as we're likely to. I don't actually know what your position in any specific dispute around illustration would be; I don't think we've ever met in one of those on-wiki. I don't assume that we'd necessarily be far apart.
I wouldn't go so far as to say that we should consider ourselves *bound* by others' decisions either. But I do think that the presence or absence of precedents in reliable sources is an important factor that we should weigh when we're contemplating the addition of a particular type of illustration.
For example, if a reader complains about images in the article on the [[rape of Nanking]], it is useful if an editor can say, Look, these are the standard works on the rape of Nanking, and they include images like that. If someone complains about an image or media file in some other article and we cannot point to a single reputable source that has included a similar illustration, then we may indeed be at fault.
Regards, Andreas
Andreas Kolbe wrote:
I wouldn't go so far as to say that we should consider ourselves *bound* by others' decisions either. But I do think that the presence or absence of precedents in reliable sources is an important factor that we should weigh when we're contemplating the addition of a particular type of illustration.
I believe that we should focus on the criteria behind reliable sources' illustrative decisions, *not* the decisions themselves. As previously noted, some considerations are applicable to Wikipedia, while others are not.
We needn't know why a particular illustration was omitted. If we apply similar criteria, we'll arrive at similar decisions, excepting instances in which considerations applicable to reliable sources (e.g. those based on images' "upsetting"/"offensive" nature) are inapplicable to Wikipedia and instances in which considerations inapplicable to reliable sources (e.g. those based on images' non-free licensing) are applicable to Wikipedia.
For example, if a reader complains about images in the article on the [[rape of Nanking]], it is useful if an editor can say, Look, these are the standard works on the rape of Nanking, and they include images like that.
An editor *can* do that. It's the inverse situation that requires deeper analysis.
If someone complains about an image or media file in some other article and we cannot point to a single reputable source that has included a similar illustration, then we may indeed be at fault.
Quite possibly. We'd need to determine whether the relevant criteria have been met.
David Levy
On Fri, Oct 21, 2011 at 2:13 AM, David Levy lifeisunfair@gmail.com wrote:
Andreas Kolbe wrote:
I wouldn't go so far as to say that we should consider ourselves *bound* by others' decisions either. But I do think that the presence or absence of precedents in reliable sources is an important factor that we should weigh when we're contemplating the addition of a particular type of illustration.
I believe that we should focus on the criteria behind reliable sources' illustrative decisions, *not* the decisions themselves.
Ah well, that *is* second-guessing the source, because unless the author tells you, you have no way of knowing *why* they didn't include a particular type of image.
As I said, there may be other good reasons such as educational psychology – we make up our own rules at our peril.
If we did that for text, we'd be guessing why an author might not have mentioned such and such a thing, and applying our "correction".
As previously noted, some considerations are applicable to Wikipedia, while others are not.
We needn't know why a particular illustration was omitted. If we apply similar criteria, we'll arrive at similar decisions, excepting instances in which considerations applicable to reliable sources (e.g. those based on images' "upsetting"/"offensive" nature) are inapplicable to Wikipedia ...
I don't subscribe to the notion that Wikipedia should go out of its way (= depart from reliable sources' standards) to upset or offend readers where reliable sources don't.
Andreas
I wrote:
I believe that we should focus on the criteria behind reliable sources' illustrative decisions, *not* the decisions themselves.
Andreas Kolbe replied:
Ah well, that *is* second-guessing the source, because unless the author tells you, you have no way of knowing *why* they didn't include a particular type of image.
I've repeatedly addressed this point and explained why I regard it as moot. You needn't agree with me, but it's frustrating when you seemingly disregard what I've written.
You actually quoted the relevant text later in your message:
We needn't know why a particular illustration was omitted. If we apply similar criteria, we'll arrive at similar decisions, excepting instances in which considerations applicable to reliable sources (e.g. those based on images' "upsetting"/"offensive" nature) are inapplicable to Wikipedia ...
I used the phrase "why a particular illustration was omitted," which is remarkably similar to "why they didn't include a particular type of image." I've made such statements (sometimes with further elaboration) in several replies.
Again, I don't demand that you agree with me, but I humbly request that you acknowledge my position.
If we did that for text, we'd be guessing why an author might not have mentioned such and such a thing, and applying our "correction".
Again, the images in question don't introduce information inconsistent with that published by reliable sources; they merely illustrate the things that said sources tell us.
And again, we haven't pulled our image evaluation criteria out of thin air. They reflect those employed by the very same publications.
Our application of these criteria entails no such "guessing." You seem to envision a scenario in which we seek to determine whether a particular illustration was omitted for a reason inapplicable to Wikipedia. In actuality, we simply set aside such considerations (but we retain the others, so if an illustration was omitted for a reason applicable to Wikipedia, we're likely to arrive at the same decision).
I don't subscribe to the notion that Wikipedia should go out of its way (= depart from reliable sources' standards) to upset or offend readers where reliable sources don't.
Do you honestly believe that this is our motive?
David Levy
David Levy wrote:
Andreas Kolbe wrote:
Again, I think you are being too philosophical, and lack pragmatism.
We already have bad image lists like
http://en.wikipedia.org/wiki/MediaWiki:Bad_image_list
If you remain wedded to an abstract philosophical approach, such lists are not neutral. But they answer a real need.
Apart from the name (which the MediaWiki developers inexplicably refused to change), the bad image list is entirely compliant with the principle of neutrality (barring abuse by a particular project, which I haven't observed).
Not inexplicably: https://bugzilla.wikimedia.org/show_bug.cgi?id=14281#c10
MZMcBride
I wrote:
Apart from the name (which the MediaWiki developers inexplicably refused to change), the bad image list is entirely compliant with the principle of neutrality (barring abuse by a particular project, which I haven't observed).
MZMcBride replied:
Not inexplicably: https://bugzilla.wikimedia.org/show_bug.cgi?id=14281#c10
Actually, that's precisely what I had in mind. Maybe it's me, but I found Tim's response rather bewildering. It certainly isn't standard procedure to forgo an accurate description in favor of nebulous social commentary/parody, particularly when this proves controversial.
David Levy
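For context on the mechanism discussed above: the bad image list is an ordinary wiki page that administrators edit. Each entry is a line beginning with "*" whose first link names an image; any further links on the same line name the pages where that image may still be displayed inline. An entry looks roughly like this (the file and article names here are invented for illustration):

    * [[File:Example-explicit-photo.jpg]] except on [[Some relevant article]]
    * [[File:Another-contested-image.png]]

Everywhere else, the software renders the listed image as a link rather than displaying it, so the mechanism works per image and per page rather than by category, which is why it sidesteps the tagging objections raised elsewhere in this thread.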
On 11/10/2011 00:47, MZMcBride wrote:
Risker wrote:
Given the number of people who insist that any categorization system seems to be vulnerable, I'd like to hear the reasons why the current system, which is obviously necessary in order for people to find types of images, does not have the same effect. I'm not trying to be provocative here, but I am rather concerned that this does not seem to have been discussed.
Personally, from the technical side, I don't think there's any way to make per-category filtering work. What happens when a category is deleted? Or a category is renamed (which is effectively deleting the old category name currently)? And are we really expecting individual users to go through millions of categories and find the ones that may be offensive to them? Surely users don't want to do that. The whole point is that they want to limit their exposure to such images, not dig into the millions of categories that may exist looking for ones that largely contain content they find objectionable. Surely.
People who care will filter on the broadest categories, as those are the least likely to change. They may start with Category:Sex, Category:Depictions of Muhammad, etc.
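The deletion/rename fragility MZMcBride describes is easy to make concrete. A minimal sketch, assuming a hypothetical data model in which each reader's filter preference is stored as a set of literal category names:

    # Hypothetical per-reader preference: literal category names to hide.
    user_filter = {"Category:Sex", "Category:Depictions of Muhammad"}

    def is_hidden(image_categories):
        """Hide an image if it belongs to any category the reader filters."""
        return bool(user_filter & set(image_categories))

    print(is_hidden(["Category:Depictions of Muhammad"]))  # True: image is hidden

    # A category rename is effectively delete-and-recreate, so the stored
    # preference silently stops matching; no error is ever raised.
    print(is_hidden(["Category:Muhammad in art"]))  # False: filter quietly broken

Nothing tells the reader that their saved preference no longer matches anything, which is the maintenance problem both posts above are pointing at.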
Jussi-Ville Heiskanen wrote:
Any (and I stress *any*) tagging system is very nicely vulnerable to being hijacked by downstream users.
I've steadfastly opposed the introduction of a tag-based image filter system.
The proposal to which I linked involves no "tagging" (as I understand the term). Is it possible that you misread/misinterpreted it? If not, please explain how it would enable such an exploit.
David Levy
On Tue, Oct 11, 2011 at 2:31 AM, David Levy lifeisunfair@gmail.com wrote:
if you like the image browsers....
On 09.10.2011 16:56, Thomas Dalton wrote:
On 9 October 2011 15:12, Ting Chen wing.philopp@gmx.de wrote:
the text of the May resolution to this question is "... and that the feature be visible, clear and usable on all Wikimedia projects for both logged-in and logged-out readers", and at the current board meeting we decided not to amend the original resolution.
So you do intend to force this on projects that don't want it? Do you really think that's going to work? If the WMF picks a fight with the community on something the community feel very strongly about (which this certainly seems to be), the WMF will lose horribly and the fall-out for the whole movement will be very bad indeed.
hi Thomas, I would say it is a perfect example of one of Parkinson's laws. They didn't discuss how it may work, how many man-hours it will need, or that it may drive away maybe hundreds of today's hard-working editors.
And how much money it will really need, because implementing this software is just a fraction of the overall costs.
“The time spent on any item of the agenda will be in inverse proportion to the sum involved.”
Because the wars on Commons over which categories count as violence, at the very least, will be unmanageable.
I don't want to find myself arguing with fundamentalist Christian groups over whether the crucifixion and the holy cross should be categorised as a to-be-hidden category because of atrocious violence or not.
Hubertl.
From: Hubert hubert.laska@gmx.at
Because the wars on Commons over which categories count as violence, at the very least, will be unmanageable.
I don't want to find myself arguing with fundamentalist Christian groups over whether the crucifixion and the holy cross should be categorised as a to-be-hidden category because of atrocious violence or not.
Hubertl.
Actually, I don't foresee these types of issues becoming overly contentious, at least not in the context of the image filter as proposed (opt-in).
Editors would eventually realise that the choices they make only affect the small proportion of readers who actually switch the filter ON, and decide they have better things to do than to edit-war over whether such a user will need to click on the image to see it, or not.
Andreas
Andreas Kolbe wrote:
Actually, I don't foresee these types of issues becoming overly contentious, at least not in the context of the image filter as proposed (opt-in).
Editors would eventually realise that the choices they make only affect the small proportion of readers who actually switch the filter ON, and decide they have better things to do than to edit-war over whether such a user will need to click on the image to see it, or not.
Yes, because rational thought like that is a hallmark of wiki discussions.
MZMcBride
So it will be pushed through no matter whether it is wanted or needed, and without respect for the local communities? I think that will cross the line of acceptability.
I also want to remind you that the "referendum"
1. asked the wrong question(s)
2. did not mention any of the possible issues beforehand (biased formulation)
3. left much room for possible implementations
!!! I'M STILL WAITING FOR RESULTS PER PROJECT !!! I'm very, very disappointed to see that this data has still not been released. I requested it a dozen times. Every time I was told that it would be released later on and that we should stay patient. How many weeks ago was this request made? I have stopped counting...
Seriously pissed off greetings from Tobias Oelgarte / user:niabot
On 09.10.2011 16:12, Ting Chen wrote:
Hello Tobias,
the text of the May resolution to this question is "... and that the feature be visible, clear and usable on all Wikimedia projects for both logged-in and logged-out readers", and at the current board meeting we decided not to amend the original resolution.
Greetings Ting
On 09.10.2011 15:43, church.of.emacs.ml wrote:
Hi Ting,
one simple question: Is the Wikimedia Foundation going to enable the image filter on _all_ projects, disregarding consensus by local communities of rejecting the image filter? (E.g. German Wikipedia)
We are currently in a very unpleasant situation of uncertainty. Tensions in the community are extremely high (too high, if you ask me, but Wikimedians are emotional people), speculations and rumors about what WMF is going to do prevail. A clear statement would help our discussion process.
Regards, Tobias / User:Church of emacs
Calm down. No one is "forcing" or "pushing" anything; it is more like "offering". Everything I've read indicates it will be opt-in (though the means of opting in will be easily accessible upon arrival at Wikipedia). This will probably be something just as transparent to those not using it as the color picker on various search engines is. The fact is that a majority of the community expressed that it was either a good idea or something important to them (interpret that however you care to), and Wikimedia finds it important to please the majority of its users.
A few specifications for the filter as they have been expressed so far are as follows:
a) Categories or other on-wiki tags must not be used for tagging images for filtering (the community has said on multiple occasions that doing so would violate WP:CENSOR)
b) The opt-in feature must not be intrusive, yet easy to find and apply
c) Filters must be designed keeping in mind that there are multiple cultures of decency and taboo
I personally think these specifications pose an exciting challenge for developers and staff. And if there are organizations willing to fund the development and implementation of it, more power to them.
Bob
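One way to read specification (a) together with (b) and (c): if no on-wiki tagging or categorisation may be consulted, the filter cannot know which images a particular reader would object to, so the simplest compliant design collapses every image behind a click-to-reveal placeholder for readers who opt in. A minimal sketch of that idea, hypothetical rather than anything the WMF has specified:

    import re

    IMG_TAG = re.compile(r"<img\b[^>]*>", re.IGNORECASE)

    def render_page(html, reader_opted_in):
        """Return the page HTML, collapsing every image if the reader opted in."""
        if not reader_opted_in:
            # Opt-in (spec b): the default experience is unchanged.
            return html
        # No categories or tags are consulted (spec a), and no judgement is made
        # about which images which cultures consider taboo (spec c): every image
        # becomes a one-click placeholder.
        placeholder = '<a class="show-image" href="#">[image hidden: click to show]</a>'
        return IMG_TAG.sub(placeholder, html)

A real implementation would keep the original tag around so that the click can restore it; the point of the sketch is only that a blanket opt-in needs no tagging at all.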
On 9 October 2011 22:03, Bob the Wikipedian bobthewikipedian@gmail.com wrote:
Calm down. No one is "forcing" or "pushing" anything, more like "offering". Everything I've read indicates it will be opt-in (though the manner for opting in will be easily accessible upon arrival at Wikipedia).
Tobias was talking about the feature being forced onto projects whose communities don't want it, rather than use of the feature being forced onto individuals.
On 9 October 2011 22:03, Bob the Wikipedian bobthewikipedian@gmail.com wrote:
The fact is that a majority of the community expressed it was either a good idea or something important to them (interpret that however you care to), and Wikimedia finds it important to please the majority of their users.
I think that repeating this trivially false claim is unlikely to convince anyone who doesn't already agree with you, and will instead lead them to assume communication is futile.
- d.
very strong support!
hubertl.
On 09.10.2011 22:29, Tobias Oelgarte wrote:
So it will be pushed through no matter whether it is wanted or needed, and without respect for the local communities? I think that will cross the line of acceptability.
On 10/09/11 7:12 AM, Ting Chen wrote:
The text of the May resolution on this question is "... and that the feature be visible, clear and usable on all Wikimedia projects for both logged-in and logged-out readers", and at the current Board meeting we decided not to amend the original resolution.
This is certainly the most problematic part of the entire resolution. It leaves the impression that all negotiations are being held under the sword of Damocles: whatever happens can be overruled by force majeure or the tyranny of the majority. Many projects have not encountered problems with objectionable images at all; they are either too small, or the scope of what they do does not bring them into contact with such problems.
Making it clear that projects would be free to turn the feature on only when it becomes relevant to them would go a long way toward relieving tensions.
Ray
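In configuration terms, what Ray describes is a per-project flag that defaults to off. A minimal sketch in Python, assuming hypothetical wiki identifiers and a hypothetical "image_filter_enabled" key, not any actual MediaWiki configuration setting:

# Hypothetical per-wiki settings: each project turns the feature on
# only when its community decides it is relevant.
WIKI_SETTINGS = {
    "enwiki": {"image_filter_enabled": True},
    "dewiki": {"image_filter_enabled": False},  # community consensus against
}

def image_filter_enabled(wiki: str) -> bool:
    """Report whether the image filter is on for a given project.

    Defaults to off, so projects that never opt in are untouched.
    """
    return WIKI_SETTINGS.get(wiki, {}).get("image_filter_enabled", False)

print(image_filter_enabled("dewiki"))  # False
print(image_filter_enabled("frwiki"))  # False: unlisted projects stay off

The design point is the default: a project that never acts is never affected, which is exactly the reassurance Ray is asking for.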
Hi Ting,
Thanks for explaining the position of the Board in your own words. I appreciate that the Board is listening. I am concerned, though, that you state the Board is acting from "belief"; I recommend you consider how this can move toward a strategy based on facts and non-controversial analysis.
I suspect that any proposal for change will be strongly resisted and will continue to divide our community until well-understood and well-communicated facts, rather than personal belief, underpin the Board's resolution.
Cheers, Fae
Hello Fae,
Thank you very much for pointing this out. Yes, I think you have hit the nail on the head. We discussed this problem at our meeting, and Sue presented some plans for how to work on it. I am normally reluctant to comment on what the staff are doing or planning to do, because this can easily be seen as interfering in staff activity, but I think it is okay for me to reveal a little of it now. Sue suggests a two-step approach. In the first step we will only collect reader reactions to images, to see whether there is a problem at all, how big it is, and where the problems are. In the second step, once we have that data and can work out an understanding of it, we can go on to develop dedicated solutions to the problems, together with the community, as I said in my letter.
Greetings Ting
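Step one of such an approach is essentially event collection and tallying. A minimal sketch in Python, assuming hypothetical reaction reports of the form (project, image, reaction); the field names and values are illustrative, not an actual WMF schema:

from collections import Counter

# Hypothetical reader reports; in practice these would come from some
# feedback widget, which is an assumption, not a deployed WMF tool.
reports = [
    ("dewiki", "File:Example1.jpg", "offensive"),
    ("enwiki", "File:Example1.jpg", "fine"),
    ("enwiki", "File:Example2.jpg", "offensive"),
    ("dewiki", "File:Example1.jpg", "offensive"),
]

# Tally reactions per (project, image) to see whether there is a
# problem at all, how big it is, and where it concentrates.
tally = Counter(reports)

for (project, image, reaction), count in sorted(tally.items()):
    print(f"{project}\t{image}\t{reaction}\t{count}")

Even a tally this crude would answer the three questions in the first step: whether complaints exist at all, in what volume, and on which projects and images they concentrate.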
On 09.10.2011 23:55, Fae wrote:
I suspect that any proposal for change will be strongly resisted and will continue to divide our community until well-understood and well-communicated facts, rather than personal belief, underpin the Board's resolution.
From: Ting Chen wing.philopp@gmx.de
In the first step we will only collect reader reactions to images, to see whether there is a problem at all, how big it is, and where the problems are.
Ting,
I do think that asking *readers* in different parts of the world for their views is the way to go here.
This being a feature designed for readers' use, we should primarily be guided by readers' wishes, not editors' wishes.
Andreas
On Sun, Oct 9, 2011 at 14:55, Ting Chen tchen@wikimedia.org wrote:
The referendum results show that there is significant division inside the Wikimedia community about the potential value and impact of an image hiding feature.
Many thanks, Ting, for this thoughtful mail. Since the beginning of the discussion I have been wondering whether it would really be controversial to just give up on image filters, and whether it is the Foundation's desired role to ignite controversial discussions within the community.
And since the beginning of the discussion about image filters I have been wondering whether it is not one more thing distracting part of the community, the developers, the chapters, the Foundation, and the Foundation's Board from listening to the world outside Wikipedia, both with respect to content and to technology.
To give you an example: a single person, Salman Khan, was able to build a YouTube channel containing a couple of thousand educational videos, subscribed to nearly 200,000 times and watched nearly 100 million times, with a budget of a couple of hundred thousand USD at most. Despite questionable details (e.g. NPOV is missing completely), I find the quality of the videos impressively good.
Additionally, there are others doing the same thing with even less budget, and "aggregators" are developing around this ecosystem as well. And all of it without the Wikimedia Foundation, whose vision is "... freely share in the sum of all knowledge" and whose mission is "... collect and develop educational content under a free license or in the public domain, and to disseminate it effectively and globally." Partially, this is evolving right on the WMF's doorstep, in San Francisco.
Knowing this vision and mission, and knowing that these new projects were built up without any involvement of the Wikimedia Foundation, which operates e.g. Wikiversity and has 20 times the budget, 20-50 times the staff, and many thousands of times the volunteers, I am left completely speechless...
Some links:
* http://khanacademy.org
* http://youtube.com/watch?v=-ROhfKyxgCo
* http://academicearth.org/lectures/gender-sex-linked-traits
* http://academicearth.org/about/team
* http://wikimediafoundation.org/wiki/Mission_statement
* http://wikimediafoundation.org/wiki/Vision
Best regards,
rupert.