[Foundation-l] Letter to the community on Controversial Content

Tobias Oelgarte tobias.oelgarte at googlemail.com
Tue Oct 18 14:48:20 UTC 2011


On 18.10.2011 14:00, Thomas Morton wrote:
> On 18 October 2011 11:56, Tobias Oelgarte <tobias.oelgarte at googlemail.com> wrote:
>
>> That controversial content is hidden or that we
>> provide a button to hide controversial content is prejudicial.
>
> I disagree on this, though. There is a balance between encouraging people to
> question their views (and, yes, even our own!) and giving them no option but
> to accept our view.
>
> This problem can be addressed via wording related to the filter and
> avoidance of phrases like "controversial", "problematic" etc.
>
> I disagree very strongly with the notion that providing a button to hide
> material is prejudicial.
That comes down to the two layers of judgment involved in this proposal. 
First we give readers the option to view everything, and we give them 
the option not to view everything. The problem is that we have to define 
what "not everything" means. This imposes our judgment on the reader. 
That means that even if the reader decides to hide some content, it was 
our (and not his) decision what gets hidden.

This leads to two cases:

1. If he does not use the filter, then - as you say - we impose our 
judgment on the reader.
2. If he does use the filter, then - as I say - we impose our judgment 
on the reader as well.

Both cases seem to be equal: no win or loss, with or without the filter. 
But there is a slight difference.

If we treat nothing as objectionable (no filter), then we don't need to 
play the judge. We say: "We accept anything; it's up to you to judge."
If we start to add a category-based filter, then we play the judge 
over our own content. We say: "We accept anything, but this might not be 
good to look at. Now it is up to you to trust our opinion or not."

The latter imposes our judgment on the reader, while the former makes no 
judgment at all and leaves everything to the free mind of the reader. 
("Free mind" means that the reader has to find his own answer to this 
question. He might have objections or he could agree.)
>> It deepens the viewpoint that this content is objectionable and that it
>> is generally accepted this way, even if it is not. That means that we
>> would be patronizing the readers that have a tendency to enable a filter
>> (not even particularly an image filter).
>>
> This is a reasonable objection; and again it goes back to this idea of how
> far we enforce our world view on readers. I think that there are ways
> that a filter could be enabled that improve Wikipedia for our readers
> (helping neutrality) and equally there are ways that it could be enabled
> that adversely affect this goal.
>
> So if done; it needs to be done right.
The big question is: Can it be done right?

A filter that only knows a "yes" or "no" to questions that are 
influenced by different cultural views seems to fail right away. It 
draws a sharp line through everything, ignoring the fact that even 
within one culture there are a lot of border cases. I did not want to 
use examples, but I will still give one. Suppose we have a photograph of 
a young woman at the beach. How would we handle the case that her 
swimsuit shows a lot of "naked flesh"? I'm sure more than 90% of 
western-country citizens would have no objection to this image if it is 
inside a corresponding article. But as soon as we go to other cultures, 
let's say Turkey, we might find very different viewpoints on whether 
this should be hidden by the filter or not. I remember the question in 
the referendum asking whether the filter should be culturally neutral. 
Many agreed on this point. But how in god's name should this be done? 
Especially: How can this be done right?
>
>> ... and that is exactly what makes me curious about this approach. You
>> assume that we aren't neutral, and Sue described us as on average a
>> little bit geeky, which goes in the same direction.
>
> We are not; over time it is fairly clear that we reflect certain world
> views. To pluck an example out of thin air - in the 9/11 article there is
> extremely strong resistance to adding a "see also" link to the article on
> 9/11 conspiracies. This reflects a certain bias/world view we are imposing.
> That is an obvious example - there are many more.
>
> The bias is not uniform; we have various biases depending on the subject -
> and over time those biases can swing back and forth depending on
> the prevalent group of editors at that time. Many of our articles have
> distinctly different tone/content/slant to foreign language ones (which is a
> big giveaway IMO).
>
> Another example: English Wikipedia has a pretty strong policy on BLP
> material that restricts a lot of what we record - other-language Wikis do
> not have the same restrictions, and things we would not consider noting
> (such as non-notable children's names) are not considered a problem on
> other Wikis.
>
>
>> But if we aren't neutral at all, how can we even believe that a
>> controversial-content-filter system based upon our views would be neutral
>> in judgment or, as proposed in the referendum, "culturally neutral"?
>> (Question: Is there even such a thing as cultural neutrality?)
>>
> No; this is the underlying problem I mentioned with implementing a filter
> that offers pre-built lists.
>
> It is a problem to address, but not one that kills the idea stone dead IMO.
I believe that the idea dies the moment we assume that we can achieve 
neutrality through filtering. Speaking theoretically, there are only 
three types of neutral filters. The first lets everything through, the 
second blocks everything, and the third is totally random, resulting in 
an equal 50:50 chance over large numbers. Currently we would ideally 
have the first filter. Your examples show that this isn't always true, 
but at least this is the goal. Filter two would amount to showing 
nothing at all, or shutting down Wikipedia. Not a real option, I know. 
The third option is a theoretical construct that would not work, since 
it contains an infinite amount of information, but also nothing at all.

Considering these cases, we can assume that Wikipedia isn't neutral, but 
that it aims for option 1. But we can also see that there is no other 
solution that could be neutral. It is an impossible task to begin with. 
No filter could fix such a problem.
>
>> We also don't force anyone to read Wikipedia.
>>
> Oh come on :) we are a highly visible source of information with millions of
> inbound links/pointers. No we don't force anyone to read, but this is not an
> argument against accommodating as many people as possible.
No, it isn't an argument against this. But accommodating as many people 
as possible was never the goal of the project. The goal was to create 
free knowledge and present it to everyone. That we have so many readers 
can make us proud. But did they come to us because we had the goal to 
accommodate them? They come to us to take and share the knowledge that 
is represented here. If we really wanted to accommodate people in the 
first place, then we should have created something more entertaining 
than an encyclopedia.

The whole problem starts with the intention to spread our knowledge to 
more people than we currently reach, faster than necessary. The only 
problem is that we leave behind the things that made this project what 
it is. We have a mission, but it is not the mission to entertain as many 
people as possible. It is not to gain as much money through donors as 
possible.
>
>> If he does not like it, he
>> has multiple options. He could close it, he could still read it, even if
>> he doesn't like any part of it, he could participate to change it, or he
>> could start his own project.
>>
> And most of those options belie our primary purpose.
It isn't our purpose to please the readers by presenting only the 
knowledge they would like to hear about. We hold the knowledge that 
someone might want to read or see; he has to play the active part. The 
typical expansion dreams of Jimbo ("world invasion", a reference to Ika) 
are quite contradictory to that.
>> We can definitely think about possible solutions. But first I have to
>> insist on getting an answer to the question: Is there a problem, big and
>> worthy enough, to make it a main priority?
>>
> Absolutely - and the first question I asked in this debate (weeks ago) was
> when we were going to poll readers for their opinion. This devolved slightly
> into an argument over whether our readers should have a say in Wikipedia...
> but the issue still stands - no clear picture has been built.
>
> We are still stuck in our little house....
>
> I doubt it will ever be done; which is why if it comes to a "vote", despite
> my advocacy here, I will staunchly oppose any filter on grounds of process
> and poor planning.
>
> I am willing to be pleasantly surprised.
That's why I was so interested in the raw data from the referendum 
(votes/results per language). It could have at least answered some basic 
questions (Who sees the need? ...). Now more than two months have passed 
since I was promised multiple times that this data would be released. 
Nothing ever happened.

* http://meta.wikimedia.org/wiki/User_talk:Philippe#Personal_image_filter
>
>> After that comes the question of (non-neutral) categorization of
>> content. That means: Do we need to label offensive content, or could the
>> same goal be reached without doing this?
>>
> Well from a practical perspective a self-managed filter is the sensible
> option.
>
> I think we can do an objective categorisation of things people might not
> like to see, though. Say, nudity: we could have an entirely objective
> classification for nudity... just thinking off-hand & imperfectly:
>
> - Incidental nudity (background etc.)
> - Partial nudity
> - Full nudity
> - Full frontal nudity / Close ups
> - Sexual acts
>
> And then independently classify articles as "sexuality topic", "physiology
> topic" & "neither" (with neither being the default). By combining the two
> classifications you can build a dynamic score of how likely it is that any
> image of nudity in any article should be shown, based on the user preference
> (on the basis that nudity is more expected in sexuality topics).
>
> Then the user would have the option to filter all nudity, or some level of
> nudity - defaulting to no filter. Plus they could include/remove images from
> the filter at will.
>
> I hope this rough idea shows how you could approach the difficult topic of
> classification under objective criteria without allowing article-specific
> abuse.
>
> I think part of the problem is constant re-use of the term "offensive" -
> because this biases us to approach content from the perspective of
> controversy. But I go back to a previous point that I made; which is that
> this is about filtering things users don't want to see. This may not be
> offensive content (although obviously that is the most common example).
>
> Many people have a fear of clowns; allowing them to filter out clown
> images is a good thing and helps make their experience better.

We discussed this distinction at length already. It was either going to 
be hundreds of categories, which are manageable neither by us nor by the 
reader, or it would come down to some very vague categories with no 
sharp lines at all. That means: if we want to do it right, then it will 
result in an effort we cannot take on (the rough sketch below shows the 
kind of decisions even your comparatively simple scheme would require).
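
Just to make the scheme quoted above concrete, here is a rough Python 
sketch of how such a combined score could be computed. The level names, 
per-topic discounts and threshold semantics are purely my own assumptions 
for illustration; your proposal specifies none of these numbers, and 
nothing like this exists in MediaWiki today.

# Rough sketch of the combined image/article scoring quoted above.
# Every name, level and number here is an assumption for illustration only.

# Image classification: higher number = more explicit.
NUDITY_LEVELS = {
    "none": 0,
    "incidental": 1,     # background nudity
    "partial": 2,
    "full": 3,
    "full_frontal": 4,   # full frontal / close-ups
    "sexual_act": 5,
}

# Article classification lowers the effective level, on the basis that
# nudity is more expected in sexuality or physiology topics.
ARTICLE_DISCOUNT = {
    "sexuality": 2,
    "physiology": 1,
    "neither": 0,        # default
}

def show_image(image_level, article_topic, user_threshold=5):
    """Return True if the image should be shown to this reader.

    user_threshold is the highest effective level the reader is willing
    to see; the default of 5 means "no filter at all".
    """
    effective = NUDITY_LEVELS[image_level] - ARTICLE_DISCOUNT.get(article_topic, 0)
    return effective <= user_threshold

# A "partial nudity" image is shown in a sexuality article even to a
# cautious reader, but hidden in an unrelated article:
print(show_image("partial", "sexuality", user_threshold=1))  # True
print(show_image("partial", "neither", user_threshold=1))    # False

Even this toy version shows where the judgment calls hide: someone has to 
pick the levels, the discounts and the default threshold, and every one of 
those numbers is exactly the kind of non-neutral decision discussed above.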
>
>> In the other mail you said that you have a problem with reading/working
>> on Wikipedia in public, because some things might be offensive to
>> bystanders. A typical, widespread argument. But this problem can
>> easily be solved without the need for categorization. The brainstorming
>> sections are full of easy, non-disturbing solutions for exactly this
>> potential problem.
>>
> That wasn't me; and it was perhaps not a good example anyway.
>
> Although I would love to see a "public" mode for me to enable that hid
> nudity etc. for use in public - nice and simple without totally degrading my
> browsing experience.
>
>
>> The only group claiming that not being able to hide something is
>> censorship are the people that want to hide content from others.
>
>
> Strong accusation. Total nonsense I'm afraid - if you mean it in the way it
> comes across (i.e. we are pursuing an agenda to try and hide content from
> people).
>
> You'll notice I have not offered a counter accusation (say, that people
> arguing the filter would be censoring are people trying to enforce their
> politics/world view on everyone); perhaps we could be afforded the same
> courtesy?
>
> Tom
This is a very old story. I guess we don't need to argue over it again. 
My viewpoint on this matter is also very fixed, which would only make 
things harder than they already are.

nya~



