[Foundation-l] Letter to the community on Controversial Content

Tobias Oelgarte tobias.oelgarte at googlemail.com
Tue Oct 18 19:00:32 UTC 2011


On 18.10.2011 17:23, Thomas Morton wrote:
>> That comes down to the two layers of judgment involved in this proposal.
>> At first we give them the option to view anything, and we give them the
>> option to view "not anything". The problem is that we have to define what
>> "not anything" is. This imposes our judgment on the reader. That means
>> that even if the reader decides to hide some content, it was our
>> (and not his) decision what is hidden.
>>
> No; because the core functionality of a filter should always present the
> choice "do you want to see this image or not". Which is specifically not
> imposing our judgement on the reader :) Whether we then place some optional
> preset filters for the readers to use is certainly a matter of discussion -
> but nothing I have seen argues against this core idea.
Yes; because even the provision of a filter implies that some content is
seen as objectionable and treated differently from other content. That is
only unproblematic as long as we don't provide default settings, i.e.
categories, which introduce our judgment to the readership. The mere fact
that our judgment is visible is already enough to manipulate the reader in
what to see as objectionable or not. The scenario is very much comparable
to an unknown man sitting behind you, glancing randomly at your screen
while you try to inform yourself. Just the thought that someone else could
be upset is already an issue. Having us directly show/indicate what we
think others find objectionable is even stronger.
>> If we treat nothing as objectionable (no filter), then we don't need to
>> play the judge. We say: "We accept anything, it's up to you to judge".
>> If we start to add a "category based" filter, then we play the judge
>> over our own content. We say: "We accept anything, but this might not be
>> good to look at. Now it is up to you to trust our opinion or not".
>>
> By implementing a graded filter - one which lets you set grades of
> visibility rather than off/on - this concern is addressed, because once
> again it gives the reader ultimate control over the question of what they
> want to see. If they are seeing "too much" for their preference they can
> tweak up, and vice versa.
This would imply that we, the ones who are unable to handle content
neutrally, would be perfect at categorizing images by fine degrees of
nudity. But even having multiple steps would not be a satisfying
solution. There are many cultural regions which differentiate strongly
between man and woman. While they would have no problem seeing a man in
just his boxer shorts, it would be seen as offensive to show a woman
with uncovered hair. I wonder what effort it would take to accomplish
this goal (if it is even possible), compared to the benefits.
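
To make the problem concrete, here is a minimal sketch of what a graded,
multi-scale filter would have to look like (all names, scales and scores
below are my own hypothetical assumptions, nothing that was actually
proposed). A single 0-10 "nudity" slider cannot express a rule like "a man
in boxer shorts is fine, a woman with uncovered hair is not"; you need one
scale, and one editorial judgment, per cultural dimension:

    # Sketch only: hypothetical scales and scores, not the proposal's design.
    # Each image needs a score on every scale, assigned by human editors.
    image_scores = {
        "Boxer_shorts.jpg":   {"male_nudity": 4, "female_nudity": 0, "female_hair": 0},
        "Portrait_woman.jpg": {"male_nudity": 0, "female_nudity": 0, "female_hair": 8},
    }

    # One reader's thresholds; the image is hidden if ANY scale exceeds its limit.
    reader_prefs = {"male_nudity": 5, "female_nudity": 2, "female_hair": 1}

    def is_hidden(scores, prefs):
        """Hide the image if any dimension exceeds the reader's threshold."""
        return any(scores.get(dim, 0) > limit for dim, limit in prefs.items())

    for name, scores in image_scores.items():
        print(name, "hidden" if is_hidden(scores, reader_prefs) else "shown")
    # Boxer_shorts.jpg shown, Portrait_woman.jpg hidden

Note that every new cultural dimension multiplies the judgments our
editors have to make per image, and each of those judgments is exactly
the kind of non-neutral call I described above.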
>
>> The latter imposes our judgment on the reader, while the first makes no
>> judgment at all and leaves everything to the free mind of the reader.
>> ("Free mind" means that the reader has to find his own answer to this
>> question. He might have objections or could agree.)
>>
> And if he objects, we are then just ignoring him?
>
> I disagree with your argument; both points are imposing our judgement on the
> reader.
If _we_ do the categorization, then we impose our judgment, since it was
we who made the decision. It is not a customized filter where the user
decides what is best for himself. Showing everything might not be ideal
for all readers. Hiding more than preferred might also not be ideal for
all readers. Hiding less than preferred is just another non-ideal case.
We can't meet everyone's taste, just as no book can meet everyone's taste.
While Harry Potter seems to be fine in many cultures, in some there
might be parts that are seen as offensive. Would you hide/rewrite parts
of Harry Potter to make them all happy, or would you go after the
majority of the market and ignore the rest?

There is one simple way to deal with it. If someone does not like our
content, then he does not need to use it. If someone does not like the
content of a book, he does not need to buy it. He can complain about it.
That's what Philip Pullman meant with: "No one has the right to live
without being shocked".
> Agreed; which is why we allow people to filter based on a sliding scale,
> rather than a discrete yes or no. So someone who has no objection to such an
> image, but wants to hide people having sex can do so. And someone who wants
> to hide that image can have a stricter grade on the filter.
>
> If nothing else the latter case is the more important one to address;
> because sexual images are largely tied to sexual subjects, and any
> reasonable person should expect those images to appear. But if culturally
> you object to seeing people in swimwear then this could be found in almost
> any article.
>
> We shouldn't judge those cultural objections as invalid. Equally we
> shouldn't endorse them as valid. There is a balance somewhere between those
> two extremes.
Yes, there is a balance between two extremes. But whoever said that the
center between two opinions is seen as a valid option by both parties?
If that were the case, and if it worked in practice, then we wouldn't
have problems like the one in Israel. There, everyone has a viewpoint,
but neither party is willing to agree on a middle ground; both have very
different perspectives on what that middle ground should look like. This
applies at large scale to situations like the one in Israel, and it also
applies to small things like a single line of text or an image.

The result is simple: neither side is happy with a balance. Every side
has its point of view and won't back down. As a result we have the
so-called second battlefield alongside the articles themselves. As soon
as we start to categorize, it will happen, and I'm sure that even you
would shake your head as you see those differences colliding with each
other. The battles inside articles can be described as the mild ones:
here we have arguments and sources. How many sources do our images come
with? What would you cite as the basis for your argumentation?
> I suggested a way in which we could cover a broad spectrum of views on one
> key subject without setting discrete categories of visibility.
As explained above, this will be a very, very hard job. Even for the
seemingly simple subject of "sexuality" you will need more than one
scale to measure content against. Other topics, like religious or
cultural ones, will be a much harder job still.
>
>> I believe that the idea dies the moment we assume that we can
>> achieve neutrality through filtering. Theoretically speaking, there are
>> only three types of neutral filters. The first lets everything through,
>> the second blocks everything, and the third is totally random, resulting
>> in an equal 50:50 chance over large numbers. Currently we would ideally
>> have the first filter. Your examples show that this isn't always true.
>> But at least this is the goal. Filter two would amount to showing
>> nothing, or shutting down Wikipedia. Not a real option, I know. The third
>> option is a construct out of theory that would not work, since it
>> contains an infinite amount of information, but also nothing at all.
>>
> What about the fourth type; one that gives you extensive options to filter
> out (or, better described, to collapse) content from initial viewing per
> your specific preferences.
>
> This is a technical challenge, but in no way unachievable.
This is not primarily a technical challenge. It's a new, additional
challenge for the authors; a new burden, if you will. The finer the
categorization, the more effort you will need to put into it, and the
more exceptions will have to be made. Technically you could support
thousands of categories with different degrees. But what can be managed
by our authors, and what by the readers? At which point does the error
made by us (we are humans, and computers can't judge images) become
bigger than the thin lines we draw?

In technical theory it sounds nice and handy, but in practice we also
have to consider effort versus result. I'm strongly confident that the
effort would not justify the result, even if we ignore side effects like
third-party filtering based upon our categories, which removes the
choice from the user.
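
To give a feeling for the effort side (every number below is my own
illustrative assumption, not a project statistic), a quick
back-of-the-envelope calculation:

    # Sketch of the tagging burden; every figure here is an assumption.
    images = 10_000_000        # assumed number of images to classify
    dimensions = 20            # assumed cultural/topical scales per image
    seconds_per_judgment = 10  # assumed time for one careful judgment

    total_hours = images * dimensions * seconds_per_judgment / 3600
    print(f"{total_hours:,.0f} volunteer-hours")  # ~555,556 hours

    # Even a small error rate leaves a huge absolute number of misfiled
    # judgments for readers to stumble over, or for editors to fight about:
    error_rate = 0.02
    print(f"{images * dimensions * error_rate:,.0f} disputed judgments")  # 4,000,000

Half a million volunteer-hours of judging, and millions of judgments to
dispute, before the first reader sees any benefit.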
> I made an analogy before that some people might prefer to surf Wikipedia
> with plot summaries collapsed (I would be one of them!). In a perfect world
> we would have the option to collapse *any* section in a Wikipedia article
> and have that option stored. Over time the software would notice I was
> collapsing plot summaries and, so, intelligently collapse summaries on newly
> visited pages for me. Plus there might even be an option in preferences
> saying "collapse plot summaries" because it's recognised as a common desire.
>
> In this scenario we keep all of the knowledge present, but optionally hide
> some aspects of it until the reader pro-actively accesses it. Good stuff.
>
>
That would be a solution. But it would not imply any categorization by
ourselves, since the program on the servers would find out what to do.
This already works pretty well for simple things like text. Images are
a much bigger problem, which can't simply be handed down to a program,
since no program at the current time would be able to do this. So we are
back again at: effort vs. result, plus the gathering of private user
data, plus it works only opt-in with an account.
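
For the text case, a crude version of what you describe is easy to
sketch (purely hypothetical; nothing like this exists in MediaWiki). It
works only because a section heading is machine-readable text, which is
exactly the handle an image does not offer:

    # Sketch of learning a "collapse this heading" preference from the
    # reader's own clicks. Hypothetical feature; note that the stored click
    # history is itself the private user data mentioned above.
    from collections import Counter

    collapse_counts = Counter()  # per-reader history of manual collapses
    COLLAPSE_THRESHOLD = 3       # auto-collapse after 3 manual collapses

    def record_collapse(heading):
        """Called each time the reader manually collapses a section."""
        collapse_counts[heading.strip().lower()] += 1

    def should_auto_collapse(heading):
        """Collapse by default once the reader shows a stable preference."""
        return collapse_counts[heading.strip().lower()] >= COLLAPSE_THRESHOLD

    for _ in range(3):
        record_collapse("Plot summary")
    print(should_auto_collapse("Plot summary"))  # True
    print(should_auto_collapse("Reception"))     # False

Nothing in this sketch can look *inside* an image, which is why the same
trick does not transfer from plot summaries to pictures.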

I removed some paragraphs below, since they all come down to the effort
vs. result problem. Additionally, we have no way to implement a system
like this at the moment. That is something for the future.

>> The whole problem starts with the intention to spread our knowledge to
>> more people than we currently reach, faster than necessary.
>
> That we might not be reaching certain people due to a potentially fixable
> problem is certainly something we can/should address :)
Yes, we should address it. But we should also start to think about
options other than hiding content. There are definitely better and more
effective solutions than this quick fix called the "image filter".
>
>> We have a mission, but it is not the mission to entertain as
>> many people as possible. It is not to gain as much money through donors
>> as possible.
>>
> Is this a language barrier? Do you mean entertain in the context of having
> them visit us, or in the context of them having a fun & enjoyable time?
>
> Because in the latter case - of course you are right. I don't see the
> relevance though because this isn't about entertaining people, just making
> material accessible.
With "entertain" I meant this: providing them only with content that
will please their minds, causing no bad thoughts and no surprise of
learning something new or very different.
>> It isn't our purpose to please the readers by only representing
>> knowledge they would like to hear of.
>>
> Yeah, this is a finicky area to think about... because although we
> ostensibly report facts, we also record opinions on those facts.
> Conceivably a conservative reading a topic would prefer to see more
> conservative opinion on that topic and a liberal more liberal opinion.
>
> Ok, so we have forks that cover this situation - but often they are of poor
> quality, and present the facts in a biased way. In an ideal future world I
> see us maintaining a core, neutral and broad article that could be extended
> per reader preference with more commentary from their
> political/religious/career/interest spectrum.
>
> The point is to inform, after all.
>
>
> Tom
That is another kind of "drawing the line" case. To be neutral we should
represent both (or more) points of view. But showing the reader only
what he wants to read is not real knowledge. Real knowledge is gained by
looking across the borders, not by building our own temples/territories
with huge walls around them, to trap or bait an otherwise free mind. It
is always good to have an opposition that thinks differently. Making
them all happy by dividing them into two (or more) territories, while
the differences remain or grow, will not help either of them, especially
if you can't draw a clean line.

nya~


