[Foundation-l] Letter to the community on Controversial Content

Thomas Morton morton.thomas at googlemail.com
Tue Oct 18 15:23:15 UTC 2011


>
> That comes down to the two layers of judgment involved in this proposal.
> First we give them the option to view everything, and we give them the
> option to view "not everything". The problem is that we have to define
> what "not everything" is. This imposes our judgment on the reader. That
> means that even if the reader decides to hide some content, it was our
> (and not his) decision what is hidden.
>

No; because the core functionality of a filter should always present the
choice "do you want to see this image or not?", which is specifically not
imposing our judgement on the reader :) Whether we then provide some
optional preset filters for readers to use is certainly a matter for
discussion - but nothing I have seen argues against this core idea.

> If we treat nothing as objectionable (no filter), then we don't need to
> play the judge. We say: "We accept anything, it's up to you to judge".
> If we start to add a "category based" filter, then we play the judge
> over our own content. We say: "We accept anything, but this might not be
> good to look at. Now it is up to you to trust our opinion or not".
>

Implementing a graded filter - one which lets you set grades of visibility
rather than a simple on/off - addresses this concern, because once again it
gives the reader ultimate control over the question of what they want to
see. If they are seeing "too much" for their preference they can tweak the
grade up, and vice versa.
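
To make this concrete, here is a minimal sketch in Python of how such a
graded filter could decide what to collapse by default. It is purely
illustrative - the per-topic grades, the threshold values and names like
should_collapse are my own invention for this email, not anything from the
proposal or from MediaWiki:

    # Sketch only: each image carries a per-topic grade (0 = innocuous ...
    # 3 = explicit) and the reader picks a threshold per topic. Anything
    # above the threshold is collapsed, never removed, so one click always
    # reveals it.
    from dataclasses import dataclass

    @dataclass
    class Image:
        name: str
        grades: dict  # hypothetical per-topic grades, e.g. {"nudity": 1}

    def should_collapse(image: Image, reader_thresholds: dict) -> bool:
        """Collapse the image if any grade exceeds the reader's threshold.

        A missing threshold means "show everything" for that topic, so a
        reader who sets nothing sees the site exactly as today.
        """
        for topic, grade in image.grades.items():
            threshold = reader_thresholds.get(topic)
            if threshold is not None and grade > threshold:
                return True
        return False

    # A reader happy with swimwear (grade 1) but not explicit images
    # (grade 3) sets a threshold of 2; tweaking it up or down later only
    # changes what is collapsed by default, never what is available.
    beach_photo = Image("Woman_at_beach.jpg", {"nudity": 1})
    explicit_photo = Image("Explicit_example.jpg", {"nudity": 3})
    prefs = {"nudity": 2}
    print(should_collapse(beach_photo, prefs))     # False - shown as normal
    print(should_collapse(explicit_photo, prefs))  # True - collapsed, one click to view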


>
> The latter imposes our judgment on the reader, while the first makes no
> judgment at all and leaves everything to the free mind of the reader.
> ("Free mind" means that the reader has to find his own answer to this
> question. He might have objections or could agree.)
>

And if he objects, are we then just ignoring him?

I disagree with your argument; both points are imposing our judgement on the
reader.

> A filter that only knows a "yes" or "no" to questions that are
> influenced by different cultural views seems to fail right away. It
> draws a sharp line through everything, ignoring the fact that even in
> one culture there are a lot of border cases. I did not want to use
> examples, but I will still give one: suppose we have a photograph of a
> young woman at the beach. How would we handle the case that her swimsuit
> shows a lot of "naked flesh"? I'm sure more than 90% of citizens of
> Western countries would have no objection to this image if it is inside
> a corresponding article. But as soon as we go to other cultures, let's
> say Turkey, then we might find very different viewpoints on whether this
> should be hidden by the filter or not.


Agreed; which is why we would let people filter based on a sliding scale,
rather than a discrete yes or no. So someone who has no objection to such an
image, but who wants to hide images of people having sex, can do so. And
someone who wants to hide that image too can set a stricter grade on the
filter.

If nothing else the latter case is the more important one to address,
because sexual images are largely tied to sexual subjects, and any
reasonable person should expect those images to appear there. But if you
culturally object to seeing people in swimwear, then such images could be
found in almost any article.

We shouldn't judge those cultural objections as invalid.  Equally we
shouldn't endorse them as valid. There is a balance somewhere between those
two extremes.


> I remember the question in the referendum about whether the filter
> should be culturally neutral. Many agreed on this point. But how in
> God's name should this be done? Especially: how can this be done right?
>

I suggested a way in which we could cover a broad spectrum of views on one
key subject without setting discrete categories of visibility.


> I believe that the idea dies the moment we assume that we can
> achieve neutrality through filtering. Speaking theoretically, there are
> only three types of neutral filter. The first lets everything through,
> the second blocks everything, and the third is totally random, resulting
> in an equal 50:50 chance over large numbers. Currently we would ideally
> have the first filter. Your examples show that this isn't always true,
> but at least this is the goal. Filter two would be equivalent to showing
> nothing at all, or shutting down Wikipedia. Not a real option, I know.
> The third option is a purely theoretical construct that would not work,
> since it contains an infinite amount of information, but also nothing at
> all.
>

What about a fourth type: one that gives you extensive options to filter out
(or, better described, to collapse) content from the initial view, according
to your specific preferences?

This is a technical challenge, but in no way unachievable.

I made an analogy before that some people might prefer to surf Wikipedia
with plot summaries collapsed (I would be one of them!). In a perfect world
we would have the option to collapse *any* section in a Wikipedia article
and have that option stored. Over time the software would notice I was
collapsing plot summaries and so intelligently collapse summaries on newly
visited pages for me. Plus there might even be an option in preferences
saying "collapse plot summaries" because it's recognised as a common desire.

In this scenario we keep all of the knowledge present, but optionally hide
some aspects of it until the reader proactively accesses it. Good stuff.
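
To illustrate what "the software would notice" might look like, here is a
toy sketch in Python. The section labels and the rule of three manual
collapses are invented for this email; nothing like this exists in MediaWiki
today:

    # Sketch only: count how often the reader manually collapses each kind
    # of section, and start auto-collapsing that kind on new pages once it
    # happens often enough.
    from collections import Counter

    class CollapsePreferences:
        def __init__(self, auto_after: int = 3):
            self.manual_collapses = Counter()  # collapses per section type
            self.auto_after = auto_after       # auto-collapse after this many

        def record_collapse(self, section_type: str) -> None:
            """Called whenever the reader manually collapses a section."""
            self.manual_collapses[section_type] += 1

        def should_auto_collapse(self, section_type: str) -> bool:
            """On a newly visited page, collapse sections the reader keeps hiding."""
            return self.manual_collapses[section_type] >= self.auto_after

    prefs = CollapsePreferences()
    for _ in range(3):
        prefs.record_collapse("Plot summary")  # reader hides plot summaries three times

    print(prefs.should_auto_collapse("Plot summary"))  # True - collapsed on new pages
    print(prefs.should_auto_collapse("Reception"))     # False - still shown in full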


> Considering these cases, we can assume that Wikipedia isn't neutral, but
> that it aims for option 1.


That's a somewhat rudimentary way of putting it... it's not so much about
showing/hiding information, but again about grades of how information is
presented. You can take a fact and present it in many different ways in
prose, depending on the bias being exhibited. This is demonstrated across
the language Wikipedias.


> But we can also see that there is not any
> other solution that could be neutral. It is an impossible task to begin
> with. No filter could fix such a problem.
>

Well, there could be... merge content across the language/subject wikis
intelligently so that all biases are represented, and let them be filtered
against each other.

Again, a technical challenge.


> No it isn't an argument against this. Accommodating as many people as
> possible was never the goal of the project. The goal was to create and
> represent free knowledge to everyone.


Agreed; and if we are inhibiting that by showing images that put people off
reading the content... that is surely against our goals :)

Of course, this has not been examined... so while I make this argument I
can't support it (and it can't really be discarded either). Hence we need to
ask.


> The whole problem starts with the intention to spread our knowledge to
> more people than we currently reach, faster than necessary.


That we might not be reaching certain people due to a potentially fixable
problem is certainly something we can/should address :)


> We have a mission, but it is not the mission to entertain as
> many people as possible. It is not to gain as much money through donors
> as possible.
>

Is this a language barrier? Do you mean "entertain" in the context of having
them visit us, or in the context of them having a fun & enjoyable time?

Because in the latter case, of course you are right. I don't see the
relevance though, because this isn't about entertaining people, just making
material accessible.

> It isn't our purpose to please the readers by only representing
> knowledge they would like to hear of.
>

Yeah, this is a finicky area to think about... because although we
ostensibly report facts, we also record opinions on those facts. Conceivably
a conservative reading a topic would prefer to see more conservative opinion
on that topic, and a liberal more liberal opinion.

OK, so we have forks that cover this situation - but often they are of poor
quality, and present the facts in a biased way. In an ideal future world I
see us maintaining a core, neutral and broad article that could be extended
per reader preference with more commentary from their
political/religious/career/interest spectrum.

The point is to inform, after all.


Tom

