[Foundation-l] Letter to the community on Controversial Content

Andreas Kolbe jayen466 at yahoo.com
Wed Oct 12 00:59:49 UTC 2011

> From: David Levy <lifeisunfair at gmail.com>
> Andreas Kolbe wrote:

> > I would use indicators like the number and intensity of complaints received.
> For profit-making organizations seeking to maximize revenues by
> catering to majorities, this is a sensible approach.  For most WMF
> projects, conversely, neutrality is a fundamental, non-negotiable
> principle.

Neutrality applies to content. I don't think it applies in the same way to
*display options* or other gadget infrastructure.

> > Generally, what we display in Wikipedia should match what reputable
> > educational sources in the field display. Just like Wikipedia text reflects
> > the text in reliable sources.
> This is a tangential matter, but you're comparing apples to oranges.
> We look to reliable sources to determine factual information and the
> extent of coverage thereof.  We do *not* emulate their value
> judgements.
> A reputable publication might include textual documentation of a
> subject, omitting useful illustrations to avoid upsetting its readers.
> That's non-neutral.

Thanks for mentioning it, because it's a really important point.

Neutrality is defined as following reliable sources, not following 
editors' opinions. NPOV "means representing fairly, proportionately, 
and as far as possible without bias, all significant views that have 
been published by reliable sources."

Editors can (and sometimes do) argue in just the same way: that reliable 
sources omit certain theories the editors subscribe to because of 
non-neutral value judgments (or at least value judgments the editors 
disagree with) – in short, that the reliable sources are all biased. 

I see this as no different. I really wonder how the idea took hold that
when it comes to text, reliable sources' judgment is sacrosanct, while when 
it comes to illustrations, reliable sources' judgment is suspect and editors'
judgment is better. 

If we reflected reliable sources in our approach to illustration, without
bias, we wouldn't have half the problems we are having.

> The setup that I support would accommodate all groups, despite being
> *far* simpler and easier to implement/maintain than one based on
> tagging would be.

I agree the principle is laudable. Would you like to flesh it out in more
detail on http://meta.wikimedia.org/wiki/Controversial_content/Brainstorming ?

It can then benefit from further discussion.


> > It would also be equally likely to aid censorship, as the software would have
> > to recognise the user's blacklists, and a country or ISP could then equally
> > generate its own blacklists and apply them across the board to all users.
> They'd have to identify specific images/categories to block, which
> they can do *now* (and simply intercept and suppress the data
> themselves).

Probably true, and I am beginning to wonder whether the concern that censors 
could abuse any filter infrastructure isn't somewhat overstated. After all, as 
WereSpielChequers pointed out to me on Meta, we have public "bad image" 
lists now, and hundreds of categories that could be (and maybe are) used
that way.
