[Foundation-l] Blog from Sue about censorship, editorial judgement, and image filters

Erik Moeller erik at wikimedia.org
Fri Sep 30 08:44:00 UTC 2011


On Wed, Sep 28, 2011 at 11:45 PM, David Gerard <dgerard at gmail.com> wrote:
> The complete absence of mentioning the de:wp poll that was 85% against
> any imposed filter is just *weird*.

The intro and footer of Sue's post say: "The purpose of this post is
not to talk specifically about the referendum results or the image
hiding feature"

She also wrote in the comments: "What I talk about in this post is
completely independent of the filter, and it’s worth discussing (IMO)
on its own merits"

So it's perhaps not surprising that she doesn't mention the de.wp poll
regarding the filter in a post that she says is not about the filter.
;-)

Now, it's completely fair to say that the filter issue remains the
elephant in the room until it's resolved what will actually be
implemented and how. And it's understandable that lots of people are
responding accordingly. But I think it's pretty clear that Sue was
trying to start a broader conversation in good faith. I know that
she's done lots of thinking about the conversations so far including
the de.wp poll, and she's also summarized some of this in her report
to the Board:

http://meta.wikimedia.org/wiki/Image_filter_referendum/Sue%27s_report_to_the_board/en#What_has_happened_since_the_referendum

The broader conversation she's seeking to kick off in her blog post
_can_, IMO, usefully inform the filter conversation.

What Sue is saying is that we sometimes fail to take the needs and
expectations of our readers fully into account. Whether you agree with
her specific examples or not, this is certainly true in general in a
community where decisions are generally made by whoever happens to
show up, and sometimes the people who show up are biased, stupid or
wrong. And even when the people who show up are thoughtful,
intelligent and wise, the existing systems, processes and expectations
may leave them able to make only imperfect decisions.

Let me be specific. Let's take the good old autofellatio article,
which was one of the first examples of an article with a highly
disputed explicit image on the English Wikipedia (cf.
http://en.wikipedia.org/wiki/Talk:Autofellatio/Archive_1 ).

If you visit http://en.wikipedia.org/wiki/Talk:Autofellatio , you'll
notice that there are two big banners: "Wikipedia is not censored" and
"If you find some images offensive you can configure your browser to
mask them", with further instructions.

Often, these kinds of banners come into being because people (readers
and active editors) find their way to the talk page and complain about
an image being offensive. They are intended to do two things: explain
our philosophy, and give people support in making more informed
choices.

This is, in other words, the result of reasonable discussion by
thoughtful, intelligent and wise people about how to deal with
offensive images (and in some cases, text).

And yet, it's a deeply imperfect solution. The autofellatio page has
been viewed 85,000 times in September. The associated discussion page
has been viewed 400 times. The "options not to see an image" page,
which is linked from many, many of these pages, has been viewed 750
times.

We can reasonably hypothesize, without digging much further into the
data, that a significant number of people are offended by images they
see on Wikipedia but don't know how to respond, and that the responses
Wikipedians have conceived so far to help them have been, on the
whole, insufficient. It would be great to have much more data -- but
again, I think these are reasonable hypotheses.

The image filter, in an incarnation similar to the one that's been
discussed to date, is one possible response, but it's not the only one.
Indeed, nothing in the Board resolution prescribes a complex system
based on categories that exists adjacent to normal mechanisms of
editorial control.

An alternative would be, for example, to give Wikipedians a piece of
wiki syntax that they can use to selectively make images hideable on
specific articles. Imagine visiting the article Autofellatio and
seeing small print at the top that says:

"This article contains explicit images that some readers may find
objectionable. [[Hide all images on this page]]."

As requested by the Board resolution, it could then be trivial to
selectively unhide specific images.

If desired, it could be made easy to browse articles with that setting
on-by-default, which would be similar to the way the Arabic Wikipedia
handles some types of controversial content ( cf.
http://ar.wikipedia.org/wiki/%D9%88%D8%B6%D8%B9_%D8%AC%D9%86%D8%B3%D9%8A
).

This could possibly be entirely implemented in JS and templates
without any complex additional software support, but it would probably
be nice to create a standardized tag for it and design the feature
itself for maximum usability.
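
As a rough illustration of the JS-and-templates approach, the sketch
below shows how a site script might work: a template on the article
emits a marker element, and the script adds a toggle link that hides
or shows the page's images. All names here (the "hideable-images"
marker class, the content-area selector) are invented for illustration,
not an actual MediaWiki convention.

```javascript
// Minimal sketch, assuming a template emits a marker element with the
// (hypothetical) class "hideable-images" on opted-in articles.

// Pure helper: given the current hidden state, compute the next state,
// the CSS display value to apply to each image, and the link label.
function nextImageState(hidden) {
  var nowHidden = !hidden;
  return {
    hidden: nowHidden,
    display: nowHidden ? 'none' : '',
    label: nowHidden ? 'Show all images on this page'
                     : 'Hide all images on this page'
  };
}

// DOM wiring; runs only in a browser context.
if (typeof document !== 'undefined') {
  var marker = document.querySelector('.hideable-images');
  if (marker) {
    var hidden = false;
    var link = document.createElement('a');
    link.href = '#';
    link.textContent = 'Hide all images on this page';
    link.addEventListener('click', function (e) {
      e.preventDefault();
      var next = nextImageState(hidden);
      hidden = next.hidden;
      link.textContent = next.label;
      // "#mw-content-text" is the usual MediaWiki content container.
      document.querySelectorAll('#mw-content-text img').forEach(
        function (img) { img.style.display = next.display; }
      );
    });
    marker.appendChild(link);
  }
}
```

Selective unhiding of individual images, as the Board resolution asks
for, could be layered on top by toggling a per-image class rather than
all images at once.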

Solutions of this type would have the advantage of giving
Wiki[mp]edians full editorial judgment and responsibility to use them
as they see fit, as opposed to being an imposition from WMF, with an
image filter tool showing up on the page about tangential
quadrilaterals, and with constant warfare about correct labeling of
controversial content. They would also be so broad as to be of little
use for third-party censorship.

Clearly, one wouldn't just want to tag all articles in this fashion if
people complain -- some complaints should be discussed and resolved,
not responded to by adding a "Hide it if you don't like it" tag; some
should be ignored. By putting the control of when to add the tag fully
in the hands of the community, one would also give communities the
option to say "Why would we use this feature? We don't need it!" This
could then lead to further internal and external conversations.

I don't think this would address all the concerns Sue expresses. For
example, I think we need to do more to bring readers into
conversations, and to treat them respectfully. Our core community is
91% male, and that does lead to obvious perception biases (and yes,
occasional sexism and other -isms). Polls and discussions in our
community are typically not only dominated by that core group, they're
sometimes in fact explicitly closed to people who aren't meeting
sufficient edit count criteria, etc. For good reasons, of course --
but we need to find ways to hear those voices as well.

Overall, I think Sue's post was an effort to move the conversation
away from thinking of this issue purely in the terms of the debate as
it's taken place so far. I think that's a very worthwhile thing to do.
I would also point out that lots of good and thoughtful ideas have
been collected at:
http://meta.wikimedia.org/wiki/Image_filter_referendum/Next_steps/en

IMO the appropriate level of WMF attention to this issue is to 1) look
for simple technical help that we can give the community, and 2) use
the resources that WMF and chapters have (in terms of dedicated,
focused attention) to host conversations in the communities and bring
new voices into the debate, to help us all be the best possible
versions of ourselves. And as Sue said, we shouldn't demonize each
other in the process. Everyone's trying to think about these topics in
a serious fashion, balancing many complex interests, and bringing
their own useful perspective.

Erik
