[Foundation-l] Board resolutions on controversial content and images of identifiable people

Andrew Gray andrew.gray at dunelm.org.uk
Sat Aug 27 21:19:45 UTC 2011


On 26 August 2011 02:15, David Goodman <dggenwp at gmail.com> wrote:

> make it plainer, that people who find Wikipedia articles appropriate
> for advocating their religious beliefs may use the content for that
> purpose, to that the WMF should find some universally acceptable sets
> of spiritual beliefs, and use its content to advocate them. Taking one
> of the proposed possibilities (probably the one that instigated this),
> providing for censoring images on the grounds of sexual content is
> doing exactly that for views on sexual behavior. We're officially
> saying that X is content you may find objectionable, but Y isn't.
> That's making an editorial statement about what is shown on X and Y.

I've finally twigged what's worrying me about this discussion.

We're *already* making these editorial statements, deciding what is
and isn't appropriate or offensive for the readers on their behalf,
and doing it within articles on a daily basis.

When we, as editors, consider including a contentious image, we have a
binary choice - do it or don't do it. It's not like text, where we can
spend a nice meandering paragraph weighing the merits of position A
and position B and referring in passing to position C; the picture's
there or it isn't, and we've gone with the "inclusionist" or the
"exclusionist" position. At the moment, there is a general consensus
that, more or less, we prefer including images unless there's a
problem with them, and when we exclude them, we do so after an
editorial discussion, guided by policy and determined by our users on
the basis of what they feel is appropriate, offensive, excessively
graphic, excessively salacious, etc.

In other words, we decide whether or not to include images, and select
between images, based on our own community standards. These aren't
particularly bad as standards go, and they're broadly sensible and
coherent and clear-headed, but they're ours; they're one particular
perspective, one inextricably linked to the systemic bias
issues we've known about for years and years. This is a bit of a weird
situation for us to be in. We can - and we do - try hard to make our
texts free of systemic bias, of overt value judgements, and so forth,
and then we promptly have to make binary yes-or-no value judgements
about what is and isn't appropriate to include in them. As Kim says
upthread somewhere, these judgements can't and won't be culturally
neutral.

(To use a practical example, different readers in different languages
get given different sets of images, handled differently, in comparable
Wikipedia articles - sometimes the differences are trivial, sometimes
significant. Does this mean that one project is neutral in selection
and one not? All sorts of cans of worms...)

As such, I don't think considering this as the first step towards
censorship, or as a departure from initial neutrality, is very
meaningful; it's presuming that the alternative is reverting to a
neutral and balanced status quo, but that never really existed. The
status quo is that every reader, in every context, gets given the one
particular image selection that a group of Wikipedians have decided is
appropriate for them to have, on a take-it-or-leave-it basis...

-- 
- Andrew Gray
  andrew.gray at dunelm.org.uk
