[Foundation-l] Blog from Sue about censorship, editorial judgement, and image filters

Bjoern Hoehrmann derhoermi at gmx.net
Fri Sep 30 00:03:41 UTC 2011


* Keegan Peterzell wrote:
>http://suegardner.org/2011/09/28/on-editorial-judgment-and-empathy/

I don't think this is contributing much to the discussion. The point of
the blog post is basically just that people should discuss how to make
articles better. Everybody agrees with that. The claim, in the sense of
the blog post, that the existing decision-making processes, including
the demographics of those who would participate in discussions, are
insufficient is asserted but, as far as I can tell, rather unproven.
There are three examples. First:

  When an editor asks if the image cleavage_(breasts).jpg really
  belongs in the article about clothing necklines, she shouldn’t get
  shouted down about prudishness: we should try to find better images
  that don’t overly sexualize a non-sexual topic.

I checked out http://en.wikipedia.org/wiki/Neckline and its Talk page.
The only edit there since March 2011 was by CommonsDelinker renaming a
file (the one in question in fact). The Talk page has not been edited
since July 2007 and there is no mention of this image, much less anyone
being shouted down about it. The second example:

  When an editor writes “you can’t be serious?!” after
  vagina,anus,perineum_(detail).jpg is posted on the front page,
  the response shouldn’t be WP:NOTCENSORED: we should have a
  discussion about who visits the homepage, and we should try to
  understand, and be sensitive to, their expectations and
  circumstances and needs.

This discussion took place beforehand; people quite firmly decided the
article would be featured and would include an image on the front page.
I am quite sure that if you conducted a representative survey among
Wikipedia users in D-A-CH, these two decisions would be rather
uncontroversial. The image selection process did not work so well, but
I rather doubt the people who would write "you can't be serious?!" in
response would be notably affected by the image choice.

Clearly this is not without friction, but that is by design. Society
has a need for this kind of friction, a gray area where you can explore
the boundaries, to tell what our current cultural norms are. Monty
Python's Life of Brian, for instance, helped us learn where we stand
with satire with respect to religion and politics when it was released,
with a great deal of friction, but it is now regarded as one of the
greatest comedy films of all time.

  When we get thousands of angry e-mails about our decision to republish
  the Jyllands-Posten Muhammad cartoons, we should acknowledge the
  offence the cartoons cause, and explain why, on balance, we think they
  warrant publication anyway.

This case is entirely different. If there were no controversy, there
would be no fair use rationale. The cartoons are published there
because of the offence they cause. There is no balance to strike;
people's sensitivities are what put them there. It's nice to explain
this, but it has nothing to do with any kind of process failure.

The conference Sue Gardner mentions in the blog posting, to take
another example, was accompanied by a presentation in which the German
article on Arachnophobia was shown featuring a huge spider image. It
turned out that was an old revision that had long since been changed.

It is not entirely surprising that those who see a problem that needs
to be solved have trouble providing evidence in sufficient quantity: if
you can actually persuade the community that there is a problem, it
will go and fix it. What is left, going by this argument, are the cases
where the normal editorial process has failed. But examples of those
cannot be presented to the established community, because it would not
agree there is a problem (otherwise the cases would fall into the first
category), unless the problem is so enormous that the established
community could not hope to address it, in which case we would not be
talking about this either, as examples would be unnecessary.

So that leaves the argument about the demographics of the established
community: the claim that the established community cannot address the
problem because it does not understand it on an empathic level. This is
true, of course, insofar as there is a problem that cannot be
explained.

It is normal and expected that communities reject solutions to problems
they are told they cannot understand, especially when the communities
are expected to participate in implementing the solution, which they
cannot do, as they do not understand the problem to begin with. The
image filter proposal is largely kafkaesque in this sense.

I will note in passing that Sue Gardner is, in my opinion, quite wrong
on the distinction between editorial judgement and censorship. We all
use, and expect others to use, some restraint to make living together
easier. There are laws, there are social norms; we may refrain from
expressing ourselves in a certain way due to the sensitivities of
others without calling that self-censorship. Laws and widely accepted
social norms are authorities in a broad sense of the word. Without an
authority to consider, deference to the sensitivities of others is
self-censorship, and harmfully so: we constantly need to cross
boundaries slightly to know where the boundaries currently are, which
is what we need in order to tell the two cases apart.

In a collective, like a group of editors who decide some article issue,
this is more difficult. The collective may, for instance, have standing
rules that are not representative of the overwhelming majority's views,
so individuals cannot intuitively decide whether something is "okay" by
the social norms they are accustomed to. Nevertheless, if the editors
collectively find that a certain way of illustrating an article is
best, but then do not implement it because it might lead to a couple of
complaints, that is self-censorship, not editorial judgement.

Anyway, yeah, it's easy to agree that not everything is perfect, and we
should work together, discussing and explaining things, to make things
better, but that has always been so and everybody understands this. The
blog posting does not help us understand the problem the Board thinks
can be solved with an image filter and it does not help us understand
how it should be implemented, or allow us to come up with alternatives
that would solve the problem, if indeed there is any. There is nothing
wrong with that, I am just saying it doesn't help us make progress.

My impression is that the Board, and the people who support the filter,
can relate to there being sensitivities with respect to images, but no
effort has been made to understand what practical problems there are or
how they can be solved, and by my reasoning above the problem cannot be
understood or solved within the existing infrastructure. That means
there is no point in continuing the discussion unless and until people
come up with substantial new information.
-- 
Björn Höhrmann · mailto:bjoern at hoehrmann.de · http://bjoern.hoehrmann.de
Am Badedeich 7 · Telefon: +49(0)160/4415681 · http://www.bjoernsworld.de
25899 Dagebüll · PGP Pub. KeyID: 0xA4357E78 · http://www.websitedev.de/ 