[Foundation-l] Image filter brainstorming: Personal filter lists

WereSpielChequers werespielchequers at gmail.com
Fri Dec 2 11:00:13 UTC 2011


I'm pretty sure that the community is against a filter system based on our
Commons categories. Those who oppose that type of scheme range from the
idealists who are opposed to censorship in principle to the pragmatists who
are aware of our categorisation backlog and don't want to set us up to fail
(or to implement something that would undermine our GLAM programs).
Thankfully the Foundation seems to have taken that message on board, and
though we can expect pro-filter people to keep joining the debate and
trying to revive that type of proposal, I'm pretty sure it is dead in the
water.

I'm not sure that we have a consensus for or against the principle of
censorship, or whether the community as a whole regards a private personal
filter as censorship. The "referendum" could have established whether we
have such a consensus, but it didn't include the right questions. I suspect
that I'm not unusual in opposing more censorship than we already have in
our somewhat misnamed "not censored" policy, while also regarding
censorship as one person stopping another from seeing or hearing something.
To my mind censorship starts when someone tells me I can't have certain
images or information; if I choose not to see certain things, that's my
choice and not something I consider censorship. But I don't know what
proportion of the community shares my view on that. This question raises
two contentious positions. On the one hand, at least one Wikimedian is
asserting that the community is opposed to censorship in principle, and
that even a private personal filter would be censorship. On the other hand,
the board still wants the image filter to be usable by IPs and not just
logged-in users - despite the fact that we have no way to implement an
IP-level system without allowing some people to censor other people's
Wikimedia viewing.
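
To see why that is, consider the only obvious way to remember a logged-out
reader's choice: client-side state such as a cookie. A minimal sketch in
Python (purely hypothetical - nothing like this exists in MediaWiki),
simulating the browser's cookie jar as a dict:

    # Hypothetical sketch: a logged-out reader's filter choice can only
    # live client-side, e.g. in a cookie jar that belongs to the
    # *browser*, not to any one person.

    shared_browser_cookies = {}  # one jar per school/library/family machine

    def set_image_filter(cookies, enabled):
        # Whoever touches the setting decides for every later user.
        cookies["imagefilter"] = "1" if enabled else "0"

    def image_filter_enabled(cookies):
        return cookies.get("imagefilter") == "1"

    set_image_filter(shared_browser_cookies, True)       # person A opts in...
    assert image_filter_enabled(shared_browser_cookies)  # ...person B inherits it

The same applies to any setting keyed on the IP address itself: everyone
behind a school or office proxy shares one switch, so somebody's choice
inevitably becomes somebody else's restriction.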

Another contentious area exists re NPOV and globalisation. Some other
websites have a clear POV and a focus on a particular culture, and for them
a filter is relatively easy. "Not Safe for Work" is probably quite similar
in Peoria, Portsmouth and Perth, and such a filter would seem prudish but
not totally alien to many Europeans. But in some parts of the world
cultural concerns are very different - so different that the idea of a
simple single filter, or even a complex filter with a sliding scale from
burka to bare naked via swimwear, isn't enough. To comply with NPOV and
offer a filter that could potentially work for everybody, we need a
multiplex system that allows for the possibility that two different people
could share a distaste for one image but have completely opposite
perceptions of another image. The initial WMF proposal only supported a
limited number of filters and therefore would inevitably have led to POV
disputes as to which religions or filter concerns were important enough to
be on the list and which the community would ignore and deem insufficiently
important to merit a filter option. Both of the filter options in play -
the personal filter option and the personal private filter option - are
based on the idea that you can have as many different filter options as you
want; the distinguishing issue is whether there are people who want a
particular filter, not whether the movement decides that a particular
filter request is valid.
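
To make that data model concrete, here's a minimal Python sketch (the list
names and images are entirely made up, and this is my illustration, not a
WMF design): any number of filter lists can exist, and each user simply
subscribes to the ones that match their own sensibilities.

    # Each filter list is just a named set of image identifiers.
    filter_lists = {
        "list-a": {"Image_1.jpg", "Image_2.jpg"},
        "list-b": {"Image_2.jpg", "Image_3.jpg"},
        "list-c": {"Image_4.jpg"},
    }

    # Users subscribe to whichever lists suit them; nobody has to rule
    # on which concerns are "important enough" to exist.
    subscriptions = {
        "user1": {"list-a", "list-b"},
        "user2": {"list-a", "list-c"},
    }

    def is_hidden(user, image):
        # Hidden iff any list the user subscribes to contains the image.
        return any(image in filter_lists[name]
                   for name in subscriptions.get(user, set()))

    # Two people can share a distaste for one image...
    assert is_hidden("user1", "Image_1.jpg") and is_hidden("user2", "Image_1.jpg")
    # ...while disagreeing completely about another.
    assert is_hidden("user1", "Image_3.jpg") and not is_hidden("user2", "Image_3.jpg")

Because the set of lists is open-ended, no committee ever has to decide
which cultures' concerns make the cut.
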
However, one of the leading proposals is that we promote the practice of
collapsing contentious images that already operates on two language
versions of Wikipedia, and encourage it elsewhere. The problem is that you
can't have a policy of allowing "controversial" images to be collapsed
without setting a threshold for how controversial an image needs to be to
merit such action. If you simply allow anyone to collapse any image they
find offensive, then our political coverage will quickly look odd. If you
decide to only collapse and hide images that have been reported as
controversial by reliable sources, then brace yourself for demonstrations
at every future Wikimania.

The third contentious area is over the publishing of lists that could
assist censors. Tom Morris has argued that we shouldn't concern ourselves
with that, in effect citing
http://en.wikipedia.org/wiki/Wikipedia:Other_stuff_exists and explaining
that the horse has bolted. Not everyone accepts that argument, and I see
this as a major difference between the personal filter option and the
private personal filter option
http://meta.wikimedia.org/wiki/Controversial_content/Brainstorming/personal_private_filters.
I wonder, though, whether a compromise between the two - seed lists that
are deliberately not comprehensive - would be acceptable.
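
In data terms the compromise might look something like this Python sketch
(again hypothetical, just my illustration of the idea): the only thing ever
published is a deliberately incomplete seed list, which a user copies into
a private preference that then diverges invisibly.

    # Published seed: partial by design, never a comprehensive blocklist.
    seed_list = {"Image_2.jpg", "Image_3.jpg"}

    def new_private_filter(seed=None):
        # Copy, don't share: later edits never flow back to the seed,
        # and the per-user set is never published anywhere.
        return set(seed) if seed is not None else set()

    my_filter = new_private_filter(seed_list)
    my_filter.add("Image_5.jpg")      # private addition, visible to nobody else
    my_filter.discard("Image_3.jpg")  # private removal, likewise

    # A censor scraping our published data sees only seed_list, which is
    # unchanged and too incomplete to be useful as a blocklist.
    assert seed_list == {"Image_2.jpg", "Image_3.jpg"}
    assert my_filter == {"Image_2.jpg", "Image_5.jpg"}

The reader gets a working filter without having to start from a blank
page, while the published seed gives a would-be censor little more than
they could already scrape from our existing categories.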

A fourth area of contention is money, and specifically whether this is a
legitimate use of the money donated to the movement. We've already had one
UK board member ask awkward questions re this. My view is that one could
argue that a private personal filter is a user preference and within scope;
that a filter which made the projects acceptable to large numbers of people
who currently avoid us would be in scope; and that a filter which cost a
tiny proportion of our total budget could be de minimis. But others
disagree, and at present we have no idea what these filters would cost or
how many non-users are just waiting for such a filter. Part of that could
be answered by an estimate from the developers, and part by doing research
amongst the internet users who don't use Wikimedia in languages where we
have a low readership share. More radically, the board could resolve this
by setting up a stand-alone organisation to fund the global image filter.
If that couldn't fundraise, then we'd have an idea of the value of the
concept, and it might be salutary for the board itself to organise such a
fork and then have to collaborate with it.

WSC


> It isn't one incidence, it isn't a class of incidences. Take it on board
> that the community is against the *principle* of censorship. Please.
>
> --
> Jussi-Ville Heiskanen, ~ [[User:Cimon Avaro]]
>
> ------------------------------
>
> Date: Thu, 1 Dec 2011 19:06:50 +0000
> From: Tom Morris <tom at tommorris.org>
> Subject: Re: [Foundation-l] Image filter brainstorming: Personal
>        filter lists
> To: Wikimedia Foundation Mailing List
>        <foundation-l at lists.wikimedia.org>
>
> On Thu, Dec 1, 2011 at 09:11, Jussi-Ville Heiskanen
> <cimonavaro at gmail.com> wrote:
> > This is not a theoretical risk. This has happened. Most famously in
> > the case of Virgin using pictures of persons that were licensed under
> > a free licence in their advertising campaign. I hesitate to call this
> > argument fatuous, but its relevance is certainly highly questionable.
> > Nobody has raised this as a serious argument, except you assume it has
> > been. This is the bit that truly is a straw horse. The "downstream
> > use" objection was *never* about downstream use of _content_ but
> > downstream use of _labels_ and the structuring of the semantic data.
> > That is a real horse of a different colour, and not of straw.
> >
>
> I was drawing an analogy: the point I was making is very simple - the
> general principle of "we shouldn't do X because someone else might
> reuse it for bad thing Y" is a pretty lousy argument, given that we do
> quite a lot of things in the free culture/open source software world
> that have the same problem. Should the developers of Hadoop worry that
> (your repressive regime of choice) might use their tools to more
> efficiently sort through surveillance data of their citizens?
>
> I'm not at all sure how you concluded that I was suggesting filtering
> groups would be reusing the content. Net Nanny doesn't generally need
> to include copies of Autofellatio6.jpg in their software. The reuse of
> the filtering category tree, or even the unstructured user data, is
> something anti-filter folk have been concerned about. But for the most
> part, if a category tree were built for filtering, it wouldn't require
> much more than identifying clusters of categories within Commons. That
> is the point of my post. If you want to find adult content to filter,
> it's pretty damn easy to do: you can co-opt the existing extremely
> detailed category system on Commons ("Nude images including Muppets",
> anybody?).
>
> Worrying that filtering companies will co-opt a new system when the
> existing system gets them 99% of the way anyway seems just a little
> overblown.
>
> > It isn't one incidence, it isn't a class of incidences. Take it on
> > board that the community is against the *principle* of censorship.
> > Please.
>
> As I said in the post, there may still be good arguments against
> filtering. The issue of principle may be very strong - and Kim Bruning
> made the point about the ALA definition, for instance, which is a
> principled rather than consequentialist objection.
>
> Generally, though, I don't particularly care *what* people think, I
> care *why* they think it. This is why the debate over this has been so
> unenlightening, because the arguments haven't actually flowed, just
> lots of emotion and anger.
>
> --
> Tom Morris
> <http://tommorris.org/>