[Foundation-l] Image filter brainstorming: Personal filter lists

Andreas K. jayen466 at gmail.com
Wed Nov 30 17:39:53 UTC 2011


On Mon, Nov 28, 2011 at 10:46 AM, Erik Moeller <erik at wikimedia.org> wrote:

> On Mon, Nov 28, 2011 at 10:21 AM, David Gerard <dgerard at gmail.com> wrote:
> > Unfortunately, the issue is not dead.
>
> That's correct; nobody from WMF has said otherwise. What's dead is the
> idea of a category-based image filter, not the idea of giving
> additional options to readers to reversibly collapse images they may
> find offensive, shocking, or inappropriate in the context in which
> they're viewing them (e.g. at work). However, Sue has made it clear
> that she wants the WMF staff to work with the community to find a
> solution that doesn't meet strong opposition. Her presentation on the
> issue in Hannover begins with this slide:
>
>
> http://commons.wikimedia.org/w/index.php?title=File%3APresentation_Gardner_Hannover.pdf&page=15
>
> My personal view is that such a solution will need to take into
> account that actual current editorial practices and perceptions in our
> projects vary a great deal, as did the image filter poll results by
> language. As I pointed out before, projects like Arabic and Hebrew
> Wikipedia are currently collapsing content that's not even on the
> radar in most of these discussions (e.g. the 1866 painting L'Origine
> du monde in Hebrew Wikipedia), while German Wikipedia put the vulva
> photograph on its main page. A solution that pretends that this
> continuum of practice can be covered with a single approach, one which
> doesn't give a lot of flexibility to readers and editors, is IMO not a
> solution at all.
>
> I'm not convinced that the "collapse image one-by-one" approach to
> develop a filter list is very valuable in and of itself due to lack of
> immediate practical impact and likely limited usability. The idea of
> making it easy to build, import and share such lists of images or
> image-categories would move the process of categorization into a
> market economy of sorts where individual or organizational demand
> regulates supply of available filters. This could lead to all kinds of
> groups advertising their own filter-lists, e.g. Scientology, Focus on
> the Family, etc. From there, it would be a relatively small step for
> such a group to take its filter list and coerce users to only access
> Wikipedia with the filter irreversibly in place.
>



The "collapse images one-by-one" approach would work for Wikipedians and
readers who generally come across very little content in Wikipedia that's
objectionable to them, except for the odd image that they have seen again
and again and now feel they've seen often enough.

I'm not sure how realistic an issue the second point, regarding Scientology,
Focus on the Family etc., really is. As designed, the filter only hides the
content from initial view; the content remains accessible by clicking on it.
That wouldn't be good enough for a dyed-in-the-wool censor.
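To make "hidden but one click away" concrete, here is a rough client-side
sketch in TypeScript; the class name "pfl-hidden" and the "data-image-title"
attribute are invented for illustration, not existing MediaWiki markup:

    // Rough sketch: reversibly collapse images on the user's personal filter
    // list; a single click on the placeholder restores the image.
    // "pfl-hidden" and "data-image-title" are hypothetical names.
    function collapseFilteredImages(filteredTitles: Set<string>): void {
      document.querySelectorAll<HTMLImageElement>("img").forEach((img) => {
        const title = img.getAttribute("data-image-title") ?? "";
        if (!filteredTitles.has(title)) {
          return;
        }
        // Hide the image behind a placeholder; nothing is removed from the page.
        img.classList.add("pfl-hidden");
        img.title = "Hidden by your personal filter - click to show";
        // Clicking once removes the class again, so the hiding stays reversible.
        img.addEventListener("click", () => img.classList.remove("pfl-hidden"), {
          once: true,
        });
      });
    }

Nothing in a mechanism like this could keep a determined reader away from the
image, which is exactly why I doubt it would satisfy a serious censor.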




> While third parties are already able to coerce their users to not see
> certain content, creating an official framework for doing so IMO puts
> us dangerously close to censors: it may lead to creation of regimes of
> censorship that did not previously exist, and may be used to exercise
> pressure on WMF to change its default view settings in certain
> geographies since all the required functionality would already be
> readily available.
>



If the image filter uses a user-specific personal filter list (PFL) stored on
the Foundation's servers, then for a censor to abuse it, they would have to be
able to populate the user's list without the user noticing, prevent the user
from emptying their PFL again, and disable the user's ability to click on a
hidden image to reveal it. Is there something we could do to make that more
difficult, or impossible? Any censor would then be back to square one, left to
their own devices, rather than being able to ride piggy-back on our filter
function.
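To make the question concrete, here is the sort of constraint I have in mind,
sketched in TypeScript with entirely hypothetical names; nothing here is an
existing MediaWiki interface. The point is simply that the only write path to
a PFL would be the owner acting on their own list:

    // Illustrative only: a personal filter list that can be modified solely by
    // its owner, so a third party cannot populate or lock it on the user's behalf.
    interface PersonalFilterList {
      ownerId: number;          // the account the list belongs to
      hiddenImages: string[];   // file names the owner chose to collapse
    }

    function updateFilterList(
      requestingUserId: number,
      list: PersonalFilterList,
      newHiddenImages: string[],
    ): PersonalFilterList {
      // Deliberately no administrative or bulk write path that a censor could
      // use to fill someone else's list or to stop them emptying it.
      if (requestingUserId !== list.ownerId) {
        throw new Error("Only the owner may edit their personal filter list");
      }
      return { ...list, hiddenImages: [...newHiddenImages] };
    }

    // Emptying the list is always a single owner-initiated call:
    //   updateFilterList(me, myList, []);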



> My personal view on this issue has always been that one of the most
> useful things we could do for readers is to make NPOV, well-vetted and
> thorough advice to users on how to manage and personalize their net
> access available to them. Wikipedia is only one site on the web, and
> whatever we do is not going to extend to the rest of the user's
> experience anyway. There are companies that specialize in filtering
> the Net; we could point people to those providers and give advice on
> how to install specific applications, summarizing criticism and praise
> they have received.
>



I don't understand why you would feel uncomfortable associating with
hobbyists creating crowdsourced filter lists (or indeed moral guardians
creating such lists, if they can be bothered to do the work), but would
feel comfortable endorsing professional filter software companies.

If we are worried about people changing default settings in certain
geographies, the professional filter providers are much more likely to have
that capability, as it's already part of their product portfolio.

The individuals who'd offer Wikipedia editors and readers their own sets of
graded filter lists (no hardcore / no softcore / no spiders / no Muhammad /
etc.) on their websites, just as a hobby and for peer recognition, would
have neither the business standing nor the software design capability of a
professional filter software company. Their cottage-industry, volunteer
outlook would arguably be more compatible with the Wikipedia mindset. And I
do wonder how interested a moral guardianship organisation would be in
developing a filter list for a filter that any child can override, just by
clicking on the hidden image. Curiosity is a powerful impulse.
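For what it's worth, the kind of artefact such a hobbyist would publish could
be as simple as the following; the format and field names are invented for
illustration and do not correspond to any existing standard:

    // Purely illustrative shape of a shareable, "graded" filter list that a
    // hobbyist could publish and a reader could import.
    interface SharedFilterList {
      name: string;           // e.g. "no-spiders"
      description: string;
      maintainer: string;
      categories: string[];   // Commons categories to collapse by default
      images: string[];       // individual files to collapse
    }

    const noSpiders: SharedFilterList = {
      name: "no-spiders",
      description: "Collapses arachnid images by default; click to reveal.",
      maintainer: "a volunteer hobbyist",
      categories: ["Category:Spiders"],
      images: [],
    };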

I do see that the personal filter list templates that hobbyists might put
together would be directly focused on Wikimedia, creating, as it were, an
explicit inventory of Wikimedia's controversial content that has never existed
before. On the other hand, Tom Morris in his blog post

http://blog.tommorris.org/post/11286767288/opt-in-image-filter-enabling-censorware

made a fairly good argument as to why professional censors could easily create
such lists themselves, if they were that interested. It wouldn't cost them
much, and they might be more inclined to rely on their own work rather than
that of hobbyists.



> On the other hand, such advice would be pretty removed from the
> experience of the reader, and I do think there are additional
> reasonable things we could do. So I'm supportive of approaches which
> give an editing community additional flexibility in warning their
> readers of content they may find objectionable, and give readers the
> ability to hide (in the general or specific case) such content. As I
> said previously, this wouldn't create a new regime of filter lists or
> categories, merely a broad community-defined standard by which
> exclusion of some content may be desirable, which could vary by
> language as it does today.
>


Do you favour the sort of approach Neitram suggested, then? I.e. one of
these:

http://meta.wikimedia.org/wiki/Controversial_content/Brainstorming#thumb.2Fhidden

or

http://meta.wikimedia.org/wiki/Controversial_content/Brainstorming#Opt-in_version_of_this_proposal

That's kind of similar to the Hebrew and Arabic collapse templates, except
that the user gets to opt in to it.

Best,
Andreas

