Andreas Kolbe wrote:
I would use indicators like the number and intensity of complaints received.
For profit-making organizations seeking to maximize revenues by catering to majorities, this is a sensible approach. For most WMF projects, conversely, neutrality is a fundamental, non-negotiable principle.
Generally, what we display in Wikipedia should match what reputable educational sources in the field display. Just like Wikipedia text reflects the text in reliable sources.
This is a tangential matter, but you're comparing apples to oranges.
We look to reliable sources to determine factual information and the extent of coverage thereof. We do *not* emulate their value judgements.
A reputable publication might include textual documentation of a subject, omitting useful illustrations to avoid upsetting its readers. That's non-neutral.
That does not mean that we should not listen to users who tell us that they don't want to see certain media because they find them upsetting, or unappealing.
Agreed. That's why I support the introduction of a system enabling users (including those belonging to "insignificant" groups) to filter images to which they object.
I would deem them insignificant for the purposes of the image filter. They are faced with images of women everywhere in modern life, and we cannot cater for every fringe group.
The setup that I support would accommodate all groups, despite being *far* simpler and easier to implement/maintain than one based on tagging would be.
At some point, there are diminishing returns, especially when it amounts to filtering images of more than half the human race.
That such an endeavor is infeasible is my point.
We need to look at mainstream issues (including Muhammad images).
We needn't focus on *any* "objectionable" content in particular.
That would involve a user switching all images off, and then whitelisting those they wish to see; is that correct? Or blacklisting individual categories?
Those would be two options. The inverse options (blacklisting individual images and whitelisting entire categories) should also be included.
And it should be possible to black/whitelist every image appearing in a particular page revision (either permanently or on a one-off basis).
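(To make the mechanics concrete, here is a rough sketch of the kind of per-user preference check I have in mind. The names and the TypeScript form are purely illustrative assumptions, not an actual MediaWiki interface; category-level rules could be layered in the same way.)

```typescript
// Hypothetical sketch of per-user image filter preferences, stored client-side.
// None of these identifiers correspond to real MediaWiki APIs.

type Rule = "show" | "hide";

interface FilterPrefs {
  defaultRule: Rule;              // "hide" = all images off unless whitelisted
  imageRules: Map<string, Rule>;  // per-image overrides, keyed by file name
  pageRules: Map<string, Rule>;   // per-page-revision overrides, keyed by revision ID
}

// Decide whether to display a given image on a given page revision.
// Precedence (an assumption for illustration): image rule > page rule > default.
function shouldDisplay(prefs: FilterPrefs, fileName: string, revisionId: string): boolean {
  const imageRule = prefs.imageRules.get(fileName);
  if (imageRule !== undefined) return imageRule === "show";

  const pageRule = prefs.pageRules.get(revisionId);
  if (pageRule !== undefined) return pageRule === "show";

  return prefs.defaultRule === "show";
}

// Example: everything off by default, with one whitelisted image.
const prefs: FilterPrefs = {
  defaultRule: "hide",
  imageRules: new Map([["Example.jpg", "show"]]),
  pageRules: new Map(),
};

console.log(shouldDisplay(prefs, "Example.jpg", "12345")); // true
console.log(shouldDisplay(prefs, "Other.jpg", "12345"));   // false
```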
This would be better from the point of view of project neutrality, but would seem to involve a *lot* more work for the individual user.
Please keep in mind that I don't regard a category-based approach as feasible, let alone neutral. The amount of work for editors (and related conflicts among them) would be downright nightmarish.
It would also be equally likely to aid censorship, as the software would have to recognise the user's blacklists, and a country or ISP could then generate its own blacklists and apply them across the board to all users.
They'd have to identify specific images/categories to block, which they can do *now* (and simply intercept and suppress the data themselves).
David Levy