________________________________
From: MZMcBride z@mzmcbride.com

Personally, from the technical side, I don't think there's any way to make per-category filtering work. What happens when a category is deleted? Or renamed (which currently amounts to deleting the old category name)? And are we really expecting individual users to go through millions of categories and find the ones that may be offensive to them? Surely users don't want to do that. The whole point is that they want to limit their exposure to such images, not dig through the millions of categories that may exist looking for ones that largely contain content they find objectionable. Surely.
So that leaves you with much broader categorization, I guess? "Violence", "Gore", etc. And then you have people debating which images belong to which broad category?
Not trying to be provocative, I've just never understood how the category-based system is supposed to work in practice. In (abstract) theory, it seems magical.
The way it is supposed to work is by creating categories that simply describe media content. A bit like alt texts, I guess. Examples might be:
Images of people engaged in sexual intercourse.
Videos of people masturbating.
Images of genitals.
Pictures of the prophet Muhammad.
Images of open wounds.
In other words, the idea is to give the user objective definitions of media content (not a subjective assessment of any likely offence).
Working out good category definitions would be an important task. There is little potential for arguments, provided the definitions are clear. A media file either shows genitals, or it doesn't. It either shows people having sexual intercourse, or it doesn't. If there is any doubt (say, visibility is largely obscured, or you can't tell), then the basic rule should be "leave it out" (unless and until filter users start complaining).
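Mechanically, the filtering this implies is nothing more than a set intersection between a file's descriptive categories and the user's opt-out list. A minimal sketch, purely for illustration (the category names, the `blocked` set, and `should_display` are all invented here, not an actual MediaWiki API):

```python
# Categories the user has chosen to filter, phrased as objective
# content descriptions rather than subjective offence ratings.
# (Hypothetical example data.)
blocked = {
    "Images of genitals",
    "Images of open wounds",
}

def should_display(file_categories, blocked=blocked):
    """Show a media file only if none of its categories are blocked.

    Following the "leave it out" rule, a file with no descriptive
    categories at all is simply shown: it has not been tagged.
    """
    return not (set(file_categories) & blocked)

# A tagged file is hidden; an untagged or unrelated file is shown.
print(should_display(["Images of open wounds"]))   # False
print(should_display(["Photographs of flowers"]))  # True
```

Clear definitions matter precisely because this mechanism is exact-match: a debatable category label reintroduces all the arguments the objective wording is meant to avoid.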
Andreas