On 3/27/06, Neil Harris <neil@tonal.clara.co.uk> wrote:
> Unfortunately, that's easier said than done -- "child-safe" and "worksafe" are concepts that are impossible to define in a way that everyone can agree on. What one parent or community regards as acceptable may be unacceptable to another; what a parent wants their ten-year-old child to be able to see will probably differ from what they want their sixteen-year-old child to be able to see, and so on.
I agree completely that defining such concepts in a global way is not possible. However, concepts such as "covered nipples", "men in shorts", or "photographs of genitalia" are objective. Individuals with appropriate software can then filter as they see fit.
> For some examples of edge cases: consider pictures of men wearing shorts, which are regularly banned by the censors in some of the more conservative Middle-Eastern states: do we mark all articles showing images of uncovered arms or legs as "unsafe"? How about pictures of
No, we mark them "men with uncovered arms or legs".
> women with uncovered hair? Do we mark the [[Holocaust]] article, which is extremely upsetting, as "unsafe" for children to read? How about
Ditto.
> [[death]], which is upsetting for very small children? What about pictures of [[Bahá'u'lláh]], which observant Bahá'ís prefer not to see in public, or even in their own homes?
Currently, we have no "offensiveness" markup whatsoever. If and when we implement such a system, someone can create a tag called "pictures of Bahaullah", which can be used and filtered against as appropriate.
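To make that concrete, here's a minimal sketch in Python of what filtering against such tags might look like on the reader's side. The tag names and article list are made up for illustration; this is not actual MediaWiki markup or a proposed vocabulary:

    # Descriptive tags attached to articles or images. Names here are
    # illustrative assumptions, not a proposed tagging vocabulary.
    articles = {
        "Bahá'u'lláh": {"pictures-of-bahaullah"},
        "Beach volleyball": {"men-in-shorts", "uncovered-arms-or-legs"},
        "Nudity": {"photographs-of-genitalia"},
    }

    # Each reader (or parent, school, or ISP) supplies their own
    # blocklist; the encyclopedia itself makes no safe/unsafe judgement.
    blocklist = {"pictures-of-bahaullah"}

    def is_visible(title):
        # Show the article only if none of its tags are blocked.
        return not (articles[title] & blocklist)

    for title in sorted(articles):
        print(title, "->", "show" if is_visible(title) else "suppress")

All of the judgement lives in the blocklist, which the downstream user controls; the tags themselves stay purely descriptive.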
> I recommend reading RFC 3675 for a full and detailed discussion of all the issues involved: its authors describe broad-brush attempts at content filtering as "ill considered [...] from the legal, philosophical, and particularly, the technical points of view."
I agree. Which is why I'm not proposing broad-brush content filtering, but instead fine-brush content *tagging*.
> Rather than attempting to define "safe" and "unsafe" categories, we should instead concentrate on assigning all Wikipedia articles to meaningful fine-grained descriptive categories, without any implied judgment that a category is "safe" or "unsafe" for any given viewer. Downstream users who want to filter Wikipedia's content can then use this information to make their own choices.
Ok, I now see that you were actually replying to my original message where I glibly used the terms "kidsafe" and "worksafe". My bad.
Steve