Sorry, I dropped some hot food on myself as I wrote this, and then apparently hit send by accident.
On Fri, Sep 16, 2011 at 9:57 PM, Andre Engels andreengels@gmail.com wrote:
On Fri, Sep 16, 2011 at 9:13 PM, Tobias Oelgarte <tobias.oelgarte@googlemail.com> wrote:
I would not have any problem if we did not play into the hands of censors (local ISPs, a simple proxy, regimes, institutions, ...) by actually labeling content as objectionable. That hands control over the content to them and takes it away from the user, and none of them would invest the money if they had to label the content themselves.
So how do you expect those censors to use this?
You should know that there are hundreds of phobias, cultural conflicts
and other categories of possibly objectionable content. Do you expect us to manage all these categories of filtering, or would you say that it will be narrowed down to be user-friendly and manageable, leaving out some categories and ignoring the complaints of some minorities?
I'd say, drop the idea that the filter is supposed to be perfect. A little-used filter can get rough content the first time around, preferably specified by the person asking for the filter; then people using the filter can suggest adding or removing images. Volunteers can go and work on the filters if they want, but if they don't, the filter will just be changed through such suggestions.
Then again, there is the alternative of only including filters with at least a certain amount of expected usage. I see no problem with not having a filter for everyone who asks for one. I don't think that doing things perfectly and not doing them at all are the only options.