On Tue, Oct 18, 2011 at 8:09 PM, Tobias Oelgarte < tobias.oelgarte@googlemail.com> wrote:
You said that we should learn from Google and other top websites, but at the same time you want to introduce objective criteria, which none of these websites has done?
What I mean is that we should not classify media as offensive, but in terms such as "photographic depictions of real-life sex and masturbation", "images of Muhammad". If someone feels strongly that they do not want to see these by default, they should not have to. In terms of what areas to cover, we can look at what people like Google do (e.g. by comparing "moderate safe search" and "safe search off" results), and at what our readers request.
You also compare Wikipedia with an image board like 4chan? You want the readers to define what they want to see. That means they would play the judge, and the majority would win. But this is in contrast to the proposal that the filter should work with objective criteria.
I do not see this as the majority winning, and a minority losing. I see it as everyone winning -- those who do not want to be confronted with whatever media don't have to be, and those who want to see them can.
Could you please cross-check your own comment and tell me what kind of solution you have in mind? Currently it is a mix of very different approaches that don't fit together.
My mind is not made up; we are still in a brainstorming phase. Of the alternatives presented so far, I like the opt-in version of Neitram's proposal best:
http://meta.wikimedia.org/wiki/Controversial_content/Brainstorming#thumb.2Fh...
If something better were proposed, my views might change.
Best, Andreas