On 18.10.2011 23:20, Andreas K. wrote:
On Tue, Oct 18, 2011 at 8:09 PM, Tobias Oelgarte <tobias.oelgarte@googlemail.com> wrote:
You said that we should learn from Google and other top websites, but at the same time you want to introduce objective criteria, which none of these websites did?
What I mean is that we should not classify media as offensive, but in terms such as "photographic depictions of real-life sex and masturbation", "images of Muhammad". If someone feels strongly that they do not want to see these by default, they should not have to. In terms of what areas to cover, we can look at what people like Google do (e.g. by comparing "moderate safe search" and "safe search off" results), and at what our readers request.
The problem is that we never asked our readers before the whole thing was already running wild. It really is time to ask how the readers feel. That would mean asking readers in very different regions to get a good overview of this topic. What Google and other commercial groups do shouldn't be a reference for us. They serve their core audience and ignore the rest, since their aim is profit, and only profit, no matter what "good reasons" they present. We are quite an exception to them, not in popularity, but in concept. If we take the example of "futanari", then we surely agree that quite a lot of people would be surprised, especially with "safe search" on. But now we have to ask why that is. Why does it work so well for other terms more common to a Western audience?
You also compare Wikipedia with an image board like 4chan? You want the readers to define what they want to see. That means they would play the judge, and the majority would win. But this is in contrast to the proposal that the filter should work with objective criteria.
I do not see this as the majority winning, and a minority losing. I see it as everyone winning -- those who do not want to be confronted with whatever media don't have to be, and those who want to see them can.
I guess you missed the point that a minority of offended people would just be ignored. Looking at the goal and Ting's examples, we would just strengthen the current position (Western majority and point of view) while doing little to nothing in the areas that were the main concern, or at least the strong argument for starting the process. If it really comes down to the point that a majority does not find Muhammad caricatures offensive and "wins", then we have no solution.
Could you please cross-check your own comment and tell me what kind of solution you have in mind? Currently it is a mix of very different approaches that don't fit together.
My mind is not made up; we are still in a brainstorming phase. Of the alternatives presented so far, I like the opt-in version of Neitram's proposal best:
http://meta.wikimedia.org/wiki/Controversial_content/Brainstorming#thumb.2Fh...
If something better were proposed, my views might change.
Best, Andreas
I read this proposal and, on second thought, can't see a real difference. At first glance it is good that the decision stays tied to the topic and is not separated out as in the first proposals. But it also leaves a bad taste. We would directly deliver the tags that third parties (ISPs, local networks, institutions) need to remove content, no matter whether the reader chooses to view the image or not, and we would still be in charge of declaring what might be or is offensive to others, forcing our judgment onto the users of the feature.
Overall it follows a good intention, but I'm very concerned about the side effects, which make me say "no way" to this proposal as it is.
nya~