[Foundation-l] Personal Image Filter results announced
Tobias Oelgarte
tobias.oelgarte at googlemail.com
Thu Sep 8 11:46:41 UTC 2011
I just read the list of comments, and I have quite a few questions and
answers:
1. Is Wikipedia comparable to a library?
It is right to say that we create the content on our own. But is this
really true in the first sense? We don't invent or introduce facts; we
collect and display them. We don't pass judgment on topics; we cite
criticism that is not our own. All under the primary rule of NPOV.
We also have to respect the fact that Wikipedia is both. From the
authors' point of view we might not be a library. But from the readers'
point of view we essentially are a library, one which hosts millions of
articles, instead of books, from various authors.
2. What is the difference between Commons and a library?
I don't see any difference in this case. A library buys books; we get
images for free. In the end Commons is a collection like a library, and
we label images for easy search by content, but not on the basis of
whether something is controversial or not. A simple and good example is
the category "violence" and its subcategories. You will find pictures
that show violence, but many more pictures that are related to the topic
without depicting any violence: demonstrations against violence,
memorials, portraits of persons involved (mostly politicians who tried
to stop it), and so on.
3. Can the current category system be used for the intended purpose?
Well, as described in 2., it can't without making huge mistakes or going
through all the millions of images. The reason is simple: we categorize
by topic (descriptive labels) and not by whether something might be
controversial or not.
4. Who decides, the reader or some contributors?
If content is labeled by controversial topic, then that is the choice of
the contributors. The reader decides whether he wants to enable the
feature or not. That is his choice. But what is actually filtered isn't
his choice; it doesn't follow his understanding of what is controversial
or not. So we have two barriers: one that respects the choice of the
reader, and one that does not.
5. Will the contributors decide for themselves or for others?
They will do both. Their personal preference/judgment about what a
reader might or should see will play a big role. A typical line of
argument in such discussions is that the contributor does not speak for
himself; instead he always tries to speak for a "majority" behind him,
to justify his point of view. A common theme found in nearly all
influential speeches around the world. The other side is the word of
the press. To quote Joseph H. Jackson: "Did you ever hear anyone say,
'That work had better be banned because I might read it and it might be
very damaging to me'?"
6. How will consensus be found in a global approach?
As many on the list have already stated, the preferences (the
understanding of what is controversial) of the readership diverge
strongly. I guess we don't need to prove this and can assume it as a
given fact. This opens other issues: Where do we draw the line? Will the
readers be happy with the line? Will the majority (English contributors)
dictate what is objectionable, ignoring the very minorities that the
filter is supposed to take care of?
7. What meaning does the "referendum" have?
Looking at the questions, it only serves to define what is important
about the feature. The first question ensures that it isn't actually a
referendum. There was no option to express "no, we don't want it", "no,
other things are more important", and so on. Additionally, it was
unclear how to vote on the questions. Suppose someone really liked the
idea and found it important that the feature can be disabled at any
time; he would give a 10 and a 10. But what about someone who disliked
the idea to begin with? He would vote on the following questions either
under the assumption that the filter would be introduced anyway, or in
protest. As such he would also give a 10, or a 0. So you don't have any
separation that would allow deeper insight.
8. Can the filter, the labeling, be misused?
Yes it can. In multiple ways.
As we already read from Sarah Stierch, filters are already in place. She
has no way to avoid the filter, which is a simple proxy server. The
same kind of server can be used to filter content based upon our
labeling as well. Considering regions with limited infrastructure, this
could quickly lead to censorship through third parties that provide
Internet access.
Additionally, small groups of users can hunt down content they don't
like (feeding input into the previous possibility). Considering that we
have millions of files and, compared to that, a very small group of
editors spread widely across them, it is easy for minorities to reach
local majorities.
Vandalism is also likely and must be resolved by admins, who get an
important new job and have less time for authors waiting on them. In the
end it makes us less productive.
9. Will the filter help to get Wikipedia unblocked?
I doubt that anyone who filters Wikipedia as a whole would allow access
after the filter is introduced. Since we don't want to censor, the
filter can be disabled at any time. Who really thinks that this will
satisfy censors?
Those are just some of my thoughts on this topic. I personally think
that the approach goes in a very wrong direction and would cause much
more damage than benefit.
Greetings from Tobias Oelgarte