*That is why it was addressed to FOSI and cc'ed to
some parties that might
have a clue about such systems. The copy to foundation-l was a courtesy
message. You are welcome to discuss censorship and your opinion about it,
but I would appreciate it even more if people actually talked about rating
systems.*
Very well, let's see if I can write up something more on the point then:
*Definition and purpose*: The purpose of such a system would be to allow
certain content to be filtered from public view. The scope of such a project
is open to discussion, and depends upon the goals we wish to reach.
*Rating System*: In order to decide a content's category and offensiveness
there has to be a method to sort the images. Multiple options are available:
*Categorization:* Categories could be added to images to establish the
subject of an image. For example, one image might be categorized as nudity,
another as sexual intercourse, and so on. The categorization could be
similar to the way we categorize our stub templates - we could create a
top-level filter for "Nudity" and create more specific categories under
that. That way it is possible to fine-tune the content one might not wish
to see.
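A rough sketch of how such a category-tree lookup might work. The category names and tree shape below are made-up examples for illustration, not existing Commons categories:

```python
# Hypothetical category tree: each specific category points at its parent,
# and the top-level filter categories are the roots.
CATEGORY_PARENTS = {
    "Nudity/Artistic": "Nudity",
    "Nudity/Sexual intercourse": "Nudity",
    "Violence/Graphic": "Violence",
}

def top_level(category):
    """Walk up the tree until we reach a top-level filter category."""
    while category in CATEGORY_PARENTS:
        category = CATEGORY_PARENTS[category]
    return category

def is_filtered(image_categories, blocked_top_levels):
    """True if any of the image's categories falls under a blocked filter."""
    return any(top_level(c) in blocked_top_levels for c in image_categories)
```

With this shape, blocking the top-level "Nudity" filter hides every image in any subcategory, while a user who only blocks a specific subcategory still sees the rest.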
*Rating:* Another method is rating each image. Instead of using a category
tree we might use a system that allows users to set a level of explicitness
or severity for each image. An image which shows non-sexual nudity would be
rated lower than an image which shows a higher level of nudity. Note that
such a system would require a clear set of rules, as a rating might be
subject to one's personal ideas and feelings towards a certain subject.
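To illustrate the severity idea, a minimal sketch; the numeric scale and the example ratings are assumptions, not an agreed standard:

```python
# Assumed scale: 0 (unrated) up to 4 (most explicit). The file names and
# ratings here are invented examples.
RATINGS = {
    "File:Statue.jpg": 1,    # non-sexual nudity: low severity
    "File:Explicit.jpg": 4,  # explicit content: high severity
}

def visible(image, max_severity):
    """Show the image only if its rating does not exceed the viewer's limit."""
    return RATINGS.get(image, 0) <= max_severity
```

A viewer with a limit of 2 would see the first image but not the second, which is exactly the fine-tuning a single category flag cannot express.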
*Control mechanism*: There are various levels at which we can filter content:
*Organization-wide:* An organization-wide filter would allow an organization
to block content based upon site-wide settings. Technically this would
likely prove to be the more difficult option to implement, as it would
require both local and external changes. There are multiple methods to
execute this, though. For example, a server may relay a certain value to
Wikipedia at the start of each session detailing the content that should
not be forwarded over this connection. Based on such a value the server
could be programmed in such a way that images of a certain category won't
be forwarded, or would be replaced by placeholders.
The advantage of this method is that it allows organizations such as schools
to control which content should be shown, therefore possibly avoiding
complete blocks of Wikipedia. The downside is that it takes away control
from the user.
*Per-user:* A second method is allowing per-user settings. Such a system
would be easier to build and integrate, as it only requires changes on
Wikipedia's side. A separate section could be made under "My preferences"
which would include a set of check boxes where a user could select which
content he or she prefers not to see. Images falling under a certain
category could be replaced with the image's alt text, or with an image
stating something akin to "Per your preferences, this image was removed".
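The per-user path might look something like this; the preference key and the rendering details are illustrative assumptions, not an existing MediaWiki interface:

```python
# Sketch: user_prefs stands in for whatever the "My preferences" check
# boxes would store; the key name "hidden_categories" is made up.
def render_image(image_name, image_categories, alt_text, user_prefs):
    """Replace the image with its alt text when it falls under a category
    the user has chosen not to see."""
    if set(image_categories) & user_prefs["hidden_categories"]:
        return alt_text or "Per your preferences, this image was removed"
    return f'<img src="{image_name}" alt="{alt_text}">'
```

Because everything happens at render time on Wikipedia's side, no cooperation from external proxies or filters is needed.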
*Hybrid*: A hybrid system could integrate both approaches. A user might
override or extend organization-level settings if he or she has personal
preferences.
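The merge rule for such a hybrid could be sketched as follows; whether users may override the organization's blocks is a policy choice, so that is left as an explicit flag here:

```python
# Sketch of one possible merge policy, not a settled design: user settings
# always extend the organization's, and may loosen them only if allowed.
def effective_filter(org_blocked, user_blocked, user_allowed, allow_override):
    """Combine organization-wide and per-user settings into one filter set."""
    blocked = set(org_blocked) | set(user_blocked)
    if allow_override:
        blocked -= set(user_allowed)  # user may re-enable blocked content
    return blocked
```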
*Possible concerns*
*Responsibility and vandalism:* One risk with rating systems is that they
might be abused for personal goals, akin to article vandalism. Therefore
there should be some limit on who can rate an image - anonymous rating
could change images in such a way that they become visible or invisible to
people who might or might not want this.
*Volunteer interest:* Implementing such a system would likely require a lot
of volunteer activity. Not only does every new image have to be checked and
rated, we would also have a backlog of over 6 million images to rate.
Therefore we would need sufficient volunteers who are interested in such a
system.
*Public interest*: Plain and simple: Will people actually use this system?
Will people be content with their ability to filter, or will they still try
to remove images they deem offensive? Also: How many editors would use this
system?
*Implementation area*: Commons only? Local and commons?
That is all I can think of for now. I hope it is somewhat more constructive
towards the point you were initially trying to relay :)
~Excirial
On Sun, May 9, 2010 at 11:26 PM, Derk-Jan Hartman <d.j.hartman(a)gmail.com> wrote:
This message was an attempt to gain information and spur discussion about
the system in general, its limits and effectiveness, not whether or not we
should actually do it. I was trying to gather more information so that we
can have an informed debate if it ever came to discussing the possibility
of using ratings.
That is why it was addressed to FOSI and cc'ed to some parties that might
have a clue about such systems. The copy to foundation-l was a courtesy
message. You are welcome to discuss censorship and your opinion about it,
but I would appreciate it even more if people actually talked about rating
systems.
DJ
On 9 mei 2010, at 15:24, Derk-Jan Hartman wrote:
This message is CC'ed to other people who
might wish to comment on this
potential approach
---
Dear reader at FOSI,
As a member of the Wikipedia community and the community that develops
the
software on which Wikipedia runs, I come to you with a few questions.
Over the past years Wikipedia has become more and more popular and
omnipresent. This has led to enormous problems, because for the first time,
a largely uncensored system has to work within the boundaries of a world
that is largely censored. For libraries and schools this means that they
want to provide Wikipedia and its related projects to their readers, but
are presented with information that some people might consider not
"child-safe". They have several options in that case: either blocking
completely, or using context-aware filtering software that may make
mistakes; mistakes that can cost some of these institutions their funding.
Similar problems are starting to present themselves in countries around the
world: differing views about sexuality between northern and southern
Europe, for instance. Add to that the censoring of images of Muhammad,
Tiananmen Square, the Nazi swastika, and a host of other problems. Recently
there has been concern that all this all-out censoring of content by
parties around the world is damaging the educational mission of the
Wikipedia-related projects, because so many people are not able to access
large portions of our content due to a small (think 0.01%) part of our
other content.
This has led some people to infer that perhaps it is time to rate the
content of Wikipedia ourselves, in order to facilitate external censoring
of material, hopefully making the rest of our content more accessible.
According to statements around the web, ICRA ratings are probably the
rating most widely supported by filtering systems. Thus we were thinking of
adding autogenerated ICRA RDF tags to each individual page, describing the
rating of the page and the images contained within it. I have a few
questions however, both general and technical.
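The autogeneration step might be sketched roughly as below. The namespace URI and property names are placeholders only; a real implementation would use the actual ICRA vocabulary from their specification:

```python
# Sketch of emitting a per-page RDF/XML label. "ex:" and its namespace are
# stand-ins, NOT the real ICRA vocabulary.
def page_label_rdf(page_url, ratings):
    """Emit a minimal RDF/XML description of one page's content ratings."""
    props = "\n".join(
        f"    <ex:{name}>{value}</ex:{name}>" for name, value in ratings.items()
    )
    return (
        '<rdf:RDF xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"\n'
        '         xmlns:ex="http://example.org/rating-vocabulary#">\n'
        f'  <rdf:Description rdf:about="{page_url}">\n'
        f"{props}\n"
        "  </rdf:Description>\n"
        "</rdf:RDF>"
    )
```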
1: If I am correctly informed, Wikipedia would be the first website of this
size to label its content with ratings. Is this correct?
2: How many content filters understand these RDF tags?
3: How many of those understand multiple labels and path-specific labeling?
This means: if we rate the path of images included on the page differently
from the page itself, do filters block the entire content, or just the
images? (Consider the Virgin Killer album cover on the Virgin Killer
article, if you are aware of that controversial image:
http://en.wikipedia.org/wiki/Virgin_Killer)
4: Do filters understand per-page labeling? Or do they cache the first RDF
file they encounter on a website and use that for all other pages of the
website?
5: Is there any chance that the vocabulary of ICRA can be expanded with new
ratings for issues sensitive to the non-Western world?
6: Is there a possibility of creating a separate "namespace" that we could
potentially use for our own labels?
I hope that you can help me answer these questions, so that we may continue
our community debate with more informed viewpoints about the possibilities
of content rating. If you have additional suggestions for systems or
problems that this web property should account for, I would more than
welcome those suggestions as well.
Derk-Jan Hartman
_______________________________________________
foundation-l mailing list
foundation-l(a)lists.wikimedia.org
Unsubscribe:
https://lists.wikimedia.org/mailman/listinfo/foundation-l