This is the first step towards censorship, and we should not take it.
We have no experience or expertise to determine what content is
suitable for particular users, or how content can be classified as
such. Further, doing so is contrary to the basic principle that we do
not perform original research or draw conclusions on disputed
matters, but present the facts and outside opinions and leave the
implications for readers to decide. This principle has served us
well in dealing with many disputes which in other settings are
intractable.
What we do have expertise and experience in is classifying our content
by subject. We have a complex system of categories, actively
maintained, and a system for determining correct titles and other
metadata that reflect the content of the article. No user wants to
see all of Wikipedia--they all choose what they see on the basis of
these descriptors, and on the basis of external links to our site,
links that are not under our control. They can choose on various
grounds: by title, by links from another article, or by
inclusion in a category. Anyone who wishes to use this information to
provide a selected version of WP can freely do so.
To a certain extent, we also have visible metadata about the format
of our material: the main ones easily presented to visitors
are the language, the size, and the type of computer file. There is
other material that we could display, such as whether an article
contains other files of particular types (in this context, images), or
references, or external links. We could display a separate list of
the images in an article, including their descriptions.
We could include this in our search criteria. It would be useful for
many purposes; someone might for example wish to see all articles on
southeast Asia that contain maps, or wish to see articles about people
only if they contain photographs of the subjects. This is broadly
useful information that can be used in many ways. It could easily be
used to design an external filter that would, for example, display
articles on people that contain photographs with the descriptors in
place of the photographs, while displaying photographs in all other
articles. The question is whether we should design such filters as
part of the project.
I think we should not take that step. We should leave it to outside
services, which might for example work by viewing WP through a site
that contains the desired filters, or by using a browser that
incorporates them.
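To make the division of labour concrete: an outside service could build
the descriptor-for-photograph filter described above from nothing but the
served HTML. A minimal sketch in Python follows; the class and function
names are my own invention, not an existing tool, and a real service would
also have to fetch pages and category membership from the site.

```python
# Sketch of an external filter that replaces images in biography articles
# with their textual descriptors, while leaving all other articles
# untouched. Hypothetical code, not part of any existing project.
from html.parser import HTMLParser

class ImageDescriptorFilter(HTMLParser):
    """Re-emit HTML, substituting each <img> with its alt-text descriptor."""

    def __init__(self):
        super().__init__()
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            # Keep the descriptor (alt text) in place of the picture.
            self.out.append("[image: %s]" % dict(attrs).get("alt", ""))
        else:
            self.out.append(self.get_starttag_text())

    def handle_endtag(self, tag):
        self.out.append("</%s>" % tag)

    def handle_data(self, data):
        self.out.append(data)

def filter_article(html, is_biography):
    """Apply the image filter only to articles about people."""
    if not is_biography:
        return html
    parser = ImageDescriptorFilter()
    parser.feed(html)
    return "".join(parser.out)
```

Such a function could sit in a filtering proxy site or a browser
extension, exactly the kind of outside service suggested above, without
any cooperation from the project itself.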
David Goodman, Ph.D., M.L.S.
On Sun, May 9, 2010 at 3:17 PM, Sydney Poore <sydney.poore(a)gmail.com> wrote:
On Sun, May 9, 2010 at 2:34 PM, Gregory Maxwell <gmaxwell(a)gmail.com> wrote:
On Sun, May 9, 2010 at 9:24 AM, Derk-Jan Hartman <d.j.hartman(a)gmail.com> wrote:
This message is CC'ed to other people who might wish to comment on this potential approach
---
Dear reader at FOSI,
As a member of the Wikipedia community and the community that develops the software on which Wikipedia runs, I come to you with a few questions.
Over the past years Wikipedia has become more and more popular and omnipresent. This has led to enormous
I am strongly in favour of allowing our users to choose what they see.
"If you don't like it, don't look at it" is only useful advice when
it's easy to avoid looking at things— and it isn't always on our
sites. By marking up our content better and providing the right
software tools we could _increase_ choice for our users and that can
only be a good thing.
I agree, and I'm in favor of WMF allocating resources to develop a
system that allows users to filter content based on the particular needs of
their setting.
At the same time, and I think we'll hear a similar message from the
EFF and the ALA, I am opposed to these organized "content labelling
systems". These systems are primarily censorship systems and are
overwhelmingly used to subject third parties, often adults, to
restrictions against their will. I'm sure these groups will gladly
confirm this for us, regardless of the sales patter used to sell these
systems to content providers and politicians.
(For more information on the current state of compulsory filtering in
the US, I recommend the filing in Bradburn v. North Central Regional
Library District, an ongoing legal battle over a library system
refusing to allow adult patrons to bypass the censorware in order to
access constitutionally protected speech, in apparent violation of the
suggestion by the US Supreme Court that the ability to bypass these
filters is what made the filters lawful in the first place:
http://docs.justia.com/cases/federal/district-courts/washington/waedce/2:20…
)
It's arguable whether we should fight against the censorship of factual
information to adults or merely play no role in it— but it isn't
really acceptable to assist it.
And even when not used as a method of third-party control, these
systems require the users to have special software installed— so they
aren't all that useful as a method for our users to self-determine
what they will see on the site. So it sounds like a lose-lose
proposition to me.
Labelling systems are also centred around broad classifications, e.g.
"Drugs", "Pornography" with definitions which defy NPOV. This will
obviously lead to endless arguments on applicability within the site.
Many places exempt Wikipedia from their filtering; after all, it's all
educational. So it would be a step backwards for these people for us
to start applying labels that they would have gladly gone without.
They filter the "drugs" category because they want to filter pro-drug
advocacy, but if we follow the criteria we may end up with our factual
articles bunched into the same bin. A labelling system designed for
the full spectrum of internet content simply will not have enough
words for our content... or are there really separate labels for "Drug
_education_", "Hate speech _education_", "Pornography
_education_",
etc. ?
Urban legend says the Eskimos have 100 words for snow; it's not
true... but I think that it is true that for the Wiki(p|m)edia
projects we really do need 10 million words for education.
Using a third-party labelling system, we can also expect issues that
would arise where we fail to "correctly" apply the labels, whether due
to vandalism, limitations of the community process, or simply because
of a genuine and well-founded difference of opinion.
Instead I prefer that we run our own labelling system. By controlling
it ourselves we determine its meaning— avoiding terminology disputes
with outsiders; we can operate the system in a manner which
inhibits its usefulness for the involuntary censorship of adults (e.g.
not actually putting the label data in the pages users view in an
accessible way, creating a site TOS which makes the involuntary
application of our filters on adults unlawful), and maximizes its
usefulness for user self-determination by making the controls
available right on the site.
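By way of illustration, the opt-in principle could look like the sketch
below: label data lives in a server-side store, never in the page HTML,
and is consulted only for users who have enabled a matching filter. All
names and label strings here are hypothetical, not a proposed schema.

```python
# Hypothetical sketch: labels are stored server-side, keyed by page title,
# and never emitted into the page markup, so third-party censorware has
# nothing to scrape. Only an opted-in user's own preferences trigger them.
ARTICLE_LABELS = {
    "Example article": {"medical-imagery"},
}

def images_collapsed(page_title, user_prefs):
    """Return True only when the viewing user has opted in to a filter
    matching one of the page's labels. Anonymous or opted-out users
    (user_prefs empty or None) always see the page unmodified."""
    if not user_prefs:
        return False
    return bool(ARTICLE_LABELS.get(page_title, set()) & user_prefs)
```

The point of the design is the default: with no preferences set, the
check short-circuits and nothing is ever filtered, which is what keeps
such a system useful for self-determination rather than third-party
control.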
The Wikimedia sites have enough traffic that it's worth people's time to
customize their own preferences.
There are many technical ways in which such a system could be
constructed, some requiring more development work than others, and
while I'd love to blather on about possible methods, the important point
at this time is to establish the principles before we worry about the
tools.
I agree and prefer a system designed for the special needs of WMF wikis and
our global community. We may take some design elements and underlying
concepts from existing systems, but our needs are somewhat unique.
The main objective is not to child-proof our sites. We need to recognize
that the people using the system may be adults who choose to put a filtering
system in place to make it possible to edit from settings where some types of
content are inappropriate or disallowed. This makes WMF projects more
accessible to people, which moves us toward our overall mission.
Sydney Poore
(FloNight)
Cheers,
_______________________________________________
foundation-l mailing list
foundation-l(a)lists.wikimedia.org
Unsubscribe:
https://lists.wikimedia.org/mailman/listinfo/foundation-l