Tom Haws wrote:
I think that if Wikipedia could provide reliable, selectable content, humanitarian organizations like Care For Life could creatively solve the problem of distribution.
We don't want forks. But our READERS often want a subset of our material. We established our web sites and our community to provide reliable FREE material for everyone in the world. (Remember, the choice of Wiki software was only a means to an end; a means that has served us amazingly well, but only a means: not the end itself.)
Our web sites will ALWAYS contain the full set of articles. So the articles on diverse sexual practices (copiously illustrated) will forever be safe from censorship. But not all our readers want to see the full set. They'll settle for the 99.9% that helps them fulfill their goals.
Poor schools in Africa: MUST they agree to accept articles about (largely Western) notions like [[autofellatio]] (with helpful images showing you how to try this yourself)?
Did we create Wikipedia, only on the CONDITION that everyone who uses it read (or at least receive) every article we write?
There's a web site that copies all our geographical articles. Call it "censorship" or "editorial decisions", if those terms motivate you, but from Uncle Ed's POV they are merely CHOOSING what is important to them and disregarding the rest.
I'm sure somebody will decide to choose ONLY the sexological articles for a free Encyclopedia of Sex. I bet NOBODY at Wikipedia will accuse them of "censorship" for leaving out global warming or tsunami relief. They wouldn't think of it; they'd be gushing about intellectual freedom and so on.
THEREFORE, I propose that we adopt as quickly as possible a system that lets readers and/or publishers easily identify articles that they want to include or exclude. I know Magnus has done a lot of work on this, and I hope he will continue.
Wouldn't it be nice to have a category page that lists "all sex-related topics" or "all articles chosen for WikiDVD 1.0" or "articles selected by the GeoWorld project"?
Imagine a poor country with a non-democratic government. They want to educate their people, but they are unwilling to tolerate some small number of ideas. Should we make it DIFFICULT or EASY for them to select a subset of articles for an Encyclopedia of Lessitania?
Ed Poor, aka Uncle Ed
Poor, Edmund W a écrit:
Should we make it DIFFICULT or EASY for them to select a subset of articles for an Encyclopedia of Lessitania?
Nod.
Fully agree with you Ed.
It all depends on who we consider our audience to be. Our editorial rules should differ depending on whether our primary audience is the reader... or a distributor...
If we aim directly at our readers, our editorial policies should be more stringent and better adapted to local specificities. If the audience is a redistributor who will publish only a selection of our content, we could provide them with the means to identify more quickly the content they are looking for.
ant
David Gerard a écrit:
Anthere (anthere9@yahoo.com) [050407 00:28]:
If we aim directly at our readers, our editorial policies should be more stringent and better adapted to local specificities.
How would you do this for en:, for example? What adaptations do you have in mind?
- d.
Filter systems, so that everything is there by default and readers can choose in their preferences what they would prefer to exclude, or at least to present differently (such as an inline link rather than a fully displayed picture);
and improvement of the category system, to give a better classification scheme, allow complex queries, etc.
These are two examples that come immediately to mind. Both are linked, and both require technical features as well as policy changes.
But put simply: if the database dump can already be made to exclude fair-use images, it could equally be made to include all content except material tagged with a special category, so that the result could be used in schools. Of course, some would say, "kids should see everything" or "this content is not problematic".
So, we do not do it, and keep it all.
And Wikipedia will essentially stay out of schools. This is one choice.
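Anthere's dump idea can be sketched concretely. Assuming each page in a dump export carries its category names (the data layout and the category names below are invented purely for illustration), producing a school-usable subset is a single filter pass:

```python
# Sketch: produce a subset of a dump by dropping pages that belong to
# an agreed "excluded" category. All names here are hypothetical.

EXCLUDED_CATEGORIES = {"Explicit images", "Fair use images"}

def filter_dump(pages, excluded=EXCLUDED_CATEGORIES):
    """Keep only pages that carry none of the excluded category tags."""
    return [page for page in pages
            if not (set(page["categories"]) & excluded)]

dump = [
    {"title": "Tsunami", "categories": ["Natural disasters"]},
    {"title": "Autofellatio", "categories": ["Sexology", "Explicit images"]},
    {"title": "Global warming", "categories": ["Climate"]},
]

safe = filter_dump(dump)
```

The point of the sketch is that the selection mechanism is trivial once the tagging exists; the hard part, as the thread goes on to discuss, is agreeing on the tags.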
Anthere a écrit:
And Wikipedia will essentially stay out of schools. This is one choice.
I suggested a 'mature content' notice previously in the chat room, but it was dismissed by those in there. Basically, it would be possible in preferences to opt to be warned before viewing 'mature content', or to block it altogether (with a password so it can't be unblocked in schools). Then a special category would be made for such mature images and articles. This would also finally settle the never-ending Autofellatio dispute.
I think that, with that level of opt-in censorship available, schools would use Wikipedia. That said, an article verification system would also help that case, although it would be hard to implement.
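The opt-in scheme described above — show by default, warn, or block behind a password — could be modelled roughly like this (a sketch with invented names; none of this is an existing MediaWiki preference):

```python
# Decide how to render a page for a reader, given the reader's preference
# and whether the page sits in a hypothetical "mature content" category.

def render_mode(page_is_mature, preference, password_ok=False):
    """preference is one of 'show' (the default), 'warn', or 'block'."""
    if not page_is_mature or preference == "show":
        return "show"
    if preference == "block":
        # A school sets 'block' with a password, so readers can't undo it;
        # supplying the password unlocks the page.
        return "show" if password_ok else "blocked"
    return "warn"
```

Note that the default is to show everything: filtering only happens when the reader (or their institution) has actively opted in.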
- D. Hedley
David 'DJ' Hedley wrote:
I suggested a 'mature content' notice previously in the chat room, but it was dismissed by those in there.
Subjective labels like "mature" are problematic. Are you familiar with the ICRA content labelling guidelines? They are straightforward, objective labels aimed at allowing providers to select.
ICRA's slogan is "Choice, not censorship"
http://www.icra.org/vocabulary/
Here is an example. I think the Context bit is too subjective, but we could simply implement ICRA tags:
*Nudity and sexual material* [Help: http://www.icra.org/vocabulary/#hn]
* Erections and female genitals in detail
* Male genitals
* Female genitals
* Female breasts
* Bare buttocks
* Explicit sexual acts
* Obscured or implied sexual acts
* Visible sexual touching
* Passionate kissing
* None of the above

*Context: this material...* [Help: http://www.icra.org/vocabulary/#hcontext]
* appears in an artistic context and is suitable for young children
* appears in an educational context and is suitable for young children
* appears in a medical context and is suitable for young children
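If Wikipedia carried labels like these in article text, one conceivable vehicle would be a wikitext template that downstream tools parse. The template name and parameters below are purely hypothetical — no such template exists — and the label values are drawn from the ICRA list above:

```python
import re

# Hypothetical in-article label template, e.g.:
#   {{ContentLabel|nudity=female breasts|context=medical}}
# A redistributor's selection tool could extract it like this:

LABEL_RE = re.compile(r"\{\{ContentLabel\|([^}]*)\}\}")

def read_labels(wikitext):
    """Return the parameters of the first ContentLabel template, if any."""
    m = LABEL_RE.search(wikitext)
    if not m:
        return {}
    pairs = (part.split("=", 1) for part in m.group(1).split("|"))
    return {k.strip(): v.strip() for k, v in pairs}

article = ("'''Breastfeeding''' is ... "
           "{{ContentLabel|nudity=female breasts|context=medical}}")
```

An unlabelled article simply yields no labels, which matters for the versioning question raised later in the thread.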
Tom Haws a écrit:
Subjective labels like "mature" are problematic. Are you familiar with the ICRA content labelling guidelines? [...]
Ya. http://meta.wikimedia.org/wiki/Offensive_content
This is of course if we want to focus on primary audience.
Tom Haws wrote:
Subjective labels like "mature" are problematic. Are you familiar with the ICRA content labelling guidelines? [...]
[cut example]
I have a feeling that the ICRA vocabulary may be copyrighted by ICRA, so we would not be able to use it without their permission. However, if a similar label scheme were to be adopted, it must be copyright free, non-subjective, and capable of being used in an NPOV manner without endless labelling and re-labelling revert wars. Even the ICRA labels (which represent a good attempt at a reasoned scheme) still have some subjective aspects to them.
-- Neil
Neil Harris wrote:
I have a feeling that the ICRA vocabulary may be copyrighted by ICRA, so we would not be able to use it without their permission. However, if a similar label scheme were to be adopted, it must be copyright free, non-subjective, and capable of being used in an NPOV manner without endless labelling and re-labelling revert wars. Even the ICRA labels (which represent a good attempt at a reasoned scheme) still have some subjective aspects to them.
First, some nitpicking. Almost everything on Wikipedia is copyrighted; it's just licensed for redistribution, with a few restrictions. Your post above, which I've quoted, is copyrighted, but by posting to a mailing list you've implicitly licensed it for the usual copying and distribution you'd expect. I think the chances of finding a copyright-free labelling scheme are fairly slim, and I'm not sure that's a useful goal anyway. The ICRA vocabulary is not licensed for copying and redistribution, although presumably a rewrite of the vocabulary would be a distinct new work, and could be incorporated into Wikipedia. After all, information is not copyrightable (in most countries), just creative expression.
ICRA labels have a terms of use requiring the label to be accurate at all times. ICRA may take legal action against sites that mislabel content, or they may just add them to a blacklist which is used by some content filters. This allows ICRA to assure the people who use the labels that they are accurate. This is not particularly compatible with wikis, although we can always bend the rules and wait to see if they complain.
ICRA labels have the advantage that they will be used by existing software, there will be no need to distribute our own filtering software to schools or to manage IP-specific content filters on the server side. I'm not sure if they're appropriate for Wikipedia, but it's certainly something to discuss.
-- Tim Starling
On Thursday, April 07, 2005 6:27 AM, Tim Starling t.starling@physics.unimelb.edu.au wrote:
[Re. ICRA content warning labels for Wikipedia]
I'm not sure if they're appropriate for Wikipedia, but it's certainly something to discuss.
Well, the insurmountable problem, AFAICS, is that people (and, certainly, software) treat content warning labels as absolutely accurate, all of the time. Though we could potentially label almost all of the content correctly, it's that last 1% that would be blown up out of all proportion and would most likely get us actually blocked by ICRA. How do you stop someone on a wiki from adding "drat" to a page without also flagging it? Automatically? Then what about "d<span></span>rat", or "dr<span style="color:inherit;">a</span>t", or ...? And, certainly, it would require pre-vetting of all images before they could be added. This would quite possibly be useful to many, but would be entirely impossible to work into the way a wiki, well, works.
ICRA labels (or, at least, an NPOV form of them) could quite possibly be a good idea on the static site, when we launch it, because all articles would by the very nature of the static form of Wikipedia be pre-vetted and checked. But, until we're a lot further along that path, ISTM a little incongruous to discuss this.
I have, of course, ignored the philosophical and moral parts of the argument, but the 'real-world' problems with content labelling trump this, I feel.
Yours,
Tim Starling wrote:
ICRA labels have the advantage that they will be used by existing software, there will be no need to distribute our own filtering software to schools or to manage IP-specific content filters on the server side. I'm not sure if they're appropriate for Wikipedia, but it's certainly something to discuss.
If we use the ICRA labels,
1. I would prefer to use only the objective ones (killings of animals, male genitalia, etc.). The subjective ones (artistic context, child-harming) are argument magnets. I think the ICRA standard allows for this.
2. We would need to label article *versions*. This means that if an anon comes along and edits a page or changes an image, the labels revert to non-labelled (no ICRA meta tag) until replaced. And this is the same way our validation/quality labels need to work. We can't rate the quality or label the content of a fluid article.
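A minimal sketch of that revision-bound behaviour (all names invented): a label is stored against a specific revision ID, so any newer revision is simply unlabelled until someone re-labels it.

```python
# Labels attach to (page, revision) pairs, not to pages: an edit produces
# a new revision ID, so the labels effectively fall off until reapplied.

labels = {}  # (page_title, rev_id) -> set of label strings

def set_labels(page, rev_id, tags):
    """Record content labels for one specific revision of a page."""
    labels[(page, rev_id)] = set(tags)

def labels_for(page, current_rev_id):
    """Return labels only if the *current* revision was the one labelled;
    None means the current revision is unlabelled."""
    return labels.get((page, current_rev_id), None)

set_labels("Autofellatio", rev_id=1001, tags={"male genitals"})
```

This is the same mechanism article validation would need: a rating or label is a statement about one fixed version, never about a fluid page.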
Tom Haws
On Thu, Apr 07, 2005 at 03:27:22PM +1000, Tim Starling wrote:
ICRA labels have a terms of use requiring the label to be accurate at all times. ICRA may take legal action against sites that mislabel content, or they may just add them to a blacklist which is used by some content filters. This allows ICRA to assure the people who use the labels that they are accurate. This is not particularly compatible with wikis, although we can always bend the rules and wait to see if they complain.
This sounds like a prime reason to forbid (or strongly discourage) Wikipedia editors from placing ICRA labels on any Wikipedia pages. No editor should be taking actions that gratuitously expose the project to legal threats with so very little benefit for the project's goals.
On Thu, Apr 07, 2005 at 12:21:58PM -0700, Tom Haws wrote:
Karl A. Krueger wrote:
so very little benefit for the project's goals.
"Imagine a world in which every single person on the planet is given free access to the sum of all human knowledge. That's what we're doing."
The result of applying censorship tags would be to prejudice "free access to the sum of all human knowledge" rather than to promote it. Moreover, since any such tagging system has relatively transparent POV assumptions, it would be a rejection of Wikipedia's principles. Since editors cannot be expected to agree on the application of tags, any such tagging system would also be a source of permanent, irreconcilable POV conflicts.
Thus, no benefit to the project's goals, and a great detriment. Yet even despite this, if there were reason to pursue such a system, it would still be highly dangerous, as follows:
The particular brand of censorship tags under discussion were described as being usable only under license terms wherein Wikipedia could be penalized for incorrect application of the tags.
What constitutes correct or incorrect application of specific censorship categories is, naturally, a POV matter. For instance, I believe one of the mentioned categories had to do with "passionate kissing". Exactly what constitutes "passionate kissing" in, say, stills from a movie could keep movie critics up all night.
So the only legally safe strategy would be to adopt the prejudices of the putative owner of the tagging system in question, so as to ensure that the owner would not wish to sue Wikipedia. This would be a material rejection of Wikipedia's existing NPOV commitments.
Note, I have no objection to a (legally-operated [*]) Wikipedia fork or derivative work adding censorship tags to the material. Such a project could absorb the legal risk of doing so, possibly by taking out liability insurance paid for by persons interested in the project. It would, naturally, need to avoid making the NPOV commitment that Wikipedia does, so as not to defraud its readers and contributors with the pretext of neutrality.
[*] As far as I can tell, most existing Wikipedia forks and derivative works are operated illegally, in violation of Wikipedia editors' copyrights and license terms under the GFDL. Most seem to be operated as ways of hijacking search terms.
Poor, Edmund W wrote:
Imagine a poor country with a non-democratic government. They want to educate their people, but they are unwilling to tolerate some small number of ideas. Should we make it DIFFICULT or EASY for them to select a subset of articles for an Encyclopedia of Lessitania?
The GFDL makes it perfectly legal for anyone to do whatever they want with all or part of the articles (besides take them proprietary). So the government of Lessitania is free to distribute Wikipedia with all the articles that speak critically of their leaders removed or sanitized. That doesn't mean that the Wikimedia Foundation ought to set up a special version of the encyclopedia where we ourselves cull and censor the articles to cater to the whims of the government of Lessitania. Indeed, I would oppose hosting such a project on our servers, since "satisfying the political concerns of the government of Lessitania" isn't part of our mission.
-Mark
Delirium wrote:
Indeed, I would oppose hosting such a project on our servers, since "satisfying the political concerns of the government of Lessitania" isn't part of our mission.
Pragmatically speaking, even if we do "aim to please" by providing reliable, selectable content, we need to focus our resources on providing the most globally useful tags. Thus, even without resorting to philosophical arguments (mission), it is obvious to us that the special needs of "Lessitania" are beyond the scope of our resources.
In other words, we aren't willing to mobilize the global Wikipedia community to address the special selectability needs of "Lessitania". But we might be convinced to mobilize for the most generally pressing selectability needs.
Tom Haws