Fred Bauder writes:
I think it probably seems to climate change deniers that excluding political opinions from science-based articles on global warming is a violation of neutral point of view, and of basic fairness. That is just one example, but there are other similar situations.
This analogy is breathtakingly unpersuasive. Apart from the fact that consensus about scientific theory is not analogous to consensus about the historical records of particular events, climate-change-denial theory is actually discussed quite thoroughly on Wikipedia. Plus, the author of the op-ed in The Chronicle of Higher Education doesn't seem at all like a climate-change denier.
If there is something specific you want to suggest about the author -- that he's agenda-driven, that his work is unreliable, or that the journal in which he published the article is not a reliable source -- then I think equity requires that you declare why you doubt or dismiss his article.
I read the article in the Chronicle pretty carefully. The author's experience struck me as an example of a pattern that may account for the flattening of the growth curve in new editors as well as for some other phenomena. As you may remember, Andrew Lih gave a presentation on "the policy thicket" at Wikimania almost five years ago. The wielding of policy by long-term editors, plus the rewriting of the policy so that it is used to undercut NPOV rather than preserve it, strikes me as worth talking about. Dismissing it out of hand, or analogizing it to climate-change denial, undercuts my trust in the Wikipedian process rather than reinforces it.
--Mike
We're talking past one another. It is obvious to me that the author of the Chronicle article should have been able to add his research without difficulty, at least after it was published.
We have material about climate change denial, but do not give political viewpoints the status we give scientific opinion in articles on the science, nor should we. What we would be looking for, and will not be able to find, is substantial work showing that climate warming does not result from an increase in greenhouse gases and other products of human activity. We can't simply say, "According to Rick Santorum, there is no scientific basis...."
Yes, please, let's discuss.
Fred
On 02/19/12 7:31 PM, Fred Bauder wrote:
If we're ever going to get past these problems of Wiki epistemology, it won't be done by starting with such a heavily argued contemporary problem as climate change. It has too many active vested interests, and too many people accept political statements as fact. NPOV started off as a great concept, but sometimes when we try to explain it we end up expanding it beyond recognition. Reliable sources are fine, but deciding on the reliability of a source itself requires a point of view. Calling something original research ends up more a weapon than a valid criticism.
Ray
On Sun, Feb 19, 2012 at 5:48 PM, Mike Godwin mnemonic@gmail.com wrote:
Let me make an observation -
The post-facto probability of 1.0 that the researcher was in fact professional, credible, and by all accounts right does not mean that a priori he should automatically have been treated that way before the situation was clarified.
By far the majority of people who come up and "buck the system" or challenge established knowledge in this manner are, in fact, kooks or people with an agenda. This started - as others have pointed out - with a few fields where this is narrowly but clearly established, but has been successfully generalized.
Let us acknowledge some obvious truths here: that we had bad info in an article; that we had a scholar unfamiliar with WP process whose first attempt to correct it went somewhat (but not horrifically) wrong; and that the engagement of a number of WP editors/administrators failed to identify the credibility of the scholar and the wrongness of the info.
To simply toss UNDUE in response seems a mistake. UNDUE is, every day, actively helping us fight off crap trying to fling itself into WP.
Valid questions, to me, seem to include: whether the editors simply failed to notice they were arguing with a subject-matter-expert history professor, and were asking for a shrubbery rather than assisting the guy through the rat's nest of WP policy; whether the editors had any preexisting biases that may have slanted their engagement here; and whether the editors had histories of inappropriate responses to less experienced editors.
I think the answers to the last two are no; I don't know about the first.
If the answer to the first is "yes", then "these things happen" is an explanation but not an excuse, and should be a prompt to help us all get better at detecting that. These things do happen, but they should not, and we should expect better on average.
On Tue, Feb 21, 2012 at 6:35 PM, George Herbert george.herbert@gmail.com wrote:
Apart from the question of whether this particular article -- on the Haymarket bombing -- has been hurt by editors' ill-considered application of UNDUE, there's the larger question of what it means for our credibility when a very respected journal, The Chronicle of Higher Education, features an op-ed that outlines, in very convincing detail, what happens when a subject-matter expert attempts to play by the rules and is still slapped down. If I thought this author's experience were rare, I wouldn't be troubled by it. But as someone who frequently fielded complaints from folks who were not tendentious kooks, my impression is that it is not rare, and that the language of UNDUE -- as it exists today -- ends up being leveraged in a way that hurts Wikipedia both informationally and reputationally.
--Mike
On Tue, Feb 21, 2012 at 6:48 PM, Mike Godwin mnemonic@gmail.com wrote:
Any policy - or policy change - we can think of will have unforeseen consequences. It will be interpreted, somewhere between partly and largely, on the fly, often alone, by editors who are tired or not paying 100% attention when they apply it. Some of the applying editors will lack long-term Wikipedia history and knowledge to draw on, or insight into the policy implications, etc. Some will have personal agendas or biases.
I am not you; I have neither worked for the Foundation nor been quite as intimately involved in the higher-level "public policy" around internet information and academia as you have for the past 20-plus years. That said, I have something of a grounding in these issues and am comfortable calling for help or wider attention if I reach the edge of my comfort zone on individual issues; I've been on OTRS (and technically still am, though I'm inactive at the moment), and have had a number of on- and off-wiki contacts of some sort.
Is it possible that you being Mike Godwin is leading to a selection bias, where a large fraction of the actual experts with actual problems with process who did anything about it came to or through you on their way to solving or reporting the problem?
I believe that we're seeing legitimate experts driven away. Perhaps it's as often as daily. What I know is that I see something (which usually gets resolved constructively, eventually) about once a month, a few of which (annually?) get big press of some sort.
On a roughly daily basis, when I'm active on-wiki, I run into people, ranging from the less qualified to outright kooks, who are attempting to pass themselves off as legitimate experts.
It seems that there is a large surplus of the latter, and only a few of the former, statistically. Assuming that's accurate, it should inform the policy discussion.
On Tue, Feb 21, 2012 at 7:06 PM, George Herbert george.herbert@gmail.com wrote:
Any policy - or policy change - we can think of will have unforseen consequences.
I agree with you. But we can't let this paralyze us in responding to a problem that is no longer "unforeseen," but that has in fact occurred. At minimum, the Haymarket article ought to be edited to accommodate a well-documented minority scholarly analysis -- surely we agree about that.
Is it possible that you being Mike Godwin is leading to a selection bias, where a large fraction of the actual experts with actual problems with process who did anything about it came to or through you on their way to solving or reporting the problem?
It's entirely possible. But it happens with enough frequency for me to be able to articulate a credible hypothesis that this is happening too often. Certainly there's no "selection bias" problem associated with the sheer fact of the Chronicle of Higher Education article itself -- its existence is something that nobody here disputes, regardless of how we interpret it. And I think there is a second hypothesis that is also credible, which is that the Chronicle article very likely hurts Wikipedia reputationally.
It seems that there are a large surplus of the latter, and only a few of the former, statistically. Assuming that's accurate, that should inform the policy discussion.
Certainly.
--Mike
On Tue, Feb 21, 2012 at 9:48 PM, Mike Godwin mnemonic@gmail.com wrote:
Do you have specific ideas either as to what is wrong with the current language, or what it should be changed to say?
Mike
Would any of you consider joining the discussion at
Wikipedia_talk:Neutral_point_of_view#The_.27Undue_Weight.27_of_Truth_on_Wikipedia
I've probably gotten it off to a bad start, and perhaps that is not the place to discuss the policy, but I suspect it is.
Fred
An update: Steven Walling will be with me on NPR's Talk of the Nation, today at 3pm US Eastern time talking about this issue.
In preparation for the show, I looked up Messer-Kruse's book on Amazon, and I am pasting in the first two sentences of the blurb (bold emphasis mine).
In this *controversial* and *groundbreaking* new history, Timothy Messer-Kruse *rewrites* the standard narrative of the most iconic event in American labor history: the Haymarket Bombing and Trial of 1886. Using thousands of pages of previously unexamined materials, Messer-Kruse demonstrates that, *contrary* to longstanding historical opinion, the trial was not the “travesty of justice” it has *commonly* been *depicted* as.
I am sympathetic to Messer-Kruse's plight, but these key words highlight perhaps why this case is the perfect storm of conditions (i.e., an Achilles' heel) for a clash in editing.
The ability of Wikipedia to absorb leading-edge, "groundbreaking new history" research is limited, given the emergent norms and accrual of policy that have primarily served to make sure things are verified as a majority view before they make it into Wikipedia. There are good reasons for this, since every hour of every day Wikipedia is bombarded by vandalism and crackpot contributions.
But I do share Mike Godwin's concerns on what this means for attracting editors and for Wikipedia's public image.
More and more, I'm convinced Wikipedia must focus on embracing a new complementary culture -- an "invitation culture" that Sarah Stierch (of GLAMwiki fame) really brought to my attention at Wikimania Haifa. We have to recognize Wikipedia has a huge monoculture problem when the editor survey says 91% of active editors are male.
Sarah told me that, as a woman, she would never have participated in Wikipedia without someone else inviting her first, and that there are many great folks out there who feel the same way: on the face of it, Wikipedia is not putting up in neon lights that it's soliciting participation, and there are many reasons to describe newbie experiences as "jarring" or even "unwelcoming."
That's basically what GLAM can address and I think it is crucial to Wikipedia's future. It reaches out directly to people who share Wikipedia's mission about education and quality by approaching them as valued members of a knowledge creation community to make Wikipedia participation accessible. Wikipedians in Residence have served as the liaisons to make that introductory experience smooth and empowering. Another area ripe for collaboration is journalism, by finding a way to engage journalists in creating content such as for the Oral Citations project. And, coincidentally, both of these fields have a high percentage of females.
We cannot bank Wikipedia's future solely on the prospective lone contributor toughing it out against the obstacles of complex wikimarkup, a cumbersome talk page/discussion system, demoralizing edit reverts, policy pages gone wild, and if he or she gets that far, a frightening administrator hazing ritual.
For more on the GLAM, see the two page summary produced at GLAMcampDC last week:
http://commons.wikimedia.org/wiki/File:GLAM_One-Pager.pdf
-Andrew
On Wed, Feb 22, 2012 at 10:13 AM, Andrew Lih andrew.lih@gmail.com wrote:
But I do share Mike Godwin's concerns on what this means for attracting editors and for Wikipedia's public image.
This is where I disagree. But we can talk about this later. ;)
Steven
On Wed, Feb 22, 2012 at 10:20 AM, Steven Walling steven.walling@gmail.comwrote:
This is where I disagree. But we can talk about this later. ;)
Awesome! I was afraid we'd be on the show agreeing on everything. :)
I should add a response on this point:
On Tue, Feb 21, 2012 at 6:35 PM, George Herbert george.herbert@gmail.com wrote:
The post-facto probability of 1.0 that the researcher was in fact professional, credible, and by all accounts right does not mean that a priori he should automatically have been treated that way before the situation was clarified.
Should we declare that "Assume Good Faith" is now a dead letter?
By far the majority of people who come up and "buck the system" or challenge established knowledge in this manner are, in fact, kooks or people with an agenda.
To me the interesting thing is that this author did not "buck the system." It seems clear he attempted to learn the system and abide by the system's rules. If someone goes to the trouble he went to, getting an article published in a peer-reviewed journal, then citing it in his editing of the Wikipedia article, what else could he have done, precisely?
If we pass over this and classify it as an anomaly, then I think the very best thing that can be said is that this is a missed opportunity to review UNDUE specifically, and, more generally, the problem of policy ambiguity and complexity as a barrier to entry for new, knowledgeable, good-faith editors.
--Mike
On Tue, Feb 21, 2012 at 7:04 PM, Mike Godwin mnemonic@gmail.com wrote:
I should add a response on this point:
On Tue, Feb 21, 2012 at 6:35 PM, George Herbert george.herbert@gmail.com wrote:
The post-facto probability of 1.0 that the researcher was in fact professional, credible, and by all accounts right does not mean that a priori he should automatically have been treated that way before the situation was clarified.
Should we declare that "Assume Good Faith" is now a dead letter?
No. But in day-to-day operations, AGF has fallen somewhat in prominence for the simple reason that a lot of the time someone brings it up, it's after credible evidence is already in hand of bad faith actions.
AGF is not a suicide pact; we cannot insist that each and every kook or fringeist gets to waste a man-day's worth of senior Wikipedian volunteer time every day that they're active. There simply aren't enough senior volunteers to go around for that. The policy - as implemented, if not as written - has to acknowledge that reasonable provisions for defending the encyclopedia, ones that work and are sustainable over months, years, and heading into decades, are a necessary function of the encyclopedia.
If you unbalance the defense of the encyclopedia while attempting to right another wrong, we all lose.
By far the majority of people who come up and "buck the system" or challenge established knowledge in this manner are, in fact, kooks or people with an agenda.
To me the interesting thing is that this author did not "buck the system." It seems clear he attempted to learn the system and abide by the system's rules. If someone goes to the trouble he went to, getting an article published in a peer-reviewed journal, then citing it in his editing of the Wikipedia article, what else could he have done, precisely?
If we pass over this and classify it as an anomaly, then I think the very best thing that can be said is that this is a missed opportunity to review UNDUE specifically, and, more generally, the problem of policy ambiguity and complexity as a barrier to entry for new, knowledgeable, good-faith editors.
I don't think this is an anomaly, in terms of being rare (I think it happens dozens of times a year at least, perhaps daily-ish) or unusual.
I think it is an anomaly, in the sense that 3,000 senior editors dealt with 10,000 problems that day, and got one (all things considered) slightly horribly wrong.
Again, it's balance. If we just twist the knob the other way, we start to let crap in. Some of the crap let in - such as the Seigenthaler fabrications - is as much of a problem as, or more than, the good edits or fixes kept out.
You can say "Just turn the response quality level up", which is all fine and good, but this is a volunteer organization, run by people editing in their free time (or after work, on breaks, etc.), often tired or working fast. Realistically, either we turn down the knob on the number of problems reviewed, or on the threshold for handling something; either of those lets more crap in.
Again, this is not an excuse for someone having gotten it wrong here. But real life activities accept error rates. Some journalists in war zones step in front of friendly fire bullets; police in the US shoot innocent people at a non-zero rate. Surgeons make mistakes and kill people. Journalists make errors of fact or citation. Scientists make data collection, logical, or other errors.
We need to be aware going into a deeper discussion of what tradeoffs are involved.
That should not lead to paralysis. The discussion is useful and change may be beneficial. The problem you're calling out is real. But it should be informed discussion and change.
On 22 February 2012 03:04, Mike Godwin mnemonic@gmail.com wrote:
On Tue, Feb 21, 2012 at 6:35 PM, George Herbert george.herbert@gmail.com wrote:
The post-facto probability of 1.0 that the researcher was in fact professional, credible, and by all accounts right does not mean that a priori he should automatically have been treated that way before the situation was clarified.
Should we declare that "Assume Good Faith" is now a dead letter?
It's been dead for new editors for a while. New editors are assumed to be a problem, to be processed as quickly as possible with Twinkle or similar in the manner of a processed cheese slice. "Assume good faith" is what the processors then say when the newbie protests at being treated in this manner.
- d.
On Wed, Feb 22, 2012 at 03:35, George Herbert george.herbert@gmail.com wrote:
By far the majority of people who come up and "buck the system" or challenge established knowledge in this manner are, in fact, kooks or people with an agenda. This started - as others have pointed out - with a few fields where this is narrowly but clearly established, but has been successfully generalized.
Let us acknowledge some obvious truths here, that we had bad info in an article, that we had a scholar unfamiliar with WP process whose first attempt to correct it went somewhat (but not horrifically) wrong, that the engagement of a number of WP editors/administrators failed to identify the credibility of the scholar and wrongness of the info.
To simply toss UNDUE in response seems a mistake. UNDUE is, every day, actively helping us fight off crap trying to fling itself into WP.
Okay, I preacknowledge that this is not a "solution", but to me it seems that the problem is how to differentiate kooks from experts with regard to information that is not widespread, or self-produced but published research that is important for the articles, etc.
So far editors have a tool, UNDUE, to shoo away kooks. I'd envision an IAMEXPERT template which would inform the editors that the person in question considers himself an acknowledged expert (and that therefore there are other experts who consider him one), that the information was in fact peer reviewed, and which would kindly ask the editors to give the edit a bit more consideration.
What if the template is used by kooks? Well, they would somehow have to back up the facts: how are they acknowledged as experts, what kind of peer review happened, and, most importantly, why is the "minority fact" important? Do the kooks editors fight against possess proof of their expertise in the field? Do they have reviewed sources? If yes, this hack wouldn't even work. But if all the kooks are just self-made evangelists of kookery, they'll simply fail to prove they're right. The template would be just a request for the editors to strongly consider the appropriateness of UNDUE. It could be nicely phrased and offer the background. A tool for the other side.
grin
On Wednesday 22 February 2012 01:36 PM, Peter Gervai wrote:
Jokes aside :) the problem here is exemplary of what Wikipedia *doesn't* do well, which is to find ways to assess the legitimacy of not-yet-legitimised knowledge - whether the 'truth' is new analysis backed up by serious scholarship (as in this case), or things that have not yet made it to reliable print scholarship (knowledge that's circulated orally, whether in conversations or social media). The core of the problem would appear to be our insistence on the narrowest and smallest possible definition of 'legitimate knowledge'. And I'd imagine that the solution is to find a workable, sensible and cross-culturally translatable version of legitimacy that is a lot better, bigger and more generous than what we have.
foundation-l mailing list foundation-l@lists.wikimedia.org Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/foundation-l
On Wed, Feb 22, 2012 at 09:32, Achal Prabhala aprabhala@gmail.com wrote:
Jokes aside :) the problem here is exemplary of what Wikipedia *doesn't* do well, which is to find ways to assess the legitimacy of not-yet-legitimised knowledge - whether the 'truth' is new analysis backed up by serious scholarship (as in this case), or things that have not yet made it to reliable print scholarship (knowledge that's circulated orally, whether in conversations or social media). The core of the problem would appear to be our insistence on the narrowest and smallest possible definition of 'legitimate knowledge'. And I'd imagine that the solution is to find a workable, sensible and cross-culturally translatable version of legitimacy that is a lot better, bigger and more generous than what we have.
Thank you, that is a well phrased description of what I wanted to write.
Jokes aside :) the problem here is exemplary of what Wikipedia *doesn't* do well, which is to find ways to assess the legitimacy of not-yet-legitimised knowledge
I'm not seeing a good argument that we *should* assess the legitimacy. This seems to be being cast in the light of "verifiability not truth" (a really silly maxim) but, in reality, it goes more back to our idea of "we use reliable sources because they are *peer reviewed*".
The implicit suggestion here is that Wikipedia could/should act as that form of peer review for so called "not-yet-legitimised knowledge".
Although it would be nice to have that role it isn't actually all that practical for several reasons:
- We already have enough disagreement over sourcing as it is
- Very few of us are truly subject matter experts
- Even fewer of us have experience of peer review and critical examination of work (this is especially critical in the sciences)
- Taking on the role of peer review puts us at odds with our main aim of providing a summary resource.
The main thing it would do is open up Wikipedia as an avenue to push (and legitimise) fringe material.
whether the 'truth' is new analysis backed up by serious scholarship (as in this case), or things that have not yet made it to reliable print scholarship (knowledge that's circulated orally, whether in conversations or social media). The core of the problem would appear to be our insistence on the narrowest and smallest possible definition of 'legitimate knowledge'.
Is it? Let's look at what happened here.
- Someone posted information apparently based on their own analysis - it's not unreasonable to remove this
- He began to defend his additions on the talk page and some were incorporated
- He gave up further attempts
- The next day a lot of those comments were incorporated (if you read through the detail very carefully, to as much of an extent as the published literature allowed) based on the inconsistencies he raised
- He went away and wrote a book which forwards a number of new theories and updates our understanding of the topic.
Has anyone actually read through the points raised? The problem is not a case of "well this factual thing disproves what is in the article". It is much more a case of disagreement over the established *interpretation* of events and over the *extent* to which views expressed by the previously top level source were recorded (for example; "no evidence" was a mistaken summary of the view raised by the source, a point which was then corrected).
And I'd imagine that the solution is to find a workable, sensible and cross-culturally translatable version of legitimacy that is a lot better, bigger and more generous than what we have.
No it isn't.
We have a good sourcing policy; one which does cover a very wide range of sources and can be relaxed and restricted as required to fit the topic based on good editorial judgement.
However, for the topic of *history* (in which I have an interest, and where I work on articles at the moment) we definitely should stick to well reviewed, published material.
What *was* at issue here is how we treat new users; the discussion was approached (on the part of our editors) either as a battleground/fight, or in a quite patronising way. The issue here was that someone was put off from raising the issues.
I do know of academics who are frustrated by what they see as inaccuracies in Wikipedia articles; and when they try to correct them from their own knowledge get reverted. That, coupled with a lack of understanding of how Wikipedia works from a technical perspective, can make the experience very frustrating - and the opportunity to explain the rational viewpoint (i.e. peer reviewed sourcing) is lost.
If you read the article this is what he is saying; that academics should follow the peer review route before trying to get their material included. He also notes that even when he had taken this route he was put off because of his treatment the last time.
The failure here is *not* our content policy, but our behaviour.
Tom
"What *was* at issue here is how we treat new users; the discussion was approached (on the part of our editors) either as a battleground/fight, or in a quite patronising way. The issue here was that someone was put off from raising the issues."
The "expertise" that is most valued at Wikipedia is expertise in Wikipedia itself - its policies, procedures, technology, etc - rather than expertise in the content. That's a fundamental cultural flaw if the project is to succeed.
In reference to other comments here about the treatment of new editors, there has been a noticeable (to me at least) shift away from the role of administrators and "senior editors" from helping newcomers overcome the challenges to finding them a nuisance. On smaller projects the "it's no big deal" approach to the sysop flag still dominates and the administrators spend their time correcting naming errors, moving pages, merging histories, adding templates and also adding content to help new work survive. I don't see that at the English Wikipedia any more.
-----Original Message-----
From: Thomas Morton morton.thomas@googlemail.com
Date: Wed, 22 Feb 2012 10:15:20
To: Wikimedia Foundation Mailing List foundation-l@lists.wikimedia.org
Subject: Re: [Foundation-l] The 'Undue Weight' of Truth on Wikipedia (from the Chronicle) + some citation discussions
"What *was* at issue here is how we treat new users; the discussion was approached (on the part of our editors) either as a battleground/fight, or in a quite patronising way. The issue here was that someone was put off from raising the issues."
The "expertise" that is most valued at Wikipedia is expertise in Wikipedia itself - its policies, procedures, technology, etc - rather than expertise in the content. That's a fundamental cultural flaw if the project is to succeed.
In a sense; though, as one academic pointed out to me, writing an encyclopaedia is a skill in itself. And just because one is a topic area expert does not immediately make them the most capable of writing the article (in some respects it makes them less capable than an interested layman).
In reference to other comments here about the treatment of new editors, there has been a noticeable (to me at least) shift away from the role of administrators and "senior editors" from helping newcomers overcome the challenges to finding them a nuisance.
I don't think this is an issue of sysops or "senior editors" - it is ingrained in the vast majority of the community.
For example we know it is common in newer/younger editors to "bite" or otherwise apply policy too strongly - because with regularity we have to deal with the fall out (i.e. mentor them).
I see the same issues with content editors as well; with resistance to anyone trying to add content to articles they've invested in (I don't just mean subject matter experts).
Realistically *we are all part of the problem*. You, me, etc. because the problem is the entire ecosystem. Even stuff we think is polite and sensible might be incomprehensible to a newbie. Simple things like linking to, or quoting, parts of policy instead of taking time to write a simple explanation. The use of templates. The resistance to listen to arguments. It all adds up into a confusing user experience.
This is not a new problem; many online communities suffer, and have suffered, from it.
All of the things I mentioned are useful once you're dealing with editors aware of the workings - it's not "new and scary" at that point and acts as a useful shortcut to streamline our interaction. The key thing to work on, I think, is easing newbies into that process without bombarding them with too much of it at once.
Tom
On Wed, Feb 22, 2012 at 7:05 AM, Thomas Morton morton.thomas@googlemail.com wrote:
Realistically *we are all part of the problem*. You, me, etc. because the problem is the entire ecosystem. Even stuff we think is polite and sensible might be incomprehensible to a newbie. Simple things like linking to, or quoting, parts of policy instead of taking time to write a simple explanation. The use of templates. The resistance to listen to arguments. It all adds up into a confusing user experience.
This is not a new problem; many online communities suffer, and have suffered, from it.
All of the things I mentioned are useful once you're dealing with editors aware of the workings - it's not "new and scary" at that point and acts as a useful shortcut to streamline our interaction. The key thing to work on, I think, is easing newbies into that process without bombarding them with too much of it at once.
This is part of the reason why I have been advocating that the education programs take an active role in encouraging the academics who teach classes on Wikipedia to become contributors themselves. If we can provide high-quality one-on-one mentoring to academics in the workings of Wikipedia we could increase the percentage of users who have a foot in both worlds. Editors without subject matter expertise will always be needed but to solve some of the problems on Wikipedia, particularly those regarding undue weight and comprehensiveness of coverage, we have to attract experts and help them become editors.
Mike
On 22 February 2012 12:44, Mike Christie coldchrist@gmail.com wrote:
On Wed, Feb 22, 2012 at 7:05 AM, Thomas Morton morton.thomas@googlemail.com wrote:
Realistically *we are all part of the problem*. You, me, etc., because the problem is the entire ecosystem. Even stuff we think is polite and sensible might be incomprehensible to a newbie. Simple things like linking to, or quoting, parts of policy instead of taking time to write a simple explanation. The use of templates. The resistance to listen to arguments. It all adds up into a confusing user experience.
This is not a new problem; many online communities suffer, and have suffered, from it.
All of the things I mentioned are useful once you're dealing with editors aware of the workings - it's not "new and scary" at that point and acts as a useful shortcut to streamline our interaction. The key thing to work on, I think, is easing newbies into that process without bombarding them with too much of it at once.
This is part of the reason why I have been advocating that the education programs take an active role in encouraging the academics who teach classes on Wikipedia to become contributors themselves. If we can provide high-quality one-on-one mentoring to academics in the workings of Wikipedia we could increase the percentage of users who have a foot in both worlds. Editors without subject matter expertise will always be needed but to solve some of the problems on Wikipedia, particularly those regarding undue weight and comprehensiveness of coverage, we have to attract experts and help them become editors.
That sounds like a great idea! I've always wondered why our education programs focus on students, but not on the academics that teach them - or on professional organisations.
During the last Board elections, one of the things I kept saying is that we need to focus on subject matter experts, be they academics or professionals, and get their input.
Tom
"What *was* at issue here is how we treat new users; the discussion was approached (on the part of our editors) either as a battleground/fight, or in a quite patronising way. The issue here was that someone was put off from raising the issues."
The "expertise" that is most valued at Wikipedia is expertise in Wikipedia itself - its policies, procedures, technology, etc - rather than expertise in the content. That's a fundamental cultural flaw if the project is to succeed.
In a sense; though, as one academic pointed out to me, writing an encyclopaedia is a skill in itself. And just because one is a topic area expert does not immediately make them the most capable of writing the article (in some respects it makes them less capable than an interested layman).
Of course, but that is not a reason to heap abuse on them rather than assisting them.
In reference to other comments here about the treatment of new editors, there has been a noticeable (to me at least) shift away from the role of administrators and "senior editors" from helping newcomers overcome the challenges to finding them a nuisance.
I don't think this is an issue of sysops or "senior editors" - it is ingrained in the vast majority of the community.
For example we know it is common in newer/younger editors to "bite" or otherwise apply policy too strongly - because with regularity we have to deal with the fall out (i.e. mentor them).
I see the same issues with content editors as well; with resistance to anyone trying to add content to articles they've invested in (I don't just mean subject matter experts).
That is what is going on at the Haymarket article. Editing that article successfully is harder than the D-Day landings were.
Realistically *we are all part of the problem*. You, me, etc. because the problem is the entire ecosystem. Even stuff we think is polite and sensible might be incomprehensible to a newbie. Simple things like linking to, or quoting, parts of policy instead of taking time to write a simple explanation. The use of templates. The resistance to listen to arguments. It all adds up into a confusing user experience.
This is not a new problem; many online communities suffer, and have suffered, from it.
All of the things I mentioned are useful once you're dealing with editors aware of the workings - it's not "new and scary" at that point and acts as a useful shortcut to streamline our interaction. The key thing to work on, I think, is easing newbies into that process without bombarding them with too much of it at once.
Tom
And we do have a problem with academics such as this one who are not patient enough, or are too busy, to get up to speed. Note, however, that he is not too busy to write an article in The Chronicle of Higher Education or appear on NPR.
Now, in effect, we have moved a Wikipedia policy discussion off our policy pages and onto The Chronicle of Higher Education and NPR, to which most of us have no access. Our policy process is broken and, in fact, effectively jammed.
Fred
On Wednesday 22 February 2012 03:45 PM, Thomas Morton wrote:
Jokes aside :) the problem here is exemplary of what Wikipedia *doesn't* do well, which is to find ways to assess the legitimacy of not-yet-legitimised knowledge
I'm not seeing a good argument that we *should* assess the legitimacy. This seems to be being cast in the light of "verifiability not truth" (a really silly maxim) but, in reality, it goes more back to our idea of "we use reliable sources because they are *peer reviewed*".
Well actually, we use newspaper sources very frequently, as well as non-scholarly (and therefore non-peer-reviewed) books, so in fact, we rely on *printing* (or to put it more kindly, publishing) as a signal for peer-review, not peer-review itself. In my opinion, this is a poor signal.
The implicit suggestion here is that Wikipedia could/should act as that form of peer review for so called "not-yet-legitimised knowledge".
Although it would be nice to have that role it isn't actually all that practical for several reasons:
- We already have enough disagreement over sourcing as it is
- Very few of us are truly subject matter experts
- Even fewer of us have experience of peer review and critical examination of work (this is especially critical in the sciences)
- Taking on the role of peer review puts us at odds with our main aim of providing a summary resource.
The main thing it would do is open up Wikipedia as an avenue to push (and legitimise) fringe material.
I completely agree that we need a system that doesn't throw a spanner in the works - but if you're suggesting that the only workable signal for legitimacy is printing, then that seems odd; and it is odder still that, while we ourselves rely on a range of filtered non-printed sources for our own information (social media, conversations), we shouldn't attempt to find a way to bring Wikipedia into these very old and very new systems of legitimate knowledge that we've fundamentally accepted ourselves.
whether the 'truth' is new analysis backed up by serious scholarship (as in this case), or things that have not yet made it to reliable print scholarship (knowledge that's circulated orally, whether in conversations or social media). The core of the problem would appear to be our insistence on the narrowest and smallest possible definition of 'legitimate knowledge'.
Is it? Let's look at what happened here.
- Someone posted information apparently based on their own analysis - it's not unreasonable to remove this
- He began to defend his additions on the talk page and some were incorporated
- He gave up further attempts
- The next day a lot of those comments were incorporated (if you read through the detail very carefully, to as much of an extent as the published literature allowed) based on the inconsistencies he raised
- He went away and wrote a book which forwards a number of new theories and updates our understanding of the topic.
Has anyone actually read through the points raised? The problem is not a case of "well this factual thing disproves what is in the article". It is much more a case of disagreement over the established *interpretation* of events and over the *extent* to which views expressed by the previously top level source were recorded (for example; "no evidence" was a mistaken summary of the view raised by the source, a point which was then corrected).
And I'd imagine that the solution is to find a workable, sensible and cross-culturally translatable version of legitimacy that is a lot better, bigger and more generous than what we have.
No it isn't.
We have a good sourcing policy; one which does cover a very wide range of sources and can be relaxed and restricted as required to fit the topic based on good editorial judgement.
Consider these two points:
1) Bad behaviour needs back-up, and inadequately updated or incompletely thought-out policies serve as a bulwark against weeding out bad behaviour.
2) If, for instance, 'no original research' was put in place to keep physics cranks out, as seems to be the case, then it has succeeded - the physics cranks are out. It was put in place ten years ago, though, and while it may have been very useful circa 2001, in a Wikipedia with limited geographical contribution and use, things are very different now. Might we not benefit from assessing the cost of policies that guard against enemies who no longer exist?
However, for the topic of *history* (in which I have an interest, and where I work on articles at the moment) we definitely should stick to well reviewed, published material.
What *was* at issue here is how we treat new users; the discussion was approached (on the part of our editors) either as a battleground/fight, or in a quite patronising way. The issue here was that someone was put off from raising the issues.
I do know of academics who are frustrated by what they see as inaccuracies in Wikipedia articles; and when they try to correct them from their own knowledge get reverted. That, coupled with a lack of understanding of how Wikipedia works from a technical perspective, can make the experience very frustrating - and the opportunity to explain the rational viewpoint (i.e. peer reviewed sourcing) is lost.
If you read the article this is what he is saying; that academics should follow the peer review route before trying to get their material included. He also notes that even when he had taken this route he was put off because of his treatment the last time.
The failure here is *not* our content policy, but our behaviour.
I respect where you're coming from and it's very helpful in furthering my own understanding of the situation. But I think the 'behavioural' is distinctly affected by 'policy' - especially when the policy is malleable, loose and archaic enough to be interpreted (usually hawkishly) at will by those already in the know.
Tom
On 22 February 2012 13:11, Achal Prabhala aprabhala@gmail.com wrote:
Well actually, we use newspaper sources very frequently, as well as non-scholarly (and therefore non-peer-reviewed) books, so in fact, we rely on *printing* (or to put it more kindly, publishing) as a signal for peer-review, not peer-review itself. In my opinion, this is a poor signal.
Well realistically, yes, we consider something that has been reputably published to have a basic level of reliability. But that is not the end of the test.
This idea of "published" can (and is) relaxed though. Indeed it is my perception that in many topic areas we rely far too heavily on online sources - there can be a distinct prejudice against offline source material.
However I am interested in whether you have a specific idea of what you would change? Can you express a reason for why using the published test is a poor signal?
Tom
On 22 February 2012 13:29, Thomas Morton morton.thomas@googlemail.com wrote:
However I am interested in whether you have a specific idea of what you would change? Can you express a reason for why using the published test is a poor signal?
It produces a rich crop of both false positives and false negatives. I can't think of a better test off the top of my head, but that doesn't mean its defects are somehow not gross and obvious. No-one who's ever been quoted by the media could ever hear them being called a "reliable source" and keep a straight face.
- d.
On Wednesday 22 February 2012 06:59 PM, Thomas Morton wrote:
On 22 February 2012 13:11, Achal Prabhala aprabhala@gmail.com wrote:
Well actually, we use newspaper sources very frequently, as well as non-scholarly (and therefore non-peer-reviewed) books, so in fact, we rely on *printing* (or to put it more kindly, publishing) as a signal for peer-review, not peer-review itself. In my opinion, this is a poor signal.
Well realistically, yes, we consider something that has been reputably published to have a basic level of reliability. But that is not the end of the test.
This idea of "published" can (and is) relaxed though. Indeed it is my perception that in many topic areas we rely far too heavily on online sources - there can be a distinct prejudice against offline source material.
However I am interested in whether you have a specific idea of what you would change? Can you express a reason for why using the published test is a poor signal?
Tom
I think it's a poor signal when it's the only signal, when it wholly occupies the phrase 'legitimate knowledge'. In a cross-cultural context, and especially on English Wikipedia, it's notoriously fraught - it's very difficult for someone with no experience of a place to distinguish between 'printed' and 'respectably published' - or even more simply, between a lunatic fringe newsletter and a mainstream newspaper. I thought what Tom Morris had to say here was very useful: http://en.wikipedia.org/wiki/User:Tom_Morris/The_Reliability_Delusion - that we could well deepen our own understanding of currently unimpeachable sources - like the Guardian or the Observer.
So the helpful starting point here is that printed, published work is fallible and variably reliable too.
In real life, each of us has figured out ways to filter the legitimate from the illegitimate in terms of received knowledge, whether in newspapers, conversations, or on Twitter. But on Wikipedia, we've only figured out a way to sort the published, and maybe a little bit more. Published knowledge, though, is a fraction of what there is to know as a whole. That sounds terribly high-minded but it's not really, and some more on this is available here: http://meta.wikimedia.org/wiki/Research:Oral_Citations
My point is not that we should discard what we have in terms of policies. My point is that we may benefit from acknowledging what the policies lead us *not to do well*. And that would be to find a system to sort out the unreliable and fake from the reliable and legitimate when it comes to oral citations, or social media citations or primary sources - in exactly the same way as we've figured out a system to sort the unreliable from the reliable in another fallible knowledge system - printed publishing. And if we think that these things we don't do well are important and that we can figure a way to bring them in, then we should find that way. (Which is to say - to add to what we've got, not to forego the current system).
An aside: there are millions of oral testimonies hosted at thousands of extremely reputable organisations - on Native American life at the Smithsonian, or Holocaust history at Yale - which currently have no place on Wikipedia, because they're primary sources. Often, but not always, these primary sources relate to power relations - and so you are far more likely to find the lives of women, Native Americans, Holocaust survivors or jazz musicians in oral testimony than in the printed word. Sometimes foregoing these primary sources may be the right decision, but other times it will not be - and by disallowing primary sources in their entirety, or not figuring out a system to use them sensibly, I think we're throwing the baby out with the bathwater.
Related point: there's this project proposal that you might be interested in - http://meta.wikimedia.org/wiki/Wikimedia_Fellowships/Project_Ideas/InCite
I think it's a poor signal when it's the only signal, when it wholly occupies the phrase 'legitimate knowledge'. In a cross-cultural context, and especially on English Wikipedia, it's notoriously fraught - it's very difficult for someone with no experience of a place to distinguish between 'printed' and 'respectably published' - or even more simply, between a lunatic fringe newsletter and a mainstream newspaper. I thought what Tom Morris had to say here was very useful: http://en.wikipedia.org/wiki/User:Tom_Morris/The_Reliability_Delusion - that we could well deepen our own understanding of currently unimpeachable sources - like the Guardian or the Observer.
So the helpful starting point here is that printed, published work is fallible and variably reliable too.
I absolutely agree with that. One of the big bugbears I have is that, when discussing sourcing, people put sources into categories such as "newspaper" and declare them reliable on that basis alone.
IMO our sourcing policy is very good at laying out how to consider the reliability of a source - for example, reminding us to think of it in terms of not only the content but also the author and publisher (is the author known for attacking X? Is the publisher criticised for publishing poor-quality work?).
This is the key usefulness of publishing - in that it involves other people/entities in the process.
So if anything we should boil the sourcing policy down to "let's see who your friends are".
In real life, each of us has figured out ways to filter the legitimate from the illegitimate in terms of received knowledge, whether in newspapers, conversations, or on Twitter.
Ugh, no. Don't get me started on this :) The lack of critical thinking within the wider population is dire - and the spoon-fed rubbish we get from every side is disheartening. Things like the prevalence of homoeopathy are examples of this issue.
We have filters; but they are subjective and usually not good.
But on Wikipedia, we've only figured out a way to sort the published, and maybe a little bit more. Published knowledge, though, is a fraction of what there is to know as a whole. That sounds terribly high-minded but it's not really, and some more on this is available here: http://meta.wikimedia.org/wiki/Research:Oral_Citations
The oral citations stuff is cool - and I now see where you are going with your thought process.
For what it's worth, I don't think we preclude such sourcing - but I do think that the community often misunderstands (or fails to read) the actual policy (as opposed to, say, the summary).
My point is not that we should discard what we have in terms of policies.
My point is that we may benefit from acknowledging what the policies lead us *not to do well*. And that would be to find a system to sort out the unreliable and fake from the reliable and legitimate when it comes to oral citations, or social media citations or primary sources - in exactly the same way as we've figured out a system to sort the unreliable from the reliable in another fallible knowledge system - printed publishing. And if we think that these things we don't do well are important and that we can figure a way to bring them in, then we should find that way. (Which is to say - to add to what we've got, not to forego the current system).
This is probably where we disagree. The sticking point is the amount of editorial decision-making involved in terms of "what weight should I give this material?". If you have a set of oral accounts of an event, how do you present that - which parts as fact? Which as opinion? With what weighting? Finding expert people to do that review stage - and to have the work reviewed - is absolutely critical to writing an encyclopaedia.
An aside: there are millions of oral testimonies hosted at thousands of extremely reputable organisations - on Native American life at the Smithsonian, or Holocaust history at Yale - which currently have no place on Wikipedia, because they're primary sources.
And this is what I meant about misunderstanding policies, because nothing in our policies precludes the use of primary sources. What you can't do is use them for interpretation or analysis. To make up an example: suppose you have an oral citation from someone who was arrested under an oppressive regime and questioned at length about his choice of blond hair color and whether he dyed it. You could relate that experience, but you couldn't necessarily say something like "The regime persecuted people with blond hair, or those who dyed it".
So if there are oral recordings at the Smithsonian & Yale (surely that means they are published? It certainly fits our explicit criteria for published), then we can and should be using them.
One example of published primary sources we do use is court proceedings.
Tom
Interesting, because in the Haymarket case there is a 3,000-page transcript of the trial online. I thought we could not use it directly. What can we use it for? Can it be used as a reference for itself, in the sense of establishing the fact that there was a lengthy hearing, with a great number of prosecution witnesses being heard as well as many defense witnesses?
From Identifying reliable sources:
Primary sources are often difficult to use appropriately. While they can be both reliable and useful in certain situations, they must be used with caution in order to avoid original research. Material based purely on primary sources should be avoided. All interpretive claims, analyses, or synthetic claims about primary sources must be referenced to a secondary source, rather than original analysis of the primary-source material by Wikipedia editors.
Fred
Interesting, because in the Haymarket case there is a 3,000-page transcript of the trial online. I thought we could not use it directly. What can we use it for?
Can it be used as a reference for itself, in the sense of establishing the fact that there was a lengthy hearing, with a great number of prosecution witnesses being heard as well as many defense witnesses?
No; because that is interpretive/analysis.
But you could use it to, say, list the witnesses called.
Tom
On Wed, Feb 22, 2012 at 11:01 AM, Achal Prabhala aprabhala@gmail.com wrote:
An aside: there are millions of oral testimonies hosted at thousands of extremely reputable organisations - on Native American life at the Smithsonian, or Holocaust history at Yale - which currently have no place on Wikipedia, because they're primary sources.
There is no policy disallowing the use of primary sources on the English Wikipedia. They have to be used carefully, because it's easy to misuse them, but they're definitely allowed. See the NOR policy -- http://en.wikipedia.org/w/index.php?title=Wikipedia:No_original_research&...
Sarah
Thank you Tom, and Sarah, for your very helpful explanations - they are extremely useful.
There's a discussion at the reliable sources noticeboard, for instance, which highlights some of the interpretive problems you raise: http://en.wikipedia.org/wiki/Wikipedia:Reliable_sources/Noticeboard#Oral_Cit...
Can I ask you how you would analyse the work of the oral citations project (http://meta.wikimedia.org/wiki/Research:Oral_Citations) in terms of our policies on original research, and verifiability?
And further, how might these policies apply to the idea of social media, as well as more private archives - say, corporate archives - being used as citations? (And on that point, is there a difference between the Native American folk archive at the Smithsonian and the corporate archives of the Michelin corporation in France, for our purposes?)
Okay, I know that's asking a lot :)
Hi Achal,
It's difficult to give an off-the-cuff reply to this, because there are so many variables. But audio interviews published only by Wikinews have already been used as sources on Wikipedia. For example, I added a David Shankbone interview with Ingrid Newkirk to her bio. http://en.wikipedia.org/w/index.php?title=Ingrid_Newkirk&oldid=473905868...
And I have used that interview as a source for at least two other articles that discussed Newkirk's views.
It's a primary source, but it's unproblematic, in terms of NOR, because it's clearly Ingrid Newkirk (not an imposter), and she isn't saying anything controversial (e.g. nothing defamatory or factually contentious). And I wasn't using it in an interpretive way, but purely descriptively. The only prohibition regarding primary sources is when they are used interpretively, as though they are secondary sources -- that's where you get into NOR territory.
In terms of the Verifiability policy, that interview might count as self-published or unpublished, I don't know. But remember -- that policy requires reliable published sources for material that is (reasonably) challenged or likely to be challenged. It would be entirely contrary to the spirit of that policy to object to Ingrid Newkirk talking about herself non-contentiously in the article about her. That is, it would not be a reasonable challenge.
So, to answer your question more usefully perhaps, I do not see the introduction of oral citations into Wikipedia as a major upheaval (so long as they are recorded in some way and used appropriately), in terms of the existing policies. And I think they would liven up our articles considerably if done well.
Sarah
Thanks Sarah - this is very interesting, and I too think that a mix of traditional and non-traditional citations makes for a very good package. Andrew and Castelo Branco brought up the idea of using Wikinews, rather than Commons, as the publisher for the interviews that form the basis of oral citations - taking advantage of its policy on OR. And Andrew further suggested, on an earlier thread, reinventing Wikinews into a Nat-Geo-style feature news site.
Yes, I saw Andrew's suggestion and thought it was a very exciting idea.
If the oral citations (audio and video) were used as an adjunct to more traditional sources, I think there would be no problem at all.
On the Holocaust page, we used to highlight a quote (now removed) from a witness who talked to the BBC at the time of the British liberation of one of the concentration camps, Bergen Belsen. http://en.wikipedia.org/w/index.php?title=The_Holocaust&oldid=358632356#...
"We heard a loud voice repeating the same words in English and in German: 'Hello, hello. You are free. We are British soldiers and have come to liberate you.' These words still resound in my ears."
This kind of personal memory is very moving and compelling. Imagine if we could link to an audio or video interview of an eyewitness by an editor. WP is lagging behind with this because we are so afraid of OR by anonymous interviewers. But if we make sure there is nothing contentious said -- no attempts to rewrite history, as it were -- I think it would be almost entirely unproblematic -- people talking about "this is how I felt when X happened; this is how it was for me ...".
The Foundation could set up a wiki dedicated to eyewitness accounts that people could upload themselves, then Wikipedia could incorporate them as appropriate, using the current restrictions on primary sources (i.e. using them purely descriptively in articles about that subject). Yes, I know, potential problems with libel and nonsense, but no more so than we have already, and we deal with them.
Sarah
Why would the quote have been removed?
Ultimately most historical events resolve themselves into a series of narratives. Some, like personal diaries, are unofficial; others, like testimony in a court, are official. All can be subject to error. What the narratives say is what they say, nothing more nor less.
We are indeed so afraid of OR, to the point where we trust nobody. When we apply a strict true-or-false test to a statement we lose our ability to recognize truths that lie at the intersection of multiple absurdities.
Ray
This idea of "published" can (and is) relaxed though. Indeed it is my perception that in many topic areas we rely far too heavily on online sources - there can be a distinct prejudice against offline source material.
Tom
Journals pose a particular problem as they are often - as with the three journal articles in this case - behind paywalls. Those are peer-reviewed, while his book, from a commercial publisher, has not received academic reviews.
Someone did send me a copy of one of the academic journal articles, but I have yet to see the other two, which cost quite a bit.
Fred
I was a student at really well-resourced US universities for a short part of my life and then spent the rest of it in far-flung parts of the third world, with little access to the kind of knowledge I had while in the US - a situation that continues here in India - and so I particularly identify with the access problem you've raised. Journal corporations like Reed Elsevier and services like JSTOR and Project Muse provide negligible access to non-paying readers outside their traditional base - rich universities in the US, Europe and a few other parts of the world.
This creates a weird anomaly, reflected - I am sure - on Wikipedia. Open Access journals - and just generally, any knowledge resource whose text is available to see freely on the internet - probably gets far more citation use on Wikipedia and elsewhere than a journal behind a paywall. (And in many ways this is really good - the reward for sharing or going OA is greater circulation and more citations).
But I can't imagine that either closed journal companies or closed journal article authors are pleased with this. If enough of us see some value in it, I wonder if we can ask someone at the Foundation to negotiate with these services for some kind of preferential/free access - perhaps a limited amount of free browsing with a registered Wikipedia login, or something like that. It would certainly help the work of editing - both in terms of citing well and in terms of looking up or checking a citation. The journals market is so centralised that there are literally two companies and two services to talk to for just about everything under the sun.
A related problem is what currently happens to material on Google Books. You follow a citation link on a Wikipedia page, say to a particular page, and you find that the page in question is blocked - because it has not been included in the (usually minimal) free page views that the copyright holder of the book has authorised Google to allow. This is a shame because, as I understand it, even when something like 10% of the book is allowed to be seen, the Google Books process is somewhat random, and doesn't necessarily include the one page you want in your session. If it were technically possible - and if someone at the Foundation was interested in talking to Google about it - for each Google Books citation link from Wikipedia to reliably take us to the cited page (assuming some minimal viewing permission, so this wouldn't apply to books where the copyright holder has granted *no* permissions), that would be really helpful for editors, both those making a citation and others checking up on it. (It would probably also turn a lot of the non-linking citations to pages in a book into links that take you somewhere.)
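Incidentally, Google Books URLs already support deep links to a specific printed page via the `pg` query parameter, so a citation tool could at least construct such links mechanically - whether the page actually displays still depends on the publisher's preview permissions. A minimal sketch of the idea (the volume id here is made up purely for illustration):

```python
def google_books_page_url(volume_id: str, page: int) -> str:
    """Build a Google Books link that targets a specific page.

    Uses the well-known books.google.com query parameters: `id`
    selects the scanned volume and `pg=PA<n>` requests a particular
    printed page. Whether that page renders is up to the copyright
    holder's preview settings, not the URL.
    """
    return f"https://books.google.com/books?id={volume_id}&pg=PA{page}"

# A citation pointing at page 73 of a hypothetical volume:
print(google_books_page_url("zyTCAlFPjgYC", 73))
```

This only addresses the link-construction half of the problem; the suggestion above - guaranteeing that the cited page is included in the free preview - would need cooperation from Google's side.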