On 7 May 2014 19:38, Andreas Kolbe jayen466@gmail.com wrote:
On Thu, May 8, 2014 at 12:22 AM, phoebe ayers phoebe.wiki@gmail.com wrote:
On Wed, May 7, 2014 at 3:14 PM, Andreas Kolbe jayen466@gmail.com wrote:
Anne, there are really well-established systems of scholarly peer review.
There is no need to reinvent the wheel, or add distractions such as infoboxes and other bells and whistles.
And those peer review systems have lots and lots of problems as well as upsides. Lots of people *are* trying to reinvent peer review, including some very respected scientists.* As an academic science librarian, I can attest to there being widespread and currently ongoing debates about how to review scientific knowledge, whether traditional peer review is sufficient, and how to improve it. The current system for scientific research is often opaque, messy, prone to failure, and doesn't always support innovation, and lots of smart people are thinking about it.
Erik: aha! I'd forgotten about those case studies, thanks!
Given that the post that started this thread referenced medical content, are you telling me that you think it would be useless to have qualified medical experts reviewing Wikipedia's medical content, because the process would be "opaque, messy, prone to failure and doesn't always support innovation"?
Andreas, I don't think that's necessarily what is being said here. However, the review needs to be scientifically valid, and the review in the JAOA isn't. For example, it does not require that the assessor look at the references actually used in the article to determine whether each reference meets the arbitrary standard set (i.e., a peer-reviewed source updated or published within the last 5 years), and whether the article says what the reference says. Instead, the assessors looked at sources that may or may not have been used in the article, thus eroding any control for disagreement amongst scientific peers - something that most editors who work in this area know is surprisingly common.
The study itself identifies very significant, possibly fatal, limitations: the use of essentially random reference sources that simply happened to be available, the reviewers' level of understanding of the subjects, the small number of reviewers, and the fact that subject-matter experts themselves are often in disagreement. It has not demonstrated repeatability.
It's possible to create a study that's worthwhile. This one wasn't it.
Risker/Anne