Anne, there are really well-established systems of scholarly peer review.
There is no need to reinvent the wheel, or add distractions such as
infoboxes and other bells and whistles.
I find it extraordinary that, after 13 years, a project designed to make
the sum of human knowledge available to humanity, with an annual budget of
$50 million, has no clue how to measure the quality of the content it is
providing, no apparent interest in doing so, and no apparent will to spend
money on it.
For what it's worth, a recent external study of Wikipedia's medical
content reached unflattering conclusions:
Most Wikipedia articles for the 10 costliest conditions in the United
States contain errors compared with standard peer-reviewed sources. Health
care professionals, trainees, and patients should use caution when using
Wikipedia to answer questions regarding patient care.
Our findings reinforce the idea that physicians and medical students who
currently use Wikipedia as a medical reference should be discouraged from
doing so because of the potential for errors.
On Wed, May 7, 2014 at 10:59 PM, Risker <risker.wp(a)gmail.com> wrote:
On 7 May 2014 16:17, Anthony Cole wrote:
Could someone please point me to all the studies the WMF has conducted
into the reliability of Wikipedia's content? I'm particularly interested in
the medical content, but would also like to look over the others.
Anthony Cole <http://en.wikipedia.org/wiki/User_talk:Anthonyhcole>
I've often thought about this myself, and I'm fairly certain the WMF has
never done any serious assessment of article quality. Different projects
have done so on their own, through content auditing processes and the
development of Wikipedia 1.0, but that affects a minority of articles.
There are some real challenges in coming up with workable metrics.
For example: is a stub article inaccurate, incomplete, or does it really
contain all the information it's ever likely to get?
How does one assess the accuracy of articles where there are multiple
sources that we'd consider reliable but that provide contradictory
information on a topic? That would include, for example, all the ongoing
boundary issues involving multiple countries, the assessment of historical
impact of certain events or persons, and certain scientific topics where
new claims and reports happen fairly frequently and may or may not have
been reproduced. There may also be geographic or cultural factors that
affect the quality of an article, or the perceived notability of a subject,
and challenges dealing with cross-language reference sources.
Many of the metrics used for determining "quality" in audited articles on
English Wikipedia have very little to do with the actual quality of the
article. From the perspective of providing good information, a lot of
Manual of Style practices are nice but not required. Certain accessibility
standards (alt text for images, media positioning so as not to adversely
affect screen-readers) are not quality metrics, strictly speaking; they're
*accessibility* standards. There remains a huge running debate about
whether or not infoboxes should be required, what information should be in
them, and how to deal with controversial or complex information in infoboxes.
So I suppose the first step would be determining what metrics should be
included in a quality assessment of a project.
Wikimedia-l mailing list, guidelines at: