" Only eight serious errors, such as misinterpretations of important concepts, were detected in the pairs of articles reviewed, four from each encyclopaedia. But reviewers also found many factual errors, omissions or misleading statements: 162 and 123 in Wikipedia and Britannica, respectively. "
http://www.nature.com/news/2005/051212/full/438900a.html
http://www.nature.com/news/2005/051212/multimedia/438900a_m1.html
http://www.nature.com/news/2005/051212/box/438900a_BX1.html
I hope they publish more detail about this study.
Jeremy Dunck wrote:
http://www.nature.com/news/2005/051212/full/438900a.html http://www.nature.com/news/2005/051212/multimedia/438900a_m1.html http://www.nature.com/news/2005/051212/box/438900a_BX1.html
I hope they publish more detail about this study.
As I just wrote in my weblog http://wm.sieheauch.de/ the study is relatively poor. Good for Wikipedia, but the sample is quite small and it's vague how the articles were chosen. I bet Nature would not have accepted the research as a submitted paper.
Greetings, Jakob
On 12/14/05, Jakob Voss jakob.voss@nichtich.de wrote:
As I just wrote in my weblog http://wm.sieheauch.de/ the study is relatively poor. Good for Wikipedia, but the sample is quite small and it's vague how the articles were chosen. I bet Nature would not have accepted the research as a submitted paper.
Indeed. The c't study last year seemed superior to me (and less head-to-head). I hope the publishing world starts to take accuracy and accountability more seriously, both by designing better studies in this vein and by improving its standards for revision and quality checking.
In my view, precious few modern reference works -- including those pertaining to issues of great world importance -- take themselves or their accuracy seriously enough. I wonder what the review process at Jane's is like...
-- ++SJ
On Wednesday 14 December 2005 18:27, Jakob Voss wrote:
As I just wrote in my weblog http://wm.sieheauch.de/ the study is relatively poor. Good for Wikipedia, but the sample is quite small and it's vague how the articles were chosen. I bet Nature would not have accepted the research as a submitted paper.
Agreed, though it's a lot of work to get expert content reviews -- as Nupedia shows! :) The other interesting thing was that the two sources have a strong correlation in errors:
http://reagle.org/joseph/blog/culture/wikipedia/nature-wp-v-eb?showcomments=...
"Jeremy Dunck" jdunck@gmail.com wrote in message news:2545a92c0512141256w5d0dd64fs92633cdca30a771@mail.gmail.com...
"Only eight serious errors, such as misinterpretations of important concepts, were detected in the pairs of articles reviewed, four from each encyclopaedia. But reviewers also found many factual errors, omissions or misleading statements: 162 and 123 in Wikipedia and Britannica, respectively."
http://www.nature.com/news/2005/051212/full/438900a.html http://www.nature.com/news/2005/051212/multimedia/438900a_m1.html http://www.nature.com/news/2005/051212/box/438900a_BX1.html I hope they publish more detail about this study.
Has this been written up yet?
Possible title [[en:Wikipedia: Comparison of Wikipedia with Brittannica by Nature magazine]]
Or should it live on meta?
It is in Wikinews, as "A Nature investigation finds Wikipedia comes close to Britannica in terms of the accuracy of its science entries" http://en.wikinews.org/wiki/A_Nature_investigation_finds_Wikipedia_comes_clo...
It and other comparisons might be of merit for the Foundation website.
Also, it was discussed on the wikipedia-l mailing list that Britannica's articles were usually shorter, and thus the mistakes were actually a greater percentage of the information.
Nick Moreau zanimum
On 12/15/05, Phil Boswell phil.boswell@gmail.com wrote:
"Jeremy Dunck" jdunck@gmail.com wrote in message news:2545a92c0512141256w5d0dd64fs92633cdca30a771@mail.gmail.com...
"Only eight serious errors, such as misinterpretations of important concepts, were detected in the pairs of articles reviewed, four from each encyclopaedia. But reviewers also found many factual errors, omissions or misleading statements: 162 and 123 in Wikipedia and Britannica, respectively."
http://www.nature.com/news/2005/051212/full/438900a.html http://www.nature.com/news/2005/051212/multimedia/438900a_m1.html http://www.nature.com/news/2005/051212/box/438900a_BX1.html I hope they publish more detail about this study.
Has this been written up yet?
Possible title [[en:Wikipedia: Comparison of Wikipedia with Brittannica by Nature magazine]]
Or should it live on meta?
Phil [[en:User:Phil Boswell]]
-- If you want a Gmail account invite, or ten, just ask...
On 12/15/05, Nicholas Moreau nicholasmoreau@gmail.com wrote:
Also, it was discussed on the wikipedia-l mailing list that Britannica's articles were usually shorter, and thus the mistakes were actually a greater percentage of the information.
I think that's premature. The Nature article did mention that the compared articles were selected, in part, for comparable entry length in both works.
What I would love to see is a study in a few weeks/months to show the evolution of these 50 articles in the days following the Nature article... and the delay that was necessary to track down the various errors.
I would also welcome on the WMF site a paper summarizing both the findings of Nature AND the consequences of the article (both in the press... and directly on Wikipedia articles or on Wikipedians' state of mind).
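A minimal sketch of how the "delay" part could be measured once the fixing revisions have been identified by hand from the page histories (the article names and timestamps below are invented placeholders, not data from the study):

  # Sketch: days elapsed between the Nature piece (2005-12-14) and the
  # revision judged to have fixed each reported error. The entries are
  # hypothetical; the fixing revisions would be identified manually.
  from datetime import datetime

  NATURE_DATE = datetime(2005, 12, 14)

  fixes = [
      ("Example article A", "2005-12-16T09:12:00Z"),
      ("Example article B", "2006-01-03T22:40:00Z"),
  ]

  for title, ts in fixes:
      fixed_at = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ")
      print(title, "fixed after", (fixed_at - NATURE_DATE).days, "day(s)")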
Anthere
Jeremy Dunck wrote:
" Only eight serious errors, such as misinterpretations of important concepts, were detected in the pairs of articles reviewed, four from each encyclopaedia. But reviewers also found many factual errors, omissions or misleading statements: 162 and 123 in Wikipedia and Britannica, respectively. "
http://www.nature.com/news/2005/051212/full/438900a.html
http://www.nature.com/news/2005/051212/multimedia/438900a_m1.html
http://www.nature.com/news/2005/051212/box/438900a_BX1.html
I hope they publish more detail about this study.
On 12/15/05, Anthere anthere9@yahoo.com wrote:
What I would love to see is a study in a few weeks/months to show the evolution of these 50 articles in the days following the Nature article... and the delay that was necessary to track down the various errors.
I would also welcome on the WMF site a paper summarizing both the findings of Nature AND the consequences of the article (both in the press... and directly on Wikipedia articles or on Wikipedians' state of mind).
That is a good idea. Perhaps a significant group could collaborate on an analytical paper, with input from the authors of the Nature study, third-party academics using EB and WP, WP contributors, and changes to the WP articles... and submit such a beast to a peer-reviewed journal for publication.
SJ
On 12/15/05, SJ 2.718281828@gmail.com wrote:
That is a good idea. Perhaps a significant group could collaborate on an analytical paper, with input from the authors of the Nature study, third-party academics using EB and WP, WP contributors, and changes to the WP articles... and submit such a beast to a peer-reviewed journal for publication.
As I have no first-hand experience with such work, I would be interested in assisting such an effort, but could not do it justice without guidance.
On 12/15/05, Anthere anthere9@yahoo.com wrote:
What I would love to see is a study in a few weeks/months to show the evolution of these 50 articles in the days following the Nature article... and the delay that was necessary to track down the various errors.
Lih came close to this with his "Wikipedia as Participatory Journalism: Reliable Sources?" http://jmsc.hku.hk/faculty/alih/publications/utaustin-2004-wikipedia-rc2.pdf
He didn't compare factual improvement, but it's clear that media attention improves specific articles.
As for referring to WP; I think it'd be useful if there were a prominent link on article pages which gave the URL of the specific revision currently viewed. Yes, you can get this from history, but many argue that because WP is always changing (and not because it's inaccurate), you mustn't cite it. Ignoring the fact that the whole web is fairly ephemeral at this point, citing a specific rev addresses the changing-content issue.
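For illustration (the title and revision number here are invented, not taken from any real citation): the address-bar URL http://en.wikipedia.org/wiki/Example_article always shows the latest text, whereas the permanent-link form http://en.wikipedia.org/w/index.php?title=Example_article&oldid=1234567 pins a citation to one specific revision.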
On 12/16/05, Jeremy Dunck jdunck@gmail.com wrote:
As for referring to WP; I think it'd be useful if there were a prominent link on article pages which gave the URL of the specific revision currently viewed. Yes, you can get this from history, but
There's a small "Permanent link" hyperlink in the sidebar when you're looking at the current rev of an article; though most readers may not know it's there. Is that what you mean, only more prominent?
++SJ
Hi folks,
please. pleeeese check the spelling of products from other companies.
It's Britannica or "Encyclopaedia Britannica" or "Encyclopædia Britannica" (for the elite).
Mathias "Speling Nazi" Schindler
I predict the next print encyclopedia will be published in Brittany, and every edition will be called the Millennium-something edition until at least the 22nd century.
SJ
( you may avoid the spelling issue altogether by referring to EB and every encyclopedia simply as "a work of disorder and destruction" -- a distinguished pedigree to which Wikipedia may also lay claim.
"no original work.. has been more depreciated, ridiculed and calumniated. It has been called chaos, nothingness, the Tower of Babel, a work of disorder and destruction, the gospel of Satan..." -- Pierre Lanfrey on Wikip^B^B^B^B the French Encyclopedie )
On 12/16/05, Mathias Schindler neubau@presroi.de wrote:
Hi folks,
please. pleeeese check the spelling of products from other companies.
It's Britannica or "Encyclopaedia Britannica" or "Encyclopædia Britannica" (for the elite).
Mathias "Speling Nazi" Schindler _______________________________________________ Wiki-research-l mailing list Wiki-research-l@Wikimedia.org http://mail.wikipedia.org/mailman/listinfo/wiki-research-l
-- ++SJ
SJ wrote:
I predict the next print encyclopedia will be published in Brittany, and every edition will be called the Millennium-something edition until at least the 22nd century.
Yesterday, it came to my mind that "Britannica" is actually nonsense.
EBI is a company in Chicago. So I translated that Potawatomi word back to Latin.
I strongly would like to encourage everyone to speak only about the "Encyclopaedia Alliumia" from now on.
Mathias
On 12/16/05, SJ 2.718281828@gmail.com wrote:
On 12/16/05, Jeremy Dunck jdunck@gmail.com wrote:
As for referring to WP; I think it'd be useful if there were a prominent link on article pages which gave the URL of the specific revision currently viewed. Yes, you can get this from history, but
There's a small "Permanent link" hyperlink in the sidebar when you're looking at the current rev of an article; though most readers may not know it's there. Is that what you mean, only more prominent?
Yes. The natural tendency is to copy whatever's in the address bar, so you have to overcome that.
The W3C does a nice job of versioning their documents. e.g. http://www.w3.org/TR/REC-CSS1
Of course, it could be less prominent than the W3 treatment, but that's the idea.
{{Edit of a post to wiki wikien-l@Wikipedia.org}}
An example of a publication that is not open to public editing is "The Oxford Dictionary of National Biography".
http://www.oup.com/oxforddnb/info/order/print/
http://en.wikipedia.org/wiki/Oxford_Dictionary_of_National_Biography
Priced at 7,500 UK pounds, it may contain errors.
"...but in the months following publication there was occasional criticism of the dictionary in some British newspapers and periodicals for reported factual inaccuracies."
I regard this statement as rather tame, but I have little evidence.
I have cited Lih in some of my work.... but...
I find it problematic to use number of edits and number of authors (quantitative data) as indicators of content quality. I'm willing to believe that these are probably, in most cases, indicators of improvement, but that's a huge assumption. To make this case, I think some kind of qualitative analysis is necessary to demonstrate that the article QUALITY improves by some set of standards and we'd expect that these results will be correlated with number of authors/number of edits. If anyone wants to collaborate on something like this, I might have 15 or 20 minutes free in spring. ;-)
Andrea
(ELC Lab, GA Tech, http://www.cc.gatech.edu/elc)
On 12/16/05, Jeremy Dunck jdunck@gmail.com wrote:
On 12/15/05, Anthere anthere9@yahoo.com wrote:
What I would love to see is a study in a few weeks/months to show the evolution of these 50 articles in the days following the Nature article... and the delay that was necessary to track down the various errors.
Lih came close to this with his "Wikipedia as Participatory Journalism: Reliable Sources?" http://jmsc.hku.hk/faculty/alih/publications/utaustin-2004-wikipedia-rc2.pdf
He didn't compare factual improvement, but it's clear that media attention improves specific articles.
As for referring to WP; I think it'd be useful if there were a prominent link on article pages which gave the URL of the specific revision currently viewed. Yes, you can get this from history, but many argue that because WP is always changing (and not because it's inaccurate), you mustn't cite it. Ignoring the fact that the whole web is fairly ephemeral at this point, citing a specific rev addresses the changing-content issue.
At 16.12.2005, Andrea Forte wrote:
I find it problematic to use number of edits and number of authors (quantitative data) as indicators of content quality. I'm willing to believe that these are probably, in most cases, indicators of improvement, but that's a huge assumption. To make this case, I think some kind of qualitative analysis is necessary to demonstrate that the article QUALITY improves by some set of standards and we'd expect that these results will be correlated with number of authors/number of edits. If anyone wants to collaborate on something like this, I might have 15 or 20 minutes free in spring. ;-)
I agree, and to me it looks like Lih got it backwards: You would want to show that some quantitative measures like number of edits correlate positively with quality. As the paper stands, if someone comes by and shows there is no correlation, or only a very weak one, between their quantitative indicators and actual article quality, their paper becomes moot.
I would argue that you can assess article quality only by human measure. Then you can go and show correlations with data like number of edits, to later turn around and make predictions about quality of papers based on these factors. But first you have to show the strength of such correlation.
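A minimal first-step sketch, assuming per-article edit counts and human quality ratings are already in hand (the numbers below are invented placeholders):

  # Sketch: strength of association between edit counts and human-assigned
  # quality ratings for a small sample of articles. Data are invented.
  from scipy.stats import spearmanr

  edit_counts     = [12, 87, 5, 240, 33, 61, 410, 9]          # edits per article
  quality_ratings = [2.5, 4.0, 2.0, 4.5, 3.0, 3.5, 4.0, 1.5]  # mean human rating, 1-5

  rho, p_value = spearmanr(edit_counts, quality_ratings)
  print("Spearman rho =", round(rho, 2), "p =", round(p_value, 3))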
I think all attempts at reputation systems etc. will fail if they are purely algorithmic. Rather, I'd simply set up a voting system for people to vote on the quality of an article they just read. That will give you a reasonable measure of quality, against which you can run experiments. (Why such voting works is a different topic.)
Dirk
---- Interested in wiki research? Please go to http://www.wikisym.org
Exactly! I think that's what I just proposed. :-) Or, instead of open ratings, you could use some sample of articles and ask third-party experts to rate them along various dimensions of quality (accuracy, comprehensiveness, accessible writing, etc.)
-andrea
On 12/16/05, Dirk Riehle dirk@riehle.org wrote:
At 16.12.2005, Andrea Forte wrote:
I find it problematic to use number of edits and number of authors (quantitative data) as indicators of content quality. I'm willing to believe that these are probably, in most cases, indicators of improvement, but that's a huge assumption. To make this case, I think some kind of qualitative analysis is necessary to demonstrate that the article QUALITY improves by some set of standards and we'd expect that these results will be correlated with number of authors/number of edits. If anyone wants to collaborate on something like this, I might have 15 or 20 minutes free in spring. ;-)
I agree, and to me it looks like Lih got it backwards: You would want to show that some quantitative measures like number of edits correlate positively with quality. As the paper stands, if someone comes by and shows there is no correlation, or only a very weak one, between their quantitative indicators and actual article quality, their paper becomes moot.
I would argue that you can assess article quality only by human measure. Then you can go and show correlations with data like number of edits, to later turn around and make predictions about quality of papers based on these factors. But first you have to show the strength of such correlation.
I think all attempts at reputation systems etc. will fail if they are purely algorithmic. Rather, I'd simply set up a voting system for people to vote on the quality of an article they just read. That will give you a reasonable measure of quality, against which you can run experiments. (Why such voting works is a different topic.)
Dirk
Interested in wiki research? Please go to http://www.wikisym.org
Andrea Forte wrote:
Exactly! I think that's what I just proposed. :-) Or, instead of open ratings, you could use some sample of articles and ask third-party experts to rate them along various dimensions of quality (accuracy, comprehensiveness, accessible writing, etc.)
In January, it is anticipated that the long-awaited "article validation" feature will go live. This is essentially just a system for gathering public feedback and *doing nothing with it* (at first). The idea is to simply record feedback on all the articles and then take a look at it with minimal a priori preconceptions about what it will tell us to do.
A fantastic research project would be to select N articles at random and have either "experts" or some sort of control group do a similar rating, and look at the correlation. Another aspect of this research would be to compare the ratings of anons, newbies and experienced wikipedians.
If the result is that the ratings of the general public are highly correlated with the ratings of experts, that's a good thing, because it's easier to get ratings from the general public than to do some kind of old-fashioned expert peer review. I would expect, myself, that *in general* the ratings would be similar but that there will be interesting classes of deviations from the norms.
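One minimal sketch of that analysis, assuming each rating record carries the article, the rater group and a 1-5 score (the data and group labels below are invented placeholders):

  # Sketch: per-article mean rating by rater group, plus the correlation
  # between the "public" and "expert" means. All ratings are invented.
  from collections import defaultdict
  from statistics import mean
  from scipy.stats import pearsonr

  ratings = [  # (article, rater group, score)
      ("Alpha", "expert", 4), ("Alpha", "public", 4), ("Alpha", "public", 5),
      ("Beta",  "expert", 2), ("Beta",  "public", 3), ("Beta",  "public", 3),
      ("Gamma", "expert", 5), ("Gamma", "public", 4), ("Gamma", "public", 4),
      ("Delta", "expert", 3), ("Delta", "public", 2), ("Delta", "public", 2),
  ]

  by_group = defaultdict(lambda: defaultdict(list))
  for article, group, score in ratings:
      by_group[group][article].append(score)

  articles = sorted(by_group["expert"])
  expert_means = [mean(by_group["expert"][a]) for a in articles]
  public_means = [mean(by_group["public"][a]) for a in articles]

  r, p = pearsonr(expert_means, public_means)
  print("expert vs. public means: r =", round(r, 2), "n =", len(articles))

The same grouping could be split further into anons, newbies and experienced editors once the validation data carry that information.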
--Jimbo
There's a lot of research in education on peer assessment -- I remember reading studies showing that students' assessments of peers' work are similar to teachers' assessments, *when they are given guidelines.* So I'd expect that the interface design (how well expectations are communicated in the design of the rating system) will influence how well newbies are able to contribute in meaningful ways.
-andrea
On 12/16/05, Jimmy Wales jwales@wikia.com wrote:
Andrea Forte wrote:
Exactly! I think that's what I just proposed. :-) Or, instead of open ratings, you could use some sample of articles and ask third-party experts to rate them along various dimensions of quality (accuracy, comprehensiveness, accessible writing, etc.)
In January, it is anticipated that the long-awaited "article validation" feature will go live. This is essentially just a system for gathering public feedback and *doing nothing with it* (at first). The idea is to simply record feedback on all the articles and then take a look at it with minimal a priori preconceptions about what it will tell us to do.
A fantastic research project would be to select N articles at random and have either "experts" or some sort of control group do a similar rating, and look at the correlation. Another aspect of this research would be to compare the ratings of anons, newbies and experienced wikipedians.
If the result is that the ratings of the general public are highly correlated with the ratings of experts, that's a good thing, because it's easier to get ratings from the general public than to do some kind of old-fashioned expert peer review. I would expect, myself, that *in general* the ratings would be similar but that there will be interesting classes of deviations from the norms.
--Jimbo
I'd actually assume that masses of laymen outperform small groups of experts. (But that is in fact a hypothesis :-)
In the design of such an article validation system, I'd heed the lessons of "collective intelligence", i.e. try to ensure as much independence, diversity, etc. as possible.
That's somewhat tricky. For example, you would not want to show the feedback on an article to a person who you later allow to rank that article too. (To avoid bias, anchoring, etc.)
Dirk
At 16.12.2005, Jimmy Wales wrote:
Andrea Forte wrote:
Exactly! I think that's what I just proposed. :-) Or, instead of open ratings, you could use some sample of articles and ask third-party experts to rate them along various dimensions of quality (accuracy, comprehensiveness, accessible writing, etc.)
In January, it is anticipated that the long-awaited "article validation" feature will go live. This is essentially just a system for gathering public feedback and *doing nothing with it* (at first). The idea is to simply record feedback on all the articles and then take a look at it with minimal a priori preconceptions about what it will tell us to do.
A fantastic research project would be to select N articles at random and have either "experts" or some sort of control group do a similar rating, and look at the correlation. Another aspect of this research would be to compare the ratings of anons, newbies and experienced wikipedians.
If the result is that the ratings of the general public are highly correlated with the ratings of experts, that's a good thing, because it's easier to get ratings from the general public than to do some kind of old-fashioned expert peer review. I would expect, myself, that *in general* the ratings would be similar but that there will be interesting classes of deviations from the norms.
--Jimbo
At 08:55 -0500 16/12/05, Jimmy Wales wrote:
Andrea Forte wrote:
Exactly! I think that's what I just proposed. :-) Or, instead of open ratings, you could use some sample of articles and ask third-party experts to rate them along various dimensions of quality (accuracy, comprehensiveness, accessible writing, etc.)
In January, it is anticipated that the long-awaited "article validation" feature will go live. This is essentially just a system for gathering public feedback and *doing nothing with it* (at first). The idea is to simply record feedback on all the articles and then take a look at it with minimal a priori preconceptions about what it will tell us to do. [...]
So, how does that differ from a member of the "public" editing by correcting an article or musing in the talk page?
Action research anyone?
http://en.wikipedia.org/wiki/Action_research
Gordon Joly wrote:
At 08:55 -0500 16/12/05, Jimmy Wales wrote:
In January, it is anticipated that the long-awaited "article validation" feature will go live. This is essentially just a system for gathering public feedback and *doing nothing with it* (at first). The idea is to simply record feedback on all the articles and then take a look at it with minimal a priori preconceptions about what it will tell us to do. [...]
So, how does that differ from a member of the "public" editing by correcting an article or musing in the talk page?
Action research anyone?
I read that action research is:
1. Data Collection
2. Evaluation
3. Action
4. ...
So what data do you need?
# edits per article (for which articles)
# edits on its discussion page (ditto)
# distinct authors per article
# distinct authors per discussion page
# percentage of anonymous edits
...?
Unfortunately the history export is disabled, but I can get the data out of the XML dump and the database.
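A minimal sketch of how such counts could be pulled from a full-history XML dump (the filename is an assumption; talk pages appear as separate entries whose titles start with "Talk:", so article and discussion-page counts can be split on the title):

  # Sketch: per-page edit count, distinct authors and share of anonymous
  # edits from a pages-meta-history XML dump. Filename is an assumption.
  import xml.etree.ElementTree as ET
  from collections import defaultdict

  def localname(tag):
      return tag.rsplit('}', 1)[-1]  # drop the export-format XML namespace

  stats = defaultdict(lambda: {"edits": 0, "authors": set(), "anon": 0})
  title = None

  for _, elem in ET.iterparse("pages-meta-history.xml"):
      tag = localname(elem.tag)
      if tag == "title":
          title = elem.text
      elif tag == "revision":
          s = stats[title]
          s["edits"] += 1
          for contrib in elem:
              if localname(contrib.tag) != "contributor":
                  continue
              for child in contrib:
                  if localname(child.tag) == "username":
                      s["authors"].add(child.text)
                  elif localname(child.tag) == "ip":
                      s["authors"].add(child.text)
                      s["anon"] += 1
          elem.clear()  # keep memory bounded on a large dump

  for name, s in stats.items():
      pct_anon = 100.0 * s["anon"] / s["edits"] if s["edits"] else 0.0
      print(name, s["edits"], len(s["authors"]), round(pct_anon, 1))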
Greetings, Jakob
A note of caution...
My understanding of action research is that it is BIG -- it involves organizational change on a large scale and a lot of time to do longitudinal work. If you want to do it right, it's not anyone's side project. And I always prefer to do it right. ;-)
I humbly suggest that a well-scoped, narrowly targeted correlational study (or set of studies) would be the next step for pursuing the quality-quantity connection. Even if one were to attempt action research to understand the Wikipedia community and the design of the rating system, I'm wondering if it wouldn't be more like a design experiment/design study, which is an approach that draws on action research but (in my understanding) is focused more on developing theories of human behavior and cognition, and principles to inform the design of artifacts like social and educational software. Coincidentally, understanding the nuanced relationship between these approaches is already on my to-do list, since I just started a three-year design study!
Both of these approaches are complex, and I would hesitate to undertake anything on such a scale without the input of a researcher well-versed in these methods and a detailed proposal for moving forward. If someone has the time for this scale of project, wonderful. If everyone is trying to squeeze it in among other big projects, it's probably not the best approach, because my prediction is that we won't end up with anything of high enough quality to matter.
-andrea
On 12/17/05, Jakob Voss jakob.voss@nichtich.de wrote:
Gordon Joly wrote:
At 08:55 -0500 16/12/05, Jimmy Wales wrote:
In January, it is anticipated that the long-awaited "article validation" feature will go live. This is essentially just a system for gathering public feedback and *doing nothing with it* (at first). The idea is to simply record feedback on all the articles and then take a look at it with minimal a priori preconceptions about what it will tell us to do. [...]
So, how does that differ from a member of the "public" editing by correcting an article or musing in the talk page?
Action research anyone?
I read that action research is:
- Data Collection
- Evaluation
- Action
- ...
So what data do you need?
# edits per article (for which articles)
# edits on its discussion page (ditto)
# distinct authors per article
# distinct authors per discussion page
# percentage of anonymous edits
...?
Unfortunately the history export is disabled, but I can get the data out of the XML dump and the database.
Greetings, Jakob
At 08:25 -0500 17/12/05, Andrea Forte wrote:
A note of caution...
My understanding of action research is that it is BIG -- it involves organizational change on a large scale and a lot of time to do longitudinal work. If you want to do it right, it's not anyone's side project. And I always prefer to do it right. ;-)
[...]
Yes, I am aware of the scope of action research projects. In fact, I would not suggest that anybody start a pure action research project.
What interests me is the tension between self-referential analysis and an objective approach.
Excellent! I hope you will be submitting a paper then to WikiSym 2006 on this. :-)
Dirk
At 16.12.2005, Andrea Forte wrote:
Exactly! I think that's what I just proposed. :-) Or, instead of open ratings, you could use some sample of articles and ask third-party experts to rate them along various dimensions of quality (accuracy, comprehensiveness, accessible writing, etc.)
-andrea
On 12/16/05, Dirk Riehle dirk@riehle.org wrote:
At 16.12.2005, Andrea Forte wrote:
I find it problematic to use number of edits and number of authors (quantitative data) as indicators of content quality. I'm willing to believe that these are probably, in most cases, indicators of improvement, but that's a huge assumption. To make this case, I think some kind of qualitative analysis is necessary to demonstrate that the article QUALITY improves by some set of standards and we'd expect that these results will be correlated with number of authors/number of edits. If anyone wants to collaborate on something like this, I might have 15 or 20 minutes free in spring. ;-)
I agree, and to me it looks like Lih got it backwards: You would want to show that some quantitative measures like number of edits correlate positively with quality. As the paper stands, if someone comes by and shows there is no correlation, or only a very weak one, between their quantitative indicators and actual article quality, their paper becomes moot.
I would argue that you can assess article quality only by human measure. Then you can go and show correlations with data like number of edits, to later turn around and make predictions about quality of papers based on these factors. But first you have to show the strength of such correlation.
I think all attempts at reputation systems etc. will fail if they are purely algorithmic. Rather, I'd simply set up a voting system for people to vote on the quality of an article they just read. That will give you a reasonable measure of quality, against which you can run experiments. (Why such voting works is a different topic.)
Dirk
Interested in wiki research? Please go to http://www.wikisym.org
On 12/16/05, Dirk Riehle dirk@riehle.org wrote:
At 16.12.2005, Andrea Forte wrote:
I find it problematic to use number of edits and number of authors (quantitative data) as indicators of content quality.
...
...You would want to show that some quantitative measures like number of edits correlate positively with quality.
...
I would argue that you can assess article quality only by human measure.
...
I think all attempts at reputation systems etc. will fail if they are purely algorithmic.
Well, then we all agree. When do we start? :)
...You would want to show that some quantitative measures like number of edits correlate positively with quality.
...
I would argue that you can assess article quality only by human measure.
...
I think all attempts at reputation systems etc. will fail if they are purely algorithmic.
Well, then we all agree. When do we start? :)
Well, as ever, pinning down measures of "quality" could be one place to start. We all know it when we see it... but what does quality mean? Andrea's dimensions of accuracy, comprehensiveness, and accessible writing are certainly major -- and I would add whether the article is referenced, has images and so on, with number of edits as one dimension. The problem, of course, is measuring something so subjective.
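For the countable parts of that list, something like the rough sketch below could pull crude proxies straight from the wikitext (the regexes and sample text are only illustrative and will miss templates, citation tags and other markup):

  # Sketch: crude, countable quality proxies from raw wikitext.
  # Illustrative regexes only; real markup is considerably messier.
  import re

  def crude_features(wikitext):
      return {
          "images":         len(re.findall(r"\[\[(?:Image|File):", wikitext)),
          "external_links": len(re.findall(r"\[https?://", wikitext)),
          "internal_links": len(re.findall(r"\[\[(?!Image:|File:)", wikitext)),
          "has_references": bool(re.search(r"==\s*References\s*==", wikitext, re.I)),
          "length_chars":   len(wikitext),
      }

  sample = "Text with a [[wikilink]] and a [http://example.org source].\n== References ==\n* a book"
  print(crude_features(sample))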
From a research point of view, I'd be interested in seeing whether different measures of quality are more important to different groups (anons v. wikipedians v. outside experts, and so on). Does accuracy matter more to some groups than others? Writing style? The Nature article mentioned that some of the reviewers thought the Wikipedia articles were difficult to get through, even if they were accurate, but no mention was made of Britannica's writing style.
And yeah, I'll have about half an hour free in the spring myself :) and would also be interested in working on something like this.
Incidentally, the sidebar permanent link doesn't show up in all skins.
-- phoebe / brassratgirl
At 00:56 -0800 17/12/05, phoebe ayers wrote:
...You would want to show that some quantitative measures like number of edits correlate positively with quality.
...
I would argue that you can assess article quality only by human measure.
...
I think all attempts at reputation systems etc. will fail if they are purely algorithmic.
Well, then we all agree. When do we start? :)
Well, as ever, pinning down measures of "quality" could be one place to start. We all know it when we see it... but what does quality mean? Andrea's dimensions of accuracy, comprehensiveness, and accessible writing are certainly major -- and I would add whether the article is referenced, has images and so on, with number of edits as one dimension. The problem, of course, is measuring something so subjective.
From a research point of view, I'd be interested in seeing whether different measures of quality are more important to different groups (anons v. wikipedians v. outside experts, and so on).
Aha. Now I see. The average Wikipedian is jaded, and tends to focus on whether the "external links" or "see also" section is formatted correctly.
Does accuracy matter more to some groups than others? Writing style?
Uniform style?
The Nature article mentioned that some of the reviewers thought the Wikipedia articles were difficult to get through, even if they were accurate, but no mention was made of Britannica's writing style.
My guess is that the "difficulty" of an article is in direct proportion to the number of contributors.
And yeah, I'll have about half an hour free in the spring myself :) and would also be interested in working on something like this.
Incidentally, the sidebar permanent link doesn't show up in all skins.
I reported that. It can be switched on and off (in some skins).
Brion said: "Checked your quickbar setting?"
In CologneBlue, for example:
Quickbar: None / Fixed left / Fixed right / Floating left / Floating right
-- phoebe / brassratgirl