On 16.12.2005, Andrea Forte wrote:
I find it problematic to use the number of edits and number of authors (quantitative data) as indicators of content quality. I'm willing to believe that these are probably, in most cases, indicators of improvement, but that's a huge assumption. To make this case, I think some kind of qualitative analysis is necessary: demonstrate that article QUALITY improves by some set of standards, and then show that these results are correlated with the number of authors/number of edits. If anyone wants to collaborate on something like this, I might have 15 or 20 minutes free in spring. ;-)
I agree, and to me it looks like Lih got it backwards: you would want to show that quantitative measures like the number of edits correlate positively with quality. As the paper stands, if someone comes along and shows that there is no correlation, or only a very weak one, between their quantitative indicators and the actual quality of articles, the paper becomes moot.
I would argue that article quality can only be assessed by human judgment. With such assessments in hand, you can show correlations with data like the number of edits, and later turn around and predict article quality from those factors. But first you have to establish the strength of the correlation.
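
To make that correlation step concrete, here is a minimal sketch in Python. The article names, ratings, and edit counts are hypothetical; Spearman's rho is just one reasonable choice, since it measures monotone association without assuming edit counts scale linearly with quality:

# Hypothetical data: human quality ratings (1-5 scale) paired with
# edit counts for the same articles. In practice the ratings would
# come from raters and the counts from the wiki's revision history.
from scipy.stats import spearmanr

# article -> (human quality rating, number of edits)
articles = {
    "Article A": (4.2, 310),
    "Article B": (2.1, 45),
    "Article C": (3.8, 122),
    "Article D": (1.5, 12),
    "Article E": (4.9, 540),
}

ratings = [quality for quality, _ in articles.values()]
edits = [n_edits for _, n_edits in articles.values()]

# Rank correlation between human judgment and the quantitative indicator.
rho, p_value = spearmanr(ratings, edits)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")

Only once rho turns out to be strong would it make sense to use edit counts as a stand-in for quality.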
I think all attempts at reputation systems and the like will fail if they are purely algorithmic. Instead, I'd simply set up a voting system that lets people vote on the quality of an article they have just read. That would give you a reasonable measure of quality against which you can run experiments. (Why such voting works is a different topic.)
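
As a minimal sketch of what such a voting system could look like (the 1-5 scale, the names, and the one-vote-per-reader rule are my assumptions, not a fixed design):

from collections import defaultdict
from statistics import mean

# article -> {reader: score}; one current vote per (reader, article) pair
votes = defaultdict(dict)

def cast_vote(article: str, reader: str, score: int) -> None:
    """Record a reader's quality vote; a repeat vote overwrites the old one."""
    if not 1 <= score <= 5:
        raise ValueError("score must be between 1 and 5")
    votes[article][reader] = score

def article_quality(article: str) -> float:
    """Mean of all current votes for an article."""
    return mean(votes[article].values())

cast_vote("Article A", "alice", 5)
cast_vote("Article A", "bob", 4)
cast_vote("Article A", "alice", 4)  # alice revises her earlier vote
print(article_quality("Article A"))  # 4.0

The per-article averages produced this way would be the human quality measure to correlate against edit counts and other quantitative indicators.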
Dirk
---- Interested in wiki research? Please go to http://www.wikisym.org