Bennett Haselton wrote:
At 11:05 AM 3/23/2007 -0500, Sheldon Rampton wrote:
Ray Saintonge wrote:
My first impression from your response is that you would end up with something even more complicated than what I would imagine. :- ( While I see the value of having one's preferences set to treat a certain version as preferred, the drive-by viewer just looking for information is not likely to know about this. He can, however, be guided by whether an article has (in big numbers) a reliability rating of 2.6 or 7.9.
I think user rating of article versions ought to be as simple as possible: one-click approval. It ought to be as easy as clicking on the "watch this article" tab to add an article to your watchlist. This means that all users do is decide yes or no for approval. Anything further adds complexity to the system but offers little gain in utility.
Also, one reason I was advocating a means for verified experts to sign off on the accuracy of an article, was to enable people to cite or reference that article, saying that even if Professor X didn't write it, he vouches for its authenticity.
Saying that Professor X vouched for the article only raises the question, "Who the hell is Professor X?" For someone who is not familiar with the subject area, Professor Y would be just as valuable, even if he holds an opposing opinion.
If you go with a sliding scale from 1 to 10 instead of a yes/no option, then there's some ambiguity in whether you can cite an expert as vouching for the correctness of the article.
I'm not completely opposed to some professor's anecdotal evidence. TV ads for medical products regularly show some doctor proclaiming the virtues of some product.
How high does their rating have to be before you can put them down as "vouching" for the article? 7? 10? I have no idea.
How about two standard deviations above the statistical mean?
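The two-standard-deviations criterion above could be sketched roughly as follows. This is just an illustration, not anything the list has agreed on; the ratings here are hypothetical, and `vouch_threshold` is an invented name for the cutoff computation.

```python
from statistics import mean, stdev

def vouch_threshold(ratings):
    """Cutoff two standard deviations above the mean rating."""
    return mean(ratings) + 2 * stdev(ratings)

# Hypothetical reliability ratings for a pool of articles (1-10 scale).
all_ratings = [2.6, 7.9, 5.0, 6.1, 4.4, 8.2, 3.7, 5.5]

cutoff = vouch_threshold(all_ratings)
# Only articles rated at or above the cutoff would count as "vouched for".
vouched = [r for r in all_ratings if r >= cutoff]
```

One consequence of this rule: because the cutoff floats with the distribution, only the tail of articles (roughly the top couple of percent under a normal distribution) would ever qualify, no matter how good the overall pool is.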
Ec