On 9/6/05, Poor, Edmund W Edmund.W.Poor@abc.com wrote:
From: Daniel P. B. Smith [mailto:dpbsmith@verizon.net]
Back in June, I complained that a little particle of misinformation from Wikipedia had gotten lodged in my brain, and might potentially have affected my car-purchasing decision. Specifically, I was referring to an article that characterized the Toyota Echo as a "flop" in the U.S., whose sales had tanked in 2004 and which was due to be discontinued--despite continuing success in many other countries, including Canada.
When Wikipedia fails, this is the typical way it fails: someone makes insufficient effort to balance the article.
In this case, there was no balance to add. Several sources were claiming the Echo's demise in the US market, and none were claiming it would continue.
Now that summer is over, I will renew my Quarterly Qall for Quality
(oops, typo! ;-) at Wikipedia.
We need a kind of rating scheme or quality assurance system or certification mechanism. It's not a problem of designing the software for this. Tim and Erik and Magnus and Brion and all the rest are superbly capable. But *they* are not going to take the lead in this. *We* must decide that we want some means of assuring readers that they are getting reliable information.
Once again, I suggest that we let users "review" a given "version" of an article and (using a "new" software feature) "mark" that version as:
- "patrolled" (as in Recent Changes "simple vandalism" patrol)
- "accurate" (i.e., "I am personally convinced that everything
this article says is true and correct.")
- "balanced" (i.e., nothing has been left out or downplayed)
For problems:
- "graffiti" (or "vandalism" = someone has messed up this version,
but I don't have the time, inclination or ability to undo the damage)
- "inaccurate" (contains mistakes, which we *hope* they'll mention on
the talk page)
- "bias" (tells one side of a story, especially in a raging controversy)
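For concreteness, the marks above boil down to a record per (version, reviewer, tag). A minimal sketch of what the bookkeeping might look like -- all names here are illustrative, not an actual MediaWiki schema:

```python
from dataclasses import dataclass

APPROVAL_TAGS = {"patrolled", "accurate", "balanced"}
PROBLEM_TAGS = {"graffiti", "inaccurate", "bias"}

@dataclass
class Review:
    version_id: int   # a specific revision, not the article as a whole
    reviewer: str
    tag: str

class ReviewStore:
    """Collects per-version review marks; illustrative only."""
    def __init__(self):
        self.reviews = []

    def mark(self, version_id, reviewer, tag):
        if tag not in APPROVAL_TAGS | PROBLEM_TAGS:
            raise ValueError("unknown tag: " + tag)
        self.reviews.append(Review(version_id, reviewer, tag))

    def tags_for(self, version_id):
        return {r.tag for r in self.reviews if r.version_id == version_id}

    def reviewers_for(self, version_id, tag):
        # lets a patroller see WHO certified a given version
        return {r.reviewer for r in self.reviews
                if r.version_id == version_id and r.tag == tag}
```

The key design point is that tags attach to a version, never to the article itself, so a later edit never inherits an earlier certification.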
In this case, tagging an article would cause more problems than it would solve. When information changes, we don't want the "certified valid" version to be based on out-of-date information.
Also, certification is simply inserting a POV into the article's metadata. It is a reader's POV, but nonetheless, that's what it is.
Now where we take it from here is really up for grabs. Some people won't
even care about these tags. We can set the default to ignore quality tags unless you "opt in".
RC patrollers might like to know WHO has reviewed a version. If Mav says he's checked the diff for "simple vandalism", I wouldn't give it a second thought. He's the champ. He used to check EVERY change (!) when traffic was slow enough. Now that there are often 100s of edits per minute, this work needs to be split up. I would be happy to put in an hour, from time to time, if only I knew "who else" had certified a certain article version as "patrolled". I'd ignore known troublemakers, for example. Double check newbies, and not even bother reviewing the work of people I come to trust.
Knowing that a particular VERSION of an article had been certified as "balanced" would help Administrators if an edit war flares up. If they need to protect the article, they could go back to the last version which had been marked "balanced" by someone they trust. To be fair, they would almost certainly have to pick a version certified by someone other than themselves - unless they weren't involved in the dispute.
I don't see the value in article certification, as it's just another form of voting. Given how fast articles change on Wikipedia, I could spend a whole day every week "recertifying" articles on my watch list. On busy articles, most of them would get edited again before any significant number of people cast their votes on them, so we'd have a dozen or two versions in the article history, each with one vote. I don't see value in that.
Further, would such article voting appear in recent changes and watchlists? I watch almost everything I've voted on, so I can explain my vote or discuss the votes of others. I don't think I'd expect this to be any different.
I would rather have more personalized watch lists. When I watch an article, I want to be able to review the latest edits, then "check off" that article in my watch list to say "I'm OK with those edits, take this article off my watch list until the article changes again" or not "check off" the article so that I may review it more carefully later. If items were removed from my watchlist as I checked them, I could probably watch 10 times as many items or more.
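A check-off watchlist like that amounts to remembering, per watched page, the last revision the watcher signed off on, and hiding the page until a newer revision appears. A rough sketch under those assumptions (names are made up for illustration):

```python
class Watchlist:
    """Hypothetical 'check off' watchlist; not real MediaWiki code."""
    def __init__(self):
        self.checked_off = {}  # page title -> last revision I approved

    def check_off(self, page, revision):
        """'I'm OK with those edits': hide the page until it changes."""
        self.checked_off[page] = revision

    def pending(self, current_revisions):
        """Pages with edits newer than my last check-off.
        `current_revisions` maps page title -> latest revision id."""
        return [page for page, rev in current_revisions.items()
                if self.checked_off.get(page, -1) < rev]
```

The point of the design is that the list shrinks as you review, so the same hour of attention covers many more pages.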
For the looming print version (or CD / DVD version), we could
automatically choose article versions which have a suitable combination of Approval Tags and Problem tags. (My own preference would be "patrolled" and no "graffiti".) A library might insist on "accurate" and "balanced" with no "inaccurate" or "bias".
We can let people use the tags as a filtering system. When browsing or looking up information, you might want to see:
- the most recent version which HAS the tags you like;
Or
- the most recent version which DOES NOT HAVE any of the tags you hate
Or
- the most recent version which has all the tags you like and none of
the tags you hate.
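All three filtering modes above reduce to one rule: walk the history newest-first and take the first version whose tags include everything you want and nothing you hate. A small sketch (hypothetical shapes; real version data would come from the wiki database):

```python
def pick_version(history, want=frozenset(), avoid=frozenset()):
    """Return the most recent version whose tag set contains every
    tag in `want` and none of the tags in `avoid`.
    `history` is a list of (version_id, tags) pairs, oldest first."""
    for version_id, tags in reversed(history):
        if want <= tags and not (avoid & tags):
            return version_id
    return None  # a reader UI would fall back to the latest version

# Example history: version 3 was vandalized, version 2 was reviewed.
history = [
    (1, {"patrolled"}),
    (2, {"patrolled", "accurate"}),
    (3, {"graffiti"}),
]
```

With `want={"patrolled"}` or `avoid={"graffiti"}` this picks version 2, skipping the vandalized latest version -- which is exactly the reading experience described above.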
But when you went to edit, you'd see the latest version, same as before. Perhaps you'd get a notice saying "This is the latest version. You were looking at yesterday's / last week's version."
What do you all think?
The short version of my opinion: I don't think this is capable of keeping up with Wikipedia's changes, not because of the software, but because of limited editor time.
I think if the watchlists are made more efficient, we'll get a far better return on the developers' time investment.