2009/2/2 Carcharoth <carcharothwp(a)googlemail.com>:
On Mon, Feb 2, 2009 at 6:30 PM, Thomas Dalton
<thomas.dalton(a)gmail.com> wrote:
2009/2/2 Sam Korn <smoddy(a)gmail.com>:
On Mon, Feb 2, 2009 at 3:03 PM, Thomas Dalton
<thomas.dalton(a)gmail.com> wrote:
I agree, that's definitely the most important
statistic. A more useful
statistic would be the age of the oldest unreviewed revision.
17.8 days
http://toolserver.org/~aka/cgi-bin/reviewcnt.cgi?lang=english&action=ou…
Ask, and you shall receive! Thank you!
So that's 10570 articles that have been waiting over a day (out of
12667 articles with out-of-date reviews). That's pretty bad... I would
have expected a long tail type distribution. Any ideas why there are
so many very out-of-date articles compared to slightly out-of-date
ones?
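Just to put a number on the figures above (taken from the toolserver stats quoted earlier; this is only back-of-the-envelope arithmetic, not output from the tool itself):

```python
# Rough arithmetic check on the quoted backlog figures.
stale_over_a_day = 10570       # articles waiting more than a day
total_out_of_date = 12667      # articles with out-of-date reviews
share = stale_over_a_day / total_out_of_date
print(f"{share:.1%}")  # prints "83.4%"
```

So roughly five out of every six out-of-date articles have been waiting more than a day, which is why the distribution looks so far from a long tail.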
Might it be because they were looked at several times and each time
people went "um, not sure about this" and left it for someone else to
do? Flagged revisions is serious because the impression is that you
are verifying people's work to some standard. Now if someone quotes an
obscure source, but you don't have or haven't heard of that source,
what do you do? Trust the editor? Let it go through anyway? Let
someone else deal with it and see a backlog build up?
I would have expected that to lead to a long tail - the longer a
revision has been around, the more chance someone will have been sure
enough to do something about it.
What I'd like to see is a feature where you can
click "not sure" and
bump the review up several levels of expertise, so the difficult stuff
gets naturally filtered to those with the expertise. Say, subject
matter or foreign language, or obscure book. Depending on how flexible
such a system is, it might make flagging revisions more efficient, not
less.
Training people to do rudimentary and moderate and advanced reviews
would be next.
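Sketching that escalation idea in code (all names here are invented for illustration; nothing like this exists in the current FlaggedRevs implementation): a pending review that a reviewer passes on gets bumped one expertise tier, so the hard cases drift toward specialists instead of sitting untouched.

```python
# Hypothetical "not sure" escalation queue. Tiers run from basic
# typo-spotting up to specialist review; each "not sure" click bumps
# a revision one tier, capped at the top tier.

TIERS = ["typo-spotting", "general", "subject-matter", "specialist"]

class PendingReview:
    def __init__(self, revision_id):
        self.revision_id = revision_id
        self.tier = 0  # every revision starts at the most basic level

    def mark_not_sure(self):
        """A reviewer clicked "not sure": escalate one tier if possible."""
        if self.tier < len(TIERS) - 1:
            self.tier += 1
        return TIERS[self.tier]

review = PendingReview(revision_id=12345)
print(review.mark_not_sure())  # prints "general"
print(review.mark_not_sure())  # prints "subject-matter"
```

The awkward cases mentioned below (mixed edits, big chunks of new text) would presumably need the revision split or escalated straight to the top, which is where a simple tier counter like this stops being enough.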
Extremely difficult to scale and harness the right levels of expertise
(from typo-spotting upwards), but very rewarding if done right. One
problem is edits that combine different sorts of things, and the
"massive chunks of text added in one go".
I presume the current system is a rudimentary one only designed to
catch obvious vandalism? If that is the case, people need to be more
alert than before (not less) to subtle vandalism and good-faith
misrepresentation of sources by poor or skewed writing.
As I understand it, the German implementation has quite strict
requirements for flagging. The suggestions on the English Wikipedia
seem to be more about just stopping obvious vandalism. Different
levels of flags would seem to be the solution (and have been discussed
before, but I guess we need to wait until we have the basics working)
- "sighted", "fact checked", "good", "featured",
say. Anyone who is
autoconfirmed can sight a revision with just a few seconds of review
in most cases, fact checking requires you to have proven yourself
competent and takes longer since you have to actually verify all the
sources (and may need to have some experience of the subject in
question in order to know if the sources have been correctly
interpreted). Good and featured would follow existing procedure (and
just make it easier to make it clear which revision was reviewed).
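The proposed scheme boils down to a small table of flag levels and who may grant them. A minimal sketch (invented names; this is not actual FlaggedRevs configuration):

```python
# Illustrative mapping of the four proposed flag levels to the reviewer
# requirement described above. Purely a sketch of the proposal, not code
# from the FlaggedRevs extension.

FLAG_LEVELS = [
    ("sighted", "autoconfirmed user; a few seconds to rule out obvious vandalism"),
    ("fact checked", "proven reviewer; every cited source verified against the text"),
    ("good", "community review per the existing Good Article process"),
    ("featured", "community review per the existing Featured Article process"),
]

def required_check(flag):
    """Return the reviewer requirement for a given flag level."""
    for name, requirement in FLAG_LEVELS:
        if name == flag:
            return requirement
    raise KeyError(flag)

print(required_check("sighted"))
```

The ordering matters: each level implies the ones below it, so the UI only ever needs to display the highest flag a revision has earned.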