Bryan Derksen bryan.derksen@shaw.ca scribbled:
Gwern Branwen wrote:
The most exciting ones I can think of:
#We can scrap the 'newest 1%' part of semi-protection. Instead of waiting 4 days, write 4 articles!
On the one hand, as an inclusionist and apostate mergist, I would welcome anything codified that boosted the philosophy that "more articles is good". On the other hand, though, forcing people to run the Newpages Patrol gauntlet in order to edit other existing articles may not be optimal. It can be frustrating seeing one's work randomly blip out of existence minutes after saving it; I wouldn't want that experience explicitly forced on new users. And some editors just don't _want_ to create new articles; they prefer editing existing ones. That's perfectly useful too.
Well, I was being facetious there. For symmetry, I had to keep the '4' from semi-protection, but I didn't want to say edits because we all know how pointless and worthless any given edit can be; '4 articles' had a properly substantial sound to it. Plus, the way I said 'articles', the two clauses had the same syllable count, which pleases me.
#We can scrap editcountitis - this reputation metric may still not be ideal, but I suspect the metric will reflect the value of one's contributions *a heckuva* lot better than # of edits.
I _wish_ editcountitis counted for more, I'd be a sort of demigod. :)
I'm not sure why a reputation metric of any sort is necessary, though. The contributions to articles themselves should stand on their own; one of the main defenses I gave to the press during the Essjay controversy was that we don't usually consider the reputation or qualifications of the editors relevant when evaluating their work. And if an editor has a history of significant disruptiveness, inaccuracy, etc., then it'll be raised on their user talk page and perhaps ultimately proceed to RfC and other such fora. We don't need robots and math formulae to do that.
No, we don't *need* to use such things, in the same sense that one could go around disambiguating links and taggin' FU images without the benefit of automation, and quite a few people have spent quite a bit of time doing just that. But darn it, if you want to get a couple thousand links disambiguated or images handled so as to put a good dent in the backlog, why shouldn't you? There's more than enough un-automate-able work on WP to keep a legion of editors busy for decades; no point in passing up any tool whose benefit outweighs its cost.
#Bots could probably benefit from this. An example: Pywikipedia's followlive.py script follows Newpages looking for dubious articles to display for the user to take action on. You could filter out all pages whose average author reputation is > n, or something.
Could work, but there's no need to display the orange highlighting for this one.
Yeah. The entire thing could be done locally if people are willing to code it up, but having it done on the servers (even if nothing is actually displayed but just made available in a Special: page, perhaps) is a lot more efficient in terms of coding, time, and bandwidth. (Reading through the paper: for a given section of text in an article, you need all past revisions of that page, and all past revisions of all articles edited by all editors of that page.)
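To make the content-persistence idea concrete, here's a toy sketch of the sort of computation involved; the function names, the difflib-based survival measure, and the threshold are all mine, not the paper's actual algorithm, which is considerably more sophisticated:

```python
# Toy sketch of content-driven reputation: an editor's reputation
# grows when text they wrote survives the revisions that follow.
# This is an illustration of the general idea, not the paper's method.
from collections import defaultdict
from difflib import SequenceMatcher

def text_survival(old: str, new: str) -> float:
    """Fraction of `old` that survives into `new`, via longest matching blocks."""
    sm = SequenceMatcher(None, old, new)
    kept = sum(block.size for block in sm.get_matching_blocks())
    return kept / max(len(old), 1)

def compute_reputation(history):
    """history: list of (editor, page_text) tuples, oldest first.
    Credits each editor for how well their revision survives later ones."""
    rep = defaultdict(float)
    for i, (editor, text) in enumerate(history):
        for _, later_text in history[i + 1:]:
            rep[editor] += text_survival(text, later_text)
    return dict(rep)

def looks_dubious(history, threshold=1.0):
    """How a patrol bot might use it: skip pages whose average author
    reputation clears a (made-up) threshold, flag the rest."""
    rep = compute_reputation(history)
    avg = sum(rep.values()) / max(len(rep), 1)
    return avg < threshold
```

Note that even this toy version needs every past revision of the page in memory, which is why doing it once on the servers beats every client recomputing it locally.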
#People have long suggested that edits by anons and new users be buffered for a while or approved; this might be a way of doing it.
Also might work, but version flagging is so close to being real now that I'd like to see how that goes first. Baby steps. :)
Yes, let's put it on the docket. I'm sure stable versions and SUL will be done very soon, and then it'd be easy as pie to add this reputation stuff in. Next year in Jerusalem!
-- gwern monarchist SGC 127 NAVELEXSYSSECENGCEN Z7 CACI POCSAG Ti cybercash Infrastructure