I wish someone would please replicate my measurement of the variance in the distribution of fundraising results from the editor-submitted banners of 2008-9, and explain to the fundraising team that the distribution implies they can do a whole lot better.... When are they going to test the remainder of the editors' submissions?
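For anyone willing to replicate it, here is a minimal sketch of the calculation I mean, in Python; the per-banner rates below are placeholders rather than the 2008-9 figures, and the point is only which summary statistics to look at:

    # Summary statistics for per-banner fundraising performance.
    # The rates are hypothetical placeholders, not the 2008-9 data.
    from statistics import mean, pvariance

    rates = [1.2, 0.9, 1.1, 4.8, 1.0, 1.3, 0.8, 6.5, 1.1, 1.2]  # donations per 1000 impressions

    m = mean(rates)
    var = pvariance(rates, m)
    sd = var ** 0.5

    # Excess kurtosis (fourth standardized moment minus 3); a large positive
    # value means a heavy right tail, i.e. a few banners far outperform the rest.
    kurt = sum(((x - m) / sd) ** 4 for x in rates) / len(rates) - 3

    print(f"mean={m:.2f}  variance={var:.2f}  excess kurtosis={kurt:.2f}")

A heavy right tail is the whole point: it means the expected payoff from testing the remaining submissions is high, because the best of the untested banners is likely to sit well above the mean.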
Given that you've been asking for that analysis for four years, and it's never been done, and you've been repeatedly told that it's not going to happen, could you....take those hints? And by hints, I mean explicit statements....
Which statements? I've been told on at least two occasions that the remainder of the volunteer submissions *will* be tested, with multivariate analysis as I've suggested (instead of the much lengthier rounds of A/B testing that still seem to be the norm for some reason), and I have never once been told that it's not going to happen, as far as I know. Who ruled it out, and why? Is there any evidence that my measurement of the distribution's kurtosis is flawed?
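For concreteness, here is a minimal sketch of the kind of multivariate (factorial) analysis I have in mind, assuming per-banner results tagged with the design elements each variant used; the column names, values, and pandas/statsmodels setup are purely illustrative, not anything the fundraising team actually runs:

    # One factorial model instead of many sequential A/B rounds.
    # All data and column names here are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.DataFrame({
        "headline": ["A", "A", "B", "B", "A", "B", "A", "B"],
        "image":    ["photo", "none", "photo", "none", "none", "photo", "photo", "none"],
        "rate":     [1.4, 1.0, 2.1, 1.2, 0.9, 2.0, 1.5, 1.1],  # donations per 1000 impressions
    })

    # A single fit estimates the effect of every design element, plus their
    # interaction, from one round of data.
    model = smf.ols("rate ~ C(headline) * C(image)", data=df).fit()
    print(model.params)

The advantage over pairwise A/B testing is that every impression contributes to estimating every element's effect, so the remaining submissions could be screened in one round rather than many.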
I'll raise the issue of whether, and how much, the Foundation should pay to crowdsource revision scoring, to help the transition from building new content to updating existing articles, once the appropriate infrastructure is in place to measure the extent of volunteer effort devoted to it. If there is any reason to refrain from discussing the fact that revision scoring can serve as computer-aided instruction, and the ways it can be implemented to maximize its usefulness as such, please bring it to my attention.