I wish someone would please replicate my measurement of the variance in the distribution of fundraising results from the editor-submitted banners of 2008-9, and explain to the fundraising team that that distribution implies they can do a whole lot better.... When are they going to test the remainder of the editors' submissions?
Given that you've been asking for that analysis for four years, and it's never been done, and you've been repeatedly told that it's not going to happen, could you....take those hints? And by hints, I mean explicit statements....
Which statements? I've been told on at least two occasions that the remainder of the volunteer submissions *will* be tested, with multivariate analysis as I've suggested (instead of much lengthier rounds of A/B testing, which still seem to be the norm for some reason), and I have never once been told that it's not going to happen, as far as I know. Who ruled it out and why? Is there any evidence that my measurement of the distribution's kurtosis is flawed?
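[To make the statistical claim concrete, here is a minimal sketch, in Python with made-up per-banner figures rather than the actual 2008-9 data, of the kind of measurement being discussed. A large positive excess kurtosis means a heavy-tailed distribution, i.e. a few banners far outperform the rest, which is the argument that the untested submissions may contain much stronger performers.]

```python
# Minimal sketch: variance and excess kurtosis of per-banner results.
# The figures below are illustrative assumptions, not real fundraising data.
from statistics import mean, pvariance

def excess_kurtosis(xs):
    """Population excess kurtosis: E[(x - mu)^4] / sigma^4 - 3 (0 for a normal)."""
    mu = mean(xs)
    var = pvariance(xs, mu)
    fourth = mean((x - mu) ** 4 for x in xs)
    return fourth / var ** 2 - 3

# Hypothetical dollars-per-impression results for a batch of tested banners;
# note the single outlier banner that dominates the rest.
results = [0.8, 0.9, 1.0, 1.1, 1.0, 0.9, 4.5]

print(f"variance:        {pvariance(results):.3f}")
print(f"excess kurtosis: {excess_kurtosis(results):.3f}")
```

[If the real distribution looks like this, the expected value of testing the remaining submissions is driven by the tail, not the mean.]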
I'll raise the question of whether, and how much, the Foundation should pay to crowdsource revision scoring to help the transition from building new content to updating existing articles, once the appropriate infrastructure to measure the extent of volunteer effort devoted to it is in place. If there is any reason to refrain from discussing the fact that revision scoring can be equivalent to computer-aided instruction, and the ways it can be implemented to maximize its usefulness as such, then please bring it to my attention.
Hello everyone,
I am jumping into this conversation with the aim of putting any estimation and/or forecasting in relative terms. I personally think it is more fruitful and productive to ask what proportion or share of the Internet economy the Wikimedia Foundation, as a non-profit, represents.
With this in mind, it might be useful to consider other forecasting data points about the size of the Internet economy:
http://www.digital.je/media/Secure-Strategic-Documents/OECD%20-%20Measuring%...
http://www.mckinsey.com/features/sizing_the_internet_economy
http://allthingsd.com/20120127/report-internet-economy-set-to-nearly-double-...
For research purposes, measuring the equivalent economic activity of the Wikimedia Foundation as an x% share of the Internet economy entails a longer time frame at the macro level. The time frame of this question is much longer than that of A/B testing fundraising interfaces and mechanisms over a few days. They are two different questions, and both have merit for study. However, five-year or ten-year forecasting research seems more relevant to the bigger question of the share and role of the Wikimedia Foundation in the Internet economy as a whole.
Put in relative terms, I hope that the Wikimedia Foundation budget grows in proportion with the number of Internet users, and that the average donation remains the same (inflation-adjusted). I hold this hope because I assume the Wikimedia Foundation provides public goods, or a public utility, to serve the public. The Internet economy can boom and bust, which can have real impacts on fundraising performance, and I do not want the quality and price of a public utility to fluctuate. On the other hand, I hope that the impact of the Wikimedia Foundation remains a substantial proportion of the whole Internet economy: a limited budget, but multiplier effects of public knowledge.

From the above, admittedly somewhat normative, assumptions, I hope to see two indicators produced and/or constructed: (1) the Wikimedia Foundation's annual income divided by the number of global Internet users, and (2) the equivalent Internet economic value Wikimedia creates as a proportion of the whole economy each year. It is reasonable to expect that the number of global Internet users will eventually plateau and that the size of the global Internet economy will grow much faster.
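[For illustration, a minimal sketch of how the two proposed indicators might be computed. All figures are placeholder assumptions, not real data.]

```python
# Sketch of the two indicators proposed above, with hypothetical inputs.
wmf_annual_income_usd = 50_000_000         # assumed WMF annual income
global_internet_users = 3_000_000_000      # assumed global Internet user count
internet_economy_usd = 4_000_000_000_000   # assumed size of the Internet economy
wmf_equivalent_value_usd = 10_000_000_000  # assumed economic value WMF creates

# Indicator 1: income per Internet user. Under the hope stated above, this
# should stay roughly flat (inflation-adjusted) as the user base grows.
income_per_user = wmf_annual_income_usd / global_internet_users

# Indicator 2: WMF's equivalent economic value as a share of the whole
# Internet economy. Under the same hope, this should remain substantial.
value_share = wmf_equivalent_value_usd / internet_economy_usd

print(f"income per Internet user:  ${income_per_user:.3f}")
print(f"share of Internet economy: {value_share:.2%}")
```

[If users plateau while the Internet economy keeps growing, indicator 1 stabilizes on its own, and indicator 2 becomes the one to watch.]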
Best, han-teng liao