>> I wish someone would please replicate my measurement of
>> the variance in the distribution of fundraising results using
>> the editor-submitted banners from 2008-9, and explain to the
>> fundraising team that distribution implies they can do a whole
>> lot better.... When are they going to test the remainder of the
>> editors' submissions?
>
> Given that you've been asking for that analysis for four years,
> and it's never been done, and you've been repeatedly told that
> it's not going to happen, could you....take those hints? And by
> hints, I mean explicit statements....
Which statements? I've been told on at least two occasions that the
remainder of the volunteer submissions *will* be tested, with
multivariate analysis as I've suggested (instead of the much
lengthier rounds of A/B testing, which still seem to be the norm for
some reason), and as far as I know I have never once been told that
it's not going to happen. Who ruled it out, and why? Is there any
evidence that my measurement of the distribution's kurtosis is flawed?
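For anyone attempting the replication, here is a minimal sketch of
the variance and excess-kurtosis measurement, using made-up
per-banner performance numbers as stand-ins (the actual 2008-9
editor-submitted banner data is not reproduced here):

```python
from statistics import mean, pvariance

# Hypothetical donations-per-thousand-impressions for tested banners;
# these are stand-ins, not the real 2008-9 results.
rates = [0.12, 0.31, 0.09, 0.55, 0.18, 0.91, 0.14, 0.22]

mu = mean(rates)
var = pvariance(rates, mu)
# Excess kurtosis: fourth standardized moment minus 3 (zero for a normal).
kurt = sum((x - mu) ** 4 for x in rates) / (len(rates) * var ** 2) - 3

print(f"variance = {var:.4f}, excess kurtosis = {kurt:.2f}")
```

Positive excess kurtosis indicates a heavier-than-normal tail, which
is what would suggest that some untested banners could far outperform
the tested set.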
Once the infrastructure to measure how much volunteer effort goes
into updating existing articles is in place, I'll raise the question
of whether, and how much, the Foundation should pay to crowdsource
revision scoring to help with the transition from building new
content to maintaining it. If there is any reason to refrain from
discussing the fact that revision scoring can serve as computer-aided
instruction, or the ways it could be implemented to maximize its
usefulness as such, please bring it to my attention.
>... here are a couple illustrations of some reasons I
> believe a ten year extrapolation of Foundation fundraising
> is completely reasonable: http://imgur.com/a/mV72T
>
> Words tend to be more useful than contextless images.
I meant that the sharply declining cost of solar (and wind) energy,
and the extent to which renewable energy is becoming fungible
-- see e.g. https://en.wikipedia.org/w/index.php?title=Wikipedia:Reference_desk/Science….
-- are likely to have a profoundly positive economic effect in both
the developed and developing world, especially over the next ten
years; and that rapid growth in income per capita in the world's most
populous regions will combine with that effect to support a simple
extrapolation of the Foundation's fundraising success. The very rapid
per capita income growth in Asia and Africa should last for at least
15 to 20 years.
And I don't see any downward pressure on the Foundation's ability to
raise money, especially if the transition to maintaining existing
content succeeds. That is why I think this is so important:
>>> https://meta.wikimedia.org/wiki/Grants:IEG/Revision_scoring_as_a_service
>
>> That is equivalent to a general computer-aided instruction
>> system, with the side effects of both improving the encyclopedia
>> and making counter-vandalism bots more accurate. As an
>> anonymous crowdsourced review system based on consensus
>> voting instead of editorial judgement, it leaves the Foundation
>> immunized with their safe harbor provisions regarding content
>> control intact.
>
> It's also not worth 3 billion dollars (no offence, Aaron!) as
> evidenced by the fact that it can be established with <20k.
I agree it can be set up with very little money, and I am thrilled
that work is proceeding on it.
However, once it is established, there is no guarantee that
volunteers alone can sustain it at useful levels; I doubt they will
keep up with even half of major edits. Paying people to score
revisions (including, for example, trial null revisions against
existing content that editors could flag as out of date) would be
like paying them to enrich their own education while improving the
encyclopedia and the anti-vandalism bots at the same time. That is a
fantastic opportunity for research and development.
>... This is not a discussion for research-l
On the contrary, please see e.g.
http://www.wikisym.org/os2014-files/proceedings/p609.pdf
-- this Foundation-sponsored IEG effort can serve as a confirmatory
replication of that prior work.
>... time is better spent doing research with the resources
> we have now....
I wish someone would please replicate my measurement of the variance
in the distribution of fundraising results using the editor-submitted
banners from 2008-9, and explain to the fundraising team that the
distribution implies they can do a whole lot better than sticking
with the spiel that degrades Foundation employees by implying they
typically spend $3 or £3 on coffee. (Although I wouldn't discount the
possibility that some donors feel good about sending Foundation
employees to boutique coffee shops.)
We know donor message- and banner-fatigue is a strong effect that
limits the useful life of fundraising approaches, so the team has to
keep refreshing them. When are they going to test the remainder of
the editors' submissions?
Oliver Keyes wrote:
>
>... Extrapolation is not a particularly useful method to use for
> the budget, because it assumes endless exponential growth.
I agree. Formal budgeting usually shouldn't extend further than three
to five years in the nonprofit sector (longer-term budgeting is
unavoidable in government and some industries). However, here are a
couple of illustrations of some reasons I believe a ten-year
extrapolation of Foundation fundraising is completely reasonable:
http://imgur.com/a/mV72T
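As a sanity check on the number itself, going from the ~$55 million
raised in 2014 to $3 billion in ten years implies a compound growth
rate that can be computed directly (both figures appear elsewhere in
this thread):

```python
# Implied compound annual growth rate behind the ten-year, $3 billion
# prediction, starting from the ~$55 million raised in 2014.
start, target, years = 55e6, 3e9, 10
implied_growth = (target / start) ** (1 / years) - 1
print(f"implied annual growth: {implied_growth:.1%}")
```

That works out to roughly 49% per year sustained for a decade, so the
extrapolation stands or falls on whether the income-growth trends
above can support that rate.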
>... I can't see what we'd actually /do/ with 3 billion dollars
I used to favor establishing an endowment large enough to fund
operations in perpetuity and then halting fundraising forever, but I
have changed my mind. I think the Foundation should continue to raise
money indefinitely to pay people for this task:
https://meta.wikimedia.org/wiki/Grants:IEG/Revision_scoring_as_a_service
That is equivalent to a general computer-aided instruction system,
with the side effects of both improving the encyclopedia and making
counter-vandalism bots more accurate. As an anonymous crowdsourced
review system based on consensus voting instead of editorial
judgement, it leaves the Foundation immunized with their safe harbor
provisions regarding content control intact.
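For comparison with the endowment option I mention above: the
standard perpetuity formula sizes the principal as annual spending
divided by the expected real return. The spending level and return
rate below are illustrative assumptions, not figures from this
thread:

```python
# Perpetuity sizing: principal = annual spending / real rate of return.
annual_spending = 55e6   # assumption: roughly one recent year of fundraising
real_return = 0.04       # assumption: long-run real return on the endowment
endowment = annual_spending / real_return
print(f"endowment needed: ${endowment / 1e9:.2f} billion")
```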
Best regards,
James Salsman
In ten years' time, I predict the Foundation will raise $3 billion:
http://i.imgur.com/hdoAIan.jpg
---------- Forwarded message ----------
From: James Salsman <jsalsman(a)gmail.com>
Date: Thu, Jan 1, 2015 at 9:01 PM
Subject: $55 million raised in 2014
To: Wikimedia Mailing List <wikimedia-l(a)lists.wikimedia.org>
Happy new year: http://i.imgur.com/faPsI9J.jpg
Source: http://frdata.wikimedia.org/yeardata-day-vs-ytdsum.csv
I don't mind the banners, although I am still saddened that several
hundred editor-submitted banners from six years ago remain untested,
even though the observed variance in the performance of those that
were tested indicates there are likely at least 15 that would do
better than any of them. Why the heck is the fundraising team still
ignoring all those untested submissions?
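The "at least 15" figure can be bounded with a back-of-envelope tail
estimate. Under a normality assumption (conservative here, since the
heavy-tailed distribution I argue for elsewhere in this thread would
raise the count), and with hypothetical values for both inputs:

```python
from statistics import NormalDist

# Expected number of untested banners beating the best tested one,
# assuming normally distributed performance. Both inputs are assumptions:
n_untested = 300       # "several hundred" untested submissions
best_tested_z = 2.0    # assumed z-score of the best tested banner
tail_p = 1 - NormalDist().cdf(best_tested_z)
expected_better = n_untested * tail_p
print(f"expected untested banners beating the best tested: {expected_better:.1f}")
```

Even under these conservative assumptions the expected count is well
above zero, and a heavier tail pushes it toward the figure claimed.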
But as for the intrusiveness of the banners, I would rather have
fade-in popups with fuchsia <blink><marquee> text on an epileptic
seizure-inducing background and auto-play audio than have the
fundraising director claim that donations are decreasing in order to
help justify "narrowing scope."
Best regards,
James Salsman