[Wikimedia-l] Fundraising updates?

Megan Hernandez mhernandez at wikimedia.org
Mon Dec 17 18:30:06 UTC 2012


Here's the link to the chapter daily reporting numbers.  Please keep in
mind that these numbers are preliminary.  They will continue to change
as donations come in, and some donations take a while to settle
(checks, bank transfers, etc.).

https://docs.google.com/a/wikimedia.org/spreadsheet/ccc?key=0Alem5893XFmUdDNITmtPTlpmcVlETnphOUEwdFlCUHc#gid=0

Megan

On Mon, Dec 17, 2012 at 9:28 AM, Samuel Klein <meta.sj at gmail.com> wrote:

> Thomas writes:
> > You could also model banner fatigue properly, which could be very useful.
>
> Yes, a detailed model of banner fatigue would be fascinating.
>
> It's certainly something studied by many groups in different contexts;
> ideally we'd learn from published analysis, and then see deviations from
> the norm in our own context.  It's quite likely that the dynamics differ
> between donation appeals and other messages; understanding this better
> would also help us rotate global sitenotices more effectively.
>
> Zack - thank you for sharing so much detail about the process.
> James - thank you for your nuanced statistical comments; something we could
> use more of.
>
> SJ
>
>
> On Mon, Dec 17, 2012 at 12:23 PM, Thomas Dalton <thomas.dalton at
> gmail.com> wrote:
>
> > Have you considered doing some longer tests? Lasting a week, say. It
> > would enable you to do proper multivariate testing, including
> > dependencies between variables (which I don't think you have done any
> > real tests of yet). It would also let you test time dependence. E.g.,
> > does a particular message work better in the morning than in the
> > afternoon? (Different types of people browse at different times, so it
> > wouldn't surprise me.) You could also model banner fatigue properly,
> > which could be very useful.
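> >
> > To make that concrete: a minimal sketch, assuming Python with pandas
> > and statsmodels, of the kind of model a week-long test would support.
> > The column names and the data are synthetic, invented purely for
> > illustration; this is not the real logging schema:
> >
> >     import numpy as np
> >     import pandas as pd
> >     import statsmodels.formula.api as smf
> >
> >     # Synthetic impression log: banner B does better in the morning,
> >     # and each prior exposure slightly depresses donation odds
> >     # (a crude linear stand-in for banner fatigue).
> >     rng = np.random.default_rng(0)
> >     n = 200_000
> >     imps = pd.DataFrame({
> >         "banner": rng.choice(["A", "B"], n),
> >         "daypart": rng.choice(["morning", "evening"], n),
> >         "prior_views": rng.integers(0, 10, n),
> >     })
> >     true_logit = (-6.0
> >                   + 0.4 * ((imps.banner == "B")
> >                            & (imps.daypart == "morning"))
> >                   - 0.1 * imps.prior_views)
> >     imps["donated"] = (rng.random(n)
> >                        < 1 / (1 + np.exp(-true_logit))).astype(int)
> >
> >     # Banner x time-of-day interaction plus the fatigue term.
> >     model = smf.logit("donated ~ C(banner) * C(daypart) + prior_views",
> >                       data=imps).fit()
> >     print(model.summary())
> >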
> > On Dec 17, 2012 4:28 PM, "Zack Exley" <zexley at wikimedia.org> wrote:
> >
> > > On Sat, Dec 15, 2012 at 11:01 AM, James Salsman <jsalsman at gmail.com>
> > > wrote:
> > >
> > > > Hi Zack,
> > > >
> > > > Thanks very much for your updates:
> > > >
> > > > > What saved us was taking text from the personal appeals and
> > > > > putting it into the banner itself. These banners did very well.
> > > > > These new message-driven banners are what made us split the
> > > > > campaign in two -- because we knew we were going to develop a
> > > > > lot of new messages and not have time to translate them well....
> > > >
> > > > As you know, I've been saying for years that the variance among the
> > > > volunteer-supplied messages, originally submitted in 2009, hundreds
> > > > of which have not yet been tested (as far as I know), was large
> > > > enough to suggest that some messages would certainly outperform the
> > > > traditional banners and appeals. While it's refreshing to be
> > > > validated, as you might imagine I feel like Cassandra much of the
> > > > time, for reasons that have nothing to do with the underlying
> > > > mathematical reasoning involved.
> > > >
> > > > The last time I heard from you, you said that you intended to test
> > > > the untried messaging from 2009 with multivariate analysis. However,
> > > > http://meta.wikimedia.org/wiki/Fundraising_2012/We_Need_A_Breakthrough
> > > > shows only three very small-N multivariate tests, the last of which
> > > > was in October, and no recent testing.
> > > >
> > > > Do you still intend to test the untried volunteer-submitted messages
> > > > with multivariate analysis? If so, when? Thank you.
> > > >
> > > >
> > > James -
> > >
> > > We can only do big multivariate tests for banner click rates. But
> > > banner click rates have very little to do with donations in our
> > > present context.
> > >
> > > For example, the new banners have about 30% of the click rate of the
> > > old ones, but they make about 3 or 4 times as much money.
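> > >
> > > Since revenue per impression is click rate times revenue per click,
> > > those two figures imply that the new banners earn roughly 3.5 / 0.30,
> > > or about 12 times as much per click. A one-line check in Python
> > > (taking 3.5 as the midpoint of "3 or 4 times"):
> > >
> > >     click_ratio = 0.30    # new banners' click rate vs. the old ones
> > >     revenue_ratio = 3.5   # midpoint of "3 or 4 times as much money"
> > >     # revenue/impression = click rate * revenue/click, so per click:
> > >     print(revenue_ratio / click_ratio)  # ~11.7x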
> > >
> > > To determine how well a banner message does for donations, we usually
> > > need a sample size between 500 and 5,000 donations per banner,
> > > depending on the difference in performance between the banners. That
> > > takes from 30 minutes to several hours to collect -- if we're only
> > > testing two banners at a time.
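> > >
> > > For a rough sense of where that range comes from: a back-of-the-
> > > envelope power calculation, assuming Python and a standard two-sided
> > > two-proportion z-test, with an illustrative (not reported) baseline
> > > of 0.3% donations per impression:
> > >
> > >     from statistics import NormalDist
> > >
> > >     def impressions_per_banner(p1, rel_lift, alpha=0.05, power=0.80):
> > >         """Impressions each arm needs for a two-sided two-proportion
> > >         z-test to detect a relative lift in donation rate."""
> > >         p2 = p1 * (1 + rel_lift)
> > >         z = NormalDist().inv_cdf
> > >         z_a = z(1 - alpha / 2)   # critical value of the test
> > >         z_b = z(power)           # quantile for the desired power
> > >         var = p1 * (1 - p1) + p2 * (1 - p2)
> > >         return (z_a + z_b) ** 2 * var / (p1 - p2) ** 2
> > >
> > >     p = 0.003  # assumed donations per impression, illustrative only
> > >     for lift in (0.10, 0.15, 0.25):
> > >         n = impressions_per_banner(p, lift)
> > >         print(f"{lift:.0%} lift: ~{n:,.0f} impressions, "
> > >               f"~{n * p:,.0f} donations per banner")
> > >
> > > Under those assumptions, detecting a 10-15% lift takes on the order
> > > of 750-1,600 donations per banner, consistent with the range above.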
> > >
> > > Regarding the banners suggested in past years: I've explained this
> > > before, and will repeat: We tested tons of those banners. I think
> > > that we tested virtually every different (serious) theme that was
> > > suggested. They all had BOTH far lower click rates and even lower
> > > donation rates -- usually by orders of magnitude. This was also true
> > > for the new short slogans that we came up with ourselves on the
> > > fundraising team.
> > >
> > > Now we're pretty clear on why: A short slogan isn't enough to get
> > > people over all their questions about why they should support
> > > Wikipedia. More text was needed. In our marketing-slogan-obsessed
> > > culture, the idea that we'd have to present people with a long
> > > paragraph was very counterintuitive. We didn't think of it on the
> > > fundraising team, and none of the volunteers who submitted
> > > suggestions thought of it either. Several marketing professionals who
> > > contacted us with advice even told us to get rid of the appeal on the
> > > landing page altogether because "people don't read!"
> > >
> > > As it turns out, Wikipedia users DO like to read -- and want all the
> > > facts before they donate.
> > >
> > > Where we're at today, just to emphasize my previous point, is that
> > > with the new banners, changes in messages affect donations totally
> > > independently of click rate. And we typically need an hour or two --
> > > or five -- to detect even a 10-15% difference in message performance.
> > > That's why we're not running big multivariate tests with tons of
> > > different banners.
> > >
> > > You'll be happy to know, though, that we are running multivariate
> > > tests when we're able. For example, if we have a tweak to the landing
> > > pages that we think is fairly independent of the banner effect, then
> > > we sometimes run a multivariate test. Or if we have a design tweak
> > > (like color) that we're confident will always affect click rate in
> > > the same direction as donations, then we can combine that with
> > > message testing.
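> > >
> > > As an illustration of that combined setup: a minimal sketch of
> > > deterministic factorial bucketing, assuming Python; the factor names
> > > are hypothetical, and this is not how CentralNotice actually assigns
> > > banners:
> > >
> > >     import hashlib
> > >
> > >     MESSAGES = ["personal_appeal", "facts_first"]  # hypothetical
> > >     COLORS = ["blue", "yellow"]                    # hypothetical
> > >
> > >     def assign_cell(token: str):
> > >         """Hash a user token into one message x color cell, using
> > >         separate bits so the two factors vary independently."""
> > >         h = int(hashlib.sha1(token.encode()).hexdigest(), 16)
> > >         return MESSAGES[h & 1], COLORS[(h >> 8) & 1]
> > >
> > >     print(assign_cell("reader-12345"))
> > >
> > > Because each factor is read from independent bits of the hash, the
> > > message and color groups stay balanced against each other, so
> > > donation rates can be compared per factor.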
> > >
> > >
> > > > Sincerely,
> > > > James Salsman
> > > >
> > >
> > >
> > >
> > > --
> > > Zack Exley
> > > Chief Revenue Officer
> > > Wikimedia Foundation
> > > 415 506 9225
>
>
>
> --
> Samuel Klein          @metasj           w:user:sj          +1 617 529 4266



-- 

Megan Hernandez

Head of Annual Fundraiser
Wikimedia Foundation

