Another point to consider is that comparing grants that include staff compensation to grants that do not is necessarily tipping the scales. Volunteer time is a cost too (though borne by the volunteers themselves and not by the funder), and ignoring it in cost-benefit analysis will always give the impression that grants including staff are significantly less effective, whether or not they truly are.
It may make sense to ignore it if the funder is only interested in straight impact-for-dollars; it seems to me that WMF is a funder that cares about _movement resources_, including volunteer time, and not just dollars out of its own budget.
A.
On Thu, Jul 31, 2014 at 10:12 PM, Pine W wiki.pine@gmail.com wrote:
Hi Jessie,
Thanks for the quick reply.
Issue 1 may be challenging to measure even with Wikimetrics. Can we talk about this during the Research Hackathon next week? We could set up a time off-list.
Thanks for the info about issue 2. I am grateful to learn that you did an evaluation of PEG. It is interesting to compare that evaluation with the evaluation of IEG. A number of grantmaking committee members and grantees will be at Wikimania and I hope the PED team will introduce themselves and be available to discuss these studies, especially if there is a plenary meeting of all the Meta grantmaking committee members who attend Wikimania.
Thanks very much,
Pine

On Jul 31, 2014 2:50 PM, "Jessie Wild" jwild@wikimedia.org wrote:
Thanks for listening to the presentation, Pine!
There will be a more comprehensive analysis posted on Meta, but in the meantime to answer your questions:
- I'm aware that Program Evaluation is examining the outcomes of conferences this year, and Jaime and I have discussed this in at least two places on Meta. I'm curious whether and how you plan to measure the online impact of conferences; not just what people and groups say they will do in post-conference surveys, but what they actually do online in verifiable ways in the subsequent 3-12 months.
Jaime and I and the others on the Grantmaking team are working together on this, and we are experimenting with some different ways of evaluating the work in the few months following the conferences. One small experiment, for example, would be to run a cohort of users who received Wikimania Scholarships through Wikimetrics at different increments throughout the following year. This is something I have been curious to do for a long time, but I never had the tool to do it on an aggregate level!
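For the sake of discussion, here is a rough sketch of what such a cohort query could look like if run directly against the public MediaWiki API rather than through the Wikimetrics UI. The usernames and date windows are hypothetical placeholders, and a real run would aggregate across all the projects a cohort edits (which is what Wikimetrics handles for you):

    import requests

    API = "https://en.wikipedia.org/w/api.php"

    # Hypothetical cohort of scholarship recipients (placeholder usernames).
    COHORT = ["Example_User_1", "Example_User_2"]

    def edit_count(user, start, end):
        """Count one user's edits between two ISO timestamps on one wiki."""
        params = {
            "action": "query", "format": "json",
            "list": "usercontribs", "ucuser": user,
            "ucstart": end,  # the API iterates newest-to-oldest by default
            "ucend": start,
            "uclimit": "max",
        }
        total = 0
        while True:
            data = requests.get(API, params=params).json()
            total += len(data["query"]["usercontribs"])
            if "continue" not in data:
                return total
            params.update(data["continue"])

    # E.g. edits in the first three months after Wikimania (dates illustrative).
    for user in COHORT:
        print(user, edit_count(user, "2014-08-10T00:00:00Z", "2014-11-10T00:00:00Z"))

Repeating the same count at 3, 6, and 12 months out would give the increments described above.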
- You said in your presentation that there is no direct correlation between grant size and measurable online impact. From the slides at around the 1:13-1:15 marks, it looks to me like the correlation is negative, meaning that smaller grants produced disproportionately more impact. I can say that within IEG this occurred partly because we had some highly motivated and generous grantees who volunteered a considerable amount of time to work with modest amounts of money, and I don't think we should expect that level of generosity from all grantees. However, I think that grantmaking committees may want (A) to take into account the level of motivation of grantees, (B) to consider breaking large block grants into discrete smaller projects with individual reporting requirements, and (C) for larger grants where there seem to be a lot of problems with reporting and a disappointing level of cost-effectiveness, to be more assertive about tying funding to demonstrated results and reliable, standardized reporting with assistance from WMF. What do you think?
Well, there are definite outliers, and the slides aggregate by program type rather than by size. So, for example, several of the IEG grants were much bigger than the majority of PEG grants. So it is not exactly a negative correlation (at least, we can't say that definitively).
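To make that caveat concrete, here is a small illustration of how a few large outliers can make an ordinary correlation look negative even when the rank ordering shows almost none. The numbers are made up for illustration and are not from the actual grants data:

    from scipy.stats import pearsonr, spearmanr

    # Hypothetical (grant size in USD, impact score) pairs -- not real grant data.
    sizes  = [500, 1200, 2000, 3500, 5000, 8000, 30000]
    impact = [ 40,   55,   50,   70,   60,   65,    30]

    r, _ = pearsonr(sizes, impact)     # pulled negative by the single big grant
    rho, _ = spearmanr(sizes, impact)  # rank-based, roughly zero here
    print(f"Pearson r = {r:.2f}, Spearman rho = {rho:.2f}")

With data like this, Pearson comes out negative while Spearman sits near zero, which is exactly the "we can't definitively say that" situation.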
I absolutely agree with your (C) suggestion, and your (B) suggestion is very interesting too - we haven't discussed that one. It may be worth considering where there are larger project-based grants. For the annual plan grants, we have this in the form of quarterly reports (and midpoint reports for IEG), so we do try to intervene with grantees if it looks like they are off-track.

As for (A), based on what we saw through our evaluation of IEG[1], motivation is definitely important, but the key difference for outlier performance came from those grantees who had *specific target audiences* identified, so they knew exactly who they wanted to work with and how to reach those people. So I would want committees to take into account grants with a specific target audience or specific target topic area (for quality improvements, for example; we saw this for successful outreach in PEG grants[2]). More explicitly on motivation: while it is difficult to measure for new grantees, you can see a lot about someone's motivation and creativity from their past reports if they are a returning grantee. I would definitely encourage our committees to look back at past reports from returning grantees!
- Jessie
[1] https://meta.wikimedia.org/wiki/Grants:IEG/Learning/Round_1_2013/Impact
[2] https://meta.wikimedia.org/wiki/Grants:PEG/Learning/2013-14
--
*Jessie Wild Sneller*
*Grantmaking Learning & Evaluation*
*Wikimedia Foundation*
Imagine a world in which every single human being can freely share in the sum of all knowledge. Help us make it a reality! Donate to Wikimedia https://donate.wikimedia.org/