Thanks for listening to the presentation, Pine!
There will be a more comprehensive analysis posted on Meta, but in the meantime, to answer your questions:
- I'm aware that Program Evaluation is examining the outcomes of conferences this year, and Jaime and I have discussed this in at least two places on Meta. I'm curious if and how you plan to measure the online impact of conferences: not just what people and groups say they will do in post-conference surveys, but what they actually do online, in verifiable ways, in the subsequent 3-12 months.
Jaime and I, along with the others on the Grantmaking team, are working on this together, and we're experimenting with some different ways of evaluating the work in the few months following the conferences. One small experiment, for example, would be to run a cohort of users who received Wikimania Scholarships through Wikimetrics at different increments throughout the following year. This is something I have been curious to do for a long time, but never had the tool to do it on an aggregate level!
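To give a concrete (and purely illustrative) sense of what that follow-up could look like, here is a rough Python sketch that counts a cohort's edits in successive windows after a conference using the public MediaWiki API. This is a stand-in for what a Wikimetrics cohort run would produce, not the tool itself, and the usernames, wiki, and dates are placeholders rather than real scholarship data.

    # Rough sketch of a cohort follow-up: count each (placeholder) scholarship
    # recipient's edits in successive windows after a conference, using the
    # public MediaWiki API. Not the actual Wikimetrics implementation.
    import requests
    from datetime import datetime, timedelta

    API = "https://en.wikipedia.org/w/api.php"        # assumed target wiki
    COHORT = ["Example user 1", "Example user 2"]     # placeholder cohort
    CONFERENCE_DATE = datetime(2014, 8, 10)           # placeholder date
    INCREMENTS = [3, 6, 12]                           # months after the event
    HEADERS = {"User-Agent": "cohort-followup-sketch/0.1"}

    def edits_in_window(user, start, end):
        """Count one user's edits between two datetimes (handles pagination)."""
        params = {
            "action": "query", "format": "json",
            "list": "usercontribs", "ucuser": user,
            "ucdir": "newer",                         # oldest first: ucstart < ucend
            "ucstart": start.strftime("%Y-%m-%dT%H:%M:%SZ"),
            "ucend": end.strftime("%Y-%m-%dT%H:%M:%SZ"),
            "uclimit": "500", "ucprop": "timestamp",
        }
        total = 0
        while True:
            data = requests.get(API, params=params, headers=HEADERS).json()
            total += len(data["query"]["usercontribs"])
            if "continue" not in data:
                return total
            params.update(data["continue"])

    for months in INCREMENTS:
        window_end = CONFERENCE_DATE + timedelta(days=30 * months)
        counts = {u: edits_in_window(u, CONFERENCE_DATE, window_end) for u in COHORT}
        print(f"{months} months out:", counts)

The point of the sketch is just the shape of the measurement (same cohort, repeated time windows); Wikimetrics handles the cohort upload and the metrics themselves at the aggregate level.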
- You said in your presentation that there is no direct correlation between grant size and measurable online impact. From the slides at around the 1:13-1:15 marks, it looks to me like the correlation is negative, meaning that smaller grants produced disproportionately more impact. I can say that within IEG this occurred partly because we had some highly motivated and generous grantees who volunteered a considerable amount of time to work with modest amounts of money, and I don't think we should expect that level of generosity from all grantees. Still, I think that grantmaking committees may want (A) to take into account the level of motivation of grantees, (B) to consider breaking large block grants into discrete smaller projects with individual reporting requirements, and (C) for larger grants where there seem to be a lot of problems with reporting and a disappointing level of cost-effectiveness, to be more assertive about tying funding to demonstrated results and reliable, standardized reporting, with assistance from WMF. What do you think?
Well, there are definite outliers, and the slides aggregate by program type rather than by size. So, for example, several of the IEG grants were much bigger than the majority of PEG grants. So it's not exactly a negative correlation (at least, we can't definitively say that).
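To illustrate the aggregation point with entirely made-up numbers (these are not real grant figures): in the toy example below, size and impact move *together* within each hypothetical program, yet the pooled view across programs still comes out negative, which is why program-level slides alone can't settle the direction of the correlation.

    # Toy illustration with hypothetical numbers, not real grant data: within
    # each program, bigger grants track higher impact (r = +1.00), but pooling
    # the two programs yields a negative correlation (about -0.62).
    from math import sqrt

    def pearson(xs, ys):
        """Plain Pearson correlation coefficient."""
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        sx = sqrt(sum((x - mx) ** 2 for x in xs))
        sy = sqrt(sum((y - my) ** 2 for y in ys))
        return cov / (sx * sy)

    # Hypothetical grants: (size in $k, impact score)
    program_a = [(5, 8), (10, 9), (15, 10), (20, 11)]   # smaller grants
    program_b = [(30, 3), (40, 4), (50, 5), (60, 6)]    # larger grants

    for name, grants in [("A", program_a), ("B", program_b)]:
        sizes, impacts = zip(*grants)
        print(f"within program {name}: r = {pearson(sizes, impacts):+.2f}")

    sizes, impacts = zip(*(program_a + program_b))
    print(f"pooled across programs: r = {pearson(sizes, impacts):+.2f}")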
I absolutely agree with your (C) suggestion, and your (B) suggestion is very interesting too - we haven't discussed that one. It may be worth considering where there are larger project-based grants. For the annual plan grants, we have this in terms of quarterly reports (and midpoint reports for IEG), so we do try to intervene with grantees if it looks like they are off-track.
As for (A), based on what we saw through our evaluation of IEG[1], motivation is definitely important, but the key difference for outlier performance came from those grantees who had *specific target audiences* identified, so they knew exactly who they wanted to be working with and how to reach those people. So, I would want committees to take into account whether grants have a specific target audience or a specific target topic area (for quality improvements, for example; we saw this for successful outreach in PEG grants[2]). More explicitly on motivation: while it is difficult to measure for new grantees, you can see a lot about someone's motivation and creativity from their past reports if they are a returning grantee. I would definitely encourage our committees to look back at past reports from returning grantees!
- Jessie
[1] https://meta.wikimedia.org/wiki/Grants:IEG/Learning/Round_1_2013/Impact
[2] https://meta.wikimedia.org/wiki/Grants:PEG/Learning/2013-14