Hoi, I did a project that had two parts: the delivery of an input method and a font for a script that did not have any Unicode font. At the time MediaWiki had font functionality, so it should have been a shoo-in. The cost of the project was relatively large because producing a new font is expensive; in real-world terms, the font and input method were delivered for a very low price.
Because of internal issues of whatever kind, the font never became available in MediaWiki. While we waited, the project partner lost its subsidy; the Royal Institute for the Tropics ceased to exist as an organisation and the Tropenmuseum was merged with two other museums. This was duly mentioned at the time; I even blogged about it.
As a consequence, my project was gone. The money was spent and the goods existed, but they were no longer available to any project. I am no longer involved in Batak, I have no leads to revive it, and I have no intention to do so.
Now, a long time after all this, I am being hassled for a report. As far as I am aware I have attempted multiple iterations of a report; it did not fit the mould, or whatever else was wrong with it.
With more reporting you get less project and more irritation. I loathe the notion that more reporting will lead to anything positive. If anything, it makes sense to project-manage and keep a finger on the pulse; but that is a personal affair and very much NOT an administrative one.
When I get involved in another project, I will very much try to stay away from the administrative bullshit, while remaining very much available for personal contact. Thanks, GerardM
On 31 July 2014 23:50, Jessie Wild <jwild@wikimedia.org> wrote:
Thanks for listening to the presentation, Pine!
There will be a more comprehensive analysis posted on Meta, but in the meantime, to answer your questions:
- I'm aware that Program Evaluation is examining the outcomes of conferences this year, and Jaime and I have discussed this in at least two places on Meta. I'm curious about if and how you plan to measure the online impact of conferences; not just what people and groups say they will do in post-conference surveys, but what they actually do online in verifiable ways in the subsequent 3-12 months.
Jaime, I, and the others on the Grantmaking team are working together on this, and we are experimenting with some different ways of evaluating the work in the few months following the conferences. One small experiment, for example, is to run a cohort of users who received Wikimania Scholarships through Wikimetrics at different increments throughout the following year. This is something I have been curious to do for a long time, but I never had the tool to do it at an aggregate level!
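For the curious, here is a rough sketch of what such a cohort measurement boils down to, written directly against the public MediaWiki API rather than through Wikimetrics itself (the usernames and time windows below are hypothetical placeholders, not real scholarship data):

    # Count each cohort member's edits in windows after an event, via the
    # MediaWiki API (Wikimetrics automates this kind of query; this sketch
    # is not its API). Usernames and dates are hypothetical placeholders.
    import requests

    API = "https://en.wikipedia.org/w/api.php"
    COHORT = ["ExampleUser1", "ExampleUser2"]
    WINDOWS = [("2014-08-10", "2014-11-10"),   # ~3 months after the event
               ("2014-08-10", "2015-02-10"),   # ~6 months
               ("2014-08-10", "2015-08-10")]   # ~12 months

    def edit_count(user, start, end):
        """Count edits by `user` between `start` and `end` (UTC dates)."""
        count, params = 0, {
            "action": "query", "list": "usercontribs", "ucuser": user,
            "ucstart": start + "T00:00:00Z", "ucend": end + "T00:00:00Z",
            "ucdir": "newer", "uclimit": "500", "format": "json",
        }
        while True:
            data = requests.get(API, params=params).json()
            count += len(data["query"]["usercontribs"])
            if "continue" not in data:
                return count
            params.update(data["continue"])  # follow API continuation

    for user in COHORT:
        for start, end in WINDOWS:
            print(user, start, "->", end, edit_count(user, start, end))

Running the same cohort through windows of increasing length is what lets you see whether activity persists or tails off in the months after the conference.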
- You said in your presentation that there is no direct correlation between grant size and measurable online impact. From the slides at around the 1:13-1:15 minute marks, it looks to me like the correlation is negative, meaning that smaller grants produced disproportionately more impact. I can say that within IEG this occurred partly because we had some highly motivated and generous grantees who volunteered a considerable amount of time to work with modest amounts of money, and I don't think we should expect that level of generosity from all grantees, but I think that grantmaking committees may want (A) to take into account the level of motivation of grantees, (B) to consider breaking large block grants into discrete smaller projects with individual reporting requirements, and (C) for larger grants where there seem to be a lot of problems with reporting and a disappointing level of cost-effectiveness, to be more assertive about tying funding to demonstrated results and reliable, standardized reporting with assistance from WMF. What do you think?
Well, there are definite outliers, and the slides aggregate by program type rather than by size. So, for example, several of the IEG grants were much bigger than the majority of PEG grants. So: not exactly a negative correlation (at least, we can't definitively say that).
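To make that caveat concrete, here is a toy illustration with purely made-up numbers (not the actual grant data): within each program, bigger grants can correlate with more impact, while the pooled figures still look negative simply because one program is bigger and lower-impact overall.

    # Made-up numbers only: grouping by program type can flip the apparent
    # sign of a size-vs-impact correlation (a Simpson's-paradox effect).
    from statistics import correlation  # Python 3.10+

    peg_size, peg_impact = [5, 10, 15, 20], [50, 60, 70, 80]
    ieg_size, ieg_impact = [100, 120, 140, 160], [10, 20, 30, 40]

    print(correlation(peg_size, peg_impact))  # 1.0, positive within PEG
    print(correlation(ieg_size, ieg_impact))  # 1.0, positive within IEG
    # Pooled across programs, the correlation is strongly negative:
    print(correlation(peg_size + ieg_size, peg_impact + ieg_impact))

This is why a negative-looking slide aggregated by program type does not by itself establish a negative grant-level correlation.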
I absolutely agree with your (C) suggestion, and your (B) suggestion is very interesting too - we haven't discussed that one. It may be worth considering if there are larger project-based grants. For the annual plan grants, we have this in terms of quarterly reports (and midpoint reports for IEG), so we do try to do interventions with grantees if it looks like they are off-track.

As for (A), based on what we saw through our evaluation of IEG[1], motivation is definitely important, but the key difference for outlier performance came from those grantees that had *specific target audiences* identified, so they knew exactly who they wanted to be working with and how to reach those people. So, I would want committees to take into account grants with a specific target audience or specific target topic area (for quality improvements, for example; we saw this for successful outreach in PEG grants[2]).

More explicitly on motivation: while it is difficult to measure for new grantees, you can see a lot about someone's motivation and creativity from their past reports if they are a returning grantee. I would definitely encourage our committees to look back on past reports from returning grantees!
- Jessie
[1] https://meta.wikimedia.org/wiki/Grants:IEG/Learning/Round_1_2013/Impact
[2] https://meta.wikimedia.org/wiki/Grants:PEG/Learning/2013-14
--
Jessie Wild Sneller
Grantmaking Learning & Evaluation
Wikimedia Foundation
Imagine a world in which every single human being can freely share in the sum of all knowledge. Help us make it a reality! Donate to Wikimedia <https://donate.wikimedia.org/>