On Thu, May 7, 2015 at 6:34 AM, Lodewijk lodewijk@effeietsanders.org wrote:
I hope that at some point WLM organizers can be given the tools, enthusiasm and support to create their own evaluation on a larger scale. That way I hope that some of the flaws can be avoided thanks to a better understanding of the collaborations, structures and the projects in general.
The Evaluation portal on Meta [1] has all the resources we use, open to organizers of any program. There is a guide to using the portal resources [2]. We also regularly host virtual meet-ups to develop capacity around evaluation, which are recorded and available on our YouTube channel [3] under a CC license. The Learning and Evaluation team is open to one-on-one conversations as well! =)
We are always encouraging program leaders to engage in this conversation: what metrics matter to this program, and what is relevant to measure. Happily, this is the conversation we had with some WLM organizers yesterday [4], and it is also taking place on the WLM Report talk page [5].
All in all it is good to have something 'to shoot at', but I would prefer that these reports are produced more in concert with the stakeholders involved and affected, rather than 'announced' and 'presented' to the wider community.
This isn't true. We always reach out to program leaders to engage in data collection. Further, had you taken part in the event, or even watched it, or read the blog post we wrote [6], you would have seen that nothing was presented or announced; rather, everything was open for discussion and conversation.
*María Cruz* \ Community Coordinator, PE&D Team \ Wikimedia Foundation, Inc.
mcruz@wikimedia.org | Twitter: @marianarra_ https://twitter.com/marianarra_
[1] https://meta.wikimedia.org/wiki/Grants:Evaluation
[2] https://meta.wikimedia.org/wiki/Grants:Evaluation/Introduction
[3] https://www.youtube.com/user/WikiEvaluation/
[4] https://www.youtube.com/watch?v=PN3TN4wrFZs
[5] https://meta.wikimedia.org/wiki/Grants_talk:Evaluation/Evaluation_reports/20...
[6] http://blog.wikimedia.org/2015/04/22/first-2015-wikimedia-programs-evaluatio...
Best,
Lodewijk (effeietsanders)
member of the international coordinating team 2011-2013
On Wed, May 6, 2015 at 4:40 PM, Samuel Klein meta.sj@gmail.com wrote:
Claudia, I share your concerns about reducing subtle things to a few numbers. Data can also be used in context-sensitive ways. So I'm wondering: are there any existing quantitative summaries that you find useful? Or qualitative descriptions that draw from more than one project?
Figuring out which ideas are repeatable, which are scalable, and which are awesome but one-time-only is complex. We probably need many different approaches, not one central approach, to understand and compare.
I'm glad to see data being shared, and again it might help to have many different datasets, to limit conceptual bias in what sort of data is relevant.

On May 6, 2015 9:59 AM, "Claudia Garád" claudia.garad@wikimedia.at wrote:
Hi Sam,
I am sure there are figures and stories that the various orgs collect and publish. But they are spread across different wikis and websites and/or languages. E.g. many of the FDC orgs are looking into ways to demonstrate these more qualitative aspects of our work (e.g. through storytelling) in their reports.

But this information does not get the same attention and publicity in the wider community as the evaluation done by the WMF. Many WMAT volunteers, and I myself, share the concerns expressed by Romaine that these unidimensional numbers and the lack of context foster misconceptions or even prejudices, especially in the parts of the community that are not closely involved in the work of the respective groups and orgs.
Best,
Claudia
On 06.05.2015 at 13:40, Sam Klein wrote:
Hi Romaine,
Are there other evals of WLM projects that capture the complexity you want?
Perhaps single-community evaluations done by the WLM organizers there?
Sam
On Wed, May 6, 2015 at 7:21 AM, Romaine Wiki romaine.wiki@gmail.com wrote:
Hi all,
In the past months, the Wikimedia Foundation has been writing an evaluation of Wiki Loves Monuments. [1]

As such, it is fine that the WMF is writing an evaluation; however, they fail to actually understand Wiki Loves Monuments, and that shows in the evaluation report.

As a result, a discussion has grown on the Wiki Loves Monuments mailing list about the various problems with the evaluation.

As the Learning and Evaluation team at the Wikimedia Foundation had already released the first Programs Reports for Wiki Loves Monuments, we are now presented with this evaluation report as a fait accompli.

Therefore I am writing here so that the rest of the worldwide Wikimedia community is informed that this is not going right.

Wiki Loves Monuments is not just a bunch of uploads done in September; the report is too simplified and lacks actual understanding of how the community runs this project.
Romaine
[1] https://meta.wikimedia.org/wiki/Grants:Evaluation/Evaluation_reports/2015/Wi...
Wikimedia-l mailing list, guidelines at: https://meta.wikimedia.org/wiki/Mailing_lists/Guidelines
Wikimedia-l@lists.wikimedia.org
Unsubscribe: https://lists.wikimedia.org/mailman/listinfo/wikimedia-l, mailto:wikimedia-l-request@lists.wikimedia.org?subject=unsubscribe