Hi all,
Over the past months the Wikimedia Foundation has been writing an evaluation of Wiki Loves Monuments. [1]
As such it is fine that the WMF is writing an evaluation; however, they fail to actually understand Wiki Loves Monuments, and that shows in the evaluation report.
As a result, a discussion has grown on the Wiki Loves Monuments mailing list about the various problems with the evaluation.
As the Learning and Evaluation team at the Wikimedia Foundation has already released the first Programs Reports for Wiki Loves Monuments, we are now presented with this evaluation report as a fait accompli.
Therefore I am writing here so that the rest of the worldwide Wikimedia community is informed that this is not going right.
Wiki Loves Monuments is not just a bunch of uploads done in September; the report is oversimplified and lacks a real understanding of how the community runs this project.
Romaine
[1] https://meta.wikimedia.org/wiki/Grants:Evaluation/Evaluation_reports/2015/Wi...
Hi Romaine,
Are there other evals of WLM projects that capture the complexity you want?
Perhaps single-community evaluations done by the WLM organizers there?
Sam
Hi Sam,
I am sure there are figures and stories that the various orgs collect and publish, but they are spread across different wikis, websites and/or languages. For example, many of the FDC orgs are looking into ways to demonstrate the more qualitative aspects of our work (e.g. through storytelling) in their reports. But this information does not get the same attention and publicity in the wider community as the evaluation done by the WMF. Many WMAT volunteers, and I myself, share the concerns expressed by Romaine that these one-dimensional numbers and the lack of context foster misconceptions or even prejudices, especially in the parts of the community that are not closely involved in the work of the respective groups and orgs.
Best Claudia
Claudia, I share your concerns about reducing subtle things to a few numbers. Data can also be used in context-sensitive ways. So I'm wondering if there are any existing quantitative summaries that you find useful? Or qualitative descriptions that draw from more than one project?
Figuring out which ideas are repeatable, scalable, or awesome but one-time only is complex. We probably need many different approaches, not one central approach, to understand and compare.
I'm glad to see data being shared, and again it might help to have many different datasets, to limit conceptual bias in what sort of data is considered relevant.
Yes, I think that this may be considered the central problem.
It's easier to compare two different scenarios with a standard measure: to use kilos to compare apples and oranges, for instance.
The problem is to understand that oranges will still be oranges after this measurement, and apples will still be apples.
This is to say that several countries focus their contest on quality, others on quantity.
The prize and the contest, in any case, are focused on selecting the "best photo" and not the biggest uploaders.
That means there is no sense in forcing quantitative parameters while the incentives are focused on increasing quality.
Personally, I find the measure of costs/uploads quite far from the more correct measure of costs/benefits, because we cannot automatically consider a single upload a "benefit".
In my opinion the most critical point is how to measure the costs (is the workload of a community a cost?) and the benefits (is a huge amount of poor photos a benefit?), because this involves several parameters that are not measurable.
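To make this concrete with purely invented numbers (a hypothetical sketch in Python, nothing taken from the report): cost per upload rewards volume, while a benefit-oriented measure such as cost per image actually used in articles can rank the very same contests the other way.

# Hypothetical figures, invented only to illustrate the point above.
contests = {
    # name: (budget in EUR, total uploads, uploads later used in articles)
    "quantity-focused contest": (5000, 20000, 400),
    "quality-focused contest": (5000, 2500, 900),
}

for name, (budget, uploads, used) in contests.items():
    print(f"{name}: {budget / uploads:.2f} EUR per upload, "
          f"{budget / used:.2f} EUR per image used in an article")

By cost per upload the quantity-focused contest looks eight times "cheaper"; by cost per used image the ranking flips.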
Regards
On Wed, May 6, 2015 at 4:40 PM, Samuel Klein meta.sj@gmail.com wrote:
Figuring out which ideas are repeatable, scalable, or awesome but one-time only is complex. We probably need many different approaches, not one central approach, to understand and compare.
Hi all,
Thanks for the comments on the first two program evaluation reports. This is the kind of feedback we are looking for from the community, and for that reason we want to continue this conversation and learn more about which goals and metrics make the most sense to program leaders.
As many of you know, today we held an open virtual event to introduce the first Wikimedia Programs Evaluation Reports 2015. You can now watch the recorded event online [1].
We have also captured some of the conversation that started on the Wiki Loves Monuments list on the report's talk page [2]. Many community members have already contributed their views there. We want to encourage everyone to keep the conversation on the talk page, which will allow us to document all the feedback and keep track of it.
Looking forward to your feedback and happy editing!
María Cruz \ Community Coordinator, PE&D Team \ Wikimedia Foundation, Inc. \ mcruz@wikimedia.org | Twitter: @marianarra_ https://twitter.com/marianarra_
[1] Video of the reports presentation: https://youtu.be/PN3TN4wrFZs
[2] Wiki Loves Monuments Evaluation Report - Talk Page: https://meta.wikimedia.org/wiki/Grants_talk:Evaluation/Evaluation_reports/20...
Hi Sam,
The main misconception (which is understandable, but has also often been pointed out already) is that Wiki Loves Monuments can be fundamentally different projects from a goals-and-outcomes point of view, depending on the interests and strengths of the local organizers and on the local situation. In some countries, the main outcome of the competition is that it brings organizers together for a first project, so that they can then move on and leverage their collaboration in other projects. In other countries it fosters collaborations with other organizations.
In some countries it is a very grassroots competition, with a low budget and a big focus on getting a lot of photos. In other countries a lot of effort (and funding) goes into attracting editors, setting up structures, overcoming local challenges or raising awareness of the concepts.
Aside from the fact that many of these outcomes are qualitative, which seems to get no attention in the (summaries of the) reports but does get described in the reports of the individual contests, the local competitions are too diverse to try to capture as one group.
This is a fundamental flaw (pointed out before) in the approach. The work is appreciated of course, and the numbers can be useful; the way they are presented is, however, very prone to major misunderstandings.
Besides this, there are several very specific flaws in the number crunching that have been pointed out, which, for example, mess up the numbers on editor retention.
I hope that at some point WLM organizers can be given the tools, enthusiasm and support to create their own evaluation on a larger scale. That way I hope that some of the flaws can be avoided thanks to a better understanding of the collaborations, structures and the projects in general.
All in all it is good to have something 'to shoot at', but I would prefer that these reports be produced more in concert with the stakeholders involved and affected, rather than 'announced' and 'presented' to the wider community.
Best, Lodewijk (effeietsanders) member of the international coordinating team 2011-2013
Regarding measurement of editor retention - this is tricky, as in fact many participants created new accounts only to join the contest. Some of them had accounts on Wikipedia (but different ones); others abandoned their accounts and created new ones for various reasons (the most trivial: they had forgotten their passwords). There are also users who are active only during contests, also for various reasons - not only because of the possibility of winning attractive prizes, but also because the normal upload process is too tricky for them, or they don't know what to photograph if there is no easy-to-use list of objects.
In fact, measuring editor retention is tricky even for workshops if it is based only on a list of usernames. I have seen this many times: people create accounts during the workshop and then abandon them, but later create new ones. The only effective way to follow the retention of users after a workshop is to collect their e-mail addresses and then survey them some time after the workshop. That might produce a completely different picture than studies based on following the activity of accounts created during workshops...
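A toy sketch in Python (invented account names, not real data) of how account-based counting understates returning participants:

# The same person returning with a fresh account in the next contest
# looks like an unretained newcomer when retention is counted by account name.
uploads = [
    # (account, contest year)
    ("Photographer_A", 2013), ("Photographer_A_new", 2014),  # same person, new account
    ("Photographer_B", 2013), ("Photographer_B", 2014),      # genuinely retained account
    ("Photographer_C", 2013),                                 # did not return
]

accounts_2013 = {account for account, year in uploads if year == 2013}
accounts_2014 = {account for account, year in uploads if year == 2014}

account_based_retention = len(accounts_2013 & accounts_2014) / len(accounts_2013)
print(f"account-based retention: {account_based_retention:.0%}")  # 33%

# A survey keyed to the person (e.g. by e-mail address, as suggested above)
# would instead count two of the three 2013 participants as returning (67%).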
Editor retention really consists of three components:
* New temporary contributors. WLM helps here, and even if they leave after a few edits this is of value for the projects. They have learned to edit, and will be more open to correcting an error or complementing an article much later when using Wikipedia.
* New regular contributors. WLM has low impact here, but this is the key - and the only - parameter being measured.
* Making regular contributors stay on (longer). Here too WLM has a positive effect. It is a stimulus for long-timers to see the new images, the (IRL) activities around WLM, and that something of value is happening. This is of course impossible to measure.
Personally I believe that making the work environment fun and stimulating is the most cost-effective way to keep up the editor base. The Thanks notification is a wonderful example of a high effect on retention from a very limited investment in software.
Anders
On Thu, May 7, 2015 at 6:34 AM, Lodewijk lodewijk@effeietsanders.org wrote:
I hope that at some point WLM organizers can be given the tools, enthusiasm and support to create their own evaluation on a larger scale. That way I hope that some of the flaws can be avoided thanks to a better understanding of the collaborations, structures and the projects in general.
The Evaluation portal on Meta [1] has all the resources we use, open to organizers of any program. There is a guide to using the portal resources [2]. We also regularly host virtual meet-ups to develop capacity around evaluation; these are recorded and available on our YouTube channel [3] under a CC license. The Learning and Evaluation team is open to one-on-one conversations as well! =)
We are always encouraging program leaders to engage in this conversation: what metrics matter to this program, what is relevant to measure. Happily, this is the conversation we had with some WLM organizers yesterday [4], and it is also taking place on the WLM report's talk page [5].
All in all it is good to have something 'to shoot at', but I would prefer that these reports be produced more in concert with the stakeholders involved and affected, rather than 'announced' and 'presented' to the wider community.
This isn't true. We always reach out to program leaders to engage them in data collection. Further, had you taken part in the event, or even watched it, or read the blog post we wrote [6], you would have seen that nothing is presented or announced; rather, it is open for discussion and conversation.
María Cruz \ Community Coordinator, PE&D Team \ Wikimedia Foundation, Inc. \ mcruz@wikimedia.org | Twitter: @marianarra_ https://twitter.com/marianarra_
[1] https://meta.wikimedia.org/wiki/Grants:Evaluation
[2] https://meta.wikimedia.org/wiki/Grants:Evaluation/Introduction
[3] https://www.youtube.com/user/WikiEvaluation/
[4] https://www.youtube.com/watch?v=PN3TN4wrFZs
[5] https://meta.wikimedia.org/wiki/Grants_talk:Evaluation/Evaluation_reports/20...
[6] http://blog.wikimedia.org/2015/04/22/first-2015-wikimedia-programs-evaluatio...
On Thu, May 7, 2015 at 3:14 PM, Maria Cruz mcruz@wikimedia.org wrote:
<snip>
All in all it is good to have something 'to shoot at', but I would prefer that these reports be produced more in concert with the stakeholders involved and affected, rather than 'announced' and 'presented' to the wider community.
This isn't true. We always reach out to program leaders to engage them in data collection. Further, had you taken part in the event, or even watched it, or read the blog post we wrote [6], you would have seen that nothing is presented or announced; rather, it is open for discussion and conversation.
<snip>
Sure, the team did reach out in the collection phase - after all, without the data such an evaluation would be impossible. But after that, the conclusions were drafted and shared with the wider community, rather than with the stakeholders involved, to discuss interpretation. And I admit to not watching the full video (the event itself was during working hours in Europe, which is not compatible with my job) but only parts of it - and it came across as very presentation-heavy to me. But maybe I was unlucky in that. Either way, all communication seemed to be aimed at announcing the evaluation, rather than asking for active input on whether the analysis made sense, whether there were misunderstandings, etc. But maybe you have had a lot of one-to-one follow-up discussions with the people you collected data from, which would be admirable.
Again, I do appreciate the effort; I don't agree with the approach and process.
Best, Lodewijk
Hi Lodewijk,
Thanks for your feedback about the process. It's been very valuable.
I have a few follow-up questions below:
Sure, the team did reach out in the collection phase - after all, without the data such an evaluation would be impossible. But after that, the conclusions were drafted and shared with the wider community, rather than with the stakeholders involved, to discuss interpretation.
Can you say more about which stakeholders? Do you have ideas about how we might include them in the future, for example through the Wiki Loves Monuments mailing list, or were you thinking of some other way?
Either way, all communication seemed to be aimed at announcing the evaluation, rather than asking for active input on whether the analysis made sense, whether there were misunderstandings, etc. But maybe you have had a lot of one-to-one follow-up discussions with the people you collected data from, which would be admirable.
We tried to encourage input and questions through the next steps and on the talk page, but it sounds like this might not have been enough. How do you think we can do this better next time? Is there anything specific that stands out to you, beyond sharing with stakeholders beforehand?
Thanks so much, Edward
Hi Edward,
Thanks for the questions. The Wiki Loves Monuments mailing list would have been a very logical starting place to ask for initial feedback. But so would sending an email to the people who shared their data with you in the first place, or to people who have worked on internal evaluations of these projects before.
The feeling has been created that right now the 'damage is done': the report is published, and you have done all you could to make sure that community members are as aware as possible of what you consider the conclusions. That means that any feedback now becomes somewhat moot. We have seen this before with Foundation publications (e.g. statistics on the chapters): once something is announced to the community at large, feedback often doesn't get incorporated any more (I hope this time it does!), and even if it is, the "facts" have already found their way into other publications like the Signpost. Asking for feedback is most valuable *before* you announce, and proactively. You could (even better) consider involving those stakeholders even earlier in the process, which makes it less of a black box.
I strongly believe that this would improve the quality of the work you do. Some of the basic flaws will still remain due to the basic setup of the evaluation framework (the assumption that all WLM contests are comparable, etc.), but others could be managed better.
Best, Lodewijk
Hi,
I wasn't involved in this evaluation, but I would like to say that, as someone who recently worked for WMF Learning and Evaluation, I believe that the L&E team is interested in producing useful and accurate reports. So, I am optimistic that feedback from the community about methodology and communications will be carefully considered in future work plans for the L&E team.
Also, I will mention that Cascadia Wikimedians plans to participate in Summer of Monuments, and we will look at L&E reports for ideas and data about effective practices in Wiki Loves Monuments and other programmatic work. These reports will be, I hope, not just about numerical accountability but also about sharing stories, ideas, and qualitative information.
Regards,
Pine
Thanks Lodewijk for answering my questions. I don't find your feedback moot and it's actually quite helpful. From what you're saying, it sounds like opening up feedback to those who reported data would help to solidify the content of the report before pushing the announcement publicly. We have 8 more program reports to publish and I'm starting to think of ways we might include a window for this kind of feedback, but I would need to check with the rest of the team and our timelines to know what is feasible. We have been extra busy these last few months.
Also, to clarify: we don't assume that all Wiki Loves Monuments contests are the same. The metrics we collect are fairly broad and not exhaustive, so that we can first understand the collective impact of the program and then dig deeper and learn about the contests in greater detail afterward. In the coming months we will be doing one-on-one interviews with organizers to surface the processes and goals of several photo contests, and to learn what works and what doesn't in different contexts. That will be the opportunity to explore these assumptions and questions.
Thanks again for your helpful suggestions, Edward
Hello my friends,
I haven't had the opportunity to organize a WLM contest yet, but I organized the Brazilian WLE last year and I am running the same contest here in Brazil again this year.
Quantitative analyses are always easier to do than qualitative ones. In this case the WMF is always trying to show how the money was spent, how much, and the direct impact of that effort.
So basically we are always measuring with metrics that convert money/time into edits, new-user engagement and retention.
IMHO, WLE and WLM are bigger than this; these projects are very complex, with many types of results and with direct and, mainly, indirect impacts on the Wikimedia projects and the movement in general.
For example, last year we organized an exhibition https://pt.wikipedia.org/wiki/Wikip%C3%A9dia:Wiki_Loves_Earth_2014/Brasil/Ex... with the top 500 photos collected during WLE Brasil 2014. During that exhibition many people of different classes, ages, cities and levels of knowledge passed by, read the contest descriptions, and got more information about the photo contest, WLE, Wikipedia, Commons and the Wikimedia movement. Since we can't track the visitors' behavior after the exhibition, we can't say anything about the results of that activity based on the regular, default metrics. So, for things like that, we can't publish any final report with the real impact of the exhibition translated into numbers. IMHO the WLM report doesn't reflect the real impact of the contest; we can see there only some simple numbers and conversions, and this kind of report can generate a lot of misunderstanding when picked up by the regular media.
Best regards
Rodrigo Padula Wikimedia Brazilian Group of Education and Research PPGI/UFRJ