Hi all,
A number of staff and volunteers have been talking about community health for some time now, and I think most can agree that technical improvements alone don’t represent a comprehensive approach to the problem. While we believe they can substantially help those working on the front lines to deal with issues, there is still much work to be done on reducing the number and severity of problems on the social side. As I mentioned in an earlier post on the topic [1], improvements are needed in how we as a community both define and deal with problem behaviour. The Wikimedia Foundation is working in other areas as well and hopes to further help communities research what is working and what is not, and to provide support for trialing new approaches and processes.
The Support and Safety team at the Wikimedia Foundation is currently developing training modules on keeping events safe [3] and dealing with online harassment [4]. Drawing on community input and feedback, we're hoping to publish these in multiple languages by the beginning of the summer. We know that training alone will not eliminate harassment, but it will allow for the development of best practices in handling harassment online and at events, and help those practices become more widespread on the Wikimedia projects.
Some challenging harassment situations arise from longstanding unresolved disputes between contributors. Asaf Bartov has done some innovative work with communities on identifying more effective methods of resolving conflicts - you can see his presentation at the recent Metrics meeting [2], and a more detailed report on this initiative will follow next week. Improved dispute resolution practices could be useful on other projects as well, whether through the Community Capacity Development program or through other initiatives that the Wikimedia Foundation may be able to support.
Our movement also has a variety of policy approaches to bad behaviour, with enforcement practices that differ from community to community. Some of these work well; others, perhaps not so much. The Foundation can support communities by helping to research the effectiveness of these policies and practices, and we can work with contributors to trial new approaches.
We plan to propose more approaches of this kind in our upcoming Annual Plan process over the next few months, and we are working to make anti-harassment programs more cross-disciplinary and collaborative between the technical and community teams. As Delphine mentions, affiliates have already taken a lead on some new initiatives, and we must help scale those improvements to the larger movement.
I think this thread illustrates how we can continue brainstorming on the sometimes less-straightforward social approaches to harassment mitigation (Lodewijk came up with some intriguing ideas above) and find ways forward that combine effective tools and technical infrastructure with an improved social environment.
[1] https://lists.wikimedia.org/pipermail/wikimedia-l/2016-December/085668.html
[2] https://youtu.be/6fF4xLHkZe4?t=19m
[3] https://meta.wikimedia.org/wiki/Training_modules/Keeping_events_safe/drafting
[4] https://meta.wikimedia.org/wiki/Training_modules/Online_harassment/drafting
On Fri, Jan 27, 2017 at 12:51 PM, Delphine Ménard notafishz@gmail.com wrote:
On 27 January 2017 at 18:17, Lodewijk lodewijk@effeietsanders.org wrote: [snip]
What I am curious about, is whether there are also efforts ongoing that are focused on influencing community behavior in a more preventive manner. I'm not sure how that would work out in practice,
[snip]
But I do want to express my hope that somewhere in the Foundation (and affiliates), work is being done to also look at preventing bullying and harassment - besides handling it effectively. And that you maybe keep that work in mind, when developing these tools. Some overlap may exist - for example, I could imagine that if the harassment-identification tool is reliable enough, it could trigger warnings to users before they save their edit, or the scores could be used in admin applications (and for others with example-functions). A more social approach that is unrelated, would be to train community members on how to respond to poisonous behavior. I'm just thinking out loud here, and others may have much better approaches in mind (or actually work on them).
Actually Lodewijk, it's happening not too far from you. Wikimedia Nederland [1] has been working on this for a while, quietly, with small samples and small steps, but with good results and most importantly, a lot of hope and resilience to pursue this really really hard work.
Delphine
[1] https://meta.wikimedia.org/wiki/Grants:APG/Proposals/2016-2017_round1/Wikimedia_Nederland/Proposal_form#Program_1:_Community_health
-- @notafish