These are all very nice sentiments. But they're phrased in very vague ways.
Is there anywhere we can see the actual concrete plan for the use of these funds?
Todd
On Thu, Jan 26, 2017 at 7:30 PM, Samantha Lien slien@wikimedia.org wrote:
This press release is also available online here: https://wikimediafoundation.org/wiki/Press_releases/Wikimedia_Foundation_receives_$500,000_from_the_Craig_Newmark_Foundation_and_craigslist_Charitable_Fund_to_support_a_healthy_and_inclusive_Wikimedia_community
And as a blog post on the Wikimedia blog here:
https://blog.wikimedia.org/2017/01/26/community-health-initiative-grant/
Wikimedia Foundation receives $500,000 from the Craig Newmark Foundation and craigslist Charitable Fund to support a healthy and inclusive Wikimedia community
Grant supports development of more advanced tools for volunteers and staff to reduce harassing behavior on Wikipedia and block harassers from the site
SAN FRANCISCO — January 26, 2017 — Today, the Wikimedia Foundation announced the launch of a community health initiative to address harassment and toxic behavior on Wikipedia, with initial funding of US$500,000 from the Craig Newmark Foundation and craigslist Charitable Fund. The two seed grants, each US$250,000, will support the development of tools for volunteer editors and staff to reduce harassment on Wikipedia and block harassers.
Approximately 40% of internet users http://www.pewinternet.org/2014/10/22/online-harassment/, and as many as 70% of younger users have personally experienced harassment online, with regional studies showing rates as high as 76% https://www.symantec.com/en/au/about/newsroom/press-releases/2016/symantec_0309_01 for young women. While harassment differs across the internet, on Wikipedia and other Wikimedia projects, harassment has been shown to reduce participation on the sites. More than 50% https://upload.wikimedia.org/wikipedia/commons/5/52/Harassment_Survey_2015_-_Results_Report.pdf of people who reported experiencing harassment also reported decreasing their participation in the Wikimedia community.
Volunteer editors on Wikipedia are often the first line of response for finding and addressing harassment on Wikipedia. "Trolling https://en.wikipedia.org/wiki/Internet_troll," "doxxing https://en.wikipedia.org/wiki/Doxing," and other menacing behaviors are burdens to Wikipedia's contributors, impeding their ability to do the writing and editing that makes Wikipedia so comprehensive and useful. This program seeks to respond to requests from editors over the years for better tools and support for responding to harassment and toxic behavior.
“To ensure Wikipedia’s vitality, people of good will need to work together to prevent trolling, harassment and cyber-bullying from interfering with the common good,” said Craig Newmark, founder of craigslist. “To that end, I'm supporting the work of the Wikimedia Foundation towards the prevention of harassment.”
The initiative is part of a commitment to community health at the Wikimedia Foundation, the non-profit organization that supports Wikipedia and the other Wikimedia projects, in collaboration with the global community of volunteer editors. In 2015, the Foundation published its first Harassment Survey https://meta.wikimedia.org/wiki/Research:Harassment_survey_2015 about the nature of the issue in order to identify key areas of concern. In November 2016, the Wikimedia Foundation Board of Trustees issued a statement of support https://meta.wikimedia.org/wiki/Wikimedia_Foundation_Board_noticeboard/November_2016_-_Statement_on_Healthy_Community_Culture,_Inclusivity,_and_Safe_Spaces calling for a more “proactive” approach to addressing harassment as a barrier to healthy, inclusive communities on Wikipedia.
"If we want everyone to share in the sum of all knowledge, we need to make sure everyone feels welcome,” said Katherine Maher, Executive Director of the Wikimedia Foundation. “This grant supports a healthy culture for the volunteer editors of Wikipedia, so that more people can take part in sharing knowledge with the world."
The generous funding from the Craig Newmark Foundation and craigslist Charitable Fund will support the initial phase of a program https://meta.wikimedia.org/wiki/Community_health_initiative to strengthen existing tools and develop additional tools to more quickly identify potentially harassing behavior, and help volunteer administrators evaluate harassment reports and respond effectively. These improvements will be made in close collaboration with the Wikimedia community to evaluate, test, and give feedback on the tools as they are developed.
This initiative addresses the major forms of harassment reported on the Wikimedia Foundation’s 2015 Harassment Survey https://meta.wikimedia.org/w/index.php?title=File:Harassment_Survey_2015_-_Results_Report.pdf&page=17, which covers a wide range of different behaviors: content vandalism, stalking, name-calling, trolling, doxxing, discrimination—anything that targets individuals for unfair and harmful attention. From research and community feedback, four areas have been identified where new tools could be beneficial in addressing and responding to harassment:
- Detection and prevention - making it easier and faster for editors to identify and flag harassing behavior
- Reporting - providing victims and respondents of harassment improved ways to report instances that offer a clearer, more streamlined approach
- Evaluating - supporting tools that help volunteers better evaluate harassing behavior and inform the best way to respond
- Blocking - making it more difficult for someone who is blocked from the site to return
For more information, please visit: https://meta.wikimedia.org/wiki/Community_health_initiative
About the Wikimedia Foundation
The Wikimedia Foundation is the non-profit organization that supports and operates Wikipedia and its sister projects. More than a billion unique devices access the Wikimedia sites each month. Roughly 75,000 people edit Wikipedia and its sister projects every month, collectively creating and improving its more than 40 million articles across hundreds of languages – this all makes Wikipedia one of the most popular web properties in the world. Based in San Francisco, California, the Wikimedia Foundation is a 501(c)(3) charity that is funded primarily through donations and grants.
About Wikipedia
Wikipedia is the world’s free knowledge resource. It is a collaborative creation that has been added to and edited by millions of people from around the globe since it was created in 2001: anyone can edit it, at any time. Wikipedia is offered in hundreds of languages containing more than 40 million articles. Wikimedia and its sister projects are collectively visited by more than a billion unique devices each month.
Harassment takes different forms on Wikipedia than it does on other major websites. Unlike other platforms, Wikipedia editors generally don’t write about their personal lives. Instead, on Wikipedia, harassment usually begins as a content dispute between editors that results in an attack on an editor’s personal attributes—their gender, race, religion, sexual orientation, political affiliation—based on something that they’ve shared, or an assumption based on the user’s edit history.
About the Craig Newmark Foundation
The Craig Newmark Foundation (CNF) is a private foundation created by craigslist founder Craig Newmark in 2016 to support and connect nonprofit communities and drive powerful civic engagement. The Foundation’s priorities include Trustworthy Journalism, Veterans and Military Families, Voter Protection and Education, Consumer Protection and Education, Public Diplomacy, Government Transparency, Micro-Lending to Alleviate Poverty, and Women in Tech.
About craigslist Charitable Fund
The craigslist Charitable Fund (CCF) provides millions of dollars each year in one-time and recurring grants to hundreds of partner organizations addressing four broad areas of interest including Environment and Transportation; Education, Rights, Justice, and Reason; Nonviolence, Veterans and Peace; and Journalism, Open Source, and Internet.
Press contacts
Craig Newmark Foundation
Bruce Bonafede
press@craigconnects.org
Wikimedia Foundation
Juliet Barbara
jbarbara@wikimedia.org
(415) 839-6885
--
Samantha Lien
Communications Manager
Wikimedia Foundation
149 New Montgomery Street
San Francisco, CA 94105
Hi Todd,
You can take a look at the grant proposal (also linked to from https://meta.wikimedia.org/wiki/Community_health_initiative) here: https://upload.wikimedia.org/wikipedia/commons/d/df/Wikimedia_Foundation_grant_proposal_-_Anti-Harassment_Tools_For_Wikimedia_Projects_-_2017.pdf
Pages 6–14 should be relevant.
//Johan Jönsson
Hi Johan,
Thanks for the link, very insightful indeed. Glad to see these documents made public.
Do I understand correctly that this particular initiative will focus on fighting harassment, and not necessarily on preventing it? Basically following a similar pattern to how vandalism is fought on most Wikipedia projects?
I really hope that prevention, education and (social) training will become a major part of the overall agenda, but I can imagine that we can't pay for all of that from a single grant :) So I would just like to place it in the proper context.
Best, Lodewijk
+1 Spot on.
The plan appears to hinge on blocks as the outcome. Based on cases of long-term harassment targeted at individuals, which have invariably involved off-wiki doxxing or contacting friends and family members of the target, blocking Wikimedia accounts may remove Wikimedia projects as a platform but does little to help reform the person causing the harassment. I would rather see systems that include reaching out to the apparent harasser to help them recognize and deal with their anger or obsessive issues. Treating badly behaved individuals as the "other", without aiming for a lasting resolution, means we are back to the old days of telling the unfortunate target/victim to change their identity or grow a thicker skin, as the online harassment may never stop.
Fae
The project has four focus areas, and blocking is just one of them. Here's the whole picture:
* Detection and prevention: Using machine learning to help flag situations for admin review -- both flagging text that looks harassing or aggressive, and modeling patterns of user interaction, like stalking and hounding, before the situation gets out of control.
* Reporting: Building a new system to encourage editors to reach out for help, in a way that's less chaotic and stressful than the current system.
* Evaluation: Giving admins and others tools that help them evaluate harassment cases, and make good decisions.
* Blocking: Making it more difficult for banned users to come back.
We'll be actively working on all four areas. There aren't a ton of details right now about exactly what we'll build, for a couple reasons. The product manager and the analyst haven't started yet, and the research that they do will generate a lot of new ideas and insights. Also, we're going to work closely with the community -- talking to people with different roles and perspectives, and making plans in collaboration with contributors who are interested in these issues. So there's lots of work and thinking and consulting to do.
But here's one idea that I'm personally excited about, which I think helps to explain why we're focusing on tools:
Right now, when two people end up at AN/I, the only way to figure out whose version of the story to believe is by looking at individual, cherry-picked diffs. You can also look through the two editors' contributions, but if they're both active editors and the problem has been going on for a while, then it's very difficult to get a sense of what's going on. Sometimes it really matters who did what first, and you have to correlate the two contribution logs and pay attention to timestamps.
The idea is: build a tool that helps admins (and others) follow the "story" of this conflict. Look for the pages where the two editors have interacted, and show a timeline that helps you see what happened first, how they responded, and how the drama unfolded. That could reduce the time cost of investigating and evaluating considerably, making it much easier for an admin or mediator to get involved.
There are lots of UI questions about how that would work and what it would look like, but I don't think it would be too difficult on the tech side. The information is already there in the contributions; it's just difficult to correlate by hand.
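To make that concrete, here is a minimal sketch (in Python, purely illustrative and not part of any announced plan) of the kind of correlation such a tool would do: merge two editors' contribution lists into one chronological timeline, restricted to the pages where both have edited. It assumes the contributions have already been fetched (for example from the MediaWiki API) into simple records; the Contribution fields and the interaction_timeline function are hypothetical names, not existing tools.

    from dataclasses import dataclass
    from datetime import datetime
    from typing import Iterable, List

    @dataclass
    class Contribution:
        user: str            # editor's username
        page: str            # title of the page that was edited
        timestamp: datetime  # when the edit was saved
        comment: str         # edit summary

    def interaction_timeline(a: Iterable[Contribution],
                             b: Iterable[Contribution]) -> List[Contribution]:
        """Merge two editors' contributions into one chronological list,
        keeping only the pages where both editors have edited."""
        a, b = list(a), list(b)
        shared_pages = {c.page for c in a} & {c.page for c in b}
        merged = [c for c in a + b if c.page in shared_pages]
        return sorted(merged, key=lambda c: c.timestamp)

    # Example: show who did what first on the pages both editors touched.
    alice = [Contribution("Alice", "Some article", datetime(2017, 1, 3, 12, 0), "copyedit"),
             Contribution("Alice", "Talk:Some article", datetime(2017, 1, 3, 13, 0), "reply")]
    bob = [Contribution("Bob", "Some article", datetime(2017, 1, 3, 12, 30), "revert"),
           Contribution("Bob", "Another article", datetime(2017, 1, 2, 9, 0), "expand")]
    for c in interaction_timeline(alice, bob):
        print(f"{c.timestamp:%Y-%m-%d %H:%M}  {c.user:<6} {c.page}  ({c.comment})")

An actual tool would group these by page and link each entry to its diff, but the core correlation is essentially a merge-and-sort like this.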
Assuming it works, that tool could have a lot of good outcomes. Admins would be more likely to take on harassment cases, because there'd be greater return for the time investment. It would take some of the burden off the target, so they don't have to figure out which individual diffs they should provide in order to make their case. Also, it would be harder for harassers to get away with mistreating people, because they wouldn't be able to hide behind a smokescreen of random diffs.
As folks on this thread have said, there are lots of other components to tackling the harassment problems. There will probably be groups of admins and others who are especially interested in helping with the reporting and evaluation, and the Foundation could provide trainings and resources for those groups. Making changes to the reporting system will involve a lot of community discussions about policies and competing values. Some of those conversations and plans will probably be led by the Foundation, and some of them will arise naturally within the community.
For this specific team -- the Community Tech product team, working with the community advocate -- our focus is on doing research and building tools that will support those conversations and plans. We're not going to take over the community's proper role in setting policy, or making decisions about how to handle cases.
To Fæ's point, the community will determine the social and cultural decisions about how to treat harassment cases, and our team's job is to build software that will help to put those decisions into practice.
Right on. Your enthusiasm is infectious, Danny. Congratulations to all who are making this a reality.
/a
Thanks Danny for the elaboration.
I don't want to contest the value of this work at all - sorry if that seemed implied. I think it's an effort that may be quite necessary - especially in some communities.
The set of tools you describe all seem to relate to a process that eventually leads to blocking people from our sites. That is what triggered my response. This process may be necessary in a number of cases (unfortunately), and helpful for community health. But it is all 'after the fact' - once harassment has already taken place.
What I am curious about is whether there are also ongoing efforts focused on influencing community behavior in a more preventive manner. I'm not sure how that would work out in practice; I don't have the solution (although some ideas have been bouncing around). This work seems related to bullying in general - which unfortunately happens in schools and communities around the world - and research on that topic may help identify methods that could have a preventive effect. I have yet to see a 100% effective program, but it may strengthen the efforts toward a healthier community.
I can see that where these approaches are still being investigated, or are non-technical, the Community Tech team may be less suited to implementing them. But I do want to express my hope that somewhere in the Foundation (and affiliates), work is also being done on preventing bullying and harassment - besides handling it effectively - and that you keep that work in mind when developing these tools. Some overlap may exist: for example, I could imagine that if the harassment-identification tool is reliable enough, it could trigger warnings to users before they save their edit, or the scores could be used in admin applications (and for others in roles that set an example). An unrelated, more social approach would be to train community members on how to respond to poisonous behavior. I'm just thinking out loud here, and others may have much better approaches in mind (or may actually be working on them).
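(Purely to illustrate the overlap above, here is a hedged sketch, in Python, of what a pre-save warning gated on such a score might look like. Nothing like this is planned or announced; score_toxicity and the threshold are placeholders for whatever classifier and tuning a real initiative might produce.)

    from typing import Optional

    WARN_THRESHOLD = 0.8  # arbitrary illustrative cut-off; a real value would need tuning and community discussion

    def score_toxicity(text: str) -> float:
        """Placeholder for a harassment/aggression classifier returning a score in [0, 1]."""
        raise NotImplementedError("stand-in for a future model or scoring service")

    def pre_save_warning(edit_text: str) -> Optional[str]:
        """Return a warning to show the editor before saving, or None to save silently."""
        try:
            score = score_toxicity(edit_text)
        except NotImplementedError:
            return None  # no classifier available: never block or nag
        if score >= WARN_THRESHOLD:
            return ("This edit looks like it may come across as aggressive or harassing. "
                    "Please review it before saving.")
        return None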
Hope that clarifies a bit,
Best, Lodewijk
Oh, that's a really good point. For the product analyst job, we're hoping to hire someone who's already done research on online harassment, and can help us to learn from other people's approaches.
Your idea for using aggression/harassment scores in admin applications is really interesting; I hadn't thought of that before. Nothing's actually planned right now, just research and conversations, but it's neat to see people already coming up with interesting suggestions. :)
On Fri, Jan 27, 2017 at 9:17 AM, Lodewijk lodewijk@effeietsanders.org wrote:
What I am curious about is whether there are also ongoing efforts focused on influencing community behavior in a more preventive manner.
On 01/27/2017 09:54 AM, Danny Horn wrote:
Your idea for using aggression/harassment scores in admin applications is really interesting; I hadn't thought of that before. Nothing's actually planned right now, just research and conversations, but it's neat to see people already coming up with interesting suggestions. :)
I'm delighted to see this issue getting some attention. I believe the core of the problem comes from the WMF's identity, from the start, as a technology company; so shifting in this direction might be an uphill battle, but I feel strongly that it's the right way to go. I'd like to highlight my first answer in my brief candidacy for the WMF board in 2015 [1]:
The distinction between "the community" and "newcomers" is a false and dangerously misleading one. It does not accurately reflect reality. I have had numerous students, clients, and friends who believe "the community" or "Wikipedia" was unwelcoming; but on closer inspection, the one comment that formed that opinion in fact came from somebody who was newer than "the newbie." If civility and collegiality on our sites is an issue -- and it is -- the artificial idea that "the community" is mean, and in need of reform, will not move us toward a solution.
Yes, this is a matter the Board should take very seriously. The Board should seek the guidance of social scientists and experienced practitioners in social movements. Lecturing and assigning blame (example: [2]) may bring applause and headlines, but it will not lead to solutions. The solution to this kind of problem lies in studying what works well in our communities and others, and cultivating leadership. Social practices are a good medium for spreading social solutions; we should be more skeptical of technical approaches.
I elaborated on what I see as the WMF's problematic cultivation of a culture of blame and exclusion in a blog post. [3]
Coincidentally, the most interesting idea I'm aware of in this realm comes from a former Wikia employee I know named...Danny Horn, who invented a system to facilitate rapid introductions between new and experienced users. It's one we might do well to try out on Wikimedia projects, perhaps in connection with the Teahouse.
-Pete
[[User:Peteforsyth]]
[1] https://meta.wikimedia.org/wiki/User:Peteforsyth/2015_board_election_Petefor...
[2] https://commons.wikimedia.org/wiki/File:Jimmy_Wales_at_Wikimania_2014_closin...
Hey,
I'm delighted to see this issue getting some attention. I believe the core of the problem comes from the WMF's identity, from the start, as a technology company; so shifting in this direction might be an uphill battle, but I feel strongly that it's the right way to go.
Be careful there, we're agreeing! :D
Joking aside, I'm not sure it is an uphill battle, but that is a shift I believe we - not just the WMF, but all of us as a community - need to make: from a mere "tool" to a movement. That means the tech and the platform are a way to enable us to achieve our goals. But our goals aren't technical, they're societal. We're a people movement, not a tech movement :)
On 02/06/2017 12:43 AM, Christophe Henner wrote:
I'm delighted to see this issue getting some attention. I believe the core of the problem comes from the WMF's identity, from the start, as a technology company; so shifting in this direction might be an uphill battle, but I feel strongly that it's the right way to go.
Be careful there, we're agreeing! :D
Joking aside, I'm not sure it is an uphill battle, but that is a shift I believe we - not just the WMF, but all of us as a community - need to make: from a mere "tool" to a movement. That means the tech and the platform are a way to enable us to achieve our goals. But our goals aren't technical, they're societal. We're a people movement, not a tech movement :)
Never a surprise to find agreement with a Wikimedian in general, or with you in particular -- but I'm glad to hear it! I am heartened to hear that you believe this kind of shift is attainable, and look forward to seeing the WMF make some decisive moves toward centering on social dynamics before technical innovation.
One more past blog post of mine, which I think expresses the value of transitioning away from a tech focus, and toward a social focus: https://wikistrategies.net/wikimedia-needs-trustee/ (Please ignore the framing of "what WM needs in a trustee"; I should probably republish this to be a bit more generic.)
-Pete [[User:Peteforsyth]]
Actually Lodewijk, it's happening not too far from you. Wikimedia Nederland [1] has been working on this for a while, quietly, with small samples and small steps, but with good results and most importantly, a lot of hope and resilience to pursue this really really hard work.
Delphine
[1] https://meta.wikimedia.org/wiki/Grants:APG/Proposals/2016-2017_round1/Wikimedia_Nederland/Proposal_form#Program_1:_Community_health
Hi all,
A number of staff and volunteers have been talking about community health for some time now, and I think most can agree that technical improvements alone don't represent a comprehensive approach to the problem. While we believe they can substantially help those working on the front lines to deal with issues, it is true that there is much work to be done on reducing the number and severity of problems on the social side. As I mentioned in an earlier post [1] on the topic, we need improvements in how we as a community both define and deal with problem behaviour. The Wikimedia Foundation is working in other areas as well, and hopes to further help communities research what is working and what is not, and to provide support for trialing new approaches and processes.
The Support and Safety team at the Wikimedia Foundation is currently making progress on the development of training modules on both keeping events safe [3] and dealing with online harassment [4]. Making use of community input and feedback, we're hoping to publish these in multiple languages by the beginning of the summer. We know that training alone will not eliminate harassment, but it will allow for the development of best practices in handling harassment online and at events, and help those practices become more widespread on the Wikimedia projects.
Some challenging harassment situations arise from longstanding unresolved disputes between contributors. Asaf Bartov has done some innovative work with communities on identifying more effective methods of resolving conflicts - you can see his presentation at the recent Metrics meeting [2], and there will be a more detailed report on this initiative next week. Improved dispute resolution practices could be of use on other projects as well, through the Community Capacity Development program or through other initiatives, which the Wikimedia Foundation may be able to support.
Our movement also has a variety of different policy approaches to bad behaviour and different enforcement practices in different communities. Some of these work well; others, perhaps not so much. The Foundation can support communities by helping research the effectiveness of these policies and practices, and we can work with contributors to trial new approaches.
We plan on proposing more of these types of approaches in our upcoming Annual Plan process over the next few months, and we are working to make anti-harassment programs more cross-disciplinary and collaborative between the technical and community teams. As Delphine mentions, affiliates have already taken a lead on some new initiatives, and we must help scale those improvements to the larger movement.
I think this thread illustrates how we can continue brainstorming on the sometimes less-straightforward social approaches to harassment mitigation (Lodewijk came up with some intriguing ideas above) and find ways forward that combine effective tools and technical infrastructure with an improved social environment.
[1] https://lists.wikimedia.org/pipermail/wikimedia-l/2016-December/085668.html
[2] https://youtu.be/6fF4xLHkZe4?t=19m
[3] https://meta.wikimedia.org/wiki/Training_modules/Keeping_events_safe/drafting
[4] https://meta.wikimedia.org/wiki/Training_modules/Online_harassment/drafting
Hello,
First, I am of course very happy about the attention and support from Mr. Newmark.
But I am wondering about the special focus on "tools"; harassment is a problem on the social level, not the technical one. Also, after all these years in which we have been talking about harassment, I find it difficult to trust our Wikimedia institutions to come up with an effective approach...
Kind regards
2017-01-27 3:47 GMT+01:00 Todd Allen toddmallen@gmail.com:
These are all very nice sentiments. But they're phrased in very vague ways.
Is there anywhere we can see the actual concrete plan for the use of these funds?
Todd
On Thu, Jan 26, 2017 at 7:30 PM, Samantha Lien slien@wikimedia.org wrote:
This press release is also available online here: https://wikimediafoundation.org/wiki/Press_releases/ Wikimedia_Foundation_receives_$500,000_from_the_Craig_ Newmark_Foundation_and_craigslist_Charitable_Fund_to_ support_a_healthy_and_inclusive_Wikimedia_community <https://wikimediafoundation.org/wiki/Press_releases/
Wikimedia_Foundation_receives_$500,000_from_the_Craig_ Newmark_Foundation_and_craigslist_Charitable_Fund_to_ support_a_healthy_and_inclusive_Wikimedia_community>
And as a blog post on the Wikimedia blog here:
https://blog.wikimedia.org/2017/01/26/community-health-initiative-grant/
Wikimedia Foundation receives $500,000 from the Craig Newmark Foundation and craigslist Charitable Fund to support a healthy and inclusive
Wikimedia
community
Grant supports development of more advanced tools for volunteers and
staff
to reduce harassing behavior on Wikipedia and block harassers from the
site
SAN FRANCISCO — January 26, 2017 — Today, the Wikimedia Foundation announced the launch of a community health initiative to address
harassment
and toxic behavior on Wikipedia, with initial funding of US$500,000 from the Craig Newmark Foundation and craigslist Charitable Fund. The two seed grants, each US$250,000, will support the development of tools for volunteer editors and staff to reduce harassment on Wikipedia and block harassers.
Approximately 40% of internet users http://www.pewinternet.org/2014/10/22/online-harassment/, and as many as 70% of younger users have personally experienced harassment online,
with
regional studies showing rates as high as 76% <https://www.symantec.com/en/au/about/newsroom/press-
releases/2016/symantec_0309_01>
for young women. While harassment differs across the internet, on
Wikipedia
and other Wikimedia projects, harassment has been shown to reduce participation on the sites. More than 50% <https://upload.wikimedia.org/wikipedia/commons/5/52/
Harassment_Survey_2015_-_Results_Report.pdf>
of people who reported experiencing harassment also reported decreasing their participation in the Wikimedia community.
Volunteer editors on Wikipedia are often the first line of response for finding and addressing harassment on Wikipedia. "Trolling https://en.wikipedia.org/wiki/Internet_troll," "doxxing https://en.wikipedia.org/wiki/Doxing," and other menacing behaviors are burdens to Wikipedia's contributors, impeding their ability to do the writing and editing that makes Wikipedia so comprehensive and useful. This program seeks to respond to requests from editors over the years for better tools and support for responding to harassment and toxic behavior.
“To ensure Wikipedia’s vitality, people of good will need to work together to prevent trolling, harassment and cyber-bullying from interfering with the common good,” said Craig Newmark, founder of craigslist. “To that end, I'm supporting the work of the Wikimedia Foundation towards the prevention of harassment.”
The initiative is part of a commitment to community health at the Wikimedia Foundation, the non-profit organization that supports Wikipedia and the other Wikimedia projects, in collaboration with the global community of volunteer editors. In 2015, the Foundation published its first Harassment Survey https://meta.wikimedia.org/wiki/Research:Harassment_survey_2015 about the nature of the issue in order to identify key areas of concern. In November 2016, the Wikimedia Foundation Board of Trustees issued a statement of support https://meta.wikimedia.org/wiki/Wikimedia_Foundation_Board_noticeboard/November_2016_-_Statement_on_Healthy_Community_Culture,_Inclusivity,_and_Safe_Spaces calling for a more “proactive” approach to addressing harassment as a barrier to healthy, inclusive communities on Wikipedia.
"If we want everyone to share in the sum of all knowledge, we need to
make
sure everyone feels welcome,” said Katherine Maher, Executive Director of the Wikimedia Foundation. “This grant supports a healthy culture for the volunteer editors of Wikipedia, so that more people can take part in sharing knowledge with the world."
The generous funding from the Craig Newmark Foundation and craigslist Charitable Fund will support the initial phase of a program https://meta.wikimedia.org/wiki/Community_health_initiative to strengthen existing tools and develop additional tools to more quickly identify potentially harassing behavior, and to help volunteer administrators evaluate harassment reports and respond effectively. These improvements will be made in close collaboration with the Wikimedia community, which will evaluate, test, and give feedback on the tools as they are developed.
This initiative addresses the major forms of harassment reported in the Wikimedia Foundation’s 2015 Harassment Survey https://meta.wikimedia.org/w/index.php?title=File:Harassment_Survey_2015_-_Results_Report.pdf&page=17, which cover a wide range of behaviors: content vandalism, stalking, name-calling, trolling, doxxing, discrimination—anything that targets individuals for unfair and harmful attention. From research and community feedback, four areas have been identified where new tools could be beneficial in addressing and responding to harassment:
- Detection and prevention - making it easier and faster for editors to identify and flag harassing behavior (a purely illustrative sketch follows this list)
- Reporting - giving victims and respondents of harassment improved, clearer, and more streamlined ways to report instances
- Evaluating - supporting tools that help volunteers better evaluate harassing behavior and inform the best way to respond
- Blocking - making it more difficult for someone who is blocked from the site to return
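To make the "detection" idea concrete, here is a minimal, purely hypothetical sketch in Python. It is not the Foundation's plan or code; it simply polls the public MediaWiki recent-changes API and flags edit summaries matching a placeholder word list. The looks_harassing() heuristic and WATCH_LIST are invented stand-ins for whatever model and community-defined policy an actual tool would use.

    # Illustrative sketch only; not the Foundation's actual tooling.
    # Requires the third-party "requests" package.
    import requests

    API_URL = "https://en.wikipedia.org/w/api.php"

    # Hypothetical placeholder; a real tool would rely on a trained model
    # and community policy, not a hard-coded word list.
    WATCH_LIST = {"idiot", "stupid", "get lost"}

    def looks_harassing(text):
        """Hypothetical stand-in for a harassment-detection model."""
        lowered = (text or "").lower()
        return any(term in lowered for term in WATCH_LIST)

    def flag_recent_changes(limit=50):
        """Fetch recent edits from the public API and yield suspicious ones."""
        params = {
            "action": "query",
            "list": "recentchanges",
            "rcprop": "title|user|comment|timestamp",
            "rclimit": limit,
            "format": "json",
        }
        data = requests.get(API_URL, params=params, timeout=10).json()
        for change in data["query"]["recentchanges"]:
            if looks_harassing(change.get("comment", "")):
                yield change

    if __name__ == "__main__":
        for change in flag_recent_changes():
            print(change["timestamp"], change.get("user", "?"),
                  change.get("title", "?"), change.get("comment", ""), sep=" | ")

In practice, any real tool would need far more nuance (conversation context, editor history, appeal paths) than a keyword match can provide, which is presumably why the grant pairs tool development with community evaluation and feedback.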
For more information, please visit: https://meta.wikimedia.org/wiki/Community_health_initiative
About the Wikimedia Foundation
The Wikimedia Foundation is the non-profit organization that supports and operates Wikipedia and its sister projects. More than a billion unique devices access the Wikimedia sites each month. Roughly 75,000 people edit Wikipedia and its sister projects every month, collectively creating and improving its more than 40 million articles across hundreds of languages – this all makes Wikipedia one of the most popular web properties in the world. Based in San Francisco, California, the Wikimedia Foundation is a 501(c)(3) charity that is funded primarily through donations and grants.
About Wikipedia
Wikipedia is the world’s free knowledge resource. It is a collaborative creation that has been added to and edited by millions of people from around the globe since it was created in 2001: anyone can edit it, at any time. Wikipedia is offered in hundreds of languages and contains more than 40 million articles. Wikipedia and its sister projects are collectively visited by more than a billion unique devices each month.
Harassment takes different forms on Wikipedia than it does on other major websites. Unlike on other platforms, Wikipedia editors generally don’t write about their personal lives. Instead, on Wikipedia, harassment usually begins as a content dispute between editors that results in an attack on an editor’s personal attributes—their gender, race, religion, sexual orientation, political affiliation—based on something that they’ve shared, or on an assumption drawn from the user’s edit history.
About the Craig Newmark Foundation
The Craig Newmark Foundation (CNF) is a private foundation created by craigslist founder Craig Newmark in 2016 to support and connect nonprofit communities and drive powerful civic engagement. The Foundation’s priorities include Trustworthy Journalism, Veterans and Military Families, Voter Protection and Education, Consumer Protection and Education, Public Diplomacy, Government Transparency, Micro-Lending to Alleviate Poverty, and Women in Tech.
About craigslist Charitable Fund
The craigslist Charitable Fund (CCF) provides millions of dollars each year in one-time and recurring grants to hundreds of partner organizations addressing four broad areas of interest including Environment and Transportation; Education, Rights, Justice, and Reason; Nonviolence, Veterans and Peace; and Journalism, Open Source, and Internet.
Press contacts
Craig Newmark Foundation
Bruce Bonafede
press@craigconnects.org
Wikimedia Foundation
Juliet Barbara
jbarbara@wikimedia.org
(415) 839-6885
--
Samantha Lien
Communications Manager
Wikimedia Foundation
149 New Montgomery Street
San Francisco, CA 94105