Hello all,
The next Wikimedia Research Showcase will be on Wednesday, November 17, at 17:30 UTC (9:30am PST / 12:30pm EST / 18:30 CET). The topic is content moderation.
Livestream: https://www.youtube.com/watch?v=Rx3xesDkp2o
*Amy S. Bruckman (Georgia Institute of Technology, USA): Is Deplatforming Censorship? What happened when controversial figures were deplatformed, with philosophical musings on the nature of free speech*
Abstract: When a controversial figure is deplatformed, what happens to their online influence? In this talk, first, I’ll present results from a study of the deplatforming from Twitter of three figures who repeatedly broke platform rules (Alex Jones, Milo Yiannopoulos, and Owen Benjamin). Second, I’ll discuss what happened when this study was on the front page of Reddit, and the range of angry reactions from people who say that they’re in favor of “free speech.” I’ll explore the nature of free speech, and why our current speech regulation framework is fundamentally broken. Finally, I’ll conclude with thoughts on the strength of Wikipedia’s model in contrast to other platforms, and highlight opportunities for improvement.
*Nathan TeBlunthuis (University of Washington / Northwestern University, USA): Effects of Algorithmic Flagging on Fairness: Quasi-experimental Evidence from Wikipedia*
Abstract: Online community moderators often rely on social signals such as whether or not a user has an account or a profile page as clues that users may cause problems. Reliance on these clues can lead to "overprofiling" bias when moderators focus on these signals but overlook the misbehavior of others. We propose that algorithmic flagging systems deployed to improve the efficiency of moderation work can also make moderation actions more fair to these users by reducing reliance on social signals and making norm violations by everyone else more visible. We analyze moderator behavior in Wikipedia as mediated by RCFilters, a system which displays social signals and algorithmic flags, and estimate the causal effect of being flagged on moderator actions. We show that algorithmically flagged edits are reverted more often, especially those by established editors with positive social signals, and that flagging decreases the likelihood that moderation actions will be undone. Our results suggest that algorithmic flagging systems can lead to increased fairness in some contexts but that the relationship is complex and contingent.
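To illustrate the kind of quasi-experimental estimate described in the abstract (this is not the authors' actual analysis), here is a minimal regression-discontinuity-style sketch comparing revert rates for edits just below and just above a flagging threshold. The data file, column names, threshold, and bandwidth are all hypothetical assumptions for illustration.

```python
# Minimal sketch of a regression-discontinuity-style estimate of the effect
# of algorithmic flagging on reverts. NOT the paper's actual analysis:
# the data file, column names, threshold, and bandwidth are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical edit-level data: one row per edit, with a damage-model score
# and whether the edit was reverted.
edits = pd.read_csv("edits.csv")  # assumed columns: score, reverted (0/1)

THRESHOLD = 0.5   # hypothetical score at or above which edits are flagged
BANDWIDTH = 0.05  # only compare edits close to the threshold

local = edits[(edits["score"] - THRESHOLD).abs() <= BANDWIDTH].copy()
local["flagged"] = (local["score"] >= THRESHOLD).astype(int)
local["centered_score"] = local["score"] - THRESHOLD

# Linear probability model with separate slopes on either side of the cutoff;
# the coefficient on `flagged` is the local estimate of the flagging effect.
model = smf.ols(
    "reverted ~ flagged + centered_score + flagged:centered_score",
    data=local,
).fit(cov_type="HC1")
print(model.summary())
```

Restricting to a narrow bandwidth around the cutoff approximates comparing edits that are otherwise similar except for whether the interface flagged them, which is the intuition behind this kind of quasi-experimental design.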
https://www.mediawiki.org/wiki/Wikimedia_Research/Showcase
Just a reminder that this event is on Wednesday.
The Research Showcase will be starting in about 30 minutes.
--
Janna Layton (she/her)
Administrative Associate - Product & Technology
Wikimedia Foundation
https://wikimediafoundation.org/