A recent article that may be of interest to members of this list:
Classification and Its Consequences for Online Harassment: Design Insights from HeartMob
Online harassment is a pervasive and pernicious problem. Techniques like natural language
processing and machine learning are promising approaches for identifying abusive language,
but they fail to address structural power imbalances perpetuated by automated labeling and
classification. Similarly, platform policies and reporting tools are designed for a
seemingly homogeneous user base and do not account for individual experiences and systems
of social oppression. This paper describes the design and evaluation of HeartMob, a
platform built by and for people who are disproportionately affected by the most severe
forms of online harassment. We conducted interviews with 18 HeartMob users, both targets
and supporters, about their harassment experiences and their use of the site. We examine
systems of classification enacted by technical systems, platform policies, and users to
demonstrate how 1) labeling serves to validate (or invalidate) harassment experiences; 2)
labeling motivates bystanders to provide support; and 3) labeling content as harassment is
critical for surfacing community norms around appropriate user behavior. We discuss these
results through the lens of Bowker and Star’s classification theories and describe
implications for labeling and classifying online abuse. Finally, informed by
intersectional feminist theory, we argue that fully addressing online harassment requires
the ongoing integration of vulnerable users’ needs into the design and moderation of
online platforms.
http://www.lindsayblackwell.net/wp-content/uploads/2017/11/Classification-a…