Article

The fabrics of machine moderation: Studying the technical, normative, and organizational structure of Perspective API

Journal

BIG DATA & SOCIETY
Volume 8, Issue 2, pp. -

Publisher

SAGE PUBLICATIONS INC
DOI: 10.1177/20539517211046181

Keywords

Algorithmic content moderation; Perspective API; platformization; Google Jigsaw; machine learning; moral engineering

Funding

  1. Deutsche Forschungsgemeinschaft [262513311 -SFB 1187]

Abstract

Over recent years, the stakes and complexity of online content moderation have been steadily raised, swelling from concerns about personal conflict in smaller communities to worries about effects on public life and democracy. Because of the massive growth in online expressions, automated tools based on machine learning are increasingly used to moderate speech. While 'design-based governance' through complex algorithmic techniques has come under intense scrutiny, critical research covering algorithmic content moderation is still rare. To add to our understanding of concrete instances of machine moderation, this article examines Perspective API, a system for the automated detection of 'toxicity' developed and run by the Google unit Jigsaw that can be used by websites to help moderate their forums and comment sections. The article proceeds in four steps. First, we present our methodological strategy and the empirical materials we were able to draw on, including interviews, documentation, and GitHub repositories. We then summarize our findings along five axes to identify the various threads Perspective API brings together to deliver a working product. The third section discusses two conflicting organizational logics within the project, paying attention to both critique and what can be learned from the specific case at hand. We conclude by arguing that the opposition between 'human' and 'machine' in speech moderation obscures the many ways these two come together in concrete systems, and suggest that the way forward requires proactive engagement with the design of technologies as well as the institutions they are embedded in.
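The abstract describes Perspective API as a service that websites can call to score comments for 'toxicity'. As a rough illustration of how such an integration works in practice, the sketch below builds a request for the `comments:analyze` endpoint and reads the summary score from a response, following the shape documented publicly by Jigsaw. The helper names (`build_analyze_request`, `extract_score`, `is_flagged`) and the 0.8 threshold are illustrative choices, not part of the API or the article; real calls additionally require an API key issued through Google Cloud.

```python
# Hedged sketch of a Perspective API integration. The endpoint URL and the
# request/response JSON shapes follow Jigsaw's public documentation; all
# function names and the flagging threshold are illustrative assumptions.
ANALYZE_URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"


def build_analyze_request(text, attributes=("TOXICITY",), languages=("en",)):
    """Build the JSON body for a comments:analyze POST request."""
    return {
        "comment": {"text": text},
        "languages": list(languages),
        # Each requested attribute maps to an (optionally empty) config object.
        "requestedAttributes": {attr: {} for attr in attributes},
    }


def extract_score(response, attribute="TOXICITY"):
    """Read the summary probability score (0..1) from an analyze response."""
    return response["attributeScores"][attribute]["summaryScore"]["value"]


def is_flagged(response, threshold=0.8):
    """Example moderation hook: flag a comment whose score crosses a cutoff."""
    return extract_score(response) >= threshold
```

A moderation backend would POST `build_analyze_request(...)` to `ANALYZE_URL` (with `?key=API_KEY`), then route flagged comments to a human review queue — the kind of human/machine interplay the article argues concrete systems depend on.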

