Effects of Algorithmic Flagging on Fairness: Quasi-experimental Evidence from Wikipedia

Cited by: 4
Authors
Teblunthuis N. [1 ]
Hill B.M. [1 ]
Halfaker A. [2 ]
Affiliations
[1] University of Washington, Seattle, WA
[2] Microsoft, Redmond, WA
Source
Proceedings of the ACM on Human-Computer Interaction, Vol. 5 (2021), Association for Computing Machinery
Funding
U.S. National Science Foundation
Keywords
ai; causal inference; community norms; fairness; machine learning; moderation; online communities; peer production; sociotechnical systems; wikipedia;
DOI
10.1145/3449130
Abstract
Online community moderators often rely on social signals, such as whether or not a user has an account or a profile page, as clues that users may cause problems. Reliance on these clues can lead to "overprofiling" bias when moderators focus on these signals but overlook the misbehavior of others. We propose that algorithmic flagging systems deployed to improve the efficiency of moderation work can also make moderation actions fairer to these users by reducing reliance on social signals and making norm violations by everyone else more visible. We analyze moderator behavior on Wikipedia as mediated by RCFilters, a system that displays social signals and algorithmic flags, and estimate the causal effect of being flagged on moderator actions. We show that algorithmically flagged edits are reverted more often, especially those by established editors with positive social signals, and that flagging decreases the likelihood that moderation actions will be undone. Our results suggest that algorithmic flagging systems can lead to increased fairness in some contexts, but that the relationship is complex and contingent. © 2021 Owner/Author.