Moderation, Networks, and Anti-Social Behavior Online

Cited by: 0
Author
Haythornthwaite, Caroline [1 ]
Affiliation
[1] Syracuse Univ, Sch Informat Studies, 343 Hinds Hall, Syracuse, NY 13244 USA
Source
SOCIAL MEDIA + SOCIETY | 2023 | Vol. 9, Issue 3
Keywords
anti-social behavior; content moderation; social media; SPIRALS;
DOI
10.1177/20563051231196874
Chinese Library Classification
G2 [Information and Knowledge Dissemination];
Discipline Code
05; 0503;
Abstract
Major open platforms, such as Facebook, Twitter, Instagram, and TikTok, are bombarded with postings that violate platform community standards, offend societal norms, and cause harm to individuals and groups. Managing such sites requires identifying anti-social content and behavior, then acting to remove content and sanction posters. This process is not as straightforward as it seems: what is offensive, and to whom, varies by individual, group, and community; what action to take depends on stated standards, community expectations, and the extent of the offense; conversations can create and sustain anti-social behavior (ASB); networks of individuals can launch coordinated attacks; and fake accounts can side-step sanctions. In meeting the challenges of moderating extreme content, two guiding questions stand out: how do we define and identify ASB online? And, given the quantity and nuances of offensive content, how do we make the best use of automation and humans in managing offending content and ASB? To address these questions, existing studies on ASB online were reviewed, and a detailed examination was made of social media moderation practices on major platforms. The pros and cons of automated and human review are discussed in a framework of three layers: environment, community, and crowd. Throughout, the article draws attention to the network impact of ASB, emphasizing the way ASB builds a relation between perpetrator(s) and victim(s) and can make ASB more or less offensive.
Pages: 15