Ethical, political and epistemic implications of machine learning (mis)information classification: insights from an interdisciplinary collaboration between social and data scientists

Cited by: 5
Authors
Dominguez Hernandez, Andres [1 ]
Owen, Richard [2 ]
Nielsen, Dan Saattrup [3 ]
McConville, Ryan [3 ]
Affiliations
[1] Univ Bristol, Dept Comp Sci, Bristol, England
[2] Univ Bristol, Sch Management, Bristol, England
[3] Univ Bristol, Dept Engn Math, Bristol, England
Keywords
Misinformation; reflexivity; content moderation; fact-checking; machine learning; responsible innovation; collaboration; fake news; science; expertise
DOI
10.1080/23299460.2023.2222514
Chinese Library Classification
B82 [Ethics (Moral Philosophy)]
Abstract
Machine learning (ML) classification models are becoming increasingly popular for tackling the sheer volume and speed of online misinformation. In building these models, data scientists need to make assumptions about the legitimacy and authoritativeness of the sources of 'truth' employed for model training and testing. This has political, ethical and epistemic implications that are rarely addressed in technical papers. Despite (and due to) their reported high performance, ML-driven moderation systems have the potential to shape public debate and create downstream negative impacts. This article presents findings from a responsible innovation (RI)-inflected collaboration between science and technology studies scholars and data scientists. Following an interactive co-ethnographic process, we identify a series of algorithmic contingencies: key moments during ML model development that could lead to different future outcomes, uncertainties and harmful effects. We conclude by offering recommendations on how to address the potential failures of ML tools for combating online misinformation.
Pages: 25