Ethics and governance of artificial intelligence: Evidence from a survey of machine learning researchers

Cited by: 0
Authors
Zhang B. [1 ]
Anderljung M. [2 ]
Kahn L. [3 ]
Dreksler N. [2 ]
Horowitz M.C. [3 ]
Dafoe A. [2 ]
Affiliations
[1] Department of Government, Cornell University, Ithaca, NY 14853
[2] Centre for the Governance of AI, Oxford
[3] Perry World House, University of Pennsylvania, Philadelphia, PA 19104
Source
Journal of Artificial Intelligence Research (AI Access Foundation), Vol. 71, 2021
Keywords
Surveys
DOI
10.1613/JAIR.1.12895
Abstract
Machine learning (ML) and artificial intelligence (AI) researchers play an important role in the ethics and governance of AI, including through their work, advocacy, and choice of employment. Nevertheless, this influential group’s attitudes are not well understood, undermining our ability to discern consensuses or disagreements between AI/ML researchers. To examine these researchers’ views, we conducted a survey of those who published in two top AI/ML conferences (N = 524). We compare these results with those from a 2016 survey of AI/ML researchers (Grace et al., 2018) and a 2018 survey of the US public (Zhang & Dafoe, 2020). We find that AI/ML researchers place high levels of trust in international organizations and scientific organizations to shape the development and use of AI in the public interest; moderate trust in most Western tech companies; and low trust in national militaries, Chinese tech companies, and Facebook. While the respondents were overwhelmingly opposed to AI/ML researchers working on lethal autonomous weapons, they were less opposed to researchers working on other military applications of AI, particularly logistics algorithms. A strong majority of respondents think that AI safety research should be prioritized and that ML institutions should conduct pre-publication review to assess potential harms. Being closer to the technology itself, AI/ML researchers are well placed to highlight new risks and develop technical solutions, so this novel attempt to measure their attitudes has broad relevance. The findings should help to improve how researchers, private sector executives, and policymakers think about regulations, governance frameworks, guiding principles, and national and international governance strategies for AI. ©2021 AI Access Foundation.
Pages: 591-666
Number of pages: 75