The Implications of Diverse Human Moral Foundations for Assessing the Ethicality of Artificial Intelligence

Cited by: 15
Authors
Telkamp, Jake B. [1 ]
Anderson, Marc H. [1 ]
Affiliation
[1] Iowa State Univ, Ivy Coll Business, Dept Management & Entrepreneurship, Steve & Becky Smith Management & Mkt Suite, 2350 Gerdin Business Bldg,2167 Union Dr, Ames, IA 50011 USA
Keywords
Moral foundations; Artificial intelligence; Moral judgment; Ethical AI frameworks; ETHICS; GO;
DOI
10.1007/s10551-022-05057-6
Chinese Library Classification
F [Economics]
Discipline Classification Code
02
Abstract
Organizations are making massive investments in artificial intelligence (AI), and recent demonstrations and achievements highlight the immense potential for AI to improve organizational and human welfare. Yet realizing the potential of AI necessitates a better understanding of the various ethical issues involved with deciding to use AI, training and maintaining it, and allowing it to make decisions that have moral consequences. People want organizations using AI and the AI systems themselves to behave ethically, but ethical behavior means different things to different people, and many ethical dilemmas require trade-offs such that no course of action is universally considered ethical. How should organizations using AI (and the AI itself) process ethical dilemmas where humans disagree on the morally right course of action? Though a variety of ethical AI frameworks have been suggested, these approaches do not adequately address how people make ethical evaluations of AI systems or how to incorporate the fundamental disagreements people have regarding what is and is not ethical behavior. Drawing on moral foundations theory, we theorize that a person will perceive an organization's use of AI, its data procedures, and the resulting AI decisions as ethical to the extent that those decisions resonate with the person's moral foundations. Since people hold diverse moral foundations, this highlights the crucial need to consider individual moral differences at multiple levels of AI. We discuss several unresolved issues and suggest potential approaches (such as moral reframing) for thinking about conflicts in moral judgments concerning AI.
Pages: 961-976
Page count: 16