Narrative responsibility and artificial intelligence: How AI challenges human responsibility and sense-making

Cited: 0
Author
Mark Coeckelbergh
Affiliation
[1] University of Vienna, Department of Philosophy
Source
AI & SOCIETY | 2023 / Volume 38
Keywords
Responsibility; Narrative responsibility; Hermeneutic responsibility; Artificial intelligence; Hermeneutics; Philosophy of technology;
DOI
Not available
Abstract
Most accounts of responsibility focus on one type of responsibility, moral responsibility, or address one particular aspect of moral responsibility such as agency. This article outlines a broader framework for thinking about responsibility that includes causal responsibility, relational responsibility, and what I call “narrative responsibility” as a form of “hermeneutic responsibility”; connects these notions of responsibility with different kinds of knowledge, disciplines, and perspectives on human being; and shows how this framework is helpful for mapping and analysing how artificial intelligence (AI) challenges human responsibility and sense-making in various ways. Mobilizing recent hermeneutic approaches to technology, the article argues that next to, and interwoven with, other types of responsibility such as moral responsibility, we also have narrative and hermeneutic responsibility—in general and for technology. For example, it is our task as humans to make sense of, with, and, if necessary, against AI. While from a posthumanist point of view technologies also contribute to sense-making, humans are the experiencers and bearers of responsibility and always remain in charge when it comes to this hermeneutic responsibility. Facing and working with a world of data, correlations, and probabilities, we are nevertheless condemned to make sense. Moreover, this also has a normative, sometimes even political aspect: acknowledging and embracing our hermeneutic responsibility is important if we want to prevent our stories from being written elsewhere—through technology.
Pages: 2437–2450
Page count: 13