Suspicious Minds: the Problem of Trust and Conversational Agents

Cited by: 0
Authors
Jonas Ivarsson
Oskar Lindwall
Affiliation
[1] Department of Applied IT, University of Gothenburg
Source
Computer Supported Cooperative Work (CSCW) | 2023 / Vol. 32
Keywords
Conversation; Human–computer interaction; Natural language processing; Trust; Understanding;
DOI
Not available
Abstract
In recent years, the field of natural language processing has seen substantial developments, resulting in powerful voice-based interactive services. The quality of the voice and interactivity are sometimes so good that the artificial can no longer be differentiated from real persons. Thus, discerning whether an interactional partner is a human or an artificial agent is no longer merely a theoretical question but a practical problem society faces. Consequently, the ‘Turing test’ has moved from the laboratory into the wild. The passage from the theoretical to the practical domain also accentuates understanding as a topic of continued inquiry. When interactions are successful but the artificial agent has not been identified as such, can it also be said that the interlocutors have understood each other? In what ways does understanding figure in real-world human–computer interactions? Based on empirical observations, this study shows how we need two parallel conceptions of understanding to address these questions. By departing from ethnomethodology and conversation analysis, we illustrate how parties in a conversation regularly deploy two forms of analysis (categorial and sequential) to understand their interactional partners. The interplay between these forms of analysis shapes the developing sense of interactional exchanges and is crucial for established relations. Furthermore, outside of experimental settings, any problems in identifying and categorizing an interactional partner raise concerns regarding trust and suspicion. When suspicion is roused, shared understanding is disrupted. Therefore, this study concludes that the proliferation of conversational systems, fueled by artificial intelligence, may have unintended consequences, including impacts on human–human interactions.
Pages: 545–571 (26 pages)
Related Papers
50 items total
  • [11] The Sound of Trust: Voice as a Measurement of Trust During Interactions with Embodied Conversational Agents
    Elkins, Aaron C.
    Derrick, Douglas C.
    GROUP DECISION AND NEGOTIATION, 2013, 22 (05) : 897 - 913
  • [13] Suspicious minds and views of fairness
    Schoyen, Oivind
    THEORY AND DECISION, 2024, 97 (01) : 67 - 88
  • [14] The pandering politicians of suspicious minds
    McGraw, KM
    Lodge, M
    Jones, JM
    JOURNAL OF POLITICS, 2002, 64 (02): 362 - 383
  • [15] Conversational Agents Trust Calibration: A User-Centred Perspective to Design
    Dubiel, Mateusz
    Daronnat, Sylvain
    Leiva, Luis A.
    PROCEEDINGS OF THE 4TH INTERNATIONAL CONFERENCE ON CONVERSATIONAL USER INTERFACES, CUI 2022, 2022,
  • [16] Chatbot Acceptance: A Latent Profile Analysis on Individuals' Trust in Conversational Agents
    Mueller, Lea
    Mattke, Jens
    Maier, Christian
    Weitzel, Tim
    Graser, Heinrich
    PROCEEDINGS OF THE 2019 COMPUTERS AND PEOPLE RESEARCH CONFERENCE (SIGMIS-CPR '19), 2019, : 35 - 42
  • [17] Suspicious minds: The psychology of persecutory delusions
    Freeman, Daniel
    CLINICAL PSYCHOLOGY REVIEW, 2007, 27 (04) : 425 - 457
  • [18] Trust and acceptance of a virtual psychiatric interview between embodied conversational agents and outpatients
    Philip, Pierre
    Dupuy, Lucile
    Auriacombe, Marc
    Serre, Fushia
    de Sevin, Etienne
    Sauteraud, Alain
    Micoulaud-Franchi, Jean-Arthur
    NPJ DIGITAL MEDICINE, 2020, 3 (01)
  • [19] Exploring the Impact of Explainability on Trust and Acceptance of Conversational Agents - A Wizard of Oz Study
    Joshi, Rutuja
    Graefe, Julia
    Kraus, Michael
    Bengler, Klaus
    ARTIFICIAL INTELLIGENCE IN HCI, PT I, AI-HCI 2024, 2024, 14734 : 199 - 218