Human-robot dialogue annotation for multi-modal common ground

Cited by: 0
Authors
Bonial, Claire [1 ]
Lukin, Stephanie M. [1 ]
Abrams, Mitchell [1 ]
Baker, Anthony [1 ]
Donatelli, Lucia [2 ]
Foots, Ashley [1 ]
Hayes, Cory J. [1 ]
Henry, Cassidy [1 ]
Hudson, Taylor [3 ]
Marge, Matthew [4 ]
Pollard, Kimberly A. [1 ]
Artstein, Ron [5 ]
Traum, David [5 ]
Voss, Clare R. [1 ]
Affiliations
[1] DEVCOM Army Res Lab, Adelphi, MD 21005 USA
[2] Vrije Univ, Amsterdam, Netherlands
[3] Oak Ridge Associated Univ, Oak Ridge, TN USA
[4] DARPA, Arlington, VA USA
[5] USC Inst Creat Technol, Playa Vista, CA USA
Keywords
Situated dialogue; Semantics; Multi-floor dialogue; Multi-modal dialogue
DOI
10.1007/s10579-024-09784-2
Chinese Library Classification
TP39 [Computer Applications]
Discipline Codes
081203; 0835
Abstract
In this paper, we describe the development of symbolic representations annotated on human-robot dialogue data to make dimensions of meaning accessible to autonomous systems participating in collaborative, natural language dialogue, and to enable common ground with human partners. A particular challenge for establishing common ground arises in remote dialogue (as in disaster-relief or search-and-rescue tasks), where a human and a robot are engaged in a joint navigation and exploration task in an unfamiliar environment, but the robot cannot immediately share high-quality visual information because of communication constraints. Engaging in dialogue provides an effective way to communicate, supplemented by on-demand or lower-quality visual information to establish common ground. Within this paradigm, we capture the propositional semantics and illocutionary force of a single utterance within the dialogue through our Dialogue-AMR annotation, an augmentation of Abstract Meaning Representation. We then capture patterns in how different utterances within and across speaker floors relate to one another through our multi-floor Dialogue Structure annotation schema. Finally, we begin to annotate and analyze the ways in which visual modalities provide contextual information to the dialogue for overcoming disparities in the collaborators' understanding of the environment. We conclude by discussing the use cases, architectures, and systems we have implemented from our annotations, which enable physical robots to autonomously engage with humans in bi-directional dialogue and navigation.
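To make the Dialogue-AMR idea concrete, the sketch below shows how an AMR-style graph in PENMAN notation can be parsed and inspected with the open-source Python penman library. The utterance, the command-SA speech-act frame, and the role labels are illustrative assumptions in the spirit of the schema, not annotations taken from the corpus described in the paper.

```python
# Minimal sketch: parsing and inspecting an AMR-style graph with the
# open-source `penman` library (pip install penman). The annotation below is
# a hypothetical Dialogue-AMR-style analysis of "Robot, move to the doorway":
# a speech-act frame wrapping the propositional content. Frame and role names
# are illustrative placeholders, not taken from the paper's corpus.
import penman

annotation = """
(c / command-SA
   :ARG0 (h / human)
   :ARG2 (r / robot)
   :ARG1 (m / move-01
            :ARG0 r
            :destination (d / doorway)))
"""

graph = penman.decode(annotation)           # parse the PENMAN string into a Graph
print(graph.top)                            # top node variable: 'c'
for source, role, target in graph.triples:  # the graph as (source, role, target) triples
    print(source, role, target)
print(penman.encode(graph))                 # re-serialize back to PENMAN notation
```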
Pages: 51
Related Papers
50 records in total
  • [1] Multi-modal anchoring for human-robot interaction
    Fritsch, J
    Kleinehagenbrock, M
    Lang, S
    Plötz, T
    Fink, GA
    Sagerer, G
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2003, 43 (2-3) : 133 - 147
  • [2] Multi-modal interfaces for natural Human-Robot Interaction
    Andronas, Dionisis
    Apostolopoulos, George
    Fourtakas, Nikos
    Makris, Sotiris
    10TH CIRP SPONSORED CONFERENCE ON DIGITAL ENTERPRISE TECHNOLOGIES (DET 2020) - DIGITAL TECHNOLOGIES AS ENABLERS OF INDUSTRIAL COMPETITIVENESS AND SUSTAINABILITY, 2021, 54 : 197 - 202
  • [3] Multi-modal Language Models for Human-Robot Interaction
    Janssens, Ruben
    COMPANION OF THE 2024 ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, HRI 2024 COMPANION, 2024, : 109 - 111
  • [4] Human-Robot Interaction with Multi-Human Social Pattern Inference on a Multi-Modal Robot
    Tseng, Shih-Huan
    Wu, Tung-Yen
    Cheng, Ching-Ying
    Fu, Li-Chen
    2014 14TH INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND SYSTEMS (ICCAS 2014), 2014, : 819 - 824
  • [5] Multi-Modal Interfaces for Human-Robot Communication in Collaborative Assembly
    Horvath, Gergely
    Kardos, Csaba
    Kemeny, Zsolt
    Kovacs, Andras
    Pataki, Balazs E.
    Vancza, Jozsef
ERCIM NEWS, 2018, (114) : 15 - 16
  • [6] Continuous Multi-Modal Interaction Causes Human-Robot Alignment
    Wallkotter, Sebastian
    Joannou, Michael
    Westlake, Samuel
Belpaeme, Tony
    PROCEEDINGS OF THE 5TH INTERNATIONAL CONFERENCE ON HUMAN AGENT INTERACTION (HAI'17), 2017, : 375 - 379
  • [7] Multi-modal Robot Apprenticeship: Imitation Learning using Linearly Decayed DMP plus in a Human-Robot Dialogue System
    Wu, Yan
    Wang, Ruohan
    D'Haro, Luis F.
    Banchs, Rafael E.
    Tee, Keng Peng
    2018 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2018, : 8582 - 8588
  • [8] Design of Robot Teaching Assistants Through Multi-modal Human-Robot Interactions
    Ferrarelli, Paola
    Lazaro, Maria T.
    Iocchi, Luca
    ROBOTICS IN EDUCATION: LATEST RESULTS AND DEVELOPMENTS, 2018, 630 : 274 - 286
  • [9] Generation method of a robot task program for multi-modal human-robot interface
    Aramaki, Shigeto
    Nagai, Tatsuichirou
    Yayoshi, Koutarou
    Tsuruoka, Tomoaki
    Kawamura, Masato
    Kurono, Shigeru
    2006 IEEE INTERNATIONAL CONFERENCE ON INDUSTRIAL TECHNOLOGY, VOLS 1-6, 2006, : 1450 - +
  • [10] Collaborative Effort towards Common Ground in Situated Human-Robot Dialogue
    Chai, Joyce Y.
    She, Lanbo
    Fang, Rui
    Ottarson, Spencer
    Littley, Cody
    Liu, Changsong
    Hanson, Kenneth
    HRI'14: PROCEEDINGS OF THE 2014 ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION, 2014, : 33 - 40