HARMONIC: A multimodal dataset of assistive human-robot collaboration

Cited by: 7
Authors
Newman, Benjamin A. [1 ]
Aronson, Reuben M. [1 ]
Srinivasa, Siddhartha S. [2 ]
Kitani, Kris [1 ]
Admoni, Henny [1 ]
Affiliations
[1] Carnegie Mellon Univ, Robot Inst, Pittsburgh, PA 15213 USA
[2] Univ Washington, Seattle, WA 98195 USA
Source
INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
Funding
U.S. National Science Foundation (NSF)
Keywords
Human-robot interaction; shared autonomy; intention; multimodal; eye gaze; assistive robotics; EYE-MOVEMENTS;
DOI
10.1177/02783649211050677
Chinese Library Classification
TP24 [Robotics]
Subject Classification Codes
080202; 1405
Abstract
We present the Human And Robot Multimodal Observations of Natural Interactive Collaboration (HARMONIC) dataset. This is a large multimodal dataset of human interactions with a robotic arm in a shared autonomy setting designed to imitate assistive eating. The dataset provides human, robot, and environmental data views of 24 different people engaged in an assistive eating task with a 6-degree-of-freedom (6-DOF) robot arm. From each participant, we recorded video of both eyes, egocentric video from a head-mounted camera, joystick commands, electromyography from the forearm used to operate the joystick, third-person stereo video, and the joint positions of the 6-DOF robot arm. Also included are several features that come as a direct result of these recordings, such as eye gaze projected onto the egocentric video, body pose, hand pose, and facial keypoints. These data streams were collected specifically because they have been shown to be closely related to human mental states and intention. This dataset could be of interest to researchers studying intention prediction, human mental state modeling, and shared autonomy. Data streams are provided in a variety of formats such as video and human-readable CSV and YAML files.
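The abstract notes that the data streams are distributed as video plus human-readable CSV and YAML files. As a rough illustration only, the Python sketch below shows how one might load a per-participant joystick CSV and a robot-joint YAML file; the directory layout, file names, and column names are hypothetical placeholders and are not taken from the dataset documentation.

    # Minimal sketch of loading two HARMONIC-style data streams.
    # NOTE: the directory layout, file names, and columns used here are
    # hypothetical; consult the dataset documentation for the real schema.
    import pandas as pd
    import yaml

    def load_participant(root: str, participant_id: int):
        """Load the joystick CSV and robot-joint YAML for one participant (assumed layout)."""
        base = f"{root}/p{participant_id:02d}"

        # Joystick commands: assumed to be a timestamped CSV stream.
        joystick = pd.read_csv(f"{base}/joystick.csv")

        # Robot joint positions: assumed to be stored as YAML.
        with open(f"{base}/robot_joints.yaml") as f:
            joints = yaml.safe_load(f)

        return joystick, joints

    if __name__ == "__main__":
        joystick, joints = load_participant("harmonic_data", 1)
        print(joystick.head())

Because CSV and YAML are plain text, this kind of loader needs only pandas and PyYAML and can be adapted stream-by-stream (eye gaze, EMG, robot joints) once the actual column names are known.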
Pages: 3-11 (9 pages)
Related Papers
Showing 10 of 50
  • [1] Multimodal Interface for Human-Robot Collaboration. Rautiainen, Samu; Pantano, Matteo; Traganos, Konstantinos; Ahmadi, Seyedamir; Saenz, Jose; Mohammed, Wael M.; Lastra, Jose L. Martinez. MACHINES, 2022, 10(10).
  • [2] A Multimodal Human-Robot Interaction Manager for Assistive Robots. Abbasi, Bahareh; Monaikul, Natawut; Rysbek, Zhanibek; Di Eugenio, Barbara; Zefran, Milos. 2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019: 6756-6762.
  • [3] Multimodal Human Action Recognition in Assistive Human-Robot Interaction. Rodomagoulakis, I.; Kardaris, N.; Pitsikalis, V.; Mavroudi, E.; Katsamanis, A.; Tsiami, A.; Maragos, P. 2016 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING PROCEEDINGS, 2016: 2702-2706.
  • [4] Natural multimodal communication for human-robot collaboration. Maurtua, Inaki; Fernandez, Izaskun; Tellaeche, Alberto; Kildal, Johan; Susperregi, Loreto; Ibarguren, Aitor; Sierra, Basilio. INTERNATIONAL JOURNAL OF ADVANCED ROBOTIC SYSTEMS, 2017, 14(4): 1-12.
  • [5] A multimodal teleoperation interface for human-robot collaboration. Si, Weiyong; Zhong, Tianjian; Wang, Ning; Yang, Chenguang. 2023 IEEE INTERNATIONAL CONFERENCE ON MECHATRONICS (ICM), 2023.
  • [6] Safe Multimodal Communication in Human-Robot Collaboration. Ferrari, Davide; Pupa, Andrea; Signoretti, Alberto; Secchi, Cristian. HUMAN-FRIENDLY ROBOTICS 2023 (HFR 2023), 2024, 29: 151-163.
  • [7] Timing of Multimodal Robot Behaviors during Human-Robot Collaboration. Jensen, Lars Christian; Fischer, Kerstin; Suvei, Stefan-Daniel; Bodenhagen, Leon. 2017 26TH IEEE INTERNATIONAL SYMPOSIUM ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN), 2017: 1061-1066.
  • [8] Mobile Multimodal Human-Robot Interface for Virtual Collaboration. Song, Young Eun; Niitsuma, Mihoko; Kubota, Takashi; Hashimoto, Hideki; Son, Hyoung Il. 3RD IEEE INTERNATIONAL CONFERENCE ON COGNITIVE INFOCOMMUNICATIONS (COGINFOCOM 2012), 2012: 627-631.
  • [9] Multimodal Signal Processing and Learning Aspects of Human-Robot Interaction for an Assistive Bathing Robot. Zlatintsi, A.; Rodomagoulakis, I.; Koutras, P.; Dometios, A. C.; Pitsikalis, V.; Tzafestas, C. S.; Maragos, P. 2018 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2018: 3171-3175.
  • [10] Context-dependent multimodal communication in human-robot collaboration. Kardos, Csaba; Kemeny, Zsolt; Kovacs, Andras; Pataki, Balazs E.; Vancza, Jozsef. 51ST CIRP CONFERENCE ON MANUFACTURING SYSTEMS, 2018, 72: 15-20.