HARMONIC: A multimodal dataset of assistive human-robot collaboration

Cited by: 7
Authors
Newman, Benjamin A. [1 ]
Aronson, Reuben M. [1 ]
Srinivasa, Siddhartha S. [2 ]
Kitani, Kris [1 ]
Admoni, Henny [1 ]
Institutions
[1] Carnegie Mellon Univ, Robot Inst, Pittsburgh, PA 15213 USA
[2] Univ Washington, Seattle, WA 98195 USA
Source
INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
Funding
U.S. National Science Foundation
Keywords
Human-robot interaction; shared autonomy; intention; multimodal; eye gaze; assistive robotics; eye movements
DOI
10.1177/02783649211050677
Chinese Library Classification (CLC)
TP24 [Robotics]
Subject Classification Codes
080202; 1405
Abstract
We present the Human And Robot Multimodal Observations of Natural Interactive Collaboration (HARMONIC) dataset. This is a large multimodal dataset of human interactions with a robotic arm in a shared autonomy setting designed to imitate assistive eating. The dataset provides human, robot, and environmental data views of 24 different people engaged in an assistive eating task with a 6-degree-of-freedom (6-DOF) robot arm. From each participant, we recorded video of both eyes, egocentric video from a head-mounted camera, joystick commands, electromyography from the forearm used to operate the joystick, third-person stereo video, and the joint positions of the 6-DOF robot arm. Also included are several features derived directly from these recordings, such as eye gaze projected onto the egocentric video, body pose, hand pose, and facial keypoints. These data streams were collected specifically because they have been shown to be closely related to human mental states and intention. This dataset could be of interest to researchers studying intention prediction, human mental state modeling, and shared autonomy. Data streams are provided in a variety of formats, such as video and human-readable CSV and YAML files.
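Since the abstract notes that the tabular data streams ship as human-readable CSV and YAML files, the sketch below shows one plausible way to load and time-align two such streams in Python. The file paths, column names, and YAML keys are illustrative assumptions only, not the dataset's documented schema.

```python
# Minimal sketch of loading HARMONIC-style data streams.
# All file paths, column names, and YAML keys below are illustrative
# assumptions, not the dataset's documented schema.
import pandas as pd  # tabular CSV streams
import yaml          # human-readable YAML metadata

# Hypothetical per-participant CSV streams of the kinds the abstract
# describes: joystick commands and 6-DOF robot joint positions.
joystick = pd.read_csv("p01/joystick.csv")    # assumed columns: time, x, y
joints = pd.read_csv("p01/robot_joints.csv")  # assumed columns: time, j1..j6

# Hypothetical YAML file holding trial metadata.
with open("p01/trial.yaml") as f:
    meta = yaml.safe_load(f)

# Time-align the two streams on their timestamps (merge_asof requires
# both frames to be sorted on the key column).
aligned = pd.merge_asof(
    joystick.sort_values("time"),
    joints.sort_values("time"),
    on="time",
    direction="nearest",
)

print(meta.get("participant_id"), len(aligned), "aligned samples")
```

A nearest-timestamp join is only one alignment strategy; resampling all streams onto a common clock is an equally reasonable choice depending on the analysis.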
Pages: 3-11
Page count: 9
Related papers
50 records in total
  • [21] Safety in human-robot collaboration
    Hofbaur, M.
    Rathmair, M.
    ELEKTROTECHNIK UND INFORMATIONSTECHNIK, 2019, 136(7): 301-306
  • [22] Acceptability of Tele-assistive Robotic Nurse for Human-Robot Collaboration in Medical Environment
    Lee, WonHyong
    Park, Jaebyung
    Park, Chung Hyuk
    COMPANION OF THE 2018 ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI'18), 2018: 171-172
  • [23] Human modeling for human-robot collaboration
    Hiatt, Laura M.
    Narber, Cody
    Bekele, Esube
    Khemlani, Sangeet S.
    Trafton, J. Gregory
    INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, 2017, 36(5-7): 580-596
  • [24] Multimodal perception-fusion-control and human-robot collaboration in manufacturing: a review
    Duan, Jianguo
    Zhuang, Liwen
    Zhang, Qinglei
    Zhou, Ying
    Qin, Jiyun
    INTERNATIONAL JOURNAL OF ADVANCED MANUFACTURING TECHNOLOGY, 2024, 132(3-4): 1071-1093
  • [25] Deep Learning-based Multimodal Control Interface for Human-Robot Collaboration
    Liu, Hongyi
    Fang, Tongtong
    Zhou, Tianyu
    Wang, Yuquan
    Wang, Lihui
    51ST CIRP CONFERENCE ON MANUFACTURING SYSTEMS, 2018, 72: 3-8
  • [26] A Self-Modulated Impedance Multimodal Interaction Framework for Human-Robot Collaboration
    Muratore, Luca
    Laurenzi, Arturo
    Tsagarakis, Nikos G.
    2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2019: 4998-5004
  • [27] Impact of Robot Initiative on Human-Robot Collaboration
    Munzer, Thibaut
    Mollard, Yoan
    Lopes, Manuel
    COMPANION OF THE 2017 ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI'17), 2017: 217-218
  • [28] Robust Robot Planning for Human-Robot Collaboration
    You, Yang
    Thomas, Vincent
    Colas, Francis
    Alami, Rachid
    Buffet, Olivier
    2023 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2023), 2023: 9793-9799
  • [29] Enhancing Robot Explainability in Human-Robot Collaboration
    Wang, Yanting
    You, Sangseok
    HUMAN-COMPUTER INTERACTION, HCI 2023, PT III, 2023, 14013: 236-247
  • [30] Effects of Robot Motion on Human-Robot Collaboration
    Dragan, Anca D.
    Bauman, Shira
    Forlizzi, Jodi
    Srinivasa, Siddhartha S.
    PROCEEDINGS OF THE 2015 ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI'15), 2015: 51-58