Predicting Human Intentions in Human-Robot Hand-Over Tasks Through Multimodal Learning

Cited by: 35
Authors
Wang, Weitian [1 ]
Li, Rui [1 ]
Chen, Yi [2 ]
Sun, Yi [2 ]
Jia, Yunyi [2 ]
Affiliations
[1] Montclair State Univ, Dept Comp Sci, Montclair, NJ 07043 USA
[2] Clemson Univ, Dept Automot Engn, Greenville, SC 29607 USA
Funding
U.S. National Science Foundation
Keywords
Robots; Task analysis; Robot sensing systems; Collaboration; Education; Cognition; Tools; Extreme learning machine (ELM); human-robot hand-over; intention prediction; learning from demonstrations; natural language; wearable sensors; MACHINE; COLLABORATION; NETWORKS;
DOI
10.1109/TASE.2021.3074873
Chinese Library Classification (CLC) number
TP [automation technology; computer technology]
Discipline classification code
0812
Abstract
In human-robot shared manufacturing contexts, handing over product parts or tools between the robot and the human is an important collaborative task. Enabling the robot to correctly infer and predict human hand-over intentions, and thereby improve task efficiency in human-robot collaboration, is thus a necessary problem to address. In this study, a teaching-learning-prediction (TLP) framework is proposed that allows the robot to learn from its human partner's multimodal demonstrations and predict human hand-over intentions. In this approach, the human can program the robot through demonstrations using natural language and wearable sensors, according to task requirements and the human's working preferences. The robot then learns from the human hand-over demonstrations online via extreme learning machine (ELM) algorithms to update its cognitive capacity, allowing it to use the learned policy to actively predict human intentions and assist its human companion in hand-over tasks. Experimental results and evaluations suggest that the human can easily reprogram the robot with the proposed approach when the task changes, and that the robot can effectively predict hand-over intentions with competitive accuracy to complete the hand-over tasks.
Pages: 2339-2353 (15 pages)