Enhancing Robotic Collaborative Tasks Through Contextual Human Motion Prediction and Intention Inference

Cited: 0
Authors
Laplaza, Javier [1 ]
Moreno, Francesc [1 ]
Sanfeliu, Alberto [1 ]
Affiliation
[1] Univ Politecn Cataluna, Inst Robot & Informat Ind Barcelona, C Llorens i Artigas 4-6, Catalonia, Barcelona 08024, Spain
Funding
European Union Horizon 2020
Keywords
Human-robot collaborative task; Human motion prediction; Human intention prediction; Deep learning attention architecture;
DOI
10.1007/s12369-024-01140-2
CLC Number
TP24 [Robotics]
Subject Classification Codes
080202; 1405
Abstract
Predicting human motion based on a sequence of past observations is crucial for various applications in robotics and computer vision. Currently, this problem is typically addressed by training deep learning models on some of the most well-known 3D human motion datasets widely used in the community. However, these datasets generally do not consider how humans behave and move when a robot is nearby, leading to a data distribution different from the real distribution of motion that robots will encounter when collaborating with humans. Additionally, incorporating contextual information related to the interactive task between the human and the robot, as well as information on the human's willingness to collaborate with the robot, can not only improve the accuracy of the predicted sequence but also serve as a useful tool for robots to navigate collaborative tasks successfully. In this research, we propose a deep learning architecture that predicts both 3D human body motion and human intention for collaborative tasks. The model employs a multi-head attention mechanism, taking human motion and task context as inputs. The resulting outputs are the predicted motion of the human body and the inferred human intention. We have validated this architecture on two different tasks: collaborative object handover and collaborative grape harvesting. While the architecture remains the same for both tasks, the inputs differ. In the handover task, the architecture considers human motion, robot end effector, and obstacle positions as inputs. Additionally, the model can be conditioned on the desired intention to tailor the output motion accordingly. To assess the performance of the collaborative handover task, we conducted a user study evaluating human perception of the robot's sociability, naturalness, security, and comfort. This evaluation compared the robot's behavior when it used the prediction in its planner versus when it did not. Furthermore, we also applied the model to a collaborative grape harvesting task. By integrating human motion prediction and human intention inference, our architecture shows promising results in enhancing the capabilities of robots in collaborative scenarios. The model's flexibility allows it to handle various tasks with different inputs, making it adaptable to real-world applications.
Pages: 20
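As a rough illustration of the kind of model described in the abstract, the sketch below shows a multi-head-attention (Transformer encoder) network that consumes an observed 3D human-motion sequence plus task context (e.g., robot end-effector and obstacle positions) and outputs both a predicted future-motion sequence and intention logits, with optional conditioning on a desired intention. This is not the authors' code: all names, dimensions, and the intention-conditioning scheme are assumptions made for illustration only.

```python
# Minimal sketch (assumed design, not the published implementation).
import torch
import torch.nn as nn


class MotionIntentionPredictor(nn.Module):
    def __init__(self, n_joints=25, ctx_dim=6, d_model=128, n_heads=8,
                 n_layers=4, pred_len=25, n_intentions=2):
        super().__init__()
        self.pred_len = pred_len
        self.out_dim = n_joints * 3
        # Per-frame embeddings for the flattened 3D skeleton and the task context.
        self.motion_embed = nn.Linear(self.out_dim, d_model)
        self.ctx_embed = nn.Linear(ctx_dim, d_model)
        # Optional conditioning token so the output motion can be tailored to a desired intention.
        self.intention_embed = nn.Embedding(n_intentions, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, num_layers=n_layers)
        # Two output heads: future body motion and inferred human intention.
        self.motion_head = nn.Linear(d_model, pred_len * self.out_dim)
        self.intention_head = nn.Linear(d_model, n_intentions)

    def forward(self, past_motion, context, desired_intention=None):
        # past_motion: (B, T_obs, n_joints*3); context: (B, T_obs, ctx_dim)
        tokens = self.motion_embed(past_motion) + self.ctx_embed(context)
        if desired_intention is not None:
            # Prepend an intention token to condition the prediction (assumed scheme).
            cond = self.intention_embed(desired_intention).unsqueeze(1)
            tokens = torch.cat([cond, tokens], dim=1)
        h = self.encoder(tokens)    # multi-head self-attention over motion + context tokens
        pooled = h.mean(dim=1)      # temporal pooling of the encoded sequence
        future = self.motion_head(pooled).view(-1, self.pred_len, self.out_dim)
        intention_logits = self.intention_head(pooled)
        return future, intention_logits


# Example usage with random tensors standing in for real observations.
model = MotionIntentionPredictor()
past = torch.randn(2, 50, 25 * 3)                         # 2 sequences, 50 observed frames
ctx = torch.randn(2, 50, 6)                               # end-effector (3) + obstacle (3) per frame
future, logits = model(past, ctx, torch.tensor([1, 0]))   # condition on a desired intention
```

The single encoder with two output heads reflects the abstract's description of one architecture producing both the motion prediction and the intention inference; the exact decoding strategy used in the paper may differ.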