Enhancing Robotic Collaborative Tasks Through Contextual Human Motion Prediction and Intention Inference

Cited: 0
Authors
Laplaza, Javier [1 ]
Moreno, Francesc [1 ]
Sanfeliu, Alberto [1 ]
Institutions
[1] Univ Politecn Cataluna, Inst Robot & Informat Ind Barcelona, C Llorens i Artigas 4-6, Barcelona 08024, Catalonia, Spain
Funding
EU Horizon 2020
Keywords
Human-robot collaborative task; Human motion prediction; Human intention prediction; Deep learning attention architecture;
DOI
10.1007/s12369-024-01140-2
Chinese Library Classification
TP24 [Robotics]
Discipline Codes
080202; 1405
Abstract
Predicting human motion based on a sequence of past observations is crucial for various applications in robotics and computer vision. Currently, this problem is typically addressed by training deep learning models on some of the most well-known 3D human motion datasets widely used in the community. However, these datasets generally do not consider how humans behave and move when a robot is nearby, leading to a data distribution different from the real distribution of motion that robots will encounter when collaborating with humans. Additionally, incorporating contextual information about the interactive task between the human and the robot, as well as the human's willingness to collaborate with the robot, can not only improve the accuracy of the predicted sequence but also serve as a useful tool for robots to navigate collaborative tasks successfully. In this research, we propose a deep learning architecture that predicts both 3D human body motion and human intention for collaborative tasks. The model employs a multi-head attention mechanism, taking human motion and task context as inputs. The resulting outputs are the predicted motion of the human body and the inferred human intention. We have validated this architecture on two different tasks: collaborative object handover and collaborative grape harvesting. While the architecture remains the same for both tasks, the inputs differ. In the handover task, the architecture considers human motion, robot end effector, and obstacle positions as inputs. Additionally, the model can be conditioned on the desired intention to tailor the output motion accordingly. To assess the performance of the collaborative handover task, we conducted a user study evaluating human perception of the robot's sociability, naturalness, security, and comfort, comparing the robot's behavior when it used the prediction in its planner versus when it did not.
Furthermore, we also applied the model to a collaborative grape harvesting task. By integrating human motion prediction and human intention inference, our architecture shows promising results in enhancing the capabilities of robots in collaborative scenarios. The model's flexibility allows it to handle various tasks with different inputs, making it adaptable to real-world applications.
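The abstract describes a multi-head attention model that consumes past human motion together with task-context inputs (robot end effector and obstacle positions) and emits both a future motion sequence and an intention estimate. The following is a minimal NumPy sketch of that input/output flow only; all dimensions, token layouts, weight shapes, and head choices are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(x, weights, num_heads):
    """Scaled dot-product self-attention with several heads over a token sequence."""
    seq_len, d_model = x.shape
    d_head = d_model // num_heads
    Wq, Wk, Wv, Wo = weights

    def split(h):  # (seq, d_model) -> (heads, seq, d_head)
        return h.reshape(seq_len, num_heads, d_head).transpose(1, 0, 2)

    q, k, v = split(x @ Wq), split(x @ Wk), split(x @ Wv)
    scores = q @ k.transpose(0, 2, 1) / np.sqrt(d_head)   # (heads, seq, seq)
    out = softmax(scores) @ v                             # (heads, seq, d_head)
    out = out.transpose(1, 0, 2).reshape(seq_len, d_model)
    return out @ Wo

# Illustrative sizes: 10 past frames, 5 predicted frames, 17 joints in 3D,
# and a binary collaborate / not-collaborate intention.
d_model, num_heads = 32, 4
past_frames, future_frames, pose_dim, n_intentions = 10, 5, 3 * 17, 2

# Hypothetical token sequence: embedded past poses plus task-context tokens
# (robot end effector + two obstacles), all already projected to d_model.
motion_tokens = rng.standard_normal((past_frames, d_model))
context_tokens = rng.standard_normal((3, d_model))
tokens = np.concatenate([motion_tokens, context_tokens], axis=0)

weights = [rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
           for _ in range(4)]
features = multi_head_attention(tokens, weights, num_heads)

# Two output heads, as in the abstract: future 3D poses and an intention estimate.
W_motion = rng.standard_normal((d_model, pose_dim)) / np.sqrt(d_model)
motion_pred = features[:future_frames] @ W_motion         # (future_frames, pose_dim)

W_intent = rng.standard_normal((d_model, n_intentions)) / np.sqrt(d_model)
intention_probs = softmax(features.mean(axis=0) @ W_intent)
```

Conditioning on a desired intention, as the handover experiments describe, would amount to adding an intention embedding to the input tokens; that step is omitted here for brevity.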
Pages: 20
Related Papers
50 in total
  • [1] Human Intention Prediction in Human-Robot Collaborative Tasks
    Wang, Weitian
    Li, Rui
    Chen, Yi
    Jia, Yunyi
    [J]. COMPANION OF THE 2018 ACM/IEEE INTERNATIONAL CONFERENCE ON HUMAN-ROBOT INTERACTION (HRI'18), 2018, : 279 - 280
  • [2] Autonomous Robotic Escort Incorporating Motion Prediction and Human Intention
    Conte, Dean
    Furukawa, Tomonari
    [J]. 2021 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA 2021), 2021, : 3480 - 3486
  • [3] Human Motion Trajectory Prediction in Human-Robot Collaborative Tasks
    Li, Shiqi
    Wang, Haipeng
    Zhang, Shuai
    Wang, Shuze
    Han, Ke
    [J]. 2019 3RD INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE APPLICATIONS AND TECHNOLOGIES (AIAAT 2019), 2019, 646
  • [4] Human Intention Inference and On-Line Human Hand Motion Prediction for Human-Robot Collaboration
    Luo, Ren C.
    Mai, Licong
    [J]. 2019 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2019, : 5958 - 5964
  • [5] Implicit Human Intention Inference through Gaze Cues for People with Limited Motion Ability
    Li, Songpo
    Zhang, Xiaoli
    [J]. 2014 IEEE INTERNATIONAL CONFERENCE ON MECHATRONICS AND AUTOMATION (IEEE ICMA 2014), 2014, : 257 - 262
  • [6] Anticipating Human Intention for Full-Body Motion Prediction in Object Grasping and Placing Tasks
    Kratzer, Philipp
    Midlagajni, Niteesh Balachandra
    Toussaint, Marc
    Mainprice, Jim
    [J]. 2020 29TH IEEE INTERNATIONAL CONFERENCE ON ROBOT AND HUMAN INTERACTIVE COMMUNICATION (RO-MAN), 2020, : 1157 - 1163
  • [7] Gaze and motion information fusion for human intention inference
    Ravichandar, Harish Chaandar
    Kumar, Avnish
    Dani, Ashwin
    [J]. INTERNATIONAL JOURNAL OF INTELLIGENT ROBOTICS AND APPLICATIONS, 2018, 2 (02) : 136 - 148
  • [8] FedHIP: Federated learning for privacy-preserving human intention prediction in human-robot collaborative assembly tasks
    Cai, Jiannan
    Gao, Zhidong
    Guo, Yuanxiong
    Wibranek, Bastian
    Li, Shuai
    [J]. ADVANCED ENGINEERING INFORMATICS, 2024, 60
  • [9] Human-Aware Robotic Assistant for Collaborative Assembly: Integrating Human Motion Prediction With Planning in Time
    Unhelkar, Vaibhav V.
    Lasota, Przemyslaw A.
    Tyroller, Quirin
    Buhai, Rares-Darius
    Marceau, Laurie
    Deml, Barbara
    Shah, Julie A.
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2018, 3 (03): : 2394 - 2401
  • [10] Enhancing digital human motion planning of assembly tasks through dynamics and optimal control
    Bjorkenstam, Staffan
    Delfs, Niclas
    Carlson, Johan S.
    Bohlin, Robert
    Lennartson, Bengt
    [J]. 6TH CIRP CONFERENCE ON ASSEMBLY TECHNOLOGIES AND SYSTEMS (CATS), 2016, 44 : 20 - 25