Context-aware hand gesture interaction for human-robot collaboration in construction

Cited: 0
Authors:
Wang, Xin [1]
Veeramani, Dharmaraj [2]
Dai, Fei [3]
Zhu, Zhenhua [1]
Affiliations:
[1] Univ Wisconsin, Dept Civil & Environm Engn, Madison, WI 53706 USA
[2] Univ Wisconsin, Dept Ind & Syst Engn, Madison, WI 53706 USA
[3] West Virginia Univ, Dept Civil & Environm Engn, Morgantown, WV USA
Keywords:
RECOGNITION
DOI:
10.1111/mice.13202
CLC number:
TP39 [Computer applications]
Subject classification codes:
081203; 0835
Abstract:
Construction robots play a pivotal role in enabling intelligent processes in the construction industry, and user-friendly interfaces that facilitate efficient human-robot collaboration are essential for promoting their adoption. However, most existing interfaces do not consider contextual information in the collaborative environment. Humans and robots working together on the same jobsite create a unique environmental context, and overlooking this context limits the potential to optimize interaction efficiency. This paper proposes a novel context-aware method that uses a two-stream network to enhance human-robot interaction in construction settings. In the proposed network, a first-person view-based stream attends to the relevant spatiotemporal regions for context extraction, while a motion sensory data-based stream extracts features related to hand motions. By fusing the vision context with the motion data, the method achieves gesture recognition for efficient communication between construction workers and robots. Experimental evaluation on a dataset collected from five construction sites demonstrates an overall classification accuracy of 92.6%, underscoring the practicality and potential benefits of the proposed method.
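The two-stream fusion idea in the abstract can be sketched in a few lines. The following is a minimal illustration only, not the authors' implementation: the feature dimensions, the simple concatenation ("late fusion") step, the linear classifier, and all function names are assumptions for the sake of the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    # Numerically stable softmax over the last axis.
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def fuse_and_classify(vision_feat, motion_feat, W, b):
    """Late fusion: concatenate per-stream features, then linear + softmax.

    vision_feat: (d_v,) context features from the first-person-view stream
    motion_feat: (d_m,) features from the motion sensory data stream
    W: (d_v + d_m, n_gestures) classifier weights; b: (n_gestures,) bias
    Returns a probability distribution over gesture classes.
    """
    fused = np.concatenate([vision_feat, motion_feat])
    return softmax(fused @ W + b)

# Toy dimensions and random weights (assumptions, not from the paper).
d_v, d_m, n_gestures = 128, 32, 8
W = rng.normal(scale=0.1, size=(d_v + d_m, n_gestures))
b = np.zeros(n_gestures)

probs = fuse_and_classify(rng.normal(size=d_v), rng.normal(size=d_m), W, b)
pred = int(np.argmax(probs))  # index of the recognized gesture class
```

In practice each stream would be a learned network (e.g., a spatiotemporal attention model over first-person video and a sequence model over wearable sensor signals), and the fusion and classifier weights would be trained end to end; the sketch only shows where the two feature vectors meet.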
Pages: 3489-3504 (16 pages)
Related papers (50 records)
  • [31] Jiang, Yu; Zhao, Minghao; Wang, Chong; Wei, Fenglin; Wang, Kai; Qi, Hong. Diver's hand gesture recognition and segmentation for human-robot interaction on AUV. Signal, Image and Video Processing, 2021, 15(8): 1899-1906.
  • [32] Haase, Tobias; Schoenheits, Manfred. Towards context-aware natural language understanding in human-robot-collaboration. 2021 IEEE 17th International Conference on Automation Science and Engineering (CASE), 2021: 1648-1653.
  • [33] Fong, T.; Thorpe, C.; Baur, C. Collaboration, dialogue, and human-robot interaction. Robotics Research, 2003, 6: 255-266.
  • [34] Rudall, B. H. Human-robot interaction - facial gesture recognition. Robotica, 1996, 14: 596-597.
  • [35] Mead, Ross. Space, speech, and gesture in human-robot interaction. ICMI '12: Proceedings of the ACM International Conference on Multimodal Interaction, 2012: 333-336.
  • [36] Chen, R.; Fei, M.; Yang, A. Estimation of gesture pointing for human-robot interaction. Yi Qi Yi Biao Xue Bao/Chinese Journal of Scientific Instrument, 2023, 44(3): 200-208.
  • [37] Stolzenwald, Janis; Bremner, Paul. Gesture mimicry in social human-robot interaction. 2017 26th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2017: 430-436.
  • [38] Yang, Hee-Deok; Park, A-Yeon; Lee, Seong-Whan. Gesture spotting and recognition for human-robot interaction. IEEE Transactions on Robotics, 2007, 23(2): 256-270.
  • [39] Waldherr, Stefan; Romero, Roseli; Thrun, Sebastian. A gesture based interface for human-robot interaction. Autonomous Robots, 2000, 9(2): 151-173.