Diver Gesture Recognition using Deep Learning for Underwater Human-Robot Interaction

Cited by: 4
Authors
Yang, Jing [1 ]
Wilson, James P. [1 ]
Gupta, Shalabh [1 ]
Affiliations
[1] Univ Connecticut, Dept Elect & Comp Engn, Storrs, CT 06269 USA
Keywords
Diver gesture recognition; Transfer learning; Convolutional neural networks; Human-robot interaction; Autonomous underwater vehicles;
DOI
10.23919/oceans40490.2019.8962809
Chinese Library Classification (CLC)
U6 [Water Transportation]; P75 [Ocean Engineering];
Discipline Classification Codes
0814 ; 081505 ; 0824 ; 082401 ;
Abstract
This paper presents a diver gesture recognition method for autonomous underwater vehicles (AUVs) to facilitate human-robot collaborative tasks. While previous methods of underwater human-robot communication required expensive and bulky keyboard or joystick controls, hand gestures are becoming more popular for underwater reprogramming of AUVs because they are easier to use, faster, and cost-effective. However, most existing datasets for the hand gesture recognition problem were either based on unrealistic environments such as swimming pools or utilized ineffective sensor configurations. Recently, the Cognitive Autonomous Diving Buddy (CADDY) dataset was released to the public, which overcomes the limitations of the existing datasets. It contains images of different diver gestures in several different, realistic underwater environments, including a set of true negatives such as divers with improper gestures or no gestures. To the best of our knowledge, this dataset has not yet been tested for gesture classification; as such, this paper presents the first benchmark results for efficient underwater human-robot interaction. In the proposed framework, a deep transfer learning approach is utilized to achieve a high correct classification rate (CCR) of up to 95%. The classifier is trained in a relatively short amount of time and is suitable for real-time underwater diver gesture recognition.
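The abstract describes a deep transfer learning approach but not its exact configuration. Below is a minimal sketch of the general technique, fine-tuning an ImageNet-pretrained CNN on a folder of diver gesture images; the backbone choice (ResNet-18), the class count, the data directory, and all hyperparameters are assumptions for illustration, not the authors' reported pipeline.

```python
# Minimal transfer-learning sketch (assumed setup, not the paper's exact method):
# fine-tune a CNN pretrained on ImageNet to classify diver gesture images.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

NUM_CLASSES = 16                    # assumption: number of gesture classes
DATA_DIR = "caddy_gestures/train"   # hypothetical folder with one subfolder per class

# Standard ImageNet preprocessing so the pretrained weights see familiar statistics.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

dataset = datasets.ImageFolder(DATA_DIR, transform=preprocess)
loader = DataLoader(dataset, batch_size=32, shuffle=True, num_workers=2)

# Load a pretrained backbone, freeze its feature extractor, and replace the head.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)  # only this layer is trained

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# Short fine-tuning loop; with a frozen backbone a few epochs usually suffice,
# which is consistent with the short training time the abstract emphasizes.
for epoch in range(5):
    correct, total = 0, 0
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        logits = model(images)
        loss = criterion(logits, labels)
        loss.backward()
        optimizer.step()
        correct += (logits.argmax(dim=1) == labels).sum().item()
        total += labels.size(0)
    print(f"epoch {epoch}: training CCR = {correct / total:.3f}")
```

Here the correct classification rate (CCR) is simply the fraction of images assigned to the right gesture class; in practice it would be reported on a held-out test split rather than the training data.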
Pages: 5
Related Papers (50 total)
  • [1] DARE: Diver Action Recognition Encoder for Underwater Human-Robot Interaction
    Yang, Jing
    Wilson, James P.
    Gupta, Shalabh
    [J]. IEEE ACCESS, 2023, 11 : 76926 - 76940
  • [2] Diver's hand gesture recognition and segmentation for human-robot interaction on AUV
    Jiang, Yu
    Zhao, Minghao
    Wang, Chong
    Wei, Fenglin
    Wang, Kai
    Qi, Hong
    [J]. SIGNAL IMAGE AND VIDEO PROCESSING, 2021, 15 (08) : 1899 - 1906
  • [3] Visual Diver Recognition for Underwater Human-Robot Collaboration
    Xia, Youya
    Sattar, Junaed
    [J]. 2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2019, : 6839 - 6845
  • [4] Human-robot interaction using facial gesture recognition
    Zelinsky, A
    Heinzmann, J
    [J]. RO-MAN '96 - 5TH IEEE INTERNATIONAL WORKSHOP ON ROBOT AND HUMAN COMMUNICATION, PROCEEDINGS, 1996, : 256 - 261
  • [5] Advancements in Visual Gesture Recognition for Underwater Human-Robot Interaction: A Comprehensive Review
    Hozyn, Stanislaw
    [J]. IEEE ACCESS, 2024, 12 : 163131 - 163142
  • [6] Human-robot interaction - Facial gesture recognition
    Rudall, BH
    [J]. ROBOTICA, 1996, 14 : 596 - 597
  • [7] Gesture spotting and recognition for human-robot interaction
    Yang, Hee-Deok
    Park, A-Yeon
    Lee, Seong-Whan
    [J]. IEEE TRANSACTIONS ON ROBOTICS, 2007, 23 (02) : 256 - 270
  • [8] Empowering human-robot interaction using sEMG sensor: Hybrid deep learning model for accurate hand gesture recognition
    Zafar, Muhammad Hamza
    Langas, Even Falkenberg
    Sanfilippo, Filippo
    [J]. RESULTS IN ENGINEERING, 2023, 20
  • [9] Face and gesture recognition using subspace method for human-robot interaction
    Hasanuzzaman, M
    Zhang, T
    Ampornaramveth, V
    Bhuiyan, MA
    Shirai, Y
    Ueno, H
    [J]. ADVANCES IN MULTIMEDIA INFORMATION PROCESSING - PCM 2004, PT 1, PROCEEDINGS, 2004, 3331 : 369 - 376
  • [10] Human-Robot Interaction Based on Facial Expression Recognition Using Deep Learning
    Maeda, Yoichiro
    Sakai, Tensei
    Kamei, Katsuari
    Cooper, Eric W.
    [J]. 2020 JOINT 11TH INTERNATIONAL CONFERENCE ON SOFT COMPUTING AND INTELLIGENT SYSTEMS AND 21ST INTERNATIONAL SYMPOSIUM ON ADVANCED INTELLIGENT SYSTEMS (SCIS-ISIS), 2020, : 211 - 216