Diver Gesture Recognition using Deep Learning for Underwater Human-Robot Interaction

Cited by: 4
Authors
Yang, Jing [1 ]
Wilson, James P. [1 ]
Gupta, Shalabh [1 ]
Affiliations
[1] Univ Connecticut, Dept Elect & Comp Engn, Storrs, CT 06269 USA
Keywords
Diver gesture recognition; Transfer learning; Convolutional neural networks; Human-robot interaction; Autonomous underwater vehicles;
DOI
10.23919/oceans40490.2019.8962809
Chinese Library Classification (CLC)
U6 [Waterway Transportation]; P75 [Ocean Engineering];
Discipline Codes
0814 ; 081505 ; 0824 ; 082401 ;
Abstract
This paper presents a diver gesture recognition method for autonomous underwater vehicles (AUVs) to facilitate human-robot collaborative tasks. While previous methods of underwater human-robot communication required expensive and bulky keyboard or joystick controls, hand gestures are becoming more popular for underwater reprogramming of AUVs because they are easier to use, faster, and cost-effective. However, most existing datasets for the hand gesture recognition problem were either based on unrealistic environments such as swimming pools or utilized ineffective sensor configurations. Recently, the Cognitive Autonomous Diving Buddy (CADDY) dataset was released to the public, which overcomes the limitations of the existing datasets. It contains images of different diver gestures in several realistic underwater environments, including a set of true negatives such as divers with improper gestures or no gestures. To the best of our knowledge, this dataset has not yet been tested for gesture classification; as such, this paper presents the first benchmark results for efficient underwater human-robot interaction. In the proposed framework, a deep transfer learning approach is utilized to achieve a high correct classification rate (CCR) of up to 95%. The classifier is constructed in a relatively short amount of training time and is suitable for real-time underwater diver gesture recognition.
Pages: 5
Related Papers (50 total)
  • [21] Gesture recognition based on context awareness for human-robot interaction
    Hong, Seok-Ju
    Setiawan, Nurul Arif
    Kim, Song-Gook
    Lee, Chil-Woo
    [J]. ADVANCES IN ARTIFICIAL REALITY AND TELE-EXISTENCE, PROCEEDINGS, 2006, 4282 : 1 - +
  • [22] A Novel Gesture-Based Language for Underwater Human-Robot Interaction
    Chiarella, Davide
    Bibuli, Marco
    Bruzzone, Gabriele
    Caccia, Massimo
    Ranieri, Andrea
    Zereik, Enrica
    Marconi, Lucia
    Cutugno, Paola
    [J]. JOURNAL OF MARINE SCIENCE AND ENGINEERING, 2018, 6 (03)
  • [23] HRI-Gestures: Gesture Recognition for Human-Robot Interaction
    Kollakidou, Avgi
    Haarslev, Frederik
    Odabasi, Cagatay
    Bodenhagen, Leon
    Krueger, Norbert
    [J]. PROCEEDINGS OF THE 17TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS (VISAPP), VOL 5, 2022, : 559 - 566
  • [24] A General Pipeline for Online Gesture Recognition in Human-Robot Interaction
    Villani, Valeria
    Secchi, Cristian
    Lippi, Marco
    Sabattini, Lorenzo
    [J]. IEEE TRANSACTIONS ON HUMAN-MACHINE SYSTEMS, 2023, 53 (02) : 315 - 324
  • [25] Human-robot interaction by whole body gesture spotting and recognition
    Yang, Hee-Deok
    Park, A-Yeon
    Lee, Seong-Whan
    [J]. 18TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION, VOL 4, PROCEEDINGS, 2006, : 774 - +
  • [26] Diver-robot communication dataset for underwater hand gesture recognition
    Kvasic, Igor
    Antillon, Derek Orbaugh
    Nad, Dula
    Walker, Christopher
    Anderson, Iain
    Miskovic, Nikola
    [J]. COMPUTER NETWORKS, 2024, 245
  • [27] Sonar-Based Detection and Tracking of a Diver for Underwater Human-Robot Interaction Scenarios
    DeMarco, Kevin J.
    West, Michael E.
    Howard, Ayanna M.
    [J]. 2013 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC 2013), 2013, : 2378 - 2383
  • [28] Gesture-based Language for Diver-Robot Underwater Interaction
    Chiarella, D.
    Bibuli, M.
    Bruzzone, G.
    Caccia, M.
    Ranieri, A.
    Zereik, E.
    Marconi, L.
    Cutugno, P.
    [J]. OCEANS 2015 - GENOVA, 2015,
  • [29] Head and Eye Egocentric Gesture Recognition for Human-Robot Interaction Using Eyewear Cameras
    Marina-Miranda, Javier
    Javier Traver, V
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (03) : 7067 - 7074
  • [30] Gesture Learning Based on A Topological Approach for Human-Robot Interaction
    Obo, Takenori
    Takizawa, Kazuma
    [J]. 2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,