An Underwater Human-Robot Interaction Using a Visual-Textual Model for Autonomous Underwater Vehicles

Cited by: 6
|
Authors
Zhang, Yongji [1 ]
Jiang, Yu [1 ,2 ]
Qi, Hong [1 ,2 ]
Zhao, Minghao [1 ]
Wang, Yuehang [1 ]
Wang, Kai [1 ]
Wei, Fenglin [1 ]
Affiliations
[1] Jilin Univ, Coll Comp Sci & Technol, Changchun 130012, Peoples R China
[2] Jilin Univ, State Key Lab Symbol Computat & Knowledge Engn, Minist Educ, Changchun 130012, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
autonomous underwater vehicle; underwater human-robot interaction; gesture recognition; visual-textual association;
DOI
10.3390/s23010197
Chinese Library Classification (CLC)
O65 [Analytical Chemistry];
Discipline Codes
070302 ; 081704 ;
Abstract
The marine environment presents a unique set of challenges for human-robot interaction. Gesture communication is a common way for divers to interact with autonomous underwater vehicles (AUVs). However, underwater gesture recognition is a challenging visual task for AUVs because of light refraction and wavelength-dependent color attenuation. Current gesture recognition methods either classify the whole image directly or first locate the hand and then classify the hand features. These purely visual approaches largely ignore textual information. This paper proposes a visual-textual model for underwater hand gesture recognition (VT-UHGR). The VT-UHGR model encodes the underwater diver's image as visual features and the category text as textual features, and generates visual-textual features through multimodal interactions. We guide AUVs to use image-text matching for learning and inference. The proposed method outperforms most existing purely visual methods on the CADDY dataset, demonstrating the effectiveness of textual patterns for underwater gesture recognition.
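The abstract does not give implementation details, but the "image-text matching for inference" idea it describes is commonly realized by comparing an image embedding against one text embedding per gesture category and picking the most similar. The sketch below illustrates that generic scheme only; the function names, the toy embeddings, and the cosine-similarity choice are assumptions for illustration, not the authors' VT-UHGR implementation.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # Scale feature vectors to unit length so dot products equal cosine similarity.
    x = np.asarray(x, dtype=float)
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def classify_by_image_text_matching(image_feat, text_feats):
    """Return (best_index, scores): the gesture-category text whose embedding
    is most similar to the image embedding under cosine similarity."""
    img = l2_normalize(image_feat)
    txt = l2_normalize(text_feats)
    scores = txt @ img            # one similarity score per category text
    return int(np.argmax(scores)), scores

# Toy example: a 4-D image embedding and three hypothetical category-text embeddings.
image_feat = [0.9, 0.1, 0.0, 0.2]
text_feats = [
    [0.8, 0.2, 0.1, 0.1],   # e.g. text embedding for gesture "one"
    [0.0, 0.9, 0.3, 0.0],   # e.g. text embedding for gesture "two"
    [0.1, 0.0, 0.9, 0.4],   # e.g. text embedding for gesture "stop"
]
best, scores = classify_by_image_text_matching(image_feat, text_feats)
# best is the index of the category whose text embedding best matches the image.
```

In practice the embeddings would come from trained visual and textual encoders; at inference time, classification reduces to this nearest-text lookup over the category prompts.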
Pages: 13
Related Papers
50 records in total
  • [1] Visual identification of biological motion for underwater human-robot interaction
    Sattar, Junaed
    Dudek, Gregory
    AUTONOMOUS ROBOTS, 2018, 42 (01) : 111 - 124
  • [2] SLQR Suboptimal Human-Robot Collaborative Guidance and Navigation for Autonomous Underwater Vehicles
    Spencer, David A.
    Wang, Yue
    2015 AMERICAN CONTROL CONFERENCE (ACC), 2015, : 2131 - 2136
  • [3] A Simulator for Underwater Human-Robot Interaction Scenarios
    DeMarco, Kevin J.
    West, Michael E.
    Howard, Ayanna M.
    2013 OCEANS - SAN DIEGO, 2013,
  • [4] Visual observation of underwater objects by autonomous underwater vehicles
    Kondo, H
    Ura, T
    3RD INTERNATIONAL WORKSHOP ON SCIENTIFIC USE OF SUBMARINE CABLES AND RELATED TECHNOLOGY, PROCEEDINGS, 2003, : 145 - 150
  • [5] Advancements in Visual Gesture Recognition for Underwater Human-Robot Interaction: A Comprehensive Review
    Hozyn, Stanislaw
    IEEE ACCESS, 2024, 12 : 163131 - 163142
  • [6] Visual Diver Recognition for Underwater Human-Robot Collaboration
    Xia, Youya
    Sattar, Junaed
    2019 INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2019, : 6839 - 6845
  • [7] CADDY Cognitive Autonomous Diving Buddy: Two Years of Underwater Human-Robot Interaction
    Miskovic, Nikola
    Bibuli, Marco
    Birk, Andreas
    Caccia, Massimo
    Egi, Murat
    Grammer, Karl
    Marroni, Alessandro
    Neasham, Jeff
    Pascoal, Antonio
    Vasilijevic, Antonio
    Vukic, Zoran
    MARINE TECHNOLOGY SOCIETY JOURNAL, 2016, 50 (04) : 54 - 66
  • [8] An Underwater Human-Robot Interaction Using Hand Gestures for Fuzzy Control
    Jiang, Yu
    Peng, Xianglong
    Xue, Mingzhu
    Wang, Chong
    Qi, Hong
    INTERNATIONAL JOURNAL OF FUZZY SYSTEMS, 2021, 23 (06) : 1879 - 1889
  • [9] Human-Robot Interaction Underwater: Communication and Safety Requirements
    Miskovic, Nikola
    Egi, Murat
    Nad, Dula
    Pascoal, Antonio
    Sebastiao, Luis
    Bibuli, Marco
    2016 IEEE THIRD UNDERWATER COMMUNICATIONS AND NETWORKING CONFERENCE (UCOMMS), 2016,
  • [10] Visual Odometry for Autonomous Underwater Vehicles
    Wirth, Stephan
    Negre Carrasco, Pep Lluis
    Oliver Codina, Gabriel
    2013 MTS/IEEE OCEANS - BERGEN, 2013,