UIVNAV: Underwater Information-driven Vision-based Navigation via Imitation Learning

Cited by: 1
Authors
Lin, Xiaomin [1 ]
Karapetyan, Nare [2 ]
Joshi, Kaustubh [1 ]
Liu, Tianchen [1 ]
Chopra, Nikhil [1 ]
Yu, Miao [1 ]
Tokekar, Pratap [1 ]
Aloimonos, Yiannis [1 ]
Affiliations
[1] Univ Maryland, Maryland Robot Ctr MRC, College Pk, MD 20742 USA
[2] Woods Hole Oceanog Inst WHOI, Woods Hole, MA 02543 USA
Keywords
COVERAGE; AVOIDANCE;
DOI
10.1109/ICRA57147.2024.10611203
CLC classification
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Autonomous navigation in underwater environments is challenging due to limited visibility, dynamic conditions, and the lack of a cost-efficient, accurate localization system. We introduce UIVNAV, a novel end-to-end underwater navigation solution designed to navigate robots over Objects of Interest (OOIs) while avoiding obstacles, all without relying on localization. UIVNAV uses imitation learning and draws inspiration from the navigation strategies of human divers, who do not rely on localization. UIVNAV consists of two phases: (1) generating an intermediate representation (IR) and (2) training the navigation policy on human-labeled IR. Because the navigation policy is trained on the IR instead of raw sensor data, the second phase is domain-invariant: the policy does not need to be retrained if the domain or the OOI changes. We demonstrate this in simulation by deploying the same navigation policy to survey two distinct OOIs: oyster and rock reefs. Compared with complete coverage and random walk methods, our approach gathers information about OOIs more efficiently while avoiding obstacles. The results show that UIVNAV visits areas with larger expanses of oysters or rocks, given no prior information about the environment and no localization. Moreover, a robot using UIVNAV surveys on average 36% more oysters than the complete coverage method when traveling the same distance. We also demonstrate the feasibility of real-time deployment of UIVNAV in pool experiments with a BlueROV underwater robot surveying a bed of oyster shells.
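The two-phase pipeline described in the abstract can be illustrated with a minimal sketch. The actual IR encoding and learned policy are not specified in this record, so everything below is an assumption for illustration: the IR is taken to be a per-pixel label mask (background, obstacle, OOI), and `ir_to_action` is a hand-written stand-in for the learned navigation policy that prioritizes obstacle avoidance and otherwise steers toward the image third with the most OOI pixels.

```python
# Hypothetical IR labels; the paper's actual encoding is not specified here.
BACKGROUND, OBSTACLE, OOI = 0, 1, 2

def ir_to_action(ir):
    """Stand-in for the learned navigation policy.

    Maps a per-pixel IR mask (list of rows of labels) to one of
    'left', 'forward', 'right'. Obstacle avoidance takes priority
    over information gathering, mirroring the behavior the paper
    describes (surveying OOIs while avoiding obstacles).
    """
    w = len(ir[0])
    third = w // 3

    def count(label, lo, hi):
        # Number of pixels with `label` in columns [lo, hi).
        return sum(row[lo:hi].count(label) for row in ir)

    # If the center third is mostly obstacle, turn toward the clearer side.
    obs_center = count(OBSTACLE, third, 2 * third)
    if obs_center > 0.5 * len(ir) * third:
        obs_left = count(OBSTACLE, 0, third)
        obs_right = count(OBSTACLE, 2 * third, w)
        return 'left' if obs_left < obs_right else 'right'

    # Otherwise steer toward the third with the most OOI pixels.
    ooi = [count(OOI, 0, third),
           count(OOI, third, 2 * third),
           count(OOI, 2 * third, w)]
    return ('left', 'forward', 'right')[ooi.index(max(ooi))]

# Toy IR: oysters concentrated on the right third of a 6x9 frame.
ir = [[BACKGROUND] * 6 + [OOI] * 3 for _ in range(6)]
print(ir_to_action(ir))  # -> right
```

A trained policy would replace this heuristic with a network mapping the IR to control commands; the point of the sketch is that once the policy consumes only the IR, swapping the OOI (oysters for rocks) changes the IR generator, not the policy.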
Pages: 5250-5256
Page count: 7
Related papers
50 items total
  • [31] Vision-based motion sensing for underwater navigation and mosaicing of ocean floor images
    Xu, X
    Negahdaripour, S
    OCEANS '97 MTS/IEEE CONFERENCE PROCEEDINGS, VOLS 1 AND 2, 1997, : 1412 - 1417
  • [32] Information-Driven Path Planning for Hybrid Aerial Underwater Vehicles
    Zeng, Zheng
    Xiong, Chengke
    Yuan, Xinyi
    Zhou, Hexiong
    Bai, Yuling
    Jin, Yufei
    Lu, Di
    Lian, Lian
    IEEE JOURNAL OF OCEANIC ENGINEERING, 2023, 48 (03) : 689 - 715
  • [33] Demonstration of a vision-based dead-reckoning system for navigation of an underwater vehicle
    Huster, A
    Fleischer, SD
    Rock, SM
    PROCEEDINGS OF THE 1998 WORKSHOP ON AUTONOMOUS UNDERWATER VEHICLES, (AUV '98), 1998, : 185 - 189
  • [34] Demonstration of a vision-based dead-reckoning system for navigation of an underwater vehicle
    Huster, A
    Fleischer, SD
    Rock, SM
    OCEANS'98 - CONFERENCE PROCEEDINGS, VOLS 1-3, 1998, : 326 - 330
  • [35] Vision-Based Goal-Conditioned Policies for Underwater Navigation in the Presence of Obstacles
    Manderson, Travis
    Gamboa, Juan Camilo
    Wapnick, Stefan
    Tremblay, Jean-Francois
    Shkurti, Florian
    Meger, Dave
    Dudek, Gregory
    ROBOTICS: SCIENCE AND SYSTEMS XVI, 2020,
  • [36] Vision-based wheelchair navigation using geometric AdaBoost learning
    Kim, Eun Yi
    ELECTRONICS LETTERS, 2017, 53 (08) : 534 - 536
  • [37] Vision-Based Autonomous Navigation Using Supervised Learning Techniques
    Souza, Jefferson R.
    Pessin, Gustavo
    Osorio, Fernando S.
    Wolf, Denis F.
    ENGINEERING APPLICATIONS OF NEURAL NETWORKS, PT I, 2011, 363 : 11 - 20
  • [38] Learning of Sensorimotor Behaviors by a SASE Agent for Vision-based Navigation
    Ji, Zhengping
    Huang, Xiao
    Weng, Juyang
    2008 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-8, 2008, : 3374 - 3381
  • [39] A method of vision-based navigation for rescue robots using motion information
    Luo, Jun
    Yan, Chunming
    Pu, Huayan
    Liu, Hengli
    Xie, Shaorong
    Gu, Jason
    2015 IEEE 28TH CANADIAN CONFERENCE ON ELECTRICAL AND COMPUTER ENGINEERING (CCECE), 2015, : 765 - 770
  • [40] Vision-Based Imitation Learning of Needle Reaching Skill for Robotic Precision Manipulation
    Li, Ying
    Qin, Fangbo
    Du, Shaofeng
    Xu, De
    Zhang, Jianqiang
    JOURNAL OF INTELLIGENT & ROBOTIC SYSTEMS, 2021, 101 (01)