Reliable pose estimation of underwater dock using single camera: a scene invariant approach

Cited by: 40
Authors
Ghosh, Shatadal [1]
Ray, Ranjit [2]
Vadali, Siva Ram Krishna [2]
Shome, Sankar Nath [2]
Nandy, Sambhunath [2]
Affiliations
[1] CSIR CMERI, Acad Sci & Innovat Res AcSIR, Durgapur, India
[2] CSIR CMERI, Robot & Automat Div, Durgapur, India
Keywords
AUV; Perspective projection; Pose estimation; Underwater docking; Vision guidance
DOI
10.1007/s00138-015-0736-4
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
It is well known that docking of an Autonomous Underwater Vehicle (AUV) provides scope for long-duration deep-sea exploration. A large body of literature on vision-based docking exploits mechanical design and colored markers to estimate the pose of a docking station. In this work, we propose a method to estimate the relative pose of a circular docking station (fitted with LED lights on its periphery) up to five degrees of freedom (5-DOF, neglecting roll). Generally, extraction of light markers from underwater images relies on a fixed or adaptive threshold, followed by mass-moment-based computation of the individual markers and of the dock center. The novelty of our work is a highly effective, scene-invariant histogram-based adaptive thresholding scheme (HATS) that reliably extracts the positions of the light sources seen in active-marker images. Since the perspective projection of a circle yields a family of ellipses, we fit an appropriate ellipse to the markers and then use the ellipse parameters to estimate the pose of the circular docking station with the well-known method of Safaee-Rad et al. (IEEE Trans Robot Autom 8(5):624-640, 1992). We analyze the effectiveness of HATS and of the proposed approach through simulations and experiments. We also compare the performance of the curvature-based pose estimation with the non-iterative efficient perspective-n-point (EPnP) method. The paper ends with a few remarks on the advantages of ellipse fitting for the markers and on the utility of the proposed method when not all light markers are detected.
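As a rough illustration of the pipeline summarized above, the following Python/OpenCV sketch uses Otsu's histogram-based threshold as a stand-in for the paper's HATS, extracts marker centroids by mass moments, fits an ellipse to the projected LED ring, and runs the EPnP baseline via cv2.solvePnP. The use of OpenCV and all function names are assumptions made for illustration only; the closed-form 5-DOF recovery from the ellipse parameters (Safaee-Rad et al.) is not reproduced here.

# Illustrative sketch only; Otsu's threshold stands in for the paper's HATS.
import cv2
import numpy as np

def extract_marker_centroids(gray):
    """Threshold an 8-bit active-marker image and return LED centroids (mass moments)."""
    # Histogram-derived global threshold; the paper's HATS adapts this per scene.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    centroids = []
    for c in contours:
        m = cv2.moments(c)
        if m["m00"] > 0:  # skip degenerate blobs
            centroids.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return np.array(centroids, dtype=np.float32)

def fit_marker_ellipse(centroids):
    """Fit an ellipse to the projected circular LED ring (requires >= 5 centroids)."""
    # Returns ((cx, cy), (major_axis, minor_axis), angle_deg).
    return cv2.fitEllipse(centroids.reshape(-1, 1, 2))

def epnp_pose(object_pts, image_pts, K):
    """Non-iterative EPnP pose estimate, used here as the comparison baseline."""
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None,
                                  flags=cv2.SOLVEPNP_EPNP)
    return ok, rvec, tvec

Note that EPnP requires known 2D-3D correspondences for the individual LEDs, whereas the ellipse-based route needs only the fitted conic, which is one reason the paper discusses its behavior when not all markers are detected.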
Pages: 221-236
Page count: 16
Related Papers
50 items in total
  • [1] Reliable pose estimation of underwater dock using single camera: a scene invariant approach
    Shatadal Ghosh
    Ranjit Ray
    Siva Ram Krishna Vadali
    Sankar Nath Shome
    Sambhunath Nandy
    Machine Vision and Applications, 2016, 27 : 221 - 236
  • [2] Illumination invariant head pose estimation using single camera
    Nanda, H
    Fujimura, K
    IEEE IV2003: INTELLIGENT VEHICLES SYMPOSIUM, PROCEEDINGS, 2003, : 434 - 437
  • [3] Single-camera pose estimation using mirage
    Singhirunnusorn, Khomsun
    Fahimi, Farbod
    Aygun, Ramazan
    IET COMPUTER VISION, 2018, 12 (05) : 720 - 727
  • [4] Scene based camera pose estimation in Manhattan worlds
    Vehar, Darko
    Nestler, Rico
    Franke, Karl-Heinz
    PHOTONICS AND EDUCATION IN MEASUREMENT SCIENCE, 2019, 11144
  • [5] Coplanarity-Based Approach for Camera Motion Estimation Invariant to the Scene Depth
    Goshin, Y.
    OPTICAL MEMORY AND NEURAL NETWORKS, 2022, 31 (SUPPL 1) : 22 - 30
  • [6] Camera Absolute Pose Estimation Using Hierarchical Attention in Multi-Scene
    Lu, Xinhua
    Miao, Jingui
    Xue, Qingji
    Wan, Hui
    Zhang, Hao
    IEEE Access, 2025, 13 : 19624 - 19634
  • [9] Voxel-Based Scene Representation for Camera Pose Estimation of a Single RGB Image
    Lee, Sangyoon
    Hong, Hyunki
    Eem, Changkyoung
    APPLIED SCIENCES-BASEL, 2020, 10 (24): : 1 - 15
  • [10] MARKER BASED CAMERA POSE ESTIMATION FOR UNDERWATER ROBOTS
    Ishida, Masahiro
    Shimonomura, Kazuhiro
    2012 IEEE/SICE INTERNATIONAL SYMPOSIUM ON SYSTEM INTEGRATION (SII), 2012, : 629 - 634