A Robust Crater Matching Algorithm for Autonomous Vision-Based Spacecraft Navigation

Times Cited: 0
Authors
Del Prete, Roberto [1]
Renga, Alfredo [1]
Affiliations
[1] Univ Naples Federico II, Dept Ind Engn Aerosp Engn, I-80125 Naples, Italy
Keywords
ORBITAL NAVIGATION; VISUAL NAVIGATION; IDENTIFICATION;
DOI
10.1109/METROAEROSPACE51421.2021.9511670
Chinese Library Classification
V [Aviation, Aerospace];
Discipline Code
08 ; 0825 ;
Abstract
Advancements in Computer Vision (CV) and Machine Learning (ML) over the past decades have contributed to the realization of autonomous systems such as self-driving cars. This manuscript explores the possibility of transferring this technology to the next planetary exploration missions. Similar to a star tracker, it is possible to match a pattern of observed craters against a reference, i.e. a crater catalogue, in order to estimate the spacecraft state with no external support (e.g. GNSS or DSN). This kind of technology, developed for missile applications before the advent of GPS, is known as Terrain Relative Navigation (TRN). Unlike stars, however, craters vary widely in appearance depending on image quality, lighting geometry, and noise. While these problems can nowadays be overcome with modern deep learning approaches, the inherent limitation of crater detectors, i.e. false detections, still poses a problem for the matching phase. In response, this paper proposes a novel solution that exploits attitude and sensor pointing knowledge to discriminate false matches. A complete TRN system, called FederNet, was developed by implementing the matching algorithm within a processing chain that includes a Convolutional Neural Network and an extended Kalman filter (EKF). FederNet has been validated with a numerical analysis on real lunar elevation images, and the adopted methodology extends to other airless bodies. Despite the use of a medium-resolution (118 m/px) Digital Elevation Model (DEM), results showed that the navigation accuracy lies below 400 meters in the best-case scenario, enabling real-time autonomous on-board operations with no need for ground support. The capabilities of such a TRN system can be further improved with higher-resolution data and data fusion with other sensor measurements.
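The core idea of discriminating false matches with attitude and pointing knowledge can be illustrated with a minimal sketch: catalogue craters are projected into the image using the known camera pointing, and each detection is accepted only if a projected crater lies within a pixel gate. The function name `gate_matches`, the fixed pixel gate, and all coordinates are illustrative assumptions, not the paper's actual algorithm or data.

```python
import math

def gate_matches(detected, predicted, gate_px=15.0):
    """Associate detected crater centers with catalogue craters projected
    into the image via the known attitude/pointing.  Detections with no
    projected crater within gate_px pixels are flagged as false matches.
    (Illustrative sketch; not the FederNet implementation.)"""
    matches, rejected = [], []
    for d in detected:
        # nearest projected (catalogue) crater in pixel space
        j, dist = min(
            ((j, math.dist(d, p)) for j, p in enumerate(predicted)),
            key=lambda t: t[1],
        )
        if dist <= gate_px:
            matches.append((d, j))       # accepted crater correspondence
        else:
            rejected.append(d)           # likely a false detection
    return matches, rejected

# Catalogue craters projected with the known camera pointing (made-up values)
predicted = [(100.0, 120.0), (240.0, 80.0), (60.0, 200.0)]
# CNN detections: two true craters plus one spurious detection
detected = [(103.0, 118.0), (238.0, 84.0), (400.0, 300.0)]

matches, rejected = gate_matches(detected, predicted)
```

The accepted correspondences would then feed the measurement update of an EKF, while the gated-out detections are discarded before they can corrupt the state estimate.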
Pages: 322 - 327
Number of Pages: 6
Related Papers
50 records in total
  • [1] A Deep Learning-based Crater Detector for Autonomous Vision-Based Spacecraft Navigation
    Del Prete, Roberto
    Saveriano, Alfonso
    Renga, Alfredo
    [J]. 2022 IEEE INTERNATIONAL WORKSHOP ON METROLOGY FOR AEROSPACE (IEEE METROAEROSPACE 2022), 2022, : 231 - 236
  • [2] Autonomous Vision-Based Algorithm for Interplanetary Navigation
    Andreis, Eleonora
    Panicucci, Paolo
    Topputo, Francesco
    [J]. JOURNAL OF GUIDANCE CONTROL AND DYNAMICS, 2024, 47 (09) : 1792 - 1807
  • [3] Navigation Command Matching for Vision-based Autonomous Driving
    Pan, Yuxin
    Xue, Jianru
    Zhang, Pengfei
    Ouyang, Wanli
    Fang, Jianwu
    Chen, Xingyu
    [J]. 2020 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION (ICRA), 2020, : 4343 - 4349
  • [4] Robust Navigation Solution for Vision-Based Autonomous Rendezvous
    Comellini, Anthea
    Mave, Florent
    Dubanchet, Vincent
    Casu, Davide
    Zenou, Emmanuel
    Espinosa, Christine
    [J]. 2021 IEEE AEROSPACE CONFERENCE (AEROCONF 2021), 2021,
  • [5] Robust Vision-based Autonomous Navigation against Environment Changes
    Kim, Jungho
    Bok, Yunsu
    Kweon, In So
    [J]. 2008 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS, VOLS 1-3, CONFERENCE PROCEEDINGS, 2008, : 696 - 701
  • [6] Flight Results of Vision-Based Navigation for Autonomous Spacecraft Inspection of Unknown Objects
    Fourie, Dehann
    Tweddle, Brent E.
    Ulrich, Steve
    Saenz-Otero, Alvar
    [J]. JOURNAL OF SPACECRAFT AND ROCKETS, 2014, 51 (06) : 2016 - 2026
  • [7] Robust Vision-Based Autonomous Navigation, Mapping and Landing for MAVs at Night
    Daftry, Shreyansh
    Das, Manash
    Delaune, Jeff
    Sorice, Cristina
    Hewitt, Robert
    Reddy, Shreetej
    Lytle, Daniel
    Gu, Elvin
    Matthies, Larry
    [J]. PROCEEDINGS OF THE 2018 INTERNATIONAL SYMPOSIUM ON EXPERIMENTAL ROBOTICS, 2020, 11 : 232 - 242
  • [8] Computer Vision-based navigation for autonomous blimps
    Coelho, LD
    Campos, MFM
    Kumar, V
    [J]. SIBGRAPI '98 - INTERNATIONAL SYMPOSIUM ON COMPUTER GRAPHICS, IMAGE PROCESSING, AND VISION, PROCEEDINGS, 1998, : 287 - 294
  • [9] Vision-Based Autonomous Navigation with Evolutionary Learning
    Moya-Albor, Ernesto
    Ponce, Hiram
    Brieva, Jorge
    Coronel, Sandra L.
    Chavez-Domingue, Rodrigo
    [J]. ADVANCES IN COMPUTATIONAL INTELLIGENCE, MICAI 2020, PT II, 2020, 12469 : 459 - 471
  • [10] Vision-based Perception for Autonomous Urban Navigation
    Bansal, Mayank
    Das, Aveek
    Kreutzer, Greg
    Eledath, Jayan
    Kumar, Rakesh
    Sawhney, Harpreet
    [J]. PROCEEDINGS OF THE 11TH INTERNATIONAL IEEE CONFERENCE ON INTELLIGENT TRANSPORTATION SYSTEMS, 2008, : 434 - 440