Visual localization for asteroid touchdown operation based on local image features

Cited by: 0
Authors
Yoshiyuki Anzai
Takehisa Yairi
Naoya Takeishi
Yuichi Tsuda
Naoko Ogawa
Affiliations
[1] The University of Tokyo
[2] RIKEN Center for Advanced Intelligence Project
[3] Japan Aerospace Exploration Agency
Source
Astrodynamics | 2020, Vol. 4
Keywords
visual navigation; structure from motion; asteroid; touchdown; Hayabusa2;
DOI
Not available
Abstract
In an asteroid sample-return mission, accurate estimation of the spacecraft's position relative to the asteroid is essential for landing at the target point. During the Hayabusa and Hayabusa2 missions, the main part of the visual position estimation procedure was performed by human operators on Earth, based on sequences of asteroid images acquired and downlinked by the spacecraft. Although this approach is still adopted in critical space missions, there is an increasing demand for automated visual position estimation to reduce the time and cost of human intervention. In this paper, we propose a method for estimating the relative position of the spacecraft and the asteroid during the descent phase for touchdown from an image sequence, using state-of-the-art techniques of image processing, feature extraction, and structure from motion. We apply this method to real images of Ryugu taken by Hayabusa2 at altitudes ranging from 20 km down to 500 m, and demonstrate that it is practically useful at altitudes between 5 km and 1 km. This result indicates that our method could improve the efficiency of ground operations for global mapping and for navigation during the touchdown sequence, although full automation and autonomous on-board estimation are beyond the scope of this study. Furthermore, we discuss the challenges of developing a completely automatic position estimation framework.
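The abstract describes a pipeline that combines local feature extraction and matching with structure from motion to recover the spacecraft's pose relative to the asteroid. As a rough illustration only, and not the authors' implementation, the following Python sketch recovers the relative pose between two consecutive descent images with OpenCV. The ORB detector and the intrinsics matrix K are assumptions (the paper only says "local image features"), and the translation is recovered only up to scale; an absolute scale would have to come from another sensor such as a laser altimeter.

    import cv2
    import numpy as np

    def relative_pose(img1, img2, K):
        """Relative rotation R and unit-scale translation t between two
        grayscale images of the asteroid surface (illustrative sketch)."""
        # Detect and describe local features (ORB is an assumption here).
        orb = cv2.ORB_create(nfeatures=2000)
        kp1, des1 = orb.detectAndCompute(img1, None)
        kp2, des2 = orb.detectAndCompute(img2, None)

        # Match descriptors with cross-checked brute force (Hamming metric).
        matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
        matches = matcher.match(des1, des2)
        pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
        pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

        # Robustly estimate the essential matrix, rejecting outlier matches.
        E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                       prob=0.999, threshold=1.0)
        # Decompose E into R and t; t is defined only up to scale.
        _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K, mask=mask)
        return R, t

    # Example call with a hypothetical pinhole intrinsics matrix K:
    # K = np.array([[f, 0, cx], [0, f, cy], [0, 0, 1]], dtype=np.float64)
    # R, t = relative_pose(cv2.imread("a.png", 0), cv2.imread("b.png", 0), K)

Chaining such pairwise poses over the image sequence (with bundle adjustment, as in standard structure-from-motion tools) is what allows position tracking over the full descent.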
Pages: 149 - 161
Number of pages: 12
Related papers
50 items
  • [3] Scene image classification based on visual words concatenation of local and global features
    Shrinivasa, S. R.
    Prabhakar, C. J.
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (01) : 1237 - 1256
  • [5] CNN-Based Local Features for Navigation Near an Asteroid
    Knuuttila, Olli
    Kestila, Antti
    Kallio, Esa
    [J]. IEEE ACCESS, 2024, 12 : 16652 - 16672
  • [6] Trial Report of Localization for Visual Based Tracking System in Asteroid Flyby
    Hashizume, Koya
    Miyata, Kikuko
    Hara, Susumu
    [J]. IEEJ JOURNAL OF INDUSTRY APPLICATIONS, 2021, 10 (02) : 200 - 201
  • [7] HYBRID CODING OF VISUAL CONTENT AND LOCAL IMAGE FEATURES
    Baroffio, Luca
    Cesana, Matteo
    Redondi, Alessandro
    Tagliasacchi, Marco
    Tubaro, Stefano
    [J]. 2015 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2015 : 2530
  • [8] Learning Task-Aligned Local Features for Visual Localization
    Liu, Chuanjin
    Liu, Hongmin
    Zhang, Lixin
    Zeng, Hui
    Luo, Lufeng
    Fan, Bin
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (06) : 3366 - 3373
  • [9] Content-Based Image Retrieval Based on Visual Words Fusion Versus Features Fusion of Local and Global Features
    Mehmood, Zahid
    Abbas, Fakhar
    Mahmood, Toqeer
    Javid, Muhammad Arshad
    Rehman, Amjad
    Nawaz, Tabassam
    [J]. ARABIAN JOURNAL FOR SCIENCE AND ENGINEERING, 2018, 43 : 7265 - 7284
  • [10] Capsule Endoscope Localization based on Visual Features
    Iakovidis, Dimitris K.
    Spyrou, Evaggelos
    Diamantis, Dimitris
    Tsiompanidis, Ilias
    [J]. 2013 IEEE 13TH INTERNATIONAL CONFERENCE ON BIOINFORMATICS AND BIOENGINEERING (BIBE), 2013