Visual localization for asteroid touchdown operation based on local image features

Cited by: 0
Authors
Yoshiyuki Anzai
Takehisa Yairi
Naoya Takeishi
Yuichi Tsuda
Naoko Ogawa
Affiliations
[1] The University of Tokyo
[2] RIKEN Center for Advanced Intelligence Project
[3] Japan Aerospace Exploration Agency
Source
Astrodynamics | 2020, Vol. 4
Keywords
visual navigation; structure from motion; asteroid; touchdown; Hayabusa2
DOI: not available
Abstract
In an asteroid sample-return mission, accurate estimation of the spacecraft's position relative to the asteroid is essential for landing at the target point. During the Hayabusa and Hayabusa2 missions, the main part of the visual position estimation procedure was performed by human operators on Earth, based on a sequence of asteroid images acquired and sent by the spacecraft. Although this approach is still adopted in critical space missions, there is an increasing demand for automated visual position estimation, so that the time and cost of human intervention may be reduced. In this paper, we propose a method for estimating the relative position of the spacecraft and the asteroid during the descent phase for touchdown from an image sequence, using state-of-the-art techniques of image processing, feature extraction, and structure from motion. We apply this method to real images of Ryugu taken by Hayabusa2 from altitudes of 20 km down to 500 m. It is demonstrated that the method has practical relevance at altitudes from 5 km down to 1 km. This result indicates that our method could improve the efficiency of ground operations during global mapping and navigation in the touchdown sequence, whereas full automation and autonomous on-board estimation are beyond the scope of this study. Furthermore, we discuss the challenges of developing a completely automatic position estimation framework.
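The paper's full structure-from-motion pipeline is not reproduced here, but its core geometric step, recovering a tracked surface feature's 3D position from two camera views, can be sketched as linear (DLT) triangulation. The function name and the synthetic camera/landmark numbers below are illustrative, not taken from the paper:

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one landmark from two views.

    P1, P2 : 3x4 camera projection matrices for the two images
    x1, x2 : 2D (normalized) image coordinates of the same surface feature
    Returns the 3D point in the common (e.g., asteroid-fixed) frame.
    """
    # Each observation contributes two linear constraints A @ X_h = 0
    # on the homogeneous 3D point X_h.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous solution is the right singular vector associated
    # with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X_h = Vt[-1]
    return X_h[:3] / X_h[3]  # dehomogenize

# Demo: two synthetic pinhole cameras (normalized coordinates) separated
# by a 100 m baseline along x, observing one landmark ~500 m away.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])          # reference view
P2 = np.hstack([np.eye(3), [[-100.0], [0.0], [0.0]]])  # after 100 m translation
X_true = np.array([10.0, 5.0, 500.0])                  # landmark position

X_h = np.append(X_true, 1.0)
x1 = (P1 @ X_h)[:2] / (P1 @ X_h)[2]   # projection into view 1
x2 = (P2 @ X_h)[:2] / (P2 @ X_h)[2]   # projection into view 2
print(triangulate_point(P1, P2, x1, x2))  # recovers ~[10, 5, 500]
```

In an actual descent sequence the projection matrices themselves are unknown and are estimated jointly with the landmarks by structure from motion; this sketch only shows the triangulation sub-problem once relative poses are available.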
Pages: 149-161 (12 pages)