Vision-Based Spacecraft Pose Estimation via a Deep Convolutional Neural Network for Noncooperative Docking Operations

Cited by: 39
Authors
Phisannupawong, Thaweerath [1 ,2 ]
Kamsing, Patcharin [1 ]
Torteeka, Peerapong [3 ]
Channumsin, Sittiporn [4 ]
Sawangwit, Utane [3 ]
Hematulin, Warunyu [1 ]
Jarawan, Tanatthep [1 ]
Somjit, Thanaporn [1 ]
Yooyen, Soemsak [1 ]
Delahaye, Daniel [5 ]
Boonsrimuang, Pisit [6 ]
Affiliations
[1] King Mongkuts Inst Technol Ladkrabang, Int Acad Aviat Ind, Dept Aeronaut Engn, Air Space Control Optimizat & Management Lab, Bangkok 10520, Thailand
[2] Natl Astron Res Inst Thailand, Internship Program, Chiang Mai 50180, Thailand
[3] Natl Astron Res Inst Thailand, Res Grp, Chiang Mai 50180, Thailand
[4] Geoinformat & Space Technol Dev Agcy GISTDA, Astrodynam Res Lab, Chon Buri 20230, Thailand
[5] Ecole Natl Aviat Civile, F-31400 Toulouse, France
[6] King Mongkuts Inst Technol Ladkrabang, Fac Engn, Bangkok 10520, Thailand
Keywords
spacecraft docking operation; on-orbit services; pose estimation; deep convolutional neural network;
DOI
10.3390/aerospace7090126
CLC classification
V [Aeronautics, Astronautics];
Subject classification
08 ; 0825 ;
Abstract
The capture of a target spacecraft by a chaser is an on-orbit docking operation that requires an accurate, reliable, and robust object recognition algorithm. Vision-based guidance of spacecraft relative motion during close-proximity maneuvers has repeatedly been applied, using dynamic modeling, in spacecraft on-orbit service systems. This research constructs a vision-based pose estimation model that performs image processing via a deep convolutional neural network. The pose estimation model was built by repurposing a modified pretrained GoogLeNet model with an available Unreal Engine 4-rendered dataset of the Soyuz spacecraft. In the implementation, the convolutional neural network learns from the data samples to create correlations between the images and the spacecraft's six degrees-of-freedom parameters. The experiment compared an exponential-based loss function with a weighted Euclidean-based loss function. Using the weighted Euclidean-based loss function, the implemented pose estimation model achieved moderately high performance, with a position accuracy of 92.53 percent and an error of 1.2 m. The attitude prediction accuracy can reach 87.93 percent, and the errors in the three Euler angles do not exceed 7.6 degrees. This research can contribute to spacecraft detection and tracking problems. Although the finished vision-based model is specific to the environment of the synthetic dataset, the model could be trained further to address actual docking operations in the future.
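The weighted Euclidean-based loss mentioned in the abstract combines a translation error term with an orientation error term scaled by a balancing weight, in the style of PoseNet-like pose regression. A minimal sketch follows; this is not the authors' exact formulation — the weight `beta`, the quaternion orientation parameterization, and the 7-element pose layout are illustrative assumptions:

```python
import numpy as np

def weighted_euclidean_pose_loss(pred, target, beta=10.0):
    """Weighted Euclidean loss over a 6-DoF pose encoded as
    [x, y, z, qw, qx, qy, qz]: Euclidean position error plus
    beta times the Euclidean distance between unit quaternions.
    `beta` is a hypothetical balancing weight, not a published value."""
    pos_pred, ori_pred = pred[:3], pred[3:]
    pos_true, ori_true = target[:3], target[3:]
    # Normalize orientations so the comparison is between unit quaternions.
    ori_pred = ori_pred / np.linalg.norm(ori_pred)
    ori_true = ori_true / np.linalg.norm(ori_true)
    position_error = np.linalg.norm(pos_pred - pos_true)
    orientation_error = np.linalg.norm(ori_pred - ori_true)
    return position_error + beta * orientation_error

# Example: identical orientation, 1 m position offset -> loss of 1.0
pred = np.array([1.0, 2.0, 3.0, 1.0, 0.0, 0.0, 0.0])
target = np.array([1.0, 2.0, 4.0, 1.0, 0.0, 0.0, 0.0])
loss = weighted_euclidean_pose_loss(pred, target)  # → 1.0
```

During training, such a scalar loss would be minimized over batches of rendered images paired with ground-truth poses, with the network regressing the 7-element pose vector directly.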
Pages: 1-22 (22 pages)
Related papers (50 total)
  • [41] Multiclass classification based on a deep convolutional network for head pose estimation
    Cai, Ying
    Yang, Meng-long
    Li, Jun
    FRONTIERS OF INFORMATION TECHNOLOGY & ELECTRONIC ENGINEERING, 2015, 16 (11) : 930 - 939
  • [42] Research on Pose Estimation of Mobile Robot Based on Convolutional Neural Network
    Yue, Min
    Fu, Guangyuan
    Wu, Ming
    2019 INTERNATIONAL CONFERENCE ON INTELLIGENT MANUFACTURING AND INTELLIGENT MATERIALS (2IM 2019), 2019, 565
  • [43] Monocular Depth Estimation of Noncooperative Spacecraft Based on Deep Learning
    Zhao, Erxun
    Zhang, Yang
    Gao, Jingmin
    JOURNAL OF AEROSPACE INFORMATION SYSTEMS, 2023, 20 (06): : 334 - 342
  • [44] ChiNet: Deep Recurrent Convolutional Learning for Multimodal Spacecraft Pose Estimation
    Rondao, Duarte
    Aouf, Nabil
    Richardson, Mark A.
    IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS, 2023, 59 (02) : 937 - 949
  • [45] Vision-Based Gait Events Detection Using Deep Convolutional Neural Networks
    Jamsrandorj, Ankhzaya
    Mau Dung Nguyen
    Park, Mina
    Kumar, Konki Sravan
    Mun, Kyung-Ryoul
    Kim, Jinwook
    2021 43RD ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE & BIOLOGY SOCIETY (EMBC), 2021, : 1936 - 1941
  • [46] Hand pose estimation for vision-based human interface
    Ueda, E
    Matsumoto, Y
    Imai, M
    Ogasawara, T
    ROBOT AND HUMAN COMMUNICATION, PROCEEDINGS, 2001, : 473 - 478
  • [47] Vision-based pose estimation for cooperative space objects
    Zhang, Haopeng
    Jiang, Zhiguo
    Elgammal, Ahmed
    ACTA ASTRONAUTICA, 2013, 91 : 115 - 122
  • [48] Applying Vision-Based Pose Estimation in a Telerehabilitation Application
    Rosique, Francisca
    Losilla, Fernando
    Navarro, Pedro J.
    APPLIED SCIENCES-BASEL, 2021, 11 (19):
  • [49] Robust and Accurate Pose Estimation for Vision-based Localisation
    Mei, Christopher
    2012 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2012, : 3165 - 3170
  • [50] Vision-based excavator pose estimation for automatic control
    Liu, Guangxu
    Wang, Qingfeng
    Wang, Tao
    Li, Bingcheng
    Xi, Xiangshuo
    AUTOMATION IN CONSTRUCTION, 2024, 157