Vision-Based Spacecraft Pose Estimation via a Deep Convolutional Neural Network for Noncooperative Docking Operations

Cited by: 39
Authors
Phisannupawong, Thaweerath [1 ,2 ]
Kamsing, Patcharin [1 ]
Torteeka, Peerapong [3 ]
Channumsin, Sittiporn [4 ]
Sawangwit, Utane [3 ]
Hematulin, Warunyu [1 ]
Jarawan, Tanatthep [1 ]
Somjit, Thanaporn [1 ]
Yooyen, Soemsak [1 ]
Delahaye, Daniel [5 ]
Boonsrimuang, Pisit [6 ]
Affiliations
[1] King Mongkuts Inst Technol Ladkrabang, Int Acad Aviat Ind, Dept Aeronaut Engn, Air Space Control Optimizat & Management Lab, Bangkok 10520, Thailand
[2] Natl Astron Res Inst Thailand, Internship Program, Chiang Mai 50180, Thailand
[3] Natl Astron Res Inst Thailand, Res Grp, Chiang Mai 50180, Thailand
[4] Geoinformat & Space Technol Dev Agcy GISTDA, Astrodynam Res Lab, Chon Buri 20230, Thailand
[5] Ecole Natl Aviat Civile, F-31400 Toulouse, France
[6] King Mongkuts Inst Technol Ladkrabang, Fac Engn, Bangkok 10520, Thailand
Keywords
spacecraft docking operation; on-orbit services; pose estimation; deep convolutional neural network;
DOI
10.3390/aerospace7090126
CLC Number (Chinese Library Classification)
V [Aeronautics, Astronautics];
Subject Classification Number
08; 0825
Abstract
The capture of a target spacecraft by a chaser is an on-orbit docking operation that requires an accurate, reliable, and robust object recognition algorithm. Vision-based guidance of spacecraft relative motion during close-proximity maneuvers has been applied successively with dynamic modeling in spacecraft on-orbit service systems. This research constructs a vision-based pose estimation model that performs image processing via a deep convolutional neural network. The pose estimation model was built by repurposing a modified pretrained GoogLeNet model with the available Unreal Engine 4 rendered dataset of the Soyuz spacecraft. In the implementation, the convolutional neural network learns from the data samples to create correlations between the images and the spacecraft's six degrees-of-freedom parameters. The experiments compared an exponential-based loss function with a weighted Euclidean-based loss function. Using the weighted Euclidean-based loss function, the implemented pose estimation model achieved moderately high performance, with a position accuracy of 92.53 percent and an error of 1.2 m. The attitude prediction accuracy reaches 87.93 percent, and the errors in the three Euler angles do not exceed 7.6 degrees. This research can contribute to spacecraft detection and tracking problems. Although the finished vision-based model is specific to the environment of the synthetic dataset, the model could be trained further to address actual docking operations in the future.
Pages: 1-22
Number of pages: 22
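
To make the approach described in the abstract concrete, the sketch below shows one plausible way to repurpose a pretrained GoogLeNet backbone for 6-DoF pose regression (3-D position plus a unit-quaternion attitude) and to train it with a PoseNet-style weighted Euclidean loss. It is a minimal illustration under stated assumptions, not the authors' implementation: the 7-unit regression head, the quaternion output layout, the weighting factor beta, and the torchvision (>= 0.13) API calls are my own choices, since the record does not give the paper's exact loss weighting or network modifications.

import torch
import torch.nn as nn
from torchvision import models  # assumes torchvision >= 0.13 for the weights API


def build_pose_net() -> nn.Module:
    """Swap GoogLeNet's 1000-way ImageNet classifier for a 7-unit pose head."""
    net = models.googlenet(weights=models.GoogLeNet_Weights.IMAGENET1K_V1)
    net.aux_logits = False  # make forward() return a single tensor, not aux outputs
    # Assumed output layout: [tx, ty, tz, qw, qx, qy, qz]
    net.fc = nn.Linear(net.fc.in_features, 7)
    return net


def weighted_euclidean_loss(pred: torch.Tensor,
                            target: torch.Tensor,
                            beta: float = 10.0) -> torch.Tensor:
    """PoseNet-style weighted Euclidean loss (beta is an assumed hyperparameter):
    L = ||t_pred - t_true||_2 + beta * ||q_pred - q_true||_2, quaternions unit-normalized.
    """
    t_pred, q_pred = pred[:, :3], pred[:, 3:]
    t_true, q_true = target[:, :3], target[:, 3:]
    q_pred = q_pred / q_pred.norm(dim=1, keepdim=True)   # project onto unit quaternions
    q_true = q_true / q_true.norm(dim=1, keepdim=True)
    position_error = (t_pred - t_true).norm(dim=1)        # Euclidean distance in metres
    attitude_error = (q_pred - q_true).norm(dim=1)        # quaternion chordal distance
    return (position_error + beta * attitude_error).mean()


if __name__ == "__main__":
    model = build_pose_net()
    images = torch.randn(4, 3, 224, 224)   # stand-in for rendered Soyuz images
    poses = torch.randn(4, 7)              # stand-in for 6-DoF ground-truth labels
    loss = weighted_euclidean_loss(model(images), poses)
    loss.backward()                        # gradients reach the new regression head
    print(f"smoke-test loss: {loss.item():.4f}")

The abstract reports that the weighted Euclidean-based loss outperformed the exponential-based alternative; the beta value above is purely illustrative and would need tuning against the rendered dataset.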