Robust multi-task learning and online refinement for spacecraft pose estimation across domain gap

Cited by: 18
Authors
Park, Tae Ha [1 ]
D'Amico, Simone [1 ]
Institutions
[1] Stanford Univ, Dept Aeronaut & Astronaut, 496 Lomita Mall, Stanford, CA 94305 USA
Keywords
Vision-only navigation; Rendezvous; Pose estimation; Computer vision; Deep learning; Domain gap;
DOI
10.1016/j.asr.2023.03.036
Chinese Library Classification
V [Aeronautics & Astronautics];
Discipline Classification Code
08 ; 0825 ;
Abstract
This work presents Spacecraft Pose Network v2 (SPNv2), a Convolutional Neural Network (CNN) for pose estimation of noncooperative spacecraft across domain gap. SPNv2 is a multi-scale, multi-task CNN which consists of a shared multi-scale feature encoder and multiple prediction heads that perform different tasks on a shared feature output. These tasks are all related to detection and pose estimation of a target spacecraft from an image, such as prediction of pre-defined satellite keypoints, direct pose regression, and binary segmentation of the satellite foreground. It is shown that by jointly training on different yet related tasks with extensive data augmentations on synthetic images only, the shared encoder learns features that are common across image domains whose visual characteristics differ fundamentally from those of synthetic images. This work also introduces Online Domain Refinement (ODR), which refines the parameters of the normalization layers of SPNv2 on the target-domain images online at deployment. Specifically, ODR performs self-supervised entropy minimization of the predicted satellite foreground, thereby improving the CNN's performance on the target-domain images without their pose labels and with minimal computational effort. The GitHub repository for SPNv2 is available at https://github.com/tpark94/spnv2. (c) 2023 COSPAR. Published by Elsevier B.V. All rights reserved.
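The ODR idea described in the abstract can be illustrated with a minimal PyTorch sketch: freeze all network weights except the affine parameters of the normalization layers, then take a gradient step that minimizes the binary entropy of the predicted foreground mask on unlabeled target-domain images. This is only an illustrative sketch of the general technique, not the authors' implementation; the function name `refine_norm_layers`, the choice of optimizer, and the learning rate are assumptions made here for demonstration.

```python
import torch
import torch.nn as nn


def refine_norm_layers(model, images, lr=1e-4):
    """One illustrative online-refinement step: adapt only the
    normalization layers' scale/shift parameters by minimizing the
    entropy of the predicted foreground segmentation (no labels)."""
    # Freeze everything first.
    for p in model.parameters():
        p.requires_grad = False
    # Re-enable gradients only for normalization-layer parameters.
    norm_params = []
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm2d, nn.GroupNorm, nn.LayerNorm)):
            for p in m.parameters():
                p.requires_grad = True
                norm_params.append(p)
    opt = torch.optim.Adam(norm_params, lr=lr)

    logits = model(images)            # per-pixel foreground logits
    prob = torch.sigmoid(logits)
    eps = 1e-7
    # Binary entropy of the foreground prediction; minimizing it pushes
    # the network toward confident 0/1 masks on the target domain.
    entropy = -(prob * (prob + eps).log()
                + (1 - prob) * (1 - prob + eps).log()).mean()
    opt.zero_grad()
    entropy.backward()
    opt.step()
    return entropy.item()
```

Because only the normalization parameters are updated, the step is cheap relative to full fine-tuning, which matches the "minimal computational effort" claim in the abstract.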
Pages: 5726-5740 (15 pages)