Robust multi-task learning and online refinement for spacecraft pose estimation across domain gap

Cited by: 18
Authors
Park, Tae Ha [1 ]
D'Amico, Simone [1 ]
Affiliations
[1] Stanford Univ, Dept Aeronaut & Astronaut, 496 Lomita Mall, Stanford, CA 94305 USA
Keywords
Vision-only navigation; Rendezvous; Pose estimation; Computer vision; Deep learning; Domain gap
DOI
10.1016/j.asr.2023.03.036
Chinese Library Classification
V [Aviation, Aerospace]
Subject Classification
08; 0825
Abstract
This work presents Spacecraft Pose Network v2 (SPNv2), a Convolutional Neural Network (CNN) for pose estimation of noncooperative spacecraft across domain gap. SPNv2 is a multi-scale, multi-task CNN that consists of a shared multi-scale feature encoder and multiple prediction heads that perform different tasks on the shared feature output. These tasks all relate to detection and pose estimation of a target spacecraft from an image, such as prediction of pre-defined satellite keypoints, direct pose regression, and binary segmentation of the satellite foreground. It is shown that by jointly training on different yet related tasks, with extensive data augmentation and on synthetic images only, the shared encoder learns features that generalize to image domains whose visual characteristics differ fundamentally from those of the synthetic training images. This work also introduces Online Domain Refinement (ODR), which refines the parameters of the normalization layers of SPNv2 on target-domain images online at deployment. Specifically, ODR performs self-supervised entropy minimization of the predicted satellite foreground, thereby improving the CNN's performance on the target-domain images without their pose labels and with minimal computational effort. The GitHub repository for SPNv2 is available at https://github.com/tpark94/spnv2. (c) 2023 COSPAR. Published by Elsevier B.V. All rights reserved.
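
To make the refinement mechanism in the abstract concrete, the following is a minimal PyTorch sketch, not the SPNv2 implementation (see https://github.com/tpark94/spnv2 for that): a toy encoder with a binary-segmentation head stands in for the multi-task network, and the hypothetical helpers freeze_all_but_norm, foreground_entropy, and odr_step illustrate ODR's self-supervised entropy minimization of the predicted foreground, with gradients flowing only into the normalization-layer parameters.

import torch
import torch.nn as nn


def freeze_all_but_norm(model: nn.Module):
    """Freeze every weight except normalization-layer parameters,
    the only ones ODR refines on the target domain."""
    for p in model.parameters():
        p.requires_grad_(False)
    refined = []
    for m in model.modules():
        if isinstance(m, (nn.BatchNorm2d, nn.GroupNorm, nn.LayerNorm)):
            for p in m.parameters():
                p.requires_grad_(True)
                refined.append(p)
    return refined


def foreground_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Mean per-pixel binary entropy of the predicted foreground mask;
    minimizing it sharpens the segmentation without any pose labels."""
    p = torch.sigmoid(logits)
    eps = 1e-8
    return -(p * (p + eps).log() + (1 - p) * (1 - p + eps).log()).mean()


def odr_step(model, optimizer, image):
    """One online refinement step on a single unlabeled target image."""
    loss = foreground_entropy(model(image))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()


# Toy stand-in for the shared encoder plus segmentation head (hypothetical):
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.BatchNorm2d(8), nn.ReLU(),
    nn.Conv2d(8, 1, 1),
)
optimizer = torch.optim.SGD(freeze_all_but_norm(model), lr=1e-3)
odr_step(model, optimizer, torch.rand(1, 3, 64, 64))

Because only the normalization parameters receive gradients, each refinement step is cheap, consistent with the abstract's claim that ODR adapts the network at deployment with minimal computational effort.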
Pages: 5726-5740
Page count: 15
Related Papers
50 items in total
  • [41] Multi-task Representation Learning for Travel Time Estimation
    Li, Yaguang
    Fu, Kun
    Wang, Zheng
    Shahabi, Cyrus
    Ye, Jieping
    Liu, Yan
    KDD'18: PROCEEDINGS OF THE 24TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2018, : 1695 - 1704
  • [42] Multi-Task Rank Learning for Visual Saliency Estimation
    Li, Jia
    Tian, Yonghong
    Huang, Tiejun
    Gao, Wen
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2011, 21 (05) : 623 - 636
  • [43] Robust Lifelong Multi-task Multi-view Representation Learning
    Sun, Gan
    Cong, Yang
    Li, Jun
    Fu, Yun
    2018 9TH IEEE INTERNATIONAL CONFERENCE ON BIG KNOWLEDGE (ICBK), 2018, : 91 - 98
  • [44] Semantic Segmentation via Multi-task, Multi-domain Learning
    Fourure, Damien
    Emonet, Remi
    Fromont, Elisa
    Muselet, Damien
    Tremeau, Alain
    Wolf, Christian
    STRUCTURAL, SYNTACTIC, AND STATISTICAL PATTERN RECOGNITION, S+SSPR 2016, 2016, 10029 : 333 - 343
  • [45] Multi-Domain and Multi-Task Learning for Human Action Recognition
    Liu, An-An
    Xu, Ning
    Nie, Wei-Zhi
    Su, Yu-Ting
    Zhang, Yong-Dong
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2019, 28 (02) : 853 - 867
  • [46] Robust Stuttering Detection via Multi-task and Adversarial Learning
    Sheikh, Shakeel A.
    Sahidullah, Md
    Hirsch, Fabrice
    Ouni, Slim
    2022 30TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO 2022), 2022, : 190 - 194
  • [47] Online Multi-Task Learning via Sparse Dictionary Optimization
    Ruvolo, Paul
    Eaton, Eric
    PROCEEDINGS OF THE TWENTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2014, : 2062 - 2068
  • [48] 3D human pose and shape estimation via de-occlusion multi-task learning
    Ran, Hang
    Ning, Xin
    Li, Weijun
    Hao, Meilan
    Tiwari, Prayag
    NEUROCOMPUTING, 2023, 548
  • [49] Multi-task oriented team formation in online collaborative learning
    Chen, Yingzhi
    Zhang, Lichen
    Ding, Yu
    Guo, Longjiang
    Bian, Kexin
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 259
  • [50] Stratified Multi-Task Learning for Robust Spotting of Scene Texts
    Dasgupta, Kinjal
    Das, Sudip
    Bhattacharya, Ujjwal
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 3130 - 3137