Learning Good Features to Transfer Across Tasks and Domains

Cited by: 3
|
Authors
Ramirez, Pierluigi Zama [1 ]
Cardace, Adriano [1 ]
De Luigi, Luca [1 ]
Tonioni, Alessio [2 ]
Salti, Samuele [1 ]
Di Stefano, Luigi [1 ]
Affiliations
[1] Univ Bologna, I-40126 Bologna, BO, Italy
[2] Google Inc, Mountain View, CA 94043 USA
Keywords
Task analysis; Feature extraction; Training; Multitasking; Transfer learning; Semantic segmentation; Estimation; Depth estimation; domain adaptation; semantic segmentation; task transfer
DOI
10.1109/TPAMI.2023.3240316
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The availability of labelled data is the major obstacle to deploying deep learning algorithms for computer vision tasks in new domains. The fact that many frameworks adopted to solve different tasks share the same architecture suggests that there should be a way to reuse the knowledge learned in a specific setting to solve novel tasks with limited or no additional supervision. In this work, we first show that such knowledge can be shared across tasks by learning a mapping between task-specific deep features in a given domain. Then, we show that this mapping function, implemented by a neural network, generalizes to novel, unseen domains. Moreover, we propose a set of strategies to constrain the learned feature spaces, which eases learning and increases the generalization capability of the mapping network, thereby considerably improving the final performance of our framework. Our proposal obtains compelling results in challenging synthetic-to-real adaptation scenarios by transferring knowledge between the monocular depth estimation and semantic segmentation tasks.
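To make the core idea concrete, below is a minimal, illustrative PyTorch sketch of the mechanism described in the abstract: a small neural network that maps features produced by one task-specific encoder (e.g., monocular depth estimation) into the feature space of another task (e.g., semantic segmentation), trained on a labelled source domain with both task encoders frozen. The architecture, layer sizes, and loss (a plain L2 regression between feature maps) are assumptions made here for illustration and are not taken from the paper.

```python
# Illustrative sketch only (not the authors' code): learn a mapping from
# depth-task features to semantic-segmentation features on a source domain,
# so a frozen segmentation head could later reuse the transferred features.
import torch
import torch.nn as nn

class FeatureTransferNetwork(nn.Module):
    """Maps features of task A (e.g., depth) into the feature space of
    task B (e.g., semantic segmentation). A generic convolutional network,
    chosen purely for illustration."""
    def __init__(self, channels: int = 256, hidden: int = 256, num_blocks: int = 4):
        super().__init__()
        layers = [nn.Conv2d(channels, hidden, 3, padding=1), nn.ReLU(inplace=True)]
        for _ in range(num_blocks):
            layers += [nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(inplace=True)]
        layers += [nn.Conv2d(hidden, channels, 3, padding=1)]
        self.net = nn.Sequential(*layers)

    def forward(self, feat_a: torch.Tensor) -> torch.Tensor:
        return self.net(feat_a)

def train_step(transfer_net, encoder_a, encoder_b, optimizer, images):
    """One training step on the labelled source domain: regress task-B
    features from task-A features; both task encoders stay frozen."""
    with torch.no_grad():
        feat_a = encoder_a(images)   # features from the depth network
        feat_b = encoder_b(images)   # features from the segmentation network
    pred_b = transfer_net(feat_a)
    loss = nn.functional.mse_loss(pred_b, feat_b)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Stand-in encoders: in practice these would be the frozen feature
    # extractors of two pre-trained task networks.
    encoder_a = nn.Conv2d(3, 256, 3, padding=1)
    encoder_b = nn.Conv2d(3, 256, 3, padding=1)
    transfer_net = FeatureTransferNetwork()
    optimizer = torch.optim.Adam(transfer_net.parameters(), lr=1e-4)
    dummy_images = torch.randn(2, 3, 64, 64)
    print("loss:", train_step(transfer_net, encoder_a, encoder_b, optimizer, dummy_images))
```

Once trained on the source domain, such a mapping could in principle be applied to features computed on a new, unlabelled domain, letting the frozen task-B decoder operate on the transferred features; the paper's additional strategies for constraining the learned feature spaces are omitted from this sketch.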
Pages: 9981 - 9995
Number of pages: 15
Related Papers
50 records in total
  • [1] Learning Across Tasks and Domains
    Ramirez, Pierluigi Zama
    Tonioni, Alessio
    Salti, Samuele
    Di Stefano, Luigi
    2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 8109 - 8118
  • [2] Simultaneous Deep Transfer Across Domains and Tasks
    Tzeng, Eric
    Hoffman, Judy
    Darrell, Trevor
    Saenko, Kate
    2015 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2015, : 4068 - 4076
  • [3] GradMix: Multi-source Transfer across Domains and Tasks
    Li, Junnan
    Xu, Ziwei
    Wang, Yongkang
    Zhao, Qi
    Kankanhalli, Mohan S.
    2020 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV), 2020, : 3008 - 3016
  • [4] Label Efficient Learning of Transferable Representations across Domains and Tasks
    Luo, Zelun
    Zou, Yuliang
    Hoffman, Judy
    Fei-Fei, Li
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [5] Learning with Style: Continual Semantic Segmentation Across Tasks and Domains
    Toldo, M.
    Michieli, U.
    Zanuttigh, P.
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2024, 46 (11) : 1 - 16
  • [6] IGLUE: A Benchmark for Transfer Learning across Modalities, Tasks, and Languages
    Bugliarello, Emanuele
    Liu, Fangyu
    Pfeiffer, Jonas
    Reddy, Siva
    Elliott, Desmond
    Ponti, Edoardo Maria
    Vulic, Ivan
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022
  • [7] Transfer Learning Across Heterogeneous Tasks Using Behavioural Genetic Principles
    Kohli, Maitrei
    Magoulas, George D.
    Thomas, Michael S. C.
    2013 13TH UK WORKSHOP ON COMPUTATIONAL INTELLIGENCE (UKCI), 2013, : 151 - 158
  • [8] Technical Question Answering across Tasks and Domains
    Yu, Wenhao
    Wu, Lingfei
    Deng, Yu
    Zeng, Qingkai
    Mahindru, Ruchi
    Guven, Sinem
    Jiang, Meng
    2021 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, NAACL-HLT 2021, 2021, : 178 - 186
  • [9] A transfer learning based on canonical correlation analysis across different domains
    Zhang, Bo
    Shi, Zhong-Zhi
    Zhao, Xiao-Fei
    Zhang, Jian-Hua
    Jisuanji Xuebao/Chinese Journal of Computers, 2015, 38 (07): : 1326 - 1336
  • [10] Generalisation of modified interpretive bias across tasks and domains
    Salemink, Elske
    van den Hout, Marcel
    Kindt, Merel
    COGNITION & EMOTION, 2010, 24 (03) : 453 - 464