Self-Supervised Model Adaptation for Multimodal Semantic Segmentation

Cited by: 127
Authors
Valada, Abhinav [1 ]
Mohan, Rohit [1 ]
Burgard, Wolfram [1 ,2 ]
Affiliations
[1] Univ Freiburg, Freiburg, Germany
[2] Toyota Res Inst, Los Altos, CA USA
Keywords
Semantic segmentation; Multimodal fusion; Scene understanding; Model adaptation; Deep learning; NETWORKS; RGB;
DOI
10.1007/s11263-019-01188-y
CLC classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Learning to reliably perceive and understand the scene is an integral enabler for robots to operate in the real world. This problem is inherently challenging due to the multitude of object types as well as appearance changes caused by varying illumination and weather conditions. Leveraging complementary modalities can enable learning of semantically richer representations that are resilient to such perturbations. Despite the tremendous progress in recent years, most multimodal convolutional neural network approaches directly concatenate feature maps from individual modality streams, rendering the model incapable of focusing only on the relevant complementary information for fusion. To address this limitation, we propose a multimodal semantic segmentation framework that dynamically adapts the fusion of modality-specific features while being sensitive to the object category, spatial location and scene context in a self-supervised manner. Specifically, we propose an architecture consisting of two modality-specific encoder streams that fuse intermediate encoder representations into a single decoder using our proposed self-supervised model adaptation fusion mechanism, which optimally combines complementary features. As intermediate representations are not aligned across modalities, we introduce an attention scheme for better correlation.
In addition, we propose a computationally efficient unimodal segmentation architecture termed AdapNet++ that incorporates a new encoder with multiscale residual units and an efficient atrous spatial pyramid pooling that has a larger effective receptive field with more than 10× fewer parameters, complemented with a strong decoder with a multi-resolution supervision scheme that recovers high-resolution details. Comprehensive empirical evaluations on Cityscapes, Synthia, SUN RGB-D, ScanNet and Freiburg Forest benchmarks demonstrate that both our unimodal and multimodal architectures achieve state-of-the-art performance while simultaneously being efficient in terms of parameters and inference time, as well as demonstrating substantial robustness in adverse perceptual conditions.
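The adaptive fusion mechanism described in the abstract can be illustrated with a minimal sketch: feature maps from two modality streams are concatenated, squeezed through a channel bottleneck, and re-weighted channel- and location-wise by a sigmoid gate before being reduced back to a single fused stream. The NumPy sketch below uses random placeholder weights; the function name `ssma_fuse`, the bottleneck size, and all weight shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def ssma_fuse(feat_rgb, feat_depth, bottleneck=16, seed=0):
    """Sketch of a channel-wise adaptive fusion gate: concatenate two
    modality feature maps of shape (C, H, W), squeeze through a
    bottleneck, and re-weight the concatenation with a sigmoid gate
    before reducing back to a single C-channel fused map.
    Weights are random placeholders standing in for learned 1x1 convs."""
    rng = np.random.default_rng(seed)
    C, H, W = feat_rgb.shape
    x = np.concatenate([feat_rgb, feat_depth], axis=0)       # (2C, H, W)

    # 1x1-conv analogues as channel-mixing matrices (hypothetical weights)
    w_down = rng.standard_normal((bottleneck, 2 * C)) * 0.1  # squeeze
    w_up = rng.standard_normal((2 * C, bottleneck)) * 0.1    # excite
    w_out = rng.standard_normal((C, 2 * C)) * 0.1            # fuse to C

    flat = x.reshape(2 * C, -1)                              # (2C, H*W)
    gate = 1.0 / (1.0 + np.exp(-(w_up @ np.maximum(w_down @ flat, 0.0))))
    gated = flat * gate                                      # adaptive re-weighting
    fused = (w_out @ gated).reshape(C, H, W)                 # single-stream output
    return fused, gate.reshape(2 * C, H, W)

fused, gate = ssma_fuse(np.ones((4, 5, 5)), np.ones((4, 5, 5)))
```

Because the gate is computed from both modalities jointly, each channel and spatial location can be emphasized or suppressed depending on scene context, which is the key difference from plain feature-map concatenation.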
Pages: 1239 - 1285
Page count: 47
Related papers
50 records in total
  • [1] Self-Supervised Model Adaptation for Multimodal Semantic Segmentation
    Abhinav Valada
    Rohit Mohan
    Wolfram Burgard
    [J]. International Journal of Computer Vision, 2020, 128 : 1239 - 1285
  • [2] Distribution regularized self-supervised learning for domain adaptation of semantic segmentation
    Iqbal, Javed
    Rawal, Hamza
    Hafiz, Rehan
    Chi, Yu-Tseh
    Ali, Mohsen
    [J]. IMAGE AND VISION COMPUTING, 2022, 124
  • [3] FogAdapt: Self-supervised domain adaptation for semantic segmentation of foggy images
    Iqbal, Javed
    Hafiz, Rehan
    Ali, Mohsen
    [J]. NEUROCOMPUTING, 2022, 501 : 844 - 856
  • [4] Plugging Self-Supervised Monocular Depth into Unsupervised Domain Adaptation for Semantic Segmentation
    Cardace, Adriano
    De Luigi, Luca
    Ramirez, Pierluigi Zama
    Salti, Samuele
    Di Stefano, Luigi
    [J]. 2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022), 2022, : 1999 - 2009
  • [5] Self-Supervised Embodied Learning for Semantic Segmentation
    Wang, Juan
    Liu, Xinzhu
    Zhao, Dawei
    Dai, Bin
    Liu, Huaping
    [J]. 2023 IEEE INTERNATIONAL CONFERENCE ON DEVELOPMENT AND LEARNING, ICDL, 2023, : 383 - 390
  • [6] Self-Supervised Temporal Consistency applied to Domain Adaptation in Semantic Segmentation of Urban Scenes
    Barbosa, Felipe M.
    Osorio, Fernando S.
    [J]. 2023 LATIN AMERICAN ROBOTICS SYMPOSIUM, LARS, 2023 BRAZILIAN SYMPOSIUM ON ROBOTICS, SBR, AND 2023 WORKSHOP ON ROBOTICS IN EDUCATION, WRE, 2023, : 555 - 560
  • [7] Self-supervised Semantic Segmentation: Consistency over Transformation
    Karimijafarbigloo, Sanaz
    Azad, Reza
    Kazerouni, Amirhossein
    Velichko, Yury
    Bagci, Ulas
    Merhof, Dorit
    [J]. 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS, ICCVW, 2023, : 2646 - 2655
  • [8] Self-Supervised Learning of Object Parts for Semantic Segmentation
    Ziegler, Adrian
    Asano, Yuki M.
    [J]. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 14482 - 14491
  • [9] Self-supervised Augmentation Consistency for Adapting Semantic Segmentation
    Araslanov, Nikita
    Roth, Stefan
    [J]. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 15379 - 15389