Self-Supervised Model Adaptation for Multimodal Semantic Segmentation

Cited by: 0
Authors
Abhinav Valada
Rohit Mohan
Wolfram Burgard
Institutions
[1] University of Freiburg
[2] Toyota Research Institute
Source
International Journal of Computer Vision
Keywords
Semantic segmentation; Multimodal fusion; Scene understanding; Model adaptation; Deep learning
DOI
Not available
Abstract
Learning to reliably perceive and understand the scene is an integral enabler for robots to operate in the real world. This problem is inherently challenging due to the multitude of object types as well as appearance changes caused by varying illumination and weather conditions. Leveraging complementary modalities can enable the learning of semantically richer representations that are resilient to such perturbations. Despite the tremendous progress in recent years, most multimodal convolutional neural network approaches directly concatenate feature maps from individual modality streams, rendering the model incapable of focusing only on the relevant complementary information for fusion. To address this limitation, we propose a multimodal semantic segmentation framework that dynamically adapts the fusion of modality-specific features while being sensitive to the object category, spatial location and scene context in a self-supervised manner. Specifically, we propose an architecture consisting of two modality-specific encoder streams that fuse intermediate encoder representations into a single decoder using our proposed self-supervised model adaptation (SSMA) fusion mechanism, which optimally combines complementary features. As intermediate representations are not aligned across modalities, we introduce an attention scheme for better correlation. In addition, we propose a computationally efficient unimodal segmentation architecture termed AdapNet++ that incorporates a new encoder with multiscale residual units and an efficient atrous spatial pyramid pooling module that has a larger effective receptive field with more than 10× fewer parameters, complemented with a strong decoder with a multi-resolution supervision scheme that recovers high-resolution details. Comprehensive empirical evaluations on the Cityscapes, Synthia, SUN RGB-D, ScanNet and Freiburg Forest benchmarks demonstrate that both our unimodal and multimodal architectures achieve state-of-the-art performance while being efficient in terms of parameters and inference time, and exhibit substantial robustness in adverse perceptual conditions.
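To make the fusion mechanism described in the abstract concrete, the sketch below shows one plausible reading of an SSMA-style block in PyTorch: the two modality streams are concatenated, a bottleneck predicts per-channel, per-location gating weights, and the re-weighted features are projected back to a single stream for the decoder. This is a minimal reconstruction from the abstract's description, not the authors' released implementation; the reduction ratio `eta`, kernel sizes, normalization placement, and the names `SSMAFusion`, `gate`, and `fuse` are assumptions introduced for illustration.

```python
# Minimal sketch of an SSMA-style fusion block (assumptions noted in comments).
import torch
import torch.nn as nn

class SSMAFusion(nn.Module):
    """Fuses two modality-specific feature maps by predicting per-channel,
    per-location gating weights from their concatenation."""

    def __init__(self, channels: int, eta: int = 16):  # eta is an assumed ratio
        super().__init__()
        bottleneck = max(2 * channels // eta, 1)
        # Bottleneck that observes both modalities jointly and emits
        # sigmoid weights in [0, 1] for every channel and pixel.
        self.gate = nn.Sequential(
            nn.Conv2d(2 * channels, bottleneck, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(bottleneck, 2 * channels, kernel_size=3, padding=1),
            nn.Sigmoid(),
        )
        # Projects the re-weighted concatenation back to a single stream.
        self.fuse = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )

    def forward(self, x_a: torch.Tensor, x_b: torch.Tensor) -> torch.Tensor:
        x = torch.cat([x_a, x_b], dim=1)  # (N, 2C, H, W)
        x = x * self.gate(x)              # emphasize complementary features
        return self.fuse(x)               # (N, C, H, W), fed to the decoder

# Usage: fuse RGB and depth encoder features of matching shape.
if __name__ == "__main__":
    rgb_feat = torch.randn(1, 256, 48, 64)
    depth_feat = torch.randn(1, 256, 48, 64)
    fused = SSMAFusion(channels=256)(rgb_feat, depth_feat)
    print(fused.shape)  # torch.Size([1, 256, 48, 64])
```

Because the gate is computed from both modalities jointly and varies with spatial location, such a block can emphasize, for instance, depth features at poorly lit pixels and RGB features elsewhere, which matches the dynamic, context-sensitive adaptation the abstract describes.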
Pages: 1239–1285
Page count: 46
相关论文
共 50 条
  • [31] Self-Supervised MultiModal Versatile Networks
    Alayrac, Jean-Baptiste
    Recasens, Adria
    Schneider, Rosalia
    Arandjelovic, Relja
    Ramapuram, Jason
    De Fauw, Jeffrey
    Smaira, Lucas
    Dieleman, Sander
    Zisserman, Andrew
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [32] Self-Supervised Multimodal Opinion Summarization
    Im, Jinbae
    Kim, Moonki
    Lee, Hoyeop
    Cho, Hyunsouk
    Chung, Sehee
    59TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 11TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING, VOL 1 (ACL-IJCNLP 2021), 2021, : 388 - 403
  • [33] Motion perception-driven multimodal self-supervised video object segmentation
    Wang, Jun
    Cao, Honghui
    Sun, Chenhao
    Huang, Ziqing
    Zhang, Yonghua
    VISUAL COMPUTER, 2024,
  • [34] Spatial and Semantic Consistency Contrastive Learning for Self-Supervised Semantic Segmentation of Remote Sensing Images
    Dong, Zhe
    Liu, Tianzhu
    Gu, Yanfeng
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61
  • [35] Spatial and Semantic Consistency Contrastive Learning for Self-Supervised Semantic Segmentation of Remote Sensing Images
    Dong, Zhe
    Liu, Tianzhu
    Gu, Yanfeng
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2023, 61
  • [36] Bootstrapped Self-Supervised Training with Monocular Video for Semantic Segmentation and Depth Estimation
    Zhang, Yihao
    Leonard, John J.
    2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2021, : 2420 - 2427
  • [37] EVALUATION OF SELF-SUPERVISED LEARNING APPROACHES FOR SEMANTIC SEGMENTATION OF INDUSTRIAL BURNER FLAMES
    Landgraf, S.
    Kuehnlein, L.
    Hillemann, M.
    Hoyer, M.
    Keller, S.
    Ulrich, M.
    XXIV ISPRS CONGRESS IMAGING TODAY, FORESEEING TOMORROW, COMMISSION II, 2022, 43-B2 : 601 - 607
  • [38] ScoreSeg: Leveraging Score-Based Generative Model for Self-Supervised Semantic Segmentation of Remote Sensing
    Lu, Junzhe
    He, Guangjun
    Dou, Hongkun
    Gao, Qing
    Fang, Leyuan
    Deng, Yue
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2023, 16 : 8818 - 8833
  • [39] Fully Convolutional Network-Based Self-Supervised Learning for Semantic Segmentation
    Yang, Zhengeng
    Yu, Hongshan
    He, Yong
    Sun, Wei
    Mao, Zhi-Hong
    Mian, Ajmal
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (01) : 132 - 142
  • [40] Semantic Segmentation of Remote Sensing Images With Self-Supervised Multitask Representation Learning
    Li, Wenyuan
    Chen, Hao
    Shi, Zhenwei
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2021, 14 : 6438 - 6450