Complementary Masked-Guided Meta-Learning for Domain Adaptive Nighttime Segmentation

Cited by: 0
Authors
Chen, Ruiying [1 ]
Bo, Yuming [1 ]
Wu, Panlong [1 ]
Wang, Simiao [2 ]
Liu, Yunan [2 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Automat, Nanjing 210094, Peoples R China
[2] Dalian Maritime Univ, Sch Artificial Intelligence, Dalian 116026, Peoples R China
Keywords
Metalearning; Adaptation models; Semantic segmentation; Training; Optimization; Fast Fourier transforms; Semantics; Nighttime semantic segmentation; mask-based consistency constraint; meta-learning
DOI
10.1109/LSP.2024.3465352
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communications Technology];
Discipline Code
0808; 0809
Abstract
Semantic segmentation in nighttime scenes presents a significant challenge for autonomous driving. Unsupervised domain adaptation (UDA) offers an effective solution by learning domain-invariant features to transfer models from the source domain (daytime scenes) to the target domain (nighttime scenes). Many methods introduce a latent domain to reduce the difficulty of UDA. However, they often build only a single "latent-to-target" adaptation pair, which limits the effectiveness of knowledge transfer across domains. In this letter, we propose a Masked Guided Meta-Learning (MGML) framework for domain-adaptive nighttime semantic segmentation. Within the MGML framework, we explore two key issues: how to generate the latent domain, and how to leverage the latent domain to assist meta-learning in reducing domain discrepancy. For the first issue, we employ the fast Fourier transform together with a complementary masking strategy to generate masked latent images that resemble the target scenes, without adding to the training burden. For the second issue, we nest a mask-based consistency constraint within a bi-level meta-learning framework, so that cross-domain knowledge acquired from the "source-to-latent" pair enhances the "latent-to-target" adaptation. Experiments on benchmark datasets show that MGML achieves state-of-the-art performance, demonstrating its effectiveness for nighttime semantic segmentation.
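The abstract's latent-domain generation combines an FFT operation with a complementary masking strategy. As a rough illustration only (not the authors' exact procedure), the NumPy sketch below pairs an FDA-style low-frequency amplitude swap, which transfers the target's global style onto a source image, with a pair of complementary patch masks; the function names and the `beta` and `patch` parameters are illustrative assumptions.

```python
import numpy as np

def fft_style_transfer(source, target, beta=0.05):
    """Replace the low-frequency amplitude of `source` (HxWxC, float)
    with that of `target`, keeping the source phase (FDA-style)."""
    fs = np.fft.fft2(source, axes=(0, 1))
    ft = np.fft.fft2(target, axes=(0, 1))
    amp_s, pha_s = np.abs(fs), np.angle(fs)
    amp_t = np.abs(ft)
    # Shift so the low frequencies sit at the center of the spectrum
    amp_s = np.fft.fftshift(amp_s, axes=(0, 1))
    amp_t = np.fft.fftshift(amp_t, axes=(0, 1))
    h, w = source.shape[:2]
    b = int(min(h, w) * beta)          # half-width of the swapped band
    ch, cw = h // 2, w // 2
    amp_s[ch - b:ch + b + 1, cw - b:cw + b + 1] = \
        amp_t[ch - b:ch + b + 1, cw - b:cw + b + 1]
    amp_s = np.fft.ifftshift(amp_s, axes=(0, 1))
    # Recombine swapped amplitude with the original source phase
    latent = np.fft.ifft2(amp_s * np.exp(1j * pha_s), axes=(0, 1))
    return np.real(latent)

def complementary_masks(h, w, patch=16, ratio=0.5, seed=0):
    """Return a random patch-wise binary mask M and its complement 1-M,
    so the two masked views jointly cover every pixel exactly once."""
    rng = np.random.default_rng(seed)
    gh, gw = h // patch, w // patch
    grid = (rng.random((gh, gw)) < ratio).astype(np.float32)
    mask = np.kron(grid, np.ones((patch, patch), dtype=np.float32))
    return mask, 1.0 - mask

# Build two complementary masked latent images from a stylized source
day = np.random.rand(64, 64, 3)
night = np.random.rand(64, 64, 3)
latent = fft_style_transfer(day, night)
m, m_comp = complementary_masks(64, 64)
view_a = latent * m[..., None]       # masked latent view
view_b = latent * m_comp[..., None]  # complementary view
```

A mask-based consistency constraint would then encourage the segmentation network to produce agreeing predictions on the two complementary views, which is how such masking is typically turned into a training signal.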
Pages: 3010-3014
Page count: 5
Related Papers (50 total)
  • [41] Prediction Guided Meta-Learning for Multi-Objective Reinforcement Learning
    Liu, Fei-Yu
    Qian, Chao
    2021 IEEE CONGRESS ON EVOLUTIONARY COMPUTATION (CEC 2021), 2021, : 2171 - 2178
  • [42] Lifelong Domain Word Embedding via Meta-Learning
    Xu, Hu
    Liu, Bing
    Shu, Lei
    Yu, Philip S.
    PROCEEDINGS OF THE TWENTY-SEVENTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2018, : 4510 - 4516
  • [43] MULTI-INITIALIZATION META-LEARNING WITH DOMAIN ADAPTATION
    Chen, Zhengyu
    Wang, Donglin
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 1390 - 1394
  • [44] Learning from Demonstrations via Deformable Residual Multi-Attention Domain-Adaptive Meta-Learning
    Yan, Zeyu
    Gan, Zhongxue
    Lu, Gaoxiong
    Liu, Junxiu
    Li, Wei
    BIOMIMETICS, 2025, 10 (02)
  • [45] Leveraging Meta-Learning To Improve Unsupervised Domain Adaptation
    Farhadi, Amirfarhad
    Sharifi, Arash
    COMPUTER JOURNAL, 2023, 67 (05): : 1838 - 1850
  • [46] MLANE: Meta-Learning Based Adaptive Network Embedding
    Cui, Chen
    Yang, Ning
    Yu, Philip S.
    2020 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2020, : 904 - 909
  • [47] Adaptive guidance and integrated navigation with reinforcement meta-learning
    Gaudet, Brian
    Linares, Richard
    Furfaro, Roberto
    ACTA ASTRONAUTICA, 2020, 169 : 180 - 190
  • [48] Geometry-adaptive Meta-learning in Riemannian Manifolds
    Gao, Zhi
    PROCEEDINGS OF THE ACM TURING AWARD CELEBRATION CONFERENCE-CHINA 2024, ACM-TURC 2024, 2024, : 231 - 232
  • [49] FAM: Adaptive federated meta-learning for MRI data
    Sinha, Indrajeet Kumar
    Verma, Shekhar
    Singh, Krishna Pratap
    PATTERN RECOGNITION LETTERS, 2024, 186 : 205 - 212
  • [50] Meta-forests: Domain generalization on random forests with meta-learning
    Sun, Yuyang
    Kosmas, Panagiotis
    ASIAN CONFERENCE ON MACHINE LEARNING, VOL 222, 2023, 222