CEDNet: A cascade encoder-decoder network for dense prediction

Cited by: 3
Authors
Zhang, Gang [1]
Li, Ziyi [2]
Tang, Chufeng [1]
Li, Jianmin [1]
Hu, Xiaolin [1,3]
Affiliations
[1] Tsinghua Univ, Inst AI, McGovern Inst Brain Res, Tsinghua Lab Brain & Intelligence THBI, IDG, Bosch J, Beijing 100084, Peoples R China
[2] Huazhong Univ Sci & Technol, Sch Artificial Intelligence & Automat, Wuhan 430074, Peoples R China
[3] Chinese Inst Brain Res CIBR, Beijing 100010, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Dense prediction; Object detection; Instance segmentation; Semantic segmentation; Cascade encoder-decoder; Multi-scale feature fusion;
DOI
10.1016/j.patcog.2024.111072
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
The prevailing methods for dense prediction tasks typically use a heavy classification backbone to extract multi-scale features and then fuse these features with a lightweight module. However, these methods allocate most of the computational resources to the classification backbone, which delays multi-scale feature fusion and can leave it inadequate. Although some methods perform feature fusion from the early stages, they either fail to fully exploit high-level features to guide low-level feature learning or rely on complex structures, resulting in sub-optimal performance. We propose a streamlined cascade encoder-decoder network, named CEDNet, tailored for dense prediction tasks. All stages in CEDNet share the same encoder-decoder structure and perform multi-scale feature fusion within each decoder, thereby enhancing the effectiveness of multi-scale feature fusion. We explored three well-known encoder-decoder structures: Hourglass, UNet, and FPN, all of which yielded promising results. Experiments on various dense prediction tasks demonstrated the effectiveness of our method.
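To make the abstract's core idea concrete, below is a minimal sketch in PyTorch of a cascade of identical encoder-decoder stages in which multi-scale fusion happens inside every stage's decoder (here FPN-style top-down fusion) rather than once after a full backbone. The class names `CascadeStage` and `CEDNetSketch`, the channel widths, and the pooling/convolution choices are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CascadeStage(nn.Module):
    """One encoder-decoder stage over a feature pyramid (finest level first)."""

    def __init__(self, channels: int, num_levels: int = 4):
        super().__init__()
        # Encoder: one conv per level; information flows toward coarse levels.
        self.enc = nn.ModuleList(
            nn.Conv2d(channels, channels, 3, padding=1) for _ in range(num_levels)
        )
        # Decoder: 1x1 lateral convs for FPN-style top-down fusion.
        self.lat = nn.ModuleList(
            nn.Conv2d(channels, channels, 1) for _ in range(num_levels)
        )

    def forward(self, feats):
        # Encoder path: refine each level, adding a downsampled copy of the
        # previous (finer) level so low-level detail reaches coarse levels.
        enc_feats, prev = [], None
        for f, conv in zip(feats, self.enc):
            x = f if prev is None else f + F.max_pool2d(prev, kernel_size=2)
            prev = conv(x).relu()
            enc_feats.append(prev)

        # Decoder path: top-down fusion, so every level of the output pyramid
        # carries high-level semantics into the next stage.
        out = [None] * len(enc_feats)
        top = self.lat[-1](enc_feats[-1])
        out[-1] = top
        for i in range(len(enc_feats) - 2, -1, -1):
            up = F.interpolate(top, size=enc_feats[i].shape[-2:], mode="nearest")
            top = self.lat[i](enc_feats[i]) + up
            out[i] = top
        return out


class CEDNetSketch(nn.Module):
    """A cascade of identical encoder-decoder stages: multi-scale fusion
    happens inside every stage's decoder rather than once at the end."""

    def __init__(self, channels: int = 64, num_levels: int = 4, num_stages: int = 3):
        super().__init__()
        self.stages = nn.ModuleList(
            CascadeStage(channels, num_levels) for _ in range(num_stages)
        )

    def forward(self, pyramid):
        for stage in self.stages:
            pyramid = stage(pyramid)
        return pyramid  # multi-scale features for a dense-prediction head


# Example: a 4-level pyramid with resolution halving between levels.
pyramid = [torch.randn(1, 64, s, s) for s in (64, 32, 16, 8)]
outputs = CEDNetSketch()(pyramid)
assert [o.shape[-1] for o in outputs] == [64, 32, 16, 8]
```

The point of the cascade, as the abstract argues, is that every stage's decoder injects high-level semantics back into every pyramid level early and repeatedly, instead of deferring all fusion to a single lightweight neck after a heavy classification backbone.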
Pages: 10