ZigZagNet: Fusing Top-Down and Bottom-Up Context for Object Segmentation

Cited by: 44
Authors
Lin, Di [1 ]
Shen, Dingguo [1 ]
Shen, Siting [1 ]
Ji, Yuanfeng [1 ]
Lischinski, Dani [2 ]
Cohen-Or, Daniel [1 ]
Huang, Hui [1 ]
Affiliations
[1] Shenzhen Univ, Shenzhen, Peoples R China
[2] Hebrew Univ Jerusalem, Jerusalem, Israel
DOI
10.1109/CVPR.2019.00767
CLC number
TP18 [Theory of Artificial Intelligence];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Multi-scale context information has proven to be essential for object segmentation tasks. Recent works construct the multi-scale context by aggregating convolutional feature maps extracted by different levels of a deep neural network. This is typically done by propagating and fusing features in a one-directional, top-down and bottom-up, manner. In this work, we introduce ZigZagNet, which aggregates a richer multi-context feature map by using not only dense top-down and bottom-up propagation, but also by introducing pathways crossing between different levels of the top-down and the bottom-up hierarchies, in a zig-zag fashion. Furthermore, the context information is exchanged and aggregated over multiple stages, where the fused feature maps from one stage are fed into the next one, yielding a more comprehensive context for improved segmentation performance. Our extensive evaluation on the public benchmarks demonstrates that ZigZagNet surpasses the state-of-the-art accuracy for both semantic segmentation and instance segmentation tasks.
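The zig-zag idea in the abstract — a top-down pass whose outputs feed a bottom-up pass, with the fused maps of one stage feeding the next — can be illustrated with a minimal NumPy sketch. This is not the paper's implementation: the real network uses learned convolutions and more elaborate cross-level pathways, whereas here upsampling is nearest-neighbor, downsampling is average pooling, fusion is plain addition, and the 0.5 mixing weight is an arbitrary illustrative choice.

```python
import numpy as np

def upsample(x):
    # nearest-neighbor 2x upsampling (stand-in for learned upsampling)
    return x.repeat(2, axis=0).repeat(2, axis=1)

def downsample(x):
    # 2x average pooling (stand-in for strided convolution)
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def top_down(feats):
    # coarse-to-fine: fuse each level with the upsampled coarser result
    out = [feats[-1]]
    for f in reversed(feats[:-1]):
        out.append(f + upsample(out[-1]))
    return out[::-1]  # restore fine-to-coarse order

def bottom_up(feats):
    # fine-to-coarse: fuse each level with the downsampled finer result
    out = [feats[0]]
    for f in feats[1:]:
        out.append(f + downsample(out[-1]))
    return out

def zigzag_stage(feats):
    # zig-zag: the bottom-up pass consumes the top-down outputs,
    # then both streams are mixed per level (illustrative 0.5 weight)
    td = top_down(feats)
    bu = bottom_up(td)
    return [0.5 * (a + b) for a, b in zip(td, bu)]

# a toy three-level feature pyramid: 8x8, 4x4, 2x2
feats = [np.ones((8, 8)), np.ones((4, 4)), np.ones((2, 2))]
fused = feats
for _ in range(2):  # multiple stages, each fed by the previous one
    fused = zigzag_stage(fused)
print([f.shape for f in fused])  # per-level spatial sizes are preserved
```

Each stage keeps the pyramid shapes intact, so stages can be stacked indefinitely; in the paper this stacking is what accumulates progressively richer multi-scale context.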
Pages: 7482-7491
Page count: 10