CDADNet: Context-guided dense attentional dilated network for crowd counting

Cited by: 1
Authors
Zhu, Aichun [1 ,2 ]
Duan, Guoxiu [1 ]
Zhu, Xiaomei [1 ]
Zhao, Lu [1 ]
Huang, Yaoying [1 ]
Hua, Gang [2 ]
Snoussi, Hichem [3 ]
Affiliations
[1] Nanjing Tech Univ, Sch Comp Sci & Technol, Nanjing, Peoples R China
[2] China Univ Min & Technol, Sch Informat & Control Engn, Xuzhou, Jiangsu, Peoples R China
[3] Univ Technol Troyes, ICD LM2S, Troyes, France
Funding
National Natural Science Foundation of China
Keywords
Crowd counting; Density map; Dense dilated; Attention
DOI
10.1016/j.image.2021.116379
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Subject Classification Codes
0808; 0809
Abstract
Crowd counting is a challenging task in computer vision owing to scale variations, perspective distortions, and complex backgrounds. Existing research usually adopts dilated convolution networks to enlarge the receptive fields and thereby handle scale variations. However, these methods easily bring background information into the large receptive fields, generating poor-quality density maps. To address this problem, we propose a novel backbone called the Context-guided Dense Attentional Dilated Network (CDADNet). CDADNet contains three components: an attentional module, a context-guided module, and a dense attentional dilated module. The attentional module provides attention maps that suppress background information, while the context-guided module extracts multi-scale contextual information. The dense attentional dilated module generates high-granularity density maps, and a cascaded strategy preserves information across changing scales. To verify the feasibility of our method, we compare it with existing approaches on five crowd counting datasets (ShanghaiTech Part_A and Part_B, WorldEXPO'10, UCSD, and UCF_CC_50). The comparison results demonstrate that CDADNet is effective and robust across various scenes.
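The three-module design described in the abstract can be summarized with a minimal PyTorch-style sketch. This is an illustrative reconstruction only: the class names, channel widths, kernel sizes, and dilation rates below are assumptions rather than the authors' actual configuration, and the wiring simply shows how an attention map, a multi-scale context branch, and a densely connected dilated regressor could be combined to produce a density map.

# Minimal sketch of the structure described in the abstract (assumed details).
import torch
import torch.nn as nn


class AttentionalModule(nn.Module):
    # Produces a spatial attention map intended to suppress background regions.
    def __init__(self, in_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, in_ch // 2, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(in_ch // 2, 1, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.conv(x)  # attention map in [0, 1], same spatial size as x


class ContextGuidedModule(nn.Module):
    # Extracts multi-scale contextual features via parallel dilated branches.
    def __init__(self, in_ch, out_ch, rates=(1, 2, 3)):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates]
        )
        self.fuse = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.fuse(torch.cat([b(x) for b in self.branches], dim=1))


class DenseAttentionalDilatedModule(nn.Module):
    # Cascade of dilated convolutions with dense connections; the attention map
    # re-weights the features before the density map is regressed.
    def __init__(self, in_ch, growth=64, rates=(1, 2, 4)):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for r in rates:
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, 3, padding=r, dilation=r),
                nn.ReLU(inplace=True)))
            ch += growth  # dense connectivity: previous outputs are concatenated
        self.regress = nn.Conv2d(ch, 1, 1)  # single-channel density map

    def forward(self, x, attn):
        feats = [x * attn]  # attention suppresses background responses
        for layer in self.layers:
            feats.append(layer(torch.cat(feats, dim=1)))
        return self.regress(torch.cat(feats, dim=1))


class CDADNetSketch(nn.Module):
    # Illustrative combination of the three modules on top of backbone features.
    def __init__(self, in_ch=512):
        super().__init__()
        self.context = ContextGuidedModule(in_ch, in_ch)
        self.attention = AttentionalModule(in_ch)
        self.density_head = DenseAttentionalDilatedModule(in_ch)

    def forward(self, backbone_features):
        ctx = self.context(backbone_features)
        attn = self.attention(backbone_features)
        return self.density_head(ctx, attn)  # density map of shape (B, 1, H, W)


if __name__ == "__main__":
    feats = torch.randn(1, 512, 48, 64)      # e.g. VGG-style backbone features
    density = CDADNetSketch()(feats)
    print(density.shape, density.sum().item())  # summing the map estimates the count

As in other density-map-based counters, the predicted crowd count is read out by summing the density map over all spatial positions.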
Pages: 8
Related Papers
50 records in total
  • [41] Wang, Yanjie; Hu, Shiyu; Wang, Guodong; Chen, Chenglizhao; Pan, Zhenkuan. Multi-scale dilated convolution of convolutional neural network for crowd counting. MULTIMEDIA TOOLS AND APPLICATIONS, 2020, 79 (1-2): 1057-1073.
  • [42] Hua, Yan; Yang, Lin; Yang, Yingyun. Multiple Frequency Inputs and Context-Guided Attention Network for Stereo Disparity Estimation. ELECTRONICS, 2022, 11 (12).
  • [43] Bie, Qiang; Su, Xiaojie. MCGFE-CR: Cloud Removal With Multiscale Context-Guided Feature Enhancement Network. IEEE ACCESS, 2024, 12: 181303-181315.
  • [44] Zeng, Xin; Wang, Huake; Guo, Qiang; Wu, Yunpeng. Correlation-attention guided regression network for efficient crowd counting. JOURNAL OF VISUAL COMMUNICATION AND IMAGE REPRESENTATION, 2024, 99.
  • [45] Cao, Zhijie; Shamsolmoali, Pourya; Yang, Jie. Synthetic guided domain adaptive and edge aware network for crowd counting. IMAGE AND VISION COMPUTING, 2020, 104.
  • [46] Haldiz, Cengizhan; Ismael, Sarmad F.; Celebi, Hasari; Aptoula, Erchan. Crowd Counting via Joint SASNet and a Guided Batch Normalization Network. 2023 31ST SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE, SIU, 2023.
  • [47] Yan, Ran; Gong, Shengrong; Zhong, Shan. Crowd counting via scale-adaptive convolutional neural network in extremely dense crowd images. INTERNATIONAL JOURNAL OF COMPUTER APPLICATIONS IN TECHNOLOGY, 2019, 61 (04): 318-324.
  • [48] Chen, Zhangping; Zhang, Shuo; Zheng, Xiaoqing; Zhao, Xiaodong; Kong, Yaguang. Crowd Counting Based on Multiscale Spatial Guided Perception Aggregation Network. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (12): 17465-17478.
  • [49] Huang, Liangjun; Zhu, Luning; Shen, Shihui; Zhang, Qing; Zhang, Jianwei. SRNet: Scale-Aware Representation Learning Network for Dense Crowd Counting. IEEE ACCESS, 2021, 9: 136032-136044.
  • [50] Wu, Weiqun; Sang, Jun; Alam, Mohammad S.; Xia, Xiaofeng; Tan, Jinghan. A Crowd Counting Method Based on Multi-column Dilated Convolutional Neural Network. PATTERN RECOGNITION AND TRACKING XXX, 2019, 10995.