Feature channel enhancement for crowd counting

Cited by: 4
Authors
Wu, Xingjiao [1 ,2 ]
Kong, Shuchen [3 ]
Zheng, Yingbin [3 ]
Ye, Hao [3 ]
Yang, Jing [2 ]
He, Liang [1 ,3 ]
Affiliations
[1] East China Normal Univ, Shanghai Key Lab Multidimens Informat Proc, Shanghai 200062, Peoples R China
[2] East China Normal Univ, Sch Comp Sci & Technol, Shanghai 200062, Peoples R China
[3] Videt Tech Ltd, Shanghai 201203, Peoples R China
Keywords
fuzzy set theory; feature extraction; feature channel enhancement; crowded visual space; crowd counting system; stable model; accurate robust model; feature channels; counting network; featured channel enhancement block; FCE; feature extraction unit; encoded channel information; positive characteristic channel; weak channel information; negative channel information;
DOI
10.1049/iet-ipr.2019.1308
CLC classification number
TP18 [Artificial Intelligence Theory];
Discipline classification code
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Crowd counting, i.e. counting the number of people in a crowded visual space, is emerging as an essential research problem in public security. A key issue in the design of a crowd counting system is to create a stable, accurate, and robust model, which requires processing the feature channels of the counting network. In this study, the authors present a feature channel enhancement (FCE) block for crowd counting. First, they use a feature extraction unit to obtain and encode the information of each channel. Then a non-linear variation unit processes the encoded channel information; finally, the data are normalised and affixed to each channel separately. With the use of the FCE, positive channel information can be enhanced while weak or negative channel information is suppressed. The authors successfully incorporate the FCE into two compact networks on standard benchmarks and show that the proposed FCE achieves promising results.
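The pipeline the abstract outlines (per-channel encoding, a non-linear variation unit, normalisation, then affixing a weight to each channel) resembles a squeeze-and-excitation-style channel gate. A minimal NumPy sketch follows; the global average pooling, sigmoid, and max-normalisation are illustrative stand-ins, since the paper's exact units are not specified here:

```python
import numpy as np

def fce_block(features):
    """Toy feature-channel-enhancement gate for a (C, H, W) feature map."""
    # Feature extraction unit: encode each channel with global average
    # pooling, collapsing (C, H, W) to a descriptor vector of shape (C,).
    descriptors = features.mean(axis=(1, 2))
    # Non-linear variation unit: a sigmoid maps descriptors into (0, 1),
    # so channels with strong positive responses get weights near 1.
    weights = 1.0 / (1.0 + np.exp(-descriptors))
    # Normalise so the strongest channel keeps weight 1.0.
    weights = weights / weights.max()
    # Affix one weight to each channel: enhanced channels pass through,
    # weak or negative channels are scaled down.
    return features * weights[:, None, None]
```

With a two-channel input where channel 0 responds positively and channel 1 negatively, the gate leaves channel 0 essentially intact while shrinking the magnitude of channel 1, matching the enhance-positive / suppress-negative behaviour the abstract claims for the FCE.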
Pages: 2376 - 2382
Number of pages: 7
Related papers
50 records in total
  • [21] Multiscale Feature Adaptive Integration for Crowd Counting in Highly Congested Scenes
    Gao, Hui
    Deng, Miaolei
    Zhao, Wenjun
    Zhang, Dexian
    Gong, Yuehong
    IEEE ACCESS, 2022, 10 : 47846 - 47853
  • [22] Efficient crowd counting model using feature pyramid network and ResNeXt
    Kalyani, G.
    Janakiramaiah, B.
    Prasad, L. V. Narasimha
    Karuna, A.
    Babu, A. Mohan
    SOFT COMPUTING, 2021, 25 (15) : 10497 - 10507
  • [23] Crowd counting using cross-adversarial loss and global feature
    Li, Shufang
    Hu, Zhengping
    Zhao, Mengyao
    Sun, Zhe
    JOURNAL OF ELECTRONIC IMAGING, 2020, 29 (05)
  • [24] Crowd Counting via Unsupervised Cross-Domain Feature Adaptation
    Ding, Guanchen
    Yang, Daiqin
    Wang, Tao
    Wang, Sihan
    Zhang, Yunfei
    IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25 : 4665 - 4678
  • [25] A LIGHTWEIGHT FEATURE FUSION ARCHITECTURE FOR RESOURCE-CONSTRAINED CROWD COUNTING
    Chaudhuri, Yashwardhan
    Kumar, Ankit
    Phukan, Orchid Chetia
    Buduru, Arun Balaji
    arXiv, 2024,
  • [26] Crowd counting by feature-level fusion of appearance and fluid force
    Ma, Dingxin
    Zhang, Xuguang
    Yu, Hui
    2020 11TH INTERNATIONAL CONFERENCE ON AWARENESS SCIENCE AND TECHNOLOGY (ICAST), 2020,
  • [27] High-density crowd counting method based on SURF feature
    Liang, Ronghua
    Liu, Xiangdong
    Ma, Xiangyin
    Wang, Ziren
    Song, Mingli
    Liang, R. (rhliang@zjut.edu.cn), 1600, Institute of Computing Technology (24): 1568 - 1575
  • [28] Counting with the Crowd
    Marcus, Adam
    Karger, David
    Madden, Samuel
    Miller, Robert
    Oh, Sewoong
    PROCEEDINGS OF THE VLDB ENDOWMENT, 2012, 6 (02): 109 - 120
  • [29] Double multi-scale feature fusion network for crowd counting
    Liu, Qian
    Fang, Jiongtao
    Zhong, Yixiong
    Wang, Cunbao
    Qi, Youwei
    MULTIMEDIA TOOLS AND APPLICATIONS, 2024, 83 (34) : 81831 - 81855
  • [30] Explicit Invariant Feature Induced Cross-Domain Crowd Counting
    Cai, Yiqing
    Chen, Lianggangxu
    Guan, Haoyue
    Lin, Shaohui
    Lu, Changhong
    Wang, Changbo
    He, Gaoqi
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 1, 2023, : 259 - 267