Optimized cross-module attention network and medium-scale dataset for effective fire detection

Cited by: 1
Authors
Khan, Zulfiqar Ahmad [1 ]
Ullah, Fath U. Min [2 ]
Yar, Hikmat [1 ]
Ullah, Waseem [3 ]
Khan, Noman [4 ]
Kim, Min Je [1 ]
Baik, Sung Wook [1 ]
Affiliations
[1] Sejong Univ, Seoul 143747, South Korea
[2] Univ Cent Lancashire, Sch Engn & Comp, Dept Comp, Preston, England
[3] Mohamed bin Zayed Univ Artificial Intelligence, Masdar City, Abu Dhabi, U Arab Emirates
[4] Yonsei Univ, Seoul, South Korea
Keywords
Fire detection; Channel attention; Multi-scale feature selection; Image classification and detection; Convolutional neural networks; Flame detection; Surveillance; Color
DOI
10.1016/j.patcog.2024.111273
Chinese Library Classification
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
For over a decade, computer vision research has shown keen interest in vision-based fire detection due to its wide range of applications. Fire detection relies primarily on color features, which have inspired recent deep models to achieve reasonable performance. However, striking a good balance between a high fire detection rate and low computational complexity on mainstream surveillance setups remains challenging. To establish a better tradeoff between model complexity and fire detection rate, this article develops an efficient and effective Cross Module Attention Network (CANet) for fire detection. CANet is developed from scratch with squeezing and expansive paths to focus on fire regions and their locations. Next, the channel attention and Multi-Scale Feature Selection (MSFS) modules are integrated to identify the most important channels, selectively emphasize the contributions of feature maps, and enhance the ability to discriminate between fire and non-fire objects. Furthermore, CANet is optimized by removing a significant number of parameters for real-world applications. Finally, we introduce a challenging database for fire classification comprising multiple classes and highly similar fire and non-fire object images. CANet improved accuracy by 2.5% for the BWF, 2.2% for the DQFF, 1.42% for the LSFD, 1.8% for the DSFD, and 1.14% for the FG datasets. Additionally, CANet achieved 3.6 times higher FPS on resource-constrained devices compared to baseline methods.
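The abstract does not give implementation details of CANet's channel attention module. As a rough illustration only, a squeeze-and-excitation style block (one common form of channel attention, not necessarily CANet's exact design; all weight shapes here are hypothetical) can be sketched in plain Python:

```python
import math

def channel_attention(feature_maps, w1, w2):
    """Hedged sketch of a squeeze-and-excitation style channel attention.

    feature_maps: list of C channels, each an H x W grid (list of lists).
    w1: weights of a small FC layer (list of hidden units, each a length-C vector).
    w2: weights of a second FC layer (list of C output units, each a
        length-len(w1) vector). Both are illustrative placeholders.
    """
    # Squeeze: global average pool each channel (C x H x W -> C descriptors)
    descriptors = [sum(sum(row) for row in ch) / (len(ch) * len(ch[0]))
                   for ch in feature_maps]
    # Excitation: bottleneck FC with ReLU, then per-channel sigmoid scores
    hidden = [max(0.0, sum(d * w for d, w in zip(descriptors, weights)))
              for weights in w1]
    scores = [1.0 / (1.0 + math.exp(-sum(h * w for h, w in zip(hidden, weights))))
              for weights in w2]
    # Rescale: weight each channel by its attention score
    return [[[v * s for v in row] for row in ch]
            for ch, s in zip(feature_maps, scores)]
```

The squeeze step summarizes each channel by one number, and the two small FC layers learn which channels matter; multiplying the scores back in suppresses uninformative channels, which is the general idea behind emphasizing "the most important channels" in the abstract.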
Pages: 12
Related Papers
35 records in total
  • [31] Controllably Deep Supervision and Multi-Scale Feature Fusion Network for Cloud and Snow Detection Based on Medium- and High-Resolution Imagery Dataset
    Zhang, Guangbin
    Gao, Xianjun
    Yang, Yuanwei
    Wang, Mingwei
    Ran, Shuhao
    REMOTE SENSING, 2021, 13 (23)
  • [32] CASF-MNet: multi-scale network with cross attention mechanism and spatial dimension feature fusion for maize leaf disease detection
    Sun, Lixiang
    He, Jie
    Zhang, Lingtao
    CROP PROTECTION, 2024, 180
  • [33] Multi-scale context feature and cross-attention network-enabled system and software-based for pavement crack detection
    Wen, Xin
    Li, Shuo
    Yu, Hao
    He, Yu
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 127
  • [34] Multi-detector head target detection network with three-stage cross-level feature fusion: effective detection of multi-scale objects
    Zhao, Yuhui
    Yang, Ruifeng
    Guo, Chenxia
    Chen, Xiaole
    JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (03)
  • [35] ISPANet: A Pyramid Self-Attention Network for Single-Frame High-Resolution Infrared Small Target Detection With a Large-Scale Dataset SHR-IRST
    Wang, Wenjing
    Xiao, Chengwang
    Dou, Haofeng
    Liang, Ruixiang
    Yuan, Huaibin
    Zhao, Guanghui
    Chen, Zhiwei
    Huang, Yuhang
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2024, 17 : 11146 - 11162