Optimized cross-module attention network and medium-scale dataset for effective fire detection
Cited by: 1
Authors:
Khan, Zulfiqar Ahmad [1]
Ullah, Fath U. Min [2]
Yar, Hikmat [1]
Ullah, Waseem [3]
Khan, Noman [4]
Kim, Min Je [1]
Baik, Sung Wook [1]
Affiliations:
[1] Sejong Univ, Seoul 143747, South Korea
[2] Univ Cent Lancashire, Sch Engn & Comp, Dept Comp, Preston, England
[3] Mohamed bin Zayed Univ Artificial Intelligence, Masdar City, Abu Dhabi, U Arab Emirates
[4] Yonsei Univ, Seoul, South Korea
Keywords:
Fire detection;
Channel attention;
Multi-scale feature selection;
Image classification and detection;
CONVOLUTIONAL NEURAL-NETWORKS;
FLAME DETECTION;
SURVEILLANCE;
COLOR;
DOI:
10.1016/j.patcog.2024.111273
Chinese Library Classification:
TP18 [Theory of Artificial Intelligence]
Discipline codes:
081104; 0812; 0835; 1405
Abstract:
Over the past decade, computer vision researchers have shown keen interest in vision-based fire detection due to its wide range of applications. Fire detection primarily relies on color features, which have inspired recent deep models to achieve reasonable performance. However, striking a balance between a high fire detection rate and low computational complexity on mainstream surveillance setups remains challenging. To establish a better tradeoff between model complexity and fire detection rate, this article develops an efficient and effective Cross-Module Attention Network (CANet) for fire detection. CANet is built from scratch with squeezing and expansive paths to focus on fire regions and their locations. Next, channel attention and Multi-Scale Feature Selection (MSFS) modules are integrated to highlight the most important channels, selectively emphasize the contributions of feature maps, and enhance the discrimination of fire and non-fire objects. Furthermore, CANet is optimized for real-world applications by removing a significant number of parameters. Finally, we introduce a challenging database for fire classification comprising multiple classes and highly similar fire and non-fire object images. CANet improved accuracy by 2.5 % on BWF, 2.2 % on DQFF, 1.42 % on LSFD, 1.8 % on DSFD, and 1.14 % on FG. Additionally, CANet achieved 3.6 times higher FPS on resource-constrained devices compared to baseline methods.
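The abstract names two architectural components, channel attention and Multi-Scale Feature Selection (MSFS), without giving layer-level definitions. Below is a minimal PyTorch sketch of how such modules are commonly built: squeeze-and-excitation style channel attention, and parallel multi-kernel branches fused under that attention. The class names, reduction ratio, and kernel sizes here are illustrative assumptions, not the paper's actual CANet design.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (generic sketch, not
    the paper's exact module): global average pooling summarizes each
    channel, a bottleneck MLP produces per-channel weights in (0, 1), and
    the input is rescaled channel-wise."""
    def __init__(self, channels: int, reduction: int = 8):  # reduction is an assumption
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: B x C x H x W -> B x C x 1 x 1
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # emphasize informative channels, suppress the rest

class MultiScaleFeatureSelection(nn.Module):
    """Hypothetical MSFS sketch: parallel convolutions at several receptive
    fields, concatenated and reweighted by channel attention before a 1x1
    fusion. Kernel sizes (1, 3, 5) are assumed for illustration."""
    def __init__(self, channels: int):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, k, padding=k // 2) for k in (1, 3, 5)]
        )
        self.attn = ChannelAttention(channels * 3)
        self.fuse = nn.Conv2d(channels * 3, channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        multi = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.fuse(self.attn(multi))

# Usage: spatial size is preserved, so the block drops into an encoder stage.
msfs = MultiScaleFeatureSelection(channels=64)
y = msfs(torch.randn(2, 64, 32, 32))  # -> torch.Size([2, 64, 32, 32])
```

Letting the attention operate on the concatenated branches, rather than on each branch separately, is one common way to make the network select among scales per channel, which matches the abstract's description of selectively emphasizing feature-map contributions.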
Pages: 12