Abnormal event detection algorithm based on dual attention future frame prediction and gap fusion discrimination

Cited by: 3
Authors
Wang, Dongliang [1 ,2 ]
Wang, Suyu [1 ,2 ]
Affiliations
[1] Beijing Univ Technol, Fac Informat Technol, Beijing, Peoples R China
[2] Beijing Engn Res Ctr IoT Software & Syst, Beijing, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
abnormal event detection; generative adversarial network; gap fusion; deep learning;
DOI
10.1117/1.JEI.30.2.023009
CLC classification
TM [Electrical technology]; TN [Electronic technology, communication technology]
Discipline codes
0808; 0809
Abstract
Rapid and accurate detection of crowd abnormal events, such as stampedes and violent attacks in public places, has great research significance and application value. Owing to the diversity and uncertainty of abnormal events, almost all existing methods tackle the problem by minimizing the reconstruction errors of training data, which cannot guarantee a larger reconstruction error for all abnormal events. Following the idea that "normal events can be predicted, abnormal events cannot," we propose a future frame prediction-based anomaly detection algorithm. First, a generative adversarial network (GAN) is trained on normal videos to predict normal future frames. Abnormal events can then be detected from the difference between the ground-truth and predicted frames. In the design of the GAN, a dual attention module is introduced to improve the prediction quality of the network. At the same time, optical flow information is added as a motion constraint, complementing the constraint on appearance characteristics. In the testing stage, the appearance gap and the optical flow gap between the ground-truth and predicted frames are fused to determine whether a frame is abnormal. Experimental results on the CUHK Avenue, UCSD, and ShanghaiTech datasets show that the proposed algorithm outperforms current mainstream anomaly detection algorithms. (C) 2021 SPIE and IS&T
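The gap-fusion scoring described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the use of PSNR as the appearance gap, per-video min-max normalization, and the fusion weight `w` are all assumptions modeled on common practice in frame-prediction anomaly detection; the actual GAN, attention modules, and optical flow estimator are omitted.

```python
import numpy as np

def psnr(pred, gt, max_val=1.0):
    """Peak signal-to-noise ratio; a lower PSNR means a larger appearance gap."""
    mse = np.mean((pred - gt) ** 2)
    return 10.0 * np.log10(max_val ** 2 / (mse + 1e-12))

def min_max(x):
    """Normalize per-video scores to [0, 1]."""
    x = np.asarray(x, dtype=float)
    return (x - x.min()) / (x.max() - x.min() + 1e-12)

def fused_anomaly_scores(pred_frames, gt_frames, pred_flows, gt_flows, w=0.5):
    """Fuse normalized appearance and optical-flow gaps into per-frame scores.

    A higher score means the frame deviates more from its prediction and is
    therefore more likely abnormal. The weight `w` is an illustrative choice.
    """
    # Negate PSNR so that a larger prediction gap yields a larger score.
    app_gap = [-psnr(p, g) for p, g in zip(pred_frames, gt_frames)]
    # Mean absolute difference between predicted and ground-truth flow fields.
    flow_gap = [np.mean(np.abs(pf - gf)) for pf, gf in zip(pred_flows, gt_flows)]
    return w * min_max(app_gap) + (1 - w) * min_max(flow_gap)
```

With well-predicted frames and one poorly predicted frame, the fused score peaks at the poorly predicted (simulated abnormal) frame.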
Pages: 16
Related papers
50 records in total
  • [21] Research on Traffic Acoustic Event Detection Algorithm Based on Model Fusion
    Zhang, Xiaodan
    Li, Ming
    Huang, Chengwei
    ENGINEERING LETTERS, 2021, 29 (03) : 1078 - 1082
  • [22] Detection Algorithm of Crowd Abnormal Event Based on Girvan-Newman Splitting
    Li Wentao
    Fu Han
    Hao Zhen
    Ten Yan
    Yan Lin
    Zhao Peiran
    Zhang Xuewu
    LASER & OPTOELECTRONICS PROGRESS, 2020, 57 (06)
  • [23] Abnormal Sound Event Detection Method Based on Time-Spectrum Information Fusion
    Yu, Changgeng
    He, Chaowen
    Lin, Dashi
    OPTICAL MEMORY AND NEURAL NETWORKS (INFORMATION OPTICS), 2024, 33 (04) : 411 - 421
  • [24] Pedestrian abnormal event detection based on multi-feature fusion in traffic video
    Wang, Xuan
    Song, Huansheng
    Cui, Hua
    OPTIK, 2018, 154 : 22 - 32
  • [25] Fusion-Based Feature Attention Gate Component for Vehicle Detection Based on Event Camera
    Cao, Hu
    Chen, Guang
    Xia, Jiahao
    Zhuang, Genghang
    Knoll, Alois
    IEEE SENSORS JOURNAL, 2021, 21 (21) : 24540 - 24548
  • [26] Video Prediction and Anomaly Detection Algorithm Based On Dual Discriminator
    Fan, Sinuo
    Meng, Fanjie
    2020 5TH INTERNATIONAL CONFERENCE ON COMPUTATIONAL INTELLIGENCE AND APPLICATIONS (ICCIA 2020), 2020, : 123 - 127
  • [27] An Abnormal External Link Detection Algorithm Based on Multi-Modal Fusion
    Wu, Zhiqiang
    INTERNATIONAL JOURNAL OF INFORMATION SECURITY AND PRIVACY, 2024, 18 (01)
  • [29] Future frame prediction based on generative assistant discriminative network for anomaly detection
    Li, Chaobo
    Li, Hongjun
    Zhang, Guoan
    APPLIED INTELLIGENCE, 2023, 53 (01) : 542 - 559
  • [30] Generative Adversarial Networks for Abnormal Event Detection in Videos Based on Self-Attention Mechanism
    Zhang, Weichao
    Wang, Guanjun
    Huang, Mengxing
    Wang, Hongyu
    Wen, Shaoping
    IEEE ACCESS, 2021, 9 : 124847 - 124860