Incremental Audio-Visual Fusion for Person Recognition in Earthquake Scene

Cited by: 1
Authors
You, Sisi [1]
Zuo, Yukun [2]
Yao, Hantao [3,4]
Xu, Changsheng [3,4]
Affiliations
[1] Nanjing Univ Posts & Telecommun, Sch Commun & Informat Engn, Nanjing 210003, Jiangsu, Peoples R China
[2] Univ Sci & Technol China, Dept Elect Engn & Informat Sci, 96 JinZhai Rd, Hefei 230026, Anhui, Peoples R China
[3] Chinese Acad Sci CASIA, Inst Automat, State Key Lab Multimodal Artificial Intelligence, Beijing 100190, Peoples R China
[4] Univ Chinese Acad Sci UCAS, Sch Artificial Intelligence, Beijing 101408, Peoples R China
Funding
Beijing Natural Science Foundation; National Natural Science Foundation of China
Keywords
Cross-modal audio-visual fusion; incremental learning; person recognition; elastic weight consolidation; feature replay;
DOI
10.1145/3614434
CLC number
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Earthquakes have a profound impact on society and property, damaging buildings and infrastructure. Effective earthquake rescue requires rapid and accurate determination of whether survivors are trapped in the rubble of collapsed buildings. While deep learning algorithms can speed up rescue operations using single-modal data (either visual or audio), they face two primary challenges: insufficient information provided by single-modal data and catastrophic forgetting. In particular, the complexity of earthquake scenes means that single-modal features may not provide adequate information. Moreover, catastrophic forgetting occurs when the model loses the information learned in a previous task after training on subsequent tasks, owing to the non-stationary data distributions of changing earthquake scenes. To address these challenges, we propose an incremental audio-visual fusion model for person recognition in earthquake rescue scenarios. First, we leverage a cross-modal hybrid attention network to capture discriminative temporal context embeddings, using self-attention and cross-modal attention mechanisms to combine multi-modality information and enhance the accuracy and reliability of person recognition. Second, an incremental learning model is proposed to overcome catastrophic forgetting, comprising elastic weight consolidation and feature replay modules. Specifically, the elastic weight consolidation module slows down learning on certain weights according to their importance to previously learned tasks. The feature replay module reviews the learned knowledge by reusing features preserved from the previous task, thus preventing catastrophic forgetting in dynamic environments. To validate the proposed algorithm, we collected the Audio-Visual Earthquake Person Recognition (AVEPR) dataset from earthquake films and real scenes. The proposed method achieves 85.41% accuracy while learning the 10th new task, which demonstrates its effectiveness and highlights its potential to significantly improve earthquake rescue efforts.
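The abstract describes the cross-modal hybrid attention network only at a high level. As a rough illustration of how self-attention followed by cross-modal attention can fuse audio and visual temporal features, here is a minimal PyTorch sketch; the layer dimensions, mean pooling, and two-class head are assumptions for illustration, not the authors' architecture.

```python
import torch
import torch.nn as nn

class CrossModalFusion(nn.Module):
    """Illustrative self-attention + cross-modal attention fusion of audio/visual sequences."""

    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.audio_self = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.visual_self = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.audio_to_visual = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.visual_to_audio = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.classifier = nn.Linear(2 * dim, 2)  # assumed head: person present / absent

    def forward(self, audio: torch.Tensor, visual: torch.Tensor) -> torch.Tensor:
        # audio, visual: (batch, time, dim) temporal feature sequences.
        a, _ = self.audio_self(audio, audio, audio)      # intra-modal (self-attention) context
        v, _ = self.visual_self(visual, visual, visual)
        a2v, _ = self.audio_to_visual(v, a, a)           # visual queries attend to audio keys/values
        v2a, _ = self.visual_to_audio(a, v, v)           # audio queries attend to visual keys/values
        fused = torch.cat([a2v.mean(dim=1), v2a.mean(dim=1)], dim=-1)
        return self.classifier(fused)

# Example: logits for a batch of 8 clips with 20 time steps per modality.
logits = CrossModalFusion()(torch.randn(8, 20, 256), torch.randn(8, 20, 256))
```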
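The elastic weight consolidation module is described only as slowing down learning on weights according to their importance to earlier tasks. Below is a minimal sketch of the standard EWC quadratic penalty (Kirkpatrick et al., 2017), assuming a diagonal Fisher estimate from old-task gradients; the class name, `lambda_ewc`, and the data-loader interface are illustrative assumptions rather than the paper's implementation.

```python
import torch
import torch.nn as nn

class EWCRegularizer:
    """Quadratic EWC penalty: anchor weights that were important for the previous task."""

    def __init__(self, model: nn.Module, old_loader, loss_fn, lambda_ewc: float = 100.0):
        self.lambda_ewc = lambda_ewc
        # Snapshot of the parameters learned on the previous task.
        self.old_params = {n: p.detach().clone()
                           for n, p in model.named_parameters() if p.requires_grad}
        # Diagonal Fisher information approximated by squared gradients on old-task data.
        self.fisher = {n: torch.zeros_like(p) for n, p in self.old_params.items()}
        for inputs, targets in old_loader:
            model.zero_grad()
            loss_fn(model(inputs), targets).backward()
            for n, p in model.named_parameters():
                if p.grad is not None and n in self.fisher:
                    self.fisher[n] += p.grad.detach() ** 2
        for n in self.fisher:
            self.fisher[n] /= max(len(old_loader), 1)

    def penalty(self, model: nn.Module):
        # (lambda / 2) * sum_i F_i * (theta_i - theta_old_i)^2
        loss = 0.0
        for n, p in model.named_parameters():
            if n in self.fisher:
                loss = loss + (self.fisher[n] * (p - self.old_params[n]) ** 2).sum()
        return 0.5 * self.lambda_ewc * loss

# During training on a new task, the penalty is simply added to the task loss:
#   total_loss = task_loss + ewc.penalty(model)
#   total_loss.backward()
```

The feature replay module described in the abstract would additionally replay features stored from the previous task alongside such a penalty; that component is not sketched here.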
Pages: 19
Related Papers
50 items in total
  • [1] Scene recognition with audio-visual sensor fusion
    Devicharan, D
    Mehrotra, KG
    Mohan, CK
    Varshney, PK
    Zuo, L
    [J]. Multisensor, Multisource Information Fusion: Architectures, Algorithms and Applications 2005, 2005, 5813 : 201 - 210
  • [2] Dynamic Audio-Visual Biometric Fusion for Person Recognition
    Alsaedi, Najlaa Hindi
    Jaha, Emad Sami
    [J]. CMC-COMPUTERS MATERIALS & CONTINUA, 2022, 71 (01): 1283 - 1311
  • [3] Audio-Visual Sensor Fusion Framework Using Person Attributes Robust to Missing Visual Modality for Person Recognition
    John, Vijay
    Kawanishi, Yasutomo
    [J]. MULTIMEDIA MODELING, MMM 2023, PT II, 2023, 13834 : 523 - 535
  • [4] Detection of documentary scene changes by audio-visual fusion
    Velivelli, A
    Ngo, CW
    Huang, TS
    [J]. IMAGE AND VIDEO RETRIEVAL, PROCEEDINGS, 2003, 2728 : 227 - 237
  • [5] Multifactor fusion for audio-visual speaker recognition
    Chetty, Girija
    Tran, Dat
    [J]. LECTURE NOTES IN SIGNAL SCIENCE, INTERNET AND EDUCATION (SSIP'07/MIV'07/DIWEB'07), 2007: 70 - +
  • [6] Bimodal fusion in audio-visual speech recognition
    Zhang, XZ
    Mersereau, RM
    Clements, M
    [J]. 2002 INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOL I, PROCEEDINGS, 2002, : 964 - 967
  • [7] A Deep Neural Network for Audio-Visual Person Recognition
    Alam, Mohammad Rafiqul
    Bennamoun, Mohammed
    Togneri, Roberto
    Sohel, Ferdous
    [J]. 2015 IEEE 7TH INTERNATIONAL CONFERENCE ON BIOMETRICS THEORY, APPLICATIONS AND SYSTEMS (BTAS 2015), 2015,
  • [8] Multi-Feature Audio-Visual Person Recognition
    Das, Amitav
    Manyam, Ohil K.
    Tapaswi, Makarand
    [J]. 2008 IEEE WORKSHOP ON MACHINE LEARNING FOR SIGNAL PROCESSING, 2008, : 227 - 232
  • [9] Audio-visual person recognition based on rank level fusion and Gaussian Mixture Model
    College of Electrical and Information Engineering, Hunan Institute of Engineering, Xiangtan, China
    [J]. Int. J. Control Autom., 4: 313 - 332
  • [10] EXPANDING AUDIO-VISUAL SCENE
    RICHMOND, JW
    [J]. AMERICAN JOURNAL OF ORTHODONTICS, 1965, 51 (04): 298 - &