Visual-to-EEG cross-modal knowledge distillation for continuous emotion recognition

Cited: 20
Authors
Zhang, Su [1 ]
Tang, Chuangao [2 ]
Guan, Cuntai [1 ]
Affiliations
[1] Nanyang Technol Univ, Sch Comp Sci & Engn, Singapore 639798, Singapore
[2] Southeast Univ, Sch Biol Sci & Med Engn, Key Lab Child Dev & Learning Sci, Minist Educ, Nanjing 210096, Peoples R China
Keywords
Continuous emotion recognition; Knowledge distillation; Cross-modality; BRAIN;
DOI
10.1016/j.patcog.2022.108833
CLC number: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
The visual modality is among the most dominant modalities for current continuous emotion recognition methods, whereas the EEG modality is comparatively less reliable owing to intrinsic limitations such as subject bias and low spatial resolution. This work attempts to improve the continuous predictions of the EEG modality by using dark knowledge from the visual modality. The teacher model is a cascaded convolutional neural network - temporal convolutional network (CNN-TCN) architecture, and the student model is a TCN; they are fed video frames and EEG average band-power features, respectively. Two data-partitioning schemes are employed: trial-level random shuffling (TRS) and leave-one-subject-out (LOSO). The standalone teacher and student each produce continuous predictions superior to the baseline method, and visual-to-EEG cross-modal KD further improves the predictions with statistical significance (p-value < 0.01 for TRS and p-value < 0.05 for LOSO partitioning). Saliency maps of the trained student model show that the activity associated with the active valence state is not confined to a precise brain area; instead, it arises from synchronized activity among various brain areas. Moreover, the fast beta (18-30 Hz) and gamma (30-45 Hz) waves contribute the most to the human emotion process compared with other bands. The code is available at https://github.com/sucv/Visual_to_EEG_Cross_Modal_KD_for_CER. (C) 2022 The Authors. Published by Elsevier Ltd.
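The distillation objective the abstract describes, training the EEG student both on the ground-truth continuous labels and on the visual teacher's continuous predictions, can be sketched as a weighted two-term regression loss. This is a minimal illustrative sketch, not the paper's exact formulation: the weighting factor `alpha` and the use of plain MSE for both terms are assumptions.

```python
import numpy as np

def kd_regression_loss(student_pred, teacher_pred, labels, alpha=0.5):
    """Cross-modal KD loss for continuous emotion regression (sketch).

    student_pred : per-frame predictions of the EEG student (TCN)
    teacher_pred : per-frame predictions of the visual teacher (CNN-TCN)
    labels       : ground-truth continuous annotations (e.g., valence)
    alpha        : weight of the distillation term vs. the supervised term
    """
    supervised = np.mean((student_pred - labels) ** 2)     # fit the annotations
    distill = np.mean((student_pred - teacher_pred) ** 2)  # mimic the teacher
    return (1.0 - alpha) * supervised + alpha * distill
```

With `alpha = 0` the student trains purely on labels; with `alpha = 1` it only imitates the teacher; intermediate values blend the two signals, which is where the cross-modal "dark knowledge" transfer takes effect.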
Pages: 11
Related papers (50 total)
  • [1] Visual-to-EEG cross-modal knowledge distillation for continuous emotion recognition
    Zhang, Su
    Tang, Chuangao
    Guan, Cuntai
    [J]. PATTERN RECOGNITION, 2022, 130
  • [2] Cross-modal knowledge distillation for continuous sign language recognition
    Gao, Liqing
    Shi, Peng
    Hu, Lianyu
    Feng, Jichao
    Zhu, Lei
    Wan, Liang
    Feng, Wei
    [J]. NEURAL NETWORKS, 2024, 179
  • [3] DistilVPR: Cross-Modal Knowledge Distillation for Visual Place Recognition
    Wang, Sijie
    She, Rui
    Kang, Qiyu
    Jian, Xingchao
    Zhao, Kai
    Song, Yang
    Tay, Wee Peng
    [J]. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 9, 2024, : 10377 - 10385
  • [4] FedCMD: A Federated Cross-modal Knowledge Distillation for Drivers' Emotion Recognition
    Bano, Saira
    Tonellotto, Nicola
    Cassara, Pietro
    Gotta, Alberto
    [J]. ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2024, 15 (03)
  • [5] CROSS-MODAL KNOWLEDGE DISTILLATION FOR ACTION RECOGNITION
    Thoker, Fida Mohammad
    Gall, Juergen
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 6 - 10
  • [6] EmotionKD: A Cross-Modal Knowledge Distillation Framework for Emotion Recognition Based on Physiological Signals
    Liu, Yucheng
    Jia, Ziyu
    Wang, Haichao
    [J]. PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 6122 - 6131
  • [7] Progressive Cross-modal Knowledge Distillation for Human Action Recognition
    Ni, Jianyuan
    Ngu, Anne H. H.
    Yan, Yan
    [J]. PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 5903 - 5912
  • [8] Visual context learning based on cross-modal knowledge for continuous sign language recognition
    Liu, Kailin
    Hou, Yonghong
    Guo, Zihui
    Yin, Wenjie
    Ren, Yi
    [J]. VISUAL COMPUTER, 2024,
  • [9] Electroglottograph-Based Speech Emotion Recognition via Cross-Modal Distillation
    Chen, Lijiang
    Ren, Jie
    Mao, Xia
    Zhao, Qi
    [J]. APPLIED SCIENCES-BASEL, 2022, 12 (09)
  • [10] Cross-Modal Knowledge Distillation for Depth Privileged Monocular Visual Odometry
    Li, Bin
    Wang, Shuling
    Ye, Haifeng
    Gong, Xiaojin
    Xiang, Zhiyu
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2022, 7 (03) : 6171 - 6178