Cockpit Facial Expression Recognition Model Based on Attention Fusion and Feature Enhancement Network

Cited by: 0
Authors
Luo, Yutao [1 ,2 ]
Guo, Fengrui [1 ,2 ]
Affiliations
[1] School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou 510640, China
[2] Guangdong Provincial Key Laboratory of Automotive Engineering, Guangzhou 510640, China
Keywords
Emotion recognition; Image segmentation; Transfer learning
DOI
10.19562/j.chinasae.qcgc.2024.09.017
Abstract
To address the difficulty of balancing accuracy and real-time performance in deep learning models for intelligent cockpit driver expression recognition, an expression recognition model called EmotionNet, based on an attention fusion and feature enhancement network, is proposed. Built on GhostNet, the model uses two detection branches within the feature extraction module to fuse coordinate attention and channel attention mechanisms, so that the two mechanisms complement each other and important features receive all-round attention. A feature-enhanced neck network is established to fuse feature information at different scales, and decision-level fusion of the multi-scale feature information is then performed by the head network. During training, transfer learning and a center loss function are introduced to improve recognition accuracy. In embedded-device experiments on the RAF-DB and KMU-FED datasets, the model achieves recognition accuracies of 85.23% and 99.95%, respectively, at a recognition speed of 59.89 FPS. EmotionNet thus balances recognition accuracy and real-time performance at a relatively advanced level and has practical applicability for intelligent cockpit expression recognition tasks. © 2024 SAE-China. All rights reserved.
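The abstract names two techniques without giving their formulas: a center loss term during training and decision-level fusion of the branch outputs at the head. The sketch below shows both in their generic textbook form in pure Python; the function names, the uniform fusion weights, and the toy dimensions are illustrative assumptions, not EmotionNet's actual implementation.

```python
def center_loss(features, labels, centers):
    """Center loss: half the mean squared distance between each feature
    vector and the learned center of its class. Used as a regularizer
    alongside the classification loss to pull same-class features together.
    """
    total = 0.0
    for x, y in zip(features, labels):
        c = centers[y]
        total += sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return 0.5 * total / len(features)


def decision_level_fusion(branch_scores, weights=None):
    """Fuse per-class score vectors from several detection branches by a
    (weighted) average -- one simple form of decision-level fusion.
    branch_scores: list of per-branch score lists, all the same length.
    """
    n = len(branch_scores)
    if weights is None:
        weights = [1.0 / n] * n  # assume equal trust in every branch
    num_classes = len(branch_scores[0])
    return [sum(w * s[k] for w, s in zip(weights, branch_scores))
            for k in range(num_classes)]
```

For example, two branches scoring a two-class problem as `[0.2, 0.8]` and `[0.4, 0.6]` fuse to `[0.3, 0.7]` under equal weights; the final predicted class is the argmax of the fused vector.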
Pages: 1697-1706
Related papers (50 total)
  • [31] A novel facial expression recognition model based on harnessing complementary features in multi-scale network with attention fusion
    Ghadai, Chakrapani
    Patra, Dipti
    Okade, Manish
    IMAGE AND VISION COMPUTING, 2024, 149
  • [32] Facial Expression Recognition in-the-Wild Using Blended Feature Attention Network
    Karnati, Mohan
    Seal, Ayan
    Jaworek-Korjakowska, Joanna
    Krejcar, Ondrej
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2023, 72
  • [33] A Dual-Direction Attention Mixed Feature Network for Facial Expression Recognition
    Zhang, Saining
    Zhang, Yuhang
    Zhang, Ye
    Wang, Yufei
    Song, Zhigang
    ELECTRONICS, 2023, 12 (17)
  • [34] Facial Expression Recognition Based on Region Enhanced Attention Network
    Gongguan C.
    Fan Z.
    Hua W.
    Hui F.
    Caiming Z.
    Jisuanji Fuzhu Sheji Yu Tuxingxue Xuebao/Journal of Computer-Aided Design and Computer Graphics, 2024, 36 (01): : 152 - 160
  • [35] Multimodal Attention Dynamic Fusion Network for Facial Micro-Expression Recognition
    Yang, Hongling
    Xie, Lun
    Pan, Hang
    Li, Chiqin
    Wang, Zhiliang
    Zhong, Jialiang
    ENTROPY, 2023, 25 (09)
  • [36] BFFN: A novel balanced feature fusion network for fair facial expression recognition
    Li, Hao
    Luo, Yiqin
    Gu, Tianlong
    Chang, Liang
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 138
  • [37] Optimization of facial expression recognition based on dual attention mechanism by lightweight network model
    Fang, Jian
    Lin, Xiaomei
    Wu, Yue
    An, Yi
    Sun, Haoran
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2023, 45 (05) : 9069 - 9081
  • [38] Facial Expression Recognition Algorithm Based on CNN and LBP Feature Fusion
    Yang, Xinli
    Li, Ming
    Zhao, ShiLin
PROCEEDINGS OF 2017 INTERNATIONAL CONFERENCE ON ROBOTICS AND ARTIFICIAL INTELLIGENCE (ICRAI 2017), 2017, : 33 - 38
  • [39] Facial expression recognition based on fusion feature of PCA and LBP with SVM
    Luo, Yuan
    Wu, Cai-ming
    Zhang, Yi
    OPTIK, 2013, 124 (17): : 2767 - 2770
  • [40] Application of facial expression recognition technology based on feature fusion in teaching
    Deng X.
    Hu Y.
    Yang Y.
    Journal of Intelligent and Fuzzy Systems, 2024, 46 (04): : 7739 - 7750