Offset or Onset Frame: A Multi-Stream Convolutional Neural Network with CapsuleNet Module for Micro-expression Recognition

Cited by: 10
Authors
Liu, Nian [1 ]
Liu, Xinyu [1 ]
Zhang, Zhihao [1 ]
Xu, Xueming [1 ]
Chen, Tong [1 ]
Affiliation
[1] Southwest University, Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, School of Electronic and Information Engineering, Chongqing 400715, China
Keywords
micro-expression recognition; CNN; CapsNet; deep learning; optical flow;
DOI
10.1109/ICIIBMS50712.2020.9336412
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Micro-expressions are spontaneous facial expressions that may reveal people's genuine emotions, and micro-expression recognition has recently attracted much attention in the psychology and computer vision communities. In this paper, we design a multi-stream Convolutional Neural Network (CNN) combined with a Capsule Network (CapsNet) module, named CNNCapsNet, to improve the performance of micro-expression recognition. First, vertical and horizontal optical flow are computed from the onset frame to the apex frame and from the apex frame to the offset frame, respectively; this is the first time that offset-frame information has been taken into account in the field of micro-expression recognition. Second, the four optical-flow images and the grayscale apex frame are fed into the five-stream CNN model to extract features. Finally, the CapsNet module completes micro-expression recognition by learning from the features extracted by the CNN. The proposed method is evaluated with the Leave-One-Subject-Out (LOSO) cross-validation protocol on CASME II. The results show that the often-neglected offset information is more important than the onset information for the recognition task, and our CNNCapsNet framework achieves an accuracy of 64.63% on five-class micro-expression classification.
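As a rough illustration of the pipeline described in the abstract, the sketch below builds the five input streams (horizontal and vertical onset-to-apex flow, horizontal and vertical apex-to-offset flow, and the grayscale apex frame) and passes them through a five-stream CNN with a simplified capsule head. The Farneback flow parameters, layer sizes, and the routing-free capsule head are illustrative assumptions, not the authors' published configuration.

# Minimal sketch of the CNNCapsNet pipeline; all hyperparameters are assumptions.
import cv2
import numpy as np
import torch
import torch.nn as nn

def flow_pair(frame_a, frame_b):
    # Dense Farneback optical flow between two grayscale (uint8) frames;
    # returns the horizontal (u) and vertical (v) components.
    flow = cv2.calcOpticalFlowFarneback(frame_a, frame_b, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return flow[..., 0], flow[..., 1]

def build_streams(onset, apex, offset):
    # Five inputs: onset->apex (u, v), apex->offset (u, v), grayscale apex frame.
    u1, v1 = flow_pair(onset, apex)
    u2, v2 = flow_pair(apex, offset)
    return [u1, v1, u2, v2, apex.astype(np.float32)]

class Stream(nn.Module):
    # One CNN stream; depth and width are placeholder choices.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))

    def forward(self, x):
        return self.net(x)

class CNNCapsNetSketch(nn.Module):
    def __init__(self, num_classes=5, img_size=64):
        super().__init__()
        self.streams = nn.ModuleList([Stream() for _ in range(5)])
        feat = 5 * 64 * (img_size // 4) ** 2
        # Simplified capsule head: one 16-D capsule per class, no dynamic
        # routing; this stands in for the paper's CapsNet module.
        self.caps = nn.Linear(feat, num_classes * 16)
        self.num_classes = num_classes

    def forward(self, xs):  # xs: list of five (B, 1, H, W) tensors
        h = torch.cat([s(x).flatten(1) for s, x in zip(self.streams, xs)], dim=1)
        caps = self.caps(h).view(-1, self.num_classes, 16)
        return caps.norm(dim=-1)  # capsule length serves as the class score

Under the LOSO protocol, such a model would be retrained once per subject of CASME II, holding that subject's samples out for testing and averaging the per-subject accuracies.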
Pages: 236-240 (5 pages)