MANet: a Motion-Driven Attention Network for Detecting the Pulse from a Facial Video with Drastic Motions

Cited by: 3
Authors
Liu, Xuenan [1 ]
Yang, Xuezhi [1 ]
Meng, Ziyan [1 ]
Wang, Ye [1 ]
Zhang, Jie [2 ]
Wong, Alexander [3 ]
Affiliations
[1] Hefei Univ Technol, Anhui Prov Key Lab Ind Safety & Emergency Technol, Hefei 230601, Anhui, Peoples R China
[2] First Affiliated Hosp USTC, Hefei 230601, Anhui, Peoples R China
[3] Univ Waterloo, Waterloo, ON N2L 3G1, Canada
Keywords
NONCONTACT;
DOI
10.1109/ICCVW54120.2021.00270
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The video photoplethysmography (VPPG) technique detects pulse signals from facial videos and has become increasingly popular due to its convenience and low cost. However, it is not sufficiently robust to drastic motion disturbances, such as the continuous head movements common in everyday life. This paper proposes a motion-driven attention network (MANet) to improve motion robustness. MANet takes as inputs the frequency spectra of a skin color signal and of a synchronous nose motion signal, then removes the motion features from the skin color signal using an attention mechanism driven by the nose motion signal. It thereby predicts a frequency spectrum free of components caused by motion disturbances, which is finally transformed back into a pulse signal. MANet is tested on 1000 samples from 200 subjects provided by the 2nd Remote Physiological Signal Sensing (RePSS) Challenge, achieving a mean inter-beat interval (IBI) error of 122.80 milliseconds and a mean heart rate error of 7.29 beats per minute.
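The abstract describes the pipeline only at a high level. The PyTorch sketch below illustrates one plausible reading of the motion-driven attention step: per-frequency-bin weights derived from the nose motion spectrum gate the skin color spectrum, suppressing motion-dominated bins. The class name, layer sizes, sigmoid gating, and refinement head are hypothetical assumptions for illustration, not the paper's actual architecture.

import torch
import torch.nn as nn

class MotionDrivenAttention(nn.Module):
    # Toy stand-in for MANet's core idea: attention weights computed from
    # the nose motion spectrum gate the skin color spectrum, suppressing
    # frequency bins dominated by motion. All dimensions and layer choices
    # here are assumptions, not taken from the paper.
    def __init__(self, n_bins=256, hidden=128):
        super().__init__()
        self.attn = nn.Sequential(
            nn.Linear(n_bins, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_bins),
            nn.Sigmoid(),  # per-bin weights in (0, 1)
        )
        self.refine = nn.Sequential(
            nn.Linear(n_bins, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_bins),
        )

    def forward(self, color_spec, motion_spec):
        # Bins the network deems motion-dominated receive low weights, so
        # the elementwise product removes motion features from the color
        # spectrum; the refinement head predicts the pulse spectrum.
        weights = self.attn(motion_spec)
        return self.refine(color_spec * weights)

# Usage on random stand-in data: a batch of 4 spectra with 256 bins each.
model = MotionDrivenAttention()
color_spec = torch.randn(4, 256)   # spectra of skin color signals
motion_spec = torch.randn(4, 256)  # spectra of synchronous nose motion signals
pulse_spec = model(color_spec, motion_spec)
# An inverse transform of pulse_spec would recover the pulse waveform.

Gating in the frequency domain rather than the time domain matches the abstract's description of the inputs and outputs as spectra; how the paper combines the two streams beyond that is not specified here.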
Pages: 2385-2390
Number of pages: 6