A multimodal face antispoofing method based on multifeature vision transformer and multirank fusion

Cited: 1
Authors
Li, Zuhe [1 ]
Cui, Yuhao [1 ]
Wang, Fengqin [1 ,4 ]
Liu, Weihua [2 ]
Yang, Yongshuang [1 ]
Yu, Zeqi [1 ]
Jiang, Bin [1 ]
Chen, Hui [3 ]
Affiliations
[1] Zhengzhou Univ Light Ind, Sch Comp & Commun Engn, Zhengzhou, Peoples R China
[2] China Mobile Res Inst, Beijing, Peoples R China
[3] Simshine Intelligent Technol Co Ltd, Ningbo, Peoples R China
[4] Zhengzhou Univ Light Ind, Sch Comp & Commun Engn, Zhengzhou 450002, Peoples R China
Source
Funding
National Natural Science Foundation of China;
Keywords
face antispoofing; multifeature vision transformer; multimodal fusion; multirank fusion; GESTURE RECOGNITION; ALGORITHM;
DOI
10.1002/cpe.7824
Chinese Library Classification
TP31 [Computer Software];
Discipline code
081202 ; 0835 ;
Abstract
Face antispoofing (FAS) is attracting increasing attention from researchers because of its important role in protecting facial recognition systems from face spoofing attacks. With the advancement of new convolutional neural network architectures and the construction of various face antispoofing databases, deep learning-based face antispoofing has become the main approach in the FAS field. However, the generalization performance of current multimodal face antispoofing algorithms is poor, and recognition performance varies considerably across datasets. Therefore, we design a multimodal face antispoofing framework based on a multifeature vision transformer (MFViT) and multirank fusion (MRF). First, we use a vision transformer structure, MFViT, for multimodal face antispoofing and combine modalities to capture the distinguishing characteristics of each modality. Second, we design a multidimensional multimodal fusion module, MRF, based on the modal fusion features produced by the MFViT, to fuse modal information across different dimensions more effectively. Evaluation results indicate that the framework we designed achieves an average classification error rate (ACER) of 1.61% on the CASIA-SURF dataset and an ACER of 6.5% on the CASIA-SURF CeFA dataset.
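The record does not spell out how the MRF module combines modalities, but "multirank fusion" suggests the general family of low-rank tensor fusion methods, in which each modality's feature vector is projected by a rank-decomposed factor tensor and the per-rank elementwise products are summed. The sketch below illustrates that generic idea only; the modality names, dimensions, and factor shapes are all assumptions for illustration, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-modality feature dimensions (e.g. RGB, depth, IR
# streams, as in CASIA-SURF-style multimodal FAS data).
dims = {"rgb": 8, "depth": 6, "ir": 6}
rank, out_dim = 4, 5

# One factor tensor per modality, shape (rank, out_dim, d_m + 1);
# the extra column acts as a bias term on the appended constant 1.
factors = [rng.standard_normal((rank, out_dim, d + 1)) for d in dims.values()]
features = [rng.standard_normal(d) for d in dims.values()]

def low_rank_fusion(features, factors):
    """Fuse modality vectors via a rank-decomposed tensor product."""
    rank, out_dim = factors[0].shape[:2]
    prod = np.ones((rank, out_dim))
    for h, W in zip(features, factors):
        h1 = np.concatenate([h, [1.0]])  # append 1 for the bias column
        prod *= W @ h1                   # (rank, out_dim, d+1) @ (d+1,) -> (rank, out_dim)
    return prod.sum(axis=0)              # collapse the rank axis -> (out_dim,)

fused = low_rank_fusion(features, factors)
print(fused.shape)  # (5,)
```

The rank hyperparameter trades expressiveness against parameter count: a full tensor fusion of all three modalities would need a factor with O(d_rgb * d_depth * d_ir * out_dim) parameters, while the rank-decomposed form stays linear in each modality's dimension.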
Pages: 13