Research on Multimodality Face Antispoofing Model Based on Adversarial Attacks

Cited by: 0
Authors
Mao, Junjie [1 ,2 ,3 ]
Weng, Bin [1 ,2 ,3 ]
Huang, Tianqiang [1 ,2 ,3 ]
Ye, Feng [1 ,2 ,3 ]
Huang, Liqing [1 ,2 ,3 ]
Affiliations
[1] Fujian Normal Univ, Coll Math & Informat, Fuzhou 350007, Peoples R China
[2] Digital Fujian Inst Big Data Secur Technol, Fuzhou 350007, Peoples R China
[3] Fujian Prov Engn Res Ctr Big Data Anal & Applicat, Fuzhou 350007, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
DOI
10.1155/2021/3670339
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Code
0812;
Abstract
Face antispoofing detection aims to determine whether a user's face identity information is legitimate. Multimodality models generally achieve high accuracy; however, existing work on face antispoofing detection pays insufficient attention to the security of the models themselves. The purpose of this paper is therefore to explore the vulnerability of existing face antispoofing models, especially multimodality models, to various types of attacks. We first study, from the perspective of adversarial examples, how well multimodality models resist white-box and black-box attacks. We then propose a new method that combines mixed adversarial training with a differentiable high-frequency suppression module to effectively improve model security. Experimental results show that the accuracy of the multimodality face antispoofing model drops from over 90% to about 10% when it is attacked with adversarial examples. However, after applying the proposed defence method, the model still maintains more than 90% accuracy on original examples and reaches more than 80% accuracy on attacked examples.
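The abstract does not specify how the high-frequency suppression module is implemented. As a hedged illustration only, one common way to suppress high-frequency components (where adversarial perturbations often concentrate) is an FFT-based low-pass filter; the function name, normalized cutoff radius, and circular mask below are assumptions for this sketch, not the authors' implementation:

```python
import numpy as np

def high_frequency_suppress(img, cutoff=0.25):
    """Zero out 2D frequency components beyond a normalized cutoff radius.

    img: 2D array (a single grayscale image or channel).
    cutoff: radius in normalized frequency units (0.5 = Nyquist).
    """
    # Move the zero-frequency (DC) component to the array center.
    spectrum = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    # Normalized distance of each bin from the spectrum center.
    radius = np.sqrt(((yy - h / 2) / h) ** 2 + ((xx - w / 2) / w) ** 2)
    mask = radius <= cutoff  # keep only low frequencies
    filtered = spectrum * mask
    # Invert the shift and transform; discard tiny imaginary residue.
    return np.real(np.fft.ifft2(np.fft.ifftshift(filtered)))
```

A hard circular mask like this is not differentiable in `cutoff`; a smooth (e.g., Gaussian) frequency mask would be one plausible way to obtain the differentiable variant the paper refers to.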
Pages: 12
Related Papers
50 records
  • [31] Boosting Model Inversion Attacks With Adversarial Examples
    Zhou, Shuai
    Zhu, Tianqing
    Ye, Dayong
    Yu, Xin
    Zhou, Wanlei
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2024, 21 (03) : 1451 - 1468
  • [32] Crossover phenomenon in adversarial attacks on voter model
    Mizutaka, Shogo
    JOURNAL OF PHYSICS-COMPLEXITY, 2023, 4 (03):
  • [33] Artificial Immune System of Secure Face Recognition Against Adversarial Attacks
    Ren, Min
    Wang, Yunlong
    Zhu, Yuhao
    Huang, Yongzhen
    Sun, Zhenan
    Li, Qi
    Tan, Tieniu
    INTERNATIONAL JOURNAL OF COMPUTER VISION, 2024, 132 (12) : 5718 - 5740
  • [34] An Adversarial Model for Expressing Attacks on Control Protocols
    Butts, Jonathan
    Rice, Mason
    Shenoi, Sujeet
    JOURNAL OF DEFENSE MODELING AND SIMULATION-APPLICATIONS METHODOLOGY TECHNOLOGY-JDMS, 2012, 9 (03): : 243 - 255
  • [35] A Customized Model for Defensing Against Adversarial Attacks
    Sun, Jiang
    Zhou, Pingqiang
    CONFERENCE OF SCIENCE & TECHNOLOGY FOR INTEGRATED CIRCUITS, 2024 CSTIC, 2024,
  • [36] Adversarial Light Projection Attacks on Face Recognition Systems: A Feasibility Study
    Dinh-Luan Nguyen
    Arora, Sunpreet S.
    Wu, Yuhang
    Yang, Hao
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW 2020), 2020, : 3548 - 3556
  • [37] Enhancing Remote Adversarial Patch Attacks on Face Detectors with Tiling and Scaling
    Okano, Masora
    Ito, Koichi
    Nishigaki, Masakatsu
    Ohki, Tetsushi
    2024 ASIA PACIFIC SIGNAL AND INFORMATION PROCESSING ASSOCIATION ANNUAL SUMMIT AND CONFERENCE, APSIPA ASC, 2024,
  • [38] Adversarial Attacks on Graph Neural Network Based on Local Influence Analysis Model
    Wu Yiteng
    Liu Wei
    Yu Hongtao
    Cao Xiaochun
    JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY, 2022, 44 (07) : 2576 - 2583
  • [39] Detection of GPS Spoofing Attacks in UAVs Based on Adversarial Machine Learning Model
    Alhoraibi, Lamia
    Alghazzawi, Daniyal
    Alhebshi, Reemah
    SENSORS, 2024, 24 (18)
  • [40] Detection of adversarial attacks against security systems based on deep learning model
    Jaber, Mohanad J.
    Jaber, Zahraa Jasim
    Obaid, Ahmed J.
    JOURNAL OF DISCRETE MATHEMATICAL SCIENCES & CRYPTOGRAPHY, 2024, 27 (05): : 1523 - 1538