Defending Against Adversarial Fingerprint Attacks Based on Deep Image Prior

Cited by: 1
Authors
Yoo, Hwajung [1 ]
Hong, Pyo Min [1 ]
Kim, Taeyong [1 ]
Yoon, Jung Won [1 ]
Lee, Youn Kyu [1 ]
Affiliations
[1] Hongik Univ, Dept Comp Engn, Seoul 04066, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Adversarial attack defense; image reconstruction; fingerprint authentication system; deep learning; denoising; deep image prior; DEFENSE;
DOI
10.1109/ACCESS.2023.3299862
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Number
0812;
Abstract
Recently, deep learning-based biometric authentication systems, especially fingerprint authentication, have been widely deployed in real-world applications. However, these systems are vulnerable to adversarial attacks, which prevent deep learning models from classifying input data correctly. To address this problem, various defense methods have been proposed, notably those based on denoising mechanisms, but they provide only limited defense performance. In this study, we propose a new defense method against adversarial fingerprint attacks. To ensure strong defense performance, we introduce the Deep Image Prior mechanism, which achieves superior image reconstruction without prior training or a large dataset. The proposed method removes the adversarial perturbations from the input fingerprint image and reconstructs it to be close to the original fingerprint image by adapting Deep Image Prior. Our method achieves robust defense performance against various types of adversarial fingerprint attacks across different datasets, encompassing variations in the sensors, shapes, and materials of the fingerprint images. Furthermore, our method outperforms other image reconstruction methods.
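The defense described in the abstract follows the standard Deep Image Prior idea: an untrained convolutional network with a fixed random input is optimized to reproduce the attacked fingerprint image, and optimization is stopped early so the output retains the fingerprint structure while the adversarial perturbation has not yet been fitted. The following is a minimal PyTorch sketch of that general idea only; the network architecture (a small encoder-decoder), iteration count, and learning rate are illustrative assumptions, not the configuration reported in the paper.

    # Minimal Deep Image Prior (DIP) reconstruction sketch (illustrative only).
    # Assumptions: x_adv is a (1, 1, H, W) grayscale fingerprint tensor in [0, 1]
    # with H and W divisible by 4; architecture and hyperparameters are NOT the
    # authors' exact configuration.
    import torch
    import torch.nn as nn

    def conv_block(c_in, c_out):
        return nn.Sequential(
            nn.Conv2d(c_in, c_out, 3, padding=1),
            nn.BatchNorm2d(c_out),
            nn.ReLU(inplace=True),
        )

    class SmallHourglass(nn.Module):
        """Untrained encoder-decoder that serves as the image prior."""
        def __init__(self, ch=32):
            super().__init__()
            self.encoder = nn.Sequential(
                conv_block(1, ch), nn.MaxPool2d(2),
                conv_block(ch, 2 * ch), nn.MaxPool2d(2),
            )
            self.decoder = nn.Sequential(
                conv_block(2 * ch, 2 * ch), nn.Upsample(scale_factor=2, mode="bilinear"),
                conv_block(2 * ch, ch), nn.Upsample(scale_factor=2, mode="bilinear"),
                nn.Conv2d(ch, 1, 3, padding=1), nn.Sigmoid(),
            )

        def forward(self, z):
            return self.decoder(self.encoder(z))

    def dip_reconstruct(x_adv, num_iters=500, lr=1e-2):
        """Fit an untrained network to the adversarial image and stop early,
        so fingerprint structure is reproduced before the perturbation is."""
        net = SmallHourglass().to(x_adv.device)
        z = torch.randn_like(x_adv)            # fixed random input code
        opt = torch.optim.Adam(net.parameters(), lr=lr)
        mse = nn.MSELoss()
        for _ in range(num_iters):             # limited iterations = early stopping
            opt.zero_grad()
            loss = mse(net(z), x_adv)
            loss.backward()
            opt.step()
        return net(z).detach()                 # reconstructed (cleansed) fingerprint

In this sketch the early-stopped output net(z) would replace the attacked image as input to the fingerprint authentication model; the usefulness of such a defense rests on the observation that untrained convolutional networks fit natural image structure faster than high-frequency perturbations.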
Pages: 78713-78725
Number of pages: 13
Related Papers
50 records in total
  • [1] Defending Against Adversarial Attacks in Deep Neural Networks
    You, Suya
    Kuo, C-C Jay
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS, 2019, 11006
  • [2] Defending Deep Learning Models Against Adversarial Attacks
    Mani, Nag
    Moh, Melody
    Moh, Teng-Sheng
    INTERNATIONAL JOURNAL OF SOFTWARE SCIENCE AND COMPUTATIONAL INTELLIGENCE-IJSSCI, 2021, 13 (01): 72-89
  • [3] On the Effectiveness of Adversarial Training in Defending against Adversarial Example Attacks for Image Classification
    Park, Sanglee
    So, Jungmin
    APPLIED SCIENCES-BASEL, 2020, 10 (22): 1-16
  • [4] Defending against Deep-Learning-Based Flow Correlation Attacks with Adversarial Examples
    Zhang, Ziwei
    Ye, Dengpan
    SECURITY AND COMMUNICATION NETWORKS, 2022, 2022
  • [5] Defending Against Deep Learning-Based Traffic Fingerprinting Attacks with Adversarial Examples
    Hayden, Blake
    Walsh, Timothy
    Barton, Armon
    ACM TRANSACTIONS ON PRIVACY AND SECURITY, 2024, 28 (01)
  • [6] Deep image prior based defense against adversarial examples
    Dai, Tao
    Feng, Yan
    Chen, Bin
    Lu, Jian
    Xia, Shu-Tao
    PATTERN RECOGNITION, 2022, 122
  • [7] Efficacy of Defending Deep Neural Networks against Adversarial Attacks with Randomization
    Zhou, Yan
    Kantarcioglu, Murat
    Xi, Bowei
    ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR MULTI-DOMAIN OPERATIONS APPLICATIONS II, 2020, 11413
  • [8] Mockingbird: Defending Against Deep-Learning-Based Website Fingerprinting Attacks With Adversarial Traces
    Rahman, Mohammad Saidur
    Imani, Mohsen
    Mathews, Nate
    Wright, Matthew
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2021, 16 (16): 1594-1609
  • [9] Defending against adversarial attacks by randomized diversification
    Taran, Olga
    Rezaeifar, Shideh
    Holotyak, Taras
    Voloshynovskiy, Slava
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019: 11218-11225