Protection against adversarial attacks with randomization of recognition algorithm

Cited: 0
Authors
Marshalko, Grigory [1 ,2 ]
Koreshkova, Svetlana [3 ]
Affiliations
[1] Tech Comm Standardisat Cryptog & Secur Mech TC 02, Moscow, Russia
[2] Higher Sch Econ, Moscow, Russia
[3] JSC Kryptonite, Moscow, Russia
Keywords
Biometric recognition; Statistical distance; Local binary patterns; Password based authentication;
DOI
10.1007/s11416-023-00503-z
CLC classification
TP [Automation technology, computer technology]
Discipline code
0812
Abstract
We study a randomized variant of one class of biometric recognition algorithms, intended to mitigate adversarial attacks. We show that estimating the security of the proposed algorithm can be formulated as estimating the statistical distance between the probability distributions induced by the initial and the randomized algorithms. A practical password-based implementation is discussed, and the results of an experimental evaluation are given. A preliminary version of this research was presented at the CTCrypt 2020 workshop.
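The security criterion described in the abstract — the statistical distance between the output distributions of the original and the randomized recognizer — can be illustrated with a minimal total-variation-distance computation. This is an illustrative sketch only; the function names and the sample outputs below are assumptions for demonstration, not taken from the paper:

```python
from collections import Counter

def total_variation(p, q):
    """Total variation distance between two discrete distributions,
    each given as a dict mapping outcomes to probabilities."""
    support = set(p) | set(q)
    return 0.5 * sum(abs(p.get(x, 0.0) - q.get(x, 0.0)) for x in support)

def empirical(samples):
    """Empirical distribution estimated from a list of observed outputs."""
    n = len(samples)
    return {x: c / n for x, c in Counter(samples).items()}

# Hypothetical accept/reject decisions (1 = match) of the deterministic
# recognizer and its password-randomized variant on the same inputs.
det = empirical([1, 1, 0, 1, 0, 1, 1, 0])
rnd = empirical([1, 0, 0, 1, 0, 1, 0, 0])
print(total_variation(det, rnd))  # prints 0.25
```

A small distance would indicate that the randomized algorithm behaves statistically close to the original one, which is the kind of guarantee the paper's security estimation aims to quantify.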
Pages: 127-133
Page count: 7