Design of Robust Classifiers for Adversarial Environments

Cited: 0
Authors
Biggio, Battista [1 ]
Fumera, Giorgio [1 ]
Roli, Fabio [1 ]
Affiliations
[1] Univ Cagliari, Dept Elect & Elect Engn, I-09123 Cagliari, Italy
Keywords
Pattern classification; adversarial classification; robust classifiers;
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In adversarial classification tasks like spam filtering, intrusion detection in computer networks, and biometric identity verification, malicious adversaries can design attacks which exploit vulnerabilities of machine learning algorithms to evade detection, or to force a classification system to generate many false alarms, making it useless. Several works have addressed the problem of designing robust classifiers against these threats, although mainly focusing on specific applications and kinds of attacks. In this work, we propose a model of data distribution for adversarial classification tasks, and exploit it to devise a general method for designing robust classifiers, focusing on generative classifiers. Our method is then evaluated on two case studies concerning biometric identity verification and spam filtering.
Pages: 977-982
Page count: 6
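The abstract describes fitting a generative model of the class-conditional data distribution and hardening it against evasion attacks. The sketch below is a minimal, self-contained illustration of that general idea, not the paper's actual method: it fits a diagonal-covariance Gaussian model per class, simulates a bounded evasion shift of malicious test samples toward the legitimate class mean, and retrains the malicious-class model on a mixture of clean and anticipated attack samples. All data, the attack budget `eps`, and the retraining strategy are assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic two-class data: "legitimate" (class 0) vs "malicious" (class 1).
X0 = rng.normal(loc=-1.0, scale=1.0, size=(200, 2))
X1 = rng.normal(loc=+1.0, scale=1.0, size=(200, 2))

def fit_gaussian_model(X0, X1):
    # Generative model: one diagonal-covariance Gaussian per class.
    return {0: (X0.mean(axis=0), X0.var(axis=0)),
            1: (X1.mean(axis=0), X1.var(axis=0))}

def log_likelihood(X, mu, var):
    # Per-sample log-density under a diagonal Gaussian.
    return -0.5 * np.sum(np.log(2 * np.pi * var) + (X - mu) ** 2 / var, axis=-1)

def predict(model, X):
    # Assign each sample to the class with the higher log-likelihood.
    s0 = log_likelihood(X, *model[0])
    s1 = log_likelihood(X, *model[1])
    return (s1 > s0).astype(int)

model = fit_gaussian_model(X0, X1)

# Evasion attack (illustrative): the adversary shifts malicious test points
# toward the legitimate-class mean, within a bounded effort `eps`.
eps = 1.5
direction = model[0][0] - model[1][0]
direction /= np.linalg.norm(direction)
X1_test = rng.normal(loc=+1.0, scale=1.0, size=(200, 2))
X1_attack = X1_test + eps * direction

clean_detect = predict(model, X1_test).mean()
attack_detect = predict(model, X1_attack).mean()

# Robustified variant (again illustrative, NOT the paper's method): refit the
# malicious-class model on a mixture of clean and anticipated attack samples,
# i.e., a crude model of the data distribution under attack.
X1_aug = np.vstack([X1, X1 + eps * direction])
robust = fit_gaussian_model(X0, X1_aug)
robust_detect = predict(robust, X1_attack).mean()

print(f"detection rate, clean test set:        {clean_detect:.2f}")
print(f"detection rate, under evasion:         {attack_detect:.2f}")
print(f"detection rate, robust model, evasion: {robust_detect:.2f}")
```

The evasion shift degrades the standard model's detection rate, while the model refit on anticipated attack samples recovers much of it, mirroring the trade-off between accuracy on clean data and robustness under attack that motivates this line of work.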