Design of Robust Classifiers for Adversarial Environments

Cited by: 0
Authors
Biggio, Battista [1 ]
Fumera, Giorgio [1 ]
Roli, Fabio [1 ]
Affiliations
[1] Univ Cagliari, Dept Elect & Elect Engn, I-09123 Cagliari, Italy
Keywords
Pattern classification; adversarial classification; robust classifiers
DOI
Not available
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
In adversarial classification tasks like spam filtering, intrusion detection in computer networks, and biometric identity verification, malicious adversaries can design attacks which exploit vulnerabilities of machine learning algorithms to evade detection, or to force a classification system to generate many false alarms, making it useless. Several works have addressed the problem of designing robust classifiers against these threats, although mainly focusing on specific applications and kinds of attacks. In this work, we propose a model of data distribution for adversarial classification tasks, and exploit it to devise a general method for designing robust classifiers, focusing on generative classifiers. Our method is then evaluated on two case studies concerning biometric identity verification and spam filtering.
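The abstract's core idea, explicitly modeling the test-time data distribution under attack when training a generative classifier, can be illustrated with a toy sketch. This is not the authors' exact method; the 1-D feature, the Gaussian class models, and the parameters `p_attack` and `shift` are all assumptions made for illustration:

```python
import numpy as np

# Illustrative sketch of an adversary-aware generative classifier.
# The malicious class at test time is modeled as a mixture: with
# probability (1 - p_attack) samples follow the training distribution,
# with probability p_attack the adversary shifts their feature value
# toward the legitimate class.

rng = np.random.default_rng(0)

# Training data: one feature, legitimate (y=0) vs. malicious (y=1).
X0 = rng.normal(loc=0.0, scale=1.0, size=500)  # legitimate samples
X1 = rng.normal(loc=3.0, scale=1.0, size=500)  # malicious samples

mu0, s0 = X0.mean(), X0.std()
mu1, s1 = X1.mean(), X1.std()

def gauss(x, mu, s):
    """Gaussian density fitted to one class."""
    return np.exp(-0.5 * ((x - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))

def posterior_malicious(x, p_attack=0.0, shift=2.0):
    """P(y=1 | x), with the malicious likelihood taken as a mixture of
    the clean and the (assumed) attacked distribution; equal priors."""
    p1 = (1 - p_attack) * gauss(x, mu1, s1) + p_attack * gauss(x, mu1 - shift, s1)
    p0 = gauss(x, mu0, s0)
    return p1 / (p0 + p1)

# An attacked malicious sample, moved toward the legitimate region:
x_test = 1.0
standard = posterior_malicious(x_test, p_attack=0.0)  # attack-unaware
robust = posterior_malicious(x_test, p_attack=0.5)    # attack-aware
print(f"standard: {standard:.3f}  robust: {robust:.3f}")
```

On such a shifted sample the attack-aware posterior assigns noticeably more probability to the malicious class than the standard one, which is the qualitative effect a robust generative classifier aims for.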
Pages: 977-982 (6 pages)
Related papers
50 items in total
  • [1] Multiple classifier systems for robust classifier design in adversarial environments
    Biggio, Battista
    Fumera, Giorgio
    Roli, Fabio
    [J]. INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2010, 1 (1-4) : 27 - 41
  • [2] Robust Physical Adversarial Camouflages for Image Classifiers
    Duan, Ye-Xin
    He, Zheng-Yun
    Zhang, Song
    Zhan, Da-Zhi
    Wang, Tian-Feng
    Lin, Geng-You
    Zhang, Jin
    Pan, Zhi-Song
    [J]. Tien Tzu Hsueh Pao/Acta Electronica Sinica, 2024, 52 (03) : 863 - 871
  • [3] Are Generative Classifiers More Robust to Adversarial Attacks?
    Li, Yingzhen
    Bradshaw, John
    Sharma, Yash
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [4] Robust quantum classifiers via NISQ adversarial learning
    Banchi, Leonardo
    [J]. NATURE COMPUTATIONAL SCIENCE, 2022, 2 (11) : 699 - 700
  • [5] Robust Dynamic Spectrum Access in Adversarial Environments
    Guan, Ziwei
    Hu, Timothy Y.
    Palombo, Joseph
    Liston, Michael J.
    Bucci, Donald J., Jr.
    Liang, Yingbin
    [J]. ICC 2020 - 2020 IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS (ICC), 2020
  • [6] Design of robust neural network classifiers
    Larsen, J
    Andersen, LN
    Hintz-Madsen, M
    Hansen, LK
    [J]. PROCEEDINGS OF THE 1998 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING, VOLS 1-6, 1998, : 1205 - 1208
  • [7] On the design of robust classifiers for computer vision
    Masnadi-Shirazi, Hamed
    Mahadevan, Vijay
    Vasconcelos, Nuno
    [J]. 2010 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2010, : 779 - 786
  • [8] Robust Decentralized Virtual Coordinate Systems in Adversarial Environments
    Zage, David
    Nita-Rotaru, Cristina
    [J]. ACM TRANSACTIONS ON INFORMATION AND SYSTEM SECURITY, 2010, 13 (04)