Randomized Prediction Games for Adversarial Machine Learning

Cited by: 45
Authors
Bulo, Samuel Rota [1 ]
Biggio, Battista [2 ]
Pillai, Ignazio [2 ]
Pelillo, Marcello [3 ]
Roli, Fabio [2 ]
Affiliations
[1] Fdn Bruno Kessler, ICT Tev, I-38123 Trento, Italy
[2] Univ Cagliari, Dept Elect & Elect Engn, I-09123 Cagliari, Italy
[3] Ca Foscari Univ Venice, Dipartimento Sci Ambientali Informat & Stat, I-30123 Venice, Italy
Keywords
Adversarial learning; computer security; evasion attacks; game theory; pattern classification; randomization; VARIATIONAL INEQUALITY; CLASSIFIERS;
DOI
10.1109/TNNLS.2016.2593488
CLC classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In spam and malware detection, attackers exploit randomization to obfuscate malicious data and increase their chances of evading detection at test time; e.g., malware code is typically obfuscated using random strings or byte sequences to hide known exploits. Interestingly, randomization has also been proposed to improve the security of learning algorithms against evasion attacks, as it hides information about the classifier from the attacker. Recent work has proposed game-theoretical formulations to learn secure classifiers, by simulating different evasion attacks and modifying the classification function accordingly. However, both the classification function and the simulated data manipulations have been modeled in a deterministic manner, without accounting for any form of randomization. In this paper, we overcome this limitation by proposing a randomized prediction game, namely, a noncooperative game-theoretic formulation in which the classifier and the attacker make randomized strategy selections according to some probability distribution defined over the respective strategy set. We show that our approach improves the tradeoff between attack detection and false alarms with respect to state-of-the-art secure classifiers, even against attacks that differ from those hypothesized during design, on application examples including handwritten digit recognition, spam, and malware detection.
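To make the idea of randomized strategy selections concrete, the toy sketch below sets up a two-player zero-sum game between a classifier (choosing a decision rule) and an attacker (choosing an evasion strategy), and approximates the equilibrium mixed strategies by fictitious play. The payoff matrix and strategy names are illustrative assumptions, not values from the paper, and fictitious play is just one classical solver for such games, not the paper's actual algorithm.

```python
# Toy zero-sum "prediction game": the classifier picks a decision rule,
# the attacker picks an evasion strategy. At equilibrium both randomize,
# which is the intuition behind randomized prediction games.
# PAYOFF[i][j] = classifier's utility (e.g., detection rate) when the
# classifier plays row i and the attacker plays column j.
# These numbers are made up for illustration.
PAYOFF = [
    [0.9, 0.2],   # strict threshold: good vs. plain, weak vs. obfuscated
    [0.3, 0.8],   # lenient threshold: weak vs. plain, good vs. obfuscated
]

def fictitious_play(payoff, rounds=20000):
    """Approximate mixed equilibrium strategies by having each player
    repeatedly best-respond to the opponent's empirical play frequencies."""
    n_rows, n_cols = len(payoff), len(payoff[0])
    row_counts = [0] * n_rows   # how often the classifier played each row
    col_counts = [0] * n_cols   # how often the attacker played each column
    row_counts[0] = col_counts[0] = 1  # arbitrary opening moves
    for _ in range(rounds):
        # classifier best-responds to the attacker's empirical mixture
        row_vals = [sum(payoff[i][j] * col_counts[j] for j in range(n_cols))
                    for i in range(n_rows)]
        row_counts[row_vals.index(max(row_vals))] += 1
        # attacker best-responds by minimizing the classifier's utility
        col_vals = [sum(payoff[i][j] * row_counts[i] for i in range(n_rows))
                    for j in range(n_cols)]
        col_counts[col_vals.index(min(col_vals))] += 1
    total_r, total_c = sum(row_counts), sum(col_counts)
    return ([c / total_r for c in row_counts],
            [c / total_c for c in col_counts])

p, q = fictitious_play(PAYOFF)
value = sum(PAYOFF[i][j] * p[i] * q[j]
            for i in range(2) for j in range(2))
```

For this matrix the unique equilibrium is fully mixed (the classifier plays the strict threshold with probability 5/12, the attacker obfuscates half the time), so a deterministic classifier would be strictly exploitable — which is exactly the motivation for letting both players randomize.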
Pages: 2466 - 2478 (13 pages)