Morphence: Moving Target Defense Against Adversarial Examples

Cited: 10
Authors
Amich, Abderrahmen [1 ]
Eshete, Birhanu [1 ]
Affiliations
[1] Univ Michigan, Dearborn, MI 48128 USA
DOI
10.1145/3485832.3485899
CLC number
TP39 [Computer applications]
Discipline codes
081203; 0835
Abstract
Robustness of machine learning models to adversarial examples remains an open research problem. Attacks often succeed by repeatedly probing a fixed target model with adversarial examples purposely crafted to fool it. In this paper, we introduce Morphence, an approach that shifts the defense landscape by making a model a moving target against adversarial examples. By regularly moving the decision function of a model, Morphence makes it significantly more challenging for repeated or correlated attacks to succeed. Morphence deploys a pool of models generated from a base model in a manner that introduces sufficient randomness into its responses to prediction queries. To ensure that repeated or correlated attacks fail, the deployed pool of models automatically expires once a query budget is reached, and is seamlessly replaced by a new model pool generated in advance. We evaluate Morphence on two benchmark image classification datasets (MNIST and CIFAR10) against five reference attacks (two white-box and three black-box). In all cases, Morphence consistently outperforms the thus-far most effective defense, adversarial training, even in the face of strong white-box attacks, while preserving accuracy on clean data and reducing attack transferability.
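The moving-target scheme the abstract describes can be illustrated with a minimal Python sketch. All names here (`MorphencePool`, `make_pool`, the budget default) are hypothetical illustrations, not the paper's actual implementation: a factory builds a pool of model variants from a base model, each prediction query is served by a randomly chosen pool member, and once the query budget is exhausted the pool is retired and swapped for one generated in advance.

```python
import random


class MorphencePool:
    """Hypothetical sketch of a moving-target model pool: random per-query
    model selection plus automatic pool expiry after a query budget."""

    def __init__(self, make_pool, pool_size=5, query_budget=1000):
        self.make_pool = make_pool            # factory: int -> list of callable models
        self.pool_size = pool_size
        self.query_budget = query_budget
        self.pool = make_pool(pool_size)      # currently deployed pool
        self.next_pool = make_pool(pool_size) # replacement, generated in advance
        self.queries = 0                      # queries served by the active pool

    def predict(self, x):
        # Expire the active pool once the budget is reached and seamlessly
        # swap in the pre-generated replacement; start building the next one.
        if self.queries >= self.query_budget:
            self.pool = self.next_pool
            self.next_pool = self.make_pool(self.pool_size)
            self.queries = 0
        self.queries += 1
        # Random selection of the responding model is what makes repeated
        # probing by an attacker hit a moving decision function.
        model = random.choice(self.pool)
        return model(x)
```

In practice the pool members would be distinct networks (e.g. retrained or perturbed copies of the base model) rather than simple callables, and generating `next_pool` would happen asynchronously so the swap does not block queries.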
Pages: 61-75 (15 pages)
Related papers (50 records)
  • [1] DeepMTD: Moving Target Defense for Deep Visual Sensing against Adversarial Examples
    Song, Qun
    Yan, Zhenyu
    Tan, Rui
    [J]. ACM TRANSACTIONS ON SENSOR NETWORKS, 2022, 18 (01)
  • [2] Moving Target Defense for Embedded Deep Visual Sensing against Adversarial Examples
    Song, Qun
    Yan, Zhenyu
    Tan, Rui
    [J]. PROCEEDINGS OF THE 17TH CONFERENCE ON EMBEDDED NETWORKED SENSOR SYSTEMS (SENSYS '19), 2019, : 124 - 137
  • [4] A Moving Target Defense against Adversarial Machine Learning
    Roy, Abhishek
    Chhabra, Anshuman
    Kamhoua, Charles A.
    Mohapatra, Prasant
    [J]. SEC'19: PROCEEDINGS OF THE 4TH ACM/IEEE SYMPOSIUM ON EDGE COMPUTING, 2019, : 383 - 388
  • [5] Toward Effective Moving Target Defense Against Adversarial AI
    Martin, Peter
    Fan, Jian
    Kim, Taejin
    Vesey, Konrad
    Greenwald, Lloyd
    [J]. 2021 IEEE MILITARY COMMUNICATIONS CONFERENCE (MILCOM 2021), 2021,
  • [6] Hadamard's Defense Against Adversarial Examples
    Hoyos, Angello
    Ruiz, Ubaldo
    Chavez, Edgar
    [J]. IEEE ACCESS, 2021, 9 : 118324 - 118333
  • [7] Background Class Defense Against Adversarial Examples
    McCoyd, Michael
    Wagner, David
    [J]. 2018 IEEE SYMPOSIUM ON SECURITY AND PRIVACY WORKSHOPS (SPW 2018), 2018, : 96 - 102
  • [8] MoNet: Impressionism As A Defense Against Adversarial Examples
    Ge, Huangyi
    Chau, Sze Yiu
    Li, Ninghui
    [J]. 2020 SECOND IEEE INTERNATIONAL CONFERENCE ON TRUST, PRIVACY AND SECURITY IN INTELLIGENT SYSTEMS AND APPLICATIONS (TPS-ISA 2020), 2020, : 246 - 255
  • [9] Advocating for Multiple Defense Strategies Against Adversarial Examples
    Araujo, Alexandre
    Meunier, Laurent
    Pinot, Rafael
    Negrevergne, Benjamin
    [J]. ECML PKDD 2020 WORKSHOPS, 2020, 1323 : 165 - 177
  • [10] EI-MTD: Moving Target Defense for Edge Intelligence against Adversarial Attacks
    Qian, Yaguan
    Guo, Yankai
    Shao, Qiqi
    Wang, Jiamin
    Wang, Bin
    Gu, Zhaoquan
    Ling, Xiang
    Wu, Chunming
    [J]. ACM TRANSACTIONS ON PRIVACY AND SECURITY, 2022, 25 (03)