Adversarial Evasion Attacks to Deep Neural Networks in ECR Models

Cited by: 0
Authors
Nemoto, Shota [1 ]
Rajapaksha, Subhash [2 ]
Perouli, Despoina [2 ]
Affiliations
[1] Case Western Reserve Univ, 10900 Euclid Ave, Cleveland, OH 44106 USA
[2] Marquette Univ, 1250 West Wisconsin Ave, Milwaukee, WI 53233 USA
Funding
U.S. National Science Foundation;
Keywords
Neural Networks; Adversarial Examples; Evasion Attacks; Security; Electrocardiogram; ECR;
DOI
10.5220/0010848700003123
Chinese Library Classification (CLC)
TP [Automation Technology; Computer Technology];
Discipline Code
0812;
Abstract
Evasion attacks produce adversarial examples by adding human-imperceptible perturbations that cause a machine learning model to label the input incorrectly. These black-box attacks require no knowledge of the model's internal workings; the attacker only queries the model and observes its outputs. Although such adversarial attacks have been shown to be successful in image classification problems, they have not been adequately explored in healthcare models. In this paper, we produce adversarial examples based on successful algorithms from the literature and attack a deep neural network that classifies heart rhythms in electrocardiograms (ECGs). Several batches of adversarial examples were produced, with each batch subject to a different limit on the number of queries. Within each batch, the adversarial ECGs at the median distance from their original counterparts showed slight but noticeable perturbations when compared side by side with the originals, while the adversarial ECGs at the minimum distance were practically indistinguishable from the originals.
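The record does not include the paper's attack implementation. As a rough illustration of the query-limited black-box setting the abstract describes, the following is a minimal sketch of a SimBA-style greedy coordinate search (Guo et al., 2019) against a stand-in oracle. The predict_proba oracle, the 360-sample segment length, the four-class setup, and the epsilon and query-budget values are all assumptions for illustration, not the authors' model or algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in black-box oracle: a fixed random linear "classifier" over a
# 360-sample ECG segment with 4 rhythm classes. Purely illustrative;
# the attacker is assumed to see only the output probabilities.
W = rng.normal(size=(4, 360))

def predict_proba(x: np.ndarray) -> np.ndarray:
    """Return softmax class probabilities for one ECG segment."""
    logits = W @ x
    e = np.exp(logits - logits.max())
    return e / e.sum()

def black_box_attack(x, true_label, epsilon=0.05, max_queries=1000):
    """Greedy coordinate-wise search (SimBA-style): nudge one randomly
    chosen sample of the signal at a time, keeping only the changes that
    lower the oracle's confidence in the true class."""
    x_adv = x.copy()
    p = predict_proba(x_adv)[true_label]
    queries = 1
    for i in rng.permutation(x.size):
        if queries >= max_queries:  # enforce the query budget
            break
        for step in (+epsilon, -epsilon):
            candidate = x_adv.copy()
            candidate[i] += step
            q = predict_proba(candidate)[true_label]
            queries += 1
            if q < p:  # keep the perturbation only if it helps
                x_adv, p = candidate, q
                break
    return x_adv, queries, float(np.linalg.norm(x_adv - x))

ecg = rng.standard_normal(360)  # placeholder one-second ECG at 360 Hz
adv, used, dist = black_box_attack(ecg, true_label=0)
print(f"queries used: {used}, L2 distance to original: {dist:.4f}")
```

Mirroring the paper's evaluation, one could run such an attack under several query budgets and then compare the L2 distances of the resulting adversarial ECGs to their originals, examining the median- and minimum-distance examples in each batch.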
Pages: 135-141
Number of pages: 7