Compressive imaging for defending deep neural networks from adversarial attacks

Cited by: 8
Authors: Kravets, Vladislav [1]; Javidi, Bahram [2]; Stern, Adrian [1]
Affiliations:
[1] Ben Gurion Univ Negev, Sch Elect & Comp Engn, Dept Electroopt & Photon Engn, POB 653, IL-8410501 Beer Sheva, Israel
[2] Univ Connecticut, Dept Elect & Comp Engn, U-4157, Storrs, CT 06269 USA
DOI: 10.1364/OL.418808
Chinese Library Classification (CLC): O43 [Optics]
Discipline classification codes: 070207; 0803
Abstract: Despite their outstanding performance, convolutional deep neural networks (DNNs) are vulnerable to small adversarial perturbations. In this Letter, we introduce a novel approach to thwart adversarial attacks. We propose to employ compressive sensing (CS) to defend DNNs from adversarial attacks and, at the same time, to encode the image, thus preventing counterattacks. We present computer simulations and optical experimental results of object classification in adversarial images captured with a CS single-pixel camera. (C) 2021 Optical Society of America
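To make the CS acquisition described in the abstract concrete, the following is a minimal sketch of single-pixel-camera-style compressive measurements in NumPy. It is not the authors' pipeline: the sensing-matrix type (random +/-1 patterns), image size, sampling ratio, and all variable names are illustrative assumptions, and the reconstruction and DNN classification stages are omitted.

```python
# Minimal illustrative sketch (assumptions, not details from the Letter):
# compressive-sensing measurements of an image, as a single-pixel camera
# would acquire them with random +/-1 projection patterns.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 32x32 grayscale image with pixel values in [0, 1].
n_side = 32
image = rng.random((n_side, n_side))
x = image.reshape(-1)        # flatten to a length-N signal
n = x.size                   # N = 1024 pixels
m = n // 4                   # M = 256 measurements (assumed 25% sampling ratio)

# Random +/-1 sensing patterns, one row per single-pixel exposure. Keeping
# phi secret is what gives the measurements their encoding role: without it,
# an attacker cannot shape how an image-domain perturbation appears in y.
phi = rng.choice([-1.0, 1.0], size=(m, n))

# CS acquisition: each measurement is one inner product <phi_i, x>,
# i.e., the photodetector reading for one projected pattern.
y = phi @ x

# An image-domain adversarial perturbation delta reaches a downstream
# classifier (which operates on y, or on a reconstruction from y) only
# through phi.
delta = 0.01 * rng.standard_normal(n)
y_adv = phi @ (x + delta)

print(y.shape, np.linalg.norm(y_adv - y))
```

In this sketch the classifier never sees the raw pixels the adversary perturbed, only the compressed (and, via the secret patterns, encoded) measurement vector.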
Pages: 1951-1954 (4 pages)
Related papers (50 in total; first 10 shown):
• [1] You, Suya; Kuo, C-C Jay. Defending Against Adversarial Attacks in Deep Neural Networks. Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications, 2019, 11006.
• [2] Zhou, Yan; Kantarcioglu, Murat; Xi, Bowei. Efficacy of Defending Deep Neural Networks against Adversarial Attacks with Randomization. Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications II, 2020, 11413.
• [3] Zhang, Xiang; Zitnik, Marinka. GNNGuard: Defending Graph Neural Networks against Adversarial Attacks. Advances in Neural Information Processing Systems 33 (NeurIPS 2020), 2020, 33.
• [4] Kumarasinghe, Udesh; Nabeel, Mohamed; De Zoysa, Kasun; Gunawardana, Kasun; Elvitigala, Charitha. HeteroGuard: Defending Heterogeneous Graph Neural Networks against Adversarial Attacks. 2022 IEEE International Conference on Data Mining Workshops (ICDMW), 2022: 698-705.
• [5] Carrara, Fabio; Falchi, Fabrizio; Caldelli, Roberto; Amato, Giuseppe; Fumarola, Roberta; Becarelli, Rudy. Detecting Adversarial Example Attacks to Deep Neural Networks. Proceedings of the 15th International Workshop on Content-Based Multimedia Indexing (CBMI), 2017.
• [6] Wang, Ellen; Chain, Helena; Wang, Xiaodi; Ray, Avi; Wooldridge, Tyler. Defending Quantum Neural Networks against Adversarial Attacks with Homomorphic Data Encryption. 2023 International Conference on Computational Science and Computational Intelligence (CSCI 2023), 2023: 816-822.
• [7] Yao, Minghong; Yu, Haizheng; Bian, Hong. Defending against Adversarial Attacks on Graph Neural Networks via Similarity Property. AI Communications, 2023, 36(01): 27-39.
• [8] Mardani, Morteza; Gong, Enhao; Cheng, Joseph Y.; Pauly, John; Xing, Lei. Recurrent Generative Adversarial Neural Networks for Compressive Imaging. 2017 IEEE 7th International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), 2017.
• [9] Zhang, Ming; Li, Hu; Kuang, Xiaohui; Pang, Ling; Wu, Zhendong. Neuron Selecting: Defending Against Adversarial Examples in Deep Neural Networks. Information and Communications Security (ICICS 2019), 2020, 11999: 613-629.
• [10] Mani, Nag; Moh, Melody; Moh, Teng-Sheng. Defending Deep Learning Models Against Adversarial Attacks. International Journal of Software Science and Computational Intelligence (IJSSCI), 2021, 13(01): 72-89.