Verifying Attention Robustness of Deep Neural Networks Against Semantic Perturbations

Cited by: 1
Authors
Munakata, Satoshi [1 ]
Urban, Caterina [2 ,3 ,4 ]
Yokoyama, Haruki [1 ]
Yamamoto, Koji [1 ]
Munakata, Kazuki [1 ]
Affiliations
[1] Fujitsu, Kawasaki, Kanagawa, Japan
[2] Inria, Paris, France
[3] ENS PSL, Paris, France
[4] CNRS, Paris, France
Source
NASA FORMAL METHODS, NFM 2023 | 2023 / Vol. 13903
DOI
10.1007/978-3-031-33170-1_3
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
It is known that deep neural networks (DNNs) classify an input image by paying particular attention to certain specific pixels; a graphical representation of the magnitude of attention to each pixel is called a saliency-map. Saliency-maps are used to check the validity of the basis of a classification decision; e.g., the basis is not valid if a DNN pays more attention to the background than to the subject of an image. However, semantic perturbations can significantly change the saliency-map. In this work, we propose the first verification method for attention robustness, i.e., the local robustness of the changes in the saliency-map against combinations of semantic perturbations. Specifically, our method determines the range of the perturbation parameters (e.g., the amount of brightness change) within which the difference between the actual saliency-map change and the expected saliency-map change remains below a given threshold value. Our method is based on activation-region traversal, focusing on the outermost robust boundary for scalability on larger DNNs. We empirically evaluate the effectiveness and performance of our method on DNNs trained on popular image classification datasets.
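The following Python sketch is illustrative only and is not the authors' verification tool: it approximates the attention-robustness property empirically by sampling a brightness-perturbation parameter, computing gradient-based saliency-maps for a small placeholder network, and reporting the sampled parameter range in which the saliency-map change stays below a hypothetical threshold (the expected saliency-map change is taken to be zero for simplicity). The network, threshold, and sampling grid are all assumptions; the paper's method instead verifies the property exactly via activation-region traversal.

# Illustrative sketch only: an empirical, sampling-based proxy for the
# attention-robustness property, not the activation-region-traversal method.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical small classifier standing in for the DNN under verification.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 64), nn.ReLU(),
    nn.Linear(64, 10),
)
model.eval()
for p in model.parameters():        # freeze weights; only input gradients are needed
    p.requires_grad_(False)

def saliency_map(x):
    """Gradient-based saliency-map: |d(top-class score)/d(pixel)|."""
    x = x.clone().requires_grad_(True)
    model(x).max(dim=1).values.sum().backward()
    return x.grad.abs().squeeze(0)

def brightness(x, beta):
    """Semantic perturbation: uniform brightness shift, clipped to [0, 1]."""
    return (x + beta).clamp(0.0, 1.0)

x0 = torch.rand(1, 1, 28, 28)       # placeholder input image
s0 = saliency_map(x0)               # saliency-map of the unperturbed input
threshold = 0.1                     # hypothetical tolerance on the saliency change

# Sample the perturbation parameter and keep the values for which the
# normalized saliency-map difference stays below the threshold.
robust = [
    beta.item()
    for beta in torch.linspace(-0.3, 0.3, 61)
    if ((saliency_map(brightness(x0, beta.item())) - s0).norm()
        / (s0.norm() + 1e-12)).item() <= threshold
]
if robust:
    # Rough empirical range (the sampled set need not be contiguous),
    # unlike the exact parameter ranges computed by the paper's method.
    print(f"attention-robust (empirically) for beta in "
          f"[{min(robust):.2f}, {max(robust):.2f}]")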
Pages: 37-61
Number of pages: 25