Subsampling and Knowledge Distillation on Adversarial Examples: New Techniques for Deep Learning Based Side Channel Evaluations

Cited by: 1
Authors
Gohr, Aron [1 ]
Jacob, Sven [1 ]
Schindler, Werner [1 ]
Affiliations
[1] Bundesamt für Sicherheit in der Informationstechnik (BSI), Godesberger Allee 185-189, D-53175 Bonn, Germany
Source
Selected Areas in Cryptography (SAC 2020), Lecture Notes in Computer Science, Springer
Keywords
Power analysis; Machine learning; Deep learning; SAT solver;
DOI
10.1007/978-3-030-81652-0_22
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
This paper has four main goals. First, we show how we solved the CHES 2018 AES challenge in the contest using essentially a linear classifier combined with a SAT solver and a custom error correction method. This part of the paper has previously appeared in a preprint by the current authors (ePrint report 2019/094) and later as a contribution to a preprint write-up of the solutions by the winning teams (ePrint report 2019/860). Second, we develop a novel deep neural network architecture for side-channel analysis that completely breaks the AES challenge, allowing for fairly reliable key recovery with just a single trace on the unknown-device part of the CHES challenge (with an expected success rate of roughly 70% if about 100 CPU hours are allowed for the equation-solving stage of the attack). This solution significantly improves upon all previously published solutions of the AES challenge, including our baseline linear solution. Third, we consider the question of leakage attribution for both the classifier we used in the challenge and for our deep neural network. Direct inspection of the weight vector of our machine learning model yields a lot of information on the implementation for our linear classifier. For the deep neural network, we test three other strategies (occlusion of traces; inspection of adversarial changes; knowledge distillation) and find that these can yield information on the leakage essentially equivalent to that gained by inspecting the weights of the simpler model. Fourth, we study the properties of adversarially generated side-channel traces for our model. Partly reproducing recent computer vision work by Ilyas et al. in our application domain, we find that a linear classifier that generalizes to an unseen device much better than our linear baseline can be trained using only adversarial examples (fresh random keys, adversarially perturbed traces) for our deep neural network. This gives a new way of extracting human-usable knowledge from a deep side-channel model while also yielding insights on adversarial examples in an application domain where relatively few sources of spurious correlations between data and labels exist. The experiments described in this paper can be reproduced using code available at https://github.com/agohr/ches2018.
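Two of the techniques named in the abstract are easy to illustrate. The first is occlusion-based leakage attribution: blank out a window of samples in each trace and measure how much the model's score for the true label drops. The Python sketch below is a minimal illustration of the idea, not the authors' code from the repository; the Keras-style predict() interface, the flat (n, trace_len) trace layout, the window size, the fill value, and the function name occlusion_attribution are assumptions made for the example.

    import numpy as np

    def occlusion_attribution(model, traces, labels, window=50, fill=0.0):
        """Score each window of trace samples by how much masking it hurts the model.

        model  : trained Keras-style classifier exposing predict()
        traces : (n, trace_len) array of power measurements
        labels : (n,) array of true class labels (e.g. key-byte values)
        Returns an array of length trace_len with the mean score drop per sample.
        """
        n, trace_len = traces.shape
        base = model.predict(traces, verbose=0)              # (n, n_classes) scores
        base_true = base[np.arange(n), labels]               # score of the true class
        importance = np.zeros(trace_len)
        for start in range(0, trace_len, window):
            occluded = traces.copy()
            occluded[:, start:start + window] = fill         # blank out one window
            occ = model.predict(occluded, verbose=0)
            drop = base_true - occ[np.arange(n), labels]     # contribution of this window
            importance[start:start + window] = drop.mean()
        return importance

Windows whose occlusion causes a large score drop are the trace regions the network relies on, which is the kind of leakage localisation the abstract compares against inspecting the weights of the linear model.

The second technique is distillation on adversarial examples: draw fresh random labels, perturb the traces so that the deep model leans toward those labels, and train a linear student only on the (perturbed trace, random label) pairs. The sketch below shows one plausible realisation using a targeted FGSM-style gradient-sign step and scikit-learn's logistic regression; the step size eps, the single-step perturbation, and the helper names are illustrative assumptions rather than the procedure used in the paper, and the deep model is again assumed to accept flat traces and output softmax scores.

    import numpy as np
    import tensorflow as tf
    from sklearn.linear_model import LogisticRegression

    def adversarial_traces(deep_model, traces, target_labels, n_classes, eps=0.05):
        """Nudge traces toward freshly drawn target labels (targeted FGSM-style step)."""
        x = tf.convert_to_tensor(traces, dtype=tf.float32)
        y = tf.one_hot(target_labels, depth=n_classes)
        with tf.GradientTape() as tape:
            tape.watch(x)
            loss = tf.keras.losses.categorical_crossentropy(y, deep_model(x))
        grad = tape.gradient(loss, x)
        # Step against the loss gradient, i.e. toward the new target labels.
        return (x - eps * tf.sign(grad)).numpy()

    def distill_to_linear(deep_model, traces, n_classes, eps=0.05):
        """Train a linear student using only adversarially relabelled traces."""
        fresh = np.random.randint(0, n_classes, size=len(traces))   # fresh random "keys"
        adv = adversarial_traces(deep_model, traces, fresh, n_classes, eps)
        student = LogisticRegression(max_iter=1000)
        student.fit(adv, fresh)
        return student   # student.coef_ can then be read off like the linear baseline

The weight matrix of the resulting student is human-readable in the same way as the baseline linear classifier, so its generalisation to traces from an unseen device can be checked directly, which is the experiment the abstract describes.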
Pages: 567 - 592
Number of pages: 26