Peek into the Black-Box: Interpretable Neural Network using SAT Equations in Side-Channel Analysis

Cited: 0
Authors
Yap, Trevor [1 ]
Benamira, Adrien [1 ]
Bhasin, Shivam [1 ]
Peyrin, Thomas [1 ]
Affiliations
[1] School of Physical and Mathematical Sciences, Nanyang Technological University, Singapore
Keywords
Black boxes; Convolutional neural network; Deep learning; Interpretability; Neural networks; Profiling attack; SAT; Side-channel; Side-channel analysis; Truth tables
DOI
10.46586/tches.v2023.i2.24-53
Abstract
Deep neural networks (DNNs) have become a significant threat to the security of cryptographic implementations with regard to side-channel analysis (SCA), as they automatically combine leakages without any preprocessing, leading to more efficient attacks. However, these DNNs for SCA remain mostly black-box algorithms that are very difficult to interpret. Benamira et al. recently proposed an interpretable neural network called Truth Table Deep Convolutional Neural Network (TT-DCNN), which is both expressive and easier to interpret. In particular, a TT-DCNN has a transparent inner structure that can be entirely transformed into SAT equations after training. In this work, we analyze the SAT equations extracted from a TT-DCNN when applied in the SCA context, eventually obtaining the rules and decisions that the neural network learned when retrieving the secret key from the cryptographic primitive (i.e., an exact formula). As a result, we can pinpoint the critical rules that the neural network uses to locate the exact Points of Interest (PoIs). We first validate our approach on simulated traces for higher-order masking. However, applying TT-DCNN to real traces is not straightforward, so we propose a method to adapt TT-DCNN for real SCA traces containing thousands of sample points. Experimental validation is performed on the software-based ASCADv1 and hardware-based AES_HD_ext datasets. In addition, TT-DCNN is shown to be able to learn the exact countermeasure in a best-case setting. © 2023, Ruhr-University of Bochum. All rights reserved.
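The abstract's core mechanism, turning each small binarized filter of a trained TT-DCNN into a truth table and then into an exact Boolean (SAT-ready) formula, can be illustrated with a minimal sketch. The sketch below is not the authors' code: the 4-input sign-activation filter, its weights, and its threshold are hypothetical stand-ins, not values from the paper.

```python
# Minimal sketch (assumed parameters): exhaustively tabulate a small
# binarized filter and express it as an exact DNF formula, the kind of
# Boolean rule a SAT solver or logic minimizer can then simplify.
import itertools

# Hypothetical trained parameters of one 4-input sign-activation filter.
WEIGHTS = (1, -1, 1, -1)
THRESHOLD = 0

def binarized_filter(bits):
    """Map 0/1 inputs to -1/+1, take the weighted sum, and threshold it."""
    s = sum(w * (1 if b else -1) for w, b in zip(WEIGHTS, bits))
    return int(s > THRESHOLD)

def truth_table(n_inputs=4):
    """Enumerate all 2^n inputs; tractable because TT-DCNN keeps each
    filter's receptive field deliberately small."""
    return {bits: binarized_filter(bits)
            for bits in itertools.product((0, 1), repeat=n_inputs)}

def to_dnf(table):
    """Exact DNF: one conjunction (minterm) per input pattern mapped to 1."""
    terms = []
    for bits, out in sorted(table.items()):
        if out:
            lits = [f"x{i}" if b else f"~x{i}" for i, b in enumerate(bits)]
            terms.append("(" + " & ".join(lits) + ")")
    return " | ".join(terms) if terms else "False"

if __name__ == "__main__":
    # The printed formula is the filter's exact behavior as a Boolean rule.
    print(to_dnf(truth_table()))
```

Minimizing the resulting clauses (e.g., with Quine-McCluskey or a SAT solver) yields compact rules whose literals point back to specific trace samples, which mirrors how the paper reads off the Points of Interest from the extracted SAT equations.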
Pages: 24-53
Related Papers
50 records in total
  • [1] Automated Side-Channel Attacks using Black-Box Neural Architecture Search
    Gupta, Pritha
    Drees, Jan Peter
    Huellermeier, Eyke
    [J]. 18TH INTERNATIONAL CONFERENCE ON AVAILABILITY, RELIABILITY & SECURITY, ARES 2023, 2023,
  • [2] Targeted Black-Box Side-Channel Mitigation for IoT
    Kadron, Ismet Burak
    Shou, Chaofan
    O'Mahony, Emily
    Vural, Yilmaz
    Bultan, Tevfik
    [J]. PROCEEDINGS OF THE 12TH INTERNATIONAL CONFERENCE ON THE INTERNET OF THINGS 2022, IOT 2022, 2022, : 49 - 56
  • [3] Adversarial Black-Box Attacks with Timing Side-Channel Leakage
    Nakai, Tsunato
    Suzuki, Daisuke
    Omatsu, Fumio
    Fujino, Takeshi
    [J]. IEICE TRANSACTIONS ON FUNDAMENTALS OF ELECTRONICS COMMUNICATIONS AND COMPUTER SCIENCES, 2021, E104A (01) : 143 - 151
  • [4] Automated Black-Box Detection of Side-Channel Vulnerabilities in Web Applications
    Chapman, Peter
    Evans, David
    [J]. PROCEEDINGS OF THE 18TH ACM CONFERENCE ON COMPUTER & COMMUNICATIONS SECURITY (CCS 11), 2011, : 263 - 274
  • [5] When Side-Channel Attacks Break the Black-Box Property of Embedded Artificial Intelligence
    Coqueret, Benoit
    Carbone, Mathieu
    Sentieys, Olivier
    Zaid, Gabriel
    [J]. PROCEEDINGS OF THE 16TH ACM WORKSHOP ON ARTIFICIAL INTELLIGENCE AND SECURITY, AISEC 2023, 2023, : 127 - 138
  • [6] Side-channel analysis based on Siamese neural network
    Li, Di
    Li, Lang
    Ou, Yu
    [J]. JOURNAL OF SUPERCOMPUTING, 2024, 80 (04): 4423 - 4450
  • [7] Explaining Black-Box Models Using Interpretable Surrogates
    Kuttichira, Deepthi Praveenlal
    Gupta, Sunil
    Li, Cheng
    Rana, Santu
    Venkatesh, Svetha
    [J]. PRICAI 2019: TRENDS IN ARTIFICIAL INTELLIGENCE, PT I, 2019, 11670 : 3 - 15
  • [8] Evaluating and Designing against Side-Channel Leakage: White Box or Black Box?
    Standaert, Francois-Xavier
    [J]. PROCEEDINGS OF THE 2021 ACM WORKSHOP ON INFORMATION HIDING AND MULTIMEDIA SECURITY, IH&MMSEC 2021, 2021, : 1 - 1
  • [9] Exploration into the Explainability of Neural Network Models for Power Side-Channel Analysis
    Golder, Anupam
    Bhat, Ashwin
    Raychowdhury, Arijit
    [J]. PROCEEDINGS OF THE 32ND GREAT LAKES SYMPOSIUM ON VLSI 2022, GLSVLSI 2022, 2022, : 59 - 64