The Limits of SEMA on Distinguishing Similar Activation Functions of Embedded Deep Neural Networks

Cited: 3
Authors
Takatoi, Go [1 ]
Sugawara, Takeshi [1 ]
Sakiyama, Kazuo [1 ]
Hara-Azumi, Yuko [2 ]
Li, Yang [1 ]
Affiliations
[1] Univ Electrocommun, Dept Informat, 1-5-1 Chofugaoka, Chofu, Tokyo 1828585, Japan
[2] Tokyo Inst Technol, Dept Informat & Commun Engn, Meguro Ku, 2-12-1 Ookayama, Tokyo 1528550, Japan
Source
APPLIED SCIENCES-BASEL | 2022, Vol. 12, Issue 09
Keywords
machine learning; deep learning; side-channel; activation function; SEMA;
DOI
10.3390/app12094135
Chinese Library Classification
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
Artificial intelligence (AI) is progressing rapidly, and edge AI has been researched intensively as part of this trend. However, much less work has addressed the security of edge AI. Machine learning models embody substantial intellectual property, and an optimized network is very valuable. Trained machine learning models also need to remain black boxes because they may leak information about their training data. Since selecting appropriate activation functions to enable fast training of accurate deep neural networks is an active area of research, it is likewise important to conceal which activation functions a neural network architecture uses. Physical attacks such as the side-channel attack (SCA) have been studied in areas beyond cryptography, and SCAs are highly effective against edge AI because the device computes physically close to the user. We studied a previously proposed method that retrieves the activation functions of a black-box neural network implemented on an edge device using simple electromagnetic analysis (SEMA), and we improved the signal-processing procedure to cope with noisier measurements. The SEMA attack identifies activation functions by directly observing distinctive electromagnetic (EM) traces that correspond to the operations inside the activation function. The method requires few executions and inputs, and it depends little on how the activation functions are implemented. We distinguished eight similar activation functions from EM measurements and examined the versatility and limits of this attack. In this work, the machine learning architecture is a multilayer perceptron evaluated on an Arduino Uno.
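The paper's SEMA procedure is not reproduced in this record; as a minimal illustrative sketch of the general template-matching idea behind classifying activation functions from EM traces (all names, synthetic data, and the correlation-based classifier below are assumptions for illustration, not the authors' implementation):

    import numpy as np

    def normalize(trace):
        # Zero-mean, unit-variance scaling so captures with different
        # amplitudes become comparable.
        trace = np.asarray(trace, dtype=float)
        return (trace - trace.mean()) / trace.std()

    def classify_activation(unknown, templates):
        # Pick the activation whose reference trace has the highest
        # Pearson correlation with the unknown EM trace.
        # (A stand-in for the paper's signal-processing pipeline.)
        u = normalize(unknown)
        scores = {name: float(np.dot(u, normalize(ref)) / u.size)
                  for name, ref in templates.items()}
        return max(scores, key=scores.get), scores

    # Hypothetical usage with synthetic, equal-length "traces":
    rng = np.random.default_rng(0)
    templates = {
        "relu": np.concatenate([np.ones(50), np.zeros(50)]),
        "tanh": np.sin(np.linspace(0.0, np.pi, 100)),
    }
    measured = templates["relu"] + 0.3 * rng.standard_normal(100)
    label, scores = classify_activation(measured, templates)
    print(label, scores)

In the actual attack the traces are EM measurements captured from the device, and the distinguishing features are the operation patterns within the activation-function computation rather than synthetic waveforms.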
Pages: 20
Related Papers (50 total)
  • [1] Deep Neural Networks with Multistate Activation Functions
    Cai, Chenghao
    Xu, Yanyan
    Ke, Dengfeng
    Su, Kaile
    COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE, 2015, 2015
  • [2] Activation Functions and Their Characteristics in Deep Neural Networks
    Ding, Bin
    Qian, Huimin
    Zhou, Jun
    PROCEEDINGS OF THE 30TH CHINESE CONTROL AND DECISION CONFERENCE (2018 CCDC), 2018: 1836-1841
  • [3] A Formal Characterization of Activation Functions in Deep Neural Networks
    Amrouche, Massi
    Stipanovic, Dusan M.
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (02): 2153-2166
  • [4] Learning Activation Functions in Deep (Spline) Neural Networks
    Bohra, Pakshal
    Campos, Joaquim
    Gupta, Harshit
    Aziznejad, Shayan
    Unser, Michael
    IEEE OPEN JOURNAL OF SIGNAL PROCESSING, 2020, 1: 295-309
  • [5] Deep Kronecker neural networks: A general framework for neural networks with adaptive activation functions
    Jagtap, Ameya D.
    Shin, Yeonjong
    Kawaguchi, Kenji
    Karniadakis, George Em
    NEUROCOMPUTING, 2022, 468: 165-180
  • [6] Effective Activation Functions for Homomorphic Evaluation of Deep Neural Networks
    Obla, Srinath
    Gong, Xinghan
    Aloufi, Asma
    Hu, Peizhao
    Takabi, Daniel
    IEEE ACCESS, 2020, 8: 153098-153112
  • [7] Activation Functions of Deep Neural Networks for Polar Decoding Applications
    Seo, Jihoon
    Lee, Juyul
    Kim, Keunyoung
    2017 IEEE 28TH ANNUAL INTERNATIONAL SYMPOSIUM ON PERSONAL, INDOOR, AND MOBILE RADIO COMMUNICATIONS (PIMRC), 2017
  • [8] Deep limits of residual neural networks
    Thorpe, Matthew
    van Gennip, Yves
    RESEARCH IN THE MATHEMATICAL SCIENCES, 2023, 10 (01)
  • [9] Approximating smooth functions by deep neural networks with sigmoid activation function
    Langer, Sophie
    JOURNAL OF MULTIVARIATE ANALYSIS, 2021, 182
  • [10] Simple Electromagnetic Analysis Against Activation Functions of Deep Neural Networks
    Takatoi, Go
    Sugawara, Takeshi
    Sakiyama, Kazuo
    Li, Yang
    APPLIED CRYPTOGRAPHY AND NETWORK SECURITY WORKSHOPS, ACNS 2020, 2020, 12418: 181-197