The Limits of SEMA on Distinguishing Similar Activation Functions of Embedded Deep Neural Networks

Cited by: 3
Authors
Takatoi, Go [1]
Sugawara, Takeshi [1]
Sakiyama, Kazuo [1]
Hara-Azumi, Yuko [2]
Li, Yang [1]
Affiliations
[1] Univ Electrocommun, Dept Informat, 1-5-1 Chofugaoka, Chofu, Tokyo 1828585, Japan
[2] Tokyo Inst Technol, Dept Informat & Commun Engn, Meguro Ku, 2-12-1 Ookayama, Tokyo 1528550, Japan
Source
APPLIED SCIENCES-BASEL | 2022, Vol. 12, Issue 9
Keywords
machine learning; deep learning; side-channel; activation function; SEMA;
DOI
10.3390/app12094135
Chinese Library Classification
O6 [Chemistry];
Subject Classification Code
0703;
Abstract
Artificial intelligence (AI) is progressing rapidly, and edge AI in particular has been researched intensively. Much less work, however, has addressed the security of edge AI. Machine learning models embody substantial intellectual property, and an optimized network is very valuable. Trained models also need to remain black boxes because they may leak information about their training data. Since selecting appropriate activation functions to enable fast training of accurate deep neural networks is an active area of research, the activation functions used in a network architecture are likewise worth concealing. Physical attacks such as the side-channel attack (SCA) have been studied in areas beyond cryptography, and the SCA is highly effective against edge AI because the device computes physically close to the user. We studied a previously proposed method that retrieves the activation functions of a black-box neural network implemented on an edge device using simple electromagnetic analysis (SEMA), and we improved its signal processing procedure to handle noisier measurements. The SEMA attack identifies activation functions by directly observing the distinctive electromagnetic (EM) traces that correspond to the operations inside each activation function. The method requires few executions and inputs and depends little on how the activation functions are implemented. We distinguished eight similar activation functions from EM measurements and examined the versatility and limits of the attack. In this work, the machine learning architecture is a multilayer perceptron, evaluated on an Arduino Uno.
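The abstract outlines the core idea of a SEMA-style attack: profile a distinctive EM pattern per activation function, average repeated noisy measurements, and match an unknown trace against the profiled templates. Below is a minimal, hypothetical Python sketch of that template-matching step; the function names, the candidate set, and the synthetic templates are illustrative assumptions, not the authors' implementation or data.

```python
"""Hypothetical sketch of SEMA-style activation-function identification
via trace averaging and template matching (illustrative only)."""
import numpy as np

def average_traces(traces):
    """Average repeated EM measurements of the same operation to suppress noise."""
    return np.asarray(traces).mean(axis=0)

def ncc(a, b):
    """Normalized cross-correlation between two aligned, equal-length traces."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float(np.dot(a, b) / len(a))

def classify(trace, templates):
    """Return the candidate activation whose profiling template best matches."""
    return max(templates, key=lambda name: ncc(trace, templates[name]))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 500)
    # Assumed per-activation templates, as if built in a profiling phase.
    templates = {
        "relu":    np.where(t < 0.3, 1.0, 0.2),  # short, branch-like pattern
        "sigmoid": np.sin(6 * np.pi * t),         # longer exp/div-like pattern
        "tanh":    np.sin(8 * np.pi * t),
    }
    # A noisy batch of 50 measurements of an unknown activation (here: sigmoid).
    raw = templates["sigmoid"] + rng.normal(0.0, 0.8, size=(50, t.size))
    print(classify(average_traces(raw), templates))  # -> "sigmoid"
```

In practice, the templates would come from a profiling phase on the same target device (an Arduino Uno in this paper), and trace acquisition and alignment would precede the correlation step.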
Pages: 20
Related papers
50 items in total
  • [21] Embedded Solutions for Deep Neural Networks Implementation
    Erofei, Adrian-Aliosa
    Druta, Cristian-Filip
    Caleanu, Catalin Daniel
    2018 IEEE 12TH INTERNATIONAL SYMPOSIUM ON APPLIED COMPUTATIONAL INTELLIGENCE AND INFORMATICS (SACI), 2018, : 425 - 429
  • [22] Wavelets as activation functions in Neural Networks
    Herrera, Oscar
    Priego, Belem
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2022, 42 (05) : 4345 - 4355
  • [23] Activation Ensembles for Deep Neural Networks
    Klabjan, Diego
    Harmon, Mark
    2019 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2019, : 206 - 214
  • [24] Simple activation functions for neural and fuzzy neural networks
    Mendil, B
    Benmahammed, K
    ISCAS '99: PROCEEDINGS OF THE 1999 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS, VOL 5: SYSTEMS, POWER ELECTRONICS, AND NEURAL NETWORKS, 1999, : 347 - 350
  • [26] Related or Duplicate: Distinguishing Similar CQA Questions via Convolutional Neural Networks
    Zhang, Wei Emma
    Sheng, Quan Z.
    Tang, Zhejun
    Ruan, Wenjie
    ACM/SIGIR PROCEEDINGS 2018, 2018, : 1153 - 1156
  • [27] Learning continuous piecewise non-linear activation functions for deep neural networks
    Gao, Xinchen
    Li, Yawei
    Li, Wen
    Duan, Lixin
    Van Gool, Luc
    Benini, Luca
    Magno, Michele
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 1835 - 1840
  • [28] Genetic Deep Neural Networks Using Different Activation Functions for Financial Data Mining
    Zhang, Luna M.
    PROCEEDINGS 2015 IEEE INTERNATIONAL CONFERENCE ON BIG DATA, 2015, : 2849 - 2851
  • [29] An Efficient Hardware Implementation of Activation Functions Using Stochastic Computing for Deep Neural Networks
    Van-Tinh Nguyen
    Tieu-Khanh Luong
    Han Le Duc
    Van-Phuc Hoang
    2018 IEEE 12TH INTERNATIONAL SYMPOSIUM ON EMBEDDED MULTICORE/MANY-CORE SYSTEMS-ON-CHIP (MCSOC 2018), 2018, : 233 - 236
  • [30] Uniform Convergence of Deep Neural Networks With Lipschitz Continuous Activation Functions and Variable Widths
    Xu, Yuesheng
    Zhang, Haizhang
    IEEE TRANSACTIONS ON INFORMATION THEORY, 2024, 70 (10) : 7125 - 7142