Dash: Accelerating Distributed Private Convolutional Neural Network Inference with Arithmetic Garbled Circuits

Cited by: 0
Authors
Sander, Jonas [1 ]
Berndt, Sebastian [2 ]
Bruhns, Ida [1 ]
Eisenbarth, Thomas [1 ]
Affiliations
[1] University of Luebeck, Luebeck, Germany
[2] Technische Hochschule Luebeck, Luebeck, Germany
Keywords
Computer graphics equipment; Data privacy; Deep neural networks; Differential privacy; Digital storage; Network security
DOI
10.46586/tches.v2025.i1.420-449
Abstract
The adoption of machine learning solutions is rapidly increasing across all parts of society. As models grow larger, both training and inference are increasingly outsourced, e.g. to cloud service providers. This means that potentially sensitive data is processed on untrusted platforms, which carries inherent data security and privacy risks. In this work, we investigate how to protect distributed machine learning systems, focusing on deep convolutional neural networks. The most common and best-performing mixed MPC approaches are based on homomorphic encryption (HE), secret sharing, and garbled circuits. They commonly suffer from large performance overheads, substantial accuracy losses, and communication overheads that grow linearly with the depth of the neural network. To improve on these problems, we present Dash, a fast and distributed private convolutional neural network inference scheme secure against malicious attackers. Building on arithmetic garbling gadgets [BMR16] and fancy-garbling [BCM+19], Dash is based purely on arithmetic garbled circuits. We introduce LabelTensors, which allow us to leverage the massive parallelism of modern GPUs. Combined with state-of-the-art garbling optimizations, Dash outperforms previous garbling approaches by a factor of up to about 100. Furthermore, we add an efficient scaling operation over the residues of the Chinese remainder theorem representation to arithmetic garbled circuits, which allows us to garble larger networks and achieve much higher accuracy than previous approaches. Finally, Dash requires only a single communication round per inference step, regardless of the depth of the neural network, and a very small constant online communication volume. © 2025, Ruhr-University of Bochum. All rights reserved.
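The abstract refers to the Chinese remainder theorem (CRT) representation that arithmetic garbled circuits operate over. The following sketch illustrates only the plain (unencrypted) CRT encoding idea in Python: an integer is stored as its residues modulo pairwise-coprime primes, so additions and multiplications decompose into cheap component-wise operations. The prime set and helper names here are illustrative assumptions, not taken from the paper, and none of the garbling itself is shown.

```python
# Minimal sketch of the CRT representation underlying arithmetic garbled
# circuits: an integer x is held as residues modulo pairwise-coprime primes.
# Prime choice and function names are illustrative, not from the Dash paper.
from math import prod

PRIMES = [2, 3, 5, 7, 11]  # pairwise coprime; product (the modulus) is 2310

def crt_encode(x, primes=PRIMES):
    """Split x into its residues modulo each prime."""
    return [x % p for p in primes]

def crt_decode(residues, primes=PRIMES):
    """Recover x mod prod(primes) via standard CRT reconstruction."""
    M = prod(primes)
    x = 0
    for r, p in zip(residues, primes):
        Mp = M // p
        x += r * Mp * pow(Mp, -1, p)  # pow(., -1, p) = modular inverse mod p
    return x % M

# Component-wise arithmetic on residues matches arithmetic mod prod(primes):
a, b = 123, 45
ra, rb = crt_encode(a), crt_encode(b)
sum_res = [(x + y) % p for x, y, p in zip(ra, rb, PRIMES)]
assert crt_decode(sum_res) == (a + b) % prod(PRIMES)
```

Because each residue is small, per-residue operations map naturally onto parallel hardware, which is in the spirit of the GPU parallelism the abstract claims for LabelTensors.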
Pages: 420-449