Efficient Machine Learning on Encrypted Data using Hyperdimensional Computing

Cited by: 0
Authors
Nam, Yujin [1 ]
Zhou, Minxuan [1 ]
Gupta, Saransh [2 ]
De Micheli, Gabrielle [1 ]
Cammarota, Rosario [3 ]
Wilkerson, Chris [3 ]
Micciancio, Daniele [1 ]
Rosing, Tajana [1 ]
Affiliations
[1] Univ Calif San Diego, Dept Comp Sci & Engn, La Jolla, CA 92093 USA
[2] IBM Res, Santa Clara, CA USA
[3] Intel Labs, Santa Clara, CA USA
DOI
10.1109/ISLPED58423.2023.10244262
CLC classification: TP [Automation technology, computer technology]
Discipline code: 0812
Abstract
Fully Homomorphic Encryption (FHE) enables arbitrary computations on encrypted data without decryption, thus protecting data in cloud computing scenarios. However, FHE adoption has been slow due to the significant computation and memory overhead it introduces. This becomes particularly challenging for end-to-end processes, including training and inference, for conventional neural networks on FHE-encrypted data. Additionally, machine learning tasks require a high-throughput system due to data-level parallelism. However, existing FHE accelerators only utilize a single SoC, disregarding the importance of scalability. In this work, we address these challenges through two key innovations. First, at the algorithmic level, we combine Hyperdimensional Computing (HDC) with FHE. The machine learning formulation based on HDC, a brain-inspired model, provides lightweight operations that are inherently well suited to FHE computation. Consequently, FHE-HD has significantly lower complexity while maintaining accuracy comparable to the state of the art. Second, we propose an efficient and scalable FHE system for FHE-based machine learning. The proposed system adopts a novel interconnect network between multiple FHE accelerators, along with an automated scheduling and data allocation framework, to optimize throughput and hardware utilization. We evaluate the proposed FHE-HD system on the MNIST dataset and demonstrate that the expected training time is 4.7 times faster than state-of-the-art MLP training. Furthermore, our system framework exhibits up to 38.2 times speedup and 13.8 times energy efficiency improvement over baseline scalable FHE systems that use the conventional data-parallel processing flow.
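As a plain-Python illustration of why HDC's formulation suits FHE, the sketch below shows HDC classification in the clear. All details here (the dimensionality, the random-projection encoder, the toy data) are illustrative assumptions, not the paper's FHE-HD design; the point is that training and inference reduce to element-wise additions and dot products, operations that map naturally onto batched FHE ciphertext arithmetic, unlike the deep non-linear pipelines of conventional neural networks.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 1000           # hypervector dimensionality (illustrative)
N_FEATURES = 4
N_CLASSES = 3

# Random projection encodes a feature vector into a bipolar hypervector.
proj = rng.standard_normal((N_FEATURES, D))

def encode(x):
    """Map a feature vector to a {-1, +1} hypervector."""
    return np.sign(x @ proj)

def train(samples, labels):
    """Bundle (sum) encoded samples per class into class hypervectors."""
    classes = np.zeros((N_CLASSES, D))
    for x, y in zip(samples, labels):
        classes[y] += encode(x)
    return classes

def predict(classes, x):
    """Classify by cosine similarity against each class hypervector."""
    h = encode(x)
    sims = classes @ h / (np.linalg.norm(classes, axis=1) * np.linalg.norm(h))
    return int(np.argmax(sims))

# Toy data: three classes clustered around orthogonal feature directions.
centers = np.eye(N_CLASSES, N_FEATURES)
X = np.vstack([rng.normal(c, 0.1, size=(20, N_FEATURES)) for c in centers])
y = np.repeat(np.arange(N_CLASSES), 20)

model = train(X, y)
acc = np.mean([predict(model, x) == t for x, t in zip(X, y)])
```

Note that training is a single bundling pass (no iterative gradient descent), which is why an FHE version avoids the deep multiplicative depth that makes encrypted neural-network training so costly.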
Pages: 6