Efficient Machine Learning on Encrypted Data using Hyperdimensional Computing

Cited by: 0
Authors
Nam, Yujin [1 ]
Zhou, Minxuan [1 ]
Gupta, Saransh [2 ]
De Micheli, Gabrielle [1 ]
Cammarota, Rosario [3 ]
Wilkerson, Chris [3 ]
Micciancio, Daniele [1 ]
Rosing, Tajana [1 ]
Affiliations
[1] Univ Calif San Diego, Dept Comp Sci & Engn, La Jolla, CA 92093 USA
[2] IBM Res, Santa Clara, CA USA
[3] Intel Labs, Santa Clara, CA USA
DOI
10.1109/ISLPED58423.2023.10244262
CLC number
TP [Automation technology; Computer technology]
Subject classification
0812
Abstract
Fully Homomorphic Encryption (FHE) enables arbitrary computations on encrypted data without decryption, thus protecting data in cloud computing scenarios. However, FHE adoption has been slow due to the significant computation and memory overhead it introduces. This becomes particularly challenging for end-to-end processes, including training and inference, for conventional neural networks on FHE-encrypted data. Additionally, machine learning tasks require a high-throughput system due to data-level parallelism, yet existing FHE accelerators utilize only a single SoC, disregarding the importance of scalability. In this work, we address these challenges through two key innovations. First, at the algorithmic level, we combine Hyperdimensional Computing (HDC) with FHE. The machine learning formulation based on HDC, a brain-inspired model, provides lightweight operations that are inherently well suited to FHE computation. Consequently, the resulting FHE-HD scheme has significantly lower complexity while maintaining accuracy comparable to the state of the art. Second, we propose an efficient and scalable FHE system for FHE-based machine learning. The proposed system adopts a novel interconnect network between multiple FHE accelerators, along with an automated scheduling and data-allocation framework, to optimize throughput and hardware utilization. We evaluate the proposed FHE-HD system on the MNIST dataset and demonstrate that the expected training time is 4.7 times faster than state-of-the-art MLP training. Furthermore, our system framework exhibits up to 38.2 times speedup and 13.8 times energy-efficiency improvement over baseline scalable FHE systems that use the conventional data-parallel processing flow.
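The abstract attributes HDC's FHE-friendliness to its lightweight operations: encoding data as high-dimensional vectors, bundling them by elementwise addition, and classifying by similarity search. As a plaintext illustration only (no encryption; the toy dimensionality, feature encoding, and data below are our own assumptions, not taken from the paper), a minimal HDC classifier might look like:

```python
import random

D = 1000  # hypervector dimensionality (toy size; practical HDC often uses ~10,000)
random.seed(0)

def rand_hv():
    # random bipolar hypervector in {-1, +1}^D
    return [random.choice((-1, 1)) for _ in range(D)]

def bind(a, b):
    # elementwise multiplication: associates two hypervectors
    return [x * y for x, y in zip(a, b)]

def bundle(hvs):
    # elementwise addition: superimposes a set of hypervectors
    return [sum(col) for col in zip(*hvs)]

def similarity(a, b):
    # unnormalized dot-product similarity
    return sum(x * y for x, y in zip(a, b))

# toy encoding: each (feature position, binary value) pair gets a bound hypervector
feature_hvs = {i: rand_hv() for i in range(4)}
value_hvs = {v: rand_hv() for v in range(2)}

def encode(sample):
    # sample: list of 4 binary feature values
    return bundle([bind(feature_hvs[i], value_hvs[v]) for i, v in enumerate(sample)])

# "training" is just bundling the encoded samples of each class
train = {0: [[0, 0, 0, 0], [0, 0, 0, 1]], 1: [[1, 1, 1, 1], [1, 1, 0, 1]]}
class_hvs = {c: bundle([encode(s) for s in samples]) for c, samples in train.items()}

def classify(sample):
    q = encode(sample)
    return max(class_hvs, key=lambda c: similarity(class_hvs[c], q))
```

Note that training and inference here reduce to elementwise additions, multiplications, and dot products, which is why an HDC formulation maps far more cheaply onto FHE ciphertext arithmetic than deep neural networks with non-polynomial activations.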
Pages: 6
Related papers (50 in total)
  • [21] Trajectory clustering of road traffic in urban environments using incremental machine learning in combination with hyperdimensional computing
    Bandaragoda, Tharindu
    De Silva, Daswin
    Kleyko, Denis
    Osipov, Evgeny
    Wiklund, Urban
    Alahakoon, Damminda
    [J]. 2019 IEEE INTELLIGENT TRANSPORTATION SYSTEMS CONFERENCE (ITSC), 2019, : 1664 - 1670
  • [22] Machine Learning Classification over Encrypted Data
    Bost, Raphael
    Popa, Raluca Ada
    Tu, Stephen
    Goldwasser, Shafi
    [J]. 22ND ANNUAL NETWORK AND DISTRIBUTED SYSTEM SECURITY SYMPOSIUM (NDSS 2015), 2015,
  • [23] Symbolic Representation and Learning With Hyperdimensional Computing
    Mitrokhin, Anton
    Sutor, Peter
    Summers-Stay, Douglas
    Fermueller, Cornelia
    Aloimonos, Yiannis
    [J]. FRONTIERS IN ROBOTICS AND AI, 2020, 7
  • [24] A Binary Learning Framework for Hyperdimensional Computing
    Imani, Mohsen
    Messerly, John
    Wu, Fan
    Pi, Wang
    Rosing, Tajana
    [J]. 2019 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE), 2019, : 126 - 131
  • [25] SemiHD: Semi-Supervised Learning Using Hyperdimensional Computing
    Imani, Mohsen
    Bosch, Samuel
    Javaheripi, Mojan
    Rouhani, Bita
    Wu, Xinyu
    Koushanfar, Farinaz
    Rosing, Tajana
    [J]. 2019 IEEE/ACM INTERNATIONAL CONFERENCE ON COMPUTER-AIDED DESIGN (ICCAD), 2019,
  • [26] HDFL: Private and Robust Federated Learning using Hyperdimensional Computing
    Kasyap, Harsh
    Tripathy, Somanath
    Conti, Mauro
    [J]. 2023 IEEE 22ND INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS, TRUSTCOM, BIGDATASE, CSE, EUC, ISCI 2023, 2024, : 214 - 221
  • [27] Efficient Biosignal Processing Using Hyperdimensional Computing: Network Templates for Combined Learning and Classification of ExG Signals
    Rahimi, Abbas
    Kanerva, Pentti
    Benini, Luca
    Rabaey, Jan M.
    [J]. PROCEEDINGS OF THE IEEE, 2019, 107 (01) : 123 - 143
  • [28] Efficient Hyperdimensional Learning with Trainable, Quantizable, and Holistic Data Representation
    Kim, Jiseung
    Lee, Hyunsei
    Imani, Mohsen
    Kim, Yeseong
    [J]. 2023 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION, DATE, 2023,
  • [29] FedHD: Federated Learning with Hyperdimensional Computing
    Zhao, Quanling
    Lee, Kai
    Liu, Jeffrey
    Huzaifa, Muhammad
    Yu, Xiaofan
    Rosing, Tajana
    [J]. PROCEEDINGS OF THE 2022 THE 28TH ANNUAL INTERNATIONAL CONFERENCE ON MOBILE COMPUTING AND NETWORKING, ACM MOBICOM 2022, 2022, : 791 - 793
  • [30] PIONEER: Highly Efficient and Accurate Hyperdimensional Computing using Learned Projection
    Asgarinejad, Fatemeh
    Morris, Justin
    Rosing, Tajana
    Aksanli, Baris
    [J]. 29TH ASIA AND SOUTH PACIFIC DESIGN AUTOMATION CONFERENCE, ASP-DAC 2024, 2024, : 896 - 901