Testing and Enhancing Adversarial Robustness of Hyperdimensional Computing

Citations: 0
Authors
Ma, Dongning [1 ]
Rosing, Tajana Simunic [2 ]
Jiao, Xun [1 ]
Affiliations
[1] Villanova Univ, Dept Elect & Comp Engn, Villanova, PA 19085 USA
[2] Univ Calif San Diego, Dept Comp Sci & Engn, La Jolla, CA 92093 USA
Keywords
Robustness; Brain modeling; Testing; Fuzzing; Computational modeling; Perturbation methods; Data models; Adversarial attack; differential fuzz testing; hyperdimensional computing (HDC); robust computing
DOI
10.1109/TCAD.2023.3263120
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Brain-inspired hyperdimensional computing (HDC), also known as vector symbolic architecture (VSA), is an emerging "non-von Neumann" computing scheme that imitates human brain functions to process information or perform learning tasks using abstract, high-dimensional patterns. Compared with deep neural networks (DNNs), HDC offers advantages such as compact model size, energy efficiency, and few-shot learning. Despite these advantages, one under-investigated area of HDC is its adversarial robustness: existing works have shown that HDC is vulnerable to adversarial attacks in which attackers add minor perturbations to the original inputs to "fool" HDC models into producing wrong predictions. In this article, we systematically study the adversarial robustness of HDC by developing an approach to test and enhance its robustness against adversarial attacks, with two main components: 1) TestHD, a highly automated testing tool that generates high-quality adversarial data for a given HDC model and 2) GuardHD, which utilizes the adversarial data generated by TestHD to enhance the adversarial robustness of HDC models. The core idea of TestHD builds on fuzz testing: we customize the fuzzing approach by proposing a similarity-based coverage metric that guides TestHD to continuously mutate original inputs into new inputs that trigger incorrect behaviors of the HDC model. Thanks to the use of differential testing, TestHD does not require knowing the labels of the samples beforehand. For enhancing adversarial robustness, we design, implement, and evaluate GuardHD to defend HDC models against adversarial data. The core idea of GuardHD is an adversarial detector trained on TestHD-generated adversarial samples. During inference, once an adversarial sample is detected, GuardHD overrides the prediction result with an "invalid" signal. We evaluate the proposed methods on four datasets and five adversarial attack scenarios with six adversarial generation strategies and two defense mechanisms, and compare their performance. GuardHD differentiates between benign and adversarial inputs with over 90% accuracy, up to 55% higher than adversarial training-based baselines. To the best of our knowledge, this article presents the first comprehensive effort to systematically test and enhance the robustness of this emerging brain-inspired computational model against adversarial data.
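To make the workflow the abstract describes concrete, below is a minimal Python sketch of the three ideas: an HDC classifier that predicts via hypervector similarity, a coverage-guided fuzzing loop in the spirit of TestHD (similarity-pattern coverage plus differential testing against a reference model, so no ground-truth labels are needed), and a GuardHD-style inference wrapper that overrides detected adversarial inputs with an "invalid" signal. The encoder, the coverage signature, and the names fuzz_testhd, guardhd_predict, ref_model, and detector are illustrative assumptions, not the authors' implementation.

# Illustrative sketch (not the paper's code): toy random-projection HDC
# classifier, TestHD-style fuzzing loop, and GuardHD-style detector override.
import numpy as np

D = 10_000          # hypervector dimensionality (typical HDC scale)
rng = np.random.default_rng(0)

def encode(x, proj):
    """Random-projection encoding: map a feature vector to a bipolar hypervector."""
    return np.sign(proj @ x)

def similarity(h1, h2):
    """Cosine similarity between two hypervectors."""
    return float(h1 @ h2) / (np.linalg.norm(h1) * np.linalg.norm(h2))

def predict(h, class_hvs):
    """Return (predicted class, similarity to every class hypervector)."""
    sims = np.array([similarity(h, c) for c in class_hvs])
    return int(np.argmax(sims)), sims

def fuzz_testhd(x, proj, class_hvs, ref_model, n_iter=1000, eps=0.05):
    """Coverage-guided mutation: keep mutants that reach new similarity
    patterns, and report mutants where the HDC model and a reference model
    disagree (differential testing, so no ground-truth label is needed)."""
    adversarial, corpus, seen = [], [x.copy()], set()
    for _ in range(n_iter):
        seed = corpus[rng.integers(len(corpus))]
        mutant = seed + rng.uniform(-eps, eps, size=x.shape)   # minor perturbation
        label, sims = predict(encode(mutant, proj), class_hvs)
        bucket = tuple(np.round(sims, 1))   # coarse similarity-coverage signature
        if bucket not in seen:              # new coverage: keep exploring from it
            seen.add(bucket)
            corpus.append(mutant)
        if label != ref_model(mutant):      # behavioral disagreement: adversarial
            adversarial.append(mutant)
    return adversarial

def guardhd_predict(x, proj, class_hvs, detector):
    """GuardHD-style inference: consult the detector first; if the input looks
    adversarial, override the prediction with an 'invalid' signal."""
    if detector(x):
        return "invalid"
    label, _ = predict(encode(x, proj), class_hvs)
    return label

# Hypothetical wiring: proj = rng.standard_normal((D, n_features)); class_hvs
# built by bundling encoded training samples per class; detector trained on
# benign samples plus adversarial samples returned by fuzz_testhd.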
Pages: 4052-4064
Page count: 13
Related Papers
50 records in total
  • [1] Evaluating the Adversarial Robustness of Text Classifiers in Hyperdimensional Computing
    Moraliyage, Harsha
    Kahawala, Sachin
    De Silva, Daswin
    Alahakoon, Damminda
    [J]. 2022 15TH INTERNATIONAL CONFERENCE ON HUMAN SYSTEM INTERACTION (HSI), 2022
  • [2] Adversarial Attack on Hyperdimensional Computing-based NLP Applications
    Zhang, Sizhe
    Wang, Zhao
    Jiao, Xun
    [J]. 2023 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION, DATE, 2023
  • [3] On the Vulnerability of Hyperdimensional Computing-Based Classifiers to Adversarial Attacks
    Yang, Fangfang
    Ren, Shaolei
    [J]. NETWORK AND SYSTEM SECURITY, NSS 2020, 2020, 12570 : 371 - 387
  • [4] Assessing Robustness of Hyperdimensional Computing Against Errors in Associative Memory
    Zhang, Sizhe
    Wang, Ruixuan
    Zhang, Jeff Jun
    Rahimi, Abbas
    Jiao, Xun
    [J]. 2021 IEEE 32ND INTERNATIONAL CONFERENCE ON APPLICATION-SPECIFIC SYSTEMS, ARCHITECTURES AND PROCESSORS (ASAP 2021), 2021, : 211 - 217
  • [5] On the Adversarial Robustness of Hypothesis Testing
    Jin, Yulu
    Lai, Lifeng
    [J]. IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2021, 69 : 515 - 530
  • [6] Adversarial-HD: Hyperdimensional Computing Adversarial Attack Design for Secure Industrial Internet of Things
    Gungor, Onat
    Rosing, Tajana
    Aksanli, Baris
    [J]. 2023 CYBER-PHYSICAL SYSTEMS AND INTERNET-OF-THINGS WEEK, CPS-IOT WEEK WORKSHOPS, 2023, : 1 - 6
  • [7] Enhancing Adversarial Robustness for Deep Metric Learning
    Zhou, Mo
    Patel, Vishal M.
    [J]. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 15304 - 15313
  • [8] Enhancing adversarial robustness with randomized interlayer processing
    Mohammed, Ameer
    Ali, Ziad
    Ahmad, Imtiaz
    [J]. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 245
  • [9] Enhancing quantum adversarial robustness by randomized encodings
    Gong, Weiyuan
    Yuan, Dong
    Li, Weikang
    Deng, Dong-Ling
    [J]. PHYSICAL REVIEW RESEARCH, 2024, 6 (02)
  • [10] Enhancing Adversarial Robustness via Anomaly-aware Adversarial Training
    Tang, Keke
    Lou, Tianrui
    He, Xu
    Shi, Yawen
    Zhu, Peican
    Gu, Zhaoquan
    [J]. KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, PT I, KSEM 2023, 2023, 14117 : 328 - 342