Density Encoding Enables Resource-Efficient Randomly Connected Neural Networks

Cited by: 19
Authors
Kleyko, Denis [1,2]
Kheffache, Mansour [3 ]
Frady, E. Paxon [1 ]
Wiklund, Urban [4 ]
Osipov, Evgeny [5 ]
Affiliations
[1] Univ Calif Berkeley, Redwood Ctr Theoret Neurosci, Berkeley, CA 94720 USA
[2] Res Inst Sweden, Intelligent Syst Lab, S-16440 Kista, Sweden
[3] Netlight Consulting AB, S-11153 Stockholm, Sweden
[4] Umea Univ, Dept Radiat Sci, Biomed Engn, S-90187 Umea, Sweden
[5] Lulea Univ Technol, Dept Comp Sci Elect & Space Engn, S-97187 Lulea, Sweden
Funding
European Union Horizon 2020; Swedish Research Council
Keywords
Density-based encoding; hyperdimensional computing; random vector functional link (RVFL) networks; approximation
DOI
10.1109/TNNLS.2020.3015971
CLC classification (Chinese Library Classification)
TP18 [Artificial Intelligence Theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
The deployment of machine learning algorithms on resource-constrained edge devices is an important challenge from both theoretical and applied points of view. In this brief, we focus on resource-efficient randomly connected neural networks known as random vector functional link (RVFL) networks, since their simple design and extremely fast training time make them very attractive for solving many applied classification tasks. We propose to represent input features via the density-based encoding known from the area of stochastic computing and to use the binding and bundling operations from the area of hyperdimensional computing to obtain the activations of the hidden neurons. Using a collection of 121 real-world data sets from the UCI machine learning repository, we empirically show that the proposed approach achieves higher average accuracy than the conventional RVFL. We also demonstrate that the readout matrix can be represented using only integers in a limited range with minimal loss in accuracy. In this case, the proposed approach operates only on small n-bit integers, which results in a computationally efficient architecture. Finally, through field-programmable gate array (FPGA) implementations, we show that such an approach consumes approximately 11 times less energy than the conventional RVFL.
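The abstract names the building blocks but not their exact form; the following is a minimal sketch under common assumptions: thermometer-style density encoding, binding by XOR against per-feature random binary key vectors, bundling by majority vote, and a limited-range integer readout. The dimensionality n = 1024, the 4-bit weight range, and all function names are illustrative, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def density_encode(x, n):
    """Density (thermometer) encoding: a scalar in [0, 1] becomes an
    n-dimensional binary vector whose number of 1s is proportional to x.
    (Assumed form of the paper's density-based encoding.)"""
    v = np.zeros(n, dtype=np.int8)
    v[: int(round(x * n))] = 1
    return v

def hidden_activations(features, keys, n):
    """Bind each encoded feature to its random binary key vector (XOR),
    then bundle the bound vectors with a majority vote, as in
    hyperdimensional computing. Returns binary hidden activations."""
    bound = [np.bitwise_xor(density_encode(x, n), key)
             for x, key in zip(features, keys)]
    votes = np.sum(bound, axis=0)                 # bundling: elementwise sum
    return (votes > len(features) / 2).astype(np.int8)  # majority threshold

# Illustrative usage: one sample with 4 features scaled to [0, 1].
n = 1024                                          # hidden-layer dimensionality (assumed)
x = np.array([0.1, 0.7, 0.4, 0.9])
keys = rng.integers(0, 2, size=(x.size, n), dtype=np.int8)
h = hidden_activations(x, keys, n)

# Quantized readout (illustrative): a limited-range integer weight matrix,
# mirroring the paper's finding that small n-bit integers suffice.
W = rng.integers(-7, 8, size=(n, 3))              # e.g., 4-bit signed weights, 3 classes
scores = h @ W                                    # integer-only inference
print(scores, scores.argmax())
```

Training the readout (typically regularized least squares for RVFL networks, here followed by quantization) is omitted; the point of the sketch is that inference touches only binary vectors and small integers.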
Pages: 3777-3783
Number of pages: 7
Related Papers (showing 10 of 50)
  • [1] Resource-Efficient Neural Networks for Embedded Systems
    Roth, Wolfgang
    Schindler, Guenther
    Klein, Bernhard
    Peharz, Robert
    Tschiatschek, Sebastian
    Froening, Holger
    Pernkopf, Franz
    Ghahramani, Zoubin
    JOURNAL OF MACHINE LEARNING RESEARCH, 2024, 25 : 1 - 51
  • [2] Ternary Neural Networks for Resource-Efficient AI Applications
    Alemdar, Hande
    Leroy, Vincent
    Prost-Boucle, Adrien
    Petrot, Frederic
    2017 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2017, : 2547 - 2554
  • [3] A Resource-efficient Spiking Neural Network Accelerator Supporting Emerging Neural Encoding
    Gerlinghoff, Daniel
    Wang, Zhehui
    Gu, Xiaozhe
    Goh, Rick Siow Mong
    Luo, Tao
    PROCEEDINGS OF THE 2022 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2022), 2022, : 92 - 95
  • [4] On Resource-Efficient Bayesian Network Classifiers and Deep Neural Networks
    Roth, Wolfgang
    Pernkopf, Franz
    Schindler, Gunther
    Froening, Holger
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 10297 - 10304
  • [5] A Resource-Efficient Inference Accelerator for Binary Convolutional Neural Networks
    Kim, Tae-Hwan
    Shin, Jihoon
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS II-EXPRESS BRIEFS, 2021, 68 (01) : 451 - 455
  • [6] Squeezing Correlated Neurons for Resource-Efficient Deep Neural Networks
    Ozen, Elbruz
    Orailoglu, Alex
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2020, PT II, 2021, 12458 : 52 - 68
  • [7] Robust and Resource-Efficient Identification of Two Hidden Layer Neural Networks
    Fornasier, Massimo
    Klock, Timo
    Rauchensteiner, Michael
    CONSTRUCTIVE APPROXIMATION, 2022, 55 : 475 - 536
  • [8] Binarized Neural Networks for Resource-Efficient Hashing with Minimizing Quantization Loss
    Zheng, Feng
    Deng, Cheng
    Huang, Heng
    PROCEEDINGS OF THE TWENTY-EIGHTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2019, : 1032 - 1040
  • [9] TRTRL: A localized resource-efficient learning algorithm for recurrent neural networks
    Budik, Danny
    Elhanany, Itamar
    IEEE MWSCAS'06: PROCEEDINGS OF THE 2006 49TH MIDWEST SYMPOSIUM ON CIRCUITS AND SYSTEMS, 2006, : 371 - +
  • [10] Resource-Efficient Deep Neural Networks for Automotive Radar Interference Mitigation
    Rock, Johanna
    Roth, Wolfgang
    Toth, Mate
    Meissner, Paul
    Pernkopf, Franz
    IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2021, 15 (04) : 927 - 940