Can a Hebbian-like learning rule be avoiding the curse of dimensionality in sparse distributed data?

Cited: 0
Authors
Osorio, Maria [1 ,2 ]
Sa-Couto, Luis [1 ,2 ]
Wichert, Andreas [1 ,2 ]
Affiliations
[1] Univ Lisbon, Dept Comp Sci & Engn, INESC ID, Ave Prof Dr Anibal Cavaco Silva, P-2744016 Lisbon, Portugal
[2] Univ Lisbon, Inst Super Tecn, Ave Prof Dr Anibal Cavaco Silva, P-2744016 Lisbon, Portugal
Keywords
Hebbian learning; Restricted Boltzmann machines; Sparse distributed representations; Curse of dimensionality
DOI
10.1007/s00422-024-00995-y
Chinese Library Classification
TP3 [Computing technology; computer technology]
Discipline Classification Code
0812
Abstract
It is generally assumed that the brain uses something akin to sparse distributed representations. These representations, however, are high-dimensional, and this high dimensionality hurts the classification performance of traditional Machine Learning models: the "curse of dimensionality". In tasks with vast amounts of labeled data, Deep Networks seem to overcome this issue with many layers and the non-Hebbian backpropagation algorithm. The brain, however, appears to solve the same problem with few layers. In this work, we hypothesize that it does so by using Hebbian learning. In fact, the Hebbian-like learning rule of Restricted Boltzmann Machines learns the input patterns asymmetrically: it learns only the correlations between non-zero values and ignores the zeros, which make up the vast majority of the input dimensions. By ignoring the zeros, the "curse of dimensionality" can be avoided. To test our hypothesis, we generated several sparse datasets and compared the performance of a Restricted Boltzmann Machine classifier against networks trained with backpropagation. The experiments on these sparse codes confirm our initial intuition: the Restricted Boltzmann Machine generalizes well, while the Neural Networks trained with the backpropagation algorithm overfit the training data.
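To make the asymmetry of the rule concrete, the sketch below is a minimal NumPy illustration of one contrastive-divergence (CD-1) update for a binary Restricted Boltzmann Machine; it is our own sketch under standard RBM conventions, not the paper's code, and names such as `cd1_update`, `W`, and `v0` are hypothetical. The update is a difference of outer products, so every visible unit that is zero in the data pattern contributes nothing to the positive, data-driven Hebbian term.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cd1_update(W, b_vis, b_hid, v0, lr=0.1):
    """One CD-1 step on a single binary pattern v0 (a generic sketch)."""
    # Positive phase: Hebbian term <v h> measured on the data.
    p_h0 = sigmoid(v0 @ W + b_hid)
    h0 = (rng.random(p_h0.shape) < p_h0).astype(float)

    # Negative phase: anti-Hebbian term <v h> on the reconstruction.
    p_v1 = sigmoid(h0 @ W.T + b_vis)
    v1 = (rng.random(p_v1.shape) < p_v1).astype(float)
    p_h1 = sigmoid(v1 @ W + b_hid)

    # The update is a difference of outer products. Rows of W whose
    # visible unit is zero in v0 get no positive-phase update at all:
    # the zeros that dominate a sparse code are simply ignored there.
    W += lr * (np.outer(v0, p_h0) - np.outer(v1, p_h1))
    b_vis += lr * (v0 - v1)
    b_hid += lr * (p_h0 - p_h1)

n_vis, n_hid = 100, 16
W = 0.01 * rng.standard_normal((n_vis, n_hid))
b_vis = np.zeros(n_vis)
b_hid = np.zeros(n_hid)

# A sparse pattern: only 5 of 100 visible units are active.
v = np.zeros(n_vis)
v[rng.choice(n_vis, size=5, replace=False)] = 1.0
cd1_update(W, b_vis, b_hid, v)
```

Only the positive-phase term is strictly zero on inactive inputs; the reconstruction v1 may activate other units, but for sparse codes both phases concentrate the update on a small fraction of the weight rows.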
Pages: 267-276
Page count: 10