Can a Hebbian-like learning rule be avoiding the curse of dimensionality in sparse distributed data?

Cited: 0
Authors
Osorio, Maria [1 ,2 ]
Sa-Couto, Luis [1 ,2 ]
Wichert, Andreas [1 ,2 ]
Affiliations
[1] Univ Lisbon, Dept Comp Sci & Engn, INESC ID, Ave Prof Dr Anibal Cavaco Silva, P-2744016 Lisbon, Portugal
[2] Univ Lisbon, Inst Super Tecn, Ave Prof Dr Anibal Cavaco Silva, P-2744016 Lisbon, Portugal
Keywords
Hebbian learning; Restricted Boltzmann machines; Sparse distributed representations; Curse of dimensionality; ACCOUNT; MEMORY;
DOI
10.1007/s00422-024-00995-y
Chinese Library Classification: TP3 [Computing technology; computer technology]
Discipline code: 0812
Abstract
It is generally assumed that the brain uses something akin to sparse distributed representations. These representations, however, are high-dimensional, and they consequently degrade the classification performance of traditional machine learning models due to the "curse of dimensionality". In tasks with vast amounts of labeled data, deep networks appear to solve this issue with many layers and a non-Hebbian backpropagation algorithm. The brain, however, seems able to solve the problem with few layers. In this work, we hypothesize that it does so by using Hebbian learning. In fact, the Hebbian-like learning rule of Restricted Boltzmann Machines learns the input patterns asymmetrically: it exclusively learns the correlations between non-zero values and ignores the zeros, which make up the vast majority of the input dimensions. By ignoring the zeros, the "curse of dimensionality" problem can be avoided. To test our hypothesis, we generated several sparse datasets and compared the performance of a Restricted Boltzmann Machine classifier with that of several backprop-trained networks. The experiments confirm our initial intuition: the Restricted Boltzmann Machine shows good generalization performance, while the neural networks trained with backpropagation overfit the training data.
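The asymmetry described in the abstract can be sketched with a toy contrastive-divergence (CD-1) step on a standard binary RBM. This is a minimal illustrative sketch, not the paper's implementation; the layer sizes, seed, and helper names are assumptions. Because each weight update is a difference of outer products of visible and hidden activities, any visible dimension that is zero in both the data vector and the reconstruction contributes exactly nothing to the update:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy RBM: 20 visible units, 5 hidden units (sizes are illustrative)
n_visible, n_hidden = 20, 5
W = rng.normal(0.0, 0.01, size=(n_visible, n_hidden))

# A sparse binary input: only 3 of the 20 dimensions are active
v0 = np.zeros(n_visible)
v0[[2, 7, 11]] = 1.0

# One contrastive-divergence step (CD-1)
h0_prob = sigmoid(v0 @ W)                                  # positive phase
h0 = (rng.random(n_hidden) < h0_prob).astype(float)
v1 = (rng.random(n_visible) < sigmoid(h0 @ W.T)).astype(float)  # reconstruction
h1_prob = sigmoid(v1 @ W)                                  # negative phase

# Hebbian-like update: difference of outer products of activities
dW = np.outer(v0, h0_prob) - np.outer(v1, h1_prob)

# Weight rows whose visible unit is zero in BOTH the data and the
# reconstruction receive exactly zero update -- the rule "ignores the zeros"
inactive = (v0 == 0) & (v1 == 0)
print(np.allclose(dW[inactive], 0.0))  # True
```

Note that only the active dimensions drive learning, so the effective dimensionality the rule sees is the number of non-zero entries, not the full input size.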
Pages: 267-276 (10 pages)