Efficient neural codes naturally emerge through gradient descent learning

Cited by: 8
Authors
Benjamin, Ari S. [1]
Zhang, Ling-Qi [2]
Qiu, Cheng [2]
Stocker, Alan A. [2]
Kording, Konrad P. [1,3]
Affiliations
[1] Univ Penn, Dept Bioengn, Philadelphia, PA 19104 USA
[2] Univ Penn, Dept Psychol, Philadelphia, PA USA
[3] Univ Penn, Dept Neurosci, Philadelphia, PA USA
Keywords
ORIENTATION PERCEPTION; DISCRIMINATION; INFORMATION; ADAPTATION; STATISTICS; INFANTS; MODELS; ACUITY; SIGNAL; RULES;
DOI
10.1038/s41467-022-35659-7
Chinese Library Classification (CLC)
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Subject Classification Codes
07 ; 0710 ; 09 ;
Abstract
Human sensory systems are more sensitive to common features in the environment than to uncommon ones. For example, small deviations from the frequently encountered horizontal orientation are more easily detected than small deviations from the less frequent diagonal orientations. Here we find that artificial neural networks trained to recognize objects also show patterns of sensitivity that match the statistics of features in images. To interpret these findings, we show mathematically that learning with gradient descent in neural networks preferentially creates representations that are more sensitive to common features, a hallmark of efficient coding. This effect occurs in systems with otherwise unconstrained coding resources, and under both supervised and unsupervised learning objectives. This result demonstrates that efficient codes can naturally emerge from gradient-like learning.
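The abstract's central mechanism can be illustrated with a toy sketch (all names and numbers here are illustrative, not taken from the paper): a linear code trained by plain gradient descent on an unsupervised reconstruction objective, with stimuli drawn from two orthogonal feature directions where one direction occurs far more often. Stopping training before convergence, the code's sensitivity is larger for the common feature than for the rare one, even though nothing constrains coding resources.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two orthogonal stimulus features; the "common" one occurs 9x more often.
common = np.array([1.0, 0.0])
rare = np.array([0.0, 1.0])
n = 2000
is_common = rng.random(n) < 0.9
amplitudes = rng.standard_normal((n, 1))
X = np.where(is_common[:, None], common, rare) * amplitudes  # (n, 2) stimuli

# Unconstrained linear code y = W x, trained by gradient descent on
# reconstruction error (an unsupervised objective), from a small init.
W = 0.01 * rng.standard_normal((2, 2))
lr = 0.05
for _ in range(30):  # stop well before convergence: the bias arises during learning
    residual = X - X @ W.T
    grad = -residual.T @ X / n  # gradient of 0.5 * mean ||x - W x||^2
    W -= lr * grad

# Sensitivity of the code: change in the representation per unit stimulus change.
s_common = np.linalg.norm(W @ common)
s_rare = np.linalg.norm(W @ rare)
print(f"sensitivity to common feature: {s_common:.3f}")
print(f"sensitivity to rare feature:   {s_rare:.3f}")
```

At convergence W would approach the identity and treat both features equally, so the frequency-dependent sensitivity here is a property of the gradient-descent trajectory itself, which is the kind of learning-induced efficient coding the abstract describes.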
Pages: 12
Related Papers
50 records in total
  • [1] Efficient neural codes naturally emerge through gradient descent learning
    Ari S. Benjamin
    Ling-Qi Zhang
    Cheng Qiu
    Alan A. Stocker
    Konrad P. Kording
    [J]. Nature Communications, 13
  • [3] Efficient Dictionary Learning with Gradient Descent
    Gilboa, Dar
    Buchanan, Sam
    Wright, John
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [4] Efficient learning with robust gradient descent
    Holland, Matthew J.
    Ikeda, Kazushi
    [J]. MACHINE LEARNING, 2019, 108 (8-9) : 1523 - 1560
  • [5] Efficient gradient descent method of RBF neural networks with adaptive learning rate
    Lin Jiayu
    Liu Ying
    [J]. Journal of Electronics(China), 2002, (03) : 255 - 258
  • [6] Robust and Fast Learning of Sparse Codes With Stochastic Gradient Descent
    Labusch, Kai
    Barth, Erhardt
    Martinetz, Thomas
    [J]. IEEE JOURNAL OF SELECTED TOPICS IN SIGNAL PROCESSING, 2011, 5 (05) : 1048 - 1060
  • [7] Gradient descent learning for quaternionic Hopfield neural networks
    Kobayashi, Masaki
    [J]. NEUROCOMPUTING, 2017, 260 : 174 - 179
  • [8] Learning Graph Neural Networks with Approximate Gradient Descent
    Li, Qunwei
    Zou, Shaofeng
    Zhong, Wenliang
    [J]. THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 8438 - 8446
  • [9] Learning to learn by gradient descent by gradient descent
    Andrychowicz, Marcin
    Denil, Misha
    Colmenarejo, Sergio Gomez
    Hoffman, Matthew W.
    Pfau, David
    Schaul, Tom
    Shillingford, Brendan
    de Freitas, Nando
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 29 (NIPS 2016), 2016, 29
  • [10] A gradient descent learning algorithm for fuzzy neural networks
    Feuring, T
    Buckley, JJ
    Hayashi, Y
    [J]. 1998 IEEE INTERNATIONAL CONFERENCE ON FUZZY SYSTEMS AT THE IEEE WORLD CONGRESS ON COMPUTATIONAL INTELLIGENCE - PROCEEDINGS, VOL 1-2, 1998, : 1136 - 1141