Continuous transformation learning of translation invariant representations

Cited by: 0
Authors
G. Perry
E. T. Rolls
S. M. Stringer
Affiliations
[1] Oxford University, Centre for Computational Neuroscience, Department of Experimental Psychology
Source
Experimental Brain Research, 2010, 204(2): 255-270
Keywords
Object recognition; Continuous transformation; Trace learning; Inferior temporal cortex; Invariant representations
DOI
Not available
Abstract
We show that spatial continuity can enable a network to learn translation invariant representations of objects by self-organization in a hierarchical model of cortical processing in the ventral visual system. During ‘continuous transformation learning’, the active synapses from each overlapping transform are associatively modified onto the set of postsynaptic neurons. Because other transforms of the same object overlap with previously learned exemplars, a common set of postsynaptic neurons is activated by the new transforms, and learning of the new active inputs onto the same postsynaptic neurons is facilitated. We show that the transforms must be close for this to occur; that the temporal order of presentation of each transformed image during training is not crucial for learning to occur; that relatively large numbers of transforms can be learned; and that such continuous transformation learning can be usefully combined with temporal trace training.
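The mechanism the abstract describes lends itself to a compact simulation. Below is a minimal sketch of CT learning in a single competitive layer, written in Python/NumPy. It is not the authors' model: the paper uses a multi-layer, VisNet-style hierarchy, and the layer sizes, learning rate, 1-D bar stimulus, and winner-take-all competition here are illustrative assumptions. The sketch only shows the core idea that overlapping transforms keep activating the same postsynaptic neuron, which then associatively learns each transform's newly active inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 100, 20                 # illustrative input/output layer sizes
W = rng.random((n_out, n_in))
W /= np.linalg.norm(W, axis=1, keepdims=True)   # unit-length weight vectors

def stimulus(pos, width=20):
    """A 1-D 'object': a bar of active inputs starting at position `pos`."""
    x = np.zeros(n_in)
    x[pos:pos + width] = 1.0
    return x

def train_step(x, lr=0.1):
    """One winner-take-all Hebbian step: the most active output neuron
    associatively strengthens its synapses from the active inputs."""
    winner = int(np.argmax(W @ x))
    W[winner] += lr * x                       # Hebbian (associative) update
    W[winner] /= np.linalg.norm(W[winner])    # normalisation keeps weights bounded
    return winner

# Present the object at overlapping positions (2-pixel shifts, so each
# transform shares 18 of its 20 active inputs with the previous one).
# The overlap keeps activating the same winner, which then also learns
# the newly active inputs -- continuous transformation learning.
winners = [train_step(stimulus(p)) for p in range(0, 60, 2)]
print(winners)   # typically the same output neuron wins at every shift
```

Shrinking the shift step models the abstract's requirement that the transforms be close: with large shifts the overlap vanishes and a different winner can capture each transform. The temporal trace alternative mentioned in the abstract would instead drive the update with a decaying average of past postsynaptic activity, in a standard form such as ȳ(t) = (1 − η)·y(t) + η·ȳ(t − 1), binding transforms seen close together in time; the paper reports that the two mechanisms can be usefully combined.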
Pages: 255-270
Page count: 15
Related papers
50 records
  • [1] Continuous transformation learning of translation invariant representations
    Perry, G.; Rolls, E. T.; Stringer, S. M.
    Experimental Brain Research, 2010, 204(2): 255-270
  • [2] Learning Transformation Invariant Representations with Weak Supervision
    Coors, Benjamin; Condurache, Alexandru; Mertins, Alfred; Geiger, Andreas
    Proceedings of the 13th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2018), Vol. 5: VISAPP, 2018: 64-72
  • [3] Learning Continuous Phrase Representations for Translation Modeling
    Gao, Jianfeng; He, Xiaodong; Yih, Wen-tau; Deng, Li
    Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, Vol. 1, 2014: 699-709
  • [4] Learning Transformation-Invariant Representations for Image Recognition With Drop Transformation Networks
    Fan, Chunxiao; Li, Yang; Wang, Guijin; Li, Yong
    IEEE Access, 2018, 6: 73357-73369
  • [5] Transformation-Invariant Dictionary Learning for Classification with 1-Sparse Representations
    Yuzuguler, Ahmet Caner; Vural, Elif; Frossard, Pascal
    2014 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2014
  • [6] Unsupervised learning of invariant representations
    Anselmi, Fabio; Leibo, Joel Z.; Rosasco, Lorenzo; Mutch, Jim; Tacchetti, Andrea; Poggio, Tomaso
    Theoretical Computer Science, 2016, 633: 112-121
  • [7] Representations for Continuous Learning
    Isele, David
    Thirty-First AAAI Conference on Artificial Intelligence, 2017: 5040-5041
  • [8] On Learning Invariant Representations for Domain Adaptation
    Zhao, Han; des Combes, Remi Tachet; Zhang, Kun; Gordon, Geoffrey J.
    International Conference on Machine Learning, Vol. 97, 2019
  • [9] Learning Invariant Representations with Kernel Warping
    Ma, Yingyi; Ganapathiraman, Vignesh; Zhang, Xinhua
    22nd International Conference on Artificial Intelligence and Statistics, Vol. 89, 2019
  • [10] Invariant Representations Learning with Future Dynamics
    Hu, Wenning; He, Ming; Chen, Xirui; Wang, Nianbin
    Engineering Applications of Artificial Intelligence, 2024, 128