Extracting Low-Dimensional Psychological Representations from Convolutional Neural Networks

Cited by: 7
Authors:
Jha, Aditi [1 ,2 ,5 ]
Peterson, Joshua C. [3 ]
Griffiths, Thomas L. [3 ,4 ]
Affiliations:
[1] Princeton Univ, Dept Elect & Comp Engn, Princeton, NJ USA
[2] Princeton Univ, Princeton Neurosci Inst, Princeton, NJ USA
[3] Princeton Univ, Dept Comp Sci, Princeton, NJ USA
[4] Princeton Univ, Dept Psychol, Princeton, NJ USA
[5] Princeton Neurosci Inst, PNI 238A Washington Rd, Princeton, NJ 08540 USA
Funding:
U.S. National Science Foundation
Keywords:
Similarity judgments; Categorization; Psychological representations; Neural networks; Deep learning; Interpretability; Context theory; Similarity; Models
DOI:
10.1111/cogs.13226
Chinese Library Classification (CLC): B84 [Psychology]
Discipline code: 04; 0402
Abstract
Convolutional neural networks (CNNs) are increasingly widely used in psychology and neuroscience to predict how human minds and brains respond to visual images. Typically, CNNs represent these images using thousands of features that are learned through extensive training on image datasets. This raises a question: How many of these features are really needed to model human behavior? Here, we attempt to estimate the number of dimensions in CNN representations that are required to capture human psychological representations in two ways: (1) directly, using human similarity judgments and (2) indirectly, in the context of categorization. In both cases, we find that low-dimensional projections of CNN representations are sufficient to predict human behavior. We show that these low-dimensional representations can be easily interpreted, providing further insight into how people represent visual information. A series of control studies indicate that these findings are not due to the size of the dataset we used and may be due to a high level of redundancy in the features appearing in CNN representations.
Pages: 26
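The abstract describes predicting human similarity judgments from low-dimensional projections of CNN features. The following is a minimal, self-contained sketch of that general idea, not the authors' code: it uses a random feature matrix standing in for penultimate-layer CNN activations and a synthetic "human" similarity matrix, projects the features with PCA (one of several possible reduction methods; the paper's specific procedure may differ), and measures how well inner products in k dimensions track the target similarities.

# Minimal sketch (assumptions: synthetic data, PCA projection) of predicting
# pairwise similarity from low-dimensional projections of CNN-style features.
import numpy as np
from scipy.stats import spearmanr
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

n_images, n_features = 120, 2048                 # stand-in for CNN feature size
X = rng.normal(size=(n_images, n_features))      # stand-in for CNN activations

# Synthetic "human" similarity, driven by a hidden low-dimensional structure.
hidden = X @ rng.normal(size=(n_features, 8))
S_human = hidden @ hidden.T

iu = np.triu_indices(n_images, k=1)              # unique image pairs

for k in (2, 8, 32, 64):
    Z = PCA(n_components=k).fit_transform(X)     # k-dimensional projection
    S_model = Z @ Z.T                            # predicted pairwise similarity
    rho, _ = spearmanr(S_model[iu], S_human[iu])
    print(f"k={k:3d}  Spearman rho with human similarity: {rho:.3f}")

With real data, X would hold features extracted from a pretrained CNN and S_human would come from behavioral similarity judgments; the loop over k illustrates how one can ask how few dimensions suffice to predict the judgments.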
Related Papers
50 records in total; entries [31]-[40] shown below.
  • [31] Ngan, Kwun Ho; Garcez, Artur D'Avila; Townsend, Joseph. Extracting Meaningful High-Fidelity Knowledge from Convolutional Neural Networks. 2022 International Joint Conference on Neural Networks (IJCNN), 2022.
  • [32] Filus, Katarzyna; Domańska, Joanna. Extracting coarse-grained classifiers from large Convolutional Neural Networks. Engineering Applications of Artificial Intelligence, 2024, 138.
  • [33] Ramírez, JM. Extracting rules from artificial Neural Networks with kernel-based representations. Engineering Applications of Bio-Inspired Artificial Neural Networks, Vol II, 1999, 1607: 68-77.
  • [34] Ruiz, Ana Lucia Cruz; Pontonnier, Charles; Dumont, Georges. Low-Dimensional Motor Control Representations in Throwing Motions. Applied Bionics and Biomechanics, 2017, 2017.
  • [35] Su, Bing; Wu, Ying. Learning Low-Dimensional Temporal Representations with Latent Alignments. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2020, 42 (11): 2842-2857.
  • [36] Tuba, I. Low-dimensional unitary representations of B3. Proceedings of the American Mathematical Society, 2001, 129 (09): 2597-2606.
  • [37] Teo, Boon K.; Sun, X. H. Classification and representations of low-dimensional nanomaterials: Terms and symbols. Journal of Cluster Science, 2007, 18 (02): 346-357.
  • [38] Kovács, Péter; Böck, Carl; Tschoellitsch, Thomas; Huemer, Mario; Meier, Jens. Diagnostic quality assessment for low-dimensional ECG representations. Computers in Biology and Medicine, 2022, 150.
  • [39] Dokovic, DZ; Platonov, VP. Low-dimensional representations of Aut(F_2). Manuscripta Mathematica, 1996, 89 (04): 475-509.
  • [40] Teo, Boon K.; Sun, X. H. Classification and Representations of Low-Dimensional Nanomaterials: Terms and Symbols. Journal of Cluster Science, 2007, 18: 346-357.