Learning multisensory representations for auditory-visual transfer of sequence category knowledge: a probabilistic language of thought approach

Cited by: 0
Authors
Ilker Yildirim
Robert A. Jacobs
Institutions
[1] Massachusetts Institute of Technology,Brain and Cognitive Sciences
[2] The Rockefeller University,Laboratory of Neural Systems
[3] University of Rochester,Brain and Cognitive Sciences
Source
Psychonomic Bulletin & Review | 2015, Vol. 22
Keywords
Multisensory perception; Language of thought; Sequence learning; Computational modeling
DOI: not available
Abstract
If a person is trained to recognize or categorize objects or events using one sensory modality, the person can often recognize or categorize those same (or similar) objects and events via a novel modality. This phenomenon is an instance of cross-modal transfer of knowledge. Here, we study the Multisensory Hypothesis which states that people extract the intrinsic, modality-independent properties of objects and events, and represent these properties in multisensory representations. These representations underlie cross-modal transfer of knowledge. We conducted an experiment evaluating whether people transfer sequence category knowledge across auditory and visual domains. Our experimental data clearly indicate that we do. We also developed a computational model accounting for our experimental results. Consistent with the probabilistic language of thought approach to cognitive modeling, our model formalizes multisensory representations as symbolic “computer programs” and uses Bayesian inference to learn these representations. Because the model demonstrates how the acquisition and use of amodal, multisensory representations can underlie cross-modal transfer of knowledge, and because the model accounts for subjects’ experimental performances, our work lends credence to the Multisensory Hypothesis. Overall, our work suggests that people automatically extract and represent objects’ and events’ intrinsic properties, and use these properties to process and understand the same (and similar) objects and events when they are perceived through novel sensory modalities.
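The abstract's modeling approach, formalizing amodal representations as symbolic programs and learning them by Bayesian inference, can be illustrated with a minimal sketch. Everything below is an invented toy example, not the paper's actual model: the four-program hypothesis space, the simplicity prior, the noise rate, and the tone/flash encodings are all assumptions for demonstration.

```python
# Toy "probabilistic language of thought" for binary sequences.
# Hypotheses are tiny symbolic programs mapping position -> abstract symbol;
# because symbols are amodal, a program inferred from auditory input
# transfers directly to visual input (illustrative sketch only).

HYPOTHESES = {
    # name: (description length, program)
    "all_A":     (1, lambda i: "A"),
    "all_B":     (1, lambda i: "B"),
    "alternate": (2, lambda i: "AB"[i % 2]),     # ABABAB...
    "pairs":     (3, lambda i: "AABB"[i % 4]),   # AABBAABB...
}

EPSILON = 0.05  # assumed per-symbol noise probability

def posterior(sequence):
    """Bayesian inference: normalized P(program | sequence) over the toy grammar."""
    scores = {}
    for name, (length, prog) in HYPOTHESES.items():
        prior = 2.0 ** (-length)  # simpler programs favored a priori
        like = 1.0
        for i, sym in enumerate(sequence):
            like *= (1 - EPSILON) if prog(i) == sym else EPSILON
        scores[name] = prior * like
    z = sum(scores.values())
    return {k: v / z for k, v in scores.items()}

# Cross-modal transfer: hypothetical encodings mapping tokens from either
# modality onto the same abstract symbols.
AUDITORY = {"high_tone": "A", "low_tone": "B"}
VISUAL   = {"red_flash": "A", "blue_flash": "B"}

# Learn from an auditory sequence...
heard = [AUDITORY[t] for t in ["high_tone", "low_tone", "high_tone",
                               "low_tone", "high_tone", "low_tone"]]
post = posterior(heard)
best = max(post, key=post.get)

# ...then apply the inferred program to a visual sequence.
seen = [VISUAL[t] for t in ["red_flash", "blue_flash", "red_flash", "blue_flash"]]
matches = sum(HYPOTHESES[best][1](i) == s for i, s in enumerate(seen))
```

Here the alternating auditory sequence makes "alternate" the maximum-a-posteriori program, and the same program then matches the alternating visual sequence symbol for symbol, which is the sense in which an amodal representation supports cross-modal transfer.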
Pages: 673-686 (13 pages)
Related papers (9 items)
  • [1] Learning multisensory representations for auditory-visual transfer of sequence category knowledge: a probabilistic language of thought approach
    Yildirim, Ilker
    Jacobs, Robert A.
    PSYCHONOMIC BULLETIN & REVIEW, 2015, 22 (03) : 673 - 686
  • [2] Learning Deep Representations with Probabilistic Knowledge Transfer
    Passalis, Nikolaos
    Tefas, Anastasios
    COMPUTER VISION - ECCV 2018, PT XI, 2018, 11215 : 283 - 299
  • [3] MULTILAYER PROBABILISTIC KNOWLEDGE TRANSFER FOR LEARNING IMAGE REPRESENTATIONS
    Passalis, Nikolaos
    Tzelepi, Maria
    Tefas, Anastasios
    2020 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2020,
  • [4] Auditory-visual speech integration by adults with and without language-learning disabilities
    Norrix, LW
    Plante, E
    Vance, R
    JOURNAL OF COMMUNICATION DISORDERS, 2006, 39 (01) : 22 - 36
  • [5] Learning abstract visual concepts via probabilistic program induction in a Language of Thought
    Overlan, Matthew C.
    Jacobs, Robert A.
    Piantadosi, Steven T.
    COGNITION, 2017, 168 : 320 - 334
  • [6] Crossmodal Correspondence Mediates Crossmodal Transfer from Visual to Auditory Stimuli in Category Learning
    Sun, Ying
    Yao, Liansheng
    Fu, Qiufang
    JOURNAL OF INTELLIGENCE, 2024, 12 (09)
  • [7] From Sensory Signals to Modality-Independent Conceptual Representations: A Probabilistic Language of Thought Approach
    Erdogan, Goker
    Yildirim, Ilker
    Jacobs, Robert A.
    PLOS COMPUTATIONAL BIOLOGY, 2015, 11 (11)
  • [8] The declarative system in children with specific language impairment: a comparison of meaningful and meaningless auditory-visual paired associate learning
    Bishop, Dorothy V. M.
    Hsu, Hsinjen Julie
    BMC PSYCHOLOGY, 3 (1)
  • [9] LEARNING VISUAL CATEGORIES THROUGH A SPARSE REPRESENTATION CLASSIFIER BASED CROSS-CATEGORY KNOWLEDGE TRANSFER
    Lu, Ying
    Chen, Liming
    Saidi, Alexandre
    Zhang, Zhaoxiang
    Wang, Yunhong
    2014 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2014, : 165 - 169