The time course of cross-modal representations of conceptual categories

Cited by: 4
Authors
Dirani, Julien [1 ,4 ]
Pylkkänen, Liina [1 ,2 ,3 ]
Affiliations
[1] NYU, Dept Psychol, New York, NY 10003 USA
[2] NYU, Dept Linguist, New York, NY 10003 USA
[3] New York Univ Abu Dhabi, NYUAD Res Inst, Abu Dhabi 129188, U Arab Emirates
[4] NYU, 6 Washington Pl, New York, NY 10003 USA
Keywords
Language; MEG; Concepts; Categories; Modality independent; SEMANTIC IMPAIRMENT; OBJECT RECOGNITION; LEXICAL ACCESS; PICTURE; BRAIN; LONGER; DISSOCIATIONS; SIGNATURES; DEMENTIA; REGIONS;
DOI
10.1016/j.neuroimage.2023.120254
Chinese Library Classification
Q189 [Neuroscience]
Discipline Classification Code
071006
Abstract
To what extent does language production activate cross-modal conceptual representations? In picture naming, we view specific exemplars of concepts and then name them with a label, like "dog." In overt reading, the written word does not express a specific exemplar. Here we used a decoding approach with magnetoencephalography (MEG) to address whether picture naming and overt word reading involve shared representations of superordinate categories (e.g., animal). This addresses a fundamental question about the modality-generality of conceptual representations and their temporal evolution. Crucially, we do this using a language production task that does not require an explicit categorization judgment and that controls for word-form properties across semantic categories. We trained our models to classify the animal/tool distinction using MEG data of one modality at each time point and then tested the generalization of those models on the other modality. We obtained evidence for the automatic activation of cross-modal semantic category representations for both pictures and words later than their respective modality-specific representations. Cross-modal representations were activated at 150 ms and lasted until around 450 ms. The time course of lexical activation was also assessed, revealing that semantic category is represented before lexical access for pictures but after lexical access for words. Notably, this earlier activation of semantic category in pictures occurred simultaneously with visual representations. We thus show evidence for the spontaneous activation of cross-modal semantic categories in picture naming and word reading. These results serve to anchor a more comprehensive spatio-temporal delineation of the semantic feature space during production planning.
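The cross-modal decoding logic described in the abstract — fit a classifier on one modality at each time point, then test it on the other modality at the same time point — can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' pipeline: the array shapes, the simulated "signal window," and all variable names are assumptions for the sake of the example.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for MEG epochs: (trials, sensors, time points).
n_trials, n_sensors, n_times = 80, 32, 50
y = rng.integers(0, 2, n_trials)  # 0 = animal, 1 = tool (hypothetical labels)

def simulate_modality(y, start=15, stop=35):
    """Noise epochs with a category signal only inside a late time window."""
    X = rng.normal(size=(len(y), n_sensors, n_times))
    X[:, :8, start:stop] += y[:, None, None] * 1.5  # shared category signal
    return X

X_pictures = simulate_modality(y)  # training modality
X_words = simulate_modality(y)     # held-out testing modality

# Train per time point on pictures, test generalization on words.
scores = np.empty(n_times)
for t in range(n_times):
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    clf.fit(X_pictures[:, :, t], y)
    scores[t] = clf.score(X_words[:, :, t], y)

# Above-chance accuracy should be confined to the window where the
# modality-independent signal exists (time points 15-34 here).
```

Above-chance cross-modal accuracy at a time point is taken as evidence that both modalities encode the category in a shared format at that latency; in practice this is typically done with MNE-Python's sliding-estimator tools and permutation statistics rather than a bare loop.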
Pages: 13