When are implicit agents encoded? Evidence from cross-modal naming

Cited by: 5
Authors
Melinger, A
Mauner, G
Affiliations
[1] SUNY Buffalo, Dept Linguist, Buffalo, NY 14260 USA
[2] SUNY Buffalo, Ctr Cognit Sci, Buffalo, NY 14260 USA
[3] SUNY Buffalo, Dept Psychol, Buffalo, NY 14260 USA
Keywords
argument structure; cross-modal naming; sentence processing; implicit agents; verb representations; lexical semantics
DOI
10.1006/brln.1999.2097
CLC Classification
R36 [Pathology]; R76 [Otorhinolaryngology]
Subject Classification Codes
100104; 100213
Abstract
A cross-modal naming task was used to investigate when readers access and use semantic argument information to integrate a verb into the representation of a sentence. Previous research has shown that readers include implicit agents as part of their understanding of short passive sentences like The door was shut but not in intransitive sentences like The door shut. We demonstrate that implicit agents are accessed immediately upon recognizing a passive verb. Additionally, our results suggest that cross-modal naming is sensitive to some types of lexically encoded semantic information. (C) 1999 Academic Press.
Pages: 185-191
Page count: 7
Related Papers
50 records in total
  • [21] Cross-modal music integration in expert memory: Evidence from eye movements
    Drai-Zerbib, Veronique
    Baccino, Thierry
    JOURNAL OF EYE MOVEMENT RESEARCH, 2018, 11 (02): 1-21
  • [22] Delayed commitment in spoken word recognition: Evidence from cross-modal priming
    Luce, PA
    Cluff, MS
    PERCEPTION & PSYCHOPHYSICS, 1998, 60 (03): 484-490
  • [24] Implicit Attention-Based Cross-Modal Collaborative Learning for Action Recognition
    Zhang, Jianghao
    Zhong, Xian
    Liu, Wenxuan
    Jiang, Kui
    Yang, Zhengwei
    Wang, Zheng
    2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023: 3020-3024
  • [25] Lack of Cross-Modal Effects in Dual-Modality Implicit Statistical Learning
    Li, Xiujun
    Zhao, Xudong
    Shi, Wendian
    Lu, Yang
    Conway, Christopher M.
    FRONTIERS IN PSYCHOLOGY, 2018, 9
  • [26] Based on Spatial and Temporal Implicit Semantic Relational Inference for Cross-Modal Retrieval
    Jin M.
    Hu W.
    Zhu L.
    Wang X.
    Hong R.
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (11): 1-1
  • [27] Within-modal and cross-modal implicit and explicit memory: Influence of modalities and the stimulus type.
    Ballesteros, S
    Reales, JM
    Manga, D
    PSICOTHEMA, 1999, 11 (04): 831-851
  • [28] Learning Disentangled Factors from Paired Data in Cross-Modal Retrieval: An Implicit Identifiable VAE Approach
    Kim, Minyoung
    Guerrero, Ricardo
    Pavlovic, Vladimir
    PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021: 2862-2870
  • [29] Evidence for Cross-Modal Integration of Emotional Audio/Visual Stimuli
    Woloszyn, Michael Richard
    Lauriente, Teagan Lee
    CANADIAN JOURNAL OF EXPERIMENTAL PSYCHOLOGY-REVUE CANADIENNE DE PSYCHOLOGIE EXPERIMENTALE, 2016, 70 (04): 425-426
  • [30] Emotional expression in speech and music: Evidence of cross-modal similarities
    Juslin, PN
    Laukka, P
    EMOTIONS INSIDE OUT: 130 YEARS AFTER DARWIN'S THE EXPRESSION OF THE EMOTIONS IN MAN AND ANIMALS, 2003, 1000: 279-282