Prediction in a visual language: real-time sentence processing in American Sign Language across development

Cited by: 19
Authors
Lieberman, Amy M. [1 ]
Borovsky, Arielle [2 ]
Mayberry, Rachel I. [3 ]
Affiliations
[1] Boston Univ, Sch Educ, Boston, MA 02215 USA
[2] Purdue Univ, Speech Language & Hearing Sci, W Lafayette, IN 47907 USA
[3] Univ Calif San Diego, Dept Linguist, La Jolla, CA 92093 USA
Keywords
American sign language; deaf; semantic processing; prediction; eye-tracking; ANTICIPATORY EYE-MOVEMENTS; WORLD PARADIGM; DEAF-CHILDREN; COMPREHENSION; ASL; ORGANIZATION; RECOGNITION; INFORMATION; ICONICITY; ADULTS;
DOI
10.1080/23273798.2017.1411961
CLC classification
R36 [Pathology]; R76 [Otorhinolaryngology];
Subject classification codes
100104; 100213;
Abstract
Prediction during sign language comprehension may enable signers to integrate linguistic and non-linguistic information within the visual modality. In two eye-tracking experiments, we investigated American Sign Language (ASL) semantic prediction in deaf adults and children (aged 4-8 years). Participants viewed ASL sentences in a visual world paradigm in which the sentence-initial verb was either neutral or constrained relative to the sentence-final target noun. Adults and children made anticipatory looks to the target picture before the onset of the target noun in the constrained condition only, showing evidence for semantic prediction. Crucially, signers alternated gaze between the stimulus sign and the target picture only when the sentential object could be predicted from the verb. Signers therefore engage in prediction by optimising visual attention between divided linguistic and referential signals. These patterns suggest that prediction is a modality-independent process, and theoretical implications are discussed.
Pages: 387-401
Page count: 15