Prosodic Temporal Alignment of Co-speech Gestures to Speech Facilitates Referent Resolution

Cited by: 14
Authors
Jesse, Alexandra [1 ]
Johnson, Elizabeth K. [2 ]
Affiliations
[1] Univ Massachusetts, Dept Psychol, Amherst, MA 01003 USA
[2] Univ Toronto, Dept Psychol, Toronto, ON M5S 1A1, Canada
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
audiovisual perception; referent resolution; prosody; synchrony; speech; VISUAL-PERCEPTION; INTERSENSORY REDUNDANCY; INFANTS; LEARN; ATTENTION; MOVEMENT; INTONATION; MOTIONESE; EMPHASIS; BEHAVIOR;
DOI
10.1037/a0027921
Chinese Library Classification (CLC)
B84 [Psychology];
Discipline Codes
04; 0402;
Abstract
Using a referent detection paradigm, we examined whether listeners can determine the object speakers are referring to by using the temporal alignment between the motion speakers impose on objects and their labeling utterances. Stimuli were created by videotaping speakers labeling a novel creature. Without being explicitly instructed to do so, speakers moved the creature during labeling. Trajectories of these motions were used to animate photographs of the creature. Participants in subsequent perception studies heard these labeling utterances while seeing side-by-side animations of two identical creatures in which only the target creature moved as originally intended by the speaker. Using the cross-modal temporal relationship between speech and referent motion, participants identified which creature the speaker was labeling, even when the labeling utterances were low-pass filtered to remove their semantic content or replaced by tone analogues. However, when the prosodic structure was eliminated by reversing the speech signal, participants no longer detected the referent as readily. These results provide strong support for a prosodic cross-modal alignment hypothesis. Speakers produce a perceptible link between the motion they impose upon a referent and the prosodic structure of their speech, and listeners readily use this prosodic cross-modal relationship to resolve referential ambiguity in word-learning situations.
Pages: 1567-1581
Page count: 15
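The abstract turns on the cross-modal temporal alignment between the speech signal and the motion imposed on the referent. As a minimal illustrative sketch only (not the authors' procedure; function names, cutoff frequency, and sampling parameters are assumptions), the Python code below estimates such alignment by cross-correlating a smoothed speech amplitude envelope with an object-motion speed profile.

# Illustrative sketch (assumed parameters, not the authors' procedure):
# estimate speech-motion temporal alignment by cross-correlating a smoothed
# speech amplitude envelope with an object-motion speed profile.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert, correlate

def amplitude_envelope(audio, sr, cutoff_hz=10.0):
    """Low-pass-filtered amplitude envelope of a mono speech signal."""
    env = np.abs(hilbert(audio))                        # instantaneous amplitude
    b, a = butter(4, cutoff_hz / (sr / 2), btype="low")
    return filtfilt(b, a, env)                          # keep slow, prosody-rate modulation

def motion_speed(trajectory_xy, fps):
    """Frame-to-frame speed of an N x 2 motion trajectory."""
    return np.linalg.norm(np.diff(trajectory_xy, axis=0), axis=1) * fps

def alignment_peak(envelope, sr_env, speed, fps):
    """Lag (s) and strength of the peak envelope-speed cross-correlation."""
    # Resample the audio envelope onto the video time base so both series match.
    t_video = np.arange(len(speed)) / fps
    t_audio = np.arange(len(envelope)) / sr_env
    env_v = np.interp(t_video, t_audio, envelope)
    a = (env_v - env_v.mean()) / env_v.std()
    b = (speed - speed.mean()) / speed.std()
    xcorr = correlate(a, b, mode="full") / len(a)
    lags = np.arange(-len(b) + 1, len(a)) / fps
    return lags[np.argmax(xcorr)], float(xcorr.max())

In this sketch, a correlation peak near zero lag for forward speech, but a weaker or displaced peak for time-reversed speech, would mirror the prosody-dependence reported in the abstract; all signals and thresholds here are hypothetical.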