The temporal dynamics of conscious and unconscious audio-visual semantic integration

Cited by: 0
Authors
Gao, Mingjie [1 ]
Zhu, Weina [1 ]
Drewes, Jan [2 ]
Institutions
[1] Yunnan Univ, Sch Informat Sci, Kunming 650091, Peoples R China
[2] Sichuan Normal Univ, Inst Brain & Psychol Sci, Chengdu, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
NATURALISTIC SOUNDS; OCULAR DOMINANCE; SPOKEN WORDS; TIME-COURSE; SPEECH; CORRESPONDENCES; IDENTIFICATION; PERCEPTION; COMPONENTS; SOFTWARE;
DOI
10.1016/j.heliyon.2024.e33828
Chinese Library Classification
O [Mathematical Sciences and Chemistry]; P [Astronomy and Earth Sciences]; Q [Biological Sciences]; N [General Natural Sciences];
Discipline codes
07; 0710; 09;
Abstract
We compared the time course of cross-modal semantic effects induced by naturalistic sounds and spoken words on the processing of visual stimuli, whether visible or suppressed from awareness through continuous flash suppression. Under visible conditions, spoken words elicited audio-visual semantic effects over a longer range of SOAs (-1000, -500, -250 ms) than naturalistic sounds (-500, -250 ms). Performance was generally better with auditory primes, and more so with congruent stimuli. Spoken words presented well in advance (-1000, -500 ms) outperformed naturalistic sounds; the opposite was true for (near-)simultaneous presentations. Congruent spoken words yielded better categorization performance than congruent naturalistic sounds. The audio-visual semantic congruency effect still occurred with suppressed visual stimuli, although without significant differences in temporal pattern between the two auditory types. These findings indicate that: 1. Semantically congruent auditory input can enhance visual processing performance even when the visual stimulus is inaccessible to conscious awareness. 2. The temporal dynamics are contingent on the auditory type only when the visual stimulus is visible. 3. Audio-visual semantic integration requires sufficient time for processing the auditory information.
Pages: 14
Related papers (50 in total)
  • [1] Wada, Y; Kitagawa, N; Noguchi, K. Audio-visual integration in temporal perception. International Journal of Psychophysiology, 2003, 50(1-2): 117-124.
  • [2] Adams, Wendy J. The Development of Audio-Visual Integration for Temporal Judgements. PLOS Computational Biology, 2016, 12(04).
  • [3] Suerig, Ralf; Bottari, Davide; Roeder, Brigitte. Transfer of Audio-Visual Temporal Training to Temporal and Spatial Audio-Visual Tasks. Multisensory Research, 2018, 31(06): 556-578.
  • [4] Steinweg, Benjamin; Mast, Fred W. Semantic incongruity influences response caution in audio-visual integration. Experimental Brain Research, 2017, 235(01): 349-363.
  • [5] Chen, Changan; Al-Halah, Ziad; Grauman, Kristen. Semantic Audio-Visual Navigation. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021), 2021: 15511-15520.
  • [6] Fingelkurts, AA; Fingelkurts, AA; Krause, CM; Sams, M. Oscillatory brain dynamics in audio-visual speech integration. International Journal of Psychophysiology, 2002, 45(1-2): 97-97.
  • [7] Wilbiks, Jonathan Michael Paul; Dyson, Ben J. The Dynamics of Audio-Visual Integration Capacity as Interference, and SOA. Canadian Journal of Experimental Psychology, 2015, 69(04): 341-341.
  • [8] Ronconi, Luca; Vitale, Andrea; Federici, Alessandra; Mazzoni, Noemi; Battaglini, Luca; Molteni, Massimo; Casartelli, Luca. Neural dynamics driving audio-visual integration in autism. Cerebral Cortex, 2023, 33(03): 543-556.
  • [9] Goh, KS; Miyahara, K; Radhakrishan, R; Xiong, ZY; Divakaran, A. Audio-visual event detection based on mining of semantic audio-visual labels. Storage and Retrieval Methods and Applications for Multimedia 2004, 2004, 5307: 292-299.