Enhancing learning outcomes through multisensory integration: An fMRI study of audio-visual training in virtual reality

Cited: 4
Authors
Alwashmi, Kholoud [1 ,4 ,6 ]
Meyer, Georg [2 ]
Rowe, Fiona [3 ]
Ward, Ryan [2 ,5 ]
Affiliations
[1] Univ Liverpool, Fac Hlth & Life Sci, Liverpool, England
[2] Univ Liverpool, Digital Innovat Facil, Liverpool, England
[3] Univ Liverpool, Inst Populat Hlth, Liverpool, England
[4] Princess Nourah bint Abdulrahman Univ, Dept Radiol, Riyadh, Saudi Arabia
[5] Liverpool John Moores Univ, Sch Comp Sci & Math, Liverpool, England
[6] Univ Liverpool, Eleanor Rathbone Bldg,Bedford St South, Liverpool L69 7ZA, England
Keywords
fMRI; Multisensory; Audio-visual; Learning; Virtual-reality; Eye-movement; INFERIOR PARIETAL CORTEX; VISUAL-MOTION SIGNALS; SPATIAL ATTENTION; NEURAL RESPONSES; SPEECH SOUNDS; TIME-COURSE; PERFORMANCE; ACTIVATION; PLASTICITY; STIMULI
DOI
10.1016/j.neuroimage.2023.120483
Chinese Library Classification
Q189 [Neuroscience]
Subject Classification Code
071006
Abstract
The integration of information from different sensory modalities is a fundamental process that enhances perception and performance in real and virtual-reality (VR) environments. Understanding these mechanisms, especially during learning tasks that exploit novel multisensory cue combinations, provides opportunities for the development of new rehabilitative interventions.

This study aimed to investigate how functional brain changes support behavioural performance improvements during an audio-visual (AV) learning task. Twenty healthy participants underwent 30 minutes of daily VR training for four weeks. The task was an AV adaptation of a 'scanning training' paradigm that is commonly used in hemianopia rehabilitation. Functional magnetic resonance imaging (fMRI) and performance data were collected at baseline, after two and four weeks of training, and four weeks post-training. We show that behavioural performance, operationalised as mean reaction time (RT) reduction in VR, improved significantly. In separate tests in a controlled laboratory environment, we showed that the performance gains from the VR training environment transferred to a significant mean RT reduction for the trained AV voluntary task on a computer screen. Enhancements were observed in both the visual-only and AV conditions, with the latter showing faster response times supported by the presence of audio cues. The behavioural learning effect also transferred to two additional tasks that were tested: a visual search task and an involuntary visual task. Our fMRI results reveal an increase in functional activation (BOLD signal) in multisensory brain regions involved in early-stage AV processing: the thalamus, the caudal inferior parietal lobe and the cerebellum. These functional changes were observed only for the trained multisensory task and not for unimodal visual stimulation. Functional activation changes in the thalamus were significantly correlated with behavioural performance improvements.

This study demonstrates that adding spatial auditory cues to voluntary visual training in VR leads to augmented brain-activation changes in multisensory integration, resulting in measurable performance gains across tasks. The findings highlight the potential of VR-based multisensory training as an effective method for enhancing cognitive function and as a potentially valuable tool in rehabilitative programmes.
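As a reading aid, not part of the published record: the abstract's two quantitative claims have a simple analysis shape, a paired comparison of mean RTs across sessions and a per-participant correlation between BOLD change and RT reduction. The minimal Python sketch below illustrates that shape on synthetic data; the participant count matches the study, but all variable names, effect sizes, and numbers are hypothetical, not the authors' pipeline.

# Minimal sketch (hypothetical data): mean RT reduction over training,
# and correlation of per-participant BOLD change with that reduction.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_participants = 20  # as in the study

# Hypothetical mean RTs (seconds) per participant, baseline vs. week 4.
rt_baseline = rng.normal(1.20, 0.15, n_participants)
rt_week4 = rt_baseline - rng.normal(0.25, 0.05, n_participants)

# Behavioural improvement: mean RT reduction (positive = faster responses).
rt_reduction = rt_baseline - rt_week4

# Paired test for a significant RT reduction over training.
t_stat, p_val = stats.ttest_rel(rt_baseline, rt_week4)
print(f"mean RT reduction = {rt_reduction.mean():.3f} s, t = {t_stat:.2f}, p = {p_val:.2g}")

# Hypothetical per-participant thalamic BOLD change (post minus pre).
bold_change = 0.5 * rt_reduction + rng.normal(0, 0.05, n_participants)

# Correlation of activation change with behavioural gain, analogous in
# form to the thalamus result reported in the abstract.
r, p_corr = stats.pearsonr(bold_change, rt_reduction)
print(f"Pearson r = {r:.2f}, p = {p_corr:.2g}")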
Pages: 18
Related Papers
50 records in total
  • [21] Multisensory Integration of Audio-Visual Motion Cues during Active Self-Movement
    Gallagher, Maria
    Culling, John F.
    Freeman, Tom C. A.
    PERCEPTION, 2021, 50 (1_SUPPL) : 158 - 158
  • [22] TOWARDS GENERATING AMBISONICS USING AUDIO-VISUAL CUE FOR VIRTUAL REALITY
    Rana, Aakanksha
    Ozcinar, Cagri
    Smolic, Aljosa
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 2012 - 2016
  • [23] Multisensory integration of audio-visual motion cues during active self-movement
    Gallagher, Maria
    Culling, John F.
    Freeman, Tom C. A.
    PERCEPTION, 2022, 51 (05) : 358 - 359
  • [24] Enhancing Learners' Communicative Skills through Audio-Visual Means
    Labinska, Bohdana
    Matiichuk, Kvitoslava
    Morarash, Halyna
    REVISTA ROMANEASCA PENTRU EDUCATIE MULTIDIMENSIONALA, 2020, 12 (02): : 220 - 236
  • [25] A Biologically Plausible Audio-Visual Integration Model for Continual Learning
    Chen, Wenjie
    Du, Fengtong
    Wang, Ye
    Cao, Lihong
    2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
  • [26] Music training is associated with better audio-visual integration in Chinese language
    Ju, Ping
    Zhou, Zihang
    Xie, Yuhan
    Hui, Jiaying
    Yang, Xiaohong
    INTERNATIONAL JOURNAL OF PSYCHOPHYSIOLOGY, 2024, 203
  • [27] Immersive virtual reality and environmental noise assessment: An innovative audio-visual approach
    Ruotolo, Francesco
    Maffei, Luigi
    Di Gabriele, Maria
    Iachini, Tina
    Masullo, Massimiliano
    Ruggiero, Gennaro
    Senese, Vincenzo Paolo
    ENVIRONMENTAL IMPACT ASSESSMENT REVIEW, 2013, 41 : 10 - 20
  • [28] Enhancing Engineering and Architectural Design Through Virtual Reality and Machine Learning Integration
    Shehadeh, Ali
    Alshboul, Odey
    BUILDINGS, 2025, 15 (03)
  • [29] Audio-visual speech perception in adult readers with dyslexia: an fMRI study
    Ruesseler, Jascha
    Ye, Zheng
    Gerth, Ivonne
    Szycik, Gregor R.
    Muente, Thomas F.
    BRAIN IMAGING AND BEHAVIOR, 2018, 12 (02) : 357 - 368
  • [30] Neural substrates of the audio-visual temporal simultaneity perception: An fMRI study
    Murase, Mika
    Tanabe, Hiroki C.
    Hayashi, Masamichi J.
    Kochiyama, Takanori
    Sadato, Norihiro
    NEUROSCIENCE RESEARCH, 2009, 65 : S238 - S238