Enhancing learning outcomes through multisensory integration: A fMRI study of audio-visual training in virtual reality

Times cited: 4
Authors
Alwashmi, Kholoud [1 ,4 ,6 ]
Meyer, Georg [2 ]
Rowe, Fiona [3 ]
Ward, Ryan [2 ,5 ]
Affiliations
[1] Univ Liverpool, Fac Hlth & Life Sci, Liverpool, England
[2] Univ Liverpool, Digital Innovat Facil, Liverpool, England
[3] Univ Liverpool, Inst Populat Hlth, Liverpool, England
[4] Princess Nourah bint Abdulrahman Univ, Dept Radiol, Riyadh, Saudi Arabia
[5] Liverpool John Moores Univ, Sch Comp Sci & Math, Liverpool, England
[6] Univ Liverpool, Eleanor Rathbone Bldg,Bedford St South, Liverpool L69 7ZA, England
Keywords
fMRI; Multisensory; Audio-visual; Learning; Virtual-reality; Eye-movement; INFERIOR PARIETAL CORTEX; VISUAL-MOTION SIGNALS; SPATIAL ATTENTION; NEURAL RESPONSES; SPEECH SOUNDS; TIME-COURSE; PERFORMANCE; ACTIVATION; PLASTICITY; STIMULI
DOI
10.1016/j.neuroimage.2023.120483
CLC number
Q189 [Neuroscience]
Subject classification code
071006
Abstract
The integration of information from different sensory modalities is a fundamental process that enhances perception and performance in real and virtual-reality (VR) environments. Understanding these mechanisms, especially during learning tasks that exploit novel multisensory cue combinations, provides opportunities for the development of new rehabilitative interventions.

This study aimed to investigate how functional brain changes support behavioural performance improvements during an audio-visual (AV) learning task. Twenty healthy participants underwent 30 min of daily VR training for four weeks. The task was an AV adaptation of a 'scanning training' paradigm commonly used in hemianopia rehabilitation. Functional magnetic resonance imaging (fMRI) and performance data were collected at baseline, after two and four weeks of training, and four weeks post-training. We show that behavioural performance, operationalised as mean reaction time (RT) reduction in VR, improved significantly. In separate tests in a controlled laboratory environment, the performance gains from the VR training environment transferred to a significant mean RT reduction for the trained AV voluntary task on a computer screen. Enhancements were observed in both the visual-only and AV conditions, with the latter showing faster response times supported by the presence of audio cues. The behavioural learning effect also transferred to two additional tasks that were tested: a visual search task and an involuntary visual task. Our fMRI results reveal an increase in functional activation (BOLD signal) in multisensory brain regions involved in early-stage AV processing: the thalamus, the caudal inferior parietal lobe and the cerebellum. These functional changes were observed only for the trained multisensory task and not for unimodal visual stimulation. Functional activation changes in the thalamus were significantly correlated with behavioural performance improvements.

This study demonstrates that adding spatial auditory cues to voluntary visual training in VR leads to augmented brain activation changes in multisensory integration regions, resulting in measurable performance gains across tasks. The findings highlight the potential of VR-based multisensory training as an effective method for enhancing cognitive function and as a potentially valuable tool in rehabilitative programmes.
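To make the reported brain-behaviour relationship concrete, the sketch below shows one simple way such a correlation could be computed per participant; this is an illustrative assumption with synthetic data and hypothetical variable names, not the authors' analysis pipeline.

    # Minimal sketch (assumed, not the authors' code): correlate per-participant
    # change in thalamic BOLD activation with change in mean reaction time (RT).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n_participants = 20                                # sample size reported in the abstract

    # Hypothetical per-participant measures: activation change (a.u.) and RT change (ms)
    delta_bold = rng.normal(0.3, 0.1, n_participants)
    delta_rt_ms = -200 * delta_bold + rng.normal(0.0, 15.0, n_participants)

    # Pearson correlation between activation change and behavioural improvement
    r, p = stats.pearsonr(delta_bold, delta_rt_ms)
    print(f"r = {r:.2f}, p = {p:.3f}")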
Pages: 18
Related papers
50 records in total
  • [31] Audio-visual speech perception in adult readers with dyslexia: an fMRI study
    Rüsseler, Jascha
    Ye, Zheng
    Gerth, Ivonne
    Szycik, Gregor R.
    Münte, Thomas F.
    BRAIN IMAGING AND BEHAVIOR, 2018, 12 : 357 - 368
  • [32] Enhancing Audio-Visual Association with Self-Supervised Curriculum Learning
    Zhang, Jingran
    Xu, Xing
    Shen, Fumin
    Lu, Huimin
    Lu, Xin
    Shen, Heng Tao
    THIRTY-FIFTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THIRTY-THIRD CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE AND THE ELEVENTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2021, 35 : 3351 - 3359
  • [33] ENHANCING CONTRASTIVE LEARNING WITH TEMPORAL COGNIZANCE FOR AUDIO-VISUAL REPRESENTATION GENERATION
    Lavania, Chandrashekhar
    Sundaram, Shiva
    Srinivasan, Sundararajan
    Kirchhoff, Katrin
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 4728 - 4732
  • [34] Concurrent intramodal learning enhances multisensory responses of symmetric crossmodal learning in robotic audio-visual tracking
    Shaikh, Danish
    Bodenhagen, Leon
    Manoonpong, Poramate
    COGNITIVE SYSTEMS RESEARCH, 2019, 54 : 138 - 153
  • [35] Enhancing Virtual Reality Training Through Artificial Intelligence: A Case Study
    Giussani, Riccardo
    Dozio, Nicolo
    Rigone, Stefano
    Parenzan, Luca
    Ferrise, Francesco
    IEEE COMPUTER GRAPHICS AND APPLICATIONS, 2024, 44 (06) : 13 - 23
  • [36] Optimal Time Window for the Integration of Spatial Audio-Visual Information in Virtual Environments
    Liu, Jiacheng
    Drga, Vit
    Yasin, Ifat
    2021 IEEE VIRTUAL REALITY AND 3D USER INTERFACES (VR), 2021, : 723 - 728
  • [37] Virtual Talk: A model-based virtual phone using a layered audio-visual integration
    Chang, YJ
    Chen, CC
    Chou, JC
    Chen, YC
    2000 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, PROCEEDINGS VOLS I-III, 2000, : 415 - 418
  • [38] Development of a Bayesian Estimator for Audio-Visual Integration: A Neurocomputational Study
    Ursino, Mauro
    Crisafulli, Andrea
    di Pellegrino, Giuseppe
    Magosso, Elisa
    Cuppini, Cristiano
    FRONTIERS IN COMPUTATIONAL NEUROSCIENCE, 2017, 11
  • [39] Neural correlates of multisensory reliability and perceptual weights emerge at early latencies during audio-visual integration
    Boyle, Stephanie C.
    Kayser, Stephanie J.
    Kayser, Christoph
    EUROPEAN JOURNAL OF NEUROSCIENCE, 2017, 46 (10) : 2565 - 2577
  • [40] Effect of audio-visual interaction on soundscape in the urban residential context: A virtual reality experiment
    Lu, Yichun
    Hasegawa, Yoshimi
    Tan, Johann Kay Ann
    Lau, Siu-Kit
    APPLIED ACOUSTICS, 2022, 192