Audio-visual facilitation of the mu rhythm

Cited by: 25
Authors
McGarry, Lucy M. [1 ]
Russo, Frank A. [1 ]
Schalles, Matt D. [2 ]
Pineda, Jaime A. [2 ,3 ]
Affiliations
[1] Ryerson Univ, Dept Psychol, Toronto, ON M5B 2K3, Canada
[2] Univ Calif San Diego, Dept Cognit Sci, La Jolla, CA 92037 USA
[3] Univ Calif San Diego, Neurosci Grp, La Jolla, CA 92037 USA
Funding
Natural Sciences and Engineering Research Council of Canada;
Keywords
Mu rhythm; Mirror neuron system; Multimodal facilitation; Independent components analysis; GRASP REPRESENTATIONS; MOTOR FACILITATION; EEG; RECOGNITION; ACTIVATION; HUMANS; CORTEX; PERCEPTION; COMPONENT; PREMOTOR;
DOI
10.1007/s00221-012-3046-3
Chinese Library Classification (CLC)
Q189 [Neuroscience];
Discipline code
071006;
Abstract
Previous studies demonstrate that perception of action presented audio-visually facilitates greater mirror neuron system (MNS) activity in humans (Kaplan and Iacoboni in Cogn Process 8(2):103-113, 2007) and non-human primates (Keysers et al. in Exp Brain Res 153(4):628-636, 2003) than perception of action presented unimodally. In the current study, we examined whether audio-visual facilitation of the MNS can be indexed using electroencephalography (EEG) measurement of the mu rhythm. The mu rhythm is an EEG oscillation with peaks at 10 and 20 Hz that is suppressed during the execution and perception of action and is speculated to reflect activity in the premotor and inferior parietal cortices as a result of MNS activation (Pineda in Behav Brain Funct 4(1):47, 2008). Participants observed experimental stimuli unimodally (visual-alone or audio-alone) or bimodally during randomized presentations of two hands ripping a sheet of paper, and a control video depicting a box moving up and down. Audio-visual perception of action stimuli led to greater event-related desynchrony (ERD) of the 8-13 Hz mu rhythm compared to unimodal perception of the same stimuli over the C3 electrode, as well as in a left central cluster when data were examined in source space. These results are consistent with Kaplan and Iacoboni's (Cogn Process 8(2):103-113, 2007) findings that indicate audio-visual facilitation of the MNS; our left central cluster was localized approximately 13.89 mm away from the ventral premotor cluster identified in their fMRI study, suggesting that these clusters originate from similar sources. The consistency of results in electrode space and component space supports the use of ICA as a valid source localization tool.
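The abstract's dependent measure, event-related desynchrony (ERD), is conventionally expressed as the percentage drop in band power during an event relative to a pre-stimulus baseline. A minimal sketch of that standard formula, with hypothetical toy 8-13 Hz band-power values (not taken from the study) illustrating stronger suppression under bimodal presentation:

```python
def erd_percent(baseline_power: float, event_power: float) -> float:
    """Event-related desynchrony as a percentage of baseline power.

    ERD% = (baseline - event) / baseline * 100
    Positive values indicate desynchronization (a power decrease
    during the event), as reported for mu suppression at C3.
    """
    return (baseline_power - event_power) / baseline_power * 100.0


# Toy band-power values in arbitrary units (illustrative only):
baseline = 4.0       # pre-stimulus 8-13 Hz power
audio_visual = 2.0   # power during bimodal action observation
visual_alone = 3.0   # power during unimodal observation

print(erd_percent(baseline, audio_visual))  # 50.0 -> greater ERD
print(erd_percent(baseline, visual_alone))  # 25.0 -> weaker ERD
```

In practice, band power would be estimated per trial (e.g. via bandpass filtering and squaring, or a time-frequency decomposition) before averaging; the formula above is only the final normalization step.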
Pages: 527-538
Page count: 12
Related articles
50 records in total
  • [21] Transfer of Audio-Visual Temporal Training to Temporal and Spatial Audio-Visual Tasks
    Suerig, Ralf
    Bottari, Davide
    Roeder, Brigitte
    MULTISENSORY RESEARCH, 2018, 31 (06) : 556 - 578
  • [22] Audio-visual event detection based on mining of semantic audio-visual labels
    Goh, KS
    Miyahara, K
    Radhakrishan, R
    Xiong, ZY
    Divakaran, A
    STORAGE AND RETRIEVAL METHODS AND APPLICATIONS FOR MULTIMEDIA 2004, 2004, 5307 : 292 - 299
  • [23] LEARNING CONTEXTUALLY FUSED AUDIO-VISUAL REPRESENTATIONS FOR AUDIO-VISUAL SPEECH RECOGNITION
    Zhang, Zi-Qiang
    Zhang, Jie
    Zhang, Jian-Shu
    Wu, Ming-Hui
    Fang, Xin
    Dai, Li-Rong
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 1346 - 1350
  • [24] Audio-Visual Causality and Stimulus Reliability Affect Audio-Visual Synchrony Perception
    Li, Shao
    Ding, Qi
    Yuan, Yichen
    Yue, Zhenzhu
    FRONTIERS IN PSYCHOLOGY, 2021, 12
  • [25] AV2AV: Direct Audio-Visual Speech to Audio-Visual Speech Translation with Unified Audio-Visual Speech Representation
    Choi, Jeongsoo
    Park, Se Jin
    Kim, Minsu
    Ro, Yong Man
    2024 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2024, : 27315 - 27327
  • [26] Measuring the visual in audio-visual input
    Pujadas, Georgia
    Munoz, Carmen
    ITL-INTERNATIONAL JOURNAL OF APPLIED LINGUISTICS, 2023, 174 (02) : 263 - 290
  • [27] A Robust Audio-visual Speech Recognition Using Audio-visual Voice Activity Detection
    Tamura, Satoshi
    Ishikawa, Masato
    Hashiba, Takashi
    Takeuchi, Shin'ichi
    Hayamizu, Satoru
    11TH ANNUAL CONFERENCE OF THE INTERNATIONAL SPEECH COMMUNICATION ASSOCIATION 2010 (INTERSPEECH 2010), VOLS 3 AND 4, 2010, : 2702 - +
  • [28] Audio-visual speech enhancement with AVCDCN (audio-visual codebook dependent cepstral normalization)
    Deligne, S
    Potamianos, G
    Neti, C
    SAM2002: IEEE SENSOR ARRAY AND MULTICHANNEL SIGNAL PROCESSING WORKSHOP PROCEEDINGS, 2002, : 68 - 71
  • [29] Audio-visual speech experience with age influences perceived audio-visual asynchrony in speech
    Alm, M. (magnus.alm@svt.ntnu.no), 1600, Acoustical Society of America (134):
  • [30] Audio-visual spatial alignment improves integration in the presence of a competing audio-visual stimulus
    Fleming, Justin T.
    Noyce, Abigail L.
    Shinn-Cunningham, Barbara G.
    NEUROPSYCHOLOGIA, 2020, 146