Cross-modality effect in implicit learning of temporal sequence

Cited: 1
Authors
Feng, Zhengning [1 ]
Zhu, Sijia [1 ]
Duan, Jipeng [1 ]
Lu, Yang [2 ]
Li, Lin [1 ,3 ]
Affiliations
[1] East China Normal Univ, Sch Psychol & Cognit Sci, 3663 North Zhongshan Rd, Shanghai 200062, Peoples R China
[2] Fudan Univ, Fudan Inst Ageing, 220 Handan Rd, Shanghai 200433, Peoples R China
[3] East China Normal Univ, Sch Psychol & Cognit Sci, Shanghai Key Lab Mental Hlth & Psychol Crisis Inte, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Implicit learning; Temporal sequence; Cross-modal switching; Vision; Auditory; TIME; DISCRIMINATION; REPRESENTATION; INTERVAL;
DOI
10.1007/s12144-022-04228-y
CLC number
B84 [Psychology];
Discipline codes
04; 0402;
Abstract
Although implicit learning of temporal structure within a single modality has been widely studied, the effect of cross-modal switching on implicit learning of temporal sequences remains unknown. The present study adopted a modified serial reaction time (SRT) task based on a temporal sequence. After learning, an attribution test was used to assess the conscious status of the acquired knowledge. A total of 116 healthy participants were randomly assigned to four groups according to the sensory modalities used in the first and second halves of the task: AA (auditory-only), VV (visual-only), AV (auditory-visual), and VA (visual-auditory). The results showed that all groups acquired the temporal sequences implicitly. In particular, temporal sequences could be successfully transferred across modalities during implicit learning, regardless of the direction of the modality switch. Most importantly, a significant gain during the learning transition was found only in the VA group compared with the VV group. The attribution test showed similar results. These findings support the view that temporal perception relies on an internal clock model and that temporal sequence information depends on auditory representations.
Pages: 32125-32133
Page count: 9
Related papers
50 records in total
  • [1] Cross-modality effect in implicit learning of temporal sequence
    Zhengning Feng
    Sijia Zhu
    Jipeng Duan
    Yang Lu
    Lin Li
    Current Psychology, 2023, 42 : 32125 - 32133
  • [2] IMPLICIT LEARNING - WITHIN-MODALITY AND CROSS-MODALITY TRANSFER OF TACIT KNOWLEDGE
    MANZA, L
    REBER, AS
    BULLETIN OF THE PSYCHONOMIC SOCIETY, 1991, 29 (06) : 499 - 499
  • [3] The Congruency Sequence Effect of the Simon Task in a Cross-Modality Context
    Lee, Yoon Seo
    Cho, Yang Seok
    JOURNAL OF EXPERIMENTAL PSYCHOLOGY-HUMAN PERCEPTION AND PERFORMANCE, 2023, 49 (09) : 1221 - 1235
  • [4] POLO: Learning Explicit Cross-Modality Fusion for Temporal Action Localization
    Wang, Binglu
    Yang, Le
    Zhao, Yongqiang
    IEEE SIGNAL PROCESSING LETTERS, 2021, 28 : 503 - 507
  • [5] AN INVESTIGATION OF CROSS-MODALITY EFFECTS IN IMPLICIT AND EXPLICIT MEMORY
    MCCLELLAND, AGR
    PRING, L
    QUARTERLY JOURNAL OF EXPERIMENTAL PSYCHOLOGY SECTION A-HUMAN EXPERIMENTAL PSYCHOLOGY, 1991, 43 (01): : 19 - 33
  • [6] Representation Learning for Cross-Modality Classification
    van Tulder, Gijs
    de Bruijne, Marleen
    MEDICAL COMPUTER VISION AND BAYESIAN AND GRAPHICAL MODELS FOR BIOMEDICAL IMAGING, 2017, 10081 : 126 - 136
  • [7] Cross-Modality Learning by Exploring Modality Interactions for Emotion Reasoning
    Tran, Thi-Dung
    Ho, Ngoc-Huynh
    Pant, Sudarshan
    Yang, Hyung-Jeong
    Kim, Soo-Hyung
    Lee, Gueesang
    IEEE ACCESS, 2023, 11 : 56634 - 56648
  • [8] Cross-modality collaborative learning identified pedestrian
    Wen, Xiongjun
    Feng, Xin
    Li, Ping
    Chen, Wenfang
    VISUAL COMPUTER, 2023, 39 (09): : 4117 - 4132
  • [9] Learning Cross-modality Similarity for Multinomial Data
    Jia, Yangqing
    Salzmann, Mathieu
    Darrell, Trevor
    2011 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2011, : 2407 - 2414