A role for consolidation in cross-modal category learning

Cited by: 12
Authors
Ashton, Jennifer E. [1 ]
Jefferies, Elizabeth [1 ]
Gaskell, M. Gareth [1 ]
Affiliations
[1] Univ York, Dept Psychol, York YO10 5DD, N Yorkshire, England
Funding
European Research Council; Economic and Social Research Council (UK)
Keywords
Memory; Sleep; Consolidation; Categorization; INFORMATION-INTEGRATION; MEMORY CONSOLIDATION; SLEEP; CATEGORIZATION; KNOWLEDGE; SYSTEMS; TIME;
DOI
10.1016/j.neuropsychologia.2017.11.010
Chinese Library Classification (CLC)
B84 [Psychology]; C [Social Sciences, General]; Q98 [Anthropology];
Discipline classification codes
03 ; 0303 ; 030303 ; 04 ; 0402 ;
Abstract
The ability to categorize objects and events is a fundamental human skill that depends upon the representation of multimodal conceptual knowledge. This study investigated the acquisition and consolidation of categorical information that required participants to integrate information across visual and auditory dimensions. The impact of wake- and sleep-dependent consolidation was investigated using a paradigm in which training and testing were separated by a delay spanning either an evening of sleep or a period of daytime wakefulness, with a paired-associate episodic memory task used as a measure of classic sleep-dependent consolidation. Participants displayed good evidence of category learning, but did not show any wake- or sleep-dependent changes in memory for category information immediately following the delay. This contrasts with paired-associate learning, where a sleep-dependent benefit in memory recall was observed. To better reflect real-world concept learning, in which knowledge is acquired across multiple distinct episodes, participants were given a second opportunity for category learning following the consolidation delay. Here we found an interaction between consolidation and learning, with greater improvements in category knowledge after the second learning session for those participants whose delay included sleep. These results suggest a role for sleep in the consolidation of recently acquired categorical knowledge; however, this benefit emerges not as an immediate improvement in memory recall but as an enhancement of the effectiveness of subsequent learning. This study therefore provides insights into the processes responsible for the formation and development of conceptual representations.
Pages: 50-60
Page count: 11