Comparing models of learning and relearning in large-scale cognitive training data sets

Cited by: 0
Authors
Aakriti Kumar
Aaron S. Benjamin
Andrew Heathcote
Mark Steyvers
Affiliations
[1] University of California
[2] University of Illinois at Urbana-Champaign
[3] University of Newcastle
Source
npj Science of Learning, 2022, 7(1)
Abstract
Practice in real-world settings exhibits many idiosyncrasies of scheduling and duration that can only be roughly approximated by laboratory research. Here we investigate 39,157 individuals’ performance on two cognitive games on the Lumosity platform over a span of 5 years. The large-scale nature of the data allows us to observe highly varied lengths of uncontrolled interruptions to practice and offers a unique view of learning in naturalistic settings. We enlist a suite of models that grow in the complexity of the mechanisms they postulate and conclude that long-term naturalistic learning is best described by a combination of long-term skill and task-set preparedness. We focus additionally on the nature and speed of relearning after breaks in practice and conclude that those components must operate interactively to produce the rapid relearning that is evident even at exceptionally long delays (over 2 years). Naturalistic learning over long time spans provides a strong test for the robustness of theoretical accounts of learning, and should be more broadly used in the learning sciences.
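For intuition, the sketch below illustrates the two mechanisms the abstract contrasts: a long-term skill component that accumulates with practice and decays only mildly over breaks, and a task-set preparedness component that resets at the start of each session and recovers within a few games, producing rapid "relearning" even after very long delays. The functional forms (exponential learning and exponential warm-up), the function name `learning_curve`, and all parameter values are illustrative assumptions, not the paper's fitted model specification.

```python
import numpy as np

def learning_curve(n_games_per_session, gap_days,
                   asymptote=100.0, gain=60.0, learn_rate=0.02,
                   warmup_cost=10.0, warmup_rate=1.0, forget_rate=0.001):
    """Illustrative score predictions for sessions separated by breaks.

    NOTE: a minimal sketch, not the paper's actual model. Long-term
    skill approaches an asymptote with cumulative practice and decays
    slightly over breaks; task-set preparedness resets each session
    and recovers within a few games.
    """
    scores = []
    skill = 0.0  # latent long-term skill in [0, 1]
    for session, n_games in enumerate(n_games_per_session):
        if session > 0:
            # Long-term skill decays only slightly over a break.
            skill *= np.exp(-forget_rate * gap_days[session - 1])
        for g in range(n_games):
            # Preparedness recovers quickly within the session,
            # so the start-of-session dip is erased in a few games.
            preparedness = 1.0 - np.exp(-warmup_rate * g)
            scores.append(asymptote
                          - gain * (1.0 - skill)
                          - warmup_cost * (1.0 - preparedness))
            # Each game adds a small increment of long-term skill.
            skill += learn_rate * (1.0 - skill)
    return np.array(scores)

# Example: three 10-game sessions with a 1-day and then a 400-day break.
# The post-break dip is shallow relative to total learning and recovers fast.
print(learning_curve([10, 10, 10], gap_days=[1, 400]).round(1))
```

Under this toy parameterization, the dip after even a 400-day break is small compared with the skill accumulated, and it disappears within a few games, which is the qualitative pattern the abstract attributes to interacting skill and preparedness components.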
Related papers (50 total)
  • [1] Comparing models of learning and relearning in large-scale cognitive training data sets
    Kumar, Aakriti
    Benjamin, Aaron S.
    Heathcote, Andrew
    Steyvers, Mark
    npj Science of Learning, 2022, 7(1)
  • [2] Inferring latent learning factors in large-scale cognitive training data
    Steyvers, Mark
    Schafer, Robert J.
    Nature Human Behaviour, 2020, 4(11): 1145-1155
  • [3] DISVMs: Fast SVMs Training on Large-scale Data Sets
    Cui, Lijuan
    Wang, Changjian
    Li, Ziyang
    Peng, Yuxing
    2016 IEEE 28th International Conference on Tools with Artificial Intelligence (ICTAI 2016), 2016: 967-971
  • [4] On Efficient Training of Large-Scale Deep Learning Models
    Shen, Li
    Sun, Yan
    Yu, Zhiyuan
    Ding, Liang
    Tian, Xinmei
    Tao, Dacheng
    ACM Computing Surveys, 57(3)
  • [5] Sequential learning with LS-SVM for large-scale data sets
    Jung, Tobias
    Polani, Daniel
    Artificial Neural Networks - ICANN 2006, Pt 2, 2006, 4132: 381-390
  • [6] A fast algorithm for learning a ranking function from large-scale data sets
    Raykar, Vikas C.
    Duraiswami, Ramani
    Krishnapuram, Balaji
    IEEE Transactions on Pattern Analysis and Machine Intelligence, 2008, 30(7): 1158-1170
  • [7] Learning large-scale graphical Gaussian models from genomic data
    Schäfer, J.
    Strimmer, K.
    Science of Complex Networks: From Biology to the Internet and WWW, 2005, 776: 263-276
  • [8] Large-Scale Exploration of Feature Sets and Deep Learning Models to Classify Malicious Applications
    Vanderbruggen, Tristan
    Cavazos, John
    2017 Resilience Week (RWS), 2017: 37-43
  • [9] An investigation of complex fuzzy sets for large-scale learning
    Sobhi, Sayedabbas
    Dick, Scott
    Fuzzy Sets and Systems, 2023, 471