THE REPRESENTATION OF STRUCTURE IN SEQUENCE PREDICTION TASKS

Cited by: 0
Author: CLEEREMANS, A
DOI: not available
Chinese Library Classification: B84 (Psychology)
Subject Classification: 04; 0402

Abstract
Is knowledge acquired implicitly abstract or based on memory for exemplars? This question is at the heart of a current, but long-standing, controversy in the field of implicit learning (see Reber 1989 for a review). For some authors, implicit knowledge is best characterized as rulelike. For others, however, knowledge acquired implicitly is little more than knowledge about memorized exemplars or, at best, knowledge about elementary features of the material, such as the frequency of particular events. In this chapter, I argue that the debate may be ill-posed, and that the two positions are not necessarily incompatible. Using simulation studies, I show that abstract knowledge about the stimulus material may emerge through the operation of elementary, associationist learning mechanisms of the kind that operate in connectionist networks. I focus on a sequence learning task first proposed by Kushner, Cleeremans, and Reber (1991), during which subjects are exposed to random fixed-length sequences and are asked to predict the location at which the last element of each sequence will appear. Unknown to them, the location of the last element is determined by the relationship between specific previous elements. This situation is thus quite complex, because the relevant information is relational, and because it is embedded in a large number of irrelevant contexts. Kushner, Cleeremans, and Reber (1991) showed that human subjects are able to learn this material despite limited ability to verbalize their knowledge. In this chapter, I first present simulation studies in which connectionist networks are trained to predict the last event of the sequences under the same conditions as the subjects were. I focus on issues of representation and transfer. What knowledge do the networks acquire about the temporal extent of the material? What is the form of this knowledge?
The results highlight limitations of two well-known models of sequential processing, that is, the SRN model (Cleeremans and McClelland 1991) and Jordan's recurrent network (Jordan 1986), and indicate that a simple decay-based buffer network may be sufficient to account for human performance. Next, I explore how well the model can transfer to various test situations, in which new sequences may include either relevant or irrelevant sequence elements that have never been presented during training. I discuss the results in light of the abstraction versus memory-for-instances debate. Based on these and other results, I suggest that the kind of representations developed by connectionist models are intermediate between abstract representations and exemplar-based representations, and that these two extreme forms of representation mark the endpoints of a continuum.
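The kind of simulation described above can be sketched in miniature. Below is a minimal, illustrative Elman-style simple recurrent network (SRN) trained on a toy analogue of the prediction task: fixed-length sequences of one-hot events, where the location of the final element is determined by a relation between two earlier elements, embedded among irrelevant elements. The sequence length, the XOR relational rule, the network size, and the training regime are all assumptions for illustration; they are not the original Kushner, Cleeremans, and Reber (1991) stimuli or the chapter's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

N_SYMBOLS = 2   # possible locations for each event
SEQ_LEN = 5     # fixed length; the 5th element is the one to predict
N_HIDDEN = 16

def make_sequence():
    # Four visible elements; elements 0 and 1 are irrelevant context,
    # and the target is the XOR of elements 2 and 3 (an assumed rule).
    seq = rng.integers(0, N_SYMBOLS, size=SEQ_LEN - 1)
    target = int(seq[2] ^ seq[3])
    return seq, target

def one_hot(i, n=N_SYMBOLS):
    v = np.zeros(n)
    v[i] = 1.0
    return v

# Weights: input-to-hidden, hidden-to-hidden (context), hidden-to-output.
W_xh = rng.normal(0, 0.5, (N_HIDDEN, N_SYMBOLS))
W_hh = rng.normal(0, 0.5, (N_HIDDEN, N_HIDDEN))
W_hy = rng.normal(0, 0.5, (N_SYMBOLS, N_HIDDEN))

def forward(seq):
    # Elman recurrence: the hidden layer receives the current input
    # plus a copy of its own previous state (the context units).
    h = np.zeros(N_HIDDEN)
    hs, xs = [], []
    for s in seq:
        x = one_hot(s)
        h = np.tanh(W_xh @ x + W_hh @ h)
        xs.append(x)
        hs.append(h)
    logits = W_hy @ h
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return p, hs, xs

def train(steps=5000, lr=0.1):
    global W_xh, W_hh, W_hy
    for _ in range(steps):
        seq, target = make_sequence()
        p, hs, xs = forward(seq)
        # Cross-entropy gradient at the output, backpropagated
        # through time over the whole (short) sequence.
        dlogits = p - one_hot(target)
        dW_hy = np.outer(dlogits, hs[-1])
        dh = W_hy.T @ dlogits
        dW_xh = np.zeros_like(W_xh)
        dW_hh = np.zeros_like(W_hh)
        for t in range(len(seq) - 1, -1, -1):
            dpre = dh * (1 - hs[t] ** 2)        # tanh derivative
            dW_xh += np.outer(dpre, xs[t])
            h_prev = hs[t - 1] if t > 0 else np.zeros(N_HIDDEN)
            dW_hh += np.outer(dpre, h_prev)
            dh = W_hh.T @ dpre
        W_hy -= lr * dW_hy
        W_xh -= lr * dW_xh
        W_hh -= lr * dW_hh

train()
correct = sum(
    int(np.argmax(forward(make_sequence()[0])[0]) == make_sequence()[1])
    for _ in range(0)
)
# Evaluate prediction accuracy on fresh sequences.
n_test, correct = 200, 0
for _ in range(n_test):
    seq, target = make_sequence()
    p, _, _ = forward(seq)
    correct += int(np.argmax(p) == target)
accuracy = correct / n_test
```

Because the network only ever sees the next-element prediction error, any sensitivity to the relational rule it develops is an emergent property of the learned hidden representations, which is the point the chapter's simulations press on a larger scale.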
Pages: 783-809 (27 pages)