A Feature-Space Theory of the Production Effect in Recognition

Cited by: 6
Authors
Caplan, Jeremy B. [1 ,2 ]
Guitard, Dominic [3 ]
Affiliations
[1] Univ Alberta, Dept Psychol & Neurosci, BSP 217, Edmonton, AB T6G 2E9, Canada
[2] Univ Alberta, Mental Hlth Inst, BSP 217, Edmonton, AB T6G 2E9, Canada
[3] Cardiff Univ, Sch Psychol, Cardiff, Wales
Funding
Natural Sciences and Engineering Research Council of Canada (NSERC);
Keywords
production effect; list-strength effect; recognition memory; selective attention; matched filter model; DUAL-PROCESS MODEL; MEMORY; BENEFITS; STRENGTH; ITEM;
DOI
10.1027/1618-3169/a000611
Chinese Library Classification
B84 [Psychology];
Discipline classification codes
04; 0402;
Abstract
Mathematical models explaining production effects assume that production leads to the encoding of additional features, such as phonological ones. This improves memory with a combination of encoding strength and feature distinctiveness, implementing aspects of propositional theories. However, it is not clear why production differs from other manipulations such as study time and spaced repetition, which are also thought to influence strength. Here we extend attentional subsetting theory and propose an explanation based on the dimensionality of feature spaces. Specifically, we suggest phonological features are drawn from a compact feature space. Deeper features are sparsely subselected from a larger subspace. Algebraic and numerical solutions shed light on several findings, including the dependency of production effects on how other list items are encoded (differing from other strength factors) and the production advantage even for homophones. This places production within a continuum of strength-like manipulations that differ in terms of the feature subspaces they operate upon and leads to novel predictions based on direct manipulations of feature-space properties.
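The abstract's central claim is that phonological features occupy a compact feature space while deeper features are sparsely subselected from a much larger subspace. A minimal numerical sketch can illustrate why that matters: items whose features come from a compact space inevitably share many features, while sparse sampling from a large space yields near-orthogonal representations. The dimensions and sparsity below are illustrative assumptions, not parameters from the paper.

```python
import random

random.seed(0)

# Assumed, illustrative sizes: a compact phonological subspace vs. a much
# larger "deep" subspace, with the same number of active features per item.
PHON_DIM, DEEP_DIM, K = 30, 3000, 15

def sample(dim, k):
    """An item's sparse representation: k active features out of dim."""
    return frozenset(random.sample(range(dim), k))

def mean_pair_overlap(dim, k, n_pairs=2000):
    """Average number of features shared by two independent items.
    Analytically this is k*k/dim (the hypergeometric mean)."""
    return sum(len(sample(dim, k) & sample(dim, k))
               for _ in range(n_pairs)) / n_pairs

phon = mean_pair_overlap(PHON_DIM, K)   # ~ 15*15/30   = 7.5 shared features
deep = mean_pair_overlap(DEEP_DIM, K)   # ~ 15*15/3000 = 0.075 shared features
print(f"compact phonological subspace: {phon:.2f} shared features per pair")
print(f"large deep subspace:           {deep:.3f} shared features per pair")
```

In a matched-filter (dot-product) recognition scheme, overlap on this scale means the phonological features of any produced probe resonate with every other produced study item, so the size of the production advantage depends on how the rest of the list was encoded, whereas near-orthogonal deep features behave like a conventional, list-composition-independent strength factor.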
Pages: 64-82 (19 pages)