Orthographic and feature-level contributions to letter identification

Cited by: 6
Authors
Lally, Clare [1 ,2 ]
Rastle, Kathleen [1 ]
Affiliations
[1] Royal Holloway Univ London, London, England
[2] UCL, UCL Speech Hearing & Phonet Sci, Chandler House,2 Wakefield St, London WC1N 1PF, England
Source
Funding
UK Economic and Social Research Council
Keywords
Visual word recognition; reading; letter identification; visual processing; orthographic processing; INTERACTIVE ACTIVATION MODEL; VISUAL-WORD RECOGNITION; LETTER PERCEPTION; PSEUDOWORD SUPERIORITY; READ;
DOI
10.1177/17470218221106155
CLC classification
B84 [Psychology]
Discipline codes
04; 0402
Abstract
Word recognition is facilitated by primes containing visually similar letters (dentjst-dentist), suggesting that letter identities are encoded with initial uncertainty. Orthographic knowledge also guides letter identification, as readers are more accurate at identifying letters in words compared with pseudowords. We investigated how high-level orthographic knowledge and low-level visual feature analysis operate in combination during letter identification. We conducted a Reicher-Wheeler task to compare readers' ability to discriminate between visually similar and dissimilar letters across different orthographic contexts (words, pseudowords, and consonant strings). Orthographic context and visual similarity had independent effects on letter identification, and there was no interaction between these factors. The magnitude of these effects indicated that high-level orthographic information plays a greater role than low-level visual feature information in letter identification. We propose that readers use orthographic knowledge to refine potential letter candidates while visual feature information is accumulated. This combination of high-level knowledge and low-level feature analysis may be essential in permitting the flexibility required to identify visual variations of the same letter (e.g., N-n) while maintaining enough precision to tell visually similar letters apart (e.g., n-h). These results provide new insights into the integration of visual and linguistic information and highlight the need for greater integration between models of reading and visual processing.
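The mechanism the abstract proposes, orthographic knowledge constraining letter candidates while visual feature evidence accumulates, can be illustrated with a minimal probabilistic sketch. This is not the authors' model: the feature codes, noise parameter, and context priors below are invented for demonstration, in the spirit of interactive-activation accounts. The sketch shows why a word context (e.g., the "n" slot in "dentist") separates a letter from a visually similar competitor ("h") even when the two share most features.

```python
# Illustrative sketch only (hypothetical features and priors, not the
# authors' model): posterior over letter identities = orthographic
# prior * visual feature likelihood, normalized.

# Toy binary feature codes (made up: ascender, descender, left stem,
# right bowl, arch). "h" differs from "n" by one feature; "o" by three.
FEATURES = {
    "n": (0, 0, 1, 0, 1),
    "h": (1, 0, 1, 0, 1),   # visually similar to "n"
    "o": (0, 0, 0, 1, 0),   # visually dissimilar to "n"
}

def feature_likelihood(stimulus, letter, noise=0.2):
    """P(observed features | letter): each feature flips with prob `noise`."""
    p = 1.0
    for obs, feat in zip(FEATURES[stimulus], FEATURES[letter]):
        p *= (1 - noise) if obs == feat else noise
    return p

def identify(stimulus, prior):
    """Combine orthographic prior with feature evidence; return posterior."""
    post = {c: prior[c] * feature_likelihood(stimulus, c) for c in FEATURES}
    z = sum(post.values())
    return {c: p / z for c, p in post.items()}

# A word context like "de_tist" strongly favours "n" (hypothetical prior);
# a consonant-string context provides no orthographic constraint.
word_prior = {"n": 0.90, "h": 0.05, "o": 0.05}
flat_prior = {"n": 1 / 3, "h": 1 / 3, "o": 1 / 3}

# Without orthographic support the similar letter "h" stays competitive;
# with a word context "n" dominates despite the shared features.
print(identify("n", flat_prior))
print(identify("n", word_prior))
```

Note that in this sketch context and similarity contribute through separate multiplicative terms, which is one simple way to produce the independent (non-interacting) effects the study reports.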
Pages: 1111-1119
Page count: 9
Related articles
50 records in total
  • [1] Feature-level fusion in personal identification
    Gao, Y
    Maggs, M
    2005 IEEE COMPUTER SOCIETY CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOL 1, PROCEEDINGS, 2005, : 468 - 473
  • [2] Palmprint identification using feature-level fusion
    Kong, A
    Zhang, D
    Kamel, M
    PATTERN RECOGNITION, 2006, 39 (03) : 478 - 487
  • [3] Feature-Level Fusion of Iris and Face for Personal Identification
    Wang, Zhifang
    Han, Qi
    Niu, Xiamu
    Busch, Christoph
    ADVANCES IN NEURAL NETWORKS - ISNN 2009, PT 3, PROCEEDINGS, 2009, 5553 : 356 - +
  • [4] A Biometric Identification System with Kernel SVM and Feature-level Fusion
    Soviany, Sorin
    Puscoci, Sorin
    Sandulescu, Virginia
    PROCEEDINGS OF THE 2020 12TH INTERNATIONAL CONFERENCE ON ELECTRONICS, COMPUTERS AND ARTIFICIAL INTELLIGENCE (ECAI-2020), 2020,
  • [5] Wood species identification using feature-level fusion scheme
    Zhao, Peng
    Dou, Gang
    Chen, Guang-Sheng
    OPTIK, 2014, 125 (03): : 1144 - 1148
  • [7] Feature-Level Domain Adaptation
    Kouw, Wouter M.
    van der Maaten, Laurens J. P.
    Krijthe, Jesse H.
    Loog, Marco
    JOURNAL OF MACHINE LEARNING RESEARCH, 2016, 17
  • [8] Feature-level fusion of fingerprint and finger-vein for personal identification
    Yang, Jinfeng
    Zhang, Xu
    PATTERN RECOGNITION LETTERS, 2012, 33 (05) : 623 - 628
  • [9] Multimodal Feature-Level Fusion for Biometrics Identification System on IoMT Platform
    Xin, Yang
    Kong, Lingshuang
    Liu, Zhi
    Wang, Chunhua
    Zhu, Hongliang
    Gao, Mingcheng
    Zhao, Chensu
    Xu, Xiaoke
    IEEE ACCESS, 2018, 6 : 21418 - 21426
  • [10] Feature-Level Camera Style Transfer for Person Re-Identification
    Liu, Yang
    Sheng, Hao
    Wang, Shuai
    Wu, Yubin
    Xiong, Zhang
    APPLIED SCIENCES-BASEL, 2022, 12 (14):