Considering Misconceptions in Automatic Essay Scoring with A-TEST - Amrita Test Evaluation and Scoring Tool

Cited by: 0
Authors
Nedungadi, Prema [1 ,2 ]
Jyothi, L. [2 ]
Raman, Raghu [1 ]
Affiliations
[1] Amrita Univ, Amrita CREATE, Vallikavu, Kollam, India
[2] Amrita Univ, Dept Comp Sci, Kollam, India
Keywords
Feature extraction; Essay scoring; Text analysis; Text mining; Latent semantic analysis (LSA); SVD; Natural language processing (NLP); AES
DOI
10.1007/978-3-319-08368-1_31
CLC Number
TP3 [computing technology; computer technology]
Subject Classification Number
0812
Abstract
In large classrooms with limited teacher time, there is a need for automatic evaluation of text answers and real-time personalized feedback during the learning process. In this paper, we discuss the Amrita Test Evaluation & Scoring Tool (A-TEST), a text evaluation and scoring tool that learns from course materials, from human-rater-scored text answers, and directly from teacher input. We use latent semantic analysis (LSA) to identify the key concepts. While most AES systems use LSA to compare students' responses with a set of ideal essays, this ignores the common misconceptions that students may have about a topic. A-TEST also uses LSA to learn misconceptions from the lowest-scoring essays and uses them as an additional factor in scoring. A-TEST was evaluated using two datasets of 1400 and 1800 pre-scored text answers that were manually scored by two teachers. The scoring accuracy and kappa scores between the derived A-TEST model and the human raters were comparable to those between the human raters themselves.
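The abstract's core idea can be illustrated with a minimal sketch: embed essays in an LSA concept space via truncated SVD, then score a student answer by its similarity to an ideal answer while also checking similarity to a known-misconception answer. This is an illustration under assumed inputs (toy sentences, raw term counts, rank-2 truncation), not the actual A-TEST pipeline, which the paper trains on course materials and human-scored essays.

```python
# Illustrative LSA similarity sketch (NOT the A-TEST implementation):
# build a term-document matrix, truncate its SVD, compare documents
# by cosine similarity in the latent concept space.
import numpy as np

def term_doc_matrix(docs, vocab):
    # rows = terms, columns = documents, entries = raw counts
    index = {t: i for i, t in enumerate(vocab)}
    m = np.zeros((len(vocab), len(docs)))
    for j, doc in enumerate(docs):
        for tok in doc.lower().split():
            if tok in index:
                m[index[tok], j] += 1
    return m

def lsa_embed(m, k=2):
    # Truncated SVD: keep the top-k latent concepts;
    # rows of the result are document vectors in concept space.
    u, s, vt = np.linalg.svd(m, full_matrices=False)
    return vt[:k].T * s[:k]

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Toy data (hypothetical examples, not from the paper's datasets)
ideal = "photosynthesis converts light energy into chemical energy"
misconception = "plants eat soil to get energy"
student = "light energy is converted to chemical energy by photosynthesis"

docs = [ideal, misconception, student]
vocab = sorted({w for d in docs for w in d.lower().split()})
vecs = lsa_embed(term_doc_matrix(docs, vocab), k=2)

sim_ideal = cosine(vecs[2], vecs[0])   # closeness to the ideal answer
sim_miscon = cosine(vecs[2], vecs[1])  # closeness to a known misconception
# A score could combine both signals: reward similarity to ideal answers
# and penalize similarity to misconception essays, as A-TEST proposes.
```

Here the student answer lands closer to the ideal essay than to the misconception essay in concept space; a production system would use TF-IDF weighting, a larger k, and misconception vectors learned from the lowest-scoring essays rather than a single hand-picked example.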
Pages: 271+
Page count: 3