Automatic grading and hinting in open-ended text questions

Cited by: 12
Authors
Sychev, Oleg [1 ]
Anikin, Anton [1 ]
Prokudin, Artem [1 ]
Affiliations
[1] Volgograd State Tech Univ, Lenin Ave 28, Volgograd 400005, Russia
Source
Cognitive Systems Research
Keywords
e-learning; Automatic error recognition; Regular expressions; Editing distances; Computational linguistics; Ontology; Formative feedback; Intelligent tutoring system
DOI
10.1016/j.cogsys.2019.09.025
Chinese Library Classification (CLC)
TP18 [Theory of artificial intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Open-ended text questions provide a better assessment of a learner's knowledge, but analysing answers to this kind of question, checking their correctness, and generating detailed formative feedback about errors for the learner are more difficult tasks than for closed-ended questions such as multiple choice. The analysis of answers to open-ended questions can be performed at different levels. Character-level analysis finds errors in character placement within a word or token; it is typically used to detect and correct typos, making it possible to distinguish typos from actual errors in the learner's answer. Word-level (token-level) analysis finds misplaced, extraneous, or missing words in the sentence. Semantic-level analysis formally captures the meaning of the learner's answer and compares it with the meaning of the correct answer, which can be provided in a natural or a formal language. Some systems and approaches combine analysis at several levels. The variability of answers to open-ended questions significantly increases the complexity of error search and formative feedback generation. Different types of patterns, including regular expressions, and their use in questions with patterned answers are discussed. The types of formative feedback and the capabilities of modern approaches to generate feedback at different levels are discussed as well. Statistical approaches and loosely defined template rules are prone to false-positive grading; they generally lower the workload of creating questions but provide limited feedback. Approaches based on strictly defined sets of correct answers perform better at providing hinting and answer-until-correct feedback; they are characterised by a higher workload of creating questions, because the teacher must account for every possible correct answer, and by fewer types of detected errors. The optimal choice for creating automated e-learning courses is template-based open-ended question systems such as OntoPeFeGe, Preg, METEOR, and CorrectWriting, which allow answer-until-correct feedback and are able to find and report various types of errors. This approach requires more time to create questions but less time to manage the learning process once the courses are running. (C) 2019 Elsevier B.V. All rights reserved.
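To make the character-level and token-level analysis described in the abstract concrete, the sketch below combines a Levenshtein edit distance (to separate likely typos from genuine wrong words) with token-sequence alignment (to report missing, extraneous, and substituted words). It is a minimal illustration, not the method of any system named above; the names `levenshtein`, `analyse_answer`, and the threshold `TYPO_THRESHOLD = 2` are assumptions made for this example.

```python
# A minimal sketch (not from the paper) of two-level answer analysis:
#  * character level: Levenshtein edit distance separates likely typos
#    from genuine wrong words;
#  * token level: sequence alignment finds missing, extraneous,
#    and substituted words.
# All names and the threshold value are illustrative assumptions.

from difflib import SequenceMatcher

TYPO_THRESHOLD = 2  # assumed: edit distance up to 2 is treated as a typo


def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two strings."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]


def analyse_answer(learner: str, reference: str) -> list[str]:
    """Compare a learner's answer with a reference answer token by token
    and return human-readable feedback messages."""
    l_tokens = learner.lower().split()
    r_tokens = reference.lower().split()
    feedback = []

    # Token level: align the learner's token sequence with the reference.
    for op, i1, i2, j1, j2 in SequenceMatcher(None, l_tokens, r_tokens).get_opcodes():
        if op == "delete":
            feedback.append(f"extraneous word(s): {' '.join(l_tokens[i1:i2])}")
        elif op == "insert":
            feedback.append(f"missing word(s): {' '.join(r_tokens[j1:j2])}")
        elif op == "replace":
            # Character level: pair the mismatched words positionally and
            # use the edit distance to tell typos from genuine errors.
            for lw, rw in zip(l_tokens[i1:i2], r_tokens[j1:j2]):
                if levenshtein(lw, rw) <= TYPO_THRESHOLD:
                    feedback.append(f"possible typo: '{lw}' (expected '{rw}')")
                else:
                    feedback.append(f"wrong word: '{lw}' (expected '{rw}')")
    return feedback or ["the answer matches the reference"]


if __name__ == "__main__":
    print(analyse_answer("the cat sat on teh mat quickly",
                         "the cat sat on the mat"))
```

Run on the answer "the cat sat on teh mat quickly" against the reference "the cat sat on the mat", the sketch reports a possible typo for "teh" and an extraneous word "quickly", which is the kind of per-error formative feedback the abstract contrasts with plain right/wrong grading.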
Pages: 264-272
Number of pages: 9