Exploring effective methods for automated essay scoring of non-native speakers

Times cited: 0
Authors
Poonpon, Kornwipa [1 ]
Manorom, Paiboon [1 ]
Chansanam, Wirapong [1 ]
Affiliations
[1] Khon Kaen Univ, Fac Humanities & Social Sci, Khon Kaen, Thailand
Keywords
automated essay scoring; non-native speakers; machine learning; long short-term memory network; Thailand;
DOI
10.30935/cedtech/13740
Chinese Library Classification
G40 [Education]
Discipline codes
040101; 120403
Abstract
Automated essay scoring (AES) has become a valuable tool in educational settings, providing efficient and objective evaluations of student essays. However, most AES systems have focused primarily on native English speakers, leaving a critical gap in the evaluation of non-native speakers' writing skills. This research addresses that gap by exploring the effectiveness of automated essay-scoring methods designed specifically for non-native speakers. The study acknowledges the unique challenges posed by variations in language proficiency, cultural differences, and linguistic complexities when assessing non-native speakers' writing abilities. The work focuses on the Automated Student Assessment Prize and Khon Kaen University academic English language test datasets and presents an approach that leverages variants of the long short-term memory (LSTM) network to learn features, with results compared using the Kappa coefficient. The findings demonstrate that the proposed framework and approach, which involve joint learning of different essay representations, yield significant benefits and achieve results comparable to state-of-the-art deep learning models. These results suggest that the novel text representation proposed in this paper holds promise as a new and effective choice for assessing the writing tasks of non-native speakers. The results of this study can be applied to advance educational assessment practices and to promote equitable opportunities for language learners worldwide by enhancing the evaluation process for non-native speakers.
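The abstract describes scoring essays with LSTM variants and measuring agreement with human raters via the Kappa coefficient. Below is a minimal sketch of that general setup, assuming TensorFlow/Keras and scikit-learn; the vocabulary size, score range, layer sizes, use of a bidirectional LSTM, and quadratic weighting of Kappa are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
import tensorflow as tf
from sklearn.metrics import cohen_kappa_score

VOCAB_SIZE = 10_000  # assumed vocabulary size
MAX_LEN = 500        # assumed maximum essay length in tokens
NUM_SCORES = 6       # assumed discrete score range 0..5


def build_model() -> tf.keras.Model:
    """Embedding -> bidirectional LSTM -> softmax over discrete scores."""
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(VOCAB_SIZE, 128),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(NUM_SCORES, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


def quadratic_weighted_kappa(y_true, y_pred) -> float:
    """Agreement between human and predicted scores (quadratic weighted Kappa)."""
    return cohen_kappa_score(y_true, y_pred, weights="quadratic")


if __name__ == "__main__":
    # Toy stand-in data: token-id sequences and human-assigned scores.
    x = np.random.randint(0, VOCAB_SIZE, size=(256, MAX_LEN))
    y = np.random.randint(0, NUM_SCORES, size=(256,))

    model = build_model()
    model.fit(x, y, epochs=1, batch_size=32, verbose=0)
    preds = model.predict(x, verbose=0).argmax(axis=1)
    print("QWK on toy data:", quadratic_weighted_kappa(y, preds))
```

In practice the essays would be tokenised against a fixed vocabulary and the model trained per writing prompt; quadratic weighted Kappa is the agreement metric commonly reported on the Automated Student Assessment Prize benchmark.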
Pages: 17