Enhancing Chinese Essay Discourse Logic Evaluation Through Optimized Fine-Tuning of Large Language Models

Cited by: 0
Authors
Song, Jinwang [1 ]
Song, Yanxin [1 ]
Zhou, Guangyu [1 ]
Fu, Wenhui [1 ]
Zhang, Kunli [1 ]
Zan, Hongying [1 ]
Affiliations
[1] Zhengzhou Univ, Zhengzhou, Peoples R China
Keywords
Essay Evaluation; Large Language Models; Natural Language Processing;
DOI
10.1007/978-981-97-9443-0_30
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Due to the high complexity and diversity of writing, automated essay evaluation systems face significant challenges. Large language models (LLMs), which represent the current state of the art in semantic understanding within NLP, hold immense potential for advancing essay evaluation systems. In NLPCC 2024 Shared Task 4 (Chinese Essay Discourse Logic Evaluation and Integration), we investigated improving LLMs' capabilities in evaluating essay logic, coherence, and quality. Tailoring our approach to the characteristics of each subtask, we adopted MRC-style instructions to optimize output formats and applied undersampling to address data imbalance. To improve both efficiency and performance, we explored LLM fine-tuning methods that decouple the subtasks and used similarity comparison to refine model outputs. Additionally, we employed noisy embedding fine-tuning to mitigate overfitting. Our approach achieved the top ranking in NLPCC 2024 Shared Task 4.
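The noisy embedding fine-tuning mentioned in the abstract can be illustrated with a minimal sketch. Assuming it follows the common NEFTune-style formulation (an assumption; the record does not spell out the exact variant), uniform noise scaled by alpha / sqrt(seq_len * dim) is added to the input token embeddings during training. The function name and the alpha value below are illustrative, not taken from the paper:

```python
import numpy as np

def neftune_noise(embeddings: np.ndarray, alpha: float = 5.0) -> np.ndarray:
    """Add scaled uniform noise to token embeddings (NEFTune-style sketch).

    embeddings: (seq_len, dim) matrix of input token embeddings.
    alpha: noise magnitude hyperparameter; noise is drawn from
    Uniform(-1, 1) and scaled by alpha / sqrt(seq_len * dim).
    """
    seq_len, dim = embeddings.shape
    scale = alpha / np.sqrt(seq_len * dim)
    noise = np.random.uniform(-1.0, 1.0, size=embeddings.shape) * scale
    # Noise is applied only during fine-tuning; inference uses clean embeddings.
    return embeddings + noise

# Toy example: perturb a 4-token, 8-dimensional embedding matrix.
emb = np.zeros((4, 8))
noisy = neftune_noise(emb, alpha=5.0)
print(noisy.shape)  # (4, 8)
```

Because the noise scale shrinks with sequence length and embedding width, the perturbation stays small relative to the embeddings themselves, which is what makes this a regularizer against overfitting rather than a corruption of the input.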
Pages: 342-352 (11 pages)