Enhancing Chinese Essay Discourse Logic Evaluation Through Optimized Fine-Tuning of Large Language Models

Citations: 0
Authors
Song, Jinwang [1 ]
Song, Yanxin [1 ]
Zhou, Guangyu [1 ]
Fu, Wenhui [1 ]
Zhang, Kunli [1 ]
Zan, Hongying [1 ]
Affiliations
[1] Zhengzhou Univ, Zhengzhou, Peoples R China
Keywords
Essay Evaluation; Large Language Models; Natural Language Processing
DOI
10.1007/978-981-97-9443-0_30
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Due to the high complexity and diversity of writing, automated essay evaluation systems face significant challenges. Large language models (LLMs), which represent the state of the art in semantic understanding for NLP, hold immense potential for advancing essay evaluation systems. In NLPCC 2024 Shared Task 4, Chinese Essay Discourse Logic Evaluation and Integration, we investigated improving LLMs' capabilities in evaluating essay logic, coherence, and quality. Considering the characteristics of the different subtasks, we adopted MRC-style instructions to optimize output formats and applied undersampling to address data imbalance. To improve efficiency and model performance, we explored LLM fine-tuning methods that decouple tasks and used similarity comparison to refine model outputs. Additionally, we employed noisy embedding fine-tuning to mitigate overfitting. Our approach achieved the top ranking in NLPCC 2024 Shared Task 4.
Pages: 342-352
Page count: 11
Related Papers
50 records
  • [1] Enhanced Discriminative Fine-Tuning of Large Language Models for Chinese Text Classification
    Song, Jinwang
    Zan, Hongying
    Zhang, Kunli
    2024 INTERNATIONAL CONFERENCE ON ASIAN LANGUAGE PROCESSING, IALP 2024, 2024, : 168 - 174
  • [2] Enhancing Chinese comprehension and reasoning for large language models: an efficient LoRA fine-tuning and tree of thoughts framework
    Chen, Songlin
    Wang, Weicheng
    Chen, Xiaoliang
    Zhang, Maolin
    Lu, Peng
    Li, Xianyong
    Du, Yajun
    JOURNAL OF SUPERCOMPUTING, 2025, 81 (01)
  • [3] Phased Instruction Fine-Tuning for Large Language Models
    Pang, Wei
    Zhou, Chuan
    Zhou, Xiao-Hua
    Wang, Xiaojie
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: ACL 2024, 2024, : 5735 - 5748
  • [4] HackMentor: Fine-Tuning Large Language Models for Cybersecurity
    Zhang, Jie
    Wen, Hui
    Deng, Liting
    Xin, Mingfeng
    Li, Zhi
    Li, Lun
    Zhu, Hongsong
    Sun, Limin
    2023 IEEE 22ND INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS, TRUSTCOM, BIGDATASE, CSE, EUC, ISCI 2023, 2024, : 452 - 461
  • [5] Personalized Large Language Models through Parameter Efficient Fine-Tuning Techniques
    Braga, Marco
    PROCEEDINGS OF THE 47TH INTERNATIONAL ACM SIGIR CONFERENCE ON RESEARCH AND DEVELOPMENT IN INFORMATION RETRIEVAL, SIGIR 2024, 2024, : 3076 - 3076
  • [6] Demystifying Instruction Mixing for Fine-tuning Large Language Models
    Wang, Renxi
    Li, Haonan
    Wu, Minghao
    Wang, Yuxia
    Han, Xudong
    Zhang, Chiyu
    Baldwin, Timothy
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 4: STUDENT RESEARCH WORKSHOP, 2024, : 86 - 93
  • [7] Getting it right: the limits of fine-tuning large language models
    Browning, Jacob
    ETHICS AND INFORMATION TECHNOLOGY, 2024, 26 (02)
  • [8] Scaling Federated Learning for Fine-Tuning of Large Language Models
    Hilmkil, Agrin
    Callh, Sebastian
    Barbieri, Matteo
    Sutfeld, Leon Rene
    Zec, Edvin Listo
    Mogren, Olof
    NATURAL LANGUAGE PROCESSING AND INFORMATION SYSTEMS (NLDB 2021), 2021, 12801 : 15 - 23
  • [9] Fine-tuning large language models for chemical text mining
    Zhang, Wei
    Wang, Qinggong
    Kong, Xiangtai
    Xiong, Jiacheng
    Ni, Shengkun
    Cao, Duanhua
    Niu, Buying
    Chen, Mingan
    Li, Yameng
    Zhang, Runze
    Wang, Yitian
    Zhang, Lehan
    Li, Xutong
    Xiong, Zhaoping
    Shi, Qian
    Huang, Ziming
    Fu, Zunyun
    Zheng, Mingyue
    CHEMICAL SCIENCE, 2024, 15 (27) : 10600 - 10611
  • [10] Fine-tuning large neural language models for biomedical natural language processing
    Tinn, Robert
    Cheng, Hao
    Gu, Yu
    Usuyama, Naoto
    Liu, Xiaodong
    Naumann, Tristan
    Gao, Jianfeng
    Poon, Hoifung
    PATTERNS, 2023, 4 (04)