A Proposed Framework for Human-like Language Processing of ChatGPT in Academic Writing

Cited by: 0
Authors
Mahyoob M. [1 ,2 ]
Algaraady J. [3 ]
Alblwi A. [1 ]
Affiliations
[1] Taibah University, Medina
[2] Technical Community College, Taiz
[3] Taiz University, Taiz
Keywords
academic writing; ChatGPT; emerging technologies; LLMs; natural and artificial language processing; OpenAI;
DOI
10.3991/ijet.v18i14.41725
Abstract
This study proposes a framework for analyzing and measuring the capabilities of ChatGPT as a generic language model, with the aim of examining how effectively this emerging artificial intelligence tool generates academic writing. The proposed framework consists of six principles relating to artificial language processing (Relatedness, Adequacy, Limitation, Authenticity, Cognition, and Redundancy), which probe the accuracy and proficiency of algorithm-generated writing. The researchers used ChatGPT to obtain academic texts and paragraphs in different genres as responses to text-based academic queries, and conducted a critical content analysis of these texts against the framework principles. The results show that despite ChatGPT's exceptional capabilities, serious defects are evident, raising many issues in academic writing. The major issues include information repetition, nonfactual inferences, illogical reasoning, fake references, hallucination, and lack of pragmatic interpretation. The proposed framework can serve as a valuable guideline for researchers and practitioners interested in analyzing and evaluating the machine-generated language of emerging AI language models. © 2023 by the authors of this article. Published under CC-BY.
Pages: 282 - 293
Page count: 11
Related papers
50 records
  • [1] Human-like problem-solving abilities in large language models using ChatGPT
    Orru, Graziella
    Piarulli, Andrea
    Conversano, Ciro
    Gemignani, Angelo
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2023, 6
  • [2] Human-like intuitive behavior and reasoning biases emerged in large language models but disappeared in ChatGPT
    Hagendorff, Thilo
    Fabi, Sarah
    Kosinski, Michal
    NATURE COMPUTATIONAL SCIENCE, 2023, 3 (10): 833 - 838
  • [4] Skimming, Locating, then Perusing: A Human-Like Framework for Natural Language Video Localization
    Liu, Daizong
    Hu, Wei
    PROCEEDINGS OF THE 30TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2022, 2022, : 4536 - 4545
  • [5] Embrace responsible ChatGPT usage to overcome language barriers in academic writing
    Kayaalp, M. Enes
    Ollivier, Matthieu
    Winkler, Philipp W.
    Dahmen, Jari
    Musahl, Volker
    Hirschmann, Michael T.
    Karlsson, Jon
    KNEE SURGERY SPORTS TRAUMATOLOGY ARTHROSCOPY, 2024, 32 (01) : 5 - 9
  • [6] USE AND MISUSE OF CHATGPT IN ACADEMIC WRITING AMONG THE ENGLISH LANGUAGE STUDENTS
    Jankovic, Anita
    Kulic, Danijela
    INFORMATION TECHNOLOGIES AND LEARNING TOOLS, 2025, 105 (01) : 178 - 188
  • [7] GNOSTRON: a framework for human-like machine understanding
    Yufik, Yan M.
    2018 IEEE SYMPOSIUM SERIES ON COMPUTATIONAL INTELLIGENCE (IEEE SSCI), 2018, : 136 - 145
  • [8] AI language models in human reproduction research: exploring ChatGPT's potential to assist academic writing
    Semrl, N.
    Feigl, S.
    Taumberger, N.
    Bracic, T.
    Fluhr, H.
    Blockeel, C.
    Kollmann, M.
    HUMAN REPRODUCTION, 2023, 38 (12) : 2281 - 2288
  • [9] ChatGPT as a Commenter to the News: Can LLMs Generate Human-Like Opinions?
    Tseng, Rayden
    Verberne, Suzan
    van der Putten, Peter
    DISINFORMATION IN OPEN ONLINE MEDIA, MISDOOM 2023, 2023, 14397 : 160 - 174
  • [10] ChatGPT: More Human-Like Than Computer-Like, but Not Necessarily in a Good Way
    Azaria, Amos
    2023 IEEE 35TH INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE, ICTAI, 2023, : 468 - 473