On Teaching Novices Computational Thinking by Utilizing Large Language Models Within Assessments

Cited by: 0
Authors
Hassan, Mohammed [1 ]
Chen, Yuxuan [1 ]
Denny, Paul [2 ]
Zilles, Craig [1 ]
Affiliations
[1] Univ Illinois, Urbana, IL 61801 USA
[2] Univ Auckland, Auckland, New Zealand
Keywords
Large Language Models; code comprehension; debuggers; execution
DOI
Not available
Chinese Library Classification (CLC)
TP39 [Computer Applications]
Subject Classification Codes
081203; 0835
Abstract
Novice programmers often struggle to develop computational thinking (CT) skills in introductory programming courses. This study investigates the use of Large Language Models (LLMs) to provide scalable, strategy-driven feedback for teaching CT. Through think-aloud interviews with 17 students solving code comprehension and code writing tasks, we found that LLMs effectively guided problem decomposition and the use of program development tools. Challenges included students seeking direct answers or pasting the LLM's feedback without considering the suggested strategies. We discuss how instructors should integrate LLMs into assessments to support students' learning of CT.
Pages: 471-477
Number of pages: 7
Related Papers
50 records in total
  • [1] On Teaching Novices Computational Thinking by Utilizing Large Language Models Within Assessments
    Hassan, Mohammed
    Chen, Yuxuan
    Denny, Paul
    Zilles, Craig
    PROCEEDINGS OF THE 56TH ACM TECHNICAL SYMPOSIUM ON COMPUTER SCIENCE EDUCATION, SIGCSE TS 2025, VOL 1, 2025, : 471 - 477
  • [2] Research on the teaching of programming language based on Computational Thinking
    Lu Ying
    Liu Pingping
    PROCEEDINGS OF THE 2017 INTERNATIONAL CONFERENCE ON SOCIAL SCIENCE, EDUCATION AND HUMANITIES RESEARCH (ICSEHR 2017), 2017, 152 : 84 - 87
  • [3] Training for Computational Thinking Capability on Programming Language Teaching
    Zhang Yinnan
    Luo Chaosheng
    PROCEEDINGS OF 2012 7TH INTERNATIONAL CONFERENCE ON COMPUTER SCIENCE & EDUCATION, VOLS I-VI, 2012, : 1804 - 1809
  • [4] Utilizing large language models for EFL essay grading: An examination of reliability and validity in rubric-based assessments
    Yavuz, Fatih
    Celik, Ozgur
    Celik, Gamze Yavas
    BRITISH JOURNAL OF EDUCATIONAL TECHNOLOGY, 2025, 56 (01) : 150 - 166
  • [5] BRAINTEASER: Lateral Thinking Puzzles for Large Language Models
    Jiang, Yifan
    Ilievski, Filip
    Ma, Kaixin
    Sourati, Zhivar
    2023 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2023), 2023, : 14317 - 14332
  • [6] Teaching Large Language Models to Translate with Comparison
    Zeng, Jiali
    Meng, Fandong
    Yin, Yongjing
    Zhou, Jie
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 17, 2024, : 19488 - 19496
  • [7] Utilizing Large Language Models in Tribal Emergency Management
    Gupta, Srishti
    Chen, Yu-Che
    Tsai, Chun-Hua
    COMPANION PROCEEDINGS OF 2024 29TH ANNUAL CONFERENCE ON INTELLIGENT USER INTERFACES, IUI 2024 COMPANION, 2024, : 1 - 6
  • [8] A BOOK ON DEVELOPING CRITICAL THINKING SKILLS WITHIN BULGARIAN LANGUAGE TEACHING
    Goranova, Iliana
    BULGARSKI EZIK I LITERATURA-BULGARIAN LANGUAGE AND LITERATURE, 2014, 56 (03): : 314 - 318
  • [9] Enhancing health assessments with large language models: A methodological approach
    Wang, Xi
    Zhou, Yujia
    Zhou, Guangyu
    APPLIED PSYCHOLOGY-HEALTH AND WELL BEING, 2025, 17 (01)
  • [10] Exploring the Potential of Large Language Models in Computational Argumentation
    Chen, Guizhen
    Cheng, Liying
    Tuan, Luu Anh
    Bing, Lidong
    PROCEEDINGS OF THE 62ND ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, VOL 1: LONG PAPERS, 2024, : 2309 - 2330