On Teaching Novices Computational Thinking by Utilizing Large Language Models Within Assessments

Times Cited: 0
Authors
Hassan, Mohammed [1 ]
Chen, Yuxuan [1 ]
Denny, Paul [2 ]
Zilles, Craig [1 ]
Affiliations
[1] Univ Illinois, Urbana, IL 61801 USA
[2] Univ Auckland, Auckland, New Zealand
Keywords
Large Language Models; code comprehension; debuggers; execution
DOI
Not available
Chinese Library Classification (CLC)
TP39 [Computer Applications]
Discipline Classification Codes
081203; 0835
Abstract
Novice programmers often struggle to develop computational thinking (CT) skills in introductory programming courses. This study investigates the use of Large Language Models (LLMs) to provide scalable, strategy-driven feedback to teach CT. Through think-aloud interviews with 17 students solving code comprehension and writing tasks, we found that LLMs effectively guided decomposition and program development tool usage. Challenges included students seeking direct answers or pasting feedback without considering suggested strategies. We discuss how instructors should integrate LLMs into assessments to support students' learning of CT.
Pages: 471-477
Page count: 7