Putting the Ghost in the Machine: Emulating Cognitive Style in Large Language Models

Cited: 0
Authors
Agarwal, Vasvi [1 ]
Jablokow, Kathryn [2 ]
McComb, Christopher [1 ]
Affiliations
[1] Carnegie Mellon Univ, Dept Mech Engn, 5000 Forbes Ave,4126 Wean Hall, Pittsburgh, PA 15213 USA
[2] Penn State Univ, Sch Engn Design & Innovat, 30 East,Swedesford Rd, Malvern, PA 19355 USA
Keywords
machine learning for engineering applications; Kirton's Adaption-Innovation theory; large language models; artificial intelligence in design; prompt engineering; generative AI; ADAPTION-INNOVATION THEORY; IDEAS; CREATIVITY; ADAPTORS
DOI
10.1115/1.4066857
CLC (Chinese Library Classification) number
TP39 [Computer applications]
Discipline classification code
081203; 0835
Abstract
Large Language Models (LLMs) have emerged as a pivotal technology, and their significance in design lies in their transformative potential to support engineers and collaborate with design teams throughout the design process. However, it is not known whether LLMs can emulate the cognitive and social attributes that are known to be important during design, such as cognitive style. This research evaluates how effectively LLMs can emulate aspects of Kirton's Adaption-Innovation theory, which characterizes individual preferences in problem-solving. Specifically, we use LLMs to generate solutions for three design problems using two different cognitive style prompts (adaptively framed and innovatively framed). Solutions are evaluated with respect to feasibility and paradigm relatedness, which are known to have discriminative value in other studies of cognitive style. We found that solutions generated using the adaptive prompt tend to display higher feasibility and to be paradigm-preserving, while solutions generated using the innovative prompt tend to be more paradigm-modifying. This aligns with prior work and with expectations for design behavior based on Kirton's Adaption-Innovation theory. Ultimately, these results demonstrate that LLMs can be prompted to accurately emulate cognitive style.
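The abstract describes prompting an LLM with an adaptively framed versus an innovatively framed version of the same design problem. As a minimal sketch of how such style framing might look (the prompt wording and function names below are invented for illustration and are not the study's actual prompts):

```python
# Hypothetical sketch of cognitive-style prompt framing, loosely
# following Kirton's Adaption-Innovation contrast. Wording is
# illustrative only, not reproduced from the paper.

STYLE_FRAMES = {
    "adaptive": (
        "Solve the following problem by refining and improving the "
        "existing approach. Favor feasible, incremental, well-proven "
        "solutions that work within the current paradigm."
    ),
    "innovative": (
        "Solve the following problem by challenging the existing "
        "approach. Favor novel solutions that modify or break the "
        "current paradigm, even at some cost to feasibility."
    ),
}

def build_prompt(style: str, design_problem: str) -> str:
    """Prepend a cognitive-style instruction to a design problem
    statement, producing the full prompt sent to the LLM."""
    return f"{STYLE_FRAMES[style]}\n\nDesign problem: {design_problem}"

# The same design problem yields two differently framed prompts:
adaptive_prompt = build_prompt("adaptive", "Design a device to fold laundry.")
innovative_prompt = build_prompt("innovative", "Design a device to fold laundry.")
print(adaptive_prompt.splitlines()[0])
```

Under this framing, the two prompts differ only in the style instruction, so any systematic difference in the generated solutions (feasibility, paradigm relatedness) can be attributed to the cognitive-style framing rather than to the problem statement.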
Pages: 8