Augmenting human innovation teams with artificial intelligence: Exploring transformer-based language models

Cited: 115
Authors
Bouschery, Sebastian G. [1 ,3 ]
Blazevic, Vera [1 ,2 ]
Piller, Frank T. [1 ,3 ]
Affiliations
[1] Rhein Westfal TH Aachen, Sch Business & Econ, Aachen, Germany
[2] Radboud Univ Nijmegen, Dept Mkt, Nijmegen, Netherlands
[3] Rhein Westfal TH Aachen, Sch Business & Econ, Templergraben 55, D-52056 Aachen, Germany
Keywords
artificial intelligence; GPT-3; hybrid intelligence; innovation teams; prompt engineering; transformer-based language models; PERFORMANCE; CREATIVITY; KNOWLEDGE; SEARCH; IDEA
DOI
10.1111/jpim.12656
Chinese Library Classification (CLC)
F [Economics]
Discipline Classification Code
02
Abstract
Transformer-based language models in artificial intelligence (AI) have seen increasing adoption across industries and have led to significant productivity advances in business operations. This article explores how these models can be used to augment human innovation teams in the new product development (NPD) process, allowing larger problem and solution spaces to be explored and ultimately leading to higher innovation performance. The article proposes an AI-augmented double diamond framework to structure the exploration of how these models can assist in NPD tasks such as text summarization, sentiment analysis, and idea generation. It also discusses the limitations of the technology and the potential impact of AI on established NPD practices. The article establishes a research agenda for the use of language models in this area and for the role of humans in hybrid innovation teams. (Note: Following the idea of this article, GPT-3 alone generated this abstract. Only minor formatting edits were performed by humans.)
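To make the prompt-based workflow described in the abstract concrete, the following minimal Python sketch shows how a language model could support two of the NPD tasks mentioned (idea generation and text summarization). It is a sketch under stated assumptions: it uses the OpenAI Python SDK (v1.x) with an API key in the environment, and the model name, prompts, and helper function are illustrative, not taken from the article.

    # Minimal sketch of LM-assisted NPD tasks; assumes `pip install openai`
    # (v1.x SDK) and OPENAI_API_KEY set in the environment. The model name
    # and prompts below are illustrative assumptions, not from the article.
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    def ask(prompt: str, temperature: float = 0.9) -> str:
        """Send one prompt to a chat-capable model and return its reply."""
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # assumed model; the article discusses GPT-3
            messages=[{"role": "user", "content": prompt}],
            temperature=temperature,
        )
        return response.choices[0].message.content

    # Divergent phase: explore a larger solution space than the team alone.
    ideas = ask("List ten unconventional product ideas for reusable packaging.")

    # Convergent phase: condense raw customer feedback into key unmet needs.
    summary = ask("Summarize the key unmet needs in this feedback: ...",
                  temperature=0.2)

    print(ideas)
    print(summary)

The temperature parameter loosely mirrors the two phases of the double diamond: a higher value encourages divergent, varied ideas, while a lower value yields more focused, convergent output such as summaries.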
Pages: 139-153
Number of pages: 15
Related Papers
50 records in total
  • [31] Accelerating Training of Transformer-Based Language Models with Progressive Layer Dropping
    Zhang, Minjia
    He, Yuxiong
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS (NEURIPS 2020), 2020, 33
  • [32] Arlo: Serving Transformer-based Language Models with Dynamic Input Lengths
    Tan, Xin
    Li, Jiamin
    Yang, Yitao
    Li, Jingzong
    Xu, Hong
    53RD INTERNATIONAL CONFERENCE ON PARALLEL PROCESSING, ICPP 2024, 2024, : 367 - 376
  • [33] Enhancing Address Data Integrity using Transformer-Based Language Models
    Kurklu, Omer Faruk
    Akagündüz, Erdem
    32ND IEEE SIGNAL PROCESSING AND COMMUNICATIONS APPLICATIONS CONFERENCE, SIU 2024, 2024
  • [34] Named Entity Recognition in Cyber Threat Intelligence Using Transformer-based Models
    Evangelatos, Pavlos
    Iliou, Christos
    Mavropoulos, Thanassis
    Apostolou, Konstantinos
    Tsikrika, Theodora
    Vrochidis, Stefanos
    Kompatsiaris, Ioannis
    PROCEEDINGS OF THE 2021 IEEE INTERNATIONAL CONFERENCE ON CYBER SECURITY AND RESILIENCE (IEEE CSR), 2021, : 348 - 353
  • [35] The intersection between language acquisition and artificial intelligence: exploring the potential of natural language models
    Borjabad, Salud Adelaida Flores
    AMAZONIA INVESTIGA, 2023, 12 (62): 7 - 9
  • [36] Large Language Models and Artificial Intelligence in Psychiatry Medical Education: Augmenting But Not Replacing Best Practices
    Torous, John
    Greenberg, William
    ACADEMIC PSYCHIATRY, 2025, 49 (01) : 22 - 24
  • [37] EEG Classification with Transformer-Based Models
    Sun, Jiayao
    Xie, Jin
    Zhou, Huihui
    2021 IEEE 3RD GLOBAL CONFERENCE ON LIFE SCIENCES AND TECHNOLOGIES (IEEE LIFETECH 2021), 2021, : 92 - 93
  • [38] Quantifying the Bias of Transformer-Based Language Models for African American English in Masked Language Modeling
    Salutari, Flavia
    Ramos, Jerome
    Rahmani, Hossein A.
    Linguaglossa, Leonardo
    Lipani, Aldo
    ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PAKDD 2023, PT I, 2023, 13935 : 532 - 543
  • [39] Incorporating Medical Knowledge to Transformer-based Language Models for Medical Dialogue Generation
    Naseem, Usman
    Bandi, Ajay
    Raza, Shaina
    Rashid, Junaid
    Chakravarthi, Bharathi Raja
    PROCEEDINGS OF THE 21ST WORKSHOP ON BIOMEDICAL LANGUAGE PROCESSING (BIONLP 2022), 2022, : 110 - 115
  • [40] Task-Specific Transformer-Based Language Models in Health Care: Scoping Review
    Cho, Ha Na
    Jun, Tae Joon
    Kim, Young-Hak
    Kang, Heejun
    Ahn, Imjin
    Gwon, Hansle
    Kim, Yunha
    Seo, Jiahn
    Choi, Heejung
    Kim, Minkyoung
    Han, Jiye
    Kee, Gaeun
    Park, Seohyun
    Ko, Soyoung
    JMIR MEDICAL INFORMATICS, 2024, 12