TaleBrush: Sketching Stories with Generative Pretrained Language Models

Cited by: 57
Authors
Chung, John Joon Young [1 ]
Kim, Wooseok [2 ]
Yoo, Kang Min [3 ]
Lee, Hwaran [3 ]
Adar, Eytan [1 ]
Chang, Minsuk [3 ]
Affiliations
[1] Univ Michigan, Ann Arbor, MI 48109 USA
[2] Korea Adv Inst Sci & Technol, Daejeon, South Korea
[3] Naver AI LAB, Seongnam, South Korea
Keywords
story writing; sketching; creativity support tool; story generation; controlled generation; plot
DOI
10.1145/3491102.3501819
CLC Classification Number
TP [automation technology; computer technology]
Discipline Classification Code
0812
Abstract
While advanced text generation algorithms (e.g., GPT-3) have enabled writers to co-create stories with an AI, guiding the narrative remains a challenge. Existing systems often rely on simple turn-taking between the writer and the AI, leaving writers unsupported in intuitively understanding the AI's actions or steering the iterative generation. We introduce TaleBrush, a generative story ideation tool that uses line-sketching interactions with a GPT-based language model for control and sensemaking of a protagonist's fortune in co-created stories. Our empirical evaluation found that the pipeline reliably controls story generation while maintaining the novelty of generated sentences. In a user study with 14 participants of diverse writing experience, participants successfully leveraged sketching to iteratively explore and write stories matching their intentions for the character's fortune, while taking inspiration from generated stories. We conclude with a reflection on how sketching interactions can facilitate the iterative human-AI co-creation process.
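For illustration only: the abstract describes conditioning generation on a sketched "fortune" curve, and the minimal Python sketch below shows one way such a curve could be discretized into per-sentence control phrases prepended to a prompt. The names (discretize, build_prompt, generate_sentence), the fortune labels, and the prompt format are assumptions made here for the example; they are not the pipeline published in the paper.

    # Illustrative sketch (NOT the authors' pipeline): turn a sketched fortune
    # curve into per-sentence control phrases and condition each generation step.
    from typing import Callable, List

    FORTUNE_LEVELS = ["very unlucky", "unlucky", "neutral", "lucky", "very lucky"]

    def discretize(curve: List[float]) -> List[str]:
        """Map sketched y-values in [0, 1] (one per sentence) to fortune labels."""
        top = len(FORTUNE_LEVELS) - 1
        return [FORTUNE_LEVELS[min(int(y * len(FORTUNE_LEVELS)), top)] for y in curve]

    def build_prompt(story_so_far: str, character: str, fortune: str) -> str:
        """Prepend a control phrase describing the protagonist's current fortune."""
        return f"{story_so_far}\n[Next sentence: {character} is {fortune}.]\n"

    def sketch_story(curve: List[float], character: str,
                     generate_sentence: Callable[[str], str]) -> str:
        """Generate one sentence per sketched point, each conditioned on fortune."""
        story = ""
        for fortune in discretize(curve):
            story += generate_sentence(build_prompt(story, character, fortune)) + " "
        return story.strip()

    if __name__ == "__main__":
        # Stub in place of a real GPT call, so the sketch runs standalone.
        stub = lambda prompt: "(model output here)"
        print(sketch_story([0.2, 0.5, 0.9], "Mira", stub))

A real system would replace the stub with a call to a hosted or fine-tuned language model; consult the paper itself for the actual control mechanism TaleBrush uses.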
Pages: 19
Related Papers
50 items in total
  • [31] A Survey of Sentiment Analysis Based on Pretrained Language Models
    Sun, Kaili
    Luo, Xudong
    Luo, Michael Y.
    2022 IEEE 34TH INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE, ICTAI, 2022: 1239-1244
  • [32] Large Product Key Memory for Pretrained Language Models
    Kim, Gyuwan
    Jung, Tae-Hwan
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2020, 2020: 4060-4069
  • [33] A study of Turkish emotion classification with pretrained language models
    Ucan, Alaettin
    Dorterler, Murat
    Akcapinar Sezer, Ebru
    JOURNAL OF INFORMATION SCIENCE, 2022, 48(06): 857-865
  • [34] Can Pretrained Language Models (Yet) Reason Deductively?
    Yuan, Zhangdie
    Hu, Songbo
    Vulic, Ivan
    Korhonen, Anna
    Meng, Zaiqiao
    17TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EACL 2023, 2023: 1447-1462
  • [35] Data Augmentation for Spoken Language Understanding via Pretrained Language Models
    Peng, Baolin
    Zhu, Chenguang
    Zeng, Michael
    Gao, Jianfeng
    INTERSPEECH 2021, 2021: 1219-1223
  • [36] Topic Classification for Political Texts with Pretrained Language Models
    Wang, Yu
    POLITICAL ANALYSIS, 2023, 31(04): 662-668
  • [37] Masking as an Efficient Alternative to Finetuning for Pretrained Language Models
    Zhao, Mengjie
    Lin, Tao
    Mi, Fei
    Jaggi, Martin
    Schutze, Hinrich
    PROCEEDINGS OF THE 2020 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP), 2020: 2226-2241
  • [38] Probing Pretrained Language Models for Semantic Attributes and their Values
    Beloucif, Meriem
    Biemann, Chris
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, EMNLP 2021, 2021: 2554-2559
  • [40] Developing Pretrained Language Models for Turkish Biomedical Domain
    Turkmen, Hazal
    Dikenelli, Oguz
    Eraslan, Cenk
    Calli, Mehmet Cem
    Ozbek, Suha Sureyya
    2022 IEEE 10TH INTERNATIONAL CONFERENCE ON HEALTHCARE INFORMATICS (ICHI 2022), 2022: 597-598