CHATDESIGN: BOOTSTRAPPING GENERATIVE FLOOR PLAN DESIGN WITH PRE-TRAINED LARGE LANGUAGE MODELS

Cited: 0
Authors
Li, Jinmin [1 ,2 ]
Luo, Yilu [1 ]
Lu, Shuai [1 ]
Zhang, Jingyun [2 ]
Wang, Jun [2 ]
Guo, Rizen [2 ]
Wang, Shaoming [2 ]
Affiliations
[1] Tsinghua Univ, Shenzhen Int Grad Sch, Shenzhen, Peoples R China
[2] Tencent, Wechat Lab 33, Shenzhen, Peoples R China
Funding
US National Science Foundation;
Keywords
floor plan generation; large language models; user interactions; automatic design; deep learning; pre-train models;
DOI
Not available
Chinese Library Classification (CLC)
TU [Building Science];
Discipline code
0813;
Abstract
Large language models (LLMs) have achieved remarkable success across domains, revolutionizing tasks such as language translation, text generation, and question answering. Generating floor plan designs, however, poses a unique challenge: it demands the fulfilment of intricate spatial and relational constraints. In this paper, we propose ChatDesign, an approach that leverages pre-trained LLMs to generate floor plan designs from natural language descriptions and supports iterative modification through user interaction. By processing user input text with a pre-trained LLM and passing the result through a decoder, we generate regression parameters and floor plans tailored to the user's specific needs. Our approach incorporates an iterative refinement process that optimizes the model output by conditioning on both the input text and previous results. Throughout these interactions, we employ several strategic techniques to keep the generated design images aligned with the user's requirements. The proposed approach is evaluated through rigorous experiments, including user studies, demonstrating its feasibility and efficacy. The empirical results consistently show that our method outperforms existing approaches and can generate floor plans that rival those created by human designers. Our code will be available at https://github.com/THUKingmin/ChatDesign.
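The pipeline the abstract describes (encode the user's text with a pre-trained LLM, decode the encoding into regression parameters that define room geometry, then refine iteratively from user feedback) can be sketched as follows. This is a minimal, hypothetical illustration only: `encode_with_llm`, `decode_to_rooms`, and `refine` are stand-ins invented here, not the paper's actual components, and the "encoder" is a trivial placeholder rather than a real LLM.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Room:
    """A room as an axis-aligned rectangle, the kind of regression
    target a floor-plan decoder might produce."""
    name: str
    x: float  # left edge
    y: float  # bottom edge
    w: float  # width
    h: float  # height

def encode_with_llm(text: str) -> List[float]:
    # Placeholder for a pre-trained LLM encoder: hashes tokens into a
    # fixed-length feature vector purely for demonstration.
    feats = [0.0] * 4
    for i, tok in enumerate(text.lower().split()):
        feats[i % 4] += (hash(tok) % 100) / 100.0
    return feats

def decode_to_rooms(feats: List[float], names: List[str]) -> List[Room]:
    # Placeholder decoder: lays rooms out side by side, with widths
    # driven by the features. A real decoder would regress learned
    # parameters under spatial and relational constraints.
    rooms, x = [], 0.0
    for i, name in enumerate(names):
        w = 3.0 + feats[i % len(feats)]
        rooms.append(Room(name, x, 0.0, w, 3.0))
        x += w
    return rooms

def refine(rooms: List[Room], feedback: str) -> List[Room]:
    # Iterative refinement step: apply a user request such as
    # "enlarge kitchen" to the previous result.
    for room in rooms:
        if room.name in feedback and "enlarge" in feedback:
            room.w *= 1.2
            room.h *= 1.2
    return rooms

plan = decode_to_rooms(encode_with_llm("a two-bedroom flat with open kitchen"),
                       ["kitchen", "bedroom", "bedroom2"])
plan = refine(plan, "enlarge kitchen")
```

The loop structure (generate, show, take feedback, regenerate conditioned on the previous result) is the part that corresponds to the abstract; everything numeric here is illustrative.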
Pages: 99-108
Page count: 10