Advances in text-guided 3D editing: a survey

Cited: 0
Authors
Lu, Lihua [1 ]
Li, Ruyang [1 ]
Zhang, Xiaohui [1 ]
Wei, Hui [1 ]
Du, Guoguang [1 ]
Wang, Binqiang [1 ]
Affiliations
[1] Shandong Mass Informat Technol Res Inst, Jinan, Peoples R China
Keywords
Text prompts; Text-guided 3D editing; Editing capacity; Neural radiance fields
DOI
10.1007/s10462-024-10937-6
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In 3D Artificial Intelligence Generated Content (AIGC), editing existing 3D assets to satisfy user prompts, rather than generating them from scratch, enables the creation of diverse, high-quality 3D assets while saving time and labor. More recently, text-guided 3D editing, which modifies 3D assets according to text prompts, has proven user-friendly and practical, sparking a surge of research in this field. In this survey, we comprehensively review the recent literature on text-guided 3D editing to answer two questions: What are the methodologies of existing text-guided 3D editing methods? How far has progress in text-guided 3D editing come? Specifically, we focus on text-guided 3D editing methods published in the past four years, delving deeply into their frameworks and principles. We then present a fundamental taxonomy in terms of editing strategy, optimization scheme, and 3D representation. Based on this taxonomy, we review recent advances in the field, considering factors such as editing scale, type, granularity, and perspective. In addition, we highlight four applications of text-guided 3D editing, namely texturing, style transfer, local editing of scenes, and insertion editing, to further explore 3D editing capacities with in-depth comparisons and discussions. Drawing on the insights gained from this survey, we discuss open challenges and future research directions. We hope this survey helps readers gain a deeper understanding of this exciting field and fosters further advances in text-guided 3D editing.
Pages: 61
Related Papers
50 records in total
  • [31] Text-Guided Image Inpainting
    Zhang, Zijian
    Zhao, Zhou
    Zhang, Zhu
    Huai, Baoxing
    Yuan, Jing
    MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020, : 4079 - 4087
  • [32] Text-Guided Multi-region Scene Image Editing Based on Diffusion Model
    Li, Ruichen
    Wu, Lei
    Wang, Changshuo
    Dong, Pei
    Li, Xin
    ADVANCED INTELLIGENT COMPUTING TECHNOLOGY AND APPLICATIONS, PT XI, ICIC 2024, 2024, 14872 : 229 - 240
  • [33] Text-Guided Image Editing Based on Post Score for Gaining Attention on Social Media
    Watanabe, Yuto
    Togo, Ren
    Maeda, Keisuke
    Ogawa, Takahiro
    Haseyama, Miki
    SENSORS, 2024, 24 (03)
  • [34] Portrait3D: Text-Guided High-Quality 3D Portrait Generation Using Pyramid Representation and GANs Prior
    Wu, Yiqian
    Xu, Hao
    Tang, Xiangjun
    Chen, Xien
    Tang, Siyu
    Zhang, Zhebin
    Li, Chen
    Jin, Xiaogang
    ACM TRANSACTIONS ON GRAPHICS, 2024, 43 (04):
  • [35] Benchmarking Robustness to Text-Guided Corruptions
    Mofayezi, Mohammadreza
    Medghalchi, Yasamin
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW, 2023, : 779 - 786
  • [36] Text-Guided Vector Graphics Customization
    Zhang, Peiying
    Zhao, Nanxuan
    Liao, Jing
    PROCEEDINGS OF THE SIGGRAPH ASIA 2023 CONFERENCE PAPERS, 2023,
  • [37] Text-Guided Synthesis of Eulerian Cinemagraphs
    Mahapatra, Aniruddha
    Siarohin, Aliaksandr
    Lee, Hsin-Ying
    Tulyakov, Sergey
    Zhu, Jun-Yan
    ACM TRANSACTIONS ON GRAPHICS, 2023, 42 (06):
  • [38] Text-Guided Synthesis of Crowd Animation
    Ji, Xuebo
    Pan, Zherong
    Gao, Xifeng
    Pan, Jia
    PROCEEDINGS OF SIGGRAPH 2024 CONFERENCE PAPERS, 2024,
  • [39] Text-Guided Automated Self Assessment
    Pirnay-Dummer, Pablo
    Ifenthaler, Dirk
    MULTIPLE PERSPECTIVES ON PROBLEM SOLVING AND LEARNING IN THE DIGITAL AGE, 2011, : 217 - 225
  • [40] Topology optimization with text-guided stylization
    Zhong, Shengze
    Punpongsanon, Parinya
    Iwai, Daisuke
    Sato, Kosuke
    STRUCTURAL AND MULTIDISCIPLINARY OPTIMIZATION, 2023, 66