Text2Mesh: Text-Driven Neural Stylization for Meshes

Cited by: 120
Authors
Michel, Oscar [1 ]
Bar-On, Roi [1 ,2 ]
Liu, Richard [1 ]
Benaim, Sagie [2 ]
Hanocka, Rana [1 ]
Affiliations
[1] Univ Chicago, Chicago, IL 60637 USA
[2] Tel Aviv Univ, Tel Aviv, Israel
Keywords
DOI
10.1109/CVPR52688.2022.01313
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In this work, we develop intuitive controls for editing the style of 3D objects. Our framework, Text2Mesh, stylizes a 3D mesh by predicting color and local geometric details which conform to a target text prompt. We consider a disentangled representation of a 3D object using a fixed mesh input (content) coupled with a learned neural network, which we term a neural style field network (NSF). In order to modify style, we obtain a similarity score between a text prompt (describing style) and a stylized mesh by harnessing the representational power of CLIP. Text2Mesh requires neither a pre-trained generative model nor a specialized 3D mesh dataset. It can handle low-quality meshes (non-manifold, boundaries, etc.) with arbitrary genus, and does not require UV parameterization. We demonstrate the ability of our technique to synthesize a myriad of styles over a wide variety of 3D meshes. Our code and results are available on our project webpage: https://threedle.github.io/text2mesh/.
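The abstract describes an optimization loop: a small neural style field maps each vertex to a color and a displacement along its normal, rendered views of the styled mesh are scored against the text prompt with CLIP, and the field is updated by gradient descent. The sketch below illustrates that idea only; it is not the authors' implementation. It assumes PyTorch and the official `clip` package, the network sizes are illustrative, and `render_views` is a hypothetical placeholder for a differentiable renderer (e.g., PyTorch3D or Kaolin in practice).

```python
# Minimal sketch of a Text2Mesh-style loop (illustrative, not the official code).
import torch
import torch.nn as nn
import clip


class NeuralStyleField(nn.Module):
    """MLP mapping a vertex position to an RGB color and a scalar displacement
    along the vertex normal (the 'style'); layer sizes are assumptions."""

    def __init__(self, hidden: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.color_head = nn.Linear(hidden, 3)  # per-vertex RGB
        self.disp_head = nn.Linear(hidden, 1)   # per-vertex displacement

    def forward(self, verts: torch.Tensor):
        h = self.backbone(verts)
        color = torch.sigmoid(self.color_head(h))    # colors in [0, 1]
        disp = 0.1 * torch.tanh(self.disp_head(h))   # small geometric detail
        return color, disp


def clip_style_loss(images, text_feat, clip_model):
    """1 - cosine similarity between rendered views and the text embedding."""
    img_feat = clip_model.encode_image(images)
    img_feat = img_feat / img_feat.norm(dim=-1, keepdim=True)
    return 1.0 - (img_feat @ text_feat.T).mean()


def stylize(verts, normals, faces, prompt, render_views, steps=500):
    # `render_views` is hypothetical: it should return differentiable
    # 224x224 RGB renderings of the colored, displaced mesh.
    device = verts.device
    clip_model, _ = clip.load("ViT-B/32", device=device)
    with torch.no_grad():
        text_feat = clip_model.encode_text(clip.tokenize([prompt]).to(device))
        text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)

    nsf = NeuralStyleField().to(device)
    opt = torch.optim.Adam(nsf.parameters(), lr=5e-4)
    for _ in range(steps):
        color, disp = nsf(verts)
        styled_verts = verts + disp * normals            # displace along normals
        images = render_views(styled_verts, faces, color)
        loss = clip_style_loss(images, text_feat, clip_model)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return nsf
```

The paper additionally averages the CLIP loss over multiple rendered views and augmentations for robustness; this sketch omits those details for brevity.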
Pages: 13482 - 13492 (11 pages)
Related Papers
50 records in total
  • [1] CLIP-Actor: Text-Driven Recommendation and Stylization for Animating Human Meshes
    Kim, Youwang
    Kim, Ji-Yeon
    Oh, Tae-Hyun
    COMPUTER VISION - ECCV 2022, PT III, 2022, 13663 : 173 - 191
  • [2] NeRF-Art: Text-Driven Neural Radiance Fields Stylization
    Wang, Can
    Jiang, Ruixiang
    Chai, Menglei
    He, Mingming
    Chen, Dongdong
    Liao, Jing
    IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, 2024, 30 (08) : 4983 - 4996
  • [3] ConIS: controllable text-driven image stylization with semantic intensity
    Yang, Gaoming
    Li, Changgeng
    Zhang, Ji
    MULTIMEDIA SYSTEMS, 2024, 30 (04)
  • [4] DiffStyler: Controllable Dual Diffusion for Text-Driven Image Stylization
    Huang, Nisha
    Zhang, Yuxin
    Tang, Fan
    Ma, Chongyang
    Huang, Haibin
    Dong, Weiming
    Xu, Changsheng
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2025, 36 (02) : 3370 - 3383
  • [5] Text2Scene: Text-driven Indoor Scene Stylization with Part-aware Details
    Hwang, Inwoo
    Kim, Hyeonwoo
    Kim, Young Min
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR, 2023, : 1890 - 1899
  • [6] ControlNeRF: Text-Driven 3D Scene Stylization via Diffusion Model
    Chen, Jiahui
    Yang, Chuanfeng
    Li, Kaiheng
    Wu, Qingqiang
    Hong, Qingqi
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING-ICANN 2024, PT II, 2024, 15017 : 395 - 406
  • [7] Explainable Text-Driven Neural Network for Stock Prediction
    Yang, Linyi
    Zhang, Zheng
    Xiong, Su
    Wei, Lirui
    Ng, James
    Xu, Lina
    Dong, Ruihai
    PROCEEDINGS OF 2018 5TH IEEE INTERNATIONAL CONFERENCE ON CLOUD COMPUTING AND INTELLIGENCE SYSTEMS (CCIS), 2018, : 441 - 445
  • [8] X-Mesh: Towards Fast and Accurate Text-driven 3D Stylization via Dynamic Textual Guidance
    Ma, Yiwei
    Zhang, Xiaoqing
    Sun, Xiaoshuai
    Ji, Jiayi
    Wang, Haowei
    Jiang, Guannan
    Zhuang, Weilin
    Ji, Rongrong
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 2737 - 2748
  • [9] Text2Performer: Text-Driven Human Video Generation
    Jiang, Yuming
    Yang, Shuai
    Koh, Tong Liang
    Wu, Wayne
    Loy, Chen Change
    Liu, Ziwei
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 22690 - 22700
  • [10] Text-Driven Video Prediction
    Song, Xue
    Chen, Jingjing
    Zhu, Bin
    Jiang, Yu-gang
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2024, 20 (09)