IconShop: Text-Guided Vector Icon Synthesis with Autoregressive Transformers

Cited by: 3
Authors
Wu, Ronghuan [1]
Su, Wanchao [1,2]
Ma, Kede [1]
Liao, Jing [1]
Affiliations
[1] City University of Hong Kong, Hong Kong, People's Republic of China
[2] Monash University, Clayton, Victoria, Australia
Source
ACM TRANSACTIONS ON GRAPHICS | 2023, Vol. 42, No. 6
Keywords
SVG; Icon Synthesis; Vector Graphics Generation; Text-Guided Generation; Autoregressive Transformers;
DOI
10.1145/3618364
Chinese Library Classification
TP31 [Computer Software]
Discipline Classification Code
081202; 0835
Abstract
Scalable Vector Graphics (SVG) is a popular vector image format that offers good support for interactivity and animation. Despite its appealing characteristics, creating custom SVG content can be challenging for users due to the steep learning curve required to understand SVG grammars or get familiar with professional editing software. Recent advancements in text-to-image generation have inspired researchers to explore vector graphics synthesis using either image-based methods (i.e., text → raster image → vector graphics) combining text-to-image generation models with image vectorization, or language-based methods (i.e., text → vector graphics script) through pretrained large language models. Nevertheless, these methods suffer from limitations in terms of generation quality, diversity, and flexibility. In this paper, we introduce IconShop, a text-guided vector icon synthesis method using autoregressive transformers. The key to the success of our approach is to sequentialize and tokenize SVG paths (and textual descriptions as guidance) into a uniquely decodable token sequence. With that, we are able to exploit the sequence learning power of autoregressive transformers, while enabling both unconditional and text-conditioned icon synthesis. Through standard training to predict the next token on a large-scale vector icon dataset accompanied by textual descriptions, the proposed IconShop consistently exhibits better icon synthesis capability than existing image-based and language-based methods both quantitatively (using the FID and CLIP scores) and qualitatively (through formal subjective user studies). Meanwhile, we observe a dramatic improvement in generation diversity, which is validated by the objective Uniqueness and Novelty measures. More importantly, we demonstrate the flexibility of IconShop with multiple novel icon synthesis tasks, including icon editing, icon interpolation, icon semantic combination, and icon design auto-suggestion.
Pages: 14
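
The core mechanism summarized in the abstract above, serializing SVG path commands and quantized coordinates (with text tokens as a prefix) into a uniquely decodable sequence that a decoder-only transformer learns via next-token prediction, can be illustrated with a minimal sketch. The code below is not the authors' implementation: the vocabulary layout, the helpers encode_icon and TinyAutoregressiveModel, and all hyperparameters are hypothetical placeholders standing in for the tokenizer and model actually described in the paper.

# Illustrative sketch only, not the IconShop implementation. It shows the
# general idea: flatten SVG path commands plus quantized coordinates (with
# text tokens as a prefix) into one token sequence, and train a small
# decoder-only transformer with next-token prediction.
import torch
import torch.nn as nn

# Toy vocabulary: special tokens and path commands, then coordinate bins.
COMMANDS = ["<PAD>", "<BOS>", "<EOS>", "<TEXT>", "M", "L", "C", "Z"]
NUM_COORD_BINS = 128                       # coordinates quantized to a 128-level grid
VOCAB = {tok: i for i, tok in enumerate(COMMANDS)}
COORD_OFFSET = len(COMMANDS)               # coordinate tokens come after command tokens
VOCAB_SIZE = COORD_OFFSET + NUM_COORD_BINS

def encode_icon(path_cmds, text_token_ids):
    """Serialize (text, SVG path) into a single uniquely decodable token list.

    path_cmds: list of (command, [coords in 0..1]) tuples.
    text_token_ids: ids from some text tokenizer; a real system would keep
    these in a range disjoint from the path tokens.
    """
    seq = list(text_token_ids) + [VOCAB["<TEXT>"], VOCAB["<BOS>"]]
    for cmd, coords in path_cmds:
        seq.append(VOCAB[cmd])             # the command token fixes how many
        for c in coords:                   # coordinate tokens follow, so the
            q = min(int(c * NUM_COORD_BINS), NUM_COORD_BINS - 1)
            seq.append(COORD_OFFSET + q)   # sequence decodes without ambiguity
    seq.append(VOCAB["<EOS>"])
    return seq

class TinyAutoregressiveModel(nn.Module):
    """A minimal decoder-only transformer (causal mask over a shared vocabulary)."""
    def __init__(self, vocab_size, d_model=128, n_layers=2, n_heads=4, max_len=512):
        super().__init__()
        self.tok_embed = nn.Embedding(vocab_size, d_model)
        self.pos_embed = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens):
        positions = torch.arange(tokens.size(1), device=tokens.device)
        x = self.tok_embed(tokens) + self.pos_embed(positions)
        causal = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        return self.head(self.blocks(x, mask=causal))

# Usage: one next-token-prediction step on a toy square icon with dummy text ids.
square = [("M", [0.1, 0.1]), ("L", [0.9, 0.1]), ("L", [0.9, 0.9]),
          ("L", [0.1, 0.9]), ("Z", [])]
tokens = torch.tensor([encode_icon(square, text_token_ids=[COORD_OFFSET + 5,
                                                           COORD_OFFSET + 17])])
model = TinyAutoregressiveModel(VOCAB_SIZE)
logits = model(tokens[:, :-1])             # predict token t+1 from tokens up to t
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB_SIZE),
                                   tokens[:, 1:].reshape(-1))
loss.backward()

Because each command token determines how many coordinate tokens follow it, the flattened sequence can be parsed back into paths without ambiguity, which is what makes this kind of serialization uniquely decodable.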