IconShop: Text-Guided Vector Icon Synthesis with Autoregressive Transformers

Cited by: 3
Authors
Wu, Ronghuan [1 ]
Su, Wanchao [1 ,2 ]
Ma, Kede [1 ]
Liao, Jing [1 ]
Affiliations
[1] City Univ Hong Kong, Hong Kong, Peoples R China
[2] Monash Univ, Clayton, Vic, Australia
Source
ACM TRANSACTIONS ON GRAPHICS | 2023 / Vol. 42 / No. 6
Keywords
SVG; Icon Synthesis; Vector Graphics Generation; Text-Guided Generation; Autoregressive Transformers;
DOI
10.1145/3618364
Chinese Library Classification
TP31 [Computer Software];
Subject Classification Codes
081202; 0835;
Abstract
Scalable Vector Graphics (SVG) is a popular vector image format that offers good support for interactivity and animation. Despite its appealing characteristics, creating custom SVG content can be challenging for users due to the steep learning curve required to understand SVG grammars or get familiar with professional editing software. Recent advancements in text-to-image generation have inspired researchers to explore vector graphics synthesis using either image-based methods (i.e., text → raster image → vector graphics) combining text-to-image generation models with image vectorization, or language-based methods (i.e., text → vector graphics script) through pretrained large language models. Nevertheless, these methods suffer from limitations in terms of generation quality, diversity, and flexibility. In this paper, we introduce IconShop, a text-guided vector icon synthesis method using autoregressive transformers. The key to the success of our approach is to sequentialize and tokenize SVG paths (and textual descriptions as guidance) into a uniquely decodable token sequence. With that, we are able to exploit the sequence learning power of autoregressive transformers, while enabling both unconditional and text-conditioned icon synthesis. Through standard training to predict the next token on a large-scale vector icon dataset accompanied by textual descriptions, the proposed IconShop consistently exhibits better icon synthesis capability than existing image-based and language-based methods both quantitatively (using the FID and CLIP scores) and qualitatively (through formal subjective user studies). Meanwhile, we observe a dramatic improvement in generation diversity, which is validated by the objective Uniqueness and Novelty measures. More importantly, we demonstrate the flexibility of IconShop with multiple novel icon synthesis tasks, including icon editing, icon interpolation, icon semantic combination, and icon design auto-suggestion.
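To make the mechanism described in the abstract concrete, below is a minimal Python/PyTorch sketch of the core idea: SVG path commands are flattened into a uniquely decodable token sequence (each command token followed by a fixed number of quantized coordinate tokens), and an autoregressive transformer is trained with the standard next-token objective. The command set, the 128-bin coordinate quantization, and the model sizes are illustrative assumptions, not the authors' released implementation.

```python
# Illustrative sketch (not the authors' code) of sequentializing SVG paths
# and training an autoregressive transformer with next-token prediction.
import torch
import torch.nn as nn

# --- 1. Tokenization: command tokens + quantized coordinate tokens ---
CMDS = ["<PAD>", "<BOS>", "<EOS>", "M", "L", "C", "Z"]  # assumed command set
N_BINS = 128                                            # assumed coordinate grid
CMD_OFFSET = len(CMDS)                                  # coord tokens follow commands
VOCAB_SIZE = CMD_OFFSET + N_BINS

def tokenize_path(path, size=100.0):
    """Map [('M', x, y), ('L', x, y), ..., ('Z',)] to a flat token list.
    Each command is followed by a fixed number of coordinate tokens,
    so the sequence decodes back to a path unambiguously."""
    tokens = [CMDS.index("<BOS>")]
    for cmd, *coords in path:
        tokens.append(CMDS.index(cmd))
        for c in coords:
            q = min(N_BINS - 1, max(0, int(c / size * N_BINS)))
            tokens.append(CMD_OFFSET + q)
    tokens.append(CMDS.index("<EOS>"))
    return tokens

# --- 2. Autoregressive model: embeddings + causal self-attention ---
class TinyIconLM(nn.Module):
    def __init__(self, vocab=VOCAB_SIZE, d=128, n_layers=2, max_len=512):
        super().__init__()
        self.tok = nn.Embedding(vocab, d)
        self.pos = nn.Embedding(max_len, d)
        layer = nn.TransformerEncoderLayer(d, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d, vocab)

    def forward(self, x):
        T = x.size(1)
        h = self.tok(x) + self.pos(torch.arange(T, device=x.device))
        mask = nn.Transformer.generate_square_subsequent_mask(T).to(x.device)
        h = self.blocks(h, mask=mask)  # causal mask = autoregressive
        return self.head(h)

# --- 3. Next-token prediction: targets are the input shifted by one ---
# For text-conditioned synthesis, text tokens would be prepended here.
path = [("M", 10, 10), ("L", 90, 10), ("L", 50, 80), ("Z",)]
seq = torch.tensor([tokenize_path(path)])
model = TinyIconLM()
logits = model(seq[:, :-1])
loss = nn.functional.cross_entropy(
    logits.reshape(-1, VOCAB_SIZE), seq[:, 1:].reshape(-1))
loss.backward()  # one step of the standard language-modeling objective
```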
Pages: 14
Related Papers
50 records in total
  • [31] FocusGAN: Preserving Background in Text-Guided Image Editing
    Zhao, Liuqing
    Li, Linyan
    Hu, Fuyuan
    Xia, Zhenping
    Yao, Rui
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2021, 35 (16)
  • [32] TGANet: Text-Guided Attention for Improved Polyp Segmentation
    Tomar, Nikhil Kumar
    Jha, Debesh
    Bagci, Ulas
    Ali, Sharib
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2022, PT III, 2022, 13433 : 151 - 160
  • [33] Text-Guided Molecule Generation with Diffusion Language Model
    Gong, Haisong
    Liu, Qiang
    Wu, Shu
    Wang, Liang
    THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 1, 2024, : 109 - 117
  • [34] Target-Free Text-Guided Image Manipulation
    Fan, Wan-Cyuan
    Yang, Cheng-Fu
    Yang, Chiao-An
    Wang, Yu-Chiang Frank
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 1, 2023, : 588 - 596
  • [35] Segmentation-Aware Text-Guided Image Manipulation
    Haruyama, Tomoki
    Togo, Ren
    Maeda, Keisuke
    Ogawa, Takahiro
    Haseyama, Miki
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 2433 - 2437
  • [36] Prior-free Guided TTS: An Improved and Efficient Diffusion-based Text-Guided Speech Synthesis
    Choi, Won-Gook
    Kim, So-Jeong
    Kim, Taeho
    Chang, Joon-Hyuk
    INTERSPEECH 2023, 2023, : 4289 - 4293
  • [37] Text-Guided Diverse Image Synthesis for Long-Tailed Remote Sensing Object Classification
    Tang, Haojun
    Zhao, Wenda
    Hu, Guang
    Xiao, Yi
    Li, Yunlong
    Wang, Haipeng
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2024, 62
  • [38] FusionDeformer: Text-Guided Mesh Deformation Using Diffusion Models
    Xu, Hao
    Wu, Yiqian
    Tang, Xiangjun
    Zhang, Jing
    Zhang, Yang
    Zhang, Zhebin
    Li, Chen
    Jin, Xiaogang
    VISUAL COMPUTER, 2024, 40 (07): : 4701 - 4712
  • [39] Text-Guided Visual Feature Refinement for Text-Based Person Search
    Gao, Liying
    Niu, Kai
    Ma, Zehong
    Jiao, Bingliang
    Tan, Tonghao
    Wang, Peng
    PROCEEDINGS OF THE 2021 INTERNATIONAL CONFERENCE ON MULTIMEDIA RETRIEVAL (ICMR '21), 2021, : 118 - 126
  • [40] MMFL: Multimodal Fusion Learning for Text-Guided Image Inpainting
    Lin, Qing
    Yan, Bo
    Li, Jichun
    Tan, Weimin
    MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020, : 1094 - 1102