MF-Net: A Novel Few-shot Stylized Multilingual Font Generation Method

Times Cited: 5
Authors
Zhang, Yufan [1 ]
Man, Junkai [1 ]
Sun, Peng [1 ]
Affiliations
[1] Duke Kunshan Univ, Kunshan, Peoples R China
Keywords
Style transfer; image synthesis; font design; few-shot learning
DOI
10.1145/3503161.3548414
Chinese Library Classification (CLC)
TP39 [Applications of Computers]
Subject Classification Code
081203; 0835
Abstract
Creating a complete stylized font library that helps the audience perceive information from text often requires years of study and proficiency with many professional tools. Accordingly, automatic stylized font generation in a deep learning-based fashion is a desirable but challenging task that has attracted considerable attention in recent years. This paper revisits the state-of-the-art methods for stylized font generation and presents a taxonomy of deep learning-based stylized font generation. Despite the notable performance of existing models, stylized multilingual font generation, the task of applying a specific font style to diverse characters in multiple languages, has never been reported to be addressed. An efficient and economical method for stylized multilingual font generation is essential in numerous application scenarios that require communication with international audiences. We propose a solution for few-shot multilingual stylized font generation via a fast feed-forward network, the Multilingual Font Generation Network (MF-Net), which can transfer previously unseen font styles from a few samples to characters from previously unseen languages. Following the Generative Adversarial Network (GAN) framework, MF-Net adopts two separate encoders in the generator to decouple a font image's content and style information. We adopt an attention module in the style encoder to extract both shallow and deep style features. Moreover, we design a novel language-complexity-aware skip connection to adaptively adjust the structural information to be preserved. Combined with an effective loss function that improves the visual quality of the generated font images, we demonstrate the effectiveness of the proposed MF-Net through quantitative and subjective visual evaluations, and compare it with existing models in the scenario of stylized multilingual font generation. The source code is available at https://github.com/iamyufan/MF-Net.
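As a rough illustration of the generator described in the abstract, the following PyTorch sketch wires together a content encoder, an attention-based style encoder over a few reference glyphs, and a skip connection scaled by a learned gate that stands in for the language-complexity-aware mechanism. All layer sizes, module names, and the gating and style-injection choices are assumptions made for illustration only, not the authors' implementation; the actual code is in the linked repository.

```python
# Minimal sketch of a dual-encoder, few-shot font-generation generator.
# Everything below (sizes, names, gating) is an illustrative assumption.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch, stride=2):
    # 3x3 conv -> instance norm -> ReLU, shared building block
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
        nn.InstanceNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )


class StyleEncoder(nn.Module):
    # Encodes K few-shot reference glyphs of the target font into one style code.
    def __init__(self, in_ch=1, dim=64):
        super().__init__()
        self.body = nn.Sequential(
            conv_block(in_ch, dim), conv_block(dim, dim * 2), conv_block(dim * 2, dim * 4)
        )
        # Self-attention over spatial locations stands in for the attention
        # module that mixes shallow and deep style features.
        self.attn = nn.MultiheadAttention(embed_dim=dim * 4, num_heads=4, batch_first=True)

    def forward(self, refs):                                  # refs: (B, K, 1, H, W)
        b, k, c, h, w = refs.shape
        feat = self.body(refs.reshape(b * k, c, h, w))        # (B*K, 4*dim, H/8, W/8)
        tokens = feat.flatten(2).transpose(1, 2)              # (B*K, locations, 4*dim)
        attended, _ = self.attn(tokens, tokens, tokens)
        return attended.mean(dim=1).reshape(b, k, -1).mean(dim=1)   # (B, 4*dim)


class Generator(nn.Module):
    # Content encoder + style encoder + decoder with a gated skip connection.
    def __init__(self, dim=64):
        super().__init__()
        self.content_shallow = conv_block(1, dim)                        # H -> H/2
        self.content_deep = nn.Sequential(conv_block(dim, dim * 2),      # H/2 -> H/8
                                          conv_block(dim * 2, dim * 4))
        self.style_enc = StyleEncoder(dim=dim)
        # Gate deciding how much shallow structure passes through the skip
        # path; a stand-in for the complexity-aware skip connection.
        self.skip_gate = nn.Sequential(nn.Linear(dim * 4, dim), nn.Sigmoid())
        self.up1 = nn.Sequential(nn.Upsample(scale_factor=2), conv_block(dim * 8, dim * 2, 1))
        self.up2 = nn.Sequential(nn.Upsample(scale_factor=2), conv_block(dim * 2, dim, 1))
        self.out = nn.Sequential(conv_block(dim * 2, dim, 1), nn.Upsample(scale_factor=2),
                                 nn.Conv2d(dim, 1, kernel_size=3, padding=1), nn.Tanh())

    def forward(self, content_img, style_refs):
        shallow = self.content_shallow(content_img)           # (B, dim, H/2, W/2)
        deep = self.content_deep(shallow)                     # (B, 4*dim, H/8, W/8)
        style = self.style_enc(style_refs)                    # (B, 4*dim)
        fused = deep + style[:, :, None, None]                # simple additive style injection
        gate = self.skip_gate(style)[:, :, None, None]        # (B, dim, 1, 1)
        x = self.up1(torch.cat([fused, deep], dim=1))         # (B, 2*dim, H/4, W/4)
        x = self.up2(x)                                       # (B, dim, H/2, W/2)
        x = torch.cat([x, gate * shallow], dim=1)             # gated skip connection
        return self.out(x)                                    # (B, 1, H, W)


# Example: render 2 target characters in a font described by 4 reference glyphs.
gen = Generator()
fake = gen(torch.randn(2, 1, 64, 64), torch.randn(2, 4, 1, 64, 64))
print(fake.shape)  # torch.Size([2, 1, 64, 64])
```

In a full GAN setup such a generator would be trained against a discriminator together with the content, style, and adversarial loss terms mentioned in the abstract; those components are omitted here for brevity.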
Pages: 2088-2096
Number of pages: 9
Related Papers
50 records in total
  • [21] Multiple Heads are Better than One: Few-shot Font Generation with Multiple Localized Experts
    Park, Song
    Chun, Sanghyuk
    Cha, Junbum
    Lee, Bado
    Shim, Hyunjung
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 13880 - 13889
  • [22] JointFontGAN: Joint Geometry-Content GAN for Font Generation via Few-Shot Learning
    Xi, Yankun
    Yan, Guoli
    Hua, Jing
    Zhong, Zichun
    MM '20: PROCEEDINGS OF THE 28TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, 2020, : 4309 - 4317
  • [23] A two-generation based method for few-shot learning with few-shot instance-level privileged information
    Xu, Jian
    He, Jinghui
    Liu, Bo
    Cao, Fan
    Xiao, Yanshan
    APPLIED INTELLIGENCE, 2024, 54 (05) : 4077 - 4094
  • [25] XMP-Font: Self-Supervised Cross-Modality Pre-training for Few-Shot Font Generation
    Liu, Wei
    Liu, Fangyue
    Ding, Fei
    He, Qian
    Yi, Zili
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022, : 7895 - 7904
  • [26] FSFT-NET: FACE TRANSFER VIDEO GENERATION WITH FEW-SHOT VIEWS
    Song, Luchuan
    Yin, Guojun
    Liu, Bin
    Zhang, Yuhui
    Yu, Nenghai
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 3582 - 3586
  • [27] One-Shot Multilingual Font Generation Via ViT
    Liu, Jiarui
    Wang, Zhiheng
    arXiv,
  • [28] Multi-Content GAN for Few-Shot Font Style Transfer
    Azadi, Samaneh
    Fisher, Matthew
    Kim, Vladimir
    Wang, Zhaowen
    Shechtman, Eli
    Darrell, Trevor
    2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2018, : 7564 - 7573
  • [29] LEARNING COMPONENT-LEVEL AND INTER-CLASS GLYPH REPRESENTATION FOR FEW-SHOT FONT GENERATION
    Su, Yongliang
    Chen, Xu
    Wu, Lei
    Meng, Xiangxu
    2023 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO, ICME, 2023, : 738 - 743
  • [30] Meta-BN Net for few-shot learning
    Gao, Wei
    Shao, Mingwen
    Shu, Jun
    Zhuang, Xinkai
    FRONTIERS OF COMPUTER SCIENCE, 2023, 17 (01)