An unsupervised font style transfer model based on generative adversarial networks

Cited by: 0
Authors
Sihan Zeng
Zhongliang Pan
Affiliations
[1] South China Normal University, Physics and Telecommunication Engineering
Source
Multimedia Tools and Applications, 2022, 81(4)
Keywords
Chinese characters; Style transfer; Generative adversarial networks; Unsupervised learning; Style-attentional networks
DOI
Not available
Abstract
Because Chinese characters are structurally complex and extremely numerous, designing a complete character set costs designers an enormous amount of time. The rapid growth of character use in fields such as culture and business has therefore created a sharp gap between supply and demand in Chinese font design. Although most existing Chinese character transformation models greatly alleviate this demand, they cannot guarantee the semantics of the generated characters, and their generation efficiency is low. Moreover, these models require large amounts of paired data for training, which entails substantial sample-processing time. To address these problems, this paper proposes an unsupervised Chinese character generation method based on generative adversarial networks, which fuses a Style-Attentional Network into a skip-connected U-Net to form the GAN generator. This architecture effectively and flexibly integrates local style patterns according to the semantic spatial distribution of the content image while retaining feature information at different scales. After training, the model generates fonts that preserve the content features of the source domain and the style features of the target domain. A style specification module and a classification discriminator further allow the model to generate typefaces in multiple styles. The generation results show that the proposed model performs the task of Chinese character style transfer well: it produces high-quality character images with complete structures and natural strokes. In both quantitative and qualitative comparison experiments, our model achieves better visual quality and image performance metrics than existing models. In experiments with reduced sample sizes, the model still generates clearly structured fonts, demonstrating strong robustness. At the same time, the training conditions of our model are easy to satisfy, which facilitates generalization to real applications.
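The abstract does not include code; the following is an illustrative NumPy sketch of the core idea behind style-attentional fusion as described above: each spatial position of the content feature map attends over all positions of the style feature map, so local style patterns are redistributed according to the content's spatial layout. The function name and the omission of learned convolution layers are simplifying assumptions, not the paper's actual implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def style_attention(content, style):
    """Minimal style-attentional fusion sketch (learned convs omitted).

    content: feature map of shape (C, H, W) from the content encoder.
    style:   feature map of shape (C, H', W') from the style encoder.
    Returns a (C, H, W) map where style features are mixed per content
    position via attention, plus a residual connection to the content.
    """
    C, H, W = content.shape
    q = content.reshape(C, H * W)   # queries: one per content position
    k = style.reshape(C, -1)        # keys: one per style position
    v = style.reshape(C, -1)        # values carry the style features
    # channel-wise normalisation before matching, so attention compares
    # feature *patterns* rather than raw magnitudes
    qn = (q - q.mean(1, keepdims=True)) / (q.std(1, keepdims=True) + 1e-5)
    kn = (k - k.mean(1, keepdims=True)) / (k.std(1, keepdims=True) + 1e-5)
    attn = softmax(qn.T @ kn, axis=-1)   # (HW_content, HW_style)
    out = v @ attn.T                     # weighted sum of style values
    return content + out.reshape(C, H, W)  # residual keeps content intact
```

With an all-zero style map the attention contributes nothing and the content passes through unchanged, which is the behavior the residual connection is meant to guarantee.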
Pages: 5305-5324
Page count: 19
Related Articles
50 results
  • [1] An unsupervised font style transfer model based on generative adversarial networks
    Zeng, Sihan
    Pan, Zhongliang
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2022, 81 (04) : 5305 - 5324
  • [2] GlyphGAN: Style-consistent font generation based on generative adversarial networks
    Hayashi, Hideaki
    Abe, Kohtaro
    Uchida, Seiichi
    [J]. KNOWLEDGE-BASED SYSTEMS, 2019, 186
  • [3] Image Style Transfer with Generative Adversarial Networks
    Li, Ru
    [J]. PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 2950 - 2954
  • [4] Generative Adversarial Style Transfer Networks for Face Aging
    Palsson, Sveinn
    Agustsson, Eirikur
    Timofte, Radu
    Van Gool, Luc
    [J]. PROCEEDINGS 2018 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS (CVPRW), 2018, : 2165 - 2173
  • [5] TachieGAN: Generative Adversarial Networks for Tachie Style Transfer
    Chen, Zihan
    Chen, Xuejin
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA AND EXPO WORKSHOPS (IEEE ICMEW 2022), 2022,
  • [6] Unsupervised Generative Adversarial Network for Style Transfer using Multiple Discriminators
    Akhtar, Mohd Rayyan
    Liu, Peng
    [J]. THIRTEENTH INTERNATIONAL CONFERENCE ON GRAPHICS AND IMAGE PROCESSING (ICGIP 2021), 2022, 12083
  • [7] Image Style Conversion Model Design Based on Generative Adversarial Networks
    Gong, Ke
    Zhen, Zhu
    [J]. IEEE ACCESS, 2024, 12 : 122126 - 122138
  • [8] Anime Image Style Transfer Algorithm Based on Improved Generative Adversarial Networks
    Li, Yunhong
    Zhu, Jingkun
    Liu, Xingrui
    Chen, Jinni
    Su, Xueping
    [J]. Beijing Youdian Daxue Xuebao/Journal of Beijing University of Posts and Telecommunications, 2024, 47 (04): : 117 - 123
  • [9] MA-GAN: the style transfer model based on multi-adaptive generative adversarial networks
    Zhao, Min
    Qian, XueZhong
    Song, Wei
    [J]. JOURNAL OF ELECTRONIC IMAGING, 2024, 33 (03)
  • [10] DG-Font: Deformable Generative Networks for Unsupervised Font Generation
    Xie, Yangchen
    Chen, Xinyuan
    Sun, Li
    Lu, Yue
    [J]. 2021 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, CVPR 2021, 2021, : 5126 - 5136