An unsupervised font style transfer model based on generative adversarial networks

Cited by: 5
Authors
Zeng, Sihan [1 ]
Pan, Zhongliang [1 ]
Affiliation
[1] South China Normal Univ, Phys & Telecommun Engn, Guangzhou, Peoples R China
Keywords
Chinese characters; Style transfer; Generative adversarial networks; Unsupervised learning; Style-attentional networks
DOI
10.1007/s11042-021-11777-0
CLC classification
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Because Chinese characters are structurally complex and extremely numerous, designing a complete character set is highly time-consuming for designers. The rapid growth of character use in fields such as culture and business has therefore created a sharp imbalance between supply and demand in Chinese font design. Although most existing Chinese character transformation models greatly alleviate this demand, they cannot guarantee the semantics of the generated characters and their generation efficiency is low. They also require large amounts of paired data for training, which entails substantial sample-preparation time. To address these problems, this paper proposes an unsupervised Chinese character generation method based on generative adversarial networks, which fuses a Style-Attentional Network into a skip-connected U-Net to form the GAN generator. This architecture effectively and flexibly integrates local style patterns according to the semantic spatial distribution of the content image while retaining feature information at multiple scales. After training, the model generates fonts that preserve the content features of the source domain and the style features of the target domain. The addition of a style specification module and a classification discriminator further allows the model to generate typefaces in multiple styles. The results show that the proposed model performs the Chinese character style transfer task well: it generates high-quality character images with complete structures and natural strokes. In both quantitative and qualitative comparison experiments, our model achieves better visual quality and image performance metrics than existing models.
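The style-attention fusion described above can be illustrated with a minimal sketch. The softmax attention between content and style feature maps follows the general style-attentional (SANet) formulation; the identity projections, the scaling factor, and the residual addition are simplifying assumptions for illustration, not the paper's exact design.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def style_attention(content, style):
    """Toy style-attention step over flattened feature maps.

    content, style: (N, C) arrays of N spatial positions with C channels.
    A real SANet first applies learned 1x1 convolutions to both inputs;
    here identity projections are used to keep the sketch self-contained.
    """
    # Each content position attends over all style positions.
    attn = softmax(content @ style.T / np.sqrt(style.shape[1]), axis=1)
    # Re-weight style features by attention and fuse them back into the
    # content pathway, so style is placed according to content semantics.
    return content + attn @ style

rng = np.random.default_rng(0)
content = rng.standard_normal((16, 8))  # 16 positions, 8 channels
style = rng.standard_normal((16, 8))
out = style_attention(content, style)
print(out.shape)  # (16, 8)
```

In the paper's generator, a module of this kind sits inside a skip-connected U-Net, so the attended style is injected while encoder features of different resolutions are still passed to the decoder.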
In sample-size experiments, the model still generates clearly structured fonts and demonstrates strong robustness. Moreover, its training conditions are easy to satisfy, which facilitates generalization to real applications.
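The multi-style capability via a classification discriminator resembles an auxiliary-classifier (AC-GAN-style) setup, where the discriminator both judges realism and predicts the style class. The sketch below assumes that formulation; the loss split, the `lam` weight, and all function names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def bce(p, y):
    """Binary cross-entropy on probabilities, clipped for stability."""
    eps = 1e-7
    p = np.clip(p, eps, 1 - eps)
    return float(-np.mean(y * np.log(p) + (1 - y) * np.log(1 - p)))

def cross_entropy(logits, labels):
    """Softmax cross-entropy over style classes (log-sum-exp trick)."""
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return float(-np.mean(logp[np.arange(len(labels)), labels]))

def discriminator_loss(real_prob, fake_prob, style_logits, style_labels, lam=1.0):
    """Real/fake adversarial term plus a style-classification term."""
    adv = bce(real_prob, np.ones_like(real_prob)) \
        + bce(fake_prob, np.zeros_like(fake_prob))
    return adv + lam * cross_entropy(style_logits, style_labels)

rng = np.random.default_rng(1)
loss = discriminator_loss(
    real_prob=rng.uniform(0.6, 0.9, size=4),   # scores on real glyphs
    fake_prob=rng.uniform(0.1, 0.4, size=4),   # scores on generated glyphs
    style_logits=rng.standard_normal((4, 3)),  # 3 candidate font styles
    style_labels=np.array([0, 2, 1, 0]),
)
print(loss > 0.0)  # True
```

Training the classifier head on style labels is what lets a single generator be steered toward multiple target typefaces.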
Pages: 5305-5324 (20 pages)
Source: Multimedia Tools and Applications, 2022, 81: 5305-5324