Texture synthesis method based on generative adversarial networks

Cited by: 0
Authors
Yu S. [1 ,2 ]
Han Z. [2 ]
Tang Y. [1 ,2 ]
Wu C. [1 ]
Affiliations
[1] School of Information Science and Engineering, Northeastern University, Shenyang
[2] State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang
Keywords
Deep learning; Generative adversarial networks; Generative model; Texture synthesis;
DOI
10.3788/IRLA201847.0203005
Abstract
Texture synthesis is a hot research topic in computer graphics, computer vision, and image processing. Traditional texture synthesis methods generally extract effective feature patterns or statistics from an example and then generate random images under the constraint of that feature information. The generative adversarial network (GAN) is a new type of deep network: by training a generator and a discriminator under an adversarial learning mechanism, it can randomly generate new data that follow the same distribution as the observed data. Inspired by this, a texture synthesis method based on GANs was proposed. The advantage of the algorithm is that it generates realistic texture images without iterative optimization; the generated images are visually consistent with the observed texture image while still exhibiting randomness. A series of experiments on both random and structured texture synthesis verifies the effectiveness of the proposed algorithm. © 2018, Editorial Board of Journal of Infrared and Laser Engineering. All rights reserved.
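The adversarial mechanism the abstract describes can be illustrated with a minimal sketch. This is not the paper's network: it is a toy NumPy GAN in which the "texture statistic" is a one-dimensional Gaussian, the generator is an affine map of noise, and the discriminator is a logistic regressor; all names and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "observed data": scalars drawn from N(4, 1). A real texture model
# would use image patches instead of scalars.
def real_batch(n):
    return rng.normal(4.0, 1.0, size=(n, 1))

# Generator G(z) = z @ w + b maps noise z ~ N(0, 1) to a fake sample.
G = {"w": rng.normal(size=(1, 1)) * 0.1, "b": np.zeros((1, 1))}
# Discriminator D(x) = sigmoid(x @ u + c) scores "realness" of a sample.
D = {"u": rng.normal(size=(1, 1)) * 0.1, "c": np.zeros((1, 1))}

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, n = 0.05, 64
for step in range(2000):
    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    z = rng.normal(size=(n, 1))
    fake = z @ G["w"] + G["b"]
    for x, label in ((real_batch(n), 1.0), (fake, 0.0)):
        p = sigmoid(x @ D["u"] + D["c"])
        grad = p - label                         # d(BCE loss)/d(logit)
        D["u"] -= lr * (x * grad).mean(0, keepdims=True)
        D["c"] -= lr * grad.mean(0, keepdims=True)

    # Generator update: push D(fake) toward 1 (non-saturating GAN loss).
    z = rng.normal(size=(n, 1))
    fake = z @ G["w"] + G["b"]
    p = sigmoid(fake @ D["u"] + D["c"])
    dfake = (p - 1.0) * D["u"].T                 # gradient through D into fake
    G["w"] -= lr * (z * dfake).mean(0, keepdims=True)
    G["b"] -= lr * dfake.mean(0, keepdims=True)

# After training, generated samples drift toward the real mean of 4,
# and each draw of z yields a different sample (the "randomness" noted above).
samples = rng.normal(size=(1000, 1)) @ G["w"] + G["b"]
print(float(samples.mean()))
```

The same alternating scheme, with convolutional networks in place of the affine maps and image patches in place of scalars, is what lets a GAN synthesize new texture images in a single forward pass rather than by per-image iteration.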