Image Inpainting with a Three-Stage Generative Network

Cited: 0
Authors: Shao X. [1], Ye H. [1], Yang B. [1], Cao F. [1]
Affiliations: [1] Department of Applied Mathematics, College of Sciences, China Jiliang University, Hangzhou
Funding: National Natural Science Foundation of China
Keywords: Decoder with Bidirectional Feature Fusion; Generative Adversarial Networks; HSV Color Generation Model; Image Inpainting
DOI: 10.16451/j.cnki.issn1003-6059.202212001
Abstract
A central concern of deep-learning-based image inpainting is the generation of color, edges and texture, yet methods for generating these three properties still need improvement. A three-stage generative network is proposed in which the stages synthesize colors, edges and textures, respectively. Specifically, at the HSV color generation stage, the global color of the image is reconstructed in the HSV color space to provide color guidance for inpainting. At the edge optimization stage, an edge learning framework is designed to obtain more accurate edge information. At the texture synthesis stage, a decoder with bidirectional feature fusion is designed to enhance image details. The three stages are connected in sequence, and each contributes to the inpainting performance. Extensive experiments demonstrate the superiority of the proposed method over state-of-the-art methods. © 2022 Journal of Pattern Recognition and Artificial Intelligence. All rights reserved.
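The abstract describes a sequential pipeline in which the color, edge and texture stages feed one another. The following is a minimal, hypothetical PyTorch sketch of such a pipeline; all module names, channel widths and the simple convolutional blocks are illustrative assumptions and do not reproduce the authors' actual architecture, losses or adversarial training.

# Hypothetical sketch of a three-stage inpainting pipeline: (1) HSV color
# generation, (2) edge optimization, (3) texture synthesis whose decoder fuses
# shallow and deep features. Shapes and blocks are illustrative assumptions.
import torch
import torch.nn as nn

def conv_block(in_ch: int, out_ch: int) -> nn.Sequential:
    """3x3 conv + ReLU, used only to keep the sketch compact."""
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class ColorStage(nn.Module):
    """Predicts a coarse HSV image from the masked RGB input and the mask."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(4, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())
    def forward(self, masked_rgb, mask):
        return self.net(torch.cat([masked_rgb, mask], dim=1))  # coarse HSV map

class EdgeStage(nn.Module):
    """Refines an edge map conditioned on the coarse color prediction."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(3 + 1 + 1, 32), conv_block(32, 32),
                                 nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid())
    def forward(self, coarse_hsv, masked_edges, mask):
        return self.net(torch.cat([coarse_hsv, masked_edges, mask], dim=1))

class TextureStage(nn.Module):
    """Encoder-decoder whose decoder concatenates shallow and deep features,
    a simple stand-in for the bidirectional feature fusion described above."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(3 + 3 + 1 + 1, 32)   # masked RGB + coarse HSV + edges + mask
        self.enc2 = conv_block(32, 64)
        self.down = nn.MaxPool2d(2)
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.dec = conv_block(32 + 64, 32)          # fuse shallow and deep features
        self.out = nn.Sequential(nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid())
    def forward(self, masked_rgb, coarse_hsv, edges, mask):
        x = torch.cat([masked_rgb, coarse_hsv, edges, mask], dim=1)
        f1 = self.enc1(x)                        # shallow features
        f2 = self.up(self.enc2(self.down(f1)))   # deep features, upsampled back
        return self.out(self.dec(torch.cat([f1, f2], dim=1)))

class ThreeStageInpainter(nn.Module):
    """Chains the three stages; each later stage consumes the earlier outputs."""
    def __init__(self):
        super().__init__()
        self.color, self.edge, self.texture = ColorStage(), EdgeStage(), TextureStage()
    def forward(self, masked_rgb, masked_edges, mask):
        hsv = self.color(masked_rgb, mask)
        edges = self.edge(hsv, masked_edges, mask)
        return self.texture(masked_rgb, hsv, edges, mask)

if __name__ == "__main__":
    model = ThreeStageInpainter()
    rgb = torch.rand(1, 3, 64, 64)
    mask = torch.rand(1, 1, 64, 64).round()      # 1 marks the missing region
    edges = torch.rand(1, 1, 64, 64).round()
    out = model(rgb * (1 - mask), edges * (1 - mask), mask)
    print(out.shape)                             # torch.Size([1, 3, 64, 64])

The point of the sketch is the sequential coupling: the color stage guides the edge stage, and both condition the texture stage, mirroring the three successively connected stages described in the abstract.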
Pages: 1047-1063
Number of pages: 16