Semantic image inpainting with dense and dilated deep convolutional autoencoder adversarial network

Cited by: 0
Authors
Ren, Kun [1 ,2 ,3 ,4 ]
Fan, Chunqi [1 ,2 ,3 ,4 ]
Meng, Lisha [1 ,2 ,3 ,4 ]
Huang, Long [1 ,2 ,3 ,4 ]
Affiliations
[1] Beijing Univ Technol, Fac Informat Technol, Beijing 100124, Peoples R China
[2] Minist Educ, Engn Res Ctr Digital Commun, Beijing 100124, Peoples R China
[3] Beijing Lab Urban Mass Transit, Beijing 100124, Peoples R China
[4] Beijing Key Lab Computat Intelligence & Intellige, Beijing 100124, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image inpainting; Generative adversarial networks; Autoencoder; DenseNet; Dilated convolution;
DOI
10.1117/12.2538756
Chinese Library Classification
O43 [Optics];
Discipline classification codes
070207; 0803;
Abstract
The development of generative adversarial networks (GANs) has made it possible to fill missing regions in damaged images with convincing details. However, many existing approaches fail to keep the inpainted content and structure consistent with the surroundings. In this paper, we propose a GAN-based inpainting model that restores semantically damaged images in a visually reasonable and coherent way. In our model, the generative network is an autoencoder and the discriminator network is a CNN classifier. Unlike the classic autoencoder, we design a novel bottleneck layer in the middle of the autoencoder, comprising four dense-net blocks, each containing vanilla convolution layers and dilated convolution layers. The kernels of the dilated convolutions are spread out, yielding an effective enlargement of the receptive field, so the model can capture semantic information over a wider context and keep the inpainted images consistent. Furthermore, the reuse of features from different levels within each dense-net block helps the model understand the whole image and produce a convincing result. We evaluate our model on the public CelebA and Stanford Cars datasets with randomly positioned masks of different ratios. The effectiveness of our model is verified by qualitative and quantitative experiments.
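The abstract's point about dilated convolutions can be made concrete: with stride 1, each convolution layer extends the receptive field by (kernel_size − 1) × dilation, so dilated kernels widen context at no extra parameter cost. A minimal sketch of that arithmetic follows; the kernel sizes and dilation rates here are illustrative assumptions, not values taken from the paper.

```python
# Sketch: receptive-field growth of a stack of convolutions, as used to
# motivate the dilated bottleneck. Rates below are assumed for illustration.

def receptive_field(layers):
    """Receptive field of stacked stride-1 convolutions.

    `layers` is a list of (kernel_size, dilation) pairs; each layer
    adds (kernel_size - 1) * dilation pixels of context.
    """
    rf = 1
    for kernel_size, dilation in layers:
        rf += (kernel_size - 1) * dilation
    return rf

# One hypothetical dense-net block: two vanilla 3x3 convs followed by
# two dilated 3x3 convs (dilation rates 2 and 4).
block = [(3, 1), (3, 1), (3, 2), (3, 4)]
print(receptive_field(block))         # 17 pixels with these assumed rates

# The same depth with vanilla 3x3 convolutions covers far less context.
print(receptive_field([(3, 1)] * 4))  # 9 pixels
```

The comparison shows why the paper mixes dilated layers into the bottleneck: the same four-layer depth nearly doubles the context each output pixel can draw on, which is what lets the inpainted region agree with distant surroundings.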
Pages: 9
Related papers
50 total
  • [1] Deep Convolutional Generative Adversarial Network with Autoencoder for Semisupervised SAR Image Classification
    Zhang, Zheng
    Yang, Jingsong
    Du, Yang
    IEEE Geoscience and Remote Sensing Letters, 2022, 19
  • [3] IMAGE INPAINTING BY MSCSWIN TRANSFORMER ADVERSARIAL AUTOENCODER
    Chen, Bo-Wei
    Liu, Tsung-Jung
    Liu, Kuan-Hsien
    2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023, : 2040 - 2044
  • [4] Semantic face image inpainting based on Generative Adversarial Network
    Zhang, Heshu
    Li, Tao
    2020 35TH YOUTH ACADEMIC ANNUAL CONFERENCE OF CHINESE ASSOCIATION OF AUTOMATION (YAC), 2020, : 530 - 535
  • [5] Semantic Deep Image Inpainting
    Afreen, Nishat
    Singh, Shrey
    Kumar, Sanjay
    2018 INTERNATIONAL CONFERENCE ON ADVANCES IN COMPUTING, COMMUNICATIONS AND INFORMATICS (ICACCI), 2018, : 1190 - 1195
  • [6] A Novel Generative Image Inpainting Model with Dense Gated Convolutional Network
    Ma, Xiaoxuan
    Deng, Yibo
    Zhang, Lei
    Li, Zhiwen
    INTERNATIONAL JOURNAL OF COMPUTERS COMMUNICATIONS & CONTROL, 2023, 18 (02)
  • [7] DDCNet: Deep dilated convolutional neural network for dense prediction
    Salehi, Ali
    Balasubramanian, Madhusudhanan
    NEUROCOMPUTING, 2023, 523 : 116 - 129
  • [9] Improved Semantic Image Inpainting Method with Deep Convolution Generative Adversarial Networks
    Chen, Xiaoning
    Zhao, Jian
    BIG DATA, 2022, 10 (06) : 506 - 514
  • [10] Medical image fusion method based on dense block and deep convolutional generative adversarial network
    Cheng Zhao
    Tianfu Wang
    Baiying Lei
    Neural Computing and Applications, 2021, 33 : 6595 - 6610