RDFINet: Reference-Guided Directional Diverse Face Inpainting Network

Cited by: 1
Authors
Chen, Qingyang [1 ]
Qiang, Zhengping [1 ]
Zhao, Yue [2 ]
Lin, Hong [1 ]
He, Libo [3 ]
Dai, Fei [1 ]
Affiliations
[1] Southwest Forestry Univ, Coll Big Data & Intelligent Engn, Kunming, Yunnan, Peoples R China
[2] Southwest Forestry Univ, Coll Art & Design, Kunming, Yunnan, Peoples R China
[3] Informat Secur Coll, Yunnan Police Coll, Kunming, Yunnan, Peoples R China
Keywords
GAN; Directional; Face-parsing; Face-image completion; Reference image; GENERATIVE ADVERSARIAL NETWORKS; IMAGE;
DOI
10.1007/s40747-024-01543-8
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The majority of existing face inpainting methods primarily focus on generating a single result that visually resembles the original image. The generation of diverse and plausible results has emerged as a new branch of image restoration, often referred to as "Pluralistic Image Completion". However, most diversity methods simply use random latent vectors to generate multiple results, leading to uncontrollable outcomes. To overcome these limitations, we introduce a novel architecture known as the Reference-Guided Directional Diverse Face Inpainting Network. In this paper, instead of using a background image as the reference, as is typical in image restoration, we use a face image as the reference style; this reference face may differ from the original image in many characteristics, including but not limited to gender and age. Our network first infers the semantic information of the masked face, i.e., the face-parsing map, from the partial image and its mask; this map subsequently guides and constrains the directional diverse generator network. The network learns the distribution of face images from different domains in a low-dimensional manifold space. To validate our method, we conducted extensive experiments on the CelebAMask-HQ dataset. Our method not only produces high-quality directionally diverse results but also completes images in the style of the reference face. Additionally, our diverse results maintain correct facial feature distributions and sizes, rather than being random. Our network achieved state-of-the-art results in diverse face inpainting at the time of writing. Code is available at https://github.com/nothingwithyou/RDFINet.
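The abstract describes a two-stage pipeline: first infer a face-parsing map from the masked input, then let that map and a style code extracted from the reference face guide the generator. The following is a minimal conceptual sketch of that data flow only; every function name, shape, and operation here is a hypothetical placeholder (the authors' actual networks are learned CNN/GAN components, available at the repository above).

```python
import numpy as np

H = W = 64          # toy image resolution (illustrative only)
N_CLASSES = 19      # CelebAMask-HQ annotates 19 face-parsing classes

def infer_parsing_map(partial_img, mask):
    """Stage 1 stand-in: predict a per-pixel semantic face-parsing map
    (skin, eyes, hair, ...) for the masked face. A fixed-seed random
    classifier here; a learned parsing network in the real method."""
    rng = np.random.default_rng(0)
    logits = rng.standard_normal((N_CLASSES, H, W))
    return logits.argmax(axis=0)            # (H, W) class indices

def extract_style(reference_img):
    """Encode the reference face into a low-dimensional style code
    (toy per-channel statistics here)."""
    return reference_img.mean(axis=(1, 2))  # shape (3,)

def inpaint(partial_img, mask, parsing_map, style_code):
    """Stage 2 stand-in: complete the hole. In the real generator the
    parsing map constrains facial layout and the style code steers
    appearance toward the reference; this toy fill uses only the style
    code (parsing_map is accepted but unused in the sketch)."""
    fill = style_code[:, None, None] * np.ones((3, H, W))
    return partial_img * (1 - mask) + fill * mask

# Toy run: a zero image with a square hole, a mid-gray reference face.
partial = np.zeros((3, H, W))
mask = np.zeros((1, H, W)); mask[:, 16:48, 16:48] = 1
reference = np.full((3, H, W), 0.5)
out = inpaint(partial, mask, infer_parsing_map(partial, mask),
              extract_style(reference))
print(out.shape)  # (3, 64, 64)
```

The sketch only illustrates the guidance interfaces (parsing map as a layout constraint, reference style code as the direction of diversity); swapping the style code for codes from different reference faces is what would make the outputs "directional" rather than random.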
Pages: 7619-7630 (12 pages)