Universal Face Photo-Sketch Style Transfer via Multiview Domain Translation

Cited by: 19
Authors
Peng, Chunlei [1 ]
Wang, Nannan [2 ]
Li, Jie [3 ]
Gao, Xinbo [4 ,5 ]
Affiliations
[1] Xidian Univ, Sch Cyber Engn, State Key Lab Integrated Serv Networks, Xian 710071, Peoples R China
[2] Xidian Univ, Sch Telecommun Engn, State Key Lab Integrated Serv Networks, Xian 710071, Peoples R China
[3] Xidian Univ, Sch Elect Engn, Video & Image Proc Syst Lab, Xian 710071, Peoples R China
[4] Xidian Univ, Sch Elect Engn, State Key Lab Integrated Serv Networks, Xian 710071, Peoples R China
[5] Chongqing Univ Posts & Telecommun, Chongqing Key Lab Image Cognit, Chongqing 400065, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Style transfer; domain translation; face synthesis; recognition
DOI
10.1109/TIP.2020.3016502
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Face photo-sketch style transfer aims to convert a representation of a face from the photo (or sketch) domain to the sketch (respectively, photo) domain while preserving the character of the subject. It has wide-ranging applications in law enforcement, forensic investigation, and digital entertainment. However, conventional face photo-sketch synthesis methods usually require training images from both the source domain and the target domain, and thus cannot be applied under universal conditions, where collecting source-domain training images that match the style of the test image is impractical. This problem entails two major challenges: 1) designing an effective and robust domain translation model for the universal situation in which source-domain images needed for training are unavailable, and 2) preserving the facial character while transferring to the style of an entire image collection in the target domain. To this end, we present a novel universal face photo-sketch style transfer method that does not need any image from the source domain for training. The regression relationship between an input test image and the entire training image collection in the target domain is inferred via a deep domain translation framework, in which a domain-wise adaptation term and a local consistency adaptation term are developed. To improve the robustness of the style transfer process, we propose a multiview domain translation method that flexibly and optimally combines a convolutional neural network representation with hand-crafted features. Qualitative and quantitative comparisons are provided for universal unconstrained conditions in which source-domain training images are unavailable, demonstrating the effectiveness and superiority of our method for universal face photo-sketch style transfer.
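The abstract describes a patch-level regression from a test image onto the target-domain training collection, fused across a learned (CNN) view and a hand-crafted view. The sketch below illustrates that general idea in simplified NumPy form; it is an assumption-laden illustration, not the authors' implementation: the feature extractors (raw intensities and a gradient histogram standing in for CNN and SIFT-like views), the fusion rule, the K-nearest-neighbor regression, and all parameter names are placeholders, and the domain-wise and local consistency adaptation terms of the deep domain translation framework are omitted.

```python
# Illustrative sketch only: target-domain-only, patch-based style transfer via
# multiview features and local regression weights. All choices here are assumptions.
import numpy as np

def extract_views(patch):
    """Return a list of feature 'views' for a patch.
    Raw intensities and a gradient-orientation histogram stand in for the
    CNN and hand-crafted representations used in the paper."""
    raw = patch.ravel().astype(np.float64)
    gy, gx = np.gradient(patch.astype(np.float64))
    hist, _ = np.histogram(np.arctan2(gy, gx), bins=8, range=(-np.pi, np.pi),
                           weights=np.hypot(gx, gy))
    return [raw, hist]

def regression_weights(test_feat, neighbor_feats, reg=1e-3):
    """Solve for weights that reconstruct the test feature from its K neighbors
    (a ridge-regularized local linear regression), normalized to sum to 1."""
    diffs = neighbor_feats - test_feat                      # (K, D)
    gram = diffs @ diffs.T                                  # (K, K)
    gram += reg * max(np.trace(gram), 1.0) * np.eye(len(gram))
    w = np.linalg.solve(gram, np.ones(len(gram)))
    return w / w.sum()

def synthesize_patch(test_patch, train_patches, K=5, view_weights=(0.5, 0.5)):
    """Fuse the feature views, find the K nearest target-domain training patches,
    and return their regression-weighted average as the stylized patch."""
    def fuse(views):
        # Multiview fusion by weighted concatenation (an assumed, simple rule).
        return np.concatenate([w * v for w, v in zip(view_weights, views)])
    test_feat = fuse(extract_views(test_patch))
    train_feats = np.stack([fuse(extract_views(p)) for p in train_patches])
    dists = np.linalg.norm(train_feats - test_feat, axis=1)
    idx = np.argsort(dists)[:K]
    w = regression_weights(test_feat, train_feats[idx])
    return np.tensordot(w, np.stack([train_patches[i] for i in idx]), axes=1)

# Toy usage: synthesize one 8x8 "sketch" patch from a random target collection.
rng = np.random.default_rng(0)
target_sketch_patches = [rng.random((8, 8)) for _ in range(50)]
out = synthesize_patch(rng.random((8, 8)), target_sketch_patches)
print(out.shape)  # (8, 8)
```

Note that only target-domain (sketch) patches are consulted at synthesis time, mirroring the paper's premise that no source-domain training images are available.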
Pages: 8519-8534
Number of pages: 16
Related Papers
50 records in total
  • [41] Dual-Transfer Face Sketch-Photo Synthesis
    Zhang, Mingjin
    Wang, Ruxin
    Gao, Xinbo
    Li, Jie
    Tao, Dacheng
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2019, 28 (02) : 642 - 657
  • [42] Domain-Aware Universal Style Transfer
    Hong, Kibeom
    Jeon, Seogkyu
    Yang, Huan
    Fu, Jianlong
    Byun, Hyeran
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 14589 - 14597
  • [43] Face Sketch Synthesis with Style Transfer using Pyramid Column Feature
    Chen, Chaofeng
    Tan, Xiao
    Wong, Kwan-Yee K.
    2018 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2018), 2018, : 485 - 493
  • [44] Data Augmentation via Photo-to-Sketch Translation for Sketch-based Image Retrieval
    Furuya, Takahiko
    Ohbuchi, Ryutarou
    TENTH INTERNATIONAL CONFERENCE ON GRAPHICS AND IMAGE PROCESSING (ICGIP 2018), 2019, 11069
  • [45] Universal Dehazing via Haze Style Transfer
    Park, Eunpil
    Yoo, Jaejun
    Sim, Jae-Young
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2024, 34 (09) : 8576 - 8588
  • [46] Universal Style Transfer via Feature Transforms
    Li, Yijun
    Fang, Chen
    Yang, Jimei
    Wang, Zhaowen
    Lu, Xin
    Yang, Ming-Hsuan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 30 (NIPS 2017), 2017, 30
  • [47] High-Quality Face Caricature via Style Translation
    Laishram, Lamyanba
    Shaheryar, Muhammad
    Lee, Jong Taek
    Jung, Soon Ki
    IEEE ACCESS, 2023, 11 : 138882 - 138896
  • [48] MOST-Net: A Memory Oriented Style Transfer Network for Face Sketch Synthesis
    Ji, Fan
    Sun, Muyi
    Qi, Xingqun
    Li, Qi
    Sun, Zhenan
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 733 - 739
  • [49] Few-Shot Face Sketch-to-Photo Synthesis via Global-Local Asymmetric Image-to-Image Translation
    Li, Yongkang
    Liang, Qifan
    Han, Zhen
    Mai, Wenjun
    Wang, Zhongyuan
    ACM TRANSACTIONS ON MULTIMEDIA COMPUTING COMMUNICATIONS AND APPLICATIONS, 2024, 20 (10)
  • [50] A Diverse Domain Generative Adversarial Network for Style Transfer on Face Photographs
    Tahir, Rabia
    Cheng, Keyang
    Memon, Bilal Ahmed
    Liu, Qing
    INTERNATIONAL JOURNAL OF INTERACTIVE MULTIMEDIA AND ARTIFICIAL INTELLIGENCE, 2022, 7 (05) : 100 - 108