RamGAN: Region Attentive Morphing GAN for Region-Level Makeup Transfer

Cited by: 5
Authors
Xiang, Jianfeng [1,2,3,4]
Chen, Junliang [1,2,3,4]
Liu, Wenshuang [1,2,3,4]
Hou, Xianxu [1,2,3,4]
Shen, Linlin [1,2,3,4]
Affiliations
[1] Shenzhen Univ, Comp Vis Inst, Sch Comp Sci & Software Engn, Shenzhen, Peoples R China
[2] Shenzhen Inst Artificial Intelligence & Robot Soc, Shenzhen, Peoples R China
[3] Shenzhen Univ, Guangdong Key Lab Intelligent Informat Proc, Shenzhen, Peoples R China
[4] Shenzhen Univ, Natl Engn Lab Big Data Syst Comp Technol, Shenzhen, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Region makeup transfer; Region attention; GAN
DOI
10.1007/978-3-031-20047-2_41
Chinese Library Classification
TP18 [Theory of Artificial Intelligence]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
In this paper, we propose a region adaptive makeup transfer GAN, called RamGAN, for precise region-level makeup transfer. In contrast to face-level transfer methods, RamGAN uses a spatial-aware Region Attentive Morphing Module (RAMM) to encode Region Attentive Matrices (RAMs) for local regions such as the lips, eye shadow and skin. The Region Style Injection Module (RSIM) is then applied to the RAMs produced by RAMM to obtain two Region Makeup Tensors, gamma and beta, which are subsequently added to the feature map of the source image to transfer the makeup. Because attention and makeup styles are computed per region, RamGAN achieves better disentangled makeup transfer across different facial regions. It also produces better transfer results when there are significant pose and expression variations between the source and reference images, owing to the integration of spatial information and region-level correspondence. Experiments on the public MT, M-Wild and Makeup datasets, including visual comparisons, quantitative results and a user study, show that our approach outperforms state-of-the-art methods such as BeautyGAN, BeautyGlow, DMT, CPM and PSGAN.
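The abstract describes RSIM turning the region attentive matrices into two per-region makeup tensors, gamma and beta, which are injected into the source feature map. The sketch below is a minimal PyTorch illustration of such region-level style injection and is not the authors' implementation: the class name RegionStyleInjection, the 1x1-convolution parameterisation and the affine injection rule out * (1 + gamma) + beta are assumptions made for illustration only; the paper defines the exact RAMM/RSIM computations.

```python
# Minimal sketch of region-level style injection in the spirit of RamGAN's RSIM.
# All names and the injection rule are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn


class RegionStyleInjection(nn.Module):
    """Per-region injection of makeup tensors (gamma, beta) into a source feature map."""

    def __init__(self, channels: int, num_regions: int = 3):
        super().__init__()
        # Hypothetical parameterisation: 1x1 convolutions predict gamma and beta
        # from reference features that have already been attended/morphed per region.
        self.to_gamma = nn.Conv2d(channels, channels, kernel_size=1)
        self.to_beta = nn.Conv2d(channels, channels, kernel_size=1)
        self.num_regions = num_regions

    def forward(self, src_feat, ref_feat, region_masks):
        # src_feat:     (B, C, H, W) feature map of the source image
        # ref_feat:     (B, C, H, W) reference features, attended per region upstream
        # region_masks: (B, R, H, W) soft masks for lips, eye shadow, skin, ...
        out = src_feat
        for r in range(self.num_regions):
            mask = region_masks[:, r:r + 1]          # (B, 1, H, W)
            gamma = self.to_gamma(ref_feat) * mask   # region makeup tensor gamma
            beta = self.to_beta(ref_feat) * mask     # region makeup tensor beta
            # Affine-style injection: each region only alters pixels its mask covers.
            out = out * (1.0 + gamma) + beta
        return out


if __name__ == "__main__":
    src = torch.randn(1, 64, 32, 32)
    ref = torch.randn(1, 64, 32, 32)
    masks = torch.rand(1, 3, 32, 32)
    print(RegionStyleInjection(64, num_regions=3)(src, ref, masks).shape)  # (1, 64, 32, 32)
```

Because each region's gamma and beta are gated by its own soft mask, the makeup of the lips, eye shadow and skin stays disentangled, which is the property the abstract emphasises.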
Pages: 719-735
Number of pages: 17