Face Hallucination by Attentive Sequence Optimization with Reinforcement Learning

Citations: 35
Authors
Shi, Yukai [1 ]
Li, Guanbin [1 ]
Cao, Qingxing [1 ]
Wang, Keze [1 ]
Lin, Liang [1 ]
Affiliations
[1] Sun Yat Sen Univ, Sch Data & Comp Sci, Guangzhou 510006, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Face; Image resolution; Image reconstruction; Optimization; Reinforcement learning; Visualization; Image restoration; Face hallucination; reinforcement learning; recurrent neural network; IMAGE SUPERRESOLUTION; REPRESENTATION; ALIGNMENT; NETWORKS;
DOI
10.1109/TPAMI.2019.2915301
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Face hallucination is a domain-specific super-resolution problem that aims to generate a high-resolution (HR) face image from a low-resolution (LR) input. In contrast to existing patch-wise super-resolution models, which divide a face image into regular patches and independently apply the LR-to-HR mapping to each patch, we employ deep reinforcement learning and develop a novel attention-aware face hallucination (Attention-FH) framework that recurrently learns to attend to a sequence of patches and performs facial part enhancement by fully exploiting the global interdependency of the image. Specifically, our proposed framework incorporates two components: a recurrent policy network that dynamically specifies a new attended region at each time step based on the status of the super-resolved image and the sequence of previously attended regions, and a local enhancement network that hallucinates the selected patch and updates the global state. The Attention-FH model jointly learns the recurrent policy network and the local enhancement network by maximizing a long-term reward that reflects the quality of the hallucination result with respect to the whole HR image. Extensive experiments demonstrate that Attention-FH significantly outperforms state-of-the-art methods on in-the-wild face images with large pose and illumination variations.
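The attend-then-enhance loop the abstract describes can be sketched in miniature. The sketch below is illustrative only: a greedy error-based patch selector stands in for the learned recurrent policy network, and a simple interpolation toward the target stands in for the local enhancement network; the function and variable names (`attend_and_enhance`, `mse`, `patch`) are hypothetical, not from the paper.

```python
def mse(a, b):
    """Mean squared error between two equal-length sequences."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def attend_and_enhance(lr, hr, steps=10, patch=4):
    """Toy sequential hallucination loop over a 1-D 'image'.

    At each time step a patch is attended and locally enhanced; the
    long-term reward is the overall reduction in reconstruction error.
    """
    img = list(lr)
    for _ in range(steps):
        # "Policy" stand-in: attend to the patch with the largest local
        # error (the paper instead learns this choice with a recurrent
        # policy network trained by reinforcement learning).
        starts = range(0, len(img) - patch + 1)
        s = max(starts, key=lambda i: mse(img[i:i + patch], hr[i:i + patch]))
        # "Enhancement" stand-in: move the attended patch halfway toward
        # the target (the paper uses a learned local enhancement network).
        for i in range(s, s + patch):
            img[i] += 0.5 * (hr[i] - img[i])
    # Long-term reward: improvement in global reconstruction error.
    reward = mse(lr, hr) - mse(img, hr)
    return img, reward
```

Running the loop on a toy LR/HR pair yields a positive reward, mirroring the paper's idea that the reward is computed against the whole HR image rather than per patch.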
Pages: 2809-2824
Page count: 16
Related Papers
50 items in total
  • [21] Deep reinforcement learning for stacking sequence optimization of composite laminates
    Shonkwiler, Sara
    Li, Xiang
    Fenrich, Richard
    McMains, Sara
    MANUFACTURING LETTERS, 2023, 35 : 1203 - 1213
  • [22] Learning Patch-Based Anchors for Face Hallucination
    Ko, Wei-Jen
    Wang, Yu-Chiang Frank
    Chien, Shao-Yi
    2016 IEEE 18TH INTERNATIONAL WORKSHOP ON MULTIMEDIA SIGNAL PROCESSING (MMSP), 2016,
  • [24] Tiny Face Hallucination via Relativistic Adversarial Learning
    Shao Wenze
    Zhang Miaomiao
    Li Haibo
    JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY, 2021, 43 (09) : 2577 - 2585
  • [25] LEARNING ADAPTIVE LOCAL DISTANCE METRIC FOR FACE HALLUCINATION
    Zou, Yuanpeng
    Zhou, Fei
    Liao, Qingmin
    2017 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2017, : 1967 - 1971
  • [26] Learning-based face hallucination in DCT domain
    Zhang, Wei
    Cham, Wai-Kuen
    2008 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION, VOLS 1-12, 2008, : 2038 - 2045
  • [27] Face hallucination based on cluster consistent dictionary learning
    Li, Minqi
    He, Xiangjian
    Lam, Kin-Man
    Zhang, Kaibing
    Jing, Junfeng
    IET IMAGE PROCESSING, 2021, 15 (12) : 2841 - 2853
  • [28] Generating attentive goals for prioritized hindsight reinforcement learning
    Liu, Peng
    Bai, Chenjia
    Zhao, Yingnan
    Bai, Chenyao
    Zhao, Wei
    Tang, Xianglong
    KNOWLEDGE-BASED SYSTEMS, 2020, 203
  • [29] An Attentive Consensus Platform for Collaborative Reinforcement Learning Agents
    Hwang, Maxwell
    Lin, Jin-Ling
    Kao, Shao-Wei
    IEEE SYSTEMS JOURNAL, 2023, 17 (03): : 3783 - 3793
  • [30] Attentive Multi-task Deep Reinforcement Learning
    Bram, Timo
    Brunner, Gino
    Richter, Oliver
    Wattenhofer, Roger
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2019, PT III, 2020, 11908 : 134 - 149