An Accurate and Lightweight Method for Human Body Image Super-Resolution

Cited by: 14
Authors
Liu, Yunan [1 ,2 ,3 ]
Zhang, Shanshan [1 ,2 ,3 ]
Xu, Jie [1 ,2 ,3 ]
Yang, Jian [1 ,2 ,3 ]
Tai, Yu-Wing [4 ]
Affiliations
[1] Nanjing Univ Sci & Technol, PCA Lab, Nanjing 210094, Peoples R China
[2] Nanjing Univ Sci & Technol, Key Lab Intelligent Percept & Syst High Dimens In, Minist Educ, Nanjing 210094, Peoples R China
[3] Nanjing Univ Sci & Technol, Jiangsu Key Lab Image & Video Understanding Socia, Sch Comp Sci & Engn, Nanjing 210094, Peoples R China
[4] Kuaishou Technol, Shenzhen 518000, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Super resolution; human body prior; lightweight multi-scale block; nonsubsampled shearlet transform
DOI
10.1109/TIP.2021.3055737
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In this paper, we propose a new method to super-resolve low-resolution human body images by learning efficient multi-scale features and exploiting a useful human body prior. Specifically, we propose a lightweight multi-scale block (LMSB) as the basic module of a coherent framework, which contains an image reconstruction branch and a prior estimation branch. In the image reconstruction branch, the LMSB aggregates features from multiple receptive fields so as to gather rich context information for low-to-high resolution mapping. In the prior estimation branch, we adopt human parsing maps and nonsubsampled shearlet transform (NSST) sub-bands to represent the human body prior, which is expected to enhance the details of reconstructed human body images. When evaluated on the newly collected HumanSR dataset, our method outperforms state-of-the-art image super-resolution methods with ~8x fewer parameters; moreover, our method significantly improves the performance of human image analysis tasks (e.g., human parsing and pose estimation) for low-resolution inputs.
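To make the multi-scale idea in the abstract concrete, the sketch below shows one way a lightweight block can aggregate features from several receptive fields before fusing them. This is a minimal PyTorch illustration, not the paper's actual LMSB: the class name MultiScaleBlockSketch, the choice of 3x3/5x5/7x7 depthwise-separable branches, and the 1x1 fusion layer are all assumptions made here for illustration.

```python
# Hypothetical sketch of a lightweight multi-scale block (NOT the authors' LMSB).
# It aggregates features from several receptive fields using depthwise-separable
# convolutions to keep the parameter count low, then fuses them with a 1x1 conv.
import torch
import torch.nn as nn


class MultiScaleBlockSketch(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # Parallel branches with different receptive fields (3x3, 5x5, 7x7).
        self.branches = nn.ModuleList()
        for k in (3, 5, 7):
            self.branches.append(nn.Sequential(
                # Depthwise convolution: one filter per channel, cheap in parameters.
                nn.Conv2d(channels, channels, k, padding=k // 2,
                          groups=channels, bias=False),
                # Pointwise convolution mixes information across channels.
                nn.Conv2d(channels, channels, 1, bias=False),
                nn.ReLU(inplace=True),
            ))
        # 1x1 convolution fuses the concatenated multi-scale features.
        self.fuse = nn.Conv2d(3 * channels, channels, 1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([branch(x) for branch in self.branches], dim=1)
        return x + self.fuse(feats)  # residual connection around the block


if __name__ == "__main__":
    block = MultiScaleBlockSketch(64)
    lr_feat = torch.randn(1, 64, 32, 32)  # dummy low-resolution feature map
    print(block(lr_feat).shape)           # torch.Size([1, 64, 32, 32])
```

Depthwise-separable branches are one plausible way to keep such a block lightweight; the paper's prior estimation branch (human parsing maps and NSST sub-bands) is not sketched here, since its exact formulation is not given in the abstract.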
Pages: 2888-2897
Number of pages: 10