HIPA: Hierarchical Patch Transformer for Single Image Super Resolution

Cited by: 10
Authors
Cai, Qing [1 ]
Qian, Yiming [2 ]
Li, Jinxing [3 ]
Lyu, Jun [4 ]
Yang, Yee-Hong [5 ]
Wu, Feng [6 ]
Zhang, David [7 ,8 ,9 ]
Affiliations
[1] Ocean Univ China, Fac Informat Sci & Engn, Qingdao 266100, Shandong, Peoples R China
[2] Univ Manitoba, Dept Comp Sci, Winnipeg, MB R3T 2N2, Canada
[3] Harbin Inst Technol, Sch Comp Sci & Technol, Shenzhen 518055, Guangdong, Peoples R China
[4] Hong Kong Polytech Univ, Sch Nursing, Hong Kong, Peoples R China
[5] Univ Alberta, Dept Comp Sci, Edmonton, AB T6G 2E9, Canada
[6] Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230026, Anhui, Peoples R China
[7] Chinese Univ Hong Kong, Sch Data Sci, Shenzhen 518172, Peoples R China
[8] Shenzhen Inst Artificial Intelligence & Robot Soc, Shenzhen 518129, Guangdong, Peoples R China
[9] CUHK SZ Linkl Joint Lab Comp Vis & Artificial Inte, Shenzhen 518172, Guangdong, Peoples R China
Funding
Natural Sciences and Engineering Research Council of Canada; US National Science Foundation;
Keywords
Transformers; Feature extraction; Convolution; Image restoration; Superresolution; Visualization; Computer architecture; single image super-resolution; hierarchical patch transformer; attention-based position embedding; SUPERRESOLUTION; NETWORK;
DOI
10.1109/TIP.2023.3279977
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Transformer-based architectures have started to emerge in single image super resolution (SISR) and have achieved promising performance. However, most existing vision Transformer-based SISR methods still have two shortcomings: (1) they divide images into the same number of patches with a fixed size, which may not be optimal for restoring patches with different levels of texture richness; and (2) their position encodings treat all input tokens equally and hence neglect the dependencies among them. This paper presents HIPA, a novel Transformer architecture that progressively recovers the high resolution image using a hierarchical patch partition. Specifically, we build a cascaded model that processes an input image in multiple stages, starting from tokens with small patch sizes and gradually merging them until full resolution is reached. Such a hierarchical patch mechanism not only explicitly enables feature aggregation at multiple resolutions but also adaptively learns patch-aware features for different image regions, e.g., using a smaller patch for areas with fine details and a larger patch for textureless regions. Meanwhile, a new attention-based position encoding scheme for Transformers is proposed that lets the network focus on the tokens deserving more attention by assigning different weights to different tokens; to the best of our knowledge, this is the first such scheme. Furthermore, we also propose a multi-receptive field attention module to enlarge the convolutional receptive field across different branches. Experimental results on several public datasets demonstrate the superior performance of the proposed HIPA over previous methods, both quantitatively and qualitatively. We will share our code and models when the paper is accepted.
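The hierarchical patch mechanism described in the abstract can be illustrated with a toy sketch. This is not the authors' released code: the function names and the 2x2 merge rule are hypothetical assumptions used only to show how the token count shrinks while each token's effective patch size grows from stage to stage.

```python
# Illustrative sketch of a hierarchical patch partition (hypothetical names):
# stage 1 tokenizes the image into small patches; each later stage merges every
# 2x2 neighbourhood of tokens into one coarser token, so the token count drops
# by 4x while each token's effective patch doubles per side.

def partition(h, w, patch):
    """Return (row, col) grid coordinates of patch tokens for an h x w image."""
    assert h % patch == 0 and w % patch == 0, "image must tile evenly"
    return [(r, c) for r in range(h // patch) for c in range(w // patch)]

def merge_tokens(tokens):
    """Merge each 2x2 neighbourhood of token coordinates into one coarser token."""
    merged = {(r // 2, c // 2) for r, c in tokens}
    return sorted(merged)

# Example: a 32x32 image with 4x4 patches yields an 8x8 grid of 64 tokens;
# one merge stage leaves 16 tokens with an 8x8 effective patch each.
stage1 = partition(32, 32, 4)
stage2 = merge_tokens(stage1)
```

In the actual model, each merge would also aggregate the merged tokens' feature vectors (e.g., by concatenation and a linear projection); the sketch tracks only the grid geometry.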
Pages: 3226-3237
Number of pages: 12
Related Papers
(50 total)
  • [1] Transformer for Single Image Super-Resolution
    Lu, Zhisheng
    Li, Juncheng
    Liu, Hong
    Huang, Chaoyan
    Zhang, Linlin
    Zeng, Tieyong
    [J]. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION WORKSHOPS, CVPRW 2022, 2022, : 456 - 465
  • [2] HIERARCHICAL RECURSIVE NETWORK FOR SINGLE IMAGE SUPER RESOLUTION
    Su, Minglan
    Lai, Shenqi
    Chai, Zhenhua
    Wei, Xiaoming
    Liu, Yong
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO WORKSHOPS (ICMEW), 2019, : 595 - 598
  • [3] Single image super-resolution based on image patch classification
    Xia, Ping
    Yan, Hua
    Li, Jing
    Sun, Jiande
    [J]. SECOND INTERNATIONAL WORKSHOP ON PATTERN RECOGNITION, 2017, 10443
  • [4] Efficient mixed transformer for single image super-resolution
    Zheng, Ling
    Zhu, Jinchen
    Shi, Jinpeng
    Weng, Shizhuang
    [J]. ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2024, 133
  • [5] Single Image Super-Resolution with Hierarchical Receptive Field
    Qin, Din
    Gu, Xiaodong
    [J]. 2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2020,
  • [6] Single Image Super-resolution Using Spatial Transformer Networks
    Wang, Qiang
    Fan, Huijie
    Cong, Yang
    Tang, Yandong
    [J]. 2017 IEEE 7TH ANNUAL INTERNATIONAL CONFERENCE ON CYBER TECHNOLOGY IN AUTOMATION, CONTROL, AND INTELLIGENT SYSTEMS (CYBER), 2017, : 564 - 567
  • [7] Adaptive Patch Exiting for Scalable Single Image Super-Resolution
    Wang, Shizun
    Liu, Jiaming
    Chen, Kaixin
    Li, Xiaoqi
    Lu, Ming
    Guo, Yandong
    [J]. COMPUTER VISION - ECCV 2022, PT XVIII, 2022, 13678 : 292 - 307
  • [8] Patch Based Synthesis for Single Depth Image Super-Resolution
    Mac Aodha, Oisin
    Campbell, Neill D. F.
    Nair, Arun
    Brostow, Gabriel J.
    [J]. COMPUTER VISION - ECCV 2012, PT III, 2012, 7574 : 71 - 84
  • [9] HCT: image super-resolution restoration using hierarchical convolution transformer networks
    Guo, Ying
    Tian, Chang
    Wang, Han
    Liu, Jie
    Di, Chong
    Ning, Keqing
    [J]. PATTERN ANALYSIS AND APPLICATIONS, 2025, 28 (2)
  • [10] Hybrid-Scale Hierarchical Transformer for Remote Sensing Image Super-Resolution
    Shang, Jianrun
    Gao, Mingliang
    Li, Qilei
    Pan, Jinfeng
    Zou, Guofeng
    Jeon, Gwanggil
    [J]. REMOTE SENSING, 2023, 15 (13)