HIPA: Hierarchical Patch Transformer for Single Image Super Resolution

Cited by: 10
Authors
Cai, Qing [1 ]
Qian, Yiming [2 ]
Li, Jinxing [3 ]
Lyu, Jun [4 ]
Yang, Yee-Hong [5 ]
Wu, Feng [6 ]
Zhang, David [7 ,8 ,9 ]
Affiliations
[1] Ocean Univ China, Fac Informat Sci & Engn, Qingdao 266100, Shandong, Peoples R China
[2] Univ Manitoba, Dept Comp Sci, Winnipeg, MB R3T 2N2, Canada
[3] Harbin Inst Technol, Sch Comp Sci & Technol, Shenzhen 518055, Guangdong, Peoples R China
[4] Hong Kong Polytech Univ, Sch Nursing, Hong Kong, Peoples R China
[5] Univ Alberta, Dept Comp Sci, Edmonton, AB T6G 2E9, Canada
[6] Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230026, Anhui, Peoples R China
[7] Chinese Univ Hong Kong, Sch Data Sci, Shenzhen 518172, Peoples R China
[8] Shenzhen Inst Artificial Intelligence & Robot Soc, Shenzhen 518129, Guangdong, Peoples R China
[9] CUHK SZ Linkl Joint Lab Comp Vis & Artificial Intelligence, Shenzhen 518172, Guangdong, Peoples R China
Funding
Natural Sciences and Engineering Research Council of Canada; US National Science Foundation;
Keywords
Transformers; Feature extraction; Convolution; Image restoration; Superresolution; Visualization; Computer architecture; single image super-resolution; hierarchical patch transformer; attention-based position embedding; SUPERRESOLUTION; NETWORK;
DOI
10.1109/TIP.2023.3279977
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104; 0812; 0835; 1405;
Abstract
Transformer-based architectures have started to emerge in single image super resolution (SISR) and have achieved promising performance. However, most existing vision Transformer-based SISR methods still have two shortcomings: (1) they divide images into the same number of patches with a fixed size, which may not be optimal for restoring patches with different levels of texture richness; and (2) their position encodings treat all input tokens equally and hence neglect the dependencies among them. This paper presents HIPA, a novel Transformer architecture that progressively recovers the high-resolution image using a hierarchical patch partition. Specifically, we build a cascaded model that processes an input image in multiple stages, starting with tokens of small patch size and gradually merging them until the full resolution is reached. Such a hierarchical patch mechanism not only explicitly enables feature aggregation at multiple resolutions but also adaptively learns patch-aware features for different image regions, e.g., using a smaller patch for areas with fine details and a larger patch for textureless regions. Meanwhile, a new attention-based position encoding scheme for the Transformer is proposed, which assigns different weights to different tokens so that the network can focus on the tokens that deserve more attention; to the best of our knowledge, this is the first scheme of its kind. Furthermore, we also propose a multi-receptive field attention module that enlarges the convolutional receptive field through different branches. Experimental results on several public datasets demonstrate the superior performance of the proposed HIPA over previous methods, both quantitatively and qualitatively. We will share our code and models when the paper is accepted.
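The abstract names two components that a code sketch can make concrete: the attention-based position encoding (a per-token weight applied to a positional signal) and the multi-receptive field attention module (parallel convolution branches with different receptive fields, fused adaptively). Below is a minimal PyTorch sketch of both ideas as described in the abstract; it is not the authors' implementation, and the module names, branch count, kernel sizes, and fusion scheme are assumptions made purely for illustration.

```python
# Minimal sketch of two ideas from the HIPA abstract (NOT the authors' code).
# All names and design details below are hypothetical illustrations.
import torch
import torch.nn as nn


class AttentionPositionEncoding(nn.Module):
    """Hypothetical attention-based position encoding: rather than adding a
    fixed embedding uniformly, learn a per-token weight so that some tokens
    receive a stronger positional signal than others."""

    def __init__(self, num_tokens: int, dim: int):
        super().__init__()
        self.pos = nn.Parameter(torch.zeros(1, num_tokens, dim))  # learnable positions
        self.score = nn.Linear(dim, 1)                            # per-token weight

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:      # tokens: (B, N, C)
        w = torch.sigmoid(self.score(tokens))                     # (B, N, 1), in [0, 1]
        return tokens + w * self.pos                              # weighted positional signal


class MultiReceptiveFieldAttention(nn.Module):
    """Hypothetical multi-receptive-field block: parallel conv branches with
    different kernel sizes, fused by softmax weights from pooled features."""

    def __init__(self, channels: int):
        super().__init__()
        self.branches = nn.ModuleList(
            [nn.Conv2d(channels, channels, k, padding=k // 2) for k in (3, 5, 7)]
        )
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, len(self.branches)), nn.Softmax(dim=-1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:            # x: (B, C, H, W)
        feats = torch.stack([b(x) for b in self.branches], dim=1)  # (B, 3, C, H, W)
        w = self.fc(self.pool(x).flatten(1))                       # (B, 3) branch weights
        return x + (w[:, :, None, None, None] * feats).sum(dim=1)  # weighted fusion


if __name__ == "__main__":
    t = torch.randn(2, 64, 96)       # 2 images, 64 tokens, 96-dim features
    print(AttentionPositionEncoding(64, 96)(t).shape)      # torch.Size([2, 64, 96])
    f = torch.randn(2, 32, 48, 48)   # a feature map
    print(MultiReceptiveFieldAttention(32)(f).shape)        # torch.Size([2, 32, 48, 48])
```

In this sketch, the per-token sigmoid gate stands in for "assigning different weights to different tokens", and the softmax over pooled channel descriptors stands in for fusing convolutional branches with different receptive fields; the paper's actual modules may differ in both structure and detail.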
Pages: 3226-3237
Number of pages: 12