Single-Image Super-Resolution Reconstruction via Learned Geometric Dictionaries and Clustered Sparse Coding

Cited by: 148
Authors
Yang, Shuyuan [1 ]
Wang, Min [2 ]
Chen, Yiguang [1 ]
Sun, Yaxin [1 ]
Affiliations
[1] Xidian Univ, Key Lab Intelligent Percept & Image Understanding, Minist Educ, Xian 710071, Peoples R China
[2] Xidian Univ, Natl Key Lab Radar Signal Proc, Xian 710071, Peoples R China
Funding
U.S. National Science Foundation;
Keywords
Clustered sparse coding; geometric dictionary; residual compensation self-similarity; super-resolution; SPACE;
DOI
10.1109/TIP.2012.2201491
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recently, single-image super-resolution reconstruction (SISR) via sparse coding has attracted increasing interest. In this paper, we propose a multiple-geometric-dictionaries-based clustered sparse coding scheme for SISR. First, a large number of high-resolution (HR) image patches are randomly extracted from a set of example training images and clustered into several groups of "geometric patches," from which the corresponding "geometric dictionaries" are learned to sparsely code each local patch in a low-resolution image. A clustering aggregation is then performed on the HR patches recovered by the different dictionaries, followed by a patch aggregation to estimate the HR image. Because an image often contains many repetitive structures, we add a self-similarity constraint on the recovered image during patch aggregation to reveal new features and details. Finally, the HR residual image is estimated by the proposed recovery method and compensated to better preserve subtle image details. Experiments on natural images show that the proposed method outperforms its counterparts in both visual fidelity and numerical measures.
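The pipeline the abstract outlines (cluster HR training patches into geometric groups, learn one dictionary per cluster, then code each patch over its cluster's dictionary) can be sketched as follows. This is a minimal NumPy illustration, not the paper's method: the function names (`kmeans`, `learn_dictionaries`, `code_patch`) are hypothetical, and a truncated-SVD dictionary with least-squares coding stands in for the paper's learned sparse coding and the self-similarity/residual-compensation steps, which are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans(X, k, iters=20):
    """Plain k-means: cluster flattened patches into k 'geometric' groups."""
    centers = X[rng.choice(len(X), k, replace=False)].copy()
    for _ in range(iters):
        # assign each patch to its nearest cluster center
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

def learn_dictionaries(X, labels, k, atoms=8):
    """One 'geometric dictionary' per cluster, here via truncated SVD of its patches."""
    dicts = []
    for j in range(k):
        U, _, _ = np.linalg.svd(X[labels == j].T, full_matrices=False)
        dicts.append(U[:, :atoms])  # leading left singular vectors as atoms
    return dicts

def code_patch(p, centers, dicts):
    """Assign a patch to its nearest cluster, then code it over that dictionary."""
    j = int(np.argmin(((centers - p) ** 2).sum(-1)))
    coef, *_ = np.linalg.lstsq(dicts[j], p, rcond=None)
    return j, coef, dicts[j] @ coef  # cluster id, code, reconstructed patch

# Toy demo: 200 random flattened 5x5 "HR patches", 4 geometric clusters.
X = rng.standard_normal((200, 25))
labels, centers = kmeans(X, k=4)
dicts = learn_dictionaries(X, labels, k=4)
j, coef, recon = code_patch(X[0], centers, dicts)
```

In the paper itself the recovered HR patches from the different dictionaries would then go through clustering aggregation and patch aggregation (with the self-similarity constraint) to form the HR image estimate.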
Pages: 4016-4028
Page count: 13
Related Papers
50 records
  • [21] Sparse representation using multiple dictionaries for single image super-resolution
    Lin, Yih-Lon
    Sung, Chung-Ming
    Chiang, Yu-Min
    [J]. SIXTH INTERNATIONAL CONFERENCE ON GRAPHIC AND IMAGE PROCESSING (ICGIP 2014), 2015, 9443
  • [22] Image super-resolution via sparse representation over multiple learned dictionaries based on edge sharpness
    F. Yeganli
    M. Nazzal
    M. Unal
    H. Ozkaramanli
    [J]. Signal, Image and Video Processing, 2016, 10 : 535 - 542
  • [24] Single-Image Super-Resolution: A Survey
    Yao, Tingting
    Luo, Yu
    Chen, Yantong
    Yang, Dongqiao
    Zhao, Lei
    [J]. COMMUNICATIONS, SIGNAL PROCESSING, AND SYSTEMS, CSPS 2018, VOL II: SIGNAL PROCESSING, 2020, 516 : 119 - 125
  • [25] Single-Image Super-Resolution Using Sparse Regression and Natural Image Prior
    Kim, Kwang In
    Kwon, Younghee
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2010, 32 (06) : 1127 - 1133
  • [26] Single-Image Super-Resolution: A Benchmark
    Yang, Chih-Yuan
    Ma, Chao
    Yang, Ming-Hsuan
    [J]. COMPUTER VISION - ECCV 2014, PT IV, 2014, 8692 : 372 - 386
  • [27] Single-image super resolution using evolutionary sparse coding technique
    Ahmadi, Kaveh
    Salari, Ezzatollah
    [J]. IET IMAGE PROCESSING, 2017, 11 (01) : 13 - 21
  • [28] Single-image super-resolution based on sparse kernel ridge regression
    Wu, Fanlu
    Wang, Xiangjun
    [J]. AOPC 2017: OPTICAL SENSING AND IMAGING TECHNOLOGY AND APPLICATIONS, 2017, 10462
  • [29] LEARNED MULTIMODAL CONVOLUTIONAL SPARSE CODING FOR GUIDED IMAGE SUPER-RESOLUTION
    Marivani, Iman
    Tsiligianni, Evaggelia
    Cornelis, Bruno
    Deligiannis, Nikos
    [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2019, : 2891 - 2895
  • [30] Single Image Super-resolution via Learned Representative Features and Sparse Manifold Embedding
    Zhang, Liao
    Yang, Shuyuan
    Zhang, Jiren
    Jiao, Licheng
    [J]. PROCEEDINGS OF THE 2014 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2014, : 1278 - 1284