Lightweight global-locally connected distillation network for single image super-resolution

Citations: 3
Authors
Zeng, Cong [1 ]
Li, Guangyao [1 ]
Chen, Qiaochuan [2 ]
Xiao, Qingguo [3 ]
Affiliations
[1] Tongji Univ, Coll Elect & Informat Engn, Shanghai 201804, Peoples R China
[2] Shanghai Univ, Sch Comp Engn & Sci, Shanghai 200444, Peoples R China
[3] Linyi Univ, Sch Automat & Elect Engn, Linyi 276000, Shandong, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Convolutional neural network; Image super-resolution; Global-local connection method; Information distillation; Wide activation; ACCURATE;
DOI
10.1007/s10489-022-03454-y
CLC Number
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
As convolutional neural networks (CNNs) have been widely applied to the ill-posed single image super-resolution (SISR) task, most previous CNN-based methods have made significant progress in terms of both peak signal-to-noise ratio (PSNR) and structural similarity (SSIM). However, as the layers in those networks grow deeper and deeper, they require more and more computing power and fail to distill the feature maps. In this paper, we propose a lightweight global-locally connected distillation network, GLCDNet. Specifically, we propose a wide-activation shrink-expand convolutional block whose filter channels first shrink and then expand to aggregate more information. This information is concatenated with the feature maps of the previous blocks to further exploit shallow information. Thus, the block exploits statistics across most feature channels while refining the useful information of the features. Furthermore, together with the global-local connection method, our network performs robustly on benchmark datasets with high processing speed. Comparative results demonstrate that our GLCDNet achieves superior performance while keeping parameters and speed balanced.
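The abstract describes the architecture in words only. As a rough illustration of the ideas it mentions (a shrink-expand block with wide activation, concatenation of each block's feature maps, and global-local skip connections), the minimal PyTorch sketch below shows one way such a design could be wired up; the class names ShrinkExpandBlock and GLCDNetSketch, the channel sizes, and the layer ordering are assumptions for illustration, not the paper's actual GLCDNet implementation.

```python
# Illustrative-only sketch (not the authors' code) of a wide-activation
# shrink-expand block plus dense concatenation of block outputs with
# global and local skip connections. All sizes and names are assumptions.
import torch
import torch.nn as nn


class ShrinkExpandBlock(nn.Module):
    """Hypothetical wide-activation block: 1x1 shrink -> 1x1 expand -> ReLU -> 3x3."""

    def __init__(self, channels=64, shrink_ratio=0.5, expand_ratio=4):
        super().__init__()
        shrunk = int(channels * shrink_ratio)      # "shrink" the filter channels
        expanded = channels * expand_ratio         # "expand" before the activation
        self.body = nn.Sequential(
            nn.Conv2d(channels, shrunk, 1),
            nn.Conv2d(shrunk, expanded, 1),
            nn.ReLU(inplace=True),                 # wide activation
            nn.Conv2d(expanded, channels, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x) + x                    # local (block-level) skip connection


class GLCDNetSketch(nn.Module):
    """Toy network: every block's output is kept, concatenated, and fused."""

    def __init__(self, channels=64, num_blocks=4, scale=4):
        super().__init__()
        self.head = nn.Conv2d(3, channels, 3, padding=1)
        self.blocks = nn.ModuleList(
            [ShrinkExpandBlock(channels) for _ in range(num_blocks)]
        )
        self.fuse = nn.Conv2d(channels * num_blocks, channels, 1)  # fuse concatenated maps
        self.tail = nn.Sequential(
            nn.Conv2d(channels, 3 * scale ** 2, 3, padding=1),
            nn.PixelShuffle(scale),                # sub-pixel upsampling
        )

    def forward(self, x):
        shallow = self.head(x)
        outs, cur = [], shallow
        for block in self.blocks:
            cur = block(cur)
            outs.append(cur)                       # retain each block's feature maps
        fused = self.fuse(torch.cat(outs, dim=1)) + shallow  # global skip connection
        return self.tail(fused)


if __name__ == "__main__":
    lr = torch.randn(1, 3, 48, 48)
    print(GLCDNetSketch()(lr).shape)               # torch.Size([1, 3, 192, 192])
```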
Pages: 17797-17809
Number of pages: 13
Related Papers
50 records in total
  • [21] Lightweight single image super-resolution Transformer network with explicit global structural similarities capture
    Yang, Shuli
    Tang, Shu
    Gao, Xinbo
    Xie, Xianzhong
    Leng, Jiaxu
    OPTICS AND LASER TECHNOLOGY, 2025, 184
  • [22] A very lightweight image super-resolution network
    Bai, Haomou
    Liang, Xiao
    SCIENTIFIC REPORTS, 2024, 14 (01)
  • [23] Residual Dense Information Distillation Network for Single Image Super-Resolution
    Chen, Qiaosong
    Li, Jinxin
    Duan, Bolin
    Pu, Liu
    Deng, Xin
    Wang, Jin
    2019 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN AND CYBERNETICS (SMC), 2019, : 500 - 505
  • [24] Single Image Super-Resolution via Laplacian Information Distillation Network
    Cheng, Mengcheng
    Shu, Zhan
    Hu, Jiapeng
    Zhang, Ying
    Su, Zhuo
    2018 7TH INTERNATIONAL CONFERENCE ON DIGITAL HOME (ICDH 2018), 2018, : 24 - 30
  • [25] Lightweight Single Image Super-Resolution with Selective Channel Processing Network
    Zhu, Hongyu
    Tang, Hao
    Hu, Yaocong
    Tao, Huanjie
    Xie, Chao
    SENSORS, 2022, 22 (15)
  • [26] Lightweight single image super-resolution with attentive residual refinement network
    Qin, Jinghui
    Zhang, Rumin
    NEUROCOMPUTING, 2022, 500 : 846 - 855
  • [27] LIGHTWEIGHT AND ACCURATE SINGLE IMAGE SUPER-RESOLUTION WITH CHANNEL SEGREGATION NETWORK
    Niu, Zhong-Han
    Lin, Xi-Peng
    Yu, An-Ni
    Zhou, Yang-Hao
    Yang, Yu-Bin
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 1630 - 1634
  • [29] Lightweight dynamic attention network for single thermal image super-resolution
    Zhang, Haikun
    Hu, Yueli
    SIGNAL IMAGE AND VIDEO PROCESSING, 2024, 18 (03) : 2195 - 2206
  • [30] MRDN: A lightweight Multi-stage residual distillation network for image Super-Resolution
    Yang, Xin
    Guo, Yingqing
    Li, Zhiqiang
    Zhou, Dake
    Li, Tao
    EXPERT SYSTEMS WITH APPLICATIONS, 2022, 204