A Unified Network for Arbitrary Scale Super-Resolution of Video Satellite Images

Cited by: 13
Authors
He, Zhi [1 ]
He, Dan [2 ]
Affiliations
[1] Sun Yat Sen Univ, Guangdong Prov Key Lab Urbanizat & Geosimulat, Ctr Integrated Geog Informat Anal,Sch Geog & Plan, Southern Marine Sci & Engn Guangdong Lab Zhuhai, Guangzhou, Guangdong, Peoples R China
[2] Dongguan Univ Technol, City Coll, Dongguan 511700, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Arbitrary scale; deep learning; super-resolution (SR); video satellite; SUPER RESOLUTION; INTERPOLATION;
DOI
10.1109/TGRS.2020.3038653
Chinese Library Classification
P3 [Geophysics]; P59 [Geochemistry]
Discipline codes
0708; 070902
Abstract
Super-resolution (SR) has attracted increasing attention as it can improve the quality of video satellite images. Most previous studies consider only a few integer magnification factors and train a specific SR model for each scale factor. In the real world, however, it is a common requirement to zoom videos arbitrarily, e.g., by rolling the mouse wheel. In this article, we propose a unified network for arbitrary scale SR (ASSR) of video satellite images. The proposed ASSR consists of two modules, i.e., a feature learning module and an arbitrary upscale module. The feature learning module accepts multiple low-resolution (LR) frames and extracts useful features from those frames using a stack of 3-D residual blocks. The arbitrary upscale module takes the extracted features as input and enhances the spatial resolution by subpixel convolution and bicubic-based adjustment. Unlike existing video satellite image SR methods, ASSR can continuously zoom LR video satellite images with arbitrary integer and noninteger scale factors in a single model. Experiments have been conducted on real video satellite images acquired by Jilin-1 and OVS-1. Quantitative and qualitative results demonstrate that ASSR achieves superior reconstruction performance compared with state-of-the-art SR methods.
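The arbitrary upscale module described above pairs subpixel convolution with a bicubic-based adjustment so that one model serves every scale factor. The sketch below is illustrative rather than the authors' implementation: it shows the two-step idea in NumPy, pixel-shuffling the learned features by the nearest integer factor and then resizing to the exact target size. Function names are hypothetical, simple bilinear weights stand in here for the paper's bicubic adjustment, and in ASSR the `out_c * r * r` feature channels would come from a learned convolution.

```python
import numpy as np

def pixel_shuffle(x, r):
    """Subpixel convolution's rearrangement step: (C*r*r, H, W) -> (C, H*r, W*r)."""
    crr, h, w = x.shape
    c = crr // (r * r)
    x = x.reshape(c, r, r, h, w)      # split channels into (c, r, r) subpixel groups
    x = x.transpose(0, 3, 1, 4, 2)    # interleave subpixels with spatial axes: (c, h, r, w, r)
    return x.reshape(c, h * r, w * r)

def resize_to(img, out_h, out_w):
    """Adjust an integer-upscaled image to the exact target size.
    Bilinear weights used here for brevity; ASSR uses a bicubic-based adjustment."""
    c, h, w = img.shape
    # map output pixel centers back into input coordinates
    ys = np.clip((np.arange(out_h) + 0.5) * h / out_h - 0.5, 0, h - 1)
    xs = np.clip((np.arange(out_w) + 0.5) * w / out_w - 0.5, 0, w - 1)
    y0, x0 = np.floor(ys).astype(int), np.floor(xs).astype(int)
    y1, x1 = np.minimum(y0 + 1, h - 1), np.minimum(x0 + 1, w - 1)
    wy, wx = ys - y0, xs - x0
    top = img[:, y0][:, :, x0] * (1 - wx) + img[:, y0][:, :, x1] * wx
    bot = img[:, y1][:, :, x0] * (1 - wx) + img[:, y1][:, :, x1] * wx
    return top * (1 - wy)[None, :, None] + bot * wy[None, :, None]

def arbitrary_upscale(features, out_c, scale):
    """Upscale by any scale >= 1: pixel-shuffle by ceil(scale), then
    interpolate down to the exact (possibly noninteger) target size."""
    r = int(np.ceil(scale))
    assert features.shape[0] == out_c * r * r
    _, h, w = features.shape
    up = pixel_shuffle(features, r)
    return resize_to(up, round(h * scale), round(w * scale))
```

Decoupling the integer rearrangement from the fractional interpolation step is what lets a single set of learned weights cover noninteger scale factors.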
Pages: 8812-8825 (14 pages)
Related Papers (50 total)
  • [21] DEFORMABLE ALIGNMENT AND SCALE-ADAPTIVE FEATURE EXTRACTION NETWORK FOR CONTINUOUS-SCALE SATELLITE VIDEO SUPER-RESOLUTION
    Ni, Ning
    Wu, Hanlin
    Zhang, Libao
    2022 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2022, : 2746 - 2750
  • [22] Super-resolution restoration of facial images in video
    Yu, Jiangang
    Bhanu, Bir
    18TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION, VOL 4, PROCEEDINGS, 2006, : 342 - +
  • [23] SUPER-RESOLUTION OF DEFORMED FACIAL IMAGES IN VIDEO
    Yu, Jiangang
    Bhanu, Bir
    2008 15TH IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, VOLS 1-5, 2008, : 1160 - 1163
  • [24] DSCVSR: A Lightweight Video Super-Resolution for Arbitrary Magnification
    Hong, Zixuan
    Cao, Weipeng
    Xu, Zhiwu
    Ming, Zhong
    Cao, Chuqing
    Zheng, Liang
    KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, PT I, KSEM 2024, 2024, 14884 : 112 - 123
  • [25] Learning a Local-Global Alignment Network for Satellite Video Super-Resolution
    Jin, Xianyu
    He, Jiang
    Xiao, Yi
    Yuan, Qiangqiang
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2023, 20
  • [26] Deep Joint Estimation Network for Satellite Video Super-Resolution With Multiple Degradations
    Liu, Huan
    Gu, Yanfeng
    IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING, 2022, 60
  • [28] Enhanced dual branches network for arbitrary-scale image super-resolution
    Li, Guangping
    Xiao, Huanling
    Liang, Dingkai
    ELECTRONICS LETTERS, 2023, 59 (01)
  • [29] MCWESRGAN: Improving Enhanced Super-Resolution Generative Adversarial Network for Satellite Images
    Karwowska, Kinga
    Wierzbicki, Damian
    IEEE JOURNAL OF SELECTED TOPICS IN APPLIED EARTH OBSERVATIONS AND REMOTE SENSING, 2023, 16 : 9886 - 9906
  • [30] Efficient lightweight network for video super-resolution
    Luo, Laigan
    Yi, Benshun
    Wang, Zhongyuan
    Yi, Peng
    He, Zheng
    NEURAL COMPUTING & APPLICATIONS, 2023, 36 (2): 883 - 896