Multi-Scale and Multi-Stream Fusion Network for Pansharpening

Cited by: 4
Authors
Jian, Lihua [1 ]
Wu, Shaowu [2 ]
Chen, Lihui [3 ]
Vivone, Gemine [4 ,5 ]
Rayhana, Rakiba [6 ]
Zhang, Di [1 ]
Affiliations
[1] Zhengzhou Univ, Sch Elect & Informat Engn, Zhengzhou 450001, Peoples R China
[2] Wuhan Univ, Sch Comp Sci, Wuhan 430072, Peoples R China
[3] Chongqing Univ, Sch Microelect & Commun Engn, Chongqing 400044, Peoples R China
[4] Inst Methodol Environm Anal CNR IMAA, Natl Res Council, I-85050 Tito, Italy
[5] NBFC Natl Biodivers Future Ctr, I-90133 Palermo, Italy
[6] Univ British Columbia, Sch Engn, Kelowna, BC V1V 1V7, Canada
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
pansharpening; multi-scale; multi-stream fusion; multi-stage reconstruction loss; image enhancement; image fusion; PAN-SHARPENING METHOD; REMOTE-SENSING IMAGES; SATELLITE IMAGES; REGRESSION; INJECTION; CONTRAST; QUALITY; MODEL; MS;
DOI
10.3390/rs15061666
Chinese Library Classification (CLC)
X [Environmental Science, Safety Science];
Discipline Classification Code
08; 0830;
Abstract
Pansharpening refers to the use of a panchromatic image to improve the spatial resolution of a multi-spectral image while preserving spectral signatures. However, existing pansharpening methods are still unsatisfactory at balancing the trade-off between spatial enhancement and spectral fidelity. In this paper, a multi-scale and multi-stream fusion network (named MMFN) that leverages the multi-scale information of the source images is proposed. The proposed architecture is simple, yet effective, and can fully extract various spatial/spectral features at different levels. A multi-stage reconstruction loss is adopted to recover the pansharpened images in each multi-stream fusion block, which facilitates and stabilizes the training process. The qualitative and quantitative assessment on three real remote sensing datasets (i.e., QuickBird, Pleiades, and WorldView-2) demonstrates that the proposed approach outperforms state-of-the-art methods.
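The abstract mentions a multi-stage reconstruction loss applied to the pansharpened estimate of each fusion block, but gives no formula. A minimal NumPy sketch of one plausible form, a weighted sum of per-stage L1 errors against the reference image, is shown below; the function name, the choice of L1, and the uniform weighting are assumptions, not details taken from the paper.

```python
import numpy as np

def multi_stage_reconstruction_loss(stage_outputs, target, weights=None):
    """Weighted sum of per-stage L1 reconstruction errors.

    stage_outputs: list of arrays, one pansharpened estimate per fusion stage
                   (hypothetical interface; the paper's exact form may differ).
    target:        reference high-resolution multi-spectral image.
    weights:       optional per-stage weights; uniform (1.0) if omitted.
    """
    if weights is None:
        weights = [1.0] * len(stage_outputs)
    total = 0.0
    for w, out in zip(weights, stage_outputs):
        total += w * np.mean(np.abs(out - target))  # mean absolute error per stage
    return total

# Toy example: three stage outputs progressively closer to the target.
target = np.ones((4, 4, 3))
stages = [np.full((4, 4, 3), v) for v in (0.5, 0.8, 0.95)]
loss = multi_stage_reconstruction_loss(stages, target)  # 0.5 + 0.2 + 0.05 = 0.75
```

Supervising every intermediate stage in this way gives each fusion block a direct gradient signal, which is consistent with the abstract's claim that the loss facilitates and stabilizes training.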
Pages: 21
Related Papers
50 records total
  • [21] Skeleton-based action recognition with multi-stream, multi-scale dilated spatial-temporal graph convolution network
    Haiping Zhang
    Xu Liu
    Dongjin Yu
    Liming Guan
    Dongjing Wang
    Conghao Ma
    Zepeng Hu
    Applied Intelligence, 2023, 53 : 17629 - 17643
  • [22] MSDRN: Pansharpening of Multispectral Images via Multi-Scale Deep Residual Network
    Wang, Wenqing
    Zhou, Zhiqiang
    Liu, Han
    Xie, Guo
    REMOTE SENSING, 2021, 13 (06)
  • [23] RGB-D Saliency Detection by Multi-stream Late Fusion Network
    Chen, Hao
    Li, Youfu
    Su, Dan
    COMPUTER VISION SYSTEMS, ICVS 2017, 2017, 10528 : 459 - 468
  • [24] Multi-Stream Threshold Shrinkage and Fusion Network for Product Surface Defect Detection
    Geng, Yubiao
    Yue, Zhiyuan
    Yan, Qiming
    Sun, Yubao
    Computer Engineering and Applications, 2023, 59 (10) : 162 - 170
  • [25] Remote Sensing Image Fusion Based on Generative Adversarial Network with Multi-stream Fusion Architecture
    Lei Dajiang
    Zhang Ce
    Li Zhixing
    Wu Yu
    JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY, 2020, 42 (08) : 1942 - 1949
  • [27] MSNet: A Multi-Stream Fusion Network for Remote Sensing Spatiotemporal Fusion Based on Transformer and Convolution
    Li, Weisheng
    Cao, Dongwen
    Peng, Yidong
    Yang, Chao
    REMOTE SENSING, 2021, 13 (18)
  • [28] A NEW PANSHARPENING METHOD WITH MULTI-SCALE STRUCTURE PERCEPTION
    Pan, Yu
    Li, Xu
    Gao, Ang
    Li, Lixin
    Mei, Shaohui
    Yue, Shigang
    IGARSS 2018 - 2018 IEEE INTERNATIONAL GEOSCIENCE AND REMOTE SENSING SYMPOSIUM, 2018, : 8046 - 8049
  • [29] Fusion of multi-stream speech features for dialect classification
    Shweta Sinha
    Aruna Jain
    S. S. Agrawal
    CSI Transactions on ICT, 2015, 2 (4) : 243 - 252
  • [30] MSTFDN: Multi-scale transformer fusion dehazing network
    Yan Yang
    Haowen Zhang
    Xudong Wu
    Xiaozhen Liang
    Applied Intelligence, 2023, 53 : 5951 - 5962