Multiscale Attention Fusion for Depth Map Super-Resolution Generative Adversarial Networks

Cited by: 1
Authors
Xu, Dan [1 ]
Fan, Xiaopeng [1 ,2 ]
Gao, Wen [2 ,3 ]
Affiliations
[1] Harbin Inst Technol, Sch Comp Sci & Technol, Harbin 150001, Peoples R China
[2] Pengcheng Lab, Shenzhen 518052, Peoples R China
[3] Peking Univ, Sch Elect Engn & Comp Sci, Beijing 100871, Peoples R China
Funding
National Natural Science Foundation of China; National High-Tech Research and Development Program of China (863 Program)
Keywords
attention; depth map; fusion; generative adversarial networks; multiscale; super-resolution;
DOI
10.3390/e25060836
Chinese Library Classification
O4 [Physics]
Subject Classification Code
0702
Abstract
Color images have long been used as important supplementary information to guide the super-resolution of depth maps. However, quantitatively measuring how strongly a color image guides the corresponding depth map has remained a neglected issue. To address this problem, and inspired by the recent success of generative adversarial networks in color image super-resolution, we propose a depth map super-resolution framework based on generative adversarial networks with multiscale attention fusion. Fusing the color features and depth features at the same scale through a hierarchical fusion attention module effectively measures the guiding effect of the color image on the depth map, while fusing the joint color-depth features across different scales balances the influence of each scale on the super-resolved depth map. The generator loss, composed of a content loss, an adversarial loss, and an edge loss, helps restore sharper depth edges. Experimental results on several types of benchmark depth map datasets show that the proposed multiscale attention fusion based depth map super-resolution framework achieves significant subjective and objective improvements over state-of-the-art algorithms, verifying the validity and generalization ability of the model.
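The record gives only this high-level description, but the two mechanisms the abstract names, same-scale attention fusion of color and depth features and a composite generator loss with an edge term, can be sketched concretely. The PyTorch sketch below is an illustrative assumption, not the authors' implementation: the names `AttentionFusion` and `generator_loss`, the channel-attention design, and the loss weights `alpha` and `beta` are all hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFusion(nn.Module):
    """Hypothetical same-scale fusion block: channel attention over the
    concatenated color and depth features. The learned attention weights
    act as a quantitative measure of how strongly the color branch
    should guide the depth branch at this scale."""

    def __init__(self, channels: int):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.attn = nn.Sequential(
            nn.Conv2d(2 * channels, channels // 4, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // 4, 2 * channels, kernel_size=1),
            nn.Sigmoid(),
        )
        self.merge = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, depth_feat, color_feat):
        joint = torch.cat([depth_feat, color_feat], dim=1)  # (B, 2C, H, W)
        weights = self.attn(self.pool(joint))               # per-channel guidance weights
        return self.merge(joint * weights)                  # fused feature (B, C, H, W)

def generator_loss(sr, hr, d_fake, alpha=1e-3, beta=0.1):
    """Content + adversarial + edge loss; alpha and beta are illustrative
    weights. Assumes single-channel depth maps of shape (B, 1, H, W)."""
    content = F.l1_loss(sr, hr)
    adversarial = F.binary_cross_entropy_with_logits(
        d_fake, torch.ones_like(d_fake))
    # Edge loss: L1 distance between Sobel gradients of SR and HR depth maps.
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=sr.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    def sobel(x):
        return torch.cat([F.conv2d(x, kx, padding=1),
                          F.conv2d(x, ky, padding=1)], dim=1)
    edge = F.l1_loss(sobel(sr), sobel(hr))
    return content + alpha * adversarial + beta * edge
```

With depth and color encoders producing C-channel feature maps at each scale, one such fusion block per scale followed by cross-scale aggregation would realize the multiscale fusion the abstract describes.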
Pages: 15
Related Papers
50 records in total
  • [21] Generative Adversarial Networks and Perceptual Losses for Video Super-Resolution
    Lucas, Alice
    Lopez-Tapia, Santiago
    Molina, Rafael
    Katsaggelos, Aggelos K.
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2019, 28 (07) : 3312 - 3327
  • [22] ISRGAN: Improved Super-Resolution Using Generative Adversarial Networks
    Chudasama, Vishal
    Upla, Kishor
    ADVANCES IN COMPUTER VISION, CVC, VOL 1, 2020, 943 : 109 - 127
  • [23] Hierarchical Generative Adversarial Networks for Single Image Super-Resolution
    Chen, Weimin
    Ma, Yuqing
    Liu, Xianglong
    Yuan, Yi
    2021 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2021), 2021, : 355 - 364
  • [24] MULTISCALE DIRECTIONAL FUSION FOR DEPTH MAP SUPER RESOLUTION WITH DENOISING
    Xu, Dan
    Fan, Xiaopeng
    Zhang, Shibo
    Wang, Yang
    Zhao, Debin
    Gao, Wen
    2019 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2019, : 2342 - 2346
  • [25] Super-Resolution Generative Adversarial Network Based on the Dual Dimension Attention Mechanism for Biometric Image Super-Resolution
    Huang, Chi-En
    Li, Yung-Hui
    Aslam, Muhammad Saqlain
    Chang, Ching-Chun
    SENSORS, 2021, 21 (23)
  • [26] Agricultural Pest Super-Resolution and Identification With Attention Enhanced Residual and Dense Fusion Generative and Adversarial Network
    Dai, Qiang
    Cheng, Xi
    Qiao, Yan
    Zhang, Youhua
    IEEE ACCESS, 2020, 8 (08) : 81943 - 81959
  • [27] Deformable Enhancement and Adaptive Fusion for Depth Map Super-Resolution
    Liu, Peng
    Zhang, Zonghua
    Meng, Zhaozong
    Gao, Nan
    IEEE SIGNAL PROCESSING LETTERS, 2022, 29 : 204 - 208
  • [28] BESRGAN: Boundary equilibrium face super-resolution generative adversarial networks
    Ren, Xinyi
    Hui, Qiang
    Zhao, Xingke
    Xiong, Jianping
    Yin, Jun
    IET IMAGE PROCESSING, 2023, 17 (06) : 1784 - 1796
  • [29] D-SRGAN: DEM Super-Resolution with Generative Adversarial Networks
    Demiray, B. Z.
    Sit, M.
    Demir, I.
    SN COMPUTER SCIENCE, 2021, 2 (1)
  • [30] Super-Resolution Reconstruction of Cell Images Based on Generative Adversarial Networks
    Pan, Bin
    Du, Yifeng
    Guo, Xiaoming
    IEEE ACCESS, 2024, 12 : 72252 - 72263