Beyond Subspace Isolation: Many-to-Many Transformer for Light Field Image Super-Resolution

Cited by: 0
Authors
Hu, Zeke Zexi [1 ]
Chen, Xiaoming [2 ]
Chung, Vera Yuk Ying [1 ]
Shen, Yiran [3 ]
Affiliations
[1] Univ Sydney, Sch Comp Sci, Darlington, NSW 2008, Australia
[2] Beijing Technol & Business Univ, Sch Comp & Artificial Intelligence, Beijing 102488, Peoples R China
[3] Shandong Univ, Sch Software, Jinan 250100, Peoples R China
Funding
Beijing Natural Science Foundation; National Natural Science Foundation of China;
Keywords
Transformers; Light fields; Tensors; Superresolution; Spatial resolution; Cameras; Correlation; Image reconstruction; Training; Optimization; Light field; super-resolution; image processing; deep learning;
DOI
10.1109/TMM.2024.3521795
Chinese Library Classification (CLC) Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
The effective extraction of spatial-angular features plays a crucial role in light field image super-resolution (LFSR) tasks, and the introduction of convolutions and Transformers has led to significant improvements in this area. Nevertheless, owing to the large 4D data volume of light field images, many existing methods opt to decompose the data into several lower-dimensional subspaces and apply Transformers to each subspace individually. As a side effect, these methods inadvertently restrict the self-attention mechanism to a One-to-One scheme that accesses only a limited subset of light field data, preventing comprehensive optimization over all spatial and angular cues. In this paper, we identify this limitation as subspace isolation and introduce a novel Many-to-Many Transformer (M2MT) to address it. M2MT aggregates angular information into the spatial subspace before performing self-attention, enabling complete access to all information across all sub-aperture images (SAIs) in a light field image. Consequently, M2MT can comprehensively capture long-range dependencies. With M2MT as the foundational component, we develop a simple yet effective M2MT network for LFSR. Our experimental results demonstrate that M2MT achieves state-of-the-art performance on various public datasets and offers a favorable balance between performance and efficiency, yielding higher-quality LFSR results with substantially lower memory and computation demands. We further conduct an in-depth analysis using local attribution maps (LAM) for visual interpretability; the results validate that M2MT operates with a truly non-local context in both the spatial and angular subspaces, mitigating subspace isolation and acquiring effective spatial-angular representations.
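The abstract's contrast between subspace-isolated (One-to-One) attention and the Many-to-Many scheme can be made concrete with a short sketch. The PyTorch module below is illustrative only and is not the authors' implementation: the tensor layout (B, U, V, H, W, C), the LFAttention name, and the single-sequence aggregation are assumptions for illustration. It contrasts folding the angular axes (U, V) into the batch, so each SAI attends only within itself, with folding them into the token sequence, so every token can attend across all SAIs. The paper's actual aggregation is designed to be more memory-efficient than the naive global attention shown here.

```python
import torch
import torch.nn as nn

class LFAttention(nn.Module):
    """Illustrative self-attention over a light field tensor (B, U, V, H, W, C).

    many_to_many=False: subspace-isolated attention -- each sub-aperture
    image (SAI) attends only within its own H*W tokens (the One-to-One
    scheme the paper identifies as subspace isolation).
    many_to_many=True: all U*V*H*W tokens form one sequence, so every
    spatial position in every SAI has complete access to all other SAIs.
    """

    def __init__(self, channels: int, heads: int = 4, many_to_many: bool = True):
        super().__init__()
        self.many_to_many = many_to_many
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)

    def forward(self, lf: torch.Tensor) -> torch.Tensor:
        b, u, v, h, w, c = lf.shape
        if self.many_to_many:
            # Fold the angular axes into the token sequence: one sequence
            # of U*V*H*W tokens gives cross-SAI (many-to-many) attention.
            tokens = lf.reshape(b, u * v * h * w, c)
        else:
            # Fold the angular axes into the batch: each SAI becomes an
            # isolated sequence of H*W tokens (one-to-one attention).
            tokens = lf.reshape(b * u * v, h * w, c)
        x = self.norm(tokens)
        out, _ = self.attn(x, x, x, need_weights=False)
        # Residual connection, then restore the original 4D layout.
        return (tokens + out).reshape(b, u, v, h, w, c)

# Example: a 5x5 light field of 32x32 SAIs with 16 feature channels.
lf = torch.randn(1, 5, 5, 32, 32, 16)
print(LFAttention(16)(lf).shape)  # torch.Size([1, 5, 5, 32, 32, 16])
```

The design choice the sketch highlights is purely a reshaping one: both branches use the same attention module, but only the many-to-many reshape lets the attention map span all spatial and angular positions at once.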
Pages: 1334-1348
Page count: 15