Beyond Subspace Isolation: Many-to-Many Transformer for Light Field Image Super-Resolution

Citations: 0
Authors
Hu, Zeke Zexi [1 ]
Chen, Xiaoming [2 ]
Chung, Vera Yuk Ying [1 ]
Shen, Yiran [3 ]
Affiliations
[1] Univ Sydney, Sch Comp Sci, Darlington, NSW 2008, Australia
[2] Beijing Technol & Business Univ, Sch Comp & Artificial Intelligence, Beijing 102488, Peoples R China
[3] Shandong Univ, Sch Software, Jinan 250100, Peoples R China
Funding
Beijing Natural Science Foundation; National Natural Science Foundation of China
Keywords
Transformers; Light fields; Tensors; Superresolution; Spatial resolution; Cameras; Correlation; Image reconstruction; Training; Optimization; Light field; super-resolution; image processing; deep learning;
DOI
10.1109/TMM.2024.3521795
CLC Classification Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
The effective extraction of spatial-angular features plays a crucial role in light field image super-resolution (LFSR), and the introduction of convolutions and Transformers has led to significant improvements in this area. Nevertheless, due to the large 4D data volume of light field images, many existing methods opt to decompose the data into a number of lower-dimensional subspaces and apply Transformers in each subspace individually. As a side effect, these methods inadvertently restrict the self-attention mechanism to a One-to-One scheme that accesses only a limited subset of the LF data, preventing comprehensive optimization over all spatial and angular cues. In this paper, we identify this limitation as subspace isolation and introduce a novel Many-to-Many Transformer (M2MT) to address it. M2MT aggregates angular information in the spatial subspace before performing the self-attention mechanism, enabling complete access to all information across all sub-aperture images (SAIs) in a light field image. Consequently, M2MT can comprehensively capture long-range correlation dependencies. With M2MT as the foundational component, we develop a simple yet effective M2MT network for LFSR. Our experimental results demonstrate that M2MT achieves state-of-the-art performance across various public datasets and offers a favorable balance between model performance and efficiency, yielding higher-quality LFSR results with substantially lower demands on memory and computation. We further conduct an in-depth analysis using local attribution maps (LAM) for visual interpretability, and the results validate that M2MT attains a truly non-local context in both the spatial and angular subspaces, mitigating subspace isolation and acquiring an effective spatial-angular representation.
Pages: 1334-1348
Page count: 15
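The abstract's distinction between One-to-One (subspace-isolated) attention and the Many-to-Many scheme can be illustrated with a toy NumPy sketch. All shapes, names, and the merge step here are illustrative assumptions, not the authors' implementation; learned query/key/value projections are omitted for brevity:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens):
    # tokens: (N, C). Identity projections for illustration only.
    scores = tokens @ tokens.T / np.sqrt(tokens.shape[1])
    return softmax(scores, axis=-1) @ tokens

# Toy 4D light field: U x V angular views, each an H x W image with C channels.
U, V, H, W, C = 2, 2, 4, 4, 8
rng = np.random.default_rng(0)
lf = rng.standard_normal((U, V, H, W, C))

# One-to-One (subspace isolation): attention runs separately inside each SAI,
# so each token only ever sees the H*W tokens of its own view.
per_sai = [self_attention(lf[u, v].reshape(H * W, C))
           for u in range(U) for v in range(V)]

# Many-to-Many: angular and spatial dimensions are merged into one token
# sequence, so every token attends to all U*V*H*W tokens across all SAIs.
merged = lf.reshape(U * V * H * W, C)
out = self_attention(merged)
```

The trade-off the abstract alludes to is visible in the attention cost: the isolated scheme computes U*V small (H*W)^2 score matrices, while the merged scheme computes one (U*V*H*W)^2 matrix with full cross-view context.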