MadFormer: multi-attention-driven image super-resolution method based on Transformer

Cited by: 4
|
Authors
Liu, Beibei [1 ]
Sun, Jing [1 ]
Zhu, Bing [2 ]
Li, Ting [1 ]
Sun, Fuming [1 ]
Affiliations
[1] Dalian Minzu Univ, Sch Informat & Commun Engn, Liaohe West Rd, Dalian 116600, Liaoning, Peoples R China
[2] Harbin Inst Technol, Sch Elect & Informat Engn, Xidazhi St, Harbin 150006, Heilongjiang, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image super-resolution; Transformer; Multi-attention-driven; Dynamic fusion;
DOI
10.1007/s00530-024-01276-1
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
While Transformer-based methods have demonstrated exceptional performance in low-level vision tasks, their strong modeling ability is confined to local regions, neglecting the spatial feature information and high-frequency channel details that matter for super-resolution. To enhance feature information and improve the visual experience, we propose MadFormer, a multi-attention-driven image super-resolution method based on a Transformer network. First, the low-resolution image undergoes an initial convolution to extract shallow features and is fed into a residual multi-attention block incorporating channel attention, spatial attention, and self-attention mechanisms. Multi-head self-attention captures global-local feature information, while channel attention and spatial attention effectively capture high-frequency features in the channel and spatial domains. The deep features are then passed to a dynamic fusion block that dynamically fuses the multi-attention features, aggregating cross-window information. Finally, the shallow and deep features are fused via convolution operations, and high-quality reconstruction yields the high-resolution image. Comprehensive quantitative and qualitative comparisons with other advanced algorithms demonstrate the substantial advantages of the proposed approach in terms of peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) for image super-resolution.
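The abstract's pipeline, with channel, spatial, and self-attention branches whose outputs are dynamically fused, can be illustrated with a minimal NumPy sketch. This is not the authors' implementation: the sigmoid gating, the single-head global attention, and the softmax-weighted fusion scalars are simplifying assumptions chosen only to show how three attention branches over one feature map can be combined.

```python
# Illustrative sketch (assumed, NOT MadFormer's actual code): three attention
# branches over a (C, H, W) feature map, fused with softmax-normalized weights.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def channel_attention(feat):
    # Squeeze spatial dims to a per-channel descriptor, gate each channel.
    desc = feat.mean(axis=(1, 2))            # (C,) global average pool
    gate = sigmoid(desc)                     # per-channel gate (MLP omitted for brevity)
    return feat * gate[:, None, None]

def spatial_attention(feat):
    # Gate each spatial position using pooled channel statistics.
    avg = feat.mean(axis=0)                  # (H, W) average over channels
    mx = feat.max(axis=0)                    # (H, W) max over channels
    gate = sigmoid(avg + mx)
    return feat * gate[None, :, :]

def self_attention(feat):
    # Plain single-head self-attention over flattened spatial positions.
    c, h, w = feat.shape
    x = feat.reshape(c, h * w).T             # (N, C) tokens
    attn = softmax(x @ x.T / np.sqrt(c))     # (N, N) attention map
    out = attn @ x                           # (N, C) attended tokens
    return out.T.reshape(c, h, w)

def dynamic_fusion(branches, logits):
    # Weight the branch outputs with softmax-normalized scalars, so the
    # fusion weights adapt (here: fixed logits stand in for learned ones).
    w = softmax(np.asarray(logits, dtype=float))
    return sum(wi * b for wi, b in zip(w, branches))

rng = np.random.default_rng(0)
feat = rng.standard_normal((8, 16, 16))      # toy shallow feature map
branches = [channel_attention(feat), spatial_attention(feat), self_attention(feat)]
fused = dynamic_fusion(branches, logits=[0.5, 0.2, 0.3])
print(fused.shape)  # (8, 16, 16)
```

In the paper the fusion weights and attention projections would be learned, and the self-attention would be multi-head and window-based; the sketch only shows the branch-and-fuse structure.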
Pages: 11