A Global Self-Attention Memristive Neural Network for Image Restoration

Cited by: 0
Authors
Zhang, Wenhao [1]
Xiao, He [2]
Xie, Dirui [3]
Zhou, Yue
Duan, Shukai
Hu, Xiaofang [1]
Affiliations
[1] Southwest Univ, Coll Artificial Intelligence, Chongqing 400715, Peoples R China
[2] Southwest Univ, Natl & Local Joint Engn Res Ctr Intelligent Trans, Chongqing 400715, Peoples R China
[3] Southwest Univ, Minist Educ, Key Lab Luminescence Anal & Mol Sensing, Chongqing 400715, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image restoration; Feature extraction; Neural networks; Memristors; Computational modeling; Transformers; Hardware; Attention mechanisms; convolutional neural networks; image restoration; memristive circuit; non-local network; CIRCUIT IMPLEMENTATION; QUALITY ASSESSMENT; ENHANCEMENT; ALGORITHM;
DOI
10.1109/TETCI.2024.3369447
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Recently, drawing on the idea of non-local operations, various non-local networks and the Vision Transformer have been proposed to model long-range pixel dependencies, addressing a key limitation of convolutional neural networks (CNNs). However, most of these models cannot adaptively process images of different resolutions, and their large parameter counts and high computational complexity make them unsuitable for edge devices. In this paper, we propose an efficient Global Self-Attention Memristive Neural Network (GSA-MNN) for image restoration and present a memristive circuit implementation scheme for it. GSA-MNN extracts both global and local information from images and can be flexibly applied to images of different resolutions. Specifically, the Global Spatial Attention Module (GSAM) and the Global Channel Attention Module (GCAM) are designed to model and infer global relations: GSAM models global spatial relations between the pixels of the feature maps, while GCAM explores global relations across channels. Moreover, a multi-scale local information extraction module is proposed to handle image regions with complex textures. Furthermore, we provide a modularly designed circuit implementation scheme for these three modules and for the entire GSA-MNN. Benefiting from the programmability of memristor crossbars, three image restoration tasks, namely image deraining, low-light image enhancement, and image dehazing, are realized on the same circuit framework by adjusting the configuration parameters. Experimental comparisons with over 20 state-of-the-art methods on 10 public datasets demonstrate the superiority of the proposed GSA-MNN.
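
The abstract names two global attention modules (GSAM and GCAM) but does not give their internal equations. As a purely hypothetical illustration of what global spatial attention over pixels and global channel attention across channels typically look like, here is a minimal PyTorch sketch; the class names and every design detail below are assumptions for exposition, not the authors' implementation or its memristive circuit mapping.

# Hypothetical sketch only: generic global spatial / channel attention blocks,
# not the GSAM/GCAM definitions from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalSpatialAttention(nn.Module):
    # Attends over all pixel positions of a feature map (GSAM-like role).
    def __init__(self, channels, reduced=None):
        super().__init__()
        reduced = reduced or max(channels // 2, 1)
        self.q = nn.Conv2d(channels, reduced, 1)
        self.k = nn.Conv2d(channels, reduced, 1)
        self.v = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2).transpose(1, 2)               # (b, hw, c')
        k = self.k(x).flatten(2)                                # (b, c', hw)
        v = self.v(x).flatten(2).transpose(1, 2)                # (b, hw, c)
        attn = F.softmax(q @ k / k.shape[1] ** 0.5, dim=-1)     # (b, hw, hw)
        out = (attn @ v).transpose(1, 2).reshape(b, c, h, w)
        return x + out                                          # residual connection

class GlobalChannelAttention(nn.Module):
    # Attends across channels (GCAM-like role); the attention map is c x c,
    # independent of the spatial resolution of the input.
    def __init__(self, channels):
        super().__init__()
        self.q = nn.Conv2d(channels, channels, 1)
        self.k = nn.Conv2d(channels, channels, 1)
        self.v = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.q(x).flatten(2)                                # (b, c, hw)
        k = self.k(x).flatten(2).transpose(1, 2)                # (b, hw, c)
        v = self.v(x).flatten(2)                                # (b, c, hw)
        attn = F.softmax(q @ k / q.shape[-1] ** 0.5, dim=-1)    # (b, c, c)
        return x + (attn @ v).reshape(b, c, h, w)

# Both modules accept feature maps of arbitrary spatial size.
x = torch.randn(1, 32, 48, 64)
y = GlobalChannelAttention(32)(GlobalSpatialAttention(32)(x))

Because the channel-attention map is c x c regardless of image height and width, such designs remain applicable to inputs of different resolutions, which is consistent with the flexibility the abstract claims for GSA-MNN.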
Pages: 2613-2624
Number of pages: 12
Related Papers
50 records in total
  • [1] Encoder-decoder Network with Self-attention Module for Image Restoration
    Jin, Qing
    Yu, Qi
    Liu, Jiying
    Tan, Xintong
    [J]. THIRTEENTH INTERNATIONAL CONFERENCE ON GRAPHICS AND IMAGE PROCESSING (ICGIP 2021), 2022, 12083
  • [2] Image Classification based on Self-attention Convolutional Neural Network
    Cai, Xiaohong
    Li, Ming
    Cao, Hui
    Ma, Jingang
    Wang, Xiaoyan
    Zhuang, Xuqiang
    [J]. SIXTH INTERNATIONAL WORKSHOP ON PATTERN RECOGNITION, 2021, 11913
  • [3] Self-Attention Based Network for Punctuation Restoration
    Wang, Feng
    Chen, Wei
    Yang, Zhen
    Xu, Bo
    [J]. 2018 24TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2018, : 2803 - 2808
  • [4] Global Convolutional Neural Networks With Self-Attention for Fisheye Image Rectification
    Kim, Byunghyun
    Lee, Dohyun
    Min, Kyeongyuk
    Chong, Jongwha
    Joe, Inwhee
    [J]. IEEE ACCESS, 2022, 10 : 129580 - 129587
  • [5] Self-attention convolutional neural network for improved MR image reconstruction
    Wu, Yan
    Ma, Yajun
    Liu, Jing
    Du, Jiang
    Xing, Lei
    [J]. INFORMATION SCIENCES, 2019, 490 : 317 - 328
  • [6] PSNet: Towards Efficient Image Restoration With Self-Attention
    Cui, Yuning
    Knoll, Alois
    [J]. IEEE ROBOTICS AND AUTOMATION LETTERS, 2023, 8 (09) : 5735 - 5742
  • [7] Multi-scale self-attention generative adversarial network for pathology image restoration
    Liang, Meiyan
    Zhang, Qiannan
    Wang, Guogang
    Xu, Na
    Wang, Lin
    Liu, Haishun
    Zhang, Cunlin
    [J]. VISUAL COMPUTER, 2023, 39 (09): : 4305 - 4321
  • [8] Decomformer: Decompose Self-Attention of Transformer for Efficient Image Restoration
    Lee, Eunho
    Hwang, Youngbae
    [J]. IEEE ACCESS, 2024, 12 : 38672 - 38684
  • [9] Memristive continuous Hopfield neural network circuit for image restoration
    Hong, Qinghui
    Li, Ya
    Wang, Xiaoping
    [J]. NEURAL COMPUTING & APPLICATIONS, 2020, 32 (12): : 8175 - 8185