Multi-Type Self-Attention Guided Degraded Saliency Detection

Cited by: 0
Authors
Zhou, Ziqi [1]
Wang, Zheng [1]
Lu, Huchuan [2,4]
Wang, Song [1,3]
Sun, Meijun [1]
Affiliations
[1] Tianjin Univ, Coll Intelligence & Comp, Tianjin, Peoples R China
[2] Dalian Univ Technol, Sch Informat & Commun Engn, Dalian, Peoples R China
[3] Univ South Carolina, Dept Comp Sci & Engn, Columbia, SC 29208 USA
[4] Peng Cheng Lab, Shenzhen, Peoples R China
Keywords
OBJECT DETECTION
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Existing saliency detection techniques are sensitive to image quality and perform poorly on degraded images. In this paper, we systematically analyze the current status of the research on detecting salient objects from degraded images and then propose a new multi-type self-attention network, namely MSANet, for degraded saliency detection. The main contributions include: 1) Applying attention transfer learning to promote semantic detail perception and internal feature mining of the target network on degraded images; 2) Developing a multi-type self-attention mechanism to achieve the weight recalculation of multi-scale features. By computing global and local attention scores, we obtain the weighted features of different scales, effectively suppress the interference of noise and redundant information, and achieve a more complete boundary extraction. The proposed MSANet converts low-quality inputs to high-quality saliency maps directly in an end-to-end fashion. Experiments on seven widely-used datasets show that our approach produces good performance on both clear and degraded images.
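The abstract describes recalculating the weights of multi-scale features by combining global and local attention scores, but the record gives no equations. The toy NumPy sketch below illustrates that general idea on 1-D feature maps: each scale is refined by full (global) and windowed (local) dot-product self-attention, then the scales are fused. All function names, the equal-weight fusion, and the nearest-neighbour upsampling are illustrative assumptions, not the actual MSANet design.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_attention(feat):
    """Dot-product self-attention over all N positions: every position
    attends to every other one (captures object-level context)."""
    n, c = feat.shape
    scores = feat @ feat.T / np.sqrt(c)      # (N, N) pairwise affinities
    return softmax(scores, axis=-1) @ feat   # globally reweighted features

def local_attention(feat, window=3):
    """The same attention restricted to a sliding neighbourhood, which
    emphasises nearby structure and suppresses far-away noise."""
    n, c = feat.shape
    out = np.zeros_like(feat)
    for i in range(n):
        lo, hi = max(0, i - window), min(n, i + window + 1)
        neigh = feat[lo:hi]                       # (k, C) local window
        scores = neigh @ feat[i] / np.sqrt(c)     # (k,) affinities to i
        out[i] = softmax(scores) @ neigh          # locally reweighted row
    return out

def reweight_multiscale(features):
    """Fuse global and local attention per scale, then average the scales
    after nearest-neighbour upsampling to the finest resolution."""
    n_max = max(f.shape[0] for f in features)
    fused = []
    for f in features:
        refined = 0.5 * (global_attention(f) + local_attention(f))
        idx = np.arange(n_max) * f.shape[0] // n_max  # upsample positions
        fused.append(refined[idx])
    return np.mean(fused, axis=0)

# Toy multi-scale features: 1-D "images" at two resolutions, 4 channels.
rng = np.random.default_rng(0)
feats = [rng.standard_normal((8, 4)), rng.standard_normal((4, 4))]
out = reweight_multiscale(feats)
print(out.shape)  # prints (8, 4)
```

In the paper the inputs would be 2-D CNN feature maps and the fusion learned end-to-end; the sketch only shows how attention scores turn into per-position feature weights across scales.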
Pages: 13082-13089
Page count: 8