AgeDETR: Attention-Guided Efficient DETR for Space Target Detection

Cited by: 0
|
Authors
Wang, Xiaojuan [1 ,2 ]
Xi, Bobo [1 ,3 ]
Xu, Haitao [1 ]
Zheng, Tie [1 ]
Xue, Changbin [1 ]
Affiliations
[1] Chinese Acad Sci, Natl Space Sci Ctr, Beijing 100190, Peoples R China
[2] Univ Chinese Acad Sci, Beijing 100049, Peoples R China
[3] Xidian Univ, Sch Telecommun Engn, Xian 710071, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation
Keywords
space target detection; attention-guided feature enhancement; attention-guided feature fusion;
DOI
10.3390/rs16183452
CLC Number
X [Environmental Science, Safety Science]
Subject Classification Code
08 ; 0830 ;
Abstract
Recent advancements in space exploration technology have significantly increased the number and diversity of satellites in orbit. This surge in space-related information poses considerable challenges for space target surveillance and situational awareness systems, and existing detection algorithms struggle with complex space backgrounds, varying illumination conditions, and diverse target sizes. Building on recent advances in artificial intelligence, we propose an end-to-end Attention-Guided Encoder DETR (AgeDETR) model to address these challenges. Specifically, AgeDETR integrates an Efficient Multi-Scale Attention (EMA) Enhanced FasterNet block (EF-Block) into a ResNet18 backbone (EF-ResNet18), which improves feature extraction and computational efficiency and provides a robust foundation for accurately identifying space targets. Additionally, we introduce the Attention-Guided Feature Enhancement (AGFE) module, which leverages self-attention and channel attention mechanisms to extract and reinforce salient target features. Furthermore, the Attention-Guided Feature Fusion (AGFF) module optimizes multi-scale feature integration and produces highly expressive feature representations, which significantly improves recognition accuracy. The proposed AgeDETR framework achieves 97.9% mAP0.5 and 85.2% mAP0.5:0.95 on the SPARK2022 dataset, outperforming existing detectors and demonstrating superior performance in space target detection.
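The abstract describes the EF-Block only at a high level (a FasterNet-style block augmented with EMA attention inside a ResNet18 backbone). As a rough, hedged illustration of that idea, the minimal PyTorch sketch below combines a partial convolution with a squeeze-and-excitation style channel gate standing in for the EMA attention; the class names PartialConv and EFBlockSketch, the channel-split ratio, and the gate design are illustrative assumptions, not the authors' implementation.

# Hypothetical, simplified sketch of an "EF-Block": a FasterNet-style partial
# convolution followed by a lightweight channel-attention gate standing in for
# the paper's EMA attention. Names and hyperparameters are assumptions made
# for illustration, not the published AgeDETR code.
import torch
import torch.nn as nn

class PartialConv(nn.Module):
    """Apply a 3x3 conv to the first dim // ratio channels; pass the rest through."""
    def __init__(self, dim: int, ratio: int = 4):
        super().__init__()
        self.conv_dim = dim // ratio
        self.conv = nn.Conv2d(self.conv_dim, self.conv_dim, 3, padding=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x1, x2 = torch.split(x, [self.conv_dim, x.size(1) - self.conv_dim], dim=1)
        return torch.cat([self.conv(x1), x2], dim=1)

class EFBlockSketch(nn.Module):
    """Partial conv -> pointwise expansion/projection -> channel gate, with a residual."""
    def __init__(self, dim: int, expansion: int = 2):
        super().__init__()
        hidden = dim * expansion
        self.pconv = PartialConv(dim)
        self.mlp = nn.Sequential(
            nn.Conv2d(dim, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, dim, 1, bias=False),
        )
        # Squeeze-and-excitation style gate used here as a stand-in for EMA attention.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(dim, dim // 4, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(dim // 4, dim, 1),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.mlp(self.pconv(x))
        return x + y * self.attn(y)

if __name__ == "__main__":
    feat = torch.randn(1, 64, 80, 80)       # e.g. a mid-level backbone feature map
    print(EFBlockSketch(64)(feat).shape)     # torch.Size([1, 64, 80, 80])

The residual form keeps the block drop-in compatible with a standard ResNet18 stage, which is presumably how such a block would replace the original basic blocks in an EF-ResNet18 backbone.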
Pages: 21