Recognition of Micro-Motion Space Targets Based on Attention-Augmented Cross-Modal Feature Fusion Recognition Network

Cited: 11
Authors
Tian, Xudong [1 ]
Bai, Xueru [2 ]
Zhou, Feng [1 ]
Affiliations
[1] Xidian Univ, Key Lab Elect Informat Countermeasure & Simulat Te, Minist Educ, Xian 710071, Peoples R China
[2] Xidian Univ, Natl Key Lab Radar Signal Proc, Xian 710071, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Attention-augmented cross-modal feature fusion recognition (ACM-FR); convolutional neural network (CNN); feature fusion; inverse synthetic aperture radar (ISAR); micro-motion space target; SIGNATURE EXTRACTION; PARAMETER-ESTIMATION; DOPPLER;
DOI
10.1109/TGRS.2023.3275991
Chinese Library Classification
P3 [Geophysics]; P59 [Geochemistry]
Discipline Codes
0708; 070902
Abstract
Narrowband and wideband waveforms are usually adopted simultaneously during inverse synthetic aperture radar (ISAR) observation of micro-motion space targets, which collects rich multimodal information in the time-Doppler, time-range, and range-instantaneous-Doppler (RID) domains. To exploit the electromagnetic scattering, shape, structure, and motion characteristics, this article proposes an attention-augmented cross-modal feature fusion recognition network (ACM-FR Net). First, the ACM-FR Net adopts a convolutional neural network (CNN) to extract initial feature vectors from the joint time-frequency (JTF) image, the high-resolution range profiles (HRRPs), and the RID image. Then, it transforms the feature vectors of the three modalities into feature sequences. Finally, it achieves interactive feature fusion by applying ACM feature fusion. In four-category micro-motion space target recognition experiments, the proposed ACM-FR Net demonstrated high accuracy and noise robustness.
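The fusion step described in the abstract can be illustrated with a minimal sketch. This is not the authors' ACM-FR implementation; all dimensions, weight shapes, and the pooling/fusion choices below are hypothetical stand-ins for the idea of one modality's feature sequence attending to the other modalities via scaled dot-product cross-attention.

```python
# Illustrative sketch (NOT the paper's implementation): cross-modal
# attention fusion of feature sequences from three radar modalities.
# Dimensions, weights, and pooling here are assumptions for demonstration.
import numpy as np

rng = np.random.default_rng(0)
d = 64          # feature dimension per token (assumed)
seq_len = 16    # tokens per modality after reshaping CNN features (assumed)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(query_seq, context_seq, Wq, Wk, Wv):
    """query_seq attends to context_seq via scaled dot-product attention."""
    Q, K, V = query_seq @ Wq, context_seq @ Wk, context_seq @ Wv
    scores = softmax(Q @ K.T / np.sqrt(Q.shape[-1]))
    return scores @ V  # (seq_len, d)

# Toy feature sequences standing in for CNN outputs on the
# JTF image, HRRPs, and RID image.
jtf  = rng.standard_normal((seq_len, d))
hrrp = rng.standard_normal((seq_len, d))
rid  = rng.standard_normal((seq_len, d))

# Shared projection matrices (a real model would learn these).
Wq, Wk, Wv = (rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(3))

# Each modality queries the concatenated sequences of the other two;
# the attended outputs are mean-pooled and concatenated for a classifier.
fused = np.concatenate([
    cross_attention(m, np.concatenate(others), Wq, Wk, Wv).mean(axis=0)
    for m, others in [(jtf, (hrrp, rid)), (hrrp, (jtf, rid)), (rid, (jtf, hrrp))]
])
print(fused.shape)  # (192,): a 3*d fused vector for the classifier head
```

A trained network would learn the projections and typically use multi-head attention with residual connections; the sketch only shows the interaction pattern between modalities.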
Pages: 9
Related Papers
50 records in total
  • [1] Fusion Recognition of Space Targets with Micro-Motion Based on a Sparse Auto-Encoder
    Tian, X.
    Bai, X.
    Zhou, F.
    Dianzi Yu Xinxi Xuebao/Journal of Electronics and Information Technology, 2023, 45 (12): 4336-4344
  • [2] Feature Extraction and Target Recognition of Missile Targets based on Micro-motion
    Lei, Peng
    Li, Kang-le
    Liu, Yong-xiang
    PROCEEDINGS OF 2012 IEEE 11TH INTERNATIONAL CONFERENCE ON SIGNAL PROCESSING (ICSP) VOLS 1-3, 2012: 1914-1919
  • [3] A cross-modal fusion network based on graph feature learning for multimodal emotion recognition
    Cao, Xiaopeng
    Zhang, Linying
    Chen, Qiuxian
    Ning, Hailong
    Dong, Yizhuo
    The Journal of China Universities of Posts and Telecommunications, 2024, 31 (06): 16-25
  • [4] Cross-modal attention and letter recognition
    Wesner, Michael
    Miller, Lisa
    INTERNATIONAL JOURNAL OF PSYCHOLOGY, 2008, 43 (3-4): 343
  • [5] Multimodal Fusion with Cross-Modal Attention for Action Recognition in Still Images
    Tsai, Jia-Hua
    Chu, Wei-Ta
    PROCEEDINGS OF THE 4TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA IN ASIA, MMASIA 2022, 2022
  • [6] Discriminative attention-augmented feature learning for facial expression recognition in the wild
    Zhou, Linyi
    Fan, Xijian
    Tjahjadi, Tardi
    Das Choudhury, Sruti
    NEURAL COMPUTING & APPLICATIONS, 2022, 34 (02): 925-936
  • [7] Cross-Modal Recognition Algorithm of Electromagnetic Targets via Siamese Network
    Zhang, W.
    Wang, S.-F.
    Lin, J.-R.
    Li, Q.
    Shao, H.-Z.
    Tien Tzu Hsueh Pao/Acta Electronica Sinica, 2022, 50 (06): 1281-1290
  • [8] CMFN: Cross-Modal Fusion Network for Irregular Scene Text Recognition
    Zheng, Jinzhi
    Ji, Ruyi
    Zhang, Libo
    Wu, Yanjun
    Zhao, Chen
    NEURAL INFORMATION PROCESSING, ICONIP 2023, PT VI, 2024, 14452: 421-433
  • [9] MemoCMT: multimodal emotion recognition using cross-modal transformer-based feature fusion
    Khan, Mustaqeem
    Tran, Phuong-Nam
    Pham, Nhat Truong
    El Saddik, Abdulmotaleb
    Othmani, Alice
    Scientific Reports, 15 (1)