Few-label aerial target intention recognition based on self-supervised contrastive learning

Cited by: 0
Authors
Song, Zihao [1 ]
Zhou, Yan [1 ]
Cai, Yichao [1 ]
Cheng, Wei [1 ]
Wu, Changfei [1 ]
Yin, Jianguo [1 ]
Affiliations
[1] Early Warning Acad, Wuhan, Peoples R China
Source
IET RADAR SONAR AND NAVIGATION | 2025, Vol. 19, No. 1
Keywords
air safety; data analysis; decision making; neural nets; recurrent neural nets
DOI
10.1049/rsn2.12695
Chinese Library Classification
TM [Electrical engineering]; TN [Electronics and communication technology]
Discipline codes
0808; 0809
Abstract
Identifying the intentions of aerial targets is crucial for air situation understanding and decision making. Deep learning, with its powerful feature learning and representation capability, has become a key means of achieving higher performance in aerial target intention recognition (ATIR). However, conventional supervised deep learning methods rely on abundant labelled samples for training, which are difficult to obtain quickly in practical scenarios, posing a significant challenge to training effective deep learning models. To address this issue, this paper proposes a novel few-label ATIR method based on deep contrastive learning, which combines the advantages of self-supervised and semi-supervised learning. Specifically, leveraging unlabelled samples, we first employ strong and weak data augmentation views and a temporal contrasting module to capture temporally relevant features, while a contextual contrasting module is used to learn discriminative representations. Subsequently, the network is fine-tuned with a limited set of labelled samples to further refine the learnt representations. Experimental results on an ATIR dataset demonstrate that our method significantly outperforms other few-label classification baselines in recognition accuracy and Macro F1 score when the proportion of labelled samples is as low as 1% and 5%.
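The pretraining stage summarised in the abstract (a strong and a weak augmentation view of each unlabelled sequence, plus a contextual contrastive objective) can be sketched minimally. The snippet below is an illustrative assumption, not the authors' implementation: `weak_augment`, `strong_augment`, and the NT-Xent-style `nt_xent_loss` are hypothetical stand-ins for the augmentations and the contextual contrasting loss, written in plain numpy for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

def weak_augment(x, scale_sigma=0.1):
    # Weak view (assumed): scale the whole sequence by a factor close to 1.
    return x * rng.normal(1.0, scale_sigma)

def strong_augment(x, jitter_sigma=0.5, n_segments=4):
    # Strong view (assumed): permute time segments, then add Gaussian jitter.
    segs = np.array_split(x, n_segments)
    permuted = np.concatenate([segs[i] for i in rng.permutation(n_segments)])
    return permuted + rng.normal(0.0, jitter_sigma, size=x.shape)

def nt_xent_loss(z1, z2, temperature=0.2):
    # NT-Xent-style contextual contrastive loss: z1[i] and z2[i] are the
    # context vectors of the two views of sample i and form the positive pair;
    # all other vectors in the 2N batch act as negatives.
    z = np.concatenate([z1, z2], axis=0)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalise
    sim = z @ z.T / temperature                        # cosine similarities
    n = len(z1)
    np.fill_diagonal(sim, -np.inf)                     # exclude self-pairs
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim[np.arange(2 * n), pos] - np.log(np.exp(sim).sum(axis=1))
    return -log_prob.mean()
```

Encoding both views with a shared network and minimising `nt_xent_loss` on the resulting context vectors pulls the two views of the same sample together while pushing apart different samples; the few labelled samples are then used only for fine-tuning.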
Pages: 17