Specific Emitter Identification Model Based on Improved BYOL Self-Supervised Learning

Cited by: 4
Authors
Zhao, Dongxing [1 ]
Yang, Junan [1 ]
Liu, Hui [1 ]
Huang, Keju [1 ]
Affiliations
[1] Natl Univ Def Technol, Coll Elect Engn, Hefei 230000, Peoples R China
Keywords
specific emitter identification; self-supervised learning; small samples; deep learning; signal processing; representation; classification
DOI
10.3390/electronics11213485
CLC number (Chinese Library Classification)
TP [Automation technology, computer technology]
Subject classification code
0812
Abstract
Specific emitter identification (SEI) extracts features from received radio signals to determine which individual emitter generated them. Although deep learning-based methods have been applied to SEI effectively, their performance declines dramatically when the number of labeled training samples is small and in the presence of significant noise. To address this issue, we propose an improved Bootstrap Your Own Latent (BYOL) self-supervised learning scheme that fully exploits unlabeled samples; it comprises a pretext task that adopts the contrastive learning concept and a downstream task. For the pretext task, we designed three data augmentation methods optimized for communication signals to serve the contrastive concept. We built two neural networks, an online network and a target network, which interact and learn from each other. The proposed scheme generalizes across both small-sample and sufficient-sample cases, with the number of labeled samples per group ranging from 10 to 400. The experiments also show promising accuracy and robustness, with recognition accuracy improving by 3-8% at signal-to-noise ratios (SNRs) from 3 to 7. Our scheme can accurately identify individual emitters in a complicated electromagnetic environment.
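The abstract describes a BYOL-style pretext task: two augmented views of each unlabeled signal are fed to an online network and a target network, the online predictor regresses onto the target projection, and the target network is updated as a slow-moving average of the online weights. The sketch below illustrates that general idea in PyTorch; the encoder architecture, the placeholder augmentation, and all hyperparameters are illustrative assumptions, not the paper's actual implementation (the paper's three signal-specific augmentations are not detailed in this record).

```python
# Minimal BYOL-style pretext-task sketch for I/Q signal segments (assumed shape: batch x 2 x length).
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy 1-D CNN encoder for two-channel (I/Q) signal segments."""
    def __init__(self, out_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(2, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(32, out_dim),
        )
    def forward(self, x):
        return self.net(x)

def mlp(in_dim, out_dim, hidden=256):
    # Projector / predictor head used by both networks.
    return nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU(), nn.Linear(hidden, out_dim))

def augment(x):
    # Placeholder augmentation (additive Gaussian noise); the paper's three
    # signal-specific augmentations are not specified in this abstract.
    return x + 0.05 * torch.randn_like(x)

def byol_loss(p, z):
    # Negative cosine similarity between online prediction p and target projection z.
    p, z = F.normalize(p, dim=-1), F.normalize(z, dim=-1)
    return 2 - 2 * (p * z).sum(dim=-1).mean()

# Online network: encoder + projector + predictor; target network: EMA copy without a predictor.
online_enc, online_proj, predictor = Encoder(), mlp(128, 64), mlp(64, 64)
target_enc, target_proj = copy.deepcopy(online_enc), copy.deepcopy(online_proj)
for p_ in list(target_enc.parameters()) + list(target_proj.parameters()):
    p_.requires_grad = False

opt = torch.optim.Adam(
    list(online_enc.parameters()) + list(online_proj.parameters()) + list(predictor.parameters()),
    lr=3e-4,
)

def train_step(batch, tau=0.99):
    v1, v2 = augment(batch), augment(batch)          # two views of each unlabeled signal
    p1 = predictor(online_proj(online_enc(v1)))
    p2 = predictor(online_proj(online_enc(v2)))
    with torch.no_grad():                            # target network provides regression targets
        z1 = target_proj(target_enc(v1))
        z2 = target_proj(target_enc(v2))
    loss = byol_loss(p1, z2) + byol_loss(p2, z1)     # symmetrized loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():                            # EMA update of the target network
        for po, pt in zip(online_enc.parameters(), target_enc.parameters()):
            pt.mul_(tau).add_((1 - tau) * po)
        for po, pt in zip(online_proj.parameters(), target_proj.parameters()):
            pt.mul_(tau).add_((1 - tau) * po)
    return loss.item()

# Example: one pretext-task step on a batch of 8 random I/Q segments of length 1024.
print(train_step(torch.randn(8, 2, 1024)))
```

After such pretraining, the online encoder would typically be frozen or fine-tuned with the small labeled set in the downstream emitter-classification task.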
Pages: 14
Related papers
50 records in total
  • [1] Network Intrusion Detection Model Based on Improved BYOL Self-Supervised Learning
    Wang, Zhendong
    Li, Zeyu
    Wang, Junling
    Li, Dahai
    [J]. SECURITY AND COMMUNICATION NETWORKS, 2021, 2021
  • [2] Specific Emitter Identification Based on Self-Supervised Contrast Learning
    Liu, Bo
    Yu, Hongyi
    Du, Jianping
    Wu, You
    Li, Yongbin
    Zhu, Zhaorui
    Wang, Zhenyu
    [J]. ELECTRONICS, 2022, 11 (18)
  • [3] Contrastive Self-Supervised Clustering for Specific Emitter Identification
    Hao, Xiaoyang
    Feng, Zhixi
    Liu, Ruoyu
    Yang, Shuyuan
    Jiao, Licheng
    Luo, Rong
    [J]. IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (23) : 20803 - 20818
  • [4] A Complex-Valued Self-Supervised Learning-Based Method for Specific Emitter Identification
    Zhao, Dongxing
    Yang, Junan
    Liu, Hui
    Huang, Keju
    [J]. ENTROPY, 2022, 24 (07)
  • [5] TRIBYOL: TRIPLET BYOL FOR SELF-SUPERVISED REPRESENTATION LEARNING
    Li, Guang
    Togo, Ren
    Ogawa, Takahiro
    Haseyama, Miki
    [J]. 2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 3458 - 3462
  • [6] Self-Supervised Clustering Models Based on BYOL Network Structure
    Chen, Xuehao
    Zhou, Jin
    Chen, Yuehui
    Han, Shiyuan
    Wang, Yingxu
    Du, Tao
    Yang, Cheng
    Liu, Bowen
    [J]. ELECTRONICS, 2023, 12 (23)
  • [7] BYOL-S: Learning Self-supervised Speech Representations by Bootstrapping
    Elbanna, Gasser
    Scheidwasser-Clow, Neil
    Kegler, Mikolaj
    Beckmann, Pierre
    El Hajal, Karl
    Cernak, Milos
    [J]. HEAR: HOLISTIC EVALUATION OF AUDIO REPRESENTATIONS, VOL 166, 2021, 166 : 25 - 47
  • [8] SBIR-BYOL: a self-supervised sketch-based image retrieval model
    Saavedra, Jose M.
    Morales, Javier
    Murrugarra-Llerena, Nils
    [J]. NEURAL COMPUTING & APPLICATIONS, 2023, 35 (07): 5395 - 5408
  • [9] BYOL for Audio: Self-Supervised Learning for General-Purpose Audio Representation
    Niizumi, Daisuke
    Takeuchi, Daiki
    Ohishi, Yasunori
    Harada, Noboru
    Kashino, Kunio
    [J]. 2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,