Model- and Deep Learning-Based Bandwidth and Carrier Frequency Allocation in Distributed Radar Networks

Cited by: 1
Authors
Chalise, Batu K. [1 ]
Martone, Anthony F. [2 ]
Kirk, Benjamin H. [2 ]
Affiliations
[1] New York Inst Technol, Dept Elect & Comp Engn, Old Westbury, NY 11568 USA
[2] DEVCOM Army Res Lab, Adelphi, MD 20783 USA
Keywords
Radar; Bandwidth; Radio spectrum management; Signal to noise ratio; Interference; Radar detection; Radar tracking; Bandwidth and carrier frequency allocation; bidirectional long short-term memory (LSTM); distributed radar; geometric programming (GP); semidefinite programming (SDP); successive convex approximation (SCA); RESOURCE-ALLOCATION; POWER ALLOCATION; CONSENSUS; SUBCARRIER; SYSTEMS;
DOI
10.1109/TAES.2023.3301827
Chinese Library Classification
V [Aeronautics, Astronautics];
Discipline Code
08 ; 0825 ;
Abstract
Optimum allocation of bandwidth and carrier frequency in a network of distributed radar nodes is an important and nontrivial research problem. In this paper, we propose both model- and deep learning-based joint bandwidth and carrier frequency allocation algorithms for a network consisting of a central coordinator and distributed radar nodes, each operating in a monostatic mode. With the objective of enabling poorly performing radar nodes, i.e., those observing low target signal-to-interference-plus-noise ratio (SINR) values, to benefit from distributed collaboration, we propose a model-based max-min approach, in which we maximize the minimum of the SINRs observed by all nodes under a total bandwidth constraint and each node's range resolution (RR) constraint. This optimization is nonconvex, but we solve it efficiently by exploiting an explicit relationship between bandwidth and carrier frequency, together with the fact that each node's SINR is a monotonically decreasing function of the bandwidth and carrier frequency allocated to that node. We propose two iterative optimization methods that employ successive convex approximation with (a) semidefinite programming (SDP) and (b) geometric programming (GP) problem formulations. Computer simulations show that, under different RR requirements, the proposed methods significantly outperform the equal bandwidth allocation (EBWA) method and enable poorly performing nodes to substantially enhance their individual SINRs. The solutions of this model-based optimization and the corresponding target locations are then used, respectively, as labels and inputs to train a bidirectional long short-term memory (LSTM) network. The trained network significantly reduces the online run-time complexity of bandwidth and carrier frequency allocation in distributed radar networks.
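The paper's full SDP/GP formulations are not reproduced in this record. As a minimal illustration only, the sketch below solves a max-min bandwidth allocation under an assumed toy model (not the paper's): SINR_i(B_i) = P_i / (N0 · B_i), which is monotonically decreasing in the allocated bandwidth B_i, the property the abstract exploits. The RR constraint is mapped to a minimum bandwidth via RR = c / (2B). All powers and constants are hypothetical. The balancing SINR level is found by bisection, a water-filling-style approach rather than the paper's SCA-based methods.

```python
# Hedged sketch: max-min SINR bandwidth allocation for a TOY model.
# Assumed (not from the paper): SINR_i(B_i) = P_i / (N0 * B_i), so SINR
# falls as B_i grows (wider noise bandwidth). Range resolution RR = c/(2B)
# gives a per-node minimum bandwidth B_i >= c / (2 * RR_max_i).

C_LIGHT = 3e8  # speed of light, m/s

def max_min_allocation(p, n0, rr_max, b_total, iters=100):
    """Bisection on the common SINR level t; nodes that cannot reach t
    because of their RR-imposed minimum bandwidth stay at that minimum."""
    b_min = [C_LIGHT / (2.0 * r) for r in rr_max]   # RR -> minimum bandwidth
    assert sum(b_min) <= b_total, "infeasible RR constraints"
    # Total demand sum_i max(b_min_i, P_i/(N0*t)) decreases in t, so the
    # level t* with demand == b_total is found by geometric bisection.
    lo, hi = 1e-12, 1e12
    for _ in range(iters):
        t = (lo * hi) ** 0.5
        demand = sum(max(bm, pi / (n0 * t)) for bm, pi in zip(b_min, p))
        if demand > b_total:
            lo = t          # t too small: nodes demand too much bandwidth
        else:
            hi = t
    t = (lo * hi) ** 0.5
    return [max(bm, pi / (n0 * t)) for bm, pi in zip(b_min, p)]

p = [1.0, 4.0, 2.0]         # hypothetical received target powers per node
n0 = 1e-9                   # hypothetical noise power spectral density
rr_max = [3.0, 3.0, 3.0]    # 3 m resolution -> at least 50 MHz per node
b_total = 400e6             # 400 MHz of shared spectrum to partition
b = max_min_allocation(p, n0, rr_max, b_total)
sinr = [pi / (n0 * bi) for pi, bi in zip(p, b)]
print([round(bi / 1e6, 2) for bi in b])   # allocated bandwidths in MHz
print(min(sinr))                          # the maximized minimum SINR
```

Under this toy model the excess spectrum beyond the RR-imposed minima lands on the stronger nodes, equalizing the SINRs wherever the minimum-bandwidth constraints do not bind; the paper's actual algorithms additionally optimize carrier frequencies and use SCA with SDP/GP subproblems.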
Pages: 8022-8036
Page count: 15
Related Papers
(50 total)
  • [1] Harmonic Mean SINR Maximization-based Bandwidth and Carrier Frequency Allocation for Distributed Radar Networks
    Chalise, Batu K.
    Martone, Anthony F.
    Kirk, Benjamin H.
    2023 IEEE RADAR CONFERENCE, RADARCONF23, 2023,
  • [2] Deep Learning-Based Dynamic Bandwidth Allocation for Future Optical Access Networks
    Hatem, John Abied
    Dhaini, Ahmad R.
    Elbassuoni, Shady
    IEEE ACCESS, 2019, 7 : 97307 - 97318
  • [3] Distributed Deep Reinforcement Learning-Based Spectrum and Power Allocation for Heterogeneous Networks
    Yang, Helin
    Zhao, Jun
    Lam, Kwok-Yan
    Xiong, Zehui
    Wu, Qingqing
    Xiao, Liang
    IEEE TRANSACTIONS ON WIRELESS COMMUNICATIONS, 2022, 21 (09) : 6935 - 6948
  • [4] Learning-Based Distributed Resource Allocation in Asynchronous Multicell Networks
    Jang, Jonggyu
    Yang, Hyun Jong
    Kim, Sunghyun
    2018 INTERNATIONAL CONFERENCE ON INFORMATION AND COMMUNICATION TECHNOLOGY CONVERGENCE (ICTC), 2018, : 910 - 913
  • [5] Deep Reinforcement Learning-based Power Control and Bandwidth Allocation Policy for Weighted Cost Minimization in Wireless Networks
    Ke, Hongchang
    Wang, Hui
    Sun, Hongbin
    APPLIED INTELLIGENCE, 2023, 53 (22) : 26885 - 26906
  • [7] Distributed deep learning-based signal classification for time-frequency synchronization in wireless networks
    Zhang, Qin
    Guan, Yutong
    Li, Hai
    Xiong, Kanghua
    Song, Zhengyu
    COMPUTER COMMUNICATIONS, 2023, 201 : 37 - 47
  • [8] Distributed Deep Learning-Based Model for Financial Fraud Detection in Supply Chain Networks
    Tamym, Lahcen
    Benyoucef, Lyes
    PROCEEDINGS OF NINTH INTERNATIONAL CONGRESS ON INFORMATION AND COMMUNICATION TECHNOLOGY, ICICT 2024, VOL 3, 2024, 1013 : 43 - 53
  • [9] A Model-Driven Deep Learning-Based Receiver for OFDM System With Carrier Frequency Offset
    Lin, Xincong
    Shen, Yushi
    Jiang, Chunxiao
    IEEE COMMUNICATIONS LETTERS, 2024, 28 (04) : 813 - 817
  • [10] Deep Learning-Based Resource Allocation Scheme for Heterogeneous NOMA Networks
    Kim, Donghyeon
    Kwon, Sean
    Jung, Haejoon
    Lee, In-Ho
    IEEE ACCESS, 2023, 11 : 89423 - 89432