Fine-Grained Image Generation Network With Radar Range Profiles Using Cross-Modal Visual Supervision

Cited by: 3
Authors
Bao, Jiacheng [1 ]
Li, Da [1 ]
Li, Shiyong [1 ]
Zhao, Guoqiang [1 ]
Sun, Houjun [1 ]
Zhang, Yi [1 ]
Affiliations
[1] Beijing Inst Technol, Sch Integrated Circuits & Elect, Beijing Key Lab Millimeter Wave & Terahertz Tech, Beijing 100081, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Cross-modal supervision; deep neural network (DNN); electromagnetic imaging; generative adversarial network (GAN); radar range profile; CONVOLUTIONAL NEURAL-NETWORK; ENTROPY; RECONSTRUCTION; RESOLUTION;
DOI
10.1109/TMTT.2023.3299615
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Code
0808; 0809
Abstract
Electromagnetic imaging methods mainly utilize converted sampling, dimensional transformation, and coherent processing to obtain spatial images of targets, which often suffer from accuracy and efficiency problems. Deep neural network (DNN)-based high-resolution imaging methods have achieved impressive results in improving resolution and reducing computational costs. However, previous works exploit single-modality information from electromagnetic data; thus, their performance is limited. In this article, we propose an electromagnetic image generation network (EMIG-Net), which translates electromagnetic data of multiview 1-D range profiles (1DRPs) directly into bird-view 2-D high-resolution images under cross-modal supervision. We construct an adversarial generative framework with visual images as supervision to significantly improve the imaging accuracy. Moreover, the network structure is carefully designed to optimize computational efficiency. Experiments on self-built synthetic data and experimental data from an anechoic chamber show that our network can generate high-resolution images whose visual quality is superior to that of traditional imaging methods and DNN-based methods, at a lower computational cost. Compared with the backprojection (BP) algorithm, EMIG-Net achieves significant improvements in entropy (72%), peak signal-to-noise ratio (PSNR; 150%), and structural similarity (SSIM; 153%). Our work shows the broad prospects of deep learning in radar data representation and high-resolution imaging and provides a path toward electromagnetic imaging research based on learning theory.
Pages: 1339-1352
Page count: 14
Related Papers
50 records total
  • [31] Deep Multiscale Fine-Grained Hashing for Remote Sensing Cross-Modal Retrieval
    Huang, Jiaxiang
    Feng, Yong
    Zhou, Mingliang
    Xiong, Xiancai
    Wang, Yongheng
    Qiang, Baohua
    IEEE GEOSCIENCE AND REMOTE SENSING LETTERS, 2024, 21 : 1 - 5
  • [32] CROSS-MODAL KNOWLEDGE DISTILLATION FOR FINE-GRAINED ONE-SHOT CLASSIFICATION
    Zhao, Jiabao
    Lin, Xin
    Yang, Yifan
    Yang, Jing
    He, Liang
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 4295 - 4299
  • [33] Fine-grained sentiment Feature Extraction Method for Cross-modal Sentiment Analysis
    Sun, Ye
    Jin, Guozhe
    Zhao, Yahui
    Cui, Rongyi
    2024 16TH INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND COMPUTING, ICMLC 2024, 2024, : 602 - 608
  • [34] Warping of Radar Data Into Camera Image for Cross-Modal Supervision in Automotive Applications
    Grimm, Christopher
    Fei, Tai
    Warsitz, Ernst
    Farhoud, Ridha
    Breddermann, Tobias
    Haeb-Umbach, Reinhold
    IEEE TRANSACTIONS ON VEHICULAR TECHNOLOGY, 2022, 71 (09) : 9435 - 9449
  • [35] Fine-Grained Matching with Multi-Perspective Similarity Modeling for Cross-Modal Retrieval
    Xie, Xiumin
    Hou, Chuanwen
    Li, Zhixin
    UNCERTAINTY IN ARTIFICIAL INTELLIGENCE, VOL 180, 2022, 180 : 2148 - 2158
  • [36] High-Dimensional Sparse Cross-Modal Hashing with Fine-Grained Similarity Embedding
    Wang, Yongxin
    Chen, Zhen-Duo
    Luo, Xin
    Xu, Xin-Shun
    PROCEEDINGS OF THE WORLD WIDE WEB CONFERENCE 2021 (WWW 2021), 2021, : 2900 - 2909
  • [37] Fine-Grained Cross-Modal Retrieval for Cultural Items with Focal Attention and Hierarchical Encodings
    Sheng, Shurong
    Laenen, Katrien
    Van Gool, Luc
    Moens, Marie-Francine
    COMPUTERS, 2021, 10 (09)
  • [38] A Cross-modal Attention Model for Fine-Grained Incident Retrieval from Dashcam Videos
    Pham, Dinh-Duy
    Dao, Minh-Son
    Nguyen, Thanh-Binh
    MULTIMEDIA MODELING, MMM 2023, PT I, 2023, 13833 : 409 - 420
  • [39] Aligning Images and Text with Semantic Role Labels for Fine-Grained Cross-Modal Understanding
    Bhattacharyya, Abhidip
    Mauceri, Cecilia
    Palmer, Martha
    Heckman, Christoffer
    LREC 2022: THIRTEEN INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2022, : 4944 - 4954
  • [40] Fine-Grained Cross-Modal Semantic Consistency in Natural Conservation Image Data from a Multi-Task Perspective
    Tao, Rui
    Zhu, Meng
    Cao, Haiyan
    Ren, Honge
    SENSORS, 2024, 24 (10)