Energy Efficient Sparse Bayesian Learning using Learned Approximate Message Passing

Authors
Thomas, Christo Kurisummoottil [1 ,3 ]
Mundlamuri, Rakesh [1 ]
Murthy, Chandra R. [2 ]
Kountouris, Marios [1 ]
Affiliations
[1] EURECOM, Commun Syst Dept, Sophia Antipolis, France
[2] Indian Inst Sci, Bangalore, Karnataka, India
[3] Qualcomm Inc, Espoo, Finland
Keywords
Sparse Bayesian learning; approximate message passing; deep unfolding; energy efficiency;
DOI
10.1109/SPAWC51858.2021.9593220
CLC Classification
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Discipline Codes
0808; 0809
Abstract
Sparse Bayesian learning (SBL) is a well-studied framework for sparse signal recovery, with numerous applications in wireless communications, including wideband (millimeter wave) channel estimation and user activity detection. SBL is known to be more sparsity-inducing than other priors (e.g., the Laplacian prior) and is better able to handle ill-conditioned measurement matrices, hence providing superior sparse recovery performance. However, the complexity of SBL does not scale well with the dimensionality of the problem, owing to the matrix inversion step in the EM iterations. A computationally efficient version of SBL can be obtained by exploiting approximate message passing (AMP) for the inversion, coined AMP-SBL. However, this algorithm still requires a large number of iterations and careful hand-tuning to guarantee convergence for arbitrary measurement matrices. In this work, we revisit AMP-SBL from an energy-efficiency perspective. We propose a fast version of AMP-SBL leveraging deep neural networks (DNN). The main idea is to use deep learning to unfold the iterations of the AMP-SBL algorithm into very few (no more than 10) neural network layers. The sparse vector estimation is performed by the DNN, while the hyperparameters are learned using the EM algorithm, making the approach robust to different measurement matrix models. Our results show a reduction in energy consumption, primarily due to lower complexity and a faster convergence rate. Moreover, training the neural network is simple, since the number of parameters to be learned is relatively small.
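The unfolding idea described in the abstract can be illustrated with a small sketch: run a fixed, small number of AMP-style layers, each with its own scalar parameter standing in for the learned per-layer weights. This is not the authors' exact AMP-SBL architecture; the function names and the use of a soft-thresholding denoiser (rather than the SBL Gaussian-prior denoiser with EM-learned hyperparameters) are assumptions made purely for illustration.

```python
import numpy as np

def soft_threshold(v, lam):
    # Elementwise shrinkage acting as the per-layer denoiser (illustrative
    # stand-in for the SBL posterior-mean denoiser).
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def unfolded_amp(y, A, thetas):
    # One "layer" per entry of thetas; thetas plays the role of the small
    # set of learned per-layer parameters mentioned in the abstract.
    m, n = A.shape
    x = np.zeros(n)
    z = y.copy()  # residual, corrected by the Onsager term below
    for lam in thetas:
        r = x + A.T @ z                  # pseudo-data estimate
        x_new = soft_threshold(r, lam)   # per-layer denoising
        b = np.count_nonzero(x_new) / m  # Onsager correction weight
        z = y - A @ x_new + b * z
        x = x_new
    return x

# Toy run: noiseless recovery of a 10-sparse vector from 250 Gaussian
# measurements using 10 unfolded layers with hand-set thresholds.
rng = np.random.default_rng(0)
m, n, k = 250, 100, 10
A = rng.standard_normal((m, n)) / np.sqrt(m)  # roughly unit-norm columns
x0 = np.zeros(n)
support = rng.choice(n, k, replace=False)
x0[support] = 2.0 * rng.choice([-1.0, 1.0], k)
y = A @ x0
x_hat = unfolded_amp(y, A, np.linspace(0.5, 0.1, 10))
```

In a learned variant, the per-layer thresholds (and possibly per-layer matrices) would be trained by backpropagation instead of hand-set, which is what keeps the layer count so small.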
Pages: 271 - 275 (5 pages)
Related Papers (50 total)
  • [1] Sparse Bayesian Learning Using Approximate Message Passing
    Al-Shoukairi, Maher
    Rao, Bhaskar
    [J]. CONFERENCE RECORD OF THE 2014 FORTY-EIGHTH ASILOMAR CONFERENCE ON SIGNALS, SYSTEMS & COMPUTERS, 2014, : 1957 - 1961
  • [2] Unitary Approximate Message Passing for Sparse Bayesian Learning
    Luo, Man
    Guo, Qinghua
    Jin, Ming
    Eldar, Yonina C.
    Huang, Defeng
    Meng, Xiangming
    [J]. IEEE TRANSACTIONS ON SIGNAL PROCESSING, 2021, 69 : 6023 - 6038
  • [3] Computationally Efficient Sparse Bayesian Learning via Generalized Approximate Message Passing
    Zou, Xianbing
    Li, Fuwei
    Fang, Jun
    Li, Hongbin
    [J]. 2016 IEEE INTERNATIONAL CONFERENCE ON UBIQUITOUS WIRELESS BROADBAND (ICUWB2016), 2016,
  • [4] Sparse Bayesian Learning Based on Approximate Message Passing with Unitary Transformation
    Luo, Man
    Guo, Qinghua
    Huang, Defeng
    Xi, Jiangtao
    [J]. 2019 IEEE VTS ASIA PACIFIC WIRELESS COMMUNICATIONS SYMPOSIUM (APWCS 2019), 2019,
  • [5] Vector Approximate Message Passing with Sparse Bayesian Learning for Gaussian Mixture Prior
    Ruan, Chengyao
    Zhang, Zaichen
    Jiang, Hao
    Dang, Jian
    Wu, Liang
    Zhang, Hongming
    [J]. CHINA COMMUNICATIONS, 2023, 20 (05) : 57 - 69
  • [7] Generalized Swept Approximate Message Passing based Kalman Filtering for Dynamic Sparse Bayesian Learning
    Thomas, Christo Kurisummoottil
    Slock, Dirk
    [J]. 28TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO 2020), 2021, : 2065 - 2069
  • [8] Variational Bayesian and Generalized Approximate Message Passing-Based Sparse Bayesian Learning Model for Image Reconstruction
    Dong, Jingyi
    Lyu, Wentao
    Zhou, Di
    Xu, Weiqiang
    [J]. IEEE SIGNAL PROCESSING LETTERS, 2022, 29 : 2328 - 2332
  • [9] Approximate Message Passing With Consistent Parameter Estimation and Applications to Sparse Learning
    Kamilov, Ulugbek S.
    Rangan, Sundeep
    Fletcher, Alyson K.
    Unser, Michael
    [J]. IEEE TRANSACTIONS ON INFORMATION THEORY, 2014, 60 (05) : 2969 - 2985
  • [10] Two-Dimensional Pattern-Coupled Sparse Bayesian Learning via Generalized Approximate Message Passing
    Fang, Jun
    Zhang, Lizao
    Li, Hongbin
    [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2016, 25 (06) : 2920 - 2930