Lossy Compression via Sparse Linear Regression: Performance Under Minimum-Distance Encoding

Cited by: 9
Authors
Venkataramanan, Ramji [1 ]
Joseph, Antony [2 ]
Tatikonda, Sekhar [3 ]
Affiliations
[1] Univ Cambridge, Dept Engn, Cambridge CB2 1PZ, England
[2] Univ Calif Berkeley, Dept Stat, Berkeley, CA 94704 USA
[3] Yale Univ, Dept Elect Engn, New Haven, CT 06511 USA
Funding
National Science Foundation (USA);
Keywords
Lossy compression; Gaussian sources; squared error distortion; rate-distortion function; error exponent; sparse regression; THEORETIC LIMITS; ERROR EXPONENT; CODES; RECOVERY;
DOI
10.1109/TIT.2014.2313085
CLC number
TP [Automation Technology, Computer Technology];
Subject classification code
0812;
Abstract
We study a new class of codes for lossy compression with the squared-error distortion criterion, designed using the statistical framework of high-dimensional linear regression. Codewords are linear combinations of subsets of columns of a design matrix. Called a sparse superposition or sparse regression codebook, this structure is motivated by an analogous construction proposed recently by Barron and Joseph for communication over an additive white Gaussian noise channel. For independent identically distributed (i.i.d.) Gaussian sources and minimum-distance encoding, we show that such a code can attain the Shannon rate-distortion function with the optimal error exponent, for all distortions below a specified value. It is also shown that sparse regression codes are robust in the following sense: a codebook designed to compress an i.i.d. Gaussian source of variance σ² with (squared-error) distortion D can compress any ergodic source of variance less than σ² to within distortion D. Thus, the sparse regression ensemble retains many of the good covering properties of the i.i.d. random Gaussian ensemble, while having a compact representation in terms of a matrix whose size is a low-order polynomial in the block length.
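To make the construction concrete, the sketch below builds a small sparse regression codebook and performs minimum-distance encoding by exhaustive search. It is a minimal illustration, not the paper's exact specification: the parameter values, the N(0,1) design-matrix entries, and the non-zero coefficient value sqrt((σ² - D)/L) are assumptions chosen so each codeword entry has variance roughly σ² - D.

import itertools
import numpy as np

rng = np.random.default_rng(0)

n = 32                 # block length (toy size)
L, M = 3, 8            # L sections of M columns each; codebook size M**L
sigma2, D = 1.0, 0.5   # source variance and target distortion (assumed values)

# Design matrix: n x (M*L), entries i.i.d. N(0,1) (assumed normalization).
A = rng.standard_normal((n, M * L))

# Each codeword is A @ beta, where beta has exactly one non-zero entry per
# section, of value c, so codeword entries have variance about L*c**2 = sigma2 - D.
c = np.sqrt((sigma2 - D) / L)

def codeword(cols):
    """Codeword obtained by picking one column index per section."""
    beta = np.zeros(M * L)
    for sec, j in enumerate(cols):
        beta[sec * M + j] = c
    return A @ beta

# Minimum-distance encoding: exhaustive search over all M**L codewords
# (exponential in L; feasible only at toy sizes like these).
x = np.sqrt(sigma2) * rng.standard_normal(n)   # i.i.d. Gaussian source block
best = min(itertools.product(range(M), repeat=L),
           key=lambda cols: np.sum((x - codeword(cols)) ** 2))

rate = L * np.log(M) / n                       # rate in nats per source sample
distortion = np.mean((x - codeword(best)) ** 2)
print(f"rate = {rate:.3f} nats/sample, distortion = {distortion:.3f}")

The exhaustive search over all M**L codewords is what makes minimum-distance encoding an analytical benchmark rather than a practical algorithm; computationally efficient encoders are the subject of related paper [1] below.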
Pages: 3254 - 3264
Number of pages: 11
Related papers
50 records in total
  • [1] Lossy Compression via Sparse Linear Regression: Computationally Efficient Encoding and Decoding
    Venkataramanan, Ramji
    Sarkar, Tuhin
    Tatikonda, Sekhar
    IEEE TRANSACTIONS ON INFORMATION THEORY, 2014, 60 (06) : 3265 - 3278
  • [2] Lossy Compression via Sparse Linear Regression: Computationally Efficient Encoding and Decoding
    Venkataramanan, Ramji
    Sarkar, Tuhin
    Tatikonda, Sekhar
2013 IEEE INTERNATIONAL SYMPOSIUM ON INFORMATION THEORY PROCEEDINGS (ISIT), 2013 : 1182+
  • [3] Linear censored quantile regression: A novel minimum-distance approach
    De Backer, Mickael
    El Ghouch, Anouar
    Van Keilegom, Ingrid
    SCANDINAVIAN JOURNAL OF STATISTICS, 2020, 47 (04) : 1275 - 1306
  • [4] MINIMUM-DISTANCE BOUNDS FOR BINARY LINEAR CODES
    HELGERT, HJ
    STINAFF, RD
    IEEE TRANSACTIONS ON INFORMATION THEORY, 1973, 19 (03) : 344 - 356
  • [5] Modified minimum-distance criterion for blended random and nonrandom encoding
    Duelli, M
    Reece, M
    Cohn, RW
JOURNAL OF THE OPTICAL SOCIETY OF AMERICA A-OPTICS IMAGE SCIENCE AND VISION, 1999, 16 (10) : 2425 - 2438
  • [7] Lossy Compression via Sparse Regression Codes: An Approximate Message Passing Approach
    Wu, Huihui
    Wang, Wenjie
    Liang, Shansuo
    Han, Wei
    Bai, Bo
2023 IEEE INFORMATION THEORY WORKSHOP (ITW), 2023 : 288 - 293
  • [9] AN UPDATED TABLE OF MINIMUM-DISTANCE BOUNDS FOR BINARY LINEAR CODES
    BROUWER, AE
    VERHOEFF, T
    IEEE TRANSACTIONS ON INFORMATION THEORY, 1993, 39 (02) : 662 - 677
  • [10] Lossy Compression of Noisy Sparse Sources Based on Syndrome Encoding
    Elzanaty, Ahmed
    Giorgetti, Andrea
    Chiani, Marco
    IEEE TRANSACTIONS ON COMMUNICATIONS, 2019, 67 (10) : 7073 - 7087