On the Minimax Risk of Dictionary Learning

Cited by: 20
Authors
Jung, Alexander [1 ]
Eldar, Yonina C. [2 ]
Goertz, Norbert [3 ]
Affiliations
[1] Aalto Univ, Dept Comp Sci, Espoo 02150, Finland
[2] Technion Israel Inst Technol, Dept EE, IL-32000 Haifa, Israel
[3] Vienna Univ Technol, Inst Telecommun, A-1040 Vienna, Austria
Funding
Israel Science Foundation
Keywords
Compressed sensing; dictionary learning; minimax risk; Fano inequality; Kullback-Leibler divergence; Gaussian mixture models; overcomplete dictionaries; sparse representation; sample complexity; K-SVD; information; covariance; vector; bounds
DOI
10.1109/TIT.2016.2517006
CLC Classification Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
We consider the problem of learning a dictionary matrix from a number of observed signals, which are assumed to be generated via a linear model with a common underlying dictionary. In particular, we derive lower bounds on the minimum achievable worst-case mean squared error (MSE), regardless of the computational complexity of the dictionary learning (DL) scheme. By casting DL as a classical (or frequentist) estimation problem, the lower bounds on the worst-case MSE are derived following an established information-theoretic approach to minimax estimation. The main contribution of this paper is the adaptation of these information-theoretic tools to the DL problem in order to derive lower bounds on the worst-case MSE of any DL algorithm. We derive three different lower bounds, each applying to a different generative model for the observed signals. The first bound requires only the existence of a covariance matrix of the (unknown) underlying coefficient vector. The second bound, obtained by specializing the first to sparse coefficient distributions under the assumption that the true dictionary satisfies the restricted isometry property, is expressed in terms of the signal-to-noise ratio (SNR). The third bound applies to a more restrictive subclass of coefficient distributions by requiring the non-zero coefficients to be Gaussian. Although this bound has the most limited applicability, it is the tightest of the three in the low-SNR regime. A particular use of our lower bounds is the derivation of necessary conditions on the required number of observations (sample size) for DL to be feasible, i.e., for accurate DL schemes to possibly exist. By comparing these necessary conditions with sufficient conditions on the sample size under which a particular DL technique succeeds, we can characterize the regimes in which those algorithms are optimal in terms of the required sample size.
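As a minimal formalization of the setting the abstract describes, the following sketch may help; the notation is ours (illustrative only, not necessarily the paper's):

% Illustrative sketch of the DL setting described in the abstract
% (notation assumed, not copied from the paper).
% Each observed signal y_k follows a linear model with a common
% underlying dictionary D, coefficient vector x_k, and noise n_k:
\[
  \mathbf{y}_k = \mathbf{D}\,\mathbf{x}_k + \mathbf{n}_k, \qquad k = 1, \dots, N.
\]
% The minimax risk is the worst-case MSE of the best estimator
% \widehat{\mathbf{D}} over a class \mathcal{D} of admissible dictionaries:
\[
  \varepsilon^{*} \;=\; \inf_{\widehat{\mathbf{D}}} \,\sup_{\mathbf{D} \in \mathcal{D}}\,
  \mathbb{E}\bigl\{ \| \widehat{\mathbf{D}}(\mathbf{y}_1,\dots,\mathbf{y}_N) - \mathbf{D} \|_{\mathrm{F}}^{2} \bigr\}.
\]
% Fano-type arguments of the kind the abstract refers to lower-bound
% \varepsilon^{*} by reducing estimation to testing over a finite set of
% L well-separated candidate dictionaries: Fano's inequality gives
% P(error) >= 1 - (I + log 2)/log L, where I is the mutual information
% between the observations and the dictionary index.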
Pages: 1501-1515
Page count: 15