Bayesian Inference With Nonlinear Generative Models: Comments on Secure Learning

Cited by: 1
Authors:
Bereyhi, Ali [1 ,2 ]
Loureiro, Bruno [3 ,4 ]
Krzakala, Florent [3 ]
Mueller, Ralf R. [1 ]
Schulz-Baldes, Hermann [5 ]
Affiliations:
[1] Friedrich Alexander Univ FAU Erlangen Nurnberg, Inst Digital Commun IDC, D-91058 Erlangen, Germany
[2] Univ Toronto, Wireless Comp Lab WCL, Toronto, ON M5S 2E4, Canada
[3] Ecole Polytech Fed Lausanne EPFL, Informat Learning & Phys Lab IdePHICS, CH-1015 Lausanne, Switzerland
[4] Ecole Normale Super, Ctr Data Sci, F-75230 Paris, France
[5] Friedrich Alexander Univ FAU Erlangen Nurnberg, Dept Math, D-91058 Erlangen, Germany
Keywords:
Bayes methods; Nonlinear optics; Information theory; Glass; Encoding; Statistical learning; Load modeling; Nonlinear generative models; Bayesian inference; Gaussian random fields; information-theoretically secure learning; replica method; decoupling principle; STATISTICAL-MECHANICS; SYSTEM-ANALYSIS; MAP ESTIMATION; SPIN-GLASSES; SIGNALS; INFORMATION; CODES; CDMA; ASYMPTOTICS; EFFICIENCY;
DOI: 10.1109/TIT.2023.3325187
Chinese Library Classification: TP [Automation and Computer Technology]
Discipline Code: 0812
Abstract:
Unlike the classical linear model, nonlinear generative models have been addressed only sparsely in the statistical learning literature. This work aims to shed light on these models and their secrecy potential. To this end, we invoke the replica method to derive the asymptotic normalized cross entropy in an inverse probability problem whose generative model is described by a Gaussian random field with a generic covariance function. Our derivations further demonstrate the asymptotic statistical decoupling of the Bayesian estimator and specify the decoupled setting for a given nonlinear model. The replica solution shows that strictly nonlinear models exhibit an all-or-nothing phase transition: there exists a critical load at which optimal Bayesian inference changes from perfect learning to uncorrelated learning. Based on this finding, we design a new secure coding scheme that achieves the secrecy capacity of the wiretap channel. This interesting result implies that strictly nonlinear generative models are perfectly secure without any secure coding. We justify this latter statement through the analysis of an illustrative model for perfectly secure and reliable inference.
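To make the abstract's distinction concrete, the following toy sketch (not from the paper; the quadratic link, dimensions, and load are illustrative assumptions) contrasts a linear observation model y = z with a strictly nonlinear one y = (z² − 1)/√2, where z is a Gaussian field evaluated at the signal. A strictly nonlinear link has no linear component in its Hermite expansion, so observations are empirically uncorrelated with any linear statistic of the signal — the intuition behind the "uncorrelated learning" phase described above.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 400, 4000  # signal dimension and number of observations (load m/n = 10)

x = rng.standard_normal(n)        # ground-truth signal
A = rng.standard_normal((m, n))   # i.i.d. Gaussian measurement rows

z = A @ x / np.sqrt(n)            # Gaussian random field evaluated at x; z_i ~ N(0, 1) approx.

y_lin = z                         # linear generative model
y_sq = (z**2 - 1) / np.sqrt(2)    # strictly nonlinear model (second Hermite polynomial)

# Empirical correlation of the observations with the linear statistic z:
corr_lin = np.corrcoef(y_lin, z)[0, 1]  # equals 1: linear observations reveal z directly
corr_sq = np.corrcoef(y_sq, z)[0, 1]    # near 0: strictly nonlinear observations carry
                                        # no linear trace of the signal
print(f"linear model correlation:            {corr_lin:.3f}")
print(f"strictly nonlinear model correlation: {corr_sq:.3f}")
```

This only illustrates the first-order (correlation) effect; the paper's replica analysis characterizes the full Bayesian-optimal behavior, including the critical load at which recovery collapses.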
Pages: 7998-8028 (31 pages)