A stochastic variance-reduced coordinate descent algorithm for learning sparse Bayesian network from discrete high-dimensional data

Cited by: 1
Authors
Shajoonnezhad, Nazanin [1 ]
Nikanjam, Amin [2 ]
Affiliations
[1] KN Toosi Univ Technol, Tehran, Iran
[2] Polytech Montreal, Montreal, PQ, Canada
Keywords
Bayesian networks; Sparse structure learning; Stochastic gradient descent; Constrained optimization; DIRECTED ACYCLIC GRAPHS; PENALIZED ESTIMATION; REGULARIZATION; CONNECTIVITY;
DOI
10.1007/s13042-022-01674-9
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper addresses the problem of learning a sparse-structure Bayesian network from high-dimensional discrete data. Compared with continuous Bayesian networks, learning a discrete Bayesian network is challenging because of the large parameter space. Although many approaches have been developed for learning continuous Bayesian networks, few have been proposed for discrete ones. In this paper, we formulate Bayesian network learning as an optimization problem and propose a score function that guarantees the learnt structure to be a sparse directed acyclic graph. In addition, we implement a block-wise stochastic coordinate descent algorithm to optimize the score function. Specifically, we use a variance-reduction method in the optimization algorithm so that it works efficiently for high-dimensional data. The proposed approach is applied to synthetic data from well-known benchmark networks, and the quality, scalability, and robustness of the constructed networks are measured. The results show that our algorithm outperforms several well-known competing methods.
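To make the optimizer described in the abstract concrete, the following is a minimal illustrative Python sketch of SVRG-style variance-reduced stochastic coordinate descent with a soft-thresholding (L1 proximal) step. It is not the paper's method: it uses a simple L1-penalized least-squares surrogate instead of the paper's DAG score function, and the function names (svrg_coordinate_descent, soft_threshold) and all parameter values are hypothetical choices for the example.

import numpy as np

def soft_threshold(z, t):
    # Proximal operator of t * ||.||_1, applied element-wise.
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def svrg_coordinate_descent(X, Y, lam=0.1, step=0.05, n_epochs=20, batch=8, seed=0):
    # Illustrative objective (assumption, not the paper's score function):
    #   min_W  (1/2n) ||Y - X W||_F^2 + lam * ||W||_1
    rng = np.random.default_rng(seed)
    n, d = X.shape
    k = Y.shape[1]
    W = np.zeros((d, k))
    for _ in range(n_epochs):
        # SVRG anchor: full gradient at a snapshot of the current iterate.
        W_snap = W.copy()
        full_grad = X.T @ (X @ W_snap - Y) / n
        for _ in range(n // batch):
            idx = rng.choice(n, size=batch, replace=False)
            Xb, Yb = X[idx], Y[idx]
            # Variance-reduced gradient estimate:
            #   g = grad_batch(W) - grad_batch(W_snap) + full_grad
            g = (Xb.T @ (Xb @ W - Yb) - Xb.T @ (Xb @ W_snap - Yb)) / batch + full_grad
            # Block-wise update: pick one coordinate block (row of W),
            # take a gradient step, then apply the L1 proximal step to it.
            j = rng.integers(d)
            W[j] = soft_threshold(W[j] - step * g[j], step * lam)
    return W

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 10))
    W_true = np.zeros((10, 5))
    W_true[:3] = rng.normal(size=(3, 5))
    Y = X @ W_true + 0.1 * rng.normal(size=(200, 5))
    W_hat = svrg_coordinate_descent(X, Y, lam=0.05)
    print("recovered non-zero rows:", np.where(np.abs(W_hat).sum(axis=1) > 1e-6)[0])

The snapshot gradient keeps the stochastic estimate unbiased while shrinking its variance as W approaches W_snap, which is what allows the small-batch coordinate updates to remain stable in high dimensions.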
Pages: 947-958
Page count: 12
Related papers
50 records
  • [11] Similarity Learning for High-Dimensional Sparse Data
    Liu, Kuan
    Bellet, Aurelien
    Sha, Fei
    ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 38, 2015, 38 : 653 - 662
  • [12] Group Learning for High-Dimensional Sparse Data
    Cherkassky, Vladimir
    Chen, Hsiang-Han
    Shiao, Han-Tai
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [13] HIGH-DIMENSIONAL SPARSE BAYESIAN LEARNING WITHOUT COVARIANCE MATRICES
    Lin, Alexander
    Song, Andrew H.
    Bilgic, Berkin
    Ba, Demba
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 1511 - 1515
  • [14] An Iterative Coordinate Descent Algorithm for High-Dimensional Nonconvex Penalized Quantile Regression
    Peng, Bo
    Wang, Lan
    JOURNAL OF COMPUTATIONAL AND GRAPHICAL STATISTICS, 2015, 24 (03) : 676 - 694
  • [15] Sparse Bayesian variable selection for classifying high-dimensional data
    Yang, Aijun
    Lian, Heng
    Jiang, Xuejun
    Liu, Pengfei
    STATISTICS AND ITS INTERFACE, 2018, 11 (02) : 385 - 395
  • [16] Efficient Sparse Representation for Learning With High-Dimensional Data
    Chen, Jie
    Yang, Shengxiang
    Wang, Zhu
    Mao, Hua
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2023, 34 (08) : 4208 - 4222
  • [17] A coordinate descent approach for sparse Bayesian learning in high dimensional QTL mapping and genome-wide association studies
    Wang, Meiyue
    Xu, Shizhong
    BIOINFORMATICS, 2019, 35 (21) : 4327 - 4335
  • [18] Bayesian evolutionary hypernetworks for interpretable learning from high-dimensional data
    Kim, Soo-Jin
    Ha, Jung-Woo
    Kim, Heebal
    Zhang, Byoung-Tak
    APPLIED SOFT COMPUTING, 2019, 81
  • [19] A Semismooth Newton Algorithm for High-Dimensional Nonconvex Sparse Learning
    Shi, Yueyong
    Huang, Jian
    Jiao, Yuling
    Yang, Qinglong
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, 31 (08) : 2993 - 3006
  • [20] Deep-learning assisted reduced order model for high-dimensional flow prediction from sparse data
    Wu, Jiaxin
    Xiao, Dunhui
    Luo, Min
    PHYSICS OF FLUIDS, 2023, 35 (10)