Scalable learning of large networks

Cited by: 1
Authors
Roy, S. [1 ]
Plis, S. [1 ]
Werner-Washburne, M. [2 ]
Lane, T. [1 ]
Affiliations
[1] Univ New Mexico, Dept Comp Sci, Albuquerque, NM 87131 USA
[2] Univ New Mexico, Dept Biol, Albuquerque, NM 87131 USA
Keywords
NONQUIESCENT CELLS; QUIESCENT; TIME
DOI
10.1049/iet-syb.2008.0161
Chinese Library Classification (CLC)
Q2 [Cell Biology]
Subject classification codes
071009; 090102
Abstract
Cellular networks inferred from condition-specific microarray data can capture the functional rewiring of cells in response to different environmental conditions. Unfortunately, many algorithms for inferring cellular networks do not scale to whole-genome data with thousands of variables. We propose a novel approach for scalable learning of large networks: cluster and infer networks (CIN). CIN learns network structures in two steps: (a) partition variables into smaller clusters, and (b) learn networks per cluster. We optionally revisit the cluster assignment of variables with poor neighbourhoods. Results on networks with known topologies suggest that CIN offers substantial speed benefits without appreciable performance loss. We applied our approach to microarray compendia of glucose-starved yeast cells. The inferred networks contained a significantly higher number of subgraphs representing meaningful biological dependencies than random graphs did. Analysis of these subgraphs identified biological processes that agreed well with existing information about yeast populations under glucose starvation, and also implicated novel pathways not previously known to be associated with these populations.
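
To make the two-step procedure concrete, below is a minimal sketch in Python of the cluster-and-infer idea. It is not the authors' implementation: k-means as the clustering step, a graphical lasso as the per-cluster network learner, the edge threshold, and the function name cluster_and_infer are all illustrative assumptions, and the optional reassignment of variables with poor neighbourhoods is omitted.

    # Illustrative sketch of cluster-and-infer-networks (CIN), not the published code.
    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.covariance import GraphicalLassoCV

    def cluster_and_infer(expr, n_clusters=20, random_state=0):
        """expr: (n_conditions, n_genes) expression matrix.
        Returns {cluster id: (gene indices, 0/1 adjacency matrix)}."""
        # Step (a): partition the variables (genes) into smaller clusters by
        # the similarity of their expression profiles across conditions.
        labels = KMeans(n_clusters=n_clusters, n_init=10,
                        random_state=random_state).fit_predict(expr.T)
        networks = {}
        for c in range(n_clusters):
            genes = np.flatnonzero(labels == c)
            if genes.size < 3:  # too few variables to estimate a network
                continue
            # Step (b): learn a network over this cluster's variables only,
            # so no structure-learning call ever sees all genes at once.
            model = GraphicalLassoCV().fit(expr[:, genes])
            adjacency = (np.abs(model.precision_) > 1e-6).astype(int)
            np.fill_diagonal(adjacency, 0)
            networks[c] = (genes, adjacency)
        # Reassigning genes with poor neighbourhoods to other clusters, the
        # optional step mentioned in the abstract, is left out of this sketch.
        return networks

Because each structure-learning call only ever sees one cluster of variables, the cost is governed by cluster size rather than genome size, which is the source of the speed benefit described in the abstract.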
Pages: 404 / U157
Page count: 16
Related papers
50 records in total
  • [31] ScaLeKB: scalable learning and inference over large knowledge bases
    Yang Chen
    Daisy Zhe Wang
    Sean Goldberg
    The VLDB Journal, 2016, 25: 893 - 918
  • [32] Scalable Large-Margin Structured Learning: Theory and Algorithms
    Huang, Liang
    Zhao, Kai
    53RD ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS AND THE 7TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (ACL-IJCNLP 2015), 2015: 19 - 20
  • [33] ScaLeKB: scalable learning and inference over large knowledge bases
    Chen, Yang
    Wang, Daisy Zhe
    Goldberg, Sean
    VLDB JOURNAL, 2016, 25 (06): 893 - 918
  • [34] Scalable Large-Margin Mahalanobis Distance Metric Learning
    Shen, Chunhua
    Kim, Junae
    Wang, Lei
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2010, 21 (09): 1524 - 1530
  • [35] node2vec: Scalable Feature Learning for Networks
    Grover, Aditya
    Leskovec, Jure
    KDD'16: PROCEEDINGS OF THE 22ND ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 2016: 855 - 864
  • [36] SCALEDEEP: A Scalable Compute Architecture for Learning and Evaluating Deep Networks
    Venkataramani, Swagath
    Ranjan, Ashish
    Banerjee, Subarno
    Das, Dipankar
    Avancha, Sasikanth
    Jagannathan, Ashok
    Durg, Ajaya
    Nagaraj, Dheemanth
    Kaul, Bharat
    Dubey, Pradeep
    Raghunathan, Anand
    44TH ANNUAL INTERNATIONAL SYMPOSIUM ON COMPUTER ARCHITECTURE (ISCA 2017), 2017: 13 - 26
  • [37] Scalable neural networks for the efficient learning of disordered quantum systems
    Saraceni, N.
    Cantori, S.
    Pilati, S.
    PHYSICAL REVIEW E, 2020, 102 (03)
  • [38] Scalable Bayesian Learning of Recurrent Neural Networks for Language Modeling
    Gan, Zhe
    Li, Chunyuan
    Chen, Changyou
    Pu, Yunchen
    Su, Qinliang
    Carin, Lawrence
    PROCEEDINGS OF THE 55TH ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL 2017), VOL 1, 2017: 321 - 331
  • [39] Efficient and Scalable Structure Learning for Bayesian Networks: Algorithms and Applications
    Zhu, Rong
    Pfadler, Andreas
    Wu, Ziniu
    Han, Yuxing
    Yang, Xiaoke
    Ye, Feng
    Qian, Zhenping
    Zhou, Jingren
    Cui, Bin
    2021 IEEE 37TH INTERNATIONAL CONFERENCE ON DATA ENGINEERING (ICDE 2021), 2021: 2613 - 2624
  • [40] Learning Multifractal Structure in Large Networks
    Benson, Austin R.
    Riquelme, Carlos
    Schmit, Sven
    PROCEEDINGS OF THE 20TH ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING (KDD'14), 2014: 1326 - 1335