A Biterm Topic Model for Sparse Mutation Data

Cited by: 0
Authors
Sason, Itay [1 ]
Chen, Yuexi [2 ,3 ]
Leiserson, Mark D. M. [2 ,3 ]
Sharan, Roded [1 ]
Affiliations
[1] Tel Aviv Univ, Sch Comp Sci, IL-69978 Tel Aviv, Israel
[2] Univ Maryland, Dept Comp Sci, College Pk, MD 20740 USA
[3] Univ Maryland, Ctr Bioinformat & Computat Biol, College Pk, MD 20740 USA
Keywords
mutational signature; panel sequencing data; biterm topic model
DOI
10.3390/cancers15051601
Chinese Library Classification
R73 [Oncology]
Discipline Code
100214
Abstract
Simple Summary: We developed an efficient method for analyzing sparse mutation data that uses mutation co-occurrence to infer the underlying numbers of mutational signatures and sample clusters that gave rise to the data. Mutational signature analysis promises to reveal the processes that shape cancer genomes, with applications in diagnosis and therapy. However, most current methods are geared toward rich mutation data extracted from whole-genome or whole-exome sequencing; methods that handle the sparse mutation data typically found in practice are only in the earliest stages of development. In particular, we previously developed the Mix model, which clusters samples to cope with data sparsity. However, Mix had two hyper-parameters, the number of signatures and the number of clusters, that were very costly to learn. We therefore devised a new method that is several orders of magnitude more efficient for sparse data; it is based on mutation co-occurrences, imitating the word co-occurrence analyses applied to Twitter texts. We show that the model produces significantly improved hyper-parameter estimates, leading to higher likelihoods on held-out data and better correspondence with known signatures.
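To make the abstract's analogy concrete: a biterm topic model treats a sample's mutation categories the way BTM treats the words of a short text, and models all unordered co-occurring pairs (biterms) drawn from it. Below is a minimal, hypothetical Python sketch of just this biterm-extraction step; the function name and the mutation-category strings are invented for illustration, and this is not the authors' implementation.

from itertools import combinations
from collections import Counter

def extract_biterms(samples):
    """Count unordered pairs of mutation categories that co-occur
    within the same sample, mirroring how the biterm topic model
    (BTM) counts word pairs co-occurring within a short text."""
    biterms = Counter()
    for categories in samples:
        # Every unordered pair of category occurrences in a sample
        # is one biterm; repeated categories contribute extra pairs.
        for a, b in combinations(sorted(categories), 2):
            biterms[(a, b)] += 1
    return biterms

# Toy corpus: three sparse samples, each a short list of (made-up)
# single-base-substitution categories, as a targeted panel might yield.
samples = [
    ["C>A@ACA", "C>T@TCG", "C>A@ACA"],
    ["C>T@TCG", "T>C@ATA"],
    ["C>A@ACA", "T>C@ATA", "C>T@TCG"],
]
print(extract_biterms(samples))

Pooling biterm counts across the whole corpus, rather than modeling each sample's own sparse count vector, is what lets BTM-style models cope with samples that carry only a handful of mutations.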
Pages: 11
Related Papers (50 total)
  • [1] Sparse Biterm Topic Model for Short Texts
    Zhu, Bingshan
    Cai, Yi
    Zhang, Huakui
    WEB AND BIG DATA, APWEB-WAIM 2021, PT I, 2021, 12858 : 227 - 241
  • [2] User Based Aggregation for Biterm Topic Model
    Chen, Weizheng
    Wang, Jinpeng
    Zhang, Yan
    Yan, Hongfei
    Li, Xiaoming
    PROCEEDINGS OF THE 53RD ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS (ACL) AND THE 7TH INTERNATIONAL JOINT CONFERENCE ON NATURAL LANGUAGE PROCESSING (IJCNLP), VOL 2, 2015, : 489 - 494
  • [3] Improving biterm topic model with word embeddings
    Huang, Jiajia
    Peng, Min
    Li, Pengwei
    Hu, Zhiwei
    Xu, Chao
    WORLD WIDE WEB-INTERNET AND WEB INFORMATION SYSTEMS, 2020, 23 (06): : 3099 - 3124
  • [4] FastBTM: Reducing the sampling time for biterm topic model
    He, Xingwei
    Xu, Hua
    Li, Jia
    He, Liu
    Yu, Linlin
    KNOWLEDGE-BASED SYSTEMS, 2017, 132 : 11 - 20
  • [5] Biterm Pseudo Document Topic Model for Short Text
    Jiang, Lan
    Lu, Hengyang
    Xu, Ming
    Wang, Chongjun
    2016 IEEE 28TH INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI 2016), 2016, : 865 - 872
  • [6] Modification biterm topic model input feature for detecting topic in thematic virtual museums
    Anggai, S.
    Blekanov, I. S.
    Sergeev, S. L.
    VESTNIK SANKT-PETERBURGSKOGO UNIVERSITETA SERIYA 10 PRIKLADNAYA MATEMATIKA INFORMATIKA PROTSESSY UPRAVLENIYA, 2018, 14 (03): : 243 - 251
  • [7] Stochastic Collapsed Variational Bayesian Inference for Biterm Topic Model
    Awaya, Narutaka
    Kitazono, Jun
    Omori, Toshiaki
    Ozawa, Seiichi
    2016 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2016, : 3364 - 3370
  • [8] A Biterm-based Dirichlet Process Topic Model for Short Texts
    Pan, Yali
    Yin, Jian
    Liu, Shaopeng
    Li, Jing
    PROCEEDINGS OF THE 3RD INTERNATIONAL CONFERENCE ON COMPUTER SCIENCE AND SERVICE SYSTEM (CSSS), 2014, 109 : 301 - 304
  • [9] Optimize Collapsed Gibbs Sampling for Biterm Topic Model by Alias Method
    He, Xingwei
    Xu, Hua
    Sun, Xiaomin
    Deng, Junhui
    Bai, Xiaoli
    Li, Jia
    2017 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2017, : 1155 - 1162