Fast Training on Large Genomics Data using Distributed Support Vector Machines

Citations: 0
Authors:
Theera-Ampornpunt, Nawanol [1 ]
Kim, Seong Gon [1 ]
Ghoshal, Asish [1 ]
Bagchi, Saurabh [1 ]
Grama, Ananth [1 ]
Chaterji, Somali [1 ]
Affiliation:
[1] Purdue Univ, W Lafayette, IN 47907 USA
Keywords: machine learning; classifier training; computational genomics; computational cost; network cost; CHIP-SEQ; TRANSCRIPTION; PREDICTION; ELEMENTS; ENHANCER; SIGNATURES
DOI: not available
CLC Classification: TP3 [Computing technology, computer technology]
Subject Classification: 0812
Abstract:
The field of genomics has seen a glorious explosion of high-quality data, with tremendous strides having been made in genomic sequencing instruments and in the computational genomics applications meant to make sense of the data. A common use case for genomics data is to answer whether a specific genetic signature is correlated with some disease manifestation. The Support Vector Machine (SVM) is a widely used classifier in the computational literature. Previous studies have shown success in using SVMs for this genomics use case. However, SVMs suffer from a widely recognized scalability problem in both memory use and computational time. It is as yet an open question whether training such classifiers can scale to the massive sizes that characterize many genomics data sets. We answer that question here for a specific dataset, in order to decipher whether a regulatory module with a particular combinatorial epigenetic "pattern" will regulate the expression of a gene. However, the specifics of the dataset are likely of less relevance to the claims of our work. We take a proposed theoretical technique for efficient training of SVMs, namely Cascade SVM, use it to create our classifier called EP-SVM, and empirically evaluate how it scales to the large genomics dataset. We implement Cascade SVM on the Apache Spark platform and open-source this implementation(1). Through our evaluation, we bring out the computational cost incurred by each application process, the way the overall workload is distributed among multiple processes (which can potentially execute on different cores or different machines), and the cost of transferring data between cores or machines. We believe we are the first to shed light on the computational and network costs of training an SVM on a multi-dimensional genomics dataset. We also evaluate the accuracy of the classifier as a function of the parameters of the SVM model.
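The Cascade SVM technique named in the abstract can be illustrated with a small single-machine sketch: partition the training data, train an SVM on each partition, retain only each partition's support vectors, then merge partitions pairwise and retrain until a single model remains. The code below is a hypothetical illustration using scikit-learn; it is NOT the authors' Spark-based EP-SVM implementation, and the function and parameter names (`cascade_svm`, `n_partitions`) are our own.

```python
# Single-machine sketch of the Cascade SVM idea: train partial SVMs on
# data partitions, keep only support vectors, merge pairwise, retrain.
# Illustrative only -- not the paper's distributed Spark implementation.
import numpy as np
from sklearn.svm import SVC

def cascade_svm(X, y, n_partitions=4, C=1.0):
    """Train an SVM via a cascade of partial models; returns the final SVC."""
    # Randomly partition the training set.
    index_chunks = np.array_split(np.random.permutation(len(X)), n_partitions)
    layers = [(X[i], y[i]) for i in index_chunks]
    while len(layers) > 1:
        merged = []
        # Merge partitions pairwise; each stage keeps only support vectors,
        # which is what shrinks the working set at every level of the cascade.
        for j in range(0, len(layers), 2):
            Xj, yj = layers[j]
            if j + 1 < len(layers):
                Xj = np.vstack([Xj, layers[j + 1][0]])
                yj = np.concatenate([yj, layers[j + 1][1]])
            clf = SVC(kernel="rbf", C=C).fit(Xj, yj)
            sv = clf.support_  # indices of support vectors in this stage
            merged.append((Xj[sv], yj[sv]))
        layers = merged
    # Final model is trained on the surviving support vectors only.
    Xf, yf = layers[0]
    return SVC(kernel="rbf", C=C).fit(Xf, yf)
```

In the distributed setting evaluated by the paper, each stage's partial SVMs can be trained in parallel on different cores or machines, with only the (much smaller) sets of support vectors transferred between stages; this is the source of both the computational savings and the network cost the abstract discusses.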
Pages: 8
Related Papers (50 total)
  • [1] Fast training of support vector machines by extracting boundary data
    Abe, S
    Inoue, T
    ARTIFICIAL NEURAL NETWORKS-ICANN 2001, PROCEEDINGS, 2001, 2130 : 308 - 313
  • [2] Distributed data fusion using support vector machines
    Challa, S
    Palaniswami, M
    Shilton, A
    PROCEEDINGS OF THE FIFTH INTERNATIONAL CONFERENCE ON INFORMATION FUSION, VOL II, 2002, : 881 - 885
  • [3] Training Support Vector Machines on Large Sets of Image Data
    Kukenys, Ignas
    McCane, Brendan
    Neumegen, Tim
    COMPUTER VISION - ACCV 2009, PT III, 2010, 5996 : 331 - 340
  • [4] Fast training of support vector machines for regression
    Anguita, D
    Boni, A
    Pace, S
    IJCNN 2000: PROCEEDINGS OF THE IEEE-INNS-ENNS INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOL V, 2000, : 210 - 214
  • [5] Fast Support Vector Data Description Training Using Edge Detection on Large Datasets
    Hu, Chenlong
    Zhou, Bo
    Hu, Jinglu
    PROCEEDINGS OF THE 2014 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2014, : 2176 - 2182
  • [6] Fast Support Vector Machines for Continuous Data
    Kramer, Kurt A.
    Hall, Lawrence O.
    Goldgof, Dmitry B.
    Remsen, Andrew
    Luo, Tong
    IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS PART B-CYBERNETICS, 2009, 39 (04): : 989 - 1001
  • [7] Fast training of support vector machines on the Cell processor
    Marzolla, Moreno
    NEUROCOMPUTING, 2011, 74 (17) : 3700 - 3707
  • [8] Fast Training of Support Vector Machines for Survival Analysis
    Poelsterl, Sebastian
    Navab, Nassir
    Katouzian, Amin
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2015, PT II, 2015, 9285 : 243 - 259
  • [9] Provably fast training algorithms for support vector machines
    Balcázar, José L.
    Dai, Yang
    Tanaka, Junichi
    Watanabe, Osamu
    THEORY OF COMPUTING SYSTEMS, 2008, 42 (04) : 568 - 595