Efficient Serial and Parallel SVM Training using Coordinate Descent

Cited by: 0
Authors
Liossis, Emmanuel [1]
Affiliation
[1] Natl Tech Univ Athens, Sch Elect & Comp Engn, Intelligent Syst Lab, Athens, Greece
Keywords
SVM; training algorithm; parallel
DOI
Not available
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Eliminating the bias term of the Support Vector Machine (SVM) classifier permits a substantial simplification of training algorithms. With this elimination, the optimization involved in training can be decomposed so as to update as few as one coordinate at a time. This paper explores two directions of improvement that stem from this simplification. The first concerns the options available for choosing the coordinate to optimize in each iteration. The second concerns the parallelization schemes that the simplified optimization facilitates.
Pages: 76-83
Number of pages: 8
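
To make the single-coordinate update described in the abstract concrete, below is a minimal sketch of dual coordinate descent for a linear L1-loss SVM with the bias term eliminated. It is an illustrative assumption of one way the decomposition can work, not the paper's actual algorithm; the function name dual_cd_svm, its parameters, and the random coordinate order are hypothetical choices made for this example.

```python
import numpy as np

def dual_cd_svm(X, y, C=1.0, n_epochs=10, seed=None):
    """Illustrative sketch (not the paper's code): dual coordinate descent
    for a linear L1-loss SVM *without* a bias term. One dual variable
    alpha_i is updated per step, and the primal weight vector w is kept
    in sync so each update touches only one training example."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)          # dual variables, 0 <= alpha_i <= C
    w = np.zeros(d)              # w = sum_i alpha_i * y_i * x_i
    Qii = (X * X).sum(axis=1)    # diagonal of Q, since y_i^2 = 1
    for _ in range(n_epochs):
        for i in rng.permutation(n):   # one coordinate at a time
            if Qii[i] == 0.0:
                continue
            G = y[i] * w.dot(X[i]) - 1.0                    # partial gradient
            new_alpha = min(max(alpha[i] - G / Qii[i], 0.0), C)
            w += (new_alpha - alpha[i]) * y[i] * X[i]       # rank-1 sync of w
            alpha[i] = new_alpha
    return w                     # decision rule: sign(w @ x), no bias term

if __name__ == "__main__":
    # Toy usage on synthetic data (assumed example, labels in {-1, +1}).
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 5))
    y = np.sign(X[:, 0] + 0.5 * X[:, 1])
    w = dual_cd_svm(X, y, C=1.0)
    print("training accuracy:", np.mean(np.sign(X @ w) == y))
```

Because each step reads and writes only one dual variable plus the shared vector w, the coordinate-selection rule and the way updates to w are synchronized are exactly the two degrees of freedom the abstract identifies for serial improvement and parallelization, respectively.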