Parallelizing Stochastic Gradient Descent with Hardware Transactional Memory for Matrix Factorization

Cited by: 5
Authors
Wu, Zhenwei [1]
Luo, Yingqi [1]
Lu, Kai [1]
Wang, Xiaoping [1]
Affiliations
[1] Natl Univ Def Technol, Sch Comp, Changsha, Hunan, Peoples R China
Keywords
Hardware transactional memory; Stochastic Gradient Descent; Recommender systems
DOI
10.1109/ICISE.2018.00029
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
The rapid growth of available data necessitates large-scale machine learning methods, and Stochastic Gradient Descent (SGD) has become one of the predominant choices. However, the inherently sequential nature of SGD severely constrains its scalability and prevents it from benefiting from multi-core devices. This work parallelizes SGD with transactional memory, leveraging hardware support for transactional execution to make better use of features recently deployed in commercial multi-core processors. To evaluate the performance of our SGD implementation, we compare it with the traditional lock-based approach and conduct a quantitative analysis of its synchronization overhead on real-world datasets. Experimental results show that the proposed parallelized SGD implementation achieves satisfactory scalability and improved execution performance compared with the lock-based approach.
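The record contains no code, but the approach described in the abstract can be illustrated with a short sketch: each per-rating SGD update of a user factor row and an item factor row is wrapped in a hardware transaction, with a conventional lock as the fallback path. The C++ sketch below uses Intel RTM (TSX) intrinsics; the factor rank K, the hyperparameters LR and REG, the retry limit, the single global fallback lock, and all names are assumptions for illustration and are not taken from the paper.

// Minimal sketch (assumed, not the authors' code): one SGD matrix-factorization
// update protected by an Intel RTM (TSX) hardware transaction, with a global
// lock as the fallback path. Requires a TSX-capable CPU.
// Build with something like: g++ -O2 -mrtm -pthread sgd_htm.cpp

#include <immintrin.h>   // _xbegin, _xend, _xabort, _XBEGIN_STARTED
#include <atomic>
#include <vector>

constexpr int   K   = 32;      // number of latent factors (assumption)
constexpr float LR  = 0.005f;  // learning rate (assumption)
constexpr float REG = 0.02f;   // L2 regularization (assumption)
constexpr int   MAX_RETRIES = 8;

std::vector<float> P;                 // user factors, row u at P[u*K .. u*K+K-1]
std::vector<float> Q;                 // item factors, row i at Q[i*K .. i*K+K-1]
std::atomic<int>   fallback_lock{0};  // 0 = free, 1 = held

// Plain SGD step for one observed rating r_ui:
//   e    = r_ui - p_u . q_i
//   p_u += LR * (e * q_i - REG * p_u)
//   q_i += LR * (e * p_u - REG * q_i)
static void sgd_step(int u, int i, float r) {
    float* p = &P[u * K];
    float* q = &Q[i * K];
    float e = r;
    for (int k = 0; k < K; ++k) e -= p[k] * q[k];
    for (int k = 0; k < K; ++k) {
        float pk = p[k], qk = q[k];
        p[k] += LR * (e * qk - REG * pk);
        q[k] += LR * (e * pk - REG * qk);
    }
}

// Same step, but the read-modify-write of p_u and q_i runs inside a hardware
// transaction; a conflicting update by another thread aborts the transaction,
// which is retried and eventually serialized through the fallback lock.
void sgd_step_htm(int u, int i, float r) {
    for (int attempt = 0; attempt < MAX_RETRIES; ++attempt) {
        unsigned status = _xbegin();
        if (status == _XBEGIN_STARTED) {
            // Read the fallback lock inside the transaction so that a lock
            // acquisition by another thread aborts us (standard lock elision).
            if (fallback_lock.load(std::memory_order_relaxed) != 0)
                _xabort(0xff);
            sgd_step(u, i, r);
            _xend();   // commit: both factor rows become visible atomically
            return;
        }
        // Aborted (data conflict, capacity overflow, interrupt, ...): retry.
    }
    // Fallback: take the global lock and perform the update non-speculatively.
    int expected = 0;
    while (!fallback_lock.compare_exchange_weak(expected, 1,
                                                std::memory_order_acquire))
        expected = 0;
    sgd_step(u, i, r);
    fallback_lock.store(0, std::memory_order_release);
}

int main() {
    int num_users = 1000, num_items = 1000;   // assumed dataset dimensions
    P.assign(num_users * K, 0.1f);
    Q.assign(num_items * K, 0.1f);
    sgd_step_htm(3, 7, 4.0f);                 // one update for rating r_37 = 4
    return 0;
}

Each transaction touches only the 2K floats of one user row and one item row, so its read/write set is small; conflicts arise only when two threads concurrently update the same user or item, which is the same contention a lock-based implementation must serialize explicitly.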
Pages: 118-121
Page count: 4