Comparison Between Stochastic Gradient Descent and VLE Metaheuristic for Optimizing Matrix Factorization

Cited by: 0
Authors
Gomez-Pulido, Juan A. [1 ]
Cortes-Toro, Enrique [2 ]
Duran-Dominguez, Arturo [1 ]
Lanza-Gutierrez, Jose M. [3 ]
Crawford, Broderick [4 ]
Soto, Ricardo [4 ]
Affiliations
[1] Univ Extremadura, Badajoz, Spain
[2] Univ Playa Ancha, Valparaiso, Chile
[3] Univ Carlos III Madrid, Madrid, Spain
[4] Pontificia Univ Catolica Valparaiso, Valparaiso, Chile
Source
OPTIMIZATION AND LEARNING | 2020, Vol. 1173
Keywords
Matrix factorization; Gradient descent; Metaheuristics; Optimization; Search
DOI
10.1007/978-3-030-41913-4_13
Chinese Library Classification: TP301 [Theory, Methods]
Discipline Code: 081202
Abstract
Matrix factorization is used by recommender systems in collaborative filtering to build prediction models from a pair of factor matrices. These models are usually generated by the stochastic gradient descent algorithm, which learns the model by minimizing the prediction error. The resulting models are then validated against an error criterion by predicting test data. Since model generation can be tackled as an optimization problem with a huge space of possible solutions, we propose metaheuristics as alternative solving methods for matrix factorization. In this work we applied a novel metaheuristic for continuous optimization inspired by vapour-liquid equilibrium. We considered a particular case where matrix factorization is applied: the student performance prediction problem. The obtained results clearly surpassed the accuracy provided by stochastic gradient descent.
Pages: 153-164
Page count: 12
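
To make the abstract's baseline concrete, the following is a minimal sketch (not the paper's implementation) of SGD-based matrix factorization over the observed entries of a rating or score matrix. The rank k, learning rate, regularization weight, epoch count, and all variable names are illustrative assumptions.

    import numpy as np

    def sgd_matrix_factorization(R, k=10, lr=0.01, reg=0.05, epochs=100, seed=0):
        # Factorize R ~= P @ Q.T using SGD over the observed (non-NaN) entries.
        rng = np.random.default_rng(seed)
        n_rows, n_cols = R.shape
        P = rng.normal(scale=0.1, size=(n_rows, k))  # e.g. student latent factors
        Q = rng.normal(scale=0.1, size=(n_cols, k))  # e.g. task latent factors
        observed = np.argwhere(~np.isnan(R))         # (i, j) pairs of known entries
        for _ in range(epochs):
            rng.shuffle(observed)                    # stochastic visiting order
            for i, j in observed:
                err = R[i, j] - P[i] @ Q[j]          # signed prediction error
                p_old = P[i].copy()                  # keep pre-update row for Q's step
                P[i] += lr * (err * Q[j] - reg * P[i])   # L2-regularized gradient step
                Q[j] += lr * (err * p_old - reg * Q[j])
        return P, Q

    # Toy usage: predict the two missing (NaN) entries of a small score matrix.
    R = np.array([[5.0, 3.0, np.nan, 1.0],
                  [4.0, np.nan, 2.0, 1.0],
                  [1.0, 1.0, 3.0, 4.0],
                  [1.0, 2.0, 4.0, np.nan]])
    P, Q = sgd_matrix_factorization(R, k=2)
    print(np.round(P @ Q.T, 2))  # dense matrix of predictions for every entry

A metaheuristic such as the paper's VLE algorithm would instead treat the concatenated entries of P and Q as a continuous search space and minimize the same observed-entry error as a black-box objective; that is the alternative the abstract proposes.
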
Related Papers (showing 10 of 50)
  • [1] Accelerating Stochastic Gradient Descent Based Matrix Factorization on FPGA
    Zhou, Shijie; Kannan, Rajgopal; Prasanna, Viktor K.
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2020, 31 (08): 1897-1911
  • [2] Efficient Parallel Stochastic Gradient Descent for Matrix Factorization Using GPU
    Nassar, Mohamed A.; El-Sayed, Layla A. A.; Taha, Yousry
    2016 11TH INTERNATIONAL CONFERENCE FOR INTERNET TECHNOLOGY AND SECURED TRANSACTIONS (ICITST), 2016: 63-68
  • [3] Parallelizing Stochastic Gradient Descent with Hardware Transactional Memory for Matrix Factorization
    Wu, Zhenwei; Luo, Yingqi; Lu, Kai; Wang, Xiaoping
    2018 3RD INTERNATIONAL CONFERENCE ON INFORMATION SYSTEMS ENGINEERING (ICISE), 2018: 118-121
  • [4] Matrix Factorization Based Collaborative Filtering with Resilient Stochastic Gradient Descent
    Abdelbar, Ashraf M.; Elnabarawy, Islam; Salama, Khalid M.; Wunsch, Donald C., II
    2018 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2018
  • [5] CuMF_SGD: Parallelized Stochastic Gradient Descent for Matrix Factorization on GPUs
    Xie, Xiaolong; Tan, Wei; Fong, Liana L.; Liang, Yun
    HPDC'17: PROCEEDINGS OF THE 26TH INTERNATIONAL SYMPOSIUM ON HIGH-PERFORMANCE PARALLEL AND DISTRIBUTED COMPUTING, 2017: 79-92
  • [6] Optimizing Stochastic Gradient Descent Using the Angle Between Gradients
    Song, Chongya; Pons, Alexander; Yen, Kang
    2020 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2020: 5269-5275
  • [7] GPUSGD: A GPU-accelerated stochastic gradient descent algorithm for matrix factorization
    Jin, Jing; Lai, Siyan; Hu, Su; Lin, Jing; Lin, Xiaola
    CONCURRENCY AND COMPUTATION-PRACTICE & EXPERIENCE, 2016, 28 (14): 3844-3865
  • [8] Convergence of Alternating Gradient Descent for Matrix Factorization
    Ward, Rachel; Kolda, Tamara G.
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023
  • [9] A New Approach of GPU-accelerated Stochastic Gradient Descent Method for Matrix Factorization
    Li, Feng; Ye, Yunming; Li, Xutao; Lu, Jiajie
    INTERNATIONAL JOURNAL OF INNOVATIVE COMPUTING INFORMATION AND CONTROL, 2019, 15 (02): 697-711
  • [10] An Efficient Approach of GPU-accelerated Stochastic Gradient Descent Method for Matrix Factorization
    Li, Feng; Ye, Yunming; Li, Xutao
    JOURNAL OF INTERNET TECHNOLOGY, 2019, 20 (04): 1087-1097