LEARNING IN HIGH-DIMENSIONAL FEATURE SPACES USING ANOVA-BASED FAST MATRIX-VECTOR MULTIPLICATION

Cited by: 0
Authors
Nestler, Franziska [1 ]
Stoll, Martin [2 ]
Wagner, Theresa [2 ]
Affiliations
[1] TU Chemnitz, Dept Math, Chair Appl Funct Anal, Chemnitz, Germany
[2] TU Chemnitz, Dept Math, Chair Sci Comp, Chemnitz, Germany
Keywords
ANOVA kernel; kernel ridge regression; non-equispaced fast Fourier transform; fast summation; fast matrix-vector multiplication; multiple kernel learning; fast Fourier transforms
DOI
10.3934/fods.2022012
CLC number: O29 (Applied Mathematics)
Discipline code: 070104
Abstract
Kernel matrices are crucial in many learning tasks such as support vector machines or kernel ridge regression. The kernel matrix is typically dense and large-scale; depending on the dimension of the feature space, even computing all of its entries in reasonable time becomes challenging. For such dense N × N matrices, the cost of a matrix-vector product scales quadratically in N if no customized methods are applied. We propose the use of an ANOVA kernel, where we construct several kernels based on lower-dimensional feature spaces for which we provide fast algorithms realizing the matrix-vector products. We employ the non-equispaced fast Fourier transform (NFFT), which is of linear complexity for fixed accuracy. Based on a feature-grouping approach, we then show how the fast matrix-vector products can be embedded into a learning method, choosing kernel ridge regression and the conjugate gradient solver. We illustrate the performance of our approach on several data sets.
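The pipeline described in the abstract can be sketched as follows: build the kernel matrix-vector product as a sum over lower-dimensional feature groups (the ANOVA construction), then solve the kernel ridge regression system with conjugate gradients, touching the kernel only through matvecs. This is a minimal illustration, not the authors' implementation: the function names, the dense Gaussian-kernel matvec (which stands in for the paper's NFFT-based fast summation), and the fixed hyperparameters are all assumptions for the sketch.

```python
import numpy as np

def gaussian_kernel_matvec(X_sub, v, sigma=1.0):
    # Dense O(N^2) fallback for K @ v on one feature subset.
    # The paper replaces this step with an NFFT-based fast summation
    # of linear complexity for fixed accuracy.
    sq = np.sum(X_sub**2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * X_sub @ X_sub.T
    return np.exp(-D2 / (2.0 * sigma**2)) @ v

def anova_matvec(X, groups, v):
    # ANOVA-style kernel: sum of kernels acting on
    # lower-dimensional feature groups (feature grouping).
    return sum(gaussian_kernel_matvec(X[:, g], v) for g in groups)

def krr_cg(X, y, groups, lam=1e-2, iters=100, tol=1e-8):
    # Solve (K + lam*I) alpha = y by conjugate gradients,
    # using only matrix-vector products with K.
    alpha = np.zeros_like(y)
    r = y - (anova_matvec(X, groups, alpha) + lam * alpha)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Kp = anova_matvec(X, groups, p) + lam * p
        a = rs / (p @ Kp)
        alpha += a * p
        r -= a * Kp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return alpha
```

Because each group kernel is positive semi-definite, the shifted system is symmetric positive definite, so plain CG converges; swapping the dense matvec for a fast one changes only `gaussian_kernel_matvec`, not the solver.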
Pages: 18