Distributed Continual Learning With CoCoA in High-Dimensional Linear Regression

Cited by: 0
Authors
Hellkvist, Martin [1 ]
Ozcelikkale, Ayca [1 ]
Ahlen, Anders [1 ]
Affiliations
[1] Uppsala Univ, Dept Elect Engn, S-75121 Uppsala, Sweden
Funding
Swedish Research Council;
Keywords
Task analysis; Training; Distributed databases; Distance learning; Computer aided instruction; Data models; Training data; Multi-task networks; networked systems; distributed estimation; adaptation; overparametrization; Neural networks; Algorithms
DOI
10.1109/TSP.2024.3361714
Chinese Library Classification
TM [Electrical engineering]; TN [Electronics and communication technology];
Discipline Codes
0808; 0809;
Abstract
We consider estimation under scenarios where the signals of interest exhibit changing characteristics over time. In particular, we consider the continual learning problem, where different tasks, e.g., data with different distributions, arrive sequentially, and the aim is to perform well on the newly arrived task without performance degradation on the previously seen tasks. In contrast to the continual learning literature, which focuses on the centralized setting, we investigate the problem from a distributed estimation perspective. We consider the well-established distributed learning algorithm CoCoA, which distributes the model parameters and the corresponding features over the network. We provide an exact analytical characterization of the generalization error of CoCoA under continual learning for linear regression in a range of scenarios, where overparameterization is of particular interest. These analytical results characterize how the generalization error depends on the network structure, the task similarity, and the number of tasks, and show how these dependencies are intertwined. In particular, our results show that the generalization error can be significantly reduced by adjusting the network size, where the most favorable network size depends on the task similarity and the number of tasks. We present numerical results verifying the theoretical analysis and illustrate the continual learning performance of CoCoA on a digit classification task.
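The setup the abstract describes, feature-partitioned distributed least squares fitted task by task, can be sketched in NumPy. This is a minimal illustrative sketch, not the authors' implementation: the exact local subproblem solves, the 1/K averaging aggregation, the warm-starting across tasks, and all dimensions and names (`cocoa_fit`, `x_base`, etc.) are assumptions made for illustration.

```python
import numpy as np

def cocoa_fit(A, y, x0, num_nodes, iters=100):
    """CoCoA-style fit of one task: the features (columns of A) are
    partitioned across nodes; in each round every node solves its local
    least-squares subproblem on the shared residual, and the disjoint
    block updates are combined with safe averaging (step 1/num_nodes)."""
    p = A.shape[1]
    blocks = np.array_split(np.arange(p), num_nodes)
    x = x0.copy()
    for _ in range(iters):
        r = y - A @ x                      # residual shared over the network
        for idx in blocks:
            # local solve: min_d ||r - A[:, idx] d||^2 via the pseudoinverse
            x[idx] += np.linalg.pinv(A[:, idx]) @ r / num_nodes
    return x

# Continual learning: tasks arrive one by one; each task is fitted by
# warm-starting from the previous task's estimate, without revisiting
# the data of earlier tasks.
rng = np.random.default_rng(0)
n, p, K = 20, 50, 5                        # overparameterized: p > n
x_base = rng.standard_normal(p)
x_hat = np.zeros(p)
for task in range(3):
    A = rng.standard_normal((n, p))
    # similar tasks: small perturbations of a common parameter vector
    y = A @ (x_base + 0.1 * rng.standard_normal(p))
    x_hat = cocoa_fit(A, y, x_hat, num_nodes=K)
    print(f"task {task}: train residual {np.linalg.norm(y - A @ x_hat):.4f}")
```

Because the blocks are disjoint, applying the averaged local updates sequentially within a round is equivalent to aggregating them; in the overparameterized regime each task can be fit exactly, so the interesting quantity studied in the paper is the generalization error on earlier tasks, which this toy loop does not measure.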
Pages: 1015-1031
Page count: 17
Related Papers
(50 in total)
  • [1] Unified Transfer Learning in High-Dimensional Linear Regression
    Liu, Shuo Shuo
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 238, 2024, 238
  • [2] Robust transfer learning for high-dimensional regression with linear constraints
    Chen, Xuan
    Song, Yunquan
    Wang, Yuanfeng
    JOURNAL OF STATISTICAL COMPUTATION AND SIMULATION, 2024, 94 (11) : 2462 - 2482
  • [3] Sparsity Oriented Importance Learning for High-Dimensional Linear Regression
    Ye, Chenglong
    Yang, Yi
    Yang, Yuhong
    JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION, 2018, 113 (524) : 1797 - 1812
  • [4] Scalable High-Dimensional Multivariate Linear Regression for Feature-Distributed Data
    Huang, Shuo-Chieh
    Tsay, Ruey S.
    JOURNAL OF MACHINE LEARNING RESEARCH, 2024, 25
  • [5] Transfer learning for high-dimensional linear regression via the elastic net
    Meng, Kang
    Gai, Yujie
    Wang, Xiaodi
    Yao, Mei
    Sun, Xiaofei
    KNOWLEDGE-BASED SYSTEMS, 2024, 304
  • [6] Variational Inference in high-dimensional linear regression
    Mukherjee, Sumit
    Sen, Subhabrata
    JOURNAL OF MACHINE LEARNING RESEARCH, 2022, 23
  • [7] ACCURACY ASSESSMENT FOR HIGH-DIMENSIONAL LINEAR REGRESSION
    Cai, T. Tony
    Guo, Zijian
    ANNALS OF STATISTICS, 2018, 46 (04): : 1807 - 1836
  • [8] Prediction in abundant high-dimensional linear regression
    Cook, R. Dennis
    Forzani, Liliana
    Rothman, Adam J.
    ELECTRONIC JOURNAL OF STATISTICS, 2013, 7 : 3059 - 3088
  • [9] Elementary Estimators for High-Dimensional Linear Regression
    Yang, Eunho
    Lozano, Aurelie C.
    Ravikumar, Pradeep
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 32 (CYCLE 2), 2014, 32 : 388 - 396