Tradeoff of generalization error in unsupervised learning

Cited by: 0
Authors
Kim, Gilhan [1 ]
Lee, Hojun [1 ]
Jo, Junghyo [2 ]
Baek, Yongjoo [1 ]
Affiliations
[1] Seoul Natl Univ, Ctr Theoret Phys, Dept Phys & Astron, Seoul 08826, South Korea
[2] Seoul Natl Univ, Dept Phys Educ & Ctr Theoret Phys, Seoul 08826, South Korea
Funding
National Research Foundation of Singapore
Keywords
machine learning; classical phase transitions; stochastic processes; MODEL
DOI
10.1088/1742-5468/ace42c
Chinese Library Classification (CLC)
O3 [Mechanics]
Discipline classification codes
08; 0801
Abstract
Finding the optimal model complexity that minimizes the generalization error (GE) is a key issue of machine learning. For conventional supervised learning, this task typically involves the bias-variance tradeoff: lowering the bias by making the model more complex entails an increase in the variance. Meanwhile, little is known about whether the same tradeoff exists for unsupervised learning. In this study, we propose that unsupervised learning generally exhibits a two-component tradeoff of the GE, namely between the model error (ME) and the data error (DE): using a more complex model reduces the ME at the cost of the DE, with the DE playing a more significant role for a smaller training dataset. This is corroborated by training a restricted Boltzmann machine to generate configurations of the two-dimensional Ising model at a given temperature and of the totally asymmetric simple exclusion process with given entry and exit rates. Our results also indicate that the optimal model tends to be more complex when the data to be learned are more complex.
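To illustrate the setup described in the abstract, the following is a minimal sketch (not the authors' code) of training a restricted Boltzmann machine on 2D Ising configurations drawn from a short Metropolis run. The lattice size, temperature, number of hidden units, learning rate, and CD-1 contrastive-divergence training are all placeholder, standard choices for illustration only.

    # Minimal illustrative sketch: RBM trained with CD-1 on 2D Ising samples.
    # All hyperparameters below are placeholder choices, not those of the paper.
    import numpy as np

    rng = np.random.default_rng(0)
    L, T, n_samples = 8, 2.5, 500          # lattice side, temperature, dataset size

    def sample_ising(L, T, n_samples, burn_in=200):
        """Metropolis sampling of the 2D Ising model (periodic boundaries).

        Successive samples are separated by only one sweep, so they are
        correlated; acceptable for a toy dataset."""
        spins = rng.choice([-1, 1], size=(L, L))
        data = []
        for step in range(burn_in + n_samples):
            for _ in range(L * L):                  # one Monte Carlo sweep
                i, j = rng.integers(L, size=2)
                nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
                      + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
                dE = 2 * spins[i, j] * nb
                if dE <= 0 or rng.random() < np.exp(-dE / T):
                    spins[i, j] *= -1
            if step >= burn_in:
                data.append((spins.flatten() + 1) // 2)   # map {-1,1} -> {0,1}
        return np.array(data, dtype=float)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # RBM whose complexity is controlled by the number of hidden units.
    n_visible, n_hidden, lr, epochs = L * L, 16, 0.05, 50
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    a = np.zeros(n_visible)                # visible biases
    b = np.zeros(n_hidden)                 # hidden biases

    data = sample_ising(L, T, n_samples)
    for epoch in range(epochs):
        for v0 in data:
            ph0 = sigmoid(v0 @ W + b)                    # positive phase
            h0 = (rng.random(n_hidden) < ph0).astype(float)
            pv1 = sigmoid(h0 @ W.T + a)                  # one Gibbs step (CD-1)
            v1 = (rng.random(n_visible) < pv1).astype(float)
            ph1 = sigmoid(v1 @ W + b)
            W += lr * (np.outer(v0, ph0) - np.outer(v1, ph1))
            a += lr * (v0 - v1)
            b += lr * (ph0 - ph1)

In the spirit of the paper's tradeoff, one would vary n_hidden (model complexity) and n_samples (training-set size) and compare how the generated distribution deviates from the true Ising distribution versus from the finite training set; the sketch above only covers the training step.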
Pages: 15