Tradeoff of generalization error in unsupervised learning

Cited: 0
Authors
Kim, Gilhan [1 ]
Lee, Hojun [1 ]
Jo, Junghyo [2 ]
Baek, Yongjoo [1 ]
Affiliations
[1] Seoul Natl Univ, Ctr Theoret Phys, Dept Phys & Astron, Seoul 08826, South Korea
[2] Seoul Natl Univ, Dept Phys Educ & Ctr Theoret Phys, Seoul 08826, South Korea
Funding
National Research Foundation of Singapore
Keywords
machine learning; classical phase transitions; stochastic processes; MODEL;
DOI
10.1088/1742-5468/ace42c
CLC number
O3 [Mechanics]
Subject classification codes
08; 0801
Abstract
Finding the optimal model complexity that minimizes the generalization error (GE) is a key issue in machine learning. For conventional supervised learning, this task typically involves the bias-variance tradeoff: lowering the bias by making the model more complex entails an increase in the variance. Meanwhile, little is known about whether the same tradeoff exists for unsupervised learning. In this study, we propose that unsupervised learning generally exhibits a two-component tradeoff of the GE, between the model error (ME) and the data error (DE): using a more complex model reduces the ME at the cost of the DE, with the DE playing a more significant role for a smaller training dataset. This is corroborated by training a restricted Boltzmann machine to generate the configurations of the two-dimensional Ising model at a given temperature and of the totally asymmetric simple exclusion process with given entry and exit rates. Our results also indicate that the optimal model tends to be more complex when the data to be learned are more complex.
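The abstract describes training a restricted Boltzmann machine (RBM) as the generative model whose hidden-layer size sets the model complexity. As an illustrative sketch only, not the authors' actual setup, the following trains a tiny RBM with one-step contrastive divergence (CD-1) on synthetic binary patterns; the network sizes, learning rate, and toy dataset are all assumptions chosen for demonstration, standing in for the Ising/TASEP configurations used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    """Minimal binary RBM trained with one-step contrastive divergence."""

    def __init__(self, n_visible, n_hidden, lr=0.05):
        # n_hidden plays the role of model complexity: per the abstract,
        # more hidden units lower the model error (ME) but can raise the
        # data error (DE) when the training set is small.
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b = np.zeros(n_visible)   # visible biases
        self.c = np.zeros(n_hidden)    # hidden biases
        self.lr = lr

    def sample_h(self, v):
        p = sigmoid(v @ self.W + self.c)
        return p, (rng.random(p.shape) < p).astype(float)

    def sample_v(self, h):
        p = sigmoid(h @ self.W.T + self.b)
        return p, (rng.random(p.shape) < p).astype(float)

    def cd1_step(self, v0):
        # Positive phase: hidden activations driven by the data.
        ph0, h0 = self.sample_h(v0)
        # Negative phase: one Gibbs reconstruction step.
        pv1, v1 = self.sample_v(h0)
        ph1, _ = self.sample_h(v1)
        # CD-1 gradient estimates, averaged over the batch.
        n = len(v0)
        self.W += self.lr * (v0.T @ ph0 - v1.T @ ph1) / n
        self.b += self.lr * (v0 - v1).mean(axis=0)
        self.c += self.lr * (ph0 - ph1).mean(axis=0)
        # Mean squared reconstruction error as a rough diagnostic.
        return float(np.mean((v0 - pv1) ** 2))

# Toy "dataset": noisy copies of two prototype binary patterns.
proto = np.array([[1, 1, 1, 0, 0, 0], [0, 0, 0, 1, 1, 1]], dtype=float)
data = proto[rng.integers(0, 2, size=200)]
flip = rng.random(data.shape) < 0.05
data[flip] = 1.0 - data[flip]

rbm = RBM(n_visible=6, n_hidden=3)
errors = [rbm.cd1_step(data) for _ in range(300)]
print(f"reconstruction error: {errors[0]:.3f} -> {errors[-1]:.3f}")
```

Measuring the generalization gap in this setting would mean comparing statistics of samples drawn from the trained RBM against held-out data while varying `n_hidden`, which is the kind of experiment the abstract summarizes.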
Pages: 15