Large deviations in the perceptron model and consequences for active learning

Cited by: 0
Authors
Cui H. [1]
Saglietti L. [1]
Zdeborová L. [1]
Affiliations
[1] Institute of Physics, École Polytechnique Fédérale de Lausanne, Lausanne
Funding
EU Horizon 2020; European Research Council;
Keywords
Active learning; Large deviations; Perceptron model;
DOI
10.1088/2632-2153/abfbbb
Abstract
Active learning (AL) is a branch of machine learning that deals with problems where unlabeled data is abundant yet obtaining labels is expensive. The learning algorithm has the possibility of querying a limited number of samples to obtain the corresponding labels, subsequently used for supervised learning. In this work, we consider the task of choosing the subset of samples to be labeled from a fixed finite pool of samples. We assume the pool of samples to be a random matrix and the ground truth labels to be generated by a single-layer teacher random neural network. We employ replica methods to analyze the large deviations for the accuracy achieved after supervised learning on a subset of the original pool. These large deviations then provide optimal achievable performance boundaries for any AL algorithm. We show that the optimal learning performance can be efficiently approached by simple message-passing AL algorithms. We also provide a comparison with the performance of some other popular active learning strategies. © 2021 The Author(s).
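The setting in the abstract (a fixed Gaussian pool, a single-layer teacher generating labels, a limited labeling budget) can be illustrated with a minimal sketch. This is not the paper's method: it uses a simple Hebbian student and greedy margin-based uncertainty sampling as a stand-in for the message-passing strategies the paper analyzes, and all sizes and names below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher_labels(X, w_star):
    # Ground-truth labels from a single-layer teacher perceptron
    return np.sign(X @ w_star)

def generalization_error(w, w_star):
    # For spherical weight vectors, test error = arccos(overlap) / pi
    overlap = w @ w_star / (np.linalg.norm(w) * np.linalg.norm(w_star))
    return np.arccos(np.clip(overlap, -1.0, 1.0)) / np.pi

d, pool_size, budget = 50, 1000, 100          # illustrative sizes
w_star = rng.standard_normal(d)               # teacher weights
X = rng.standard_normal((pool_size, d))       # fixed random i.i.d. pool
y = teacher_labels(X, w_star)                 # labels exist but are hidden until queried

# Baseline: label a random subset, fit a Hebbian student w = sum_i y_i x_i
idx_rand = rng.choice(pool_size, budget, replace=False)
w_rand = (y[idx_rand, None] * X[idx_rand]).sum(axis=0)

# Greedy uncertainty sampling: repeatedly query the pool sample
# closest to the current student's decision boundary
labeled = rng.choice(pool_size, 5, replace=False).tolist()  # small random seed set
w = (y[labeled, None] * X[labeled]).sum(axis=0)
while len(labeled) < budget:
    margins = np.abs(X @ w)
    margins[labeled] = np.inf                 # never re-query a labeled sample
    i = int(np.argmin(margins))
    labeled.append(i)
    w += y[i] * X[i]                          # Hebbian update with the revealed label

err_rand = generalization_error(w_rand, w_star)
err_al = generalization_error(w, w_star)
```

The large-deviation analysis in the paper bounds how well *any* such subset-selection rule can do at a given budget; a heuristic like the margin rule above can then be compared against that optimal achievable curve.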
Related papers
50 in total
  • [1] Large deviations for the perceptron model and consequences for active learning
    Cui, Hugo
    Saglietti, Luca
    Zdeborova, Lenka
    MATHEMATICAL AND SCIENTIFIC MACHINE LEARNING, VOL 107, 2020, 107 : 390 - 430
  • [2] Large deviations of semisupervised learning in the stochastic block model
    Cui, Hugo
    Saglietti, Luca
    Zdeborova, Lenka
    PHYSICAL REVIEW E, 2022, 105 (03)
  • [3] Analysis of perceptron-based active learning
    Dasgupta, Sanjoy
    Kalai, Adam Tauman
    Monteleoni, Claire
    JOURNAL OF MACHINE LEARNING RESEARCH, 2009, 10 : 281 - 299
  • [4] Active Online Learning in the Binary Perceptron Problem
    Zhou, Hai-Jun
    COMMUNICATIONS IN THEORETICAL PHYSICS, 2019, 71 (02) : 243 - 252
  • [5] Analysis of perceptron-based active learning
    Dasgupta, Sanjoy
    Kalai, Adam Tauman
    Monteleoni, Claire
    LEARNING THEORY, PROCEEDINGS, 2005, 3559 : 249 - 263
  • [6] Learning, large deviations and rare events
    Benhabib, Jess
    Dave, Chetan
    REVIEW OF ECONOMIC DYNAMICS, 2014, 17 (03) : 367 - 382
  • [7] Recursive preferences, learning and large deviations
    Dave, Chetan
    Tsang, Kwok Ping
    ECONOMICS LETTERS, 2014, 124 (03) : 329 - 334
  • [8] Collective motion in large deviations of active particles
    Keta, Yann-Edwin
    Fodor, Etienne
    van Wijland, Frederic
    Cates, Michael E.
    Jack, Robert L.
    PHYSICAL REVIEW E, 2021, 103 (02)