Attribute-Efficient Learning of Halfspaces with Malicious Noise: Near-Optimal Label Complexity and Noise Tolerance

Cited: 0
Authors
Shen, Jie [1 ]
Zhang, Chicheng [2 ]
Affiliations
[1] Stevens Inst Technol, Hoboken, NJ 07030 USA
[2] Univ Arizona, Tucson, AZ USA
Keywords
halfspaces; malicious noise; passive and active learning; attribute efficiency; regression; perceptron; selection; bounds; rates
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
This paper is concerned with computationally efficient learning of homogeneous sparse halfspaces in ℝᵈ under noise. Although recent works have established attribute-efficient learning algorithms under various types of label noise (e.g. bounded noise), it remains an open question when and how s-sparse halfspaces can be efficiently learned under the challenging malicious noise model, where an adversary may corrupt both the unlabeled examples and the labels. We answer this question in the affirmative by designing a computationally efficient active learning algorithm with near-optimal label complexity Õ(s log⁴(d)/ε) and noise tolerance η = Ω(ε), where ε ∈ (0, 1) is the target error rate, under the assumption that the distribution over the (uncorrupted) unlabeled examples is isotropic log-concave. Our algorithm can be straightforwardly tailored to the passive learning setting, and we show that its sample complexity is Õ((1/ε) s² log⁵(d)), which also enjoys attribute efficiency. Our main techniques include attribute-efficient paradigms for soft outlier removal and for empirical risk minimization, and a new analysis of uniform concentration for unbounded instances; all of them crucially take the sparsity structure of the underlying halfspace into account.
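For illustration only, the following Python sketch shows the general shape of a margin-based active learner of the kind the abstract describes: sample unlabeled points in a shrinking band around the current halfspace, down-weight suspicious (possibly corrupted) points, query their labels, and run a weighted ERM step followed by hard thresholding so that at most s attributes are retained. The helpers sample_band, query_label, soft_outlier_weights, and hard_threshold, together with the band schedule and step sizes, are hypothetical simplifications for intuition; they are not the authors' exact procedure and carry none of the paper's guarantees.

# A minimal, illustrative sketch (not the paper's algorithm): margin-based
# active learning of an s-sparse halfspace with a crude soft outlier
# reweighting and a hard-thresholded, weighted hinge-loss ERM step.
import numpy as np

def hard_threshold(w, s):
    # Keep only the s largest-magnitude coordinates of w (attribute efficiency).
    out = np.zeros_like(w)
    keep = np.argsort(np.abs(w))[-s:]
    out[keep] = w[keep]
    return out

def soft_outlier_weights(X, w, b):
    # Down-weight points whose projection onto w is implausibly large for the
    # current band width b; a crude stand-in for soft outlier removal.
    margins = np.abs(X @ w)
    return np.minimum(1.0, (b / np.maximum(margins, 1e-12)) ** 2)

def sparse_halfspace_active_learner(sample_band, query_label, d, s,
                                    rounds=5, m=2000, b0=1.0, lr=0.1):
    w = np.zeros(d)
    w[0] = 1.0                              # arbitrary unit-norm initializer
    for k in range(rounds):
        b = b0 / (2 ** k)                   # shrink the sampling band each round
        X = sample_band(w, b, m)            # unlabeled points near the margin
        q = soft_outlier_weights(X, w, b)   # soft outlier-removal weights
        y = np.array([query_label(x) for x in X])   # label queries
        for _ in range(200):                # weighted hinge-loss minimization
            active = (X @ w) * y < 1.0
            if active.any():
                grad = -(X[active] * (q[active] * y[active])[:, None]).mean(axis=0)
            else:
                grad = np.zeros(d)
            w = hard_threshold(w - lr * grad, s)
            w /= max(np.linalg.norm(w), 1e-12)
    return w

# Hypothetical usage on synthetic data: isotropic Gaussian marginal, a
# 3-sparse target, and a small fraction of adversarial-style label flips.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, s = 100, 3
    w_star = np.zeros(d)
    w_star[:s] = 1.0 / np.sqrt(s)

    def sample_band(w, b, m):
        pts = []
        while len(pts) < m:                 # rejection-sample points with small margin
            X = rng.standard_normal((4 * m, d))
            X = X[np.abs(X @ w) <= b]
            pts.extend(X[: m - len(pts)])
        return np.array(pts)

    def query_label(x):
        y = float(np.sign(x @ w_star)) or 1.0
        return -y if rng.random() < 0.05 else y   # 5% corrupted labels

    w_hat = sparse_halfspace_active_learner(sample_band, query_label, d, s)
    print("recovered support:", np.nonzero(w_hat)[0])

In this sketch the sparsity of the halfspace enters only through the hard-thresholding step; keeping s coordinates per iteration is the simplest way to reflect the attribute efficiency the abstract emphasizes, i.e. a dependence on s and log d rather than on d.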
Pages: 42
Related Papers
50 records in total
  • [1] Near-Optimal Bounds for Learning Gaussian Halfspaces with Random Classification Noise
    Diakonikolas, Ilias
    Diakonikolas, Jelena
    Kane, Daniel M.
    Wang, Puqian
    Zarifis, Nikos
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [2] Learning Halfspaces with Malicious Noise
    Klivans, Adam R.
    Long, Philip M.
    Servedio, Rocco A.
    [J]. JOURNAL OF MACHINE LEARNING RESEARCH, 2009, 10 : 2715 - 2740
  • [3] Learning Halfspaces with Malicious Noise
    Klivans, Adam R.
    Long, Philip M.
    Servedio, Rocco A.
    [J]. AUTOMATA, LANGUAGES AND PROGRAMMING, PT I, 2009, 5555 : 609+
  • [4] Sample-Optimal PAC Learning of Halfspaces with Malicious Noise
    Shen, Jie
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [5] Efficient Testable Learning of Halfspaces with Adversarial Label Noise
    Diakonikolas, Ilias
    Kane, Daniel M.
    Kontonis, Vasilis
    Liu, Sihan
    Zarifis, Nikos
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [6] Computational sample complexity and attribute-efficient learning
    Servedio, Rocco A.
    [J]. JOURNAL OF COMPUTER AND SYSTEM SCIENCES, 2000, 60 (01) : 161 - 178
  • [7] On the Power of Localized Perceptron for Label-Optimal Learning of Halfspaces with Adversarial Noise
    Shen, Jie
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 139, 2021, 139
  • [8] The Complexity of Adversarially Robust Proper Learning of Halfspaces with Agnostic Noise
    Diakonikolas, Ilias
    Kane, Daniel M.
    Manurangsi, Pasin
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [9] Optimal and Near-Optimal Detection in Bursty Impulsive Noise
    Mahmood, Ahmed
    Chitre, Mandar
    [J]. IEEE JOURNAL OF OCEANIC ENGINEERING, 2017, 42 (03) : 639 - 653