Incremental learning from positive data

Cited: 69
Authors
Lange, S [1 ]
Zeugmann, T [1 ]
Affiliation
[1] KYUSHU UNIV 33, DEPT INFORMAT, HIGASHI KU, FUKUOKA 812, JAPAN
DOI
10.1006/jcss.1996.0051
CLC Classification Number
TP3 [Computing technology, computer technology]
Discipline Code
0812
Abstract
The present paper deals with a systematic study of incremental learning algorithms. The general scenario is as follows. Let c be any concept; then every infinite sequence of elements exhausting c is called a positive presentation of c. An algorithmic learner successively takes as input one element of a positive presentation at a time, together with its previously made hypothesis, and outputs a new hypothesis about the target concept. The sequence of hypotheses has to converge to a hypothesis correctly describing the concept to be learned. This basic scenario is referred to as iterative learning. Iterative inference can be refined by allowing the learner to store an a priori bounded number of carefully chosen examples, resulting in bounded example memory inference. Additionally, feed-back identification is introduced. Here, the learner is enabled to ask whether or not a particular element has already appeared in the data provided so far. Our results are threefold. First, the learning capabilities of the various models of incremental learning are related to previously studied learning models. It is proved that incremental learning can always be simulated by inference devices that are both set-driven and conservative. Second, feed-back learning is shown to be more powerful than iterative inference, and its learning power is incomparable to that of bounded example memory inference, which itself extends that of iterative learning, too. In particular, the learning power of bounded example memory inference always increases if the number of examples the learner is allowed to store is incremented. Third, a sufficient condition for iterative inference allowing non-enumerative learning is provided. The results obtained provide strong evidence that there is no unique way to design superior incremental learning algorithms. Instead, incremental learning is the art of knowing what to overlook. (C) 1996 Academic Press, Inc.
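The iterative-learning scenario described in the abstract, in which each new hypothesis is computed from the previous hypothesis and the current datum alone, with no memory of earlier examples, can be illustrated by a toy sketch (not taken from the paper): learning the concept "multiples of k" from a positive presentation. Here the gcd of the data seen so far happens to be maintainable incrementally, so the update qualifies as an iterative learner.

```python
from math import gcd

def iterative_learner(hypothesis, example):
    """One step of an iterative learner: the new hypothesis depends only on
    the previous hypothesis and the current datum (no example memory)."""
    if hypothesis is None:          # first datum of the presentation
        return example
    return gcd(hypothesis, example)

# Positive presentation of the concept "multiples of 6"
presentation = [12, 30, 18, 6, 24]
h = None
for x in presentation:
    h = iterative_learner(h, x)
# The hypothesis sequence converges to 6 and never changes afterwards
```

Concepts such as pattern languages, which the authors study elsewhere, need subtler updates; this sketch only shows the interface an iterative learner is restricted to.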
Pages: 88-103
Page count: 16
Related Papers
(50 in total)
  • [1] Incremental learning of approximations from positive data
    Grieser, G
    Lange, S
    [J]. INFORMATION PROCESSING LETTERS, 2004, 89 (01) : 37 - 42
  • [2] Incremental learning from unbalanced data
    Muhlbaier, M
    Topalis, A
    Polikar, R
    [J]. 2004 IEEE INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-4, PROCEEDINGS, 2004, : 1057 - 1062
  • [3] Incremental Learning from Stream Data
    He, Haibo
    Chen, Sheng
    Li, Kang
    Xu, Xin
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS, 2011, 22 (12): : 1901 - 1914
  • [4] Learning from positive data
    Muggleton, S
    [J]. INDUCTIVE LOGIC PROGRAMMING, 1997, 1314 : 358 - 376
  • [5] Incremental learning from chunk data for IDR/QR
    Lu, Gui-Fu
    Zou, Jian
    Wang, Yong
    [J]. IMAGE AND VISION COMPUTING, 2015, 36 : 1 - 8
  • [6] Incremental Learning of New Classes from Unbalanced Data
    Ditzler, Gregory
    Rosen, Gail
    Polikar, Robi
    [J]. 2013 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2013,
  • [7] Towards a Map for Incremental Learning in the Limit from Positive and Negative Information
    Khazraei, Ardalan
    Koetzing, Timo
    Seidel, Karen
    [J]. CONNECTING WITH COMPUTABILITY, 2021, 12813 : 273 - 284
  • [8] Deep Class-Incremental Learning From Decentralized Data
    Zhang, Xiaohan
    Dong, Songlin
    Chen, Jinjie
    Tian, Qi
    Gong, Yihong
    Hong, Xiaopeng
    [J]. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2024, 35 (05) : 7190 - 7203
  • [9] Incremental Learning of Concept Drift from Streaming Imbalanced Data
    Ditzler, Gregory
    Polikar, Robi
    [J]. IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING, 2013, 25 (10) : 2283 - 2301
  • [10] Learning from Positive and Unlabeled Data with Arbitrary Positive Shift
    Hammoudeh, Zayd
    Lowd, Daniel
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33