Optimistic Active Learning using Mutual Information

Cited by: 0
Authors:
Guo, Yuhong [1]
Greiner, Russ [1]
Affiliation:
[1] Univ Alberta, Dept Comp Sci, Edmonton, AB T6G 2E8, Canada
Keywords: none listed
DOI: not available
CLC number (Chinese Library Classification):
TP18 [Artificial intelligence theory];
Discipline classification codes:
081104; 0812; 0835; 1405;
Abstract
An "active learning system" will sequentially decide which unlabeled instance to label, with the goal of efficiently gathering the information necessary to produce a good classifier. Some such systems greedily select the next instance based only on properties of that instance and the few currently labeled points - e.g., selecting the one closest to the current classification boundary. Unfortunately, these approaches ignore the valuable information contained in the other unlabeled instances, which can help identify a good classifier much faster. For the previous approaches that do exploit this unlabeled data, this information is mostly used in a conservative way. One common property of the approaches in the literature is that the active learner sticks to one single query selection criterion in the whole process. We propose a system, MM+M, that selects the query instance that is able to provide the maximum conditional mutual information about the labels of the unlabeled instances, given the labeled data, in an optimistic way. This approach implicitly exploits the discriminative partition information contained in the unlabeled data. Instead of using one selection criterion, MM+M also employs a simple on-line method that changes its selection rule when it encounters an "unexpected label". Our empirical results demonstrate that this new approach works effectively.
Pages: 823-829
Page count: 7
Related papers (50 total):
  • [31] Ma, Xiao; Kang, Bingyi; Xu, Zhongwen; Lin, Min; Yan, Shuicheng. Mutual Information Regularized Offline Reinforcement Learning. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023.
  • [32] Xu, DX; Principe, JC. Learning from examples with quadratic mutual information. Neural Networks for Signal Processing VIII, 1998: 155-164.
  • [33] Hong, Yinghan; Mai, Guizhen; Liu, Zhusong. Learning tree network based on mutual information. Metallurgical and Mining Industry, 2015, 7(12): 146-154.
  • [34] He, Dongxiao; Zhai, Lu; Li, Zhigang; Jin, Di; Yang, Liang; Huang, Yuxiao; Yu, Philip S. Adversarial Mutual Information Learning for Network Embedding. Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, 2020: 3321-3327.
  • [35] Liu, Ji; Li, Zenan; Yao, Yuan; Xu, Feng; Ma, Xiaoxing; Xu, Miao; Tong, Hanghang. Fair Representation Learning: An Alternative to Mutual Information. Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining, KDD 2022, 2022: 1088-1097.
  • [36] Smith, R. E.; Jiang, Max Kun. MILCS: A Mutual Information Learning Classifier System. GECCO 2007: Genetic and Evolutionary Computation Conference, Vols 1 and 2, 2007: 1877.
  • [37] Kumatani, Kenichi; Mayer, Uwe; Gehrig, Tobias; Stoimenov, Emilian; McDonough, John; Woelfel, Matthias. Minimum mutual information beamforming for simultaneous active speakers. 2007 IEEE Workshop on Automatic Speech Recognition and Understanding, Vols 1 and 2, 2007: 71+.
  • [38] Hudson, John E. Signal processing using mutual information. IEEE Signal Processing Magazine, 2006, 23(6): 50+.
  • [39] Kraskov, A; Stögbauer, H; Andrzejak, RG; Grassberger, P. Hierarchical clustering using mutual information. Europhysics Letters, 2005, 70(2): 278-284.
  • [40] Cho, Jaewoong; Hwang, Gyeongjo; Suh, Changho. A Fair Classifier Using Mutual Information. 2020 IEEE International Symposium on Information Theory (ISIT), 2020: 2521-2526.