Improving Multiple-Instance Learning via Disambiguation by Considering Generalization

Cited: 0
Authors
Zhao, Lu [1 ]
Yu, Youjian [1 ]
Chen, Hao [1 ]
Yuan, Liming [2 ]
Affiliations
[1] Tianjin Chengjian Univ, Tianjin 300384, Peoples R China
[2] Tianjin Univ Technol, Tianjin 300384, Peoples R China
Keywords
Multiple-instance learning; Disambiguation; Generalization ability;
DOI
10.1007/978-3-319-90802-1_37
CLC Classification Number
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
Multiple-instance learning (MIL) is a variant of traditional supervised learning. In MIL, training examples are bags of instances, and labels are associated with bags rather than with individual instances. The standard MIL assumption states that a bag is labeled positive if at least one of its instances is positive, and negative otherwise. However, many MIL problems do not satisfy this assumption but rather the more general one that the class of a bag is jointly determined by multiple instances of the bag. To solve such problems, the authors of MILD proposed an efficient disambiguation method to identify the most discriminative instances in training bags and then converted MIL into standard supervised learning. Nevertheless, MILD does not consider the generalization ability of its disambiguation method, leading to inferior performance compared to other baselines. In this paper, we try to improve the performance of MILD by considering the discrimination of its disambiguation method on the validation set. We have performed extensive experiments on the drug activity prediction and region-based image categorization tasks. The experimental results demonstrate that MILD outperforms other similar MIL algorithms when the generalization capability of its disambiguation method is taken into account.
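The standard MIL assumption described in the abstract can be sketched in a few lines. This is an illustrative example, not code from the paper: the function name `bag_label` and the toy bags are assumptions for demonstration only.

```python
# Illustrative sketch of the standard MIL assumption: a bag is labeled
# positive iff at least one of its instances is positive, negative otherwise.
def bag_label(instance_labels):
    """instance_labels: iterable of 0/1 labels for the instances in one bag."""
    return int(any(instance_labels))

# Two hypothetical bags of instance labels.
bags = {
    "bag_a": [0, 0, 1],  # contains one positive instance -> positive bag
    "bag_b": [0, 0, 0],  # all instances negative -> negative bag
}
labels = {name: bag_label(instances) for name, instances in bags.items()}
```

Note that the paper targets the more general setting, where the bag label is determined jointly by several instances rather than by a single positive one, so this rule is only the baseline assumption that MILD relaxes.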
Pages: 419 - 429
Page count: 11
Related Papers
50 records in total
  • [31] Drug activity prediction using multiple-instance learning via joint instance and feature selection
    Zhendong Zhao
    Gang Fu
    Sheng Liu
    Khaled M Elokely
    Robert J Doerksen
    Yixin Chen
    Dawn E Wilkins
    BMC Bioinformatics, 14
  • [32] An Iterative Instance Selection Based Framework for Multiple-Instance Learning
    Yuan, Liming
    Wen, Xianbin
    Zhao, Lu
    Xu, Haixia
    2018 IEEE 30TH INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE (ICTAI), 2018, : 772 - 779
  • [33] Multiple-instance ensemble learning for hyperspectral images
    Ergul, Ugur
    Bilgin, Gokhan
    JOURNAL OF APPLIED REMOTE SENSING, 2017, 11
  • [34] A Note on Learning from Multiple-Instance Examples
    Avrim Blum
    Adam Kalai
    Machine Learning, 1998, 30 : 23 - 29
  • [35] Multiple-instance learning as a classifier combining problem
    Li, Yan
    Tax, David M. J.
    Duin, Robert P. W.
    Loog, Marco
    PATTERN RECOGNITION, 2013, 46 (03) : 865 - 874
  • [36] Multiple-Instance Active Learning for Image Categorization
    Liu, Dong
    Hua, Xian-Sheng
    Yang, Linjun
    Zhang, Hong-Jiang
    ADVANCES IN MULTIMEDIA MODELING, PROCEEDINGS, 2009, 5371 : 239 - +
  • [37] Multiple-Instance Learning via an RBF Kernel-Based Extreme Learning Machine
    Wang J.
    Cai L.
    Zhao X.
    Walter de Gruyter GmbH, (26): 185 - 195
  • [39] MIForests: Multiple-Instance Learning with Randomized Trees
    Leistner, Christian
    Saffari, Amir
    Bischof, Horst
    COMPUTER VISION - ECCV 2010, PT VI, 2010, 6316 : 29 - 42
  • [40] An extended kernel for generalized multiple-instance learning
    Tao, QP
    Scott, S
    Vinodchandran, NV
    Osugi, TT
    Mueller, B
    ICTAI 2004: 16TH IEEE INTERNATIONALCONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE, PROCEEDINGS, 2004, : 272 - 277