Multiple-Instance Learning Approach via Bayesian Extreme Learning Machine

Cited by: 3
Authors
Wang, Peipei [1 ]
Zheng, Xinqi [1 ,2 ]
Ku, Junhua [3 ]
Wang, Chunning [4 ]
Affiliations
[1] China Univ Geosci, Sch Informat Engn, Beijing 100083, Peoples R China
[2] MNR China, Technol Innovat Ctr Terr Spatial Big Data, Beijing 100036, Peoples R China
[3] Yibin Univ, Sch Math, Yibin 644000, Peoples R China
[4] Natl Geol Lib China, Beijing 100083, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Multiple-instance learning; Bayesian extreme learning machine; instance selection; classification; NETWORK; CLASSIFICATION; PREDICTION;
DOI
10.1109/ACCESS.2020.2984271
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
Multiple-instance learning (MIL) solves supervised learning tasks in which a label is attached to a bag of instances rather than to each individual instance. Developing effective and efficient MIL algorithms is important because real-world datasets usually contain large numbers of instances. Known for its good generalization performance, MIL based on the extreme learning machine (ELM-MIL) has proven more efficient than several typical MIL classification methods. ELM-MIL selects the most qualified instance from each bag through a single-hidden-layer feedforward network (SLFN) and trains modified ELM models to update the output weights. However, this approach is sensitive to the number of hidden nodes and can easily suffer from over-fitting. Using Bayesian inference, this study introduces a Bayesian ELM-based MIL algorithm (BELM-MIL) to address MIL classification problems. First, a weight self-learning method based on a Bayesian network is applied to determine the weights of instance features; the most qualified instance is then selected from each bag to represent the bag. Second, BELM improves the classification model through regularization with automatically estimated hyperparameters, reducing possible over-fitting during the calibration process. Experiments and comparisons are conducted against several competing algorithms on Musk datasets, image datasets, and inductive logic programming datasets. BELM-MIL demonstrates superior classification accuracy and performance.
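The two-stage idea in the abstract — select one representative instance per bag, then fit an ELM whose output weights are regularized by Bayesian estimation — can be sketched as follows. This is a minimal illustration, not the paper's exact algorithm: the function names are hypothetical, the feature-weighting step is reduced to a simple weighted score (the paper uses a Bayesian network), and the hyperparameter updates follow the standard evidence-approximation fixed point for Bayesian ridge regression.

```python
import numpy as np

rng = np.random.default_rng(0)

def select_representative(bag, feature_weights):
    """Pick the instance in a bag with the highest weighted feature score
    (a simplified stand-in for the paper's Bayesian-network weighting)."""
    scores = bag @ feature_weights
    return bag[np.argmax(scores)]

def belm_train(X, y, n_hidden=30, n_iter=20):
    """ELM with a random hidden layer; output weights fitted by Bayesian
    ridge regression. alpha (weight precision) and beta (noise precision)
    are re-estimated by evidence fixed-point updates, so the amount of
    regularization is learned from the data rather than hand-tuned."""
    d = X.shape[1]
    W = rng.standard_normal((d, n_hidden))   # random input weights (fixed)
    b = rng.standard_normal(n_hidden)        # random biases (fixed)
    H = np.tanh(X @ W + b)                   # hidden-layer activations
    alpha, beta = 1.0, 1.0
    eye = np.eye(n_hidden)
    for _ in range(n_iter):
        A = alpha * eye + beta * H.T @ H     # posterior precision matrix
        w_out = beta * np.linalg.solve(A, H.T @ y)   # posterior mean
        gamma = n_hidden - alpha * np.trace(np.linalg.inv(A))
        alpha = gamma / (w_out @ w_out + 1e-12)
        resid = y - H @ w_out
        beta = (len(y) - gamma) / (resid @ resid + 1e-12)
    return W, b, w_out

def belm_predict(model, X):
    W, b, w_out = model
    return np.sign(np.tanh(X @ W + b) @ w_out)
```

In use, each bag would first be collapsed to its representative instance with `select_representative`, and the resulting instance-level matrix fed to `belm_train`; the automatic re-estimation of `alpha` is what replaces the manually chosen ridge penalty of a regularized ELM.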
Pages: 62458-62470 (13 pages)
Related Papers
50 items total
  • [31] A Nonparametric Bayesian Approach to Multiple Instance Learning
    Manandhar, Achut
    Morton, Kenneth D., Jr.
    Collins, Leslie M.
    Torrione, Peter A.
    INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2015, 29 (03)
  • [32] A Novel Multiple Instance Learning Method Based on Extreme Learning Machine
    Wang, Jie
    Cai, Liangjian
    Peng, Jinzhu
    Jia, Yuheng
    COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE, 2015, 2015
  • [33] Max-margin Multiple-Instance Learning via Semidefinite Programming
    Guo, Yuhong
    ADVANCES IN MACHINE LEARNING, PROCEEDINGS, 2009, 5828 : 98 - 108
  • [34] Multiple-instance ensemble learning for hyperspectral images
    Ergul, Ugur
    Bilgin, Gokhan
    JOURNAL OF APPLIED REMOTE SENSING, 2017, 11
  • [35] A Note on Learning from Multiple-Instance Examples
    Blum, Avrim
    Kalai, Adam
    Machine Learning, 1998, 30 : 23 - 29
  • [36] Multiple-instance learning as a classifier combining problem
    Li, Yan
    Tax, David M. J.
    Duin, Robert P. W.
    Loog, Marco
    PATTERN RECOGNITION, 2013, 46 (03) : 865 - 874
  • [37] Multiple-Instance Learning with Empirical Estimation Guided Instance Selection
    Yuan, Liming
    Wen, Xianbin
    Xu, Haixia
    Zhao, Lu
    2018 24TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2018, : 770 - 775
  • [38] Multiple-Instance Active Learning for Image Categorization
    Liu, Dong
    Hua, Xian-Sheng
    Yang, Linjun
    Zhang, Hong-Jiang
    ADVANCES IN MULTIMEDIA MODELING, PROCEEDINGS, 2009, 5371 : 239 - +
  • [39] A note on learning from multiple-instance examples
    Blum, A
    Kalai, A
    MACHINE LEARNING, 1998, 30 (01) : 23 - 29
  • [40] MIForests: Multiple-Instance Learning with Randomized Trees
    Leistner, Christian
    Saffari, Amir
    Bischof, Horst
    COMPUTER VISION - ECCV 2010, PT VI, 2010, 6316 : 29 - 42