Uncertainty Sampling Methods for Selecting Datasets in Active Meta-Learning

Cited by: 0
Authors
Prudencio, Ricardo B. C. [1 ]
Soares, Carlos [2 ]
Ludermir, Teresa B. [1 ]
Affiliations
[1] Univ Fed Pernambuco, Ctr Informat, BR-50732970 Recife, PE, Brazil
[2] Univ Porto, Fac Econ, LIAAD INESC Porto L A, Oporto 4050190, Portugal
Keywords
ALGORITHM SELECTION;
DOI
N/A
CLC Classification
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Several meta-learning approaches have been developed for the problem of algorithm selection. In this context, it is of central importance to collect a sufficient number of datasets to be used as meta-examples in order to obtain reliable results. Recently, some proposals to generate datasets have addressed this issue with successful results. These include datasetoids, new datasets obtained from existing ones by a simple manipulation method. However, the increase in the number of datasets raises another issue: to generate meta-examples for training, the performance of the algorithms on the datasets must be estimated. This typically requires running all candidate algorithms on all datasets, which is computationally very expensive. In a recent paper, active meta-learning was used to address this problem, employing an uncertainty sampling method for the k-NN algorithm with a least-confidence score based on a distance measure. Here we extend that work by investigating three hypotheses: 1) is there an advantage in using a frequency-based least-confidence score over the distance-based score? 2) given that the meta-learning problem used has three classes, is it better to use a margin-based score? and 3) given that datasetoids are expected to contain some noise, are better results achieved by starting the search with all datasets already labeled? Some of the results obtained are unexpected and should be analyzed further. Nevertheless, they confirm that active meta-learning can significantly reduce the computational cost of meta-learning, with potential gains in accuracy.
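The uncertainty scores compared in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: the distance-based class estimate is assumed here to use inverse-distance weighting of the k nearest neighbors, and all function names are hypothetical.

```python
import numpy as np

def knn_class_probs(distances, labels, k=3, n_classes=3):
    """Estimate class probabilities for one unlabeled example from its
    k nearest labeled neighbors, in two assumed ways:
      - frequency-based: fraction of the k neighbors in each class
      - distance-based: inverse-distance weights, normalized over classes
    """
    nearest = np.argsort(distances)[:k]
    freq = np.bincount(labels[nearest], minlength=n_classes) / k
    w = 1.0 / (distances[nearest] + 1e-9)  # avoid division by zero
    dist = np.bincount(labels[nearest], weights=w, minlength=n_classes)
    return freq, dist / dist.sum()

def least_confidence(p):
    """Uncertainty = 1 minus the probability of the most likely class."""
    return 1.0 - p.max()

def margin_score(p):
    """Gap between the two most likely classes; a smaller gap means
    higher uncertainty, useful when there are more than two classes."""
    top2 = np.sort(p)[-2:]
    return top2[1] - top2[0]
```

In active meta-learning, such scores would rank the unlabeled datasets (or datasetoids) so that only the most uncertain ones are labeled, i.e. only on those are all candidate algorithms actually run.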
Pages: 1082-1089
Page count: 8
Related Papers
50 items in total
  • [41] Submodular Meta-Learning
    Adibi, Arman
    Mokhtari, Aryan
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [42] Online Meta-Learning
    Finn, Chelsea
    Rajeswaran, Aravind
    Kakade, Sham
    Levine, Sergey
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019, 97
  • [43] Meta-learning with backpropagation
    Younger, AS
    Hochreiter, S
    Conwell, PR
    IJCNN'01: INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, VOLS 1-4, PROCEEDINGS, 2001, : 2001 - 2006
  • [44] Competitive Meta-Learning
    Weng, Boxi
    Sun, Jian
    Huang, Gao
    Deng, Fang
    Wang, Gang
    Chen, Jie
    IEEE/CAA Journal of Automatica Sinica, 2023, 10 (09) : 1902 - 1904
  • [45] Weighted Voting and Meta-Learning for Combining Authorship Attribution Methods
    Petrovic, Smiljana
    Petrovic, Ivan
    Palesi, Ileana
    Calise, Anthony
    INTELLIGENT DATA ENGINEERING AND AUTOMATED LEARNING - IDEAL 2018, PT I, 2018, 11314 : 328 - 335
  • [46] Active learning and data manipulation techniques for generating training examples in meta-learning
    Sousa, Arthur F. M.
    Prudencio, Ricardo B. C.
    Ludermir, Teresa B.
    Soares, Carlos
    NEUROCOMPUTING, 2016, 194 : 45 - 55
  • [47] Predicting Scores of Medical Imaging Segmentation Methods with Meta-learning
    van Sonsbeek, Tom
    Cheplygina, Veronika
    INTERPRETABLE AND ANNOTATION-EFFICIENT LEARNING FOR MEDICAL IMAGE COMPUTING, IMIMIC 2020, MIL3ID 2020, LABELS 2020, 2020, 12446 : 242 - 253
  • [48] Selecting meta-heuristics for solving vehicle routing problems with time windows via meta-learning
    Gutierrez-Rodriguez, Andres E.
    Conant-Pablos, Santiago E.
    Ortiz-Bayliss, Jose C.
    Terashima-Marin, Hugo
    EXPERT SYSTEMS WITH APPLICATIONS, 2019, 118 : 470 - 481
  • [50] Evidence-based uncertainty sampling for active learning
    Sharma, Manali
    Bilgic, Mustafa
    Data Mining and Knowledge Discovery, 2017, 31 : 164 - 202