Evaluating the performance of bagging-based k-nearest neighbor ensemble with the voting rule selection method

Cited by: 4
Authors
Suchithra, M. S. [1 ]
Pai, Maya L. [1 ]
Institutions
[1] Amrita Vishwa Vidyapeetham, Amrita Sch Arts & Sci, Dept Comp Sci & IT, Kochi, Kerala, India
Keywords
Label ranking; Ensembles; K-nearest neighbor label ranker; Rank aggregation; VRS; LABEL RANKING; OPTIMIZATION ALGORITHM;
DOI
10.1007/s11042-022-12716-3
CLC classification
TP [Automation technology; Computer technology];
Discipline code
0812 ;
Abstract
Label ranking prediction problems learn a mapping from instances to rankings over a fixed set of labels. A common way to improve the prediction performance of label ranking models is to use ensembles, whose defining feature is the ability to collect the outcomes of many simple base models and combine them into a single aggregated prediction. To obtain this ensemble property, nearest neighbor estimation is applied within a bagging scheme that resamples the instances of the training data. The case for this label ranking ensemble approach is made through a detailed analysis of suitable existing algorithms. The results show that the parameter settings used in the k-nearest neighbor label ranker yield better prediction performance than existing label ranking ensembles, with accuracies of 85% to 99% on 21 label ranking datasets. Any such ensemble can, however, be improved further by selecting the voting rule used to aggregate the base rankings. Integrating the Voting Rule Selector (VRS) algorithm and seven commonly used voting rules with the k-nearest neighbor label ranker, this study finds that VRS and Copeland aggregation work more efficiently than Borda aggregation in dataset-level learning. The k-nearest neighbor label ranker with VRS or Copeland aggregation ranks first on most of the datasets. At the dataset level, the VRS method obtains an average improvement of 48.02% over the simple k-nearest neighbor model, almost equal to Copeland's 47.84%.
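The abstract compares Borda and Copeland as rules for aggregating the rankings predicted by the bagged base rankers. The sketch below is a minimal illustration of those two voting rules, not the authors' implementation; the example label names and votes are invented for demonstration.

```python
# Minimal sketch of two rank-aggregation voting rules named in the abstract.
# Each input ranking is a tuple of labels ordered from most to least preferred.
from itertools import combinations

def borda(rankings):
    """Borda count: a label in position i of an n-label ranking earns
    n - 1 - i points; labels are sorted by their total points."""
    n = len(rankings[0])
    scores = {label: 0 for label in rankings[0]}
    for r in rankings:
        for pos, label in enumerate(r):
            scores[label] += n - 1 - pos
    return sorted(scores, key=lambda lbl: -scores[lbl])

def copeland(rankings):
    """Copeland's rule: a label scores +1 for each pairwise majority win
    and -1 for each loss; labels are sorted by that score."""
    labels = list(rankings[0])
    scores = {label: 0 for label in labels}
    for a, b in combinations(labels, 2):
        a_wins = sum(1 for r in rankings if r.index(a) < r.index(b))
        b_wins = len(rankings) - a_wins
        if a_wins > b_wins:
            scores[a] += 1
            scores[b] -= 1
        elif b_wins > a_wins:
            scores[b] += 1
            scores[a] -= 1
    return sorted(scores, key=lambda lbl: -scores[lbl])

# Hypothetical predictions from three bagged k-NN label rankers:
votes = [("a", "b", "c"), ("a", "c", "b"), ("b", "a", "c")]
print(borda(votes))     # Borda points a=5, b=3, c=1 -> ['a', 'b', 'c']
print(copeland(votes))  # Copeland scores a=2, b=0, c=-2 -> ['a', 'b', 'c']
```

The two rules can disagree on other vote profiles, which is why a selector such as VRS, which picks the rule per dataset, can outperform committing to either one.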
Pages: 20741 - 20762
Page count: 22
Related Papers
50 records in total
  • [41] Locally determining the number of neighbors in the k-nearest neighbor rule based on statistical confidence
    Wang, JG
    Neskovic, P
    Cooper, LN
    ADVANCES IN NATURAL COMPUTATION, PT 1, PROCEEDINGS, 2005, 3610 : 71 - 80
  • [42] Simple termination conditions for k-nearest neighbor method
    Kudo, M
    Masuyama, N
    Toyama, J
    Shimbo, M
    PATTERN RECOGNITION LETTERS, 2003, 24 (9-10) : 1203 - 1213
  • [43] A k-nearest neighbor classification rule based on Dempster-Shafer theory
    Denoeux, T.
    IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS, 1995, 25 (05) : 804 - 813
  • [44] Style linear k-nearest neighbor classification method
    Zhang, Jin
    Bian, Zekang
    Wang, Shitong
    APPLIED SOFT COMPUTING, 2024, 150
  • [45] A dynamic density-based clustering method based on K-nearest neighbor
    Asghari Sorkhi, Mahshid
    Akbari, Ebrahim
    Rabbani, Mohsen
    Motameni, Homayun
    Knowledge and Information Systems, 2024, 66 : 3005 - 3031
  • [46] Modeling River Ice Breakup Dates by k-Nearest Neighbor Ensemble
    Sun, Wei
    Lv, Ying
    Li, Gongchen
    Chen, Yumin
    WATER, 2020, 12 (01)
  • [47] Application of the Improved K-Nearest Neighbor-Based Multi-Model Ensemble Method for Runoff Prediction
    Xie, Tao
    Chen, Lu
    Yi, Bin
    Li, Siming
    Leng, Zhiyuan
    Gan, Xiaoxue
    Mei, Ziyi
    WATER, 2024, 16 (01)
  • [48] EK-NNclus: A clustering procedure based on the evidential K-nearest neighbor rule
    Denoeux, Thierry
    Kanjanatarakul, Orakanya
    Sriboonchitta, Songsak
    KNOWLEDGE-BASED SYSTEMS, 2015, 88 : 57 - 69
  • [49] A new edited k-nearest neighbor rule in the pattern classification problem
    Hattori, K
    Takahashi, M
    PATTERN RECOGNITION, 2000, 33 (03) : 521 - 528
  • [50] Towards enriching the quality of k-nearest neighbor rule for document classification
    Basu, Tanmay
    Murthy, C. A.
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2014, 5 (06) : 897 - 905