An interpretable neural network ensemble

Cited by: 4
|
Authors
Hartono, Pitoyo [1 ]
Hashimoto, Shuji [2 ]
Affiliations
[1] Future Univ, Dept Media Architecture, Hakodate, Hokkaido, Japan
[2] Waseda Univ, Dept Appl Phys, Tokyo, Japan
Keywords
DOI
10.1109/IECON.2007.4460332
Chinese Library Classification:
T [Industrial Technology];
Discipline Classification Code:
08;
Abstract
The objective of this study is to build a neural network classifier that is not only reliable but also, in contrast to most currently available neural networks, logically interpretable in a human-plausible manner. At present, most studies of rule extraction from trained neural networks focus on extracting rules from existing neural network models that were designed without rule extraction in mind; after training, such models are meant to be used as a kind of black box. Consequently, rule extraction becomes a hard task. In this study we construct a neural network ensemble model designed with rule extraction in mind. The function of the ensemble can be easily interpreted to generate logical rules that are understandable to humans. We believe that the interpretability of neural networks contributes to improving their reliability and usability when applied to critical real-world problems.
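The abstract does not describe the ensemble's rule-extraction mechanism in detail. As a generic illustration of the black-box problem it raises, the sketch below extracts human-readable if-then rules from a trained network by fitting a shallow decision-tree surrogate to the network's predictions. This is a common post-hoc approach, not the authors' ensemble method; the synthetic dataset and all parameter choices are assumptions for illustration only.

```python
# Post-hoc rule extraction via a surrogate decision tree (a generic
# illustration, NOT the ensemble method proposed in the paper).
from sklearn.datasets import make_classification
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical synthetic classification task.
X, y = make_classification(n_samples=400, n_features=4, random_state=0)

# Train the "black box" network.
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(X, y)

# Fit a shallow tree to the network's *predictions*: each root-to-leaf
# path is a human-readable if-then rule approximating the network.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, net.predict(X))

rules = export_text(surrogate, feature_names=[f"x{i}" for i in range(4)])
print(rules)

# Fidelity: fraction of inputs on which the rules agree with the network.
fidelity = (surrogate.predict(X) == net.predict(X)).mean()
print(f"fidelity = {fidelity:.2f}")
```

The key trade-off the paper targets is visible here: a post-hoc surrogate only approximates the network (fidelity below 1.0), whereas a model built for interpretability from the start avoids that gap.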
Pages: 228+
Page count: 2
Related Papers
50 in total
  • [1] On the Effectiveness of Interpretable Feedforward Neural Network
    Li, Miles Q.
    Fung, Benjamin C. M.
    Abusitta, Adel
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022
  • [2] The Diversified Ensemble Neural Network
    Zhang, Shaofeng
    Liu, Meng
    Yan, Junchi
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [3] Landslide susceptibility modeling by interpretable neural network
    Youssef, K.
    Shao, K.
    Moon, S.
    Bouchard, L.-S.
    COMMUNICATIONS EARTH & ENVIRONMENT, 2023, 4
  • [4] Graph ensemble neural network
    Duan, Rui
    Yan, Chungang
    Wang, Junli
    Jiang, Changjun
    INFORMATION FUSION, 2024, 110
  • [5] Top interpretable neural network for handwriting identification
    Marcinowski, Maciej
    JOURNAL OF FORENSIC SCIENCES, 2022, 67 (03): 1140-1148
  • [6] A fuzzy binary neural network for interpretable classifications
    Meyer, Robert
    O'Keefe, Simon
    NEUROCOMPUTING, 2013, 121: 401-415
  • [7] An Interpretable Neural Network Model for Bundle Recommendations
    Li, Xinyi
    PROCEEDINGS OF THE 16TH ACM CONFERENCE ON RECOMMENDER SYSTEMS, RECSYS 2022, 2022: 722-723
  • [8] Transforming Convolutional Neural Network to an Interpretable Classifier
    Tamajka, Martin
    Benesova, Wanda
    Kompanek, Matej
    PROCEEDINGS OF 2019 INTERNATIONAL CONFERENCE ON SYSTEMS, SIGNALS AND IMAGE PROCESSING (IWSSIP 2019), 2019: 255-259
  • [9] Landslide susceptibility modeling by interpretable neural network
    Youssef, K.
    Shao, K.
    Moon, S.
    Bouchard, L.-S.
    COMMUNICATIONS EARTH & ENVIRONMENT, 2023, 4 (01)
  • [10] Clustering by sparse orthogonal NMF and interpretable neural network
    Gai, Yongwei
    Liu, Jinglei
    MULTIMEDIA SYSTEMS, 2023, 29 (06): 3341-3356