Explainable AI: Machine Learning Interpretation in Blackcurrant Powders

Cited by: 0
Author
Przybyl, Krzysztof [1]
Affiliation
[1] Poznan Univ Life Sci, Fac Food Sci & Nutr, Dept Dairy & Proc Engn, 31 Wojska Polskiego St, PL-60624 Poznan, Poland
Keywords
explainable artificial intelligence (XAI); Local Interpretable Model-Agnostic Explanations (LIME); machine learning; classifier ensembles; gray-level co-occurrence matrix (GLCM); Random Forest (RF); blackcurrant powders; ARTIFICIAL-INTELLIGENCE; COMPUTER VISION; NEURAL-NETWORKS; CLASSIFICATION; PREDICTION;
DOI
10.3390/s24103198
CLC Number
O65 [Analytical Chemistry];
Subject Classification Codes
070302; 081704
Abstract
Explainability in machine and deep learning has recently become an important area of research and interest, driven both by the increasing use of artificial intelligence (AI) methods and by the need to understand the decisions that models make. Explainable artificial intelligence (XAI) reflects a growing awareness of, among other things, data mining, error elimination, and the learning performance of various AI algorithms, and it makes the decisions models reach more transparent and more effective. In this study, 'glass box' models (including Decision Tree) and 'black box' models (including Random Forest) were proposed for identifying selected types of blackcurrant powders. The models were trained and evaluated with the performance metrics accuracy, precision, recall, and F1-score, and their predictions were visualized with Local Interpretable Model-Agnostic Explanations (LIME) to show how effectively specific types of blackcurrant powders are identified from the texture descriptors entropy, contrast, correlation, dissimilarity, and homogeneity. Bagging (Bagging_100), Decision Tree (DT0), and Random Forest (RF7_gini) proved to be the most effective models for interpretable identification of the blackcurrant powders. For Bagging_100, accuracy, precision, recall, and F1-score all reached approximately 0.979; DT0 reached 0.968, 0.972, 0.968, and 0.969, and RF7_gini reached 0.963, 0.964, 0.963, and 0.963, so each model exceeded 96% on every metric. In the future, XAI based on model-agnostic methods can serve as an additional tool for analyzing data, including food products, even online.
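
To make the texture-descriptor step concrete, below is a minimal Python sketch of computing the five GLCM descriptors named in the abstract with scikit-image. The graycomatrix/graycoprops calls are the library's standard API, but the pixel-pair distance and angle, the 8-bit gray-level quantization, and the random stand-in image are illustrative assumptions, not settings taken from the paper.

import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_descriptors(gray_img, distances=(1,), angles=(0.0,)):
    # Distance/angle settings are assumptions for illustration only.
    glcm = graycomatrix(gray_img, distances=distances, angles=angles,
                        levels=256, symmetric=True, normed=True)
    props = ("contrast", "correlation", "dissimilarity", "homogeneity")
    features = {p: float(graycoprops(glcm, p).mean()) for p in props}
    # Older scikit-image releases do not expose entropy via graycoprops,
    # so derive it directly from the normalized co-occurrence matrix.
    nonzero = glcm[glcm > 0]
    features["entropy"] = float(-np.sum(nonzero * np.log2(nonzero)))
    return features

# Random 8-bit image standing in for a grayscale powder micrograph.
image = np.random.randint(0, 256, size=(128, 128), dtype=np.uint8)
print(glcm_descriptors(image))

In practice the descriptors are often averaged over several distances and angles to reduce directional bias; the .mean() over the returned property array performs that averaging here.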
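The model-comparison and LIME steps can be sketched in the same spirit with scikit-learn and the lime package. The model names DT0, RF7_gini, and Bagging_100 come from the abstract, but their hyperparameters are assumptions (Bagging_100 is read as 100 base estimators and RF7_gini as a gini-criterion forest), and the synthetic five-feature data with four powder classes is a stand-in, not the study's dataset.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier, RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_recall_fscore_support
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from lime.lime_tabular import LimeTabularExplainer  # pip install lime

FEATURES = ["entropy", "contrast", "correlation", "dissimilarity", "homogeneity"]

# Synthetic stand-in for the GLCM descriptor table; four powder classes
# is an assumption, not the number used in the paper.
X, y = make_classification(n_samples=600, n_features=5, n_informative=5,
                           n_redundant=0, n_classes=4,
                           n_clusters_per_class=1, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

# Hyperparameters below are guesses inferred from the model names.
models = {
    "DT0": DecisionTreeClassifier(random_state=0),
    "RF7_gini": RandomForestClassifier(criterion="gini", random_state=0),
    "Bagging_100": BaggingClassifier(n_estimators=100, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    y_pred = model.predict(X_test)
    acc = accuracy_score(y_test, y_pred)
    prec, rec, f1, _ = precision_recall_fscore_support(
        y_test, y_pred, average="weighted")
    print(f"{name}: acc={acc:.3f} prec={prec:.3f} rec={rec:.3f} f1={f1:.3f}")

# LIME: explain one test-set prediction of the Random Forest in terms
# of the five texture descriptors (default label 1 is explained).
explainer = LimeTabularExplainer(X_train, feature_names=FEATURES,
                                 class_names=[f"powder_{c}" for c in range(4)],
                                 mode="classification")
explanation = explainer.explain_instance(X_test[0],
                                         models["RF7_gini"].predict_proba,
                                         num_features=5)
print(explanation.as_list())

The weighted averaging of precision, recall, and F1-score is likewise an assumption; the abstract does not state how per-class scores were aggregated.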
Pages: 17
Related Papers
50 records in total
  • [1] Explainable Machine Learning for Trustworthy AI
    Giannotti, Fosca
    [J]. ARTIFICIAL INTELLIGENCE RESEARCH AND DEVELOPMENT, 2022, 356 : 3 - 3
  • [2] Explainable AI: A Review of Machine Learning Interpretability Methods
    Linardatos, Pantelis
    Papastefanopoulos, Vasilis
    Kotsiantis, Sotiris
    [J]. ENTROPY, 2021, 23 (01) : 1 - 45
  • [3] Learning Classifier Systems: Cognitive inspired Machine Learning for eXplainable AI
    Siddique, Abubakar
    Browne, Will
    [J]. PROCEEDINGS OF THE 2022 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE COMPANION, GECCO 2022, 2022, : 1081 - 1110
  • [4] Diabetes prediction using machine learning and explainable AI techniques
    Tasin, Isfafuzzaman
    Nabil, Tansin Ullah
    Islam, Sanjida
    Khan, Riasat
    [J]. HEALTHCARE TECHNOLOGY LETTERS, 2023, 10 (1-2) : 1 - 10
  • [5] Predicting life satisfaction using machine learning and explainable AI
    Khan, Alif Elham
    Hasan, Mohammad Junayed
    Anjum, Humayra
    Mohammed, Nabeel
    Momen, Sifat
    [J]. HELIYON, 2024, 10 (10)
  • [6] Explainable Machine Learning Model for Glaucoma Diagnosis and Its Interpretation
    Oh, Sejong
    Park, Yuli
    Cho, Kyong Jin
    Kim, Seong Jae
    [J]. DIAGNOSTICS, 2021, 11 (03)
  • [7] From Explainable AI to Explainable Simulation: Using Machine Learning and XAI to understand System Robustness
    Feldkamp, Niclas
    Strassburger, Steffen
    [J]. PROCEEDINGS OF THE 2023 ACM SIGSIM INTERNATIONAL CONFERENCE ON PRINCIPLES OF ADVANCED DISCRETE SIMULATION, ACMSIGSIM-PADS 2023, 2023, : 96 - 106
  • [8] Consensus hybrid ensemble machine learning for intrusion detection with explainable AI
    Ahmed, Usman
    Jiangbin, Zheng
    Khan, Sheharyar
    Sadiq, Muhammad Tariq
[J]. JOURNAL OF NETWORK AND COMPUTER APPLICATIONS, 2025, 235
  • [9] Explainable AI: Graph machine learning for response prediction and biomarker discovery
    Cohen-Setton, Jake
    Bulusu, Krishna
    Dry, Jonathan
    Sidders, Ben
    [J]. CANCER RESEARCH, 2024, 84 (06)
  • [10] Superimposition: Augmenting Machine Learning Outputs with Conceptual Models for Explainable AI
    Lukyanenko, Roman
    Castellanos, Arturo
    Storey, Veda C.
    Castillo, Alfred
    Tremblay, Monica Chiarini
    Parsons, Jeffrey
    [J]. ADVANCES IN CONCEPTUAL MODELING, ER 2020, 2020, 12584 : 26 - 34