Explainable artificial intelligence and interpretable machine learning for agricultural data analysis

Cited by: 29
Author
Ryo, Masahiro [1,2]
Affiliations
[1] Leibniz Ctr Agr Landscape Res ZALF, Eberswalder Str 84, D-15374 Müncheberg, Germany
[2] Brandenburg Univ Technol Cottbus Senftenberg, Pl Deutsch Einheit 1, D-03046 Cottbus, Germany
Keywords
Interpretable machine learning; Explainable artificial intelligence; Agriculture; Crop yield; No-tillage; XAI; NO-TILL; BLACK-BOX; MODELS; CROP;
DOI
10.1016/j.aiia.2022.11.003
Chinese Library Classification
S [Agricultural Sciences];
Discipline classification code
09;
Abstract
Artificial intelligence and machine learning have been increasingly applied for prediction in agricultural science. However, many models are typically black boxes, meaning we cannot explain what the models learned from the data or the reasons behind their predictions. To address this issue, I introduce an emerging subdomain of artificial intelligence, explainable artificial intelligence (XAI), and its associated toolkits, interpretable machine learning. This study demonstrates the usefulness of several methods by applying them to an openly available dataset. The dataset includes the no-tillage effect on crop yield relative to conventional tillage, together with soil, climate, and management variables. The analysis found that no-tillage management can increase maize crop yield where yield under conventional tillage is <5000 kg/ha and the maximum temperature is higher than 32 °C. These methods are useful for answering (i) which variables are important for prediction in regression/classification, (ii) which variable interactions are important for prediction, (iii) how important variables and their interactions are associated with the response variable, (iv) what the reasons are underlying a predicted value for a certain instance, and (v) whether different machine learning algorithms offer the same answer to these questions. I argue that, in current practice, goodness of model fit is overemphasized through model performance measures while these questions remain unanswered. XAI and interpretable machine learning can enhance trust in and explainability of AI. © 2022 The Author. Publishing services by Elsevier B.V. on behalf of KeAi Communications Co., Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).
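As a minimal sketch of the kinds of questions listed in the abstract, the Python snippet below applies permutation importance (question i) and partial dependence (question iii) from scikit-learn to a synthetic stand-in for the tillage/yield data. The variable names, thresholds, and data-generating assumptions are illustrative only and are not taken from the paper or its dataset.

# Illustrative sketch only (not the paper's code or data): probing questions (i) and (iii)
# from the abstract with scikit-learn on synthetic stand-ins for yield, temperature,
# and one additional soil/climate/management covariate.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance, partial_dependence
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500
conv_yield = rng.uniform(1000, 9000, n)   # hypothetical conventional-tillage yield (kg/ha)
max_temp = rng.uniform(20, 40, n)         # hypothetical maximum temperature (deg C)
other = rng.normal(size=n)                # hypothetical additional covariate
X = np.column_stack([conv_yield, max_temp, other])

# Hypothetical response: a no-till yield gain appears only at low conventional yield
# and high maximum temperature (mimicking the pattern described in the abstract).
y = 800 * ((conv_yield < 5000) & (max_temp > 32)) + rng.normal(0, 100, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# (i) Which variables are important for prediction: permutation importance
imp = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
print("Permutation importances:", imp.importances_mean)

# (iii) How an important variable is associated with the response: partial dependence
pdep = partial_dependence(model, X_test, features=[0])
print("Partial dependence on conventional-tillage yield:", pdep["average"][0][:5])

Local explanation methods such as SHAP (question iv) follow the same workflow, replacing the global importance call with per-instance attributions; running several algorithms through the same pipeline addresses question (v).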
Pages: 257-265
Number of pages: 9
Related papers (50 in total)
  • [31] Explainable Artificial Intelligence and Cardiac Imaging: Toward More Interpretable Models
    Salih, Ahmed
    Galazzo, Ilaria Boscolo
    Gkontra, Polyxeni
    Lee, Aaron Mark
    Lekadir, Karim
    Raisi-Estabragh, Zahra
    Petersen, Steffen E.
    [J]. CIRCULATION-CARDIOVASCULAR IMAGING, 2023, 16 (04) : E014519
  • [32] Interpretable and explainable predictive machine learning models for data-driven protein engineering
    Medina-Ortiz, David
    Khalifeh, Ashkan
    Anvari-Kazemabad, Hoda
    Davari, Mehdi D.
[J]. BIOTECHNOLOGY ADVANCES, 2025, 79
  • [33] An Explainable Artificial Intelligence Framework for the Predictive Analysis of Hypo and Hyper Thyroidism Using Machine Learning Algorithms
    Md. Bipul Hossain
    Anika Shama
    Apurba Adhikary
    Avi Deb Raha
    K. M. Aslam Uddin
    Mohammad Amzad Hossain
    Imtia Islam
    Saydul Akbar Murad
    Md. Shirajum Munir
    Anupam Kumar Bairagi
[J]. Human-Centric Intelligent Systems, 2023, 3 (3): 211 - 231
  • [34] DeepXplainer: An interpretable deep learning based approach for lung cancer detection using explainable artificial intelligence
    Wani, Niyaz Ahmad
    Kumar, Ravinder
    Bedi, Jatin
    [J]. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE, 2024, 243
  • [35] Advancements in Deep Reinforcement Learning and Inverse Reinforcement Learning for Robotic Manipulation: Toward Trustworthy, Interpretable, and Explainable Artificial Intelligence
    Ozalp, Recep
    Ucar, Aysegul
    Guzelis, Cuneyt
    [J]. IEEE ACCESS, 2024, 12 : 51840 - 51858
  • [36] Data Science: Big Data, Machine Learning, and Artificial Intelligence
    Carlos, Ruth C.
    Kahn, Charles E.
    Halabi, Safwan
    [J]. JOURNAL OF THE AMERICAN COLLEGE OF RADIOLOGY, 2018, 15 (03) : 497 - 498
  • [37] Interactive Collaborative Learning with Explainable Artificial Intelligence
    Arnold, Oksana
    Golchert, Sebastian
    Rennert, Michel
    Jantke, Klaus P.
    [J]. LEARNING IN THE AGE OF DIGITAL AND GREEN TRANSITION, ICL2022, VOL 1, 2023, 633 : 13 - 24
  • [38] Machine Learning and Artificial Intelligence Improve Data Validation
    Gouge, Brian
    [J]. Opflow, 2024, 50 (08) : 8 - 9
  • [39] Explainable and Interpretable Machine Learning for Antimicrobial Stewardship: Opportunities and Challenges
    Giacobbe, Daniele Roberto
    Marelli, Cristina
    Guastavino, Sabrina
    Mora, Sara
    Rosso, Nicola
    Signori, Alessio
    Campi, Cristina
    Giacomini, Mauro
    Bassetti, Matteo
    [J]. CLINICAL THERAPEUTICS, 2024, 46 (06) : 474 - 480
  • [40] A spectrum of explainable and interpretable machine learning approaches for genomic studies
    Conard, Ashley Mae
    DenAdel, Alan
    Crawford, Lorin
    [J]. WILEY INTERDISCIPLINARY REVIEWS-COMPUTATIONAL STATISTICS, 2023, 15 (05):