Explainable artificial intelligence enhances the ecological interpretability of black-box species distribution models

Cited by: 79
Authors
Ryo, Masahiro [1 ,2 ,3 ]
Angelov, Boyan [4 ]
Mammola, Stefano [5 ,6 ]
Kass, Jamie M. [7 ]
Benito, Blas M. [8 ]
Hartig, Florian [9 ]
Affiliations
[1] Free Univ Berlin, Inst Biol, Berlin, Germany
[2] Berlin Brandenburg Inst Adv Biodivers Res BBIB, Berlin, Germany
[3] Leibniz Ctr Agr Landscape Res ZALF, Müncheberg, Germany
[4] Assoc Comp Machinery ACM, New York, NY USA
[5] Natl Res Council CNR, Mol Ecol Grp MEG, Water Res Inst IRSA, Verbania, Italy
[6] Univ Helsinki, Lab Integrat Biodivers Res LIBRe, Finnish Museum Nat Hist LUOMUS, Helsinki, Finland
[7] Okinawa Inst Sci & Technol Grad Univ, Biodivers & Biocomplex Unit, Okinawa, Japan
[8] Univ Alicante, Inst Environm Studies Ramon Margalef, Dept Ecol & Multidisciplinary, Alicante, Spain
[9] Univ Regensburg, Fac Biol & Preclin Med, Theoret Ecol, Regensburg, Germany
Funding
EU Horizon 2020; Japan Society for the Promotion of Science (JSPS); European Research Council (ERC);
Keywords
ecological modeling; explainable artificial intelligence; habitat suitability modeling; interpretable machine learning; species distribution model; xAI;
DOI
10.1111/ecog.05360
Chinese Library Classification
X176 [Biodiversity conservation];
Discipline code
090705;
Abstract
Species distribution models (SDMs) are widely used in ecology, biogeography and conservation biology to estimate relationships between environmental variables and species occurrence data and make predictions of how their distributions vary in space and time. During the past two decades, the field has increasingly made use of machine learning approaches for constructing and validating SDMs. Model accuracy has steadily increased as a result, but the interpretability of the fitted models, for example the relative importance of predictor variables or their causal effects on focal species, has not always kept pace. Here we draw attention to an emerging subdiscipline of artificial intelligence, explainable AI (xAI), as a toolbox for better interpreting SDMs. xAI aims at deciphering the behavior of complex statistical or machine learning models (e.g. neural networks, random forests, boosted regression trees), and can produce more transparent and understandable SDM predictions. We describe the rationale behind xAI and provide a list of tools that can be used to help ecological modelers better understand complex model behavior at different scales. As an example, we perform a reproducible SDM analysis in R on the African elephant and showcase some xAI tools such as local interpretable model-agnostic explanation (LIME) to help interpret local-scale behavior of the model. We conclude with what we see as the benefits and caveats of these techniques and advocate for their use to improve the interpretability of machine learning SDMs.
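The abstract names LIME as one of the showcased xAI tools but does not explain its mechanics. As an illustration only (not the paper's own R analysis of the African elephant), the core LIME idea can be sketched as a weighted local linear surrogate: perturb inputs around a focal point, query the black-box model, and fit a linear model weighted by proximity. Everything below — the toy "SDM" function, its coefficients, and all names — is hypothetical and kept in pure Python for self-containment.

```python
import math
import random

def black_box(temp, precip):
    """Toy stand-in for a fitted black-box SDM: nonlinear habitat
    suitability from two environmental predictors (coefficients invented)."""
    z = 0.8 * temp - 0.3 * precip + 0.5 * temp * precip
    return 1.0 / (1.0 + math.exp(-z))  # suitability in (0, 1)

def explain_locally(model, x0, n=1000, width=0.5, seed=42):
    """LIME-style explanation: fit a proximity-weighted linear surrogate
    around the focal point x0; returns [intercept, slope_temp, slope_precip]."""
    rng = random.Random(seed)
    XtWX = [[0.0] * 3 for _ in range(3)]  # normal-equation accumulators
    XtWy = [0.0] * 3
    for _ in range(n):
        pt = [v + rng.gauss(0.0, width) for v in x0]      # perturbed sample
        row = [1.0, pt[0], pt[1]]                          # intercept + features
        d2 = sum((a - b) ** 2 for a, b in zip(pt, x0))
        w = math.exp(-d2 / (2 * width ** 2))               # proximity kernel
        yv = model(*pt)                                    # query the black box
        for i in range(3):
            XtWy[i] += w * row[i] * yv
            for j in range(3):
                XtWX[i][j] += w * row[i] * row[j]
    # Solve the 3x3 weighted least-squares system by Gauss-Jordan elimination.
    A = [XtWX[i] + [XtWy[i]] for i in range(3)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(A[r][c]))   # partial pivot
        A[c], A[p] = A[p], A[c]
        for r in range(3):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return [A[i][3] / A[i][i] for i in range(3)]

# Local explanation at one site (temp=1.0, precip=0.0): the surrogate slopes
# approximate the model's local sensitivity to each predictor there.
coefs = explain_locally(black_box, [1.0, 0.0])
```

In this toy setting the temperature slope comes out positive and larger than the precipitation slope, mirroring how LIME outputs are read ecologically: which predictor drives suitability at this particular location, even when the global model is opaque. Real analyses would use an established implementation (e.g. the `lime` R package used in the paper's workflow) rather than a hand-rolled surrogate.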
Pages: 199-205 (7 pages)