ENHANCING INTERPRETABILITY AND FIDELITY IN CONVOLUTIONAL NEURAL NETWORKS THROUGH DOMAIN-INFORMED KNOWLEDGE INTEGRATION

Cited by: 0
Authors
Agbangba, Codjo Emile [1 ]
Toha, Rodeo Oswald Y. [2 ]
Bello, Abdou Wahidi [3 ]
Adetola, Jamal [2 ]
Affiliations
[1] Univ Abomey Calavi, Lab Biomath & Estimat Forestieres, Calavi, Benin
[2] Univ Natl Sci Technol Ingn & Math, Ecole Natl Super Genie Math & Modelisat, Abomey, Benin
[3] Univ Abomey Calavi, Fac Sci & Tech, Calavi, Benin
Keywords
intelligent agriculture; image classification; convolutional neural networks (CNN); plant diseases; initialization; heatmaps;
DOI
10.17654/0972361724062
Chinese Library Classification (CLC)
O21 [Probability Theory and Mathematical Statistics]; C8 [Statistics];
Discipline Codes
020208; 070103; 0714;
Abstract
This study addresses the need for robust disease detection methods in vegetable crops by introducing a novel initialization method for convolutional neural networks (CNNs). Rather than creating a new CNN architecture, our approach infuses expert knowledge from phytopathology directly into the model's initial weights. This initialization equips the CNN with contextual knowledge of the intricate disease patterns specific to tomato. Additionally, our study redefines the role of heatmaps as a dynamic metric for assessing model fidelity in real time. Rather than being applied post hoc, heatmaps are integrated into the model evaluation process, providing insight into the decision-making process and its alignment with expert-derived expectations. This dual innovation aims to enhance transparency and fidelity in CNNs, offering a nuanced and effective solution for disease detection in agriculture. The study advances artificial intelligence applications in agriculture by providing accurate predictions together with a deeper understanding of the underlying decision mechanisms crucial for crop health management.
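As a rough illustration only (the abstract gives no implementation details), the following PyTorch sketch shows the two ideas in miniature: seeding a CNN's first convolutional layer with expert-derived lesion templates, and scoring fidelity by comparing a Grad-CAM-style heatmap against an expert-annotated lesion mask. All names here (init_conv_from_templates, gradcam_heatmap, fidelity_iou, expert_templates) are hypothetical and not taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F


def init_conv_from_templates(conv: nn.Conv2d, expert_templates: torch.Tensor) -> None:
    """Copy expert-derived (n, in_channels, k, k) templates -- e.g. colour blobs or
    edge patterns typical of tomato leaf lesions -- into the first conv layer.
    Remaining filters keep their random initialization."""
    n = min(conv.out_channels, expert_templates.shape[0])
    with torch.no_grad():
        conv.weight[:n].copy_(expert_templates[:n])


def gradcam_heatmap(model: nn.Module, image: torch.Tensor, target_class: int,
                    feature_layer: nn.Module) -> torch.Tensor:
    """Grad-CAM-style heatmap for a single image of shape (1, C, H, W)."""
    acts, grads = [], []
    h1 = feature_layer.register_forward_hook(lambda m, i, o: acts.append(o))
    h2 = feature_layer.register_full_backward_hook(lambda m, gi, go: grads.append(go[0]))
    logits = model(image)
    model.zero_grad()
    logits[0, target_class].backward()
    h1.remove(); h2.remove()
    weights = grads[0].mean(dim=(2, 3), keepdim=True)            # channel importance
    cam = F.relu((weights * acts[0]).sum(dim=1, keepdim=True))   # weighted activation map
    cam = F.interpolate(cam, size=image.shape[2:], mode="bilinear", align_corners=False)
    return (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)


def fidelity_iou(heatmap: torch.Tensor, expert_mask: torch.Tensor, thr: float = 0.5) -> float:
    """Fidelity = IoU between the thresholded heatmap and the expert-annotated
    lesion region, i.e. how well the model 'looks where the expert looks'."""
    pred = heatmap.squeeze() > thr
    gt = expert_mask.squeeze() > 0.5
    inter = (pred & gt).float().sum()
    union = (pred | gt).float().sum().clamp(min=1.0)
    return (inter / union).item()

In such a setup, fidelity_iou could be tracked alongside accuracy during evaluation, which mirrors the abstract's idea of heatmaps as a real-time fidelity metric rather than a post hoc explanation.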
Pages: 1165-1194
Number of pages: 30