Adaptive explainable artificial intelligence for visual defect inspection

Cited by: 1
Authors
Rozanec, Joze M. [1 ,2 ,3 ]
Sircelj, Beno [1 ]
Fortuna, Blaz [3 ]
Mladenic, Dunja [2 ]
Affiliations
[1] Jozef Stefan Int Postgrad Sch, Jamova Cesta 39, Ljubljana 1000, Slovenia
[2] Jozef Stefan Inst, Jamova Cesta 39, Ljubljana 1000, Slovenia
[3] Qlector Doo, Rovsnikova 7, Ljubljana, Slovenia
Funding
EU Horizon 2020
Keywords
Intelligent Manufacturing Systems; Quality Assurance and Maintenance; Fault Detection; Visual Inspection; Human Centred Automation; Adaptive Interfaces; Artificial Intelligence; Explainable Artificial Intelligence;
DOI
10.1016/j.procs.2024.02.119
Chinese Library Classification: TP301 [Theory, methods]
Discipline code: 081202
Abstract
Explainable Artificial Intelligence promises means by which humans can better understand the rationale behind a particular machine learning model. In the image domain, such information is frequently conveyed through heat maps. Along the same lines, information regarding defect detection with unsupervised methods applied to images can be conveyed through anomaly maps. Nevertheless, heat maps and anomaly maps can convey inaccurate information (artifacts), and their perception may differ across individuals. Therefore, the user experience could be enhanced by collecting human feedback and building predictive models of how these maps could be recolored, bridging the gap between the original heat maps and anomaly maps produced by explainability techniques and the output humans expect. We envision this work as relevant in at least two scenarios. First, enhancing anomaly and heat maps that convey information about machine vision models deployed in production, by removing information deemed unnecessary by the user but systematically introduced by the explainability technique due to underlying model issues (artifacts). Second, adapting anomaly and heat maps to users' perceptual needs and preferences.
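The feedback-driven recoloring idea in the abstract can be illustrated with a minimal sketch. The paper does not specify the model; here we assume, purely for illustration, a per-pixel linear intensity remapping fitted by least squares from pairs of (original anomaly-map intensity, user-corrected intensity). The function names `fit_recoloring` and `recolor` and the toy data are hypothetical, not from the paper.

```python
def fit_recoloring(original, feedback):
    """Least-squares fit of y = a*x + b over paired pixel intensities.

    `original` holds intensities from the explainability technique's map;
    `feedback` holds the intensities the user indicated they expected.
    """
    n = len(original)
    sx = sum(original)
    sy = sum(feedback)
    sxx = sum(x * x for x in original)
    sxy = sum(x * y for x, y in zip(original, feedback))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b


def recolor(anomaly_map, a, b):
    """Apply the learned remapping, clamping to the [0, 1] display range."""
    return [min(1.0, max(0.0, a * x + b)) for x in anomaly_map]


# Toy example: the user consistently zeroes out low-intensity artifact
# noise while keeping strong defect responses roughly unchanged.
original = [0.1, 0.2, 0.8, 0.9, 0.15]
feedback = [0.0, 0.0, 0.8, 0.95, 0.0]
a, b = fit_recoloring(original, feedback)
cleaned = recolor([0.12, 0.85], a, b)  # new map, recolored for this user
```

In practice the paper's setting would call for a richer, image-aware model, but the sketch shows the core loop: learn a mapping from the technique's output to the user's preferred rendering, then apply it to future maps.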
Pages: 3034-3043
Page count: 10