On the black-box explainability of object detection models for safe and trustworthy industrial applications

Cited: 0
|
Authors
Andres, Alain [1 ,2 ]
Martinez-Seras, Aitor [1 ]
Laña, Ibai [1 ,2 ]
Del Ser, Javier [1 ,3 ]
Affiliations
[1] TECNALIA, Basque Research and Technology Alliance (BRTA), Mikeletegi Pasealekua 2, Donostia-San Sebastian,20009, Spain
[2] University of Deusto, Donostia-San Sebastián,20012, Spain
[3] University of the Basque Country (UPV/EHU), Bilbao,48013, Spain
Source
Results in Engineering | 2024 / Vol. 24
Keywords
Black boxes - Detection models - Explainable artificial intelligence - Industrial robotics - Object detection - Safe artificial intelligence - Single-stage object detection - Trustworthy artificial intelligence
DOI
10.1016/j.rineng.2024.103498
Abstract
In the realm of human-machine interaction, artificial intelligence has become a powerful tool for accelerating data modeling tasks. Object detection methods have achieved outstanding results and are widely used in critical domains like autonomous driving and video surveillance. However, their adoption in high-risk applications, where errors may cause severe consequences, remains limited. Explainable Artificial Intelligence methods aim to address this issue, but many existing techniques are model-specific and designed for classification tasks, making them less effective for object detection and difficult for non-specialists to interpret. In this work, we focus on model-agnostic explainability methods for object detection models and propose D-MFPP, an extension of the Morphological Fragmental Perturbation Pyramid (MFPP) technique that uses segmentation-based masks to generate explanations. Additionally, we introduce D-Deletion, a novel metric combining faithfulness and localization, adapted specifically to meet the unique demands of object detectors. We evaluate these methods on real-world industrial and robotic datasets, examining the influence of parameters such as the number of masks, model size, and image resolution on the quality of explanations. Our experiments use single-stage object detection models applied to two safety-critical robotic environments: i) a shared human-robot workspace, where safety is of paramount importance, and ii) a battery-kit assembly area, where safety is critical due to the potential for damage to high-risk components. Our findings show that D-Deletion effectively gauges the performance of explanations when multiple elements of the same class appear in a scene, and that D-MFPP provides a promising alternative to D-RISE when fewer masks are used. © 2024 The Author(s)
Related Papers
50 results in total
  • [11] Toward Black-Box Detection of Logic Flaws in Web Applications
    Pellegrino, Giancarlo
    Balzarotti, Davide
    21ST ANNUAL NETWORK AND DISTRIBUTED SYSTEM SECURITY SYMPOSIUM (NDSS 2014), 2014,
  • [12] Black-box models for fault detection and performance monitoring of buildings
    Jacob, Dirk
    Dietz, Sebastian
    Komhard, Susanne
    Neumann, Christian
    Herkel, Sebastian
    JOURNAL OF BUILDING PERFORMANCE SIMULATION, 2010, 3 (01) : 53 - 62
  • [13] Safe Inputs Approximation for Black-Box Systems
    Xue, Bai
    Liu, Yang
    Ma, Lei
    Zhang, Xiyue
    Sun, Meng
    Xie, Xiaofei
    2019 24TH INTERNATIONAL CONFERENCE ON ENGINEERING OF COMPLEX COMPUTER SYSTEMS (ICECCS 2019), 2019, : 180 - 189
  • [14] Artificial Intelligence and Black-Box Medical Decisions: Accuracy versus Explainability
    London, Alex John
    HASTINGS CENTER REPORT, 2019, 49 (01) : 15 - 21
  • [15] Rethinking explainability: toward a postphenomenology of black-box artificial intelligence in medicine
    Friedrich, Annie B.
    Mason, Jordan
    Malone, Jay R.
    ETHICS AND INFORMATION TECHNOLOGY, 2022, 24 (01)
  • [17] Deep Causal Graphs for Causal Inference, Black-Box Explainability and Fairness
Parafita, Alvaro
    Vitria, Jordi
    ARTIFICIAL INTELLIGENCE RESEARCH AND DEVELOPMENT, 2021, 339 : 415 - 424
  • [18] In-Training Explainability Frameworks: A Method to Make Black-Box Machine Learning Models More Explainable
    Acun, Cagla
    Nasraoui, Olfa
    2023 IEEE INTERNATIONAL CONFERENCE ON WEB INTELLIGENCE AND INTELLIGENT AGENT TECHNOLOGY, WI-IAT, 2023, : 230 - 237
  • [19] What Lies Beneath: A Note on the Explainability of Black-box Machine Learning Models for Road Traffic Forecasting
    Barredo-Arrieta, Alejandro
Laña, Ibai
    Del Ser, Javier
    2019 IEEE INTELLIGENT TRANSPORTATION SYSTEMS CONFERENCE (ITSC), 2019, : 2232 - 2237
  • [20] Interpretable Companions for Black-Box Models
    Pan, Danqing
    Wang, Tong
    Hara, Satoshi
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 108, 2020, 108 : 2444 - 2453