On the black-box explainability of object detection models for safe and trustworthy industrial applications

Cited by: 0
Authors
Andres, Alain [1 ,2 ]
Martinez-Seras, Aitor [1 ]
Laña, Ibai [1 ,2 ]
Del Ser, Javier [1 ,3 ]
Affiliations
[1] TECNALIA, Basque Research and Technology Alliance (BRTA), Mikeletegi Pasealekua 2, Donostia-San Sebastián, 20009, Spain
[2] University of Deusto, Donostia-San Sebastián, 20012, Spain
[3] University of the Basque Country (UPV/EHU), Bilbao, 48013, Spain
Source
Results in Engineering | 2024, Vol. 24
Keywords
Black boxes - Detection models - Explainable artificial intelligence - Industrial robotics - Objects detection - Safe artificial intelligence - Single stage - Single-stage object detection - Trustworthy artificial intelligence;
DOI
10.1016/j.rineng.2024.103498
Abstract
In the realm of human-machine interaction, artificial intelligence has become a powerful tool for accelerating data modeling tasks. Object detection methods have achieved outstanding results and are widely used in critical domains like autonomous driving and video surveillance. However, their adoption in high-risk applications, where errors may cause severe consequences, remains limited. Explainable Artificial Intelligence methods aim to address this issue, but many existing techniques are model-specific and designed for classification tasks, making them less effective for object detection and difficult for non-specialists to interpret. In this work, we focus on model-agnostic explainability methods for object detection models and propose D-MFPP, an extension of the Morphological Fragmental Perturbation Pyramid (MFPP) technique that relies on segmentation-based masks to generate explanations. Additionally, we introduce D-Deletion, a novel metric combining faithfulness and localization, adapted specifically to meet the unique demands of object detectors. We evaluate these methods on real-world industrial and robotic datasets, examining the influence of parameters such as the number of masks, model size, and image resolution on the quality of explanations. Our experiments use single-stage object detection models applied to two safety-critical robotic environments: i) a shared human-robot workspace where safety is of paramount importance, and ii) a battery-kit assembly area, where safety is critical due to the potential for damage among high-risk components. Our findings show that D-Deletion effectively gauges the performance of explanations when multiple elements of the same class appear in a scene, while D-MFPP provides a promising alternative to D-RISE when fewer masks are used. © 2024 The Author(s)
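The abstract describes a perturbation-based, model-agnostic explanation pipeline for detectors (D-RISE, MFPP and the proposed D-MFPP) together with a localization-aware deletion metric (D-Deletion), but gives no implementation detail. The sketch below only illustrates the general pattern under explicit assumptions: a hypothetical `detector(image)` callable returning `(box, class_probs)` pairs, RISE-style random grid masks standing in for the segmentation-based fragments D-MFPP builds on, and an IoU-weighted class probability as the similarity between masked predictions and the explained detection. It is an illustrative approximation, not the authors' code.

```python
import numpy as np

def iou(box_a, box_b):
    """Intersection-over-union of two [x1, y1, x2, y2] boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def random_masks(n_masks, image_hw, grid=8, p_keep=0.5, seed=0):
    """RISE-style random binary grids upsampled to image size.
    MFPP / D-MFPP replace these grids with segmentation-based fragments;
    the grid version is only the simplest dependency-free stand-in."""
    rng = np.random.default_rng(seed)
    h, w = image_hw
    kh, kw = h // grid + 1, w // grid + 1
    masks = np.empty((n_masks, h, w), dtype=np.float32)
    for i in range(n_masks):
        coarse = (rng.random((grid, grid)) < p_keep).astype(np.float32)
        masks[i] = np.kron(coarse, np.ones((kh, kw)))[:h, :w]  # nearest-neighbour upsample
    return masks

def target_score(detections, target_box, target_class):
    """Best IoU-weighted class probability for the target among detections."""
    return max((iou(box, target_box) * probs[target_class]
                for box, probs in detections), default=0.0)

def detection_saliency(image, detector, target_box, target_class, masks):
    """Weight each mask by how well the detector still recovers the target
    on the masked image, then return the weighted average of the masks."""
    saliency = np.zeros(image.shape[:2], dtype=np.float32)
    for mask in masks:
        score = target_score(detector(image * mask[..., None]),
                             target_box, target_class)
        saliency += score * mask
    return saliency / (masks.sum(axis=0) + 1e-9)

def deletion_curve(image, detector, target_box, target_class, saliency, steps=20):
    """Deletion-style faithfulness check: progressively blank the most salient
    pixels and record how the target's score decays. The IoU term ties the
    score to the explained instance, in the spirit of a localization-aware
    deletion metric such as D-Deletion."""
    order = np.argsort(saliency.ravel())[::-1]      # most salient pixels first
    per_step = max(1, order.size // steps)
    img, scores = image.astype(np.float32), []
    flat = img.reshape(-1, img.shape[-1])           # view onto the pixel rows of img
    for s in range(steps):
        flat[order[s * per_step:(s + 1) * per_step]] = 0.0
        scores.append(target_score(detector(img), target_box, target_class))
    return scores  # the area under this curve summarizes faithfulness (lower is better)
```

Under these assumptions, explanation quality can be compared by varying the number of generated masks (one of the parameters the abstract says the paper studies) and by the area under the deletion curve computed for the explained instance rather than for the whole image.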