Explainable Artificial Intelligence for Interpretable Data Minimization

Cited: 0
Authors:
Becker, Maximilian [1 ]
Toprak, Emrah [1 ]
Beyerer, Juergen [2 ]
Affiliations:
[1] Karlsruhe Inst Technol, Vis & Fus Lab, Karlsruhe, Germany
[2] Fraunhofer IOSB, Karlsruhe, Germany
Keywords:
XAI; Data Minimization; Counterfactual Explanations;
DOI:
10.1109/ICDMW60847.2023.00119
Chinese Library Classification (CLC): TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract:
Black-box models such as deep neural networks are increasingly being deployed in high-stakes fields, including justice, health, and finance. Moreover, such models require huge amounts of data, which often contain personal information. However, the principle of data minimization in the European Union's General Data Protection Regulation requires collecting only the data that is essential to fulfilling a particular purpose. Implementing data minimization for black-box models can be difficult because it involves identifying the minimum set of variables relevant to the model's prediction, which may not be apparent without access to the model's inner workings. In addition, users are often reluctant to share all their personal information. We propose an interactive system that reduces the amount of personal data by determining the minimal set of features required for a correct prediction using explainable artificial intelligence techniques. Our proposed method can inform the user whether the provided variables contain enough information for the model to make accurate predictions or whether additional variables are necessary. This human-centered approach can enable providers to minimize the amount of personal data collected for analysis and may increase the user's trust and acceptance of the system.
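The core idea of the abstract, determining a minimal feature subset that still yields the original prediction, can be illustrated with a simple greedy baseline. This is a hypothetical sketch of the general technique, not the authors' counterfactual-based method: withheld features are imputed with training means, and a feature is dropped whenever the prediction survives without it.

```python
# Hypothetical sketch: greedily find a minimal feature subset whose
# actual values (all others imputed with training means) preserve the
# model's original prediction. Illustrates the data-minimization idea,
# not the paper's exact algorithm.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)
means = X.mean(axis=0)  # fallback values for withheld features


def minimal_features(x):
    """Return indices of a greedily minimized feature set that keeps
    the model's prediction on sample x unchanged."""
    target = model.predict([x])[0]
    # Try discarding the least important features first.
    order = np.argsort(model.feature_importances_)
    kept = set(range(len(x)))
    for i in order:
        trial = kept - {i}
        mask = [j in trial for j in range(len(x))]
        x_masked = np.where(mask, x, means)  # impute withheld features
        if model.predict([x_masked])[0] == target:
            kept = trial  # feature i is not needed for this prediction
    return sorted(kept)


subset = minimal_features(X[0])
print(f"{len(subset)} of {X.shape[1]} features suffice")
```

In an interactive system like the one described, such a check could run each time the user supplies a variable, telling them whether the information provided so far already supports a confident prediction.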
Pages: 885-893
Page count: 9