Evaluating the Privacy Exposure of Interpretable Global Explainers

Cited by: 1
Authors:
Naretto, Francesca [1 ]
Monreale, Anna [2 ]
Giannotti, Fosca [1 ]
Affiliations:
[1] Scuola Normale Superiore, Pisa, Italy
[2] University of Pisa, Pisa, Italy
DOI: 10.1109/CogMI56440.2022.00012
Chinese Library Classification (CLC): TP18 [Artificial Intelligence Theory]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract:
In recent years we have witnessed the diffusion of AI systems based on powerful Machine Learning models, which find application in many critical contexts such as medicine, financial markets, and credit scoring. In these contexts it is particularly important to design Trustworthy AI systems that guarantee both transparency of their decision reasoning and privacy protection. Although many works in the literature have addressed the lack of transparency and the privacy exposure risks of Machine Learning models, the privacy risks of explainers have not been appropriately studied. This paper presents a methodology for evaluating the privacy exposure raised by interpretable global explainers that are able to imitate the original black-box classifier. Our methodology exploits the well-known Membership Inference Attack. The experimental results highlight that global explainers based on interpretable trees lead to an increase in privacy exposure.
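The record gives only this high-level description of the pipeline: train a black-box classifier, distill it into an interpretable global surrogate, and run a Membership Inference Attack (MIA) against each to compare their exposure. The sketch below illustrates that pipeline under assumptions that are not from the paper: synthetic data, a random-forest black box, a scikit-learn decision tree as the global explainer, and the simple correctness-based membership attack (guess "member" when the model classifies a record correctly) standing in for the shadow-model MIA the abstract most likely refers to.

```python
# Minimal sketch, NOT the paper's implementation: the data, models, and the
# correctness-based attack below are all illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=4000, n_features=20, n_informative=8,
                           random_state=0)
# "Members" are the records the black box was trained on; "non-members"
# were never seen by it.
X_in, X_out, y_in, y_out = train_test_split(X, y, test_size=0.5, random_state=0)

# 1) The black-box classifier under scrutiny.
black_box = RandomForestClassifier(n_estimators=100, random_state=0)
black_box.fit(X_in, y_in)

# 2) An interpretable global explainer: a decision tree trained to imitate
#    the black box by fitting its *predicted* labels on the member records.
explainer = DecisionTreeClassifier(random_state=0)
explainer.fit(X_in, black_box.predict(X_in))

def mia_advantage(model):
    """Membership advantage of the correctness attack: guess 'member' iff
    the model classifies a record correctly. Advantage = TPR - FPR;
    values above zero indicate leakage about the training set."""
    tpr = (model.predict(X_in) == y_in).mean()    # members flagged as members
    fpr = (model.predict(X_out) == y_out).mean()  # non-members flagged as members
    return tpr - fpr

print(f"MIA advantage vs. black box: {mia_advantage(black_box):.3f}")
print(f"MIA advantage vs. explainer: {mia_advantage(explainer):.3f}")
```

Because the unpruned surrogate tree memorizes the black box's behavior on the member records, its train/test gap, and therefore the attack's advantage, is typically larger than the black box's own, which is the qualitative effect the abstract reports.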
Pages: 13-19
Page count: 7
Related Papers (10 of 50 shown):
  • [1] Responsible Music Genre Classification Using Interpretable Model-Agnostic Visual Explainers. Murindanyi, Sudi; Hamza, Kyamanywa; Kagumire, Sulaiman; Marvin, Ggaliwango. SN Computer Science, 6(1).
  • [2] Interpretable Privacy with Optimizable Utility. Ramon, Jan; Basu, Moitree. ECML PKDD 2020 Workshops, 2020, 1323: 492-500.
  • [3] Evaluating natural medicinal resources and their exposure to global change. Theodoridis, Spyros; Drakou, Evangelia G.; Hickler, Thomas; Thines, Marco; Nogues-Bravo, David. The Lancet Planetary Health, 2023, 7(2): e155-e163.
  • [4] Informational Privacy, A Right to Explanation, and Interpretable AI. Kim, Tae Wan; Routledge, Bryan R. 2018 IEEE Symposium on Privacy-Aware Computing (PAC), 2018: 64-66.
  • [5] Towards Adaptive Privacy Protection for Interpretable Federated Learning. Li, Zhe; Chen, Honglong; Ni, Zhichen; Gao, Yudong; Lou, Wei. IEEE Transactions on Mobile Computing, 2024, 23(12): 14471-14483.
  • [6] Evaluating interpretable machine learning predictions for cryptocurrencies. El Majzoub, Ahmad; Rabhi, Fethi A.; Hussain, Walayat. Intelligent Systems in Accounting, Finance & Management, 2023, 30(3): 137-149.
  • [7] Evaluating privacy: determining user privacy expectations on the web. Pilton, Callum; Faily, Shamal; Henriksen-Bulmer, Jane. Computers & Security, 2021, 105.
  • [8] Interpretable Machine Learning for Privacy-Preserving Pervasive Systems. Baron, Benjamin; Musolesi, Mirco. IEEE Pervasive Computing, 2020, 19(1): 73-82.
  • [9] FedSkill: Privacy Preserved Interpretable Skill Learning via Imitation. Jiang, Yushan; Yu, Wenchao; Song, Dongjin; Wang, Lu; Cheng, Wei; Chen, Haifeng. Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD 2023), 2023: 1010-1019.
  • [10] Evaluating privacy impact assessments. Wadhwa, Kush; Rodrigues, Rowena. Innovation: The European Journal of Social Science Research, 2013, 26(1-2): 161-180.