Is Machine Learning Model Checking Privacy Preserving?

Cited: 0
|
Authors
Bortolussi, Luca [1 ]
Nenzi, Laura [1 ]
Saveri, Gaia [1 ,2 ]
Silvetti, Simone [1 ,3 ]
Affiliations
[1] Univ Trieste, Trieste, Italy
[2] Univ Pisa, Pisa, Italy
[3] Esteco SpA, Trieste, Italy
Keywords
Signal Temporal Logic; Learning Model Checking; Privacy; Time Series Analysis
DOI
10.1007/978-3-031-75107-3_9
CLC Number
TP [Automation and Computer Technology]
Discipline Code
0812
Abstract
Model checking, which formally verifies whether a system exhibits a certain behaviour or property, is typically tackled by algorithms that require knowledge of the system under analysis. To address this drawback, machine learning model checking has been proposed as a powerful approach that casts the model checking problem as an optimization problem, in which a predictor is learnt in a continuous latent space capturing the semantics of formulae. In more detail, a kernel for Signal Temporal Logic (STL) is introduced, so that features of specifications are extracted automatically via the kernel trick. This makes it possible to verify a new formula without access to a (generative) model of the system, using only a given set of formulae and their satisfaction values, potentially yielding a privacy-preserving method for querying specifications of a system without granting access to it. This paper investigates the feasibility of this approach by quantifying the amount of information about the checked system that leaks through machine learning model checking. The analysis is carried out for STL under different training regimes.
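The pipeline the abstract describes can be sketched in miniature: embed each formula through its quantitative robustness on a fixed set of base signals, build a kernel over those embeddings, and fit a kernel ridge regressor that predicts the satisfaction probability of a new formula without touching the system model at query time. The property family `F(x > c)`, the RBF kernel, and all constants below are simplified stand-ins chosen for illustration, not the STL kernel defined in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# "System": random-walk trajectories. A large sample defines the
# ground-truth satisfaction probabilities used for training labels.
walks = np.cumsum(rng.normal(size=(500, 50)), axis=1)

# Toy formula family: F(x > c), "eventually x exceeds c", indexed by c.
# Fixed base signals give each formula a finite feature vector:
# its robustness max_t x(t) - c on every base signal.
base = np.cumsum(rng.normal(size=(20, 50)), axis=1)
peaks = base.max(axis=1)

def embed(c):
    """Surrogate kernel features of F(x > c): robustness on base signals."""
    return peaks - c

def sat_prob(c):
    """Ground truth: fraction of trajectories satisfying F(x > c)."""
    return float((walks.max(axis=1) > c).mean())

def rbf(A, B, sigma=10.0):
    """RBF kernel between rows of A and rows of B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# Training set: formulae (thresholds) and their satisfaction values.
train_c = np.linspace(-2.0, 15.0, 30)
X = np.stack([embed(c) for c in train_c])
y = np.array([sat_prob(c) for c in train_c])

# Kernel ridge regression: alpha = (K + lam * I)^{-1} y.
K = rbf(X, X)
alpha = np.linalg.solve(K + 1e-3 * np.eye(len(train_c)), y)

def predict(c):
    """Check a new formula using only the learnt predictor."""
    k = rbf(embed(c)[None, :], X)
    return float(k @ alpha)

train_rmse = float(np.sqrt(((K @ alpha - y) ** 2).mean()))
pred = predict(5.0)
print(f"train RMSE: {train_rmse:.4f}")
print(f"predicted sat. prob. of F(x > 5): {pred:.3f} "
      f"(empirical: {sat_prob(5.0):.3f})")
```

Note that the verifier never sees `walks` once `alpha` is fixed; the privacy question the paper studies is precisely how much about that trajectory distribution can still be reconstructed from such a predictor.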
Pages: 139-155
Page count: 17
Related Papers
50 records in total
  • [1] Preserving Model Privacy for Machine Learning in Distributed Systems
    Jia, Qi
    Guo, Linke
    Jin, Zhanpeng
    Fang, Yuguang
    IEEE TRANSACTIONS ON PARALLEL AND DISTRIBUTED SYSTEMS, 2018, 29 (08) : 1808 - 1822
  • [2] Privacy Preserving Machine Learning Systems
    El Mestari, Soumia Zohra
    PROCEEDINGS OF THE 2022 AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY, AIES 2022, 2022, : 898 - 898
  • [3] Privacy-Preserving Machine Learning
    Chow, Sherman S. M.
    FRONTIERS IN CYBER SECURITY, 2018, 879 : 3 - 6
  • [4] Preserving User Privacy for Machine Learning: Local Differential Privacy or Federated Machine Learning?
    Zheng, Huadi
    Hu, Haibo
    Han, Ziyang
    IEEE INTELLIGENT SYSTEMS, 2020, 35 (04) : 5 - 14
  • [5] Privacy Preserving Extreme Learning Machine Classification Model for Distributed Systems
    Catak, Ferhat Ozgur
    Mustacoglu, Ahmet Fatih
    Topcu, Ahmet Ercan
    2016 24TH SIGNAL PROCESSING AND COMMUNICATION APPLICATION CONFERENCE (SIU), 2016, : 313 - 316
  • [6] Privacy preserving distributed machine learning with federated learning
    Chamikara, M. A. P.
    Bertok, P.
    Khalil, I.
    Liu, D.
    Camtepe, S.
    COMPUTER COMMUNICATIONS, 2021, 171 : 112 - 125
  • [7] Privacy-Preserving Machine Learning [Cryptography]
    Kerschbaum, Florian
    Lukas, Nils
    IEEE SECURITY & PRIVACY, 2023, 21 (06) : 90 - 94
  • [8] Measuring data privacy preserving and machine learning
    Gustavo Esquivel-Quiros, Luis
    Gabriela Barrantes, Elena
    Esponda Darlington, Fernando
    2018 7TH INTERNATIONAL CONFERENCE ON SOFTWARE PROCESS IMPROVEMENT (CIMPS): APPLICATIONS IN SOFTWARE ENGINEERING, 2018, : 85 - 94
  • [9] Survey on Privacy-Preserving Machine Learning
    Liu J.
    Meng X.
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2020, 57 (02): 346 - 362
  • [10] Soteria: Preserving Privacy in Distributed Machine Learning
    Brito, Claudia
    Ferreira, Pedro
    Portela, Bernardo
    Oliveira, Rui
    Paulo, Joao
    38TH ANNUAL ACM SYMPOSIUM ON APPLIED COMPUTING, SAC 2023, 2023, : 135 - 142