Is Machine Learning Model Checking Privacy Preserving?

Cited: 0
Authors
Bortolussi, Luca [1 ]
Nenzi, Laura [1 ]
Saveri, Gaia [1 ,2 ]
Silvetti, Simone [1 ,3 ]
Affiliations
[1] Univ Trieste, Trieste, Italy
[2] Univ Pisa, Pisa, Italy
[3] Esteco SpA, Trieste, Italy
Keywords
Signal Temporal Logic; Learning Model Checking; Privacy; Time Series Analysis
DOI
10.1007/978-3-031-75107-3_9
CLC number
TP [Automation Technology, Computer Technology]
Discipline code
0812
Abstract
Model checking, which formally verifies whether a system satisfies a given behaviour or property, is typically tackled by algorithms that require knowledge of the system under analysis. To address this drawback, machine learning model checking has been proposed as a powerful approach that casts the model checking problem as an optimization problem in which a predictor is learnt in a continuous latent space capturing the semantics of formulae. More specifically, a kernel for Signal Temporal Logic (STL) is introduced, so that features of specifications are extracted automatically via the kernel trick. This makes it possible to verify a new formula without access to a (generative) model of the system, using only a given set of formulae and their satisfaction values, potentially yielding a privacy-preserving method for querying specifications of a system without granting access to it. This paper investigates the feasibility of this approach, quantifying the amount of information about the checked system that machine learning model checking leaks. The analysis is carried out for STL under different training regimes.
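The learning-based model checking idea described in the abstract can be illustrated with a minimal sketch: embed each formula by its robustness signature over a bank of sampled trajectories, define a kernel as the inner product of those signatures, and fit a kernel ridge regressor from training formulae to their satisfaction probabilities. Everything below (the one-parameter predicate family `G(x > c)`, the trajectory distributions, and the regularization constant) is an illustrative assumption, not the paper's actual construction.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy one-parameter "formula" family: G(x > c), whose quantitative
# robustness on a trajectory x is min(x) - c.
def robustness(c, traj):
    return traj.min() - c

# Bank of trajectories used only to embed formulae (kernel feature map).
bank = rng.normal(0.0, 1.0, size=(200, 50))

def embed(c):
    # Feature map: vector of robustness values across the trajectory bank.
    return np.array([robustness(c, t) for t in bank])

def kernel(c1, c2):
    # STL-kernel-style inner product of robustness signatures.
    return embed(c1) @ embed(c2) / len(bank)

# Hidden "system" the verifier never exposes: the training data are only
# formulae paired with their empirical satisfaction probabilities.
system = rng.normal(0.5, 1.0, size=(500, 50))
train_c = np.linspace(-3.0, 0.0, 20)
train_y = np.array([(system.min(axis=1) > c).mean() for c in train_c])

# Kernel ridge regression: solve (K + lam*I) alpha = y.
K = np.array([[kernel(a, b) for b in train_c] for a in train_c])
lam = 1e-3
alpha = np.linalg.solve(K + lam * np.eye(len(train_c)), train_y)

def predict(c):
    # Predicted satisfaction probability of the unseen formula G(x > c).
    k_vec = np.array([kernel(c, b) for b in train_c])
    return float(k_vec @ alpha)
```

The privacy question the paper studies arises exactly here: the trained predictor is built only from formula/satisfaction pairs, yet its answers may still reveal information about the hidden trajectory distribution of the system.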
Pages: 139-155
Page count: 17
Related papers
50 in total
  • [31] Privacy-Preserving Machine Learning: Threats and Solutions
    Al-Rubaie, Mohammad
    Chang, J. Morris
    IEEE SECURITY & PRIVACY, 2019, 17 (02) : 49 - 58
  • [32] Privacy Preserving Machine Learning for Malicious URL Detection
    Shaik, Imtiyazuddin
    Emmadi, Nitesh
    Tupsamudre, Harshal
    Narumanchi, Harika
    Bhattachar, Rajan Mindigal Alasingara
    DATABASE AND EXPERT SYSTEMS APPLICATIONS - DEXA 2021 WORKSHOPS, 2021, 1479 : 31 - 41
  • [33] A Review of Privacy-Preserving Machine Learning Classification
    Wang, Andy
    Wang, Chen
    Bi, Meng
    Xu, Jian
    CLOUD COMPUTING AND SECURITY, PT IV, 2018, 11066 : 671 - 682
  • [34] Privacy Preserving Machine Learning with Limited Information Leakage
    Tang, Wenyi
    Qin, Bo
    Zhao, Suyun
    Zhao, Boning
    Xue, Yunzhi
    Chen, Hong
    NETWORK AND SYSTEM SECURITY, NSS 2019, 2019, 11928 : 352 - 370
  • [35] Challenges of Privacy-Preserving Machine Learning in IoT
    Zheng, Mengyao
    Xu, Dixing
    Jiang, Linshan
    Gu, Chaojie
    Tan, Rui
    Cheng, Peng
    PROCEEDINGS OF THE 2019 INTERNATIONAL WORKSHOP ON CHALLENGES IN ARTIFICIAL INTELLIGENCE AND MACHINE LEARNING FOR INTERNET OF THINGS (AICHALLENGEIOT '19), 2019, : 1 - 7
  • [36] Cryptographic Approaches for Privacy-Preserving Machine Learning
    Jiang Han
    Liu Yiran
    Song Xiangfu
    Wang Hao
    Zheng Zhihua
    Xu Qiuliang
    JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY, 2020, 42 (05) : 1068 - 1078
  • [37] Toward Verifiable and Privacy Preserving Machine Learning Prediction
    Niu, Chaoyue
    Wu, Fan
    Tang, Shaojie
    Ma, Shuai
    Chen, Guihai
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2022, 19 (03) : 1703 - 1721
  • [38] Federated Learning for Privacy-Preserving Machine Learning in IoT Networks
    Anitha, G.
    Jegatheesan, A.
    2024 SECOND INTERNATIONAL CONFERENCE ON INTELLIGENT CYBER PHYSICAL SYSTEMS AND INTERNET OF THINGS, ICOICI 2024, 2024, : 338 - 342
  • [39] Privacy preserving perceptron learning in malicious model
    Zhang, Yuan
    Zhong, Sheng
    NEURAL COMPUTING & APPLICATIONS, 2013, 23 (3-4): 843 - 856