Canary Extraction in Natural Language Understanding Models

Cited by: 0
Authors
Parikh, Rahil [1 ]
Dupuy, Christophe [2 ]
Gupta, Rahul [2 ]
Affiliations
[1] Univ Maryland, Inst Syst Res, Baltimore, MD 21201 USA
[2] Amazon Alexa AI, New York, NY USA
Keywords
DOI
N/A
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Natural Language Understanding (NLU) models can be trained on sensitive information such as phone numbers, zip codes, etc. Recent literature has focused on Model Inversion Attacks (ModIvA) that can extract training data from model parameters. In this work, we present a version of such an attack by extracting canaries inserted in NLU training data. In the attack, an adversary with open-box access to the model reconstructs the canaries contained in the model's training set. We evaluate our approach by performing text completion on canaries and demonstrate that, using only the prefix (non-sensitive) tokens of a canary, we can generate the full canary. For example, in its best configuration, our attack reconstructs a four-digit code in the NLU model's training dataset with a probability of 0.5. As countermeasures, we identify several defense mechanisms that, when combined, effectively eliminate the risk of ModIvA in our experiments.
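The prefix-completion idea in the abstract can be sketched with a toy stand-in model; this is an illustrative assumption, not the paper's NLU architecture or attack: a simple bigram model memorizes a canary inserted into its training data, and greedy completion from the canary's non-sensitive prefix leaks the secret suffix.

```python
from collections import defaultdict

def train_bigram(sentences):
    # Count next-token frequencies per token (a stand-in for model training).
    counts = defaultdict(lambda: defaultdict(int))
    for s in sentences:
        tokens = s.split()
        for a, b in zip(tokens, tokens[1:]):
            counts[a][b] += 1
    return counts

def complete(model, prefix_tokens, max_new=4):
    # Greedy text completion: repeatedly append the most frequent next token.
    tokens = list(prefix_tokens)
    for _ in range(max_new):
        successors = model.get(tokens[-1])
        if not successors:
            break
        tokens.append(max(successors, key=successors.get))
    return tokens

# Training data with one inserted canary holding a secret four-digit code.
corpus = [
    "play some jazz music",
    "set an alarm for seven",
    "my code is 4 8 1 5",  # canary: known prefix "my code is" + secret digits
]
model = train_bigram(corpus)

# The attacker knows only the non-sensitive prefix; completion leaks the secret.
recovered = complete(model, ["my", "code", "is"])
print(" ".join(recovered))  # my code is 4 8 1 5
```

A real NLU model memorizes rare sequences in its parameters rather than in explicit counts, but the attack surface is the same: a unique canary makes its suffix the highest-likelihood continuation of its prefix.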
Pages: 552 - 560
Page count: 9
Related Papers
(50 records)
  • [11] Understanding models understanding language
    Sogaard, Anders
    [J]. SYNTHESE, 2022, 200 (06)
  • [13] Natural Language Understanding
    Di Sciullo, Anna Maria
    [J]. NEW TRENDS IN SOFTWARE METHODOLOGIES, TOOLS AND TECHNIQUES, 2009, 199 : 551 - 563
  • [14] UNDERSTANDING NATURAL LANGUAGE
    WINOGRAD, T
    [J]. COGNITIVE PSYCHOLOGY, 1972, 3 (01) : 1 - 191
  • [15] Revisiting Pre-trained Language Models and their Evaluation for Arabic Natural Language Understanding
    Ghaddar, Abbas
    Wu, Yimeng
    Bagga, Sunyam
    Rashid, Ahmad
    Bibi, Khalil
    Rezagholizadeh, Mehdi
    Xing, Chao
    Wang, Yasheng
    Duan, Xinyu
    Wang, Zhefeng
    Huai, Baoxing
    Jiang, Xin
    Liu, Qun
    Langlais, Philippe
    [J]. arXiv, 2022.
  • [16] The language of thought and natural language understanding
    Knowles, J
    [J]. ANALYSIS, 1998, 58 (04) : 264 - 272
  • [17] Calibration of Natural Language Understanding Models with Venn-ABERS Predictors
    Giovannotti, Patrizio
    [J]. CONFORMAL AND PROBABILISTIC PREDICTION WITH APPLICATIONS, VOL 179, 2022, 179
  • [18] CommonsenseVIS: Visualizing and Understanding Commonsense Reasoning Capabilities of Natural Language Models
    Wang, Xingbo
    Huang, Renfei
    Jin, Zhihua
    Fang, Tianqing
    Qu, Huamin
    [J]. IEEE TRANSACTIONS ON VISUALIZATION AND COMPUTER GRAPHICS, 2024, 30 (01) : 273 - 283
  • [19] Comparison of alignment templates and maximum entropy models for natural language understanding
    Bender, O
    Macherey, K
    Och, FJ
    Ney, H
    [J]. EACL 2003: 10TH CONFERENCE OF THE EUROPEAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, PROCEEDINGS OF THE CONFERENCE, 2003, : 11 - 18
  • [20] Evaluation of Sentence Embedding Models for Natural Language Understanding Problems in Russian
    Popov, Dmitry
    Pugachev, Alexander
    Svyatokum, Polina
    Svitanko, Elizaveta
    Artemova, Ekaterina
    [J]. ANALYSIS OF IMAGES, SOCIAL NETWORKS AND TEXTS, AIST 2019, 2019, 11832 : 205 - 217