Canary Extraction in Natural Language Understanding Models

Cited by: 0
Authors
Parikh, Rahil [1 ]
Dupuy, Christophe [2 ]
Gupta, Rahul [2 ]
Affiliations
[1] Univ Maryland, Inst Syst Res, Baltimore, MD 21201 USA
[2] Amazon Alexa AI, New York, NY USA
Keywords
DOI
(not available)
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Natural Language Understanding (NLU) models can be trained on sensitive information such as phone numbers, zip codes, etc. Recent literature has focused on Model Inversion Attacks (ModIvA) that can extract training data from model parameters. In this work, we present a version of such an attack by extracting canaries inserted in NLU training data. In the attack, an adversary with open-box access to the model reconstructs the canaries contained in the model's training set. We evaluate our approach by performing text completion on canaries and demonstrate that by using the prefix (non-sensitive) tokens of the canary, we can generate the full canary. As an example, our attack is able to reconstruct a four-digit code in the training dataset of the NLU model with a probability of 0.5 in its best configuration. As countermeasures, we identify several defense mechanisms that, when combined, effectively eliminate the risk of ModIvA in our experiments.
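The general idea described in the abstract (insert a secret "canary" utterance into training data, then reconstruct it by text completion from its non-sensitive prefix) can be illustrated with a toy sketch. This is not the paper's actual attack, which targets an NLU model's parameters with open-box access; here a trivial bigram language model stands in for the trained model, and the canary prefix, secret, and corpus are all invented for illustration.

```python
from collections import Counter, defaultdict

# Hypothetical canary: a non-sensitive prefix followed by a secret code.
CANARY_PREFIX = "my code is"
CANARY_SECRET = "4 8 1 5"

# Tiny made-up training corpus with the canary inserted once.
corpus = [
    "play some music",
    "tell me the weather",
    "set an alarm for seven",
    f"{CANARY_PREFIX} {CANARY_SECRET}",
]

# "Train" a bigram model: count next-token frequencies per token.
bigrams = defaultdict(Counter)
for sent in corpus:
    tokens = sent.split() + ["<eos>"]
    for cur, nxt in zip(tokens, tokens[1:]):
        bigrams[cur][nxt] += 1

def complete(prefix: str, max_new: int = 10) -> str:
    """Greedy completion: repeatedly append the most likely next token."""
    tokens = prefix.split()
    for _ in range(max_new):
        candidates = bigrams[tokens[-1]].most_common(1)
        if not candidates or candidates[0][0] == "<eos>":
            break
        tokens.append(candidates[0][0])
    return " ".join(tokens)

# The adversary knows only the prefix; completion leaks the secret.
recovered = complete(CANARY_PREFIX)
print(recovered)  # -> my code is 4 8 1 5
```

The sketch shows why canary extraction works: once the secret is memorized, the model's most likely continuation of the prefix reproduces it verbatim. The paper's measured success probability of 0.5 for a four-digit code reflects the same phenomenon in a far larger hypothesis space.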
Pages: 552-560
Page count: 9
Related papers
50 items total
  • [31] Interpretable Natural Language Understanding
    He, Yulan
    [J]. PROCEEDINGS OF THE 32ND ACM INTERNATIONAL CONFERENCE ON INFORMATION AND KNOWLEDGE MANAGEMENT, CIKM 2023, 2023, : 1 - 2
  • [32] Large Language Models are Not Models of Natural Language: They are Corpus Models
    Veres, Csaba
    [J]. IEEE ACCESS, 2022, 10 : 61970 - 61979
  • [33] A Survey of Joint Intent Detection and Slot Filling Models in Natural Language Understanding
    Weld, Henry
    Huang, Xiaoqi
    Long, Siqu
    Poon, Josiah
    Han, Soyeon Caren
    [J]. ACM COMPUTING SURVEYS, 2023, 55 (08)
  • [34] Using Dialogue Corpora to Extend Information Extraction Patterns for Natural Language Understanding of Dialogue
    Catizone, Roberta
    Dingli, Alexiei
    Gaizauskas, Robert
    [J]. LREC 2010 - SEVENTH INTERNATIONAL CONFERENCE ON LANGUAGE RESOURCES AND EVALUATION, 2010, : 2136 - 2140
  • [35] Did the Models Understand Documents? Benchmarking Models for Language Understanding in Document-Level Relation Extraction
    Chen, Haotian
    Chen, Bingsheng
    Zhou, Xiangdong
    [J]. PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 6418 - 6435
  • [36] Understanding language understanding: Computational models of reading
    Dyer, MG
    [J]. TRENDS IN COGNITIVE SCIENCES, 2000, 4 (01) : 35 - 35
  • [37] Understanding by Understanding Not: Modeling Negation in Language Models
    Hosseini, Arian
    Reddy, Siva
    Bahdanau, Dzmitry
    Hjelm, R. Devon
    Sordoni, Alessandro
    Courville, Aaron
    [J]. 2021 CONFERENCE OF THE NORTH AMERICAN CHAPTER OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS: HUMAN LANGUAGE TECHNOLOGIES (NAACL-HLT 2021), 2021, : 1301 - 1312
  • [38] Language understanding using hidden understanding models
    Schwartz, R
    Miller, S
    Stallard, D
    Makhoul, J
    [J]. ICSLP 96 - FOURTH INTERNATIONAL CONFERENCE ON SPOKEN LANGUAGE PROCESSING, PROCEEDINGS, VOLS 1-4, 1996, : 997 - 1000
  • [39] The Importance of Understanding Language in Large Language Models
    Youssef, Alaa
    Stein, Samantha
    Clapp, Justin
    Magnus, David
    [J]. AMERICAN JOURNAL OF BIOETHICS, 2023, 23 (10): : 6 - 7
  • [40] Multiagent Neurocognitive Models of the Processes of Understanding the Natural Language Description of the Mission of Autonomous robots
    Nagoev, Z. V.
    Nagoeva, O. V.
    Pshenokova, I. A.
    Bzhikhatlov, K. Ch
    Gurtueva, I. A.
    Kankulov, S. A.
    [J]. BIOLOGICALLY INSPIRED COGNITIVE ARCHITECTURES 2021, 2022, 1032 : 327 - 332