Learning transferable targeted universal adversarial perturbations by sequential meta-learning

Citations: 0
Authors
Weng, Juanjuan [1 ]
Luo, Zhiming [1 ]
Lin, Dazhen [1 ]
Li, Shaozi [1 ,2 ]
Affiliations
[1] Xiamen Univ, Dept Artificial Intelligence, Xiamen 361005, Peoples R China
[2] Wuyi Univ, Fujian Key Lab Big Data Applicat & Intellectualiza, Wuyishan 354300, Peoples R China
Keywords
Targeted adversarial attacks; Model-agnostic meta-learning; Data-free universal adversarial perturbations; Transfer-based black-box attacks;
DOI
10.1016/j.cose.2023.103584
CLC Classification Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812 ;
Abstract
Recently, the transferability of adversarial perturbations in non-targeted scenarios has been extensively studied. However, shifting the predictions of an unknown model to a pre-defined 'targeted' class remains challenging. In this study, we aim to learn targeted universal adversarial perturbations (UAPs) with higher transferability from an ensemble of multiple models. First, we observe that in existing ensemble-based attacks the logit of the target class is biased toward a specific white-box model. To address this issue, we propose a normalized logit loss that narrows the margin between the target class's logits across different models. In addition, we introduce a novel sequential meta-learning optimization strategy, consisting of an inner loop and an outer loop, to further increase transferability. In the inner loop, we sequentially learn a task-specific targeted UAP for each source model, jointly taking into account the perturbation from the previous model. In the outer loop, we optimize the task-agnostic targeted UAP by combining the targeted UAPs from the inner loop. Experimental results demonstrate the mutual benefits of the normalized logit loss and the sequential meta-learning optimization strategy for learning targeted adversarial perturbations, outperforming existing ensemble attacks in both white-box and black-box settings. The source code of this study is available at: Link.
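The abstract describes two components: a normalized logit loss that keeps any single white-box model from dominating the target-class logit, and a sequential inner/outer-loop optimization of the UAP across source models. The snippet below is a minimal PyTorch sketch of that flow under my own assumptions, not the authors' released implementation: the L2 normalization of the logits, the averaging-style outer update, and the hyperparameters (eps, inner_lr, outer_lr, epochs) are illustrative choices, and the function names (normalized_target_logit_loss, sequential_meta_uap) are hypothetical.

import torch

def normalized_target_logit_loss(logits, target):
    # Assumption: scale each sample's logit vector to unit L2 norm so every
    # source model contributes on a comparable scale, then maximize the
    # target-class logit (i.e., minimize its negative).
    z = logits / logits.norm(dim=1, keepdim=True).clamp_min(1e-12)
    return -z.gather(1, target.view(-1, 1)).mean()

def sequential_meta_uap(models, loader, target_class, eps=16 / 255,
                        inner_lr=1 / 255, outer_lr=1 / 255, epochs=5, device="cpu"):
    # models: frozen classifiers in eval mode; loader yields (images, labels).
    # delta is the single universal perturbation shared by all images.
    delta = torch.zeros(1, 3, 224, 224, device=device)
    for _ in range(epochs):
        for x, _ in loader:
            x = x.to(device)
            tgt = torch.full((x.size(0),), target_class, dtype=torch.long, device=device)
            d, inner_deltas = delta.clone(), []
            # Inner loop: each source model refines the perturbation
            # produced by the previous one (task-specific targeted UAPs).
            for m in models:
                d = d.detach().requires_grad_(True)
                loss = normalized_target_logit_loss(m((x + d).clamp(0, 1)), tgt)
                grad, = torch.autograd.grad(loss, d)
                d = (d - inner_lr * grad.sign()).clamp(-eps, eps)
                inner_deltas.append(d.detach())
            # Outer loop: move the task-agnostic UAP toward the combination
            # (here, the mean) of the inner-loop UAPs, an assumed
            # Reptile-style meta update.
            meta_d = torch.stack(inner_deltas).mean(dim=0)
            delta = (delta + outer_lr * (meta_d - delta)).clamp(-eps, eps)
    return delta.detach()

Called with, for example, two or three ImageNet classifiers in eval mode and a small image loader, the sketch returns one perturbation tensor that can be added to any input to push it toward the chosen target class.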
Pages: 13