Generating Robust Adversarial Examples against Online Social Networks (OSNs)

Cited: 0
Authors
Liu, Jun [1 ]
Zhou, Jiantao [1 ]
Wu, Haiwei [1 ]
Sun, Weiwei [2 ]
Tian, Jinyu [3 ]
Affiliations
[1] Univ Macau, Fac Sci & Technol, Dept Comp & Informat Sci, State Key Lab Internet Things Smart City, Univ Ave, Taipa 999078, Macau, Peoples R China
[2] Alibaba Grp, 699 Wangshang Rd, Hangzhou 310052, Zhejiang, Peoples R China
[3] Macau Univ Sci & Technol, Sch Comp Sci & Engn, Fac Innovat Engn, Weilong Rd, Taipa 999078, Macau, Peoples R China
Keywords
Adversarial examples; adversarial images; robustness; online social networks; deep neural networks
DOI
10.1145/3632528
Chinese Library Classification (CLC)
TP [automation and computer technology]
Discipline Code
0812
Abstract
Online Social Networks (OSNs) have become prevailing transmission channels for images in the modern era. Adversarial examples (AEs), deliberately designed to mislead deep neural networks (DNNs), turn out to be fragile against the inevitable lossy operations performed by OSNs. As a result, AEs tend to lose their attack capability after being transmitted over an OSN. In this work, we aim to design a new framework for generating robust AEs that can survive OSN transmission; namely, the AEs both before and after OSN transmission possess strong attack capability. To this end, we first propose a differentiable network, termed SImulated OSN (SIO), to simulate the various operations performed by an OSN. Specifically, the SIO network consists of two modules: (1) a differentiable JPEG layer approximating the ubiquitous JPEG compression, and (2) an encoder-decoder subnetwork mimicking the remaining operations. Building upon the SIO network, we then formulate an optimization framework that generates robust AEs by enforcing that the target model is misled both with and without the SIO in the processing path. Extensive experiments conducted over Facebook, WeChat, and QQ demonstrate that our attack methods produce more robust AEs than existing approaches, especially under small distortion constraints; the performance gain in terms of Attack Success Rate (ASR) can exceed 60%. Furthermore, we build a public dataset containing more than 10,000 pairs of AEs processed by Facebook, WeChat, or QQ, facilitating future research on robust AE generation. The dataset and code are available at https://github.com/csjunjun/RobustOSNAttack.git.
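The joint objective described in the abstract (misleading the model both with and without the SIO in the loop) can be sketched as follows; note that the exact loss form, the balancing weight $\lambda$, and the symbol names are assumptions for illustration, since the abstract does not give the precise formulation:

```latex
% x: clean image, \delta: adversarial perturbation bounded by \epsilon,
% f: target classifier, y: ground-truth label,
% S: the differentiable SIO network simulating OSN processing,
% \mathcal{L}_{adv}: an attack loss (e.g., cross-entropy to be maximized),
% \lambda: a balancing weight between the two branches (assumed).
\max_{\|\delta\|_{\infty} \le \epsilon} \;
  \mathcal{L}_{adv}\bigl(f(x + \delta),\, y\bigr)
  \;+\; \lambda\, \mathcal{L}_{adv}\bigl(f(S(x + \delta)),\, y\bigr)
```

Because $S$ is differentiable by construction, the second term can be optimized with standard gradient-based attacks, which is presumably what makes the SIO simulation useful during AE generation.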
Pages: 26
Related Papers
50 records
  • [21] Towards a Robust Classifier: An MDL-Based Method for Generating Adversarial Examples
    Asadi, Behzad
    Varadharajan, Vijay
    2020 IEEE 19TH INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS (TRUSTCOM 2020), 2020, : 793 - 801
  • [22] Pruning Adversarially Robust Neural Networks without Adversarial Examples
    Jian, Tong
    Wang, Zifeng
    Wang, Yanzhi
    Dy, Jennifer
    Ioannidis, Stratis
    2022 IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2022, : 993 - 998
  • [23] Simplicial-Map Neural Networks Robust to Adversarial Examples
    Paluzo-Hidalgo, Eduardo
    Gonzalez-Diaz, Rocio
    Gutierrez-Naranjo, Miguel A.
    Heras, Jonathan
    MATHEMATICS, 2021, 9 (02) : 1 - 16
  • [24] Generating Adversarial Examples With Conditional Generative Adversarial Net
    Yu, Ping
    Song, Kaitao
    Lu, Jianfeng
    2018 24TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2018, : 676 - 681
  • [25] Generating Watermarked Speech Adversarial Examples
    Wang, Yumin
    Ye, Jingyu
    Wu, Hanzhou
    PROCEEDINGS OF ACM TURING AWARD CELEBRATION CONFERENCE, ACM TURC 2021, 2021, : 254 - 260
  • [26] Generating Adversarial Examples With Shadow Model
    Zhang, Rui
    Xia, Hui
    Hu, Chunqiang
    Zhang, Cheng
    Liu, Chao
    Xiao, Fu
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2022, 18 (09) : 6283 - 6289
  • [27] Generating Natural Language Adversarial Examples
    Alzantot, Moustafa
    Sharma, Yash
    Elgohary, Ahmed
    Ho, Bo-Jhang
    Srivastava, Mani B.
    Chang, Kai-Wei
    2018 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2018), 2018, : 2890 - 2896
  • [28] Synthesizing Robust Adversarial Examples
    Athalye, Anish
    Engstrom, Logan
    Ilyas, Andrew
    Kwok, Kevin
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 80, 2018, 80
  • [29] Robust adversarial examples against scale transformation via generative network
    Liu, Minjie
    Zhang, Xinpeng
    Feng, Guorui
    ELECTRONICS LETTERS, 2022, 58 (07) : 290 - 292
  • [30] FePN: A robust feature purification network to defend against adversarial examples
    Cao, Dongliang
    Wei, Kaimin
    Wu, Yongdong
    Zhang, Jilian
    Feng, Bingwen
    Chen, Jinpeng
    COMPUTERS & SECURITY, 2023, 134