Scalable Universal Adversarial Watermark Defending Against Facial Forgery

Cited: 0
Authors
Qiao, Tong [1 ]
Zhao, Bin [1 ]
Shi, Ran [2 ]
Han, Meng [3 ]
Hassaballah, Mahmoud [4 ,5 ]
Retraint, Florent [6 ]
Luo, Xiangyang [7 ]
Affiliations
[1] Hangzhou Dianzi Univ, Sch Cyberspace, Hangzhou 310018, Peoples R China
[2] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210094, Peoples R China
[3] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310058, Peoples R China
[4] Prince Sattam Bin Abdulaziz Univ, Coll Comp Engn & Sci, Dept Comp Sci, Alkharj 16278, Saudi Arabia
[5] South Valley Univ, Dept Comp Sci, Qena 83523, Egypt
[6] Univ Technol Troyes, Lab Comp Sci & Digital Soc, F-10004 Troyes, France
[7] State Key Lab Math Engn & Adv Comp, Zhengzhou 450001, Peoples R China
Fund
National Natural Science Foundation of China;
Keywords
Watermarking; Forgery; Predictive models; Generative adversarial networks; Computational modeling; Perturbation methods; Detectors; GAN forgery model; active defense; adversarial watermark; scalability;
DOI
10.1109/TIFS.2024.3460387
CLC Number
TP301 [Theory, Methods];
Subject Classification Code
081202 ;
Abstract
The illegal use of facial forgery models, such as content synthesized by Generative Adversarial Networks (GANs), has been on the rise, posing great threats to personal reputation and national security. To mitigate these threats, recent studies have proposed adversarial watermarks as countermeasures against GANs, effectively disrupting their outputs. However, most of these adversarial watermarks have a very limited defense range, protecting against only a single GAN forgery model. Although some universal adversarial watermarks have demonstrated impressive results, they lack defense scalability when a newly emerging forgery model appears. To address this issue, we propose a scalable approach that works even when the original forgery models are unknown. Specifically, we introduce a watermark expansion scheme that mainly involves inheriting, defense, and constraint steps. On the one hand, the proposed method effectively inherits the defense range of the prior well-trained adversarial watermark; on the other hand, it can defend against a new forgery model. Extensive experimental results validate the efficacy of the proposed method, which exhibits superior performance and reduced computational time compared to the state of the art.
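The expansion scheme described in the abstract (inherit the prior watermark, defend against the new model, constrain the update) can be illustrated with a toy sketch. Note this is only an illustrative assumption, not the paper's actual algorithm: the "forgery model" here is a stand-in linear map `W_new`, disruption is measured as the change it induces, and the inheritance constraint is a simple proximity penalty to the prior watermark `delta_prior`; the function name and all parameters are hypothetical.

```python
import numpy as np

def expand_watermark(delta_prior, W_new, eps=0.05, lam=0.5, steps=100, lr=0.01):
    """Toy watermark-expansion step (hypothetical sketch).

    Starts from the prior universal watermark (inheriting step), runs
    gradient ascent so the watermark also disrupts a new linear "forgery
    model" W_new (defense step), and keeps the result imperceptible and
    close to the prior (constraint step).
    """
    delta = delta_prior.copy()
    for _ in range(steps):
        # Ascend on disruption ||W_new @ delta||^2 while penalizing
        # drift away from the inherited watermark.
        grad = 2.0 * W_new.T @ (W_new @ delta) - 2.0 * lam * (delta - delta_prior)
        delta = delta + lr * grad
        # Project back into the L_inf ball to keep the watermark imperceptible.
        delta = np.clip(delta, -eps, eps)
    return delta
```

In the actual method, the linear map would be replaced by gradients through the new GAN forgery model, and the proximity penalty by whatever constraint preserves the inherited defense range; this sketch only shows the shape of the three-step update.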
Pages: 8998-9011
Page count: 14
Related Papers
50 records in total
  • [41] Defending against Whitebox Adversarial Attacks via Randomized Discretization
    Zhang, Yuchen
    Liang, Percy
    22ND INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 89, 2019, 89 : 684 - 693
  • [42] Defending against adversarial examples using perceptual image hashing
    Wu, Ke
    Wang, Zichi
    Zhang, Xinpeng
    Tang, Zhenjun
    JOURNAL OF ELECTRONIC IMAGING, 2023, 32 (02)
  • [43] Defending Network IDS against Adversarial Examples with Continual Learning
    Kozal, Jedrzej
    Zwolinska, Justyna
    Klonowski, Marek
    Wozniak, Michal
    2023 23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS, ICDMW 2023, 2023, : 60 - 69
  • [44] Defending Wireless Receivers Against Adversarial Attacks on Modulation Classifiers
    de Araujo-Filho, Paulo Freitas
    Kaddoum, Georges
    Chiheb Ben Nasr, Mohamed
    Arcoverde, Henrique F.
    Campelo, Divanilson R.
    IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (21) : 19153 - 19162
  • [45] Defending against Adversarial Samples without Security through Obscurity
    Guo, Wenbo
    Wang, Qinglong
    Zhang, Kaixuan
    Ororbia, Alexander G., II
    Huang, Sui
    Liu, Xue
    Giles, C. Lee
    Lin, Lin
    Xing, Xinyu
    2018 IEEE INTERNATIONAL CONFERENCE ON DATA MINING (ICDM), 2018, : 137 - 146
  • [46] GNNGUARD: Defending Graph Neural Networks against Adversarial Attacks
    Zhang, Xiang
    Zitnik, Marinka
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [47] DiffDefense: Defending Against Adversarial Attacks via Diffusion Models
    Silva, Hondamunige Prasanna
    Seidenari, Lorenzo
    Del Bimbo, Alberto
    IMAGE ANALYSIS AND PROCESSING, ICIAP 2023, PT II, 2023, 14234 : 430 - 442
  • [48] Defending Against Adversarial Attacks via Neural Dynamic System
    Li, Xiyuan
    Zou, Xin
    Liu, Weiwei
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35, NEURIPS 2022, 2022,
  • [49] Defending Against Unforeseen Failure Modes with Latent Adversarial Training
    Casper, Stephen
    Schulze, Lennart
    Patel, Oam
    Hadfield-Menell, Dylan
    arXiv,
  • [50] DeT: Defending Against Adversarial Examples via Decreasing Transferability
    Li, Changjiang
    Weng, Haiqin
    Ji, Shouling
    Dong, Jianfeng
    He, Qinming
    CYBERSPACE SAFETY AND SECURITY, PT I, 2020, 11982 : 307 - 322