Scalable Universal Adversarial Watermark Defending Against Facial Forgery

Cited by: 0
Authors
Qiao, Tong [1 ]
Zhao, Bin [1 ]
Shi, Ran [2 ]
Han, Meng [3 ]
Hassaballah, Mahmoud [4 ,5 ]
Retraint, Florent [6 ]
Luo, Xiangyang [7 ]
Affiliations
[1] Hangzhou Dianzi Univ, Sch Cyberspace, Hangzhou 310018, Peoples R China
[2] Nanjing Univ Sci & Technol, Sch Comp Sci & Engn, Nanjing 210094, Peoples R China
[3] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310058, Peoples R China
[4] Prince Sattam Bin Abdulaziz Univ, Coll Comp Engn & Sci, Dept Comp Sci, Alkharj 16278, Saudi Arabia
[5] South Valley Univ, Dept Comp Sci, Qena 83523, Egypt
[6] Univ Technol Troyes, Lab Comp Sci & Digital Soc, F-10004 Troyes, France
[7] State Key Lab Math Engn & Adv Comp, Zhengzhou 450001, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Watermarking; Forgery; Predictive models; Generative adversarial networks; Computational modeling; Perturbation methods; Detectors; GAN forgery model; active defense; adversarial watermark; scalability;
DOI
10.1109/TIFS.2024.3460387
CLC number
TP301 [Theory and Methods];
Subject classification code
081202;
Abstract
The illegal use of facial forgery models, such as content synthesized by Generative Adversarial Networks (GANs), has been on the rise, posing serious threats to personal reputation and national security. To mitigate these threats, recent studies have proposed adversarial watermarks as countermeasures against GANs, effectively disrupting their outputs. However, most of these adversarial watermarks have a very limited defense range, protecting against only a single GAN forgery model. Although some universal adversarial watermarks have demonstrated impressive results, they lack defense scalability when a newly emerging forgery model appears. To address this issue, we propose a scalable approach that works even when the original forgery models are unknown. Specifically, we introduce a watermark expansion scheme consisting mainly of inheritance, defense, and constraint steps. On the one hand, the proposed method effectively inherits the defense range of the prior well-trained adversarial watermark; on the other hand, it defends against a new forgery model. Extensive experimental results validate the efficacy of the proposed method, which exhibits superior performance and reduced computational time compared to state-of-the-art methods.
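The abstract describes a watermark expansion scheme built from inheritance, defense, and constraint steps. Below is a minimal, hypothetical PyTorch sketch of that general idea, not the authors' implementation: a prior universal watermark is warm-started (inheritance), optimized to disrupt a newly appeared forgery model (defense), and kept close to the prior watermark within an imperceptibility budget (constraint). The function name `expand_watermark`, the argument `new_forgery_model`, the loss terms, and all hyperparameters are illustrative assumptions.

```python
# A minimal sketch (not the paper's implementation) of expanding a universal
# adversarial watermark so it also disrupts a newly appeared forgery model
# while largely preserving the defense range inherited from the prior watermark.
import torch


def expand_watermark(prior_wm, new_forgery_model, images,
                     epsilon=8 / 255, lambda_inherit=1.0, steps=200, lr=1e-2):
    """Return an expanded universal watermark covering the new forgery model."""
    wm = prior_wm.clone().detach().requires_grad_(True)   # inheritance: warm start
    opt = torch.optim.Adam([wm], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        x = torch.clamp(images + wm, 0.0, 1.0)            # watermarked face images
        forged = new_forgery_model(x)                       # new forgery model output

        # defense: maximize distortion of the forged output relative to the
        # result obtained on clean (unwatermarked) faces
        defense_loss = -torch.mean((forged - new_forgery_model(images)) ** 2)

        # constraint: stay close to the prior watermark so its defense range
        # against earlier forgery models is largely preserved
        inherit_loss = lambda_inherit * torch.mean((wm - prior_wm) ** 2)

        (defense_loss + inherit_loss).backward()
        opt.step()

        with torch.no_grad():                               # keep imperceptibility
            wm.clamp_(-epsilon, epsilon)

    return wm.detach()
```

In this sketch, the trade-off between defending the new forgery model and inheriting the prior defense range is controlled by `lambda_inherit`, while the L-infinity budget `epsilon` keeps the watermark visually imperceptible.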
Pages: 8998-9011
Number of pages: 14