Structural Watermarking to Deep Neural Networks via Network Channel Pruning

Cited by: 5
Authors
Zhao, Xiangyu [1 ]
Yao, Yinzhe [1 ]
Wu, Hanzhou [1 ]
Zhang, Xinpeng [1 ]
Affiliations
[1] Shanghai Univ, Shanghai 200444, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Watermarking; deep neural networks; ownership protection; deep learning; security;
DOI
10.1109/WIFS53200.2021.9648376
CLC Classification
TP [Automation and Computer Technology];
Discipline Code
0812;
Abstract
To protect the intellectual property (IP) of deep neural networks (DNNs), many existing DNN watermarking techniques either embed watermarks directly into the DNN parameters or insert backdoor watermarks by fine-tuning the DNN parameters. However, such techniques cannot resist attack methods that remove watermarks by altering the DNN parameters. In this paper, we bypass such attacks by introducing a structural watermarking scheme that uses channel pruning to embed the watermark into the host DNN architecture rather than into the DNN parameters. Specifically, during watermark embedding, we prune the internal channels of the host DNN with channel pruning rates controlled by the watermark. During watermark extraction, the watermark is retrieved by identifying the channel pruning rates from the architecture of the target DNN model. Owing to the pruning mechanism, the performance of the DNN model on its original task is preserved during watermark embedding. Experimental results show that the proposed scheme enables the embedded watermark to be reliably recovered and provides a sufficient payload without sacrificing the usability of the DNN model. It is also demonstrated that the proposed scheme is robust against common transforms and attacks designed for conventional watermarking approaches.
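The embedding and extraction procedure described in the abstract can be sketched as follows. This is a minimal illustrative model, not the authors' exact construction: the 2-bits-per-layer grouping, the rate codebook values, and the nearest-rate decoding are all assumptions introduced here for clarity.

```python
# Hedged sketch: watermark bits select per-layer channel pruning rates
# (embedding), and the bits are recovered by identifying those rates from
# the pruned architecture (extraction). Codebook values are illustrative.

# Hypothetical codebook: each 2-bit symbol selects one pruning rate.
RATE_CODEBOOK = {"00": 0.125, "01": 0.25, "10": 0.375, "11": 0.5}
INV_CODEBOOK = {v: k for k, v in RATE_CODEBOOK.items()}

def embed(channel_counts, watermark_bits):
    """Prune each layer at the rate chosen by the next 2 watermark bits."""
    symbols = [watermark_bits[i:i + 2] for i in range(0, len(watermark_bits), 2)]
    assert len(symbols) <= len(channel_counts), "not enough layers for payload"
    pruned = list(channel_counts)
    for layer, sym in enumerate(symbols):
        rate = RATE_CODEBOOK[sym]
        # In a real system the lowest-importance channels (e.g. by L1 norm)
        # would be removed; here we only track the resulting channel count.
        pruned[layer] = channel_counts[layer] - int(channel_counts[layer] * rate)
    return pruned

def extract(original_counts, pruned_counts, n_bits):
    """Recover the watermark by identifying the per-layer pruning rates."""
    bits = ""
    for orig, kept in zip(original_counts, pruned_counts):
        if len(bits) >= n_bits:
            break
        rate = (orig - kept) / orig
        # Snap to the nearest codebook rate to tolerate small perturbations.
        nearest = min(INV_CODEBOOK, key=lambda r: abs(r - rate))
        bits += INV_CODEBOOK[nearest]
    return bits[:n_bits]

channels = [64, 128, 256, 512]   # hypothetical conv layer widths
wm = "0110"                      # 4-bit payload -> two layers carry symbols
pruned = embed(channels, wm)     # -> [48, 80, 256, 512]
assert extract(channels, pruned, len(wm)) == wm
```

Because the watermark lives in the architecture (the channel counts) rather than in the weights, fine-tuning or weight perturbation leaves the extracted bits unchanged, which is the robustness property the abstract claims.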
Pages: 14-19 (6 pages)
Related Papers
(50 records total)
  • [21] Digital watermarking for deep neural networks
    Yuki Nagai
    Yusuke Uchida
    Shigeyuki Sakazawa
    Shin’ichi Satoh
    International Journal of Multimedia Information Retrieval, 2018, 7 : 3 - 16
  • [22] CHANNEL PRUNING VIA GRADIENT OF MUTUAL INFORMATION FOR LIGHTWEIGHT CONVOLUTIONAL NEURAL NETWORKS
    Lee, Min Kyu
    Lee, Seunghyun
    Lee, Sang Hyuk
    Song, Byung Cheol
    2020 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2020, : 1751 - 1755
  • [23] Pruning by explaining: A novel criterion for deep neural network pruning
    Yeom, Seul-Ki
    Seegerer, Philipp
    Lapuschkin, Sebastian
    Binder, Alexander
    Wiedemann, Simon
    Mueller, Klaus-Robert
    Samek, Wojciech
    PATTERN RECOGNITION, 2021, 115
  • [24] Layer Pruning via Fusible Residual Convolutional Block for Deep Neural Networks
    Xu P.
    Cao J.
    Sun W.
    Li P.
    Wang Y.
    Zhang X.
    Beijing Daxue Xuebao (Ziran Kexue Ban)/Acta Scientiarum Naturalium Universitatis Pekinensis, 2022, 58 (05): : 801 - 807
  • [25] Structured Pruning for Deep Convolutional Neural Networks via Adaptive Sparsity Regularization
    Shao, Tuanjie
    Shin, Dongkun
    2022 IEEE 46TH ANNUAL COMPUTERS, SOFTWARE, AND APPLICATIONS CONFERENCE (COMPSAC 2022), 2022, : 982 - 987
  • [26] Filter pruning via annealing decaying for deep convolutional neural networks acceleration
    Jiawen Huang
    Liyan Xiong
    Xiaohui Huang
    Qingsen Chen
    Peng Huang
    Cluster Computing, 2025, 28 (2)
  • [27] Filter Pruning via Geometric Median for Deep Convolutional Neural Networks Acceleration
    He, Yang
    Liu, Ping
    Wang, Ziwei
    Hu, Zhilan
    Yang, Yi
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 4335 - 4344
  • [28] Pruning Filter via Gaussian Distribution Feature for Deep Neural Networks Acceleration
    Xu, Jianrong
    Diao, Boyu
    Cui, Bifeng
    Yang, Kang
    Li, Chao
    Hong, Hailong
    2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022,
  • [29] Pruning Deep Neural Network Models via Minimax Concave Penalty Regression
    Liu, Xinggu
    Zhou, Lin
    Luo, Youxi
    APPLIED SCIENCES-BASEL, 2024, 14 (09):
  • [30] ACP: Automatic Channel Pruning Method by Introducing Additional Loss for Deep Neural Networks
    Yu, Haoran
    Zhang, Weiwei
    Ji, Ming
    Zhen, Chenghui
    NEURAL PROCESSING LETTERS, 2023, 55 (02) : 1071 - 1085