DeepMark: Embedding Watermarks into Deep Neural Network Using Pruning

Cited: 2
Authors
Xie, Chenqi [1]
Yi, Ping [1]
Zhang, Baowen [1]
Zou, Futai [1]
Affiliations
[1] Shanghai Jiao Tong Univ, Sch Cyber Sci & Engn, Shanghai, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
watermark; pruning; information; embed
DOI
10.1109/ICTAI52525.2021.00031
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
With the rapid development of artificial intelligence in recent years, deep neural network models have been applied in many fields, such as speech and image processing, due to their excellent performance, and have achieved remarkable results. Training a deep model requires substantial time and resources, yet trained models are easy to copy and redistribute, so protecting the intellectual property of models has attracted growing attention. A series of algorithms and technologies has emerged in response, one of which is model watermarking. A model watermark functions like a digital watermark: if the model is stolen, verifying the watermark can prove the model's copyright, maintain its intellectual property rights, and protect the model. This paper proposes a pruning-based method for generating model watermarks. The connections to prune are selected according to their computed connection sensitivity, and the watermark information is then embedded by pruning them. Compared with four previously proposed model watermarking methods, our method achieves higher fidelity and reliability. Experiments show that our watermarking method is robust against fine-tuning and weight pruning.
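The abstract describes the mechanism only at a high level, so the following is a minimal sketch of the idea, assuming a SNIP-style connection-sensitivity score |w * dL/dw| and a simple bit-to-position encoding. All function names (connection_sensitivity, embed_watermark, verify_watermark) and the encoding scheme are illustrative assumptions, not the paper's actual algorithm.

```python
# Sketch only: score connections by sensitivity, then encode watermark bits
# by pruning (zeroing) selected low-sensitivity weights. Names and the
# bit-encoding scheme are hypothetical, not taken from the paper.
import torch
import torch.nn as nn

def connection_sensitivity(model: nn.Module, loss_fn, x, y):
    """Score each weight by |w * dL/dw| (a SNIP-style saliency).
    Higher scores mark connections the loss is more sensitive to."""
    model.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    scores = {}
    for name, p in model.named_parameters():
        if p.dim() > 1 and p.grad is not None:  # weight matrices/kernels only
            scores[name] = (p.detach() * p.grad.detach()).abs()
    return scores

def embed_watermark(model: nn.Module, scores, layer: str, bits: str):
    """Embed a bit string into one layer: walk the weights from least to
    most sensitive; zero the weight for a '1' bit, leave it for a '0'."""
    w = dict(model.named_parameters())[layer]
    order = torch.argsort(scores[layer].flatten())  # least sensitive first
    with torch.no_grad():
        flat_w = w.view(-1)
        for i, bit in enumerate(bits):
            if bit == "1":
                flat_w[order[i]] = 0.0  # pruned weight encodes a 1
    # The chosen positions act as the secret key needed for verification.
    return [order[i].item() for i in range(len(bits))]

def verify_watermark(model: nn.Module, layer: str, key, bits: str, tol=1e-6):
    """Decode the bits: a near-zero weight reads as 1, otherwise 0.
    (Assumes '0'-bit weights stay distinguishably nonzero.)"""
    flat_w = dict(model.named_parameters())[layer].detach().view(-1)
    decoded = "".join("1" if flat_w[idx].abs() < tol else "0" for idx in key)
    return decoded == bits
```

Pruning the least-sensitive connections is plausibly what keeps fidelity high: zeroing weights the loss barely depends on perturbs accuracy the least, which would explain why connection sensitivity guides where the information is embedded.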
Pages: 169-175 (7 pages)