An Adversarial Contrastive Distillation Algorithm Based on Masked Auto-Encoder

Cited: 0
Authors
Zhang, Dian [1]
Dong, Yun-Wei [2]
Affiliations
[1] School of Computer Science, Northwestern Polytechnical University, Xi'an 710129, China
[2] School of Software, Northwestern Polytechnical University, Xi'an 710129, China
Keywords
Contrastive learning; Deep neural networks; Generative adversarial networks; Image enhancement; Network coding; Personnel training
DOI
10.11897/SP.J.1016.2024.02274
Abstract
With the continuous development of artificial intelligence, neural networks have exhibited exceptional performance across various domains. However, the existence of adversarial samples poses a significant challenge to the application of neural networks in security-related fields. As research progresses, there is an increasing focus on the robustness of neural networks and their inherent performance. This paper aims to improve neural networks to enhance their adversarial robustness. Although adversarial training has shown great potential in improving adversarial robustness, it suffers from long running times, primarily because it requires generating adversarial samples for the target model at each iteration step. To address the time-consuming generation of adversarial samples and the lack of diversity in adversarial training, this paper proposes a contrastive distillation algorithm based on masked autoencoders (MAE) to enhance the adversarial robustness of neural networks. Because the information density of images is low, image pixels lost through masking can often be recovered by neural networks; masking-based methods are therefore commonly employed to increase sample diversity and improve the feature learning capabilities of neural networks. Given that adversarial training methods often require considerable time to generate adversarial samples, this paper adopts masking to mitigate the cost of continuously generating adversarial samples during adversarial training. Additionally, randomly occluding parts of the image effectively enhances sample diversity, which helps create the multi-view samples required by contrastive learning. First, to reduce the teacher model's reliance on global image features, the teacher model learns, within an improved masked autoencoder, how to infer the features of masked blocks from the visible sub-blocks. This allows the teacher model to focus on reconstructing global features from limited visible parts, thereby enhancing its deep feature learning ability. Then, to mitigate the impact of adversarial interference, this paper employs knowledge distillation and contrastive learning to enhance the target model's adversarial robustness. Knowledge distillation reduces the target model's dependence on global features by transferring knowledge from the teacher model, while contrastive learning enhances the model's ability to recognize fine-grained information among images by leveraging the diversity of the generated multi-view samples. Finally, label information is used to adjust the classification head to ensure recognition accuracy. By fine-tuning the classification head with label information, the model maintains high accuracy on clean samples while improving its robustness against adversarial attacks. Experimental results on ResNet50 and WideResNet50 demonstrate an average improvement of 11.50% in adversarial accuracy on CIFAR-10 and an average improvement of 6.35% on CIFAR-100. These results validate the effectiveness of the proposed contrastive distillation algorithm based on masked autoencoders. The algorithm attenuates the impact of adversarial interference by generating adversarial samples only once, enhances sample diversity through random masking, and improves the neural network's adversarial robustness. © 2024 Science Press. All rights reserved.
Pages: 2274-2288
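
Illustrative sketch. The abstract above describes a training loop that distills features from a masked-autoencoder teacher into a student, contrasts randomly masked views of each image, and fine-tunes the classification head with labels. The PyTorch code below is a minimal sketch of that idea and is not the authors' implementation: the modules (student, teacher, head), the helper functions (random_patch_mask, info_nce), the patch size, the 75% mask ratio, the MSE distillation loss, and the loss weights are all illustrative assumptions; the teacher and student are assumed to return (batch, dim) feature vectors.

import torch
import torch.nn.functional as F

def random_patch_mask(images, patch=4, mask_ratio=0.75):
    # Zero out a random subset of non-overlapping patches to build a masked view.
    # Assumes image height/width are divisible by the patch size.
    b, c, h, w = images.shape
    ph, pw = h // patch, w // patch
    keep = torch.rand(b, ph * pw, device=images.device) > mask_ratio
    mask = keep.view(b, 1, ph, pw).float()
    mask = F.interpolate(mask, size=(h, w), mode="nearest")
    return images * mask

def info_nce(z1, z2, temperature=0.2):
    # Contrastive (InfoNCE) loss between two views of the same batch;
    # matching indices are treated as positive pairs.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)

def train_step(student, teacher, head, x_clean, x_adv, labels,
               optimizer, w_kd=1.0, w_con=1.0, w_ce=1.0):
    # One optimization step (hypothetical): distill frozen-teacher features into
    # the student, contrast two masked views, and fine-tune the classification head.
    # x_adv stands for adversarial samples generated once, outside this step.
    teacher.eval()
    view1, view2 = random_patch_mask(x_clean), random_patch_mask(x_clean)

    with torch.no_grad():                 # the MAE-style teacher stays frozen
        t_feat = teacher(view1)

    s_feat_adv = student(x_adv)
    s_feat_v1, s_feat_v2 = student(view1), student(view2)

    loss_kd = F.mse_loss(s_feat_adv, t_feat)              # knowledge distillation
    loss_con = info_nce(s_feat_v1, s_feat_v2)             # contrastive learning on masked views
    loss_ce = F.cross_entropy(head(s_feat_adv), labels)   # classification-head fine-tuning

    loss = w_kd * loss_kd + w_con * loss_con + w_ce * loss_ce
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

The three loss weights are shown as equal only for readability; the paper's actual objective, schedules, and hyperparameters should be taken from the published text.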