An Adversarial Contrastive Distillation Algorithm Based on Masked Auto-Encoder

Cited by: 0
Authors
Zhang, Dian [1 ]
Dong, Yun-Wei [2 ]
Affiliations
[1] School of Computer Science, Northwestern Polytechnical University, Xi’an 710129, China
[2] School of Software, Northwestern Polytechnical University, Xi’an 710129, China
Keywords
Contrastive learning; Deep neural networks; Generative adversarial networks; Image enhancement; Network coding; Personnel training
DOI
10.11897/SP.J.1016.2024.02274
Abstract
With the continuous development of artificial intelligence, neural networks have exhibited exceptional performance across various domains. However, the existence of adversarial samples poses a significant challenge to the application of neural networks in security-critical fields. As research progresses, increasing attention is being paid to the robustness of neural networks alongside their raw performance. This paper aims to improve neural networks so as to enhance their adversarial robustness. Although adversarial training has shown great potential for improving adversarial robustness, it suffers from long running times, primarily because it must generate adversarial samples for the target model at every iteration step. To address the time cost of adversarial sample generation and the lack of sample diversity in adversarial training, this paper proposes a contrastive distillation algorithm based on masked autoencoders (MAE) to enhance the adversarial robustness of neural networks. Because images have low information density, pixels lost to masking can often be recovered by a neural network; masking-based methods are therefore commonly employed to increase sample diversity and improve the feature learning capability of neural networks. Given that adversarial training methods often require considerable time to generate adversarial samples, this paper adopts masking to mitigate the cost of continuously generating adversarial samples during adversarial training. In addition, randomly occluding parts of an image effectively enhances sample diversity, which helps create the multi-view samples that contrastive learning requires. First, to reduce the teacher model's reliance on global image features, the teacher model is trained within an improved masked autoencoder to infer the features of masked blocks from the visible sub-blocks.
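The MAE-style random patch masking described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name `random_mask_patches` and the default patch size and mask ratio are assumptions chosen for demonstration (the 75% ratio follows common MAE practice).

```python
import numpy as np

def random_mask_patches(image, patch_size=4, mask_ratio=0.75, seed=None):
    """Zero out a random subset of non-overlapping patches (MAE-style masking).

    image: (H, W, C) array with H and W divisible by patch_size.
    Returns the masked copy and a boolean keep-mask over the patch grid.
    """
    rng = np.random.default_rng(seed)
    h, w, _ = image.shape
    gh, gw = h // patch_size, w // patch_size
    n_patches = gh * gw
    n_keep = int(round(n_patches * (1.0 - mask_ratio)))

    # Sample which patches stay visible; the rest are occluded.
    keep = np.zeros(n_patches, dtype=bool)
    keep[rng.choice(n_patches, size=n_keep, replace=False)] = True

    masked = image.copy()
    for idx in np.flatnonzero(~keep):
        r, c = divmod(idx, gw)
        masked[r * patch_size:(r + 1) * patch_size,
               c * patch_size:(c + 1) * patch_size, :] = 0.0
    return masked, keep
```

Calling the function twice with different seeds on the same image yields two distinct masked views, which is how masking can cheaply supply the multi-view samples used later by contrastive learning.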
This allows the teacher model to focus on learning how to reconstruct global features from a limited visible portion of the image, thereby strengthening its deep feature learning ability. Then, to mitigate the impact of adversarial perturbations, this paper employs knowledge distillation and contrastive learning to enhance the target model's adversarial robustness. Knowledge distillation reduces the target model's dependence on global features by transferring knowledge from the teacher model, while contrastive learning enhances the model's ability to recognize fine-grained information in images by leveraging the diversity of the generated multi-view samples. Finally, label information is used to fine-tune the classification head to ensure recognition accuracy: with this fine-tuning, the model maintains high accuracy on clean samples while improving its robustness against adversarial attacks. Experimental results on ResNet50 and WideResNet50 demonstrate an average improvement of 11.50% in adversarial accuracy on CIFAR-10 and an average improvement of 6.35% on CIFAR-100. These results validate the effectiveness of the proposed contrastive distillation algorithm based on masked autoencoders: the algorithm attenuates the impact of adversarial perturbations while generating adversarial samples only once, enhances sample diversity through random masking, and improves the neural network's adversarial robustness. © 2024 Science Press. All rights reserved.
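The combination of distillation and contrastive objectives described above can be sketched as follows, here with NumPy on plain feature vectors rather than a full training loop. The function names, the cosine-similarity distillation term, the InfoNCE form of the contrastive term, and the weighting `alpha` are all illustrative assumptions; the paper's exact losses may differ.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-8):
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def info_nce(student_a, student_b, temperature=0.1):
    """InfoNCE over two views: row i of each view is a positive pair,
    all other rows in the batch serve as negatives."""
    a, b = l2_normalize(student_a), l2_normalize(student_b)
    logits = a @ b.T / temperature                    # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))

def distill_loss(student, teacher):
    """1 - cosine similarity between student and frozen teacher features."""
    s, t = l2_normalize(student), l2_normalize(teacher)
    return float(np.mean(1.0 - np.sum(s * t, axis=-1)))

def total_loss(student_a, student_b, teacher, alpha=0.5, temperature=0.1):
    """Weighted sum: distillation pulls the student toward the MAE teacher,
    InfoNCE separates fine-grained features across the masked views."""
    return (alpha * distill_loss(student_a, teacher)
            + (1.0 - alpha) * info_nce(student_a, student_b, temperature))
```

A separate supervised fine-tuning step on the classification head, as the abstract describes, would then restore clean-sample accuracy on top of the robust features learned here.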
Pages: 2274-2288
Related Papers (50 total)
  • [1] Contrastive Auto-Encoder for Phoneme Recognition
    Zheng, Xin
    Wu, Zhiyong
    Meng, Helen
    Cai, Lianhong
    2014 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2014,
  • [2] Improving Gradient-based Adversarial Training for Text Classification by Contrastive Learning and Auto-Encoder
    Qiu, Yao
    Zhang, Jinchao
    Zhou, Jie
    FINDINGS OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL-IJCNLP 2021, 2021, : 1698 - 1707
  • [3] Auto-encoder generative adversarial networks
    Zhai, Zhonghua
    JOURNAL OF INTELLIGENT & FUZZY SYSTEMS, 2018, 35 (03) : 3043 - 3049
  • [4] Contrastive graph auto-encoder for graph embedding
    Zu, Shuaishuai
    Li, Li
    Shen, Jun
    Tang, Weitao
    NEURAL NETWORKS, 2025, 187
  • [5] MAE-MACD: The Masked Adversarial Contrastive Distillation Algorithm Grounded in Masked Autoencoders
    Zhang, Dian
    Dong, Yunwei
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2025, 21 (01) : 337 - 346
  • [6] Adversarial Auto-encoder Based Preprocessing Algorithm for Improving Image Identification and Navigation Accuracy
    Kim S.Y.
    Kang C.H.
    Journal of Institute of Control, Robotics and Systems, 2022, 28 (11) : 999 - 1005
  • [7] A Novel Fault Detection Method Based on Adversarial Auto-Encoder
    Wang Jian
    Han Zhiyan
    PROCEEDINGS OF THE 39TH CHINESE CONTROL CONFERENCE, 2020, : 4166 - 4170
  • [8] An unsupervised adversarial domain adaptation based on variational auto-encoder
    Zonoozi, Mahta Hassan Pour
    Seydi, Vahid
    Deypir, Mahmood
    MACHINE LEARNING, 2025, 114 (05)
  • [9] ConTextual Masked Auto-Encoder for Dense Passage Retrieval
    Wu, Xing
    Ma, Guangyuan
    Lin, Meng
    Lin, Zijia
    Wang, Zhongyuan
    Hu, Songlin
    THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 4, 2023, : 4738 - 4746
  • [10] A contrastive variational graph auto-encoder for node clustering
    Mrabah, Nairouz
    Bouguessa, Mohamed
    Ksantini, Riadh
    PATTERN RECOGNITION, 2024, 149