Augmentation-induced Consistency Regularization for Classification

Cited: 0
Authors
Wu, Jianhan [1 ,2 ]
Si, Shijing [1 ]
Wang, Jianzong [1 ]
Xiao, Jing [1 ]
Affiliations
[1] Ping Technol Shenzhen Co Ltd, Shenzhen, Peoples R China
[2] Univ Sci & Technol China, Hefei, Peoples R China
Keywords
consistency regularization; data augmentation; over-fitting; stop-gradient;
DOI
10.1109/IJCNN55064.2022.9892448
CLC Classification Number
TP18 [Artificial Intelligence Theory];
Subject Classification Numbers
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep neural networks have become popular in many supervised learning tasks, but they may overfit when the training dataset is limited. Data augmentation is a widely used and effective way to mitigate this overfitting by increasing the variety of a dataset. However, the randomness it introduces causes an inevitable inconsistency between training and inference, which limits the improvement it can deliver. In this paper, we propose a consistency regularization framework based on data augmentation, called CR-Aug, which forces the output distributions of the sub-models produced by different augmentations to be consistent with each other. Specifically, CR-Aug measures the discrepancy between the output distributions of two augmented versions of each sample and uses a stop-gradient operation to minimize this consistency loss. We apply CR-Aug to image and audio classification tasks and conduct extensive experiments to verify its effectiveness in improving the generalization ability of classifiers. CR-Aug is ready to use and can be easily adapted to many state-of-the-art network architectures. Our empirical results show that it outperforms baseline methods by a significant margin.
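The abstract describes pulling the output distributions of two augmented views of each sample toward each other, with a stop-gradient applied when minimizing the consistency loss. A minimal sketch of one such consistency term is below; the symmetric-KL form, function names, and numbers are illustrative assumptions, not the paper's exact loss:

```python
import numpy as np

def softmax(logits):
    # numerically stable softmax over the last axis
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    # KL(p || q), averaged over the batch
    return np.mean(np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1))

def consistency_loss(logits_a, logits_b):
    """Symmetric KL between the predictions for two augmented views.

    In an autodiff framework, the distribution on the `p` side of each
    KL term would be detached (the stop-gradient operation), so each view
    is pulled toward a fixed target given by the other view.
    """
    p, q = softmax(logits_a), softmax(logits_b)
    return 0.5 * (kl(p, q) + kl(q, p))

rng = np.random.default_rng(0)
logits_view1 = rng.normal(size=(4, 10))                        # predictions for augmentation 1
logits_view2 = logits_view1 + 0.1 * rng.normal(size=(4, 10))   # slightly perturbed view
print(consistency_loss(logits_view1, logits_view2))  # small positive value
print(consistency_loss(logits_view1, logits_view1))  # 0.0 for identical views
```

This term would be added to the usual supervised loss; identical predictions give zero penalty, and the penalty grows as the two views' distributions diverge.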
Pages: 7
Related Papers
50 records
  • [1] Learning Augmentation for GNNs With Consistency Regularization
    Park, Hyeonjin
    Lee, Seunghun
    Hwang, Dasol
    Jeong, Jisu
    Kim, Kyung-Min
    Ha, Jung-Woo
    Kim, Hyunwoo J.
    IEEE ACCESS, 2021, 9 : 127961 - 127972
  • [2] Sample Efficiency of Data Augmentation Consistency Regularization
    Yang, Shuo
    Dong, Yijun
    Ward, Rachel
    Dhillon, Inderjit S.
    Sanghavi, Sujay
    Lei, Qi
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 206, 2023, 206
  • [3] Augmentation, Mixing, and Consistency Regularization for Domain Generalization
    Mehmood, Noaman
    Barner, Kenneth
    2024 IEEE 3RD INTERNATIONAL CONFERENCE ON COMPUTING AND MACHINE INTELLIGENCE, ICMI 2024, 2024
  • [4] Consistency Regularization Semisupervised Learning for PolSAR Image Classification
    Wang, Yu
    Jiang, Shan
    Li, Weijie
    INTERNATIONAL JOURNAL OF INTELLIGENT SYSTEMS, 2025, 2025 (01)
  • [5] AugMixSpeech: A Data Augmentation Method and Consistency Regularization for Mandarin Automatic Speech Recognition
    Jiang, Yang
    Chen, Jun
    Han, Kai
    Liu, Yi
    Ma, Siqi
    Song, Yuqing
    Liu, Zhe
    NATURAL LANGUAGE PROCESSING AND CHINESE COMPUTING, PT III, NLPCC 2024, 2025, 15361 : 145 - 157
  • [6] A study on the performance improvement of learning based on consistency regularization and unlabeled data augmentation
    Kim, Hyunwoong
    Seok, Kyungha
    KOREAN JOURNAL OF APPLIED STATISTICS, 2021, 34 (02) : 167 - 175
  • [7] Using Data Augmentation and Consistency Regularization to Improve Semi-supervised Speech Recognition
    Sapru, Ashtosh
    INTERSPEECH 2022, 2022, : 5115 - 5119
  • [8] Audio Classification with Semi-supervised Contrastive Loss and Consistency Regularization
    Xu, Juan-Wei
    Yeh, Yi-Ren
    2024 IEEE 48TH ANNUAL COMPUTERS, SOFTWARE, AND APPLICATIONS CONFERENCE, COMPSAC 2024, 2024, : 1770 - 1775
  • [9] NCCR: Neighbor and Cluster Consistency Regularization for Improving Graph Node Classification
    Yang, Feiming
    Lai, Yurui
    Wang, Leshan
    Fan, Rui
    2022 IEEE 34TH INTERNATIONAL CONFERENCE ON TOOLS WITH ARTIFICIAL INTELLIGENCE, ICTAI, 2022, : 28 - 35
  • [10] Semi-supervised Audio Classification with Consistency-Based Regularization
    Lu, Kangkang
    Foo, Chuan-Sheng
    Teh, Kah Kuan
    Huy Dat Tran
    Chandrasekhar, Vijay Ramaseshan
    INTERSPEECH 2019, 2019, : 3654 - 3658