Combining Contrastive Learning with Auto-Encoder for Out-of-Distribution Detection

Cited by: 0
Authors
Luo, Dawei [1 ]
Zhou, Heng [2 ]
Bae, Joonsoo [1 ]
Yun, Bom [3 ]
Affiliations
[1] Jeonbuk Natl Univ, Dept Ind & Informat Syst Engn, Jeonju 54896, South Korea
[2] Jeonbuk Natl Univ, Dept Elect & Informat Engn, Jeonju 54896, South Korea
[3] Korean Construct Equipment Technol Inst, Gunsan 10203, South Korea
Source
APPLIED SCIENCES-BASEL | 2023, Vol. 13, Issue 23
Keywords
contrastive learning; auto-encoder; out-of-distribution; representation learning; unsupervised learning; FAULT-DIAGNOSIS; AUTOENCODER
DOI
10.3390/app132312930
Chinese Library Classification
O6 [Chemistry]
Subject Classification Code
0703
Abstract
Reliability and robustness are fundamental requirements for the successful integration of deep-learning models into real-world applications. Deployed models must be aware of their limitations, which requires the critical ability to recognize out-of-distribution (OOD) data and defer to human intervention. Although several frameworks for OOD detection have been introduced and have achieved remarkable results, most state-of-the-art (SOTA) models rely on supervised learning with annotated data. However, acquiring labeled data can be demanding, time-consuming or, in some cases, infeasible. Consequently, unsupervised learning has gained substantial traction and made noteworthy advances: it allows models to be trained solely on unlabeled data while achieving performance comparable to, or even better than, supervised alternatives. Among unsupervised methods, contrastive learning has proven effective at extracting features for a variety of downstream tasks, while auto-encoders are widely used to learn representations that faithfully reconstruct the input data. In this study, we introduce a novel approach that combines contrastive learning with an auto-encoder for OOD detection using unlabeled data. Contrastive learning tightens the clustering of in-distribution data while separating it from OOD data, and the auto-encoder further refines the feature space. Within this framework, data are implicitly classified into in-distribution and OOD categories with notable precision. Our experiments show that this method surpasses most existing detectors that rely on unlabeled, or even labeled, data. By incorporating an auto-encoder into an unsupervised learning framework and training it on the CIFAR-100 dataset, our model improves the detection rate of unsupervised learning methods by an average of 5.8% and outperforms the supervised OOD detector by an average margin of 11%.
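The abstract describes the joint objective only at a high level. The following minimal PyTorch sketch illustrates one way such a combination could look: a convolutional auto-encoder trained jointly with a SimCLR-style NT-Xent contrastive loss on two augmented views of each unlabeled image, with reconstruction error used as a simple OOD score. The network sizes, the loss weight lambda_rec, the temperature, and the choice of OOD score are illustrative assumptions and are not taken from the paper.

# Minimal sketch (not the authors' released code) of jointly training a
# convolutional auto-encoder with a SimCLR-style contrastive objective.
# Layer sizes, lambda_rec, and the temperature are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ContrastiveAutoEncoder(nn.Module):
    def __init__(self, latent_dim=128):
        super().__init__()
        # Encoder: 3x32x32 image -> latent vector
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),   # -> 16x16
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),  # -> 8x8
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(), # -> 4x4
            nn.Flatten(),
            nn.Linear(128 * 4 * 4, latent_dim),
        )
        # Decoder: latent vector -> reconstructed image
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128 * 4 * 4), nn.ReLU(),
            nn.Unflatten(1, (128, 4, 4)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        # Projection head used only by the contrastive loss
        self.projector = nn.Sequential(
            nn.Linear(latent_dim, latent_dim), nn.ReLU(),
            nn.Linear(latent_dim, 64),
        )

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z), self.projector(z)

def nt_xent_loss(p1, p2, temperature=0.5):
    # SimCLR NT-Xent loss over two batches of projected views.
    n = p1.size(0)
    p = F.normalize(torch.cat([p1, p2], dim=0), dim=1)   # (2n, d)
    sim = p @ p.t() / temperature                        # cosine similarities
    sim.fill_diagonal_(float('-inf'))                    # exclude self-similarity
    # Positive pair for view i is view i + n (and vice versa).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(p.device)
    return F.cross_entropy(sim, targets)

def training_step(model, view1, view2, lambda_rec=1.0):
    # Joint loss on two augmented views of the same unlabeled batch.
    _, rec1, proj1 = model(view1)
    _, rec2, proj2 = model(view2)
    loss_con = nt_xent_loss(proj1, proj2)
    loss_rec = F.mse_loss(rec1, view1) + F.mse_loss(rec2, view2)
    return loss_con + lambda_rec * loss_rec

def ood_score(model, x):
    # One possible test-time score: per-sample reconstruction error.
    _, rec, _ = model(x)
    return F.mse_loss(rec, x, reduction='none').flatten(1).mean(dim=1)

Under these assumptions, in-distribution samples are expected to reconstruct well and to cluster tightly in the learned feature space, so thresholding a score such as ood_score separates them from OOD inputs at test time.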
Pages: 16
Related Papers
50 results in total
  • [1] Likelihood Regret: An Out-of-Distribution Detection Score For Variational Auto-encoder
    Xiao, Zhisheng
    Yan, Qing
    Amit, Yali
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 33, NEURIPS 2020, 2020, 33
  • [2] Detecting out-of-distribution samples via variational auto-encoder with reliable uncertainty estimation
    Ran, Xuming
    Xu, Mingkun
    Mei, Lingrui
    Xu, Qi
    Liu, Quanying
    NEURAL NETWORKS, 2022, 145 : 199 - 208
  • [3] Contrastive Auto-Encoder for Phoneme Recognition
    Zheng, Xin
    Wu, Zhiyong
    Meng, Helen
    Cai, Lianhong
    2014 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2014,
  • [4] Contrastive Out-of-Distribution Detection for Pretrained Transformers
    Zhou, Wenxuan
    Liu, Fangyu
    Chen, Muhao
    2021 CONFERENCE ON EMPIRICAL METHODS IN NATURAL LANGUAGE PROCESSING (EMNLP 2021), 2021, : 1100 - 1111
  • [5] Out-of-distribution Detection Learning with Unreliable Out-of-distribution Sources
    Zheng, Haotian
    Wang, Qizhou
    Fang, Zhen
    Xia, Xiaobo
    Liu, Feng
    Liu, Tongliang
    Han, Bo
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 36 (NEURIPS 2023), 2023,
  • [6] Tire Pattern Image Classification using Variational Auto-Encoder with Contrastive Learning
    Yang, Jianning
    Xue, Jiahao
    Feng, Xiaodong
    Song, Chaoqi
    Hao, Yu
    2022 IEEE INTERNATIONAL CONFERENCE ON VISUAL COMMUNICATIONS AND IMAGE PROCESSING (VCIP), 2022,
  • [7] A contrastive variational graph auto-encoder for node clustering
    Mrabah, Nairouz
    Bouguessa, Mohamed
    Ksantini, Riadh
    PATTERN RECOGNITION, 2024, 149
  • [8] Partial and Asymmetric Contrastive Learning for Out-of-Distribution Detection in Long-Tailed Recognition
    Wang, Haotao
    Zhang, Aston
    Zhu, Yi
    Zheng, Shuai
    Li, Mu
    Smola, Alex
    Wang, Zhangyang
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,
  • [9] Enhancing out-of-distribution detection via diversified multi-prototype contrastive learning
    Jia, Yulong
    Li, Jiaming
    Zhao, Ganlong
    Liu, Shuangyin
    Sun, Weijun
    Lin, Liang
    Li, Guanbin
    PATTERN RECOGNITION, 2025, 161
  • [10] Learning Sparse Representation With Variational Auto-Encoder for Anomaly Detection
    Sun, Jiayu
    Wang, Xinzhou
    Xiong, Naixue
    Shao, Jie
    IEEE ACCESS, 2018, 6 : 33353 - 33361