Improving robustness against common corruptions by covariate shift adaptation

Times cited: 0
Authors
Schneider, Steffen [1]
Rusak, Evgenia [1,2]
Eck, Luisa [3]
Bringmann, Oliver [1]
Brendel, Wieland [1]
Bethge, Matthias [1]
Affiliations
[1] Univ Tubingen, Tubingen, Germany
[2] IMPRS IS, Munich, Germany
[3] Ludwig Maximilians Univ Munchen, Munich, Germany
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [theory of artificial intelligence];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Today's state-of-the-art machine vision models are vulnerable to image corruptions like blurring or compression artefacts, limiting their performance in many real-world applications. We here argue that popular benchmarks to measure model robustness against common corruptions (like ImageNet-C) underestimate model robustness in many (but not all) application scenarios. The key insight is that in many scenarios, multiple unlabeled examples of the corruptions are available and can be used for unsupervised online adaptation. Replacing the activation statistics estimated by batch normalization on the training set with the statistics of the corrupted images consistently improves the robustness across 25 different popular computer vision models. Using the corrected statistics, ResNet-50 reaches 62.2% mCE on ImageNet-C compared to 76.7% without adaptation. With the more robust DeepAugment+AugMix model, we improve the state of the art achieved to date by a ResNet-50 model from 53.6% mCE to 45.4% mCE. Even adapting to a single sample improves robustness for the ResNet-50 and AugMix models, and 32 samples are sufficient to improve the current state of the art for a ResNet-50 architecture. We argue that results with adapted statistics should be included whenever reporting scores in corruption benchmarks and other out-of-distribution generalization settings.
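The adaptation described in the abstract can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration, not the authors' released implementation: it discards the BatchNorm running statistics estimated on the clean training set and re-estimates them from unlabeled corrupted images using forward passes only. The function name adapt_bn_statistics and the corrupted_loader data loader are assumptions introduced here for illustration; the paper's full method additionally considers combining training-set and test-set statistics, which this sketch omits.

```python
import torch
import torch.nn as nn


def adapt_bn_statistics(model: nn.Module, corrupted_loader, device: str = "cuda") -> nn.Module:
    """Re-estimate BatchNorm statistics on unlabeled corrupted images (no labels, no gradients)."""
    model.to(device)
    model.train()  # train mode so BatchNorm layers update their running statistics
    for module in model.modules():
        if isinstance(module, nn.modules.batchnorm._BatchNorm):
            module.reset_running_stats()  # discard the clean-training-set statistics
            module.momentum = None        # cumulative averaging over all adaptation batches
    with torch.no_grad():                 # weights stay frozen; only BN buffers change
        for images, _ in corrupted_loader:
            model(images.to(device))
    model.eval()                          # freeze the adapted statistics for evaluation
    return model


# Illustrative usage (corrupted_loader would iterate over, e.g., ImageNet-C images):
# model = adapt_bn_statistics(torchvision.models.resnet50(weights="IMAGENET1K_V1"), corrupted_loader)
```

For a fully online variant, one could instead keep the model in train mode and use each incoming test batch's statistics directly; as the abstract notes, even a single corrupted sample already improves robustness, and 32 samples suffice to surpass the previous ResNet-50 state of the art.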
Pages: 13
Related papers
50 records in total
  • [1] Improving robustness against common corruptions with frequency biased models
    Saikia, Tonmoy
    Schmid, Cordelia
    Brox, Thomas
    2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2021), 2021, : 10191 - 10200
  • [2] Towards Better Robustness against Common Corruptions for Unsupervised Domain Adaptation
    Gao, Zhiqiang
    Huang, Kaizhu
    Zhang, Rui
    Liu, Dawei
    Ma, Jieming
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 18836 - 18847
  • [3] Improving Robustness of DNNs against Common Corruptions via Gaussian Adversarial Training
    Yi, Chenyu
    Li, Haoliang
    Wan, Renjie
    Kot, Alex C.
    2020 IEEE INTERNATIONAL CONFERENCE ON VISUAL COMMUNICATIONS AND IMAGE PROCESSING (VCIP), 2020, : 17 - 20
  • [4] Benchmarking the Robustness of UAV Tracking Against Common Corruptions
    Liu, Xiaoqiong
    Feng, Yunhe
    Hu, Shu
    Yuan, Xiaohui
    Fan, Heng
    2024 IEEE 7TH INTERNATIONAL CONFERENCE ON MULTIMEDIA INFORMATION PROCESSING AND RETRIEVAL, MIPR 2024, 2024, : 465 - 470
  • [5] Improving the Robustness of Myoelectric Pattern Recognition for Upper Limb Prostheses by Covariate Shift Adaptation
    Vidovic, Marina M. -C.
    Hwang, Han-Jeong
    Amsuess, Sebastian
    Hahne, Janne M.
    Farina, Dario
    Muller, Klaus-Robert
    IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING, 2016, 24 (09) : 961 - 970
  • [6] Accuracy and Robustness against Covariate Shift of Water Chiller Models
    Acerbi, Federica
    De Nicolao, Giuseppe
    Obiltschnig, Josef
    Richter, Patrick
    De Luca, Cristina
    2018 IEEE 14TH INTERNATIONAL CONFERENCE ON AUTOMATION SCIENCE AND ENGINEERING (CASE), 2018, : 809 - 816
  • [7] NoisyMix: Boosting Model Robustness to Common Corruptions
    Erichson, N. Benjamin
    Lim, Soon Hoe
    Xu, Winnie
    Utrera, Francisco
    Cao, Ziang
    Mahoney, Michael W.
    INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE AND STATISTICS, VOL 238, 2024, 238
  • [8] A Spectral View of Randomized Smoothing Under Common Corruptions: Benchmarking and Improving Certified Robustness
    Sun, Jiachen
    Mehra, Akshay
    Kailkhura, Bhavya
    Chen, Pin-Yu
    Hendrycks, Dan
    Hamm, Jihun
    Mao, Z. Morley
    COMPUTER VISION - ECCV 2022, PT IV, 2022, 13664 : 654 - 671
  • [9] Vulnerability of Covariate Shift Adaptation Against Malicious Poisoning Attacks
    Umer, Muhammad
    Frederickson, Christopher
    Polikar, Robi
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [10] Exploring the Robustness of Human Parsers Toward Common Corruptions
    Zhang, Sanyi
    Cao, Xiaochun
    Wang, Rui
    Qi, Guo-Jun
    Zhou, Jie
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2023, 32 : 5394 - 5407