Exploring the Robustness of Human Parsers Toward Common Corruptions

Cited: 0
Authors
Zhang, Sanyi [1 ,2 ]
Cao, Xiaochun [3 ]
Wang, Rui [1 ,2 ]
Qi, Guo-Jun [4 ,5 ]
Zhou, Jie [6 ]
Affiliations
[1] Chinese Acad Sci, Inst Informat Engn, State Key Lab Informat Secur, Beijing 100093, Peoples R China
[2] Univ Chinese Acad Sci, Sch Cyber Secur, Beijing 100049, Peoples R China
[3] Sun Yat Sen Univ, Sch Cyber Sci & Technol, Shenzhen Campus, Shenzhen 518107, Peoples R China
[4] Westlake Univ, Sch Engn, Hangzhou 310030, Peoples R China
[5] OPPO US Res Ctr, Bellevue, WA 98004 USA
[6] Tsinghua Univ, Beijing Res Ctr Informat Sci & Technol BNRist, Dept Automat, State Key Lab Intelligent Technol & Syst, Beijing 100084, Peoples R China
Funding
National Natural Science Foundation of China; China Postdoctoral Science Foundation;
Keywords
Robustness; Data models; Task analysis; Computational modeling; Benchmark testing; Semantics; Data augmentation; Human parsing; model robustness; heterogeneous augmentation; common corruptions;
DOI
10.1109/TIP.2023.3313493
Chinese Library Classification
TP18 [Theory of Artificial Intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Human parsing aims to segment each pixel of a human image into fine-grained semantic categories. However, current human parsers trained on clean data are easily confused by common image corruptions such as blur and noise. To improve the robustness of human parsers, in this paper we construct three corruption robustness benchmarks, termed LIP-C, ATR-C, and Pascal-Person-Part-C, to evaluate the risk tolerance of human parsing models. Inspired by data augmentation strategies, we propose a novel heterogeneous augmentation-enhanced mechanism that bolsters robustness under commonly corrupted conditions. Specifically, two types of data augmentation from different views, i.e., image-aware augmentation and model-aware image-to-image transformation, are integrated sequentially to adapt to unforeseen image corruptions. The image-aware augmentation enriches the diversity of training images through common image operations, while the model-aware augmentation further diversifies the input data by exploiting the model's randomness. The proposed method is model-agnostic and can be plugged into arbitrary state-of-the-art human parsing frameworks. Experimental results show that the proposed method generalizes well: it improves the robustness of human parsing models, and even semantic segmentation models, against various common image corruptions, while maintaining comparable performance on clean data.
Pages: 5394-5407
Number of Pages: 14
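To make the mechanism described in the abstract concrete, the following is a minimal sketch of how the two augmentation views could be composed sequentially, assuming a PyTorch/torchvision setup. The names `random_image_ops`, `ImageToImageTransformer`, and `heterogeneous_augment`, as well as the specific operations and network layers, are illustrative stand-ins chosen for this sketch; the paper's actual components are not specified in this record.

```python
# Minimal sketch of the heterogeneous augmentation idea described in the abstract.
# Assumption: PyTorch + torchvision; the concrete operations and the model-aware
# transform below are hypothetical stand-ins, not the authors' released code.
import torch
import torchvision.transforms as T

# Image-aware augmentation: common image operations that enrich training diversity.
random_image_ops = T.Compose([
    T.ColorJitter(brightness=0.4, contrast=0.4, saturation=0.4),
    T.RandomGrayscale(p=0.1),
    T.GaussianBlur(kernel_size=3),
])


class ImageToImageTransformer(torch.nn.Module):
    """Hypothetical model-aware image-to-image transform: a small network whose
    stochastic layers (here Dropout2d) inject model-dependent randomness."""

    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Conv2d(3, 16, kernel_size=3, padding=1),
            torch.nn.ReLU(),
            torch.nn.Dropout2d(p=0.3),  # source of "model randomness"
            torch.nn.Conv2d(16, 3, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # Residual formulation keeps the augmented image close to the original.
        return torch.clamp(x + 0.1 * self.net(x), 0.0, 1.0)


def heterogeneous_augment(pil_image, model_aware: ImageToImageTransformer):
    """Apply the two augmentation views sequentially:
    image-aware operations first, then the model-aware transform."""
    x = random_image_ops(pil_image)      # image-aware view (PIL -> PIL)
    x = T.ToTensor()(x).unsqueeze(0)     # to a 1x3xHxW tensor in [0, 1]
    with torch.no_grad():
        x = model_aware(x)               # model-aware view (tensor -> tensor)
    return x.squeeze(0)
```

Applying the image-aware operations first and the model-aware transform second mirrors the sequential integration described above; stochastic layers such as dropout inside the transform are one simple way to realize "model randomness" during augmentation, though the paper may use a different formulation.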