How robust are discriminatively trained zero-shot learning models?

Cited by: 10
Authors
Yucel, Mehmet Kerim [1]
Cinbis, Ramazan Gokberk [2]
Duygulu, Pinar [1]
Affiliations
[1] Hacettepe Univ, Grad Sch Sci & Engn, TR-06800 Ankara, Turkey
[2] Middle East Tech Univ, Dept Comp Engn, TR-06800 Ankara, Turkey
Keywords
Zero-shot learning; Robust generalization; Adversarial robustness;
DOI
10.1016/j.imavis.2022.104392
CLC number
TP18 [Artificial intelligence theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Data shift robustness has been investigated primarily from a fully supervised perspective, and the robustness of zero-shot learning (ZSL) models has been largely neglected. In this paper, we present novel analyses of the robustness of discriminative ZSL models to image corruptions. We subject several ZSL models to a large set of common corruptions and defenses. To enable the corruption analysis, we curate and release the first ZSL corruption-robustness datasets: SUN-C, CUB-C and AWA2-C. We analyse our results by taking into account dataset characteristics, class imbalance, class transitions between seen and unseen classes, and the discrepancies between ZSL and GZSL performance. Our results show that discriminative ZSL suffers from corruptions, and this trend is further exacerbated by the severe class imbalance and model weakness inherent in ZSL methods. We then combine our findings with those based on adversarial attacks in ZSL, and highlight the different effects of corruptions and adversarial examples, such as the pseudo-robustness effect present under adversarial attacks. We also obtain strong new baselines for both models with the defense methods. Finally, our experiments show that although existing methods for improving robustness somewhat work for ZSL models, they do not produce a tangible effect. (c) 2022 Elsevier B.V. All rights reserved.
Pages: 11
Related papers
50 records in total
  • [21] RAPID: Zero-Shot Domain Adaptation for Code Search with Pre-Trained Models
    Fan, Guodong
    Chen, Shizhan
    Gao, Cuiyun
    Xiao, Jianmao
    Zhang, Tao
    Feng, Zhiyong
    [J]. ACM TRANSACTIONS ON SOFTWARE ENGINEERING AND METHODOLOGY, 2024, 33 (05)
  • [22] What Makes Pre-trained Language Models Better Zero-shot Learners?
    Lu, Jinghui
    Zhu, Dongsheng
    Han, Weidong
    Zhao, Rui
    Mac Namee, Brian
    Tan, Fei
    [J]. PROCEEDINGS OF THE 61ST ANNUAL MEETING OF THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 1, 2023, : 2288 - 2303
  • [23] Zero-Shot Recommendations with Pre-Trained Large Language Models for Multimodal Nudging
    Harrison, Rachel M.
    Dereventsov, Anton
    Bibin, Anton
    [J]. 2023 23RD IEEE INTERNATIONAL CONFERENCE ON DATA MINING WORKSHOPS, ICDMW 2023, 2023, : 1535 - 1542
  • [24] Zero-Shot AutoML with Pretrained Models
    Oeztuerk, Ekrem
    Ferreira, Fabio
    Jomaa, Hadi S.
    Schmidt-Thieme, Lars
    Grabocka, Josif
    Hutter, Frank
    [J]. INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 162, 2022,
  • [25] Learning semantic ambiguities for zero-shot learning
    Hanouti, Celina
    Le Borgne, Herve
    [J]. MULTIMEDIA TOOLS AND APPLICATIONS, 2023, 82 (26) : 40745 - 40759
  • [27] Practical Aspects of Zero-Shot Learning
    Saad, Elie
    Paprzycki, Marcin
    Ganzha, Maria
    [J]. COMPUTATIONAL SCIENCE, ICCS 2022, PT II, 2022, : 88 - 95
  • [28] Research progress of zero-shot learning
    Sun, Xiaohong
    Gu, Jinan
    Sun, Hongying
    [J]. APPLIED INTELLIGENCE, 2021, 51 (06) : 3600 - 3614
  • [29] Zero-Shot Program Representation Learning
    Cui, Nan
    Jiang, Yuze
    Gu, Xiaodong
    Shen, Beijun
    [J]. 30TH IEEE/ACM INTERNATIONAL CONFERENCE ON PROGRAM COMPREHENSION (ICPC 2022), 2022, : 60 - 70
  • [30] Joint Dictionaries for Zero-Shot Learning
    Kolouri, Soheil
    Rostami, Mohammad
    Owechko, Yuri
    Kim, Kyungnam
    [J]. THIRTY-SECOND AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTIETH INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / EIGHTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2018, : 3431 - 3439