A Closer Look at Classifier in Adversarial Domain Generalization

Cited: 4
Authors
Wang, Ye [1 ]
Chen, Junyang [2 ]
Wang, Mengzhu [3 ]
Li, Hao [1 ]
Wang, Wei [4 ,6 ]
Su, Houcheng [5 ]
Lai, Zhihui [2 ]
Chen, Zhenghan [7 ]
Affiliations
[1] Natl Univ Def Technol, Changsha, Hunan, Peoples R China
[2] Shenzhen Univ, Shenzhen, Guangdong, Peoples R China
[3] Hefei Univ Technol, Hefei, Anhui, Peoples R China
[4] Sun Yat Sen Univ, Shenzhen Campus, Shenzhen, Guangdong, Peoples R China
[5] Univ Macau, Taipa, Macao, Peoples R China
[6] Shenzhen MSU BIT Univ, Shenzhen, Guangdong, Peoples R China
[7] Peking Univ, Beijing, Peoples R China
Keywords
domain generalization; condition-invariant features; smoothing optima
DOI
10.1145/3581783.3611743
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
The task of domain generalization is to learn a classification model from multiple source domains and generalize it to unknown target domains. The key to domain generalization is learning discriminative domain-invariant features, and adversarial domain generalization is one of the primary techniques for achieving invariant representations. For example, generative adversarial networks have been widely used, but they suffer from low intra-class diversity, which can lead to poor generalization ability. To address this issue, we propose a new method, auxiliary classifier in adversarial domain generalization (CloCls). CloCls improves the diversity of the source domains by introducing an auxiliary classifier. Combining typical task-related losses, e.g., cross-entropy loss for classification and adversarial loss for domain discrimination, our overall goal is to learn condition-invariant features across all source domains while increasing source-domain diversity. Further, inspired by the observation that smooth optima improve generalization in supervised learning tasks such as classification, we leverage the fact that converging to a smooth minimum with respect to the task loss stabilizes adversarial training, leading to better performance on unseen target domains and effectively enhancing domain adversarial methods. We have conducted extensive image classification experiments on benchmark domain generalization datasets, and our model exhibits strong generalization ability and outperforms state-of-the-art DG methods.
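The abstract describes an objective that combines a task loss with an adversarial term and biases training toward a smooth (flat) minimum of the task loss. As a minimal sketch, assuming a SAM-style (sharpness-aware) two-step update and toy quadratic stand-ins for the cross-entropy and domain-adversarial losses (the function names, hyperparameters, and the choice of SAM are illustrative assumptions, not details taken from the paper):

```python
# Hedged sketch: a sharpness-aware (SAM-style) update applied to a
# combined objective of a task loss plus a weighted adversarial term.
# Toy quadratics stand in for the real losses; all names are illustrative.
import math

def grad(loss_fn, w, eps=1e-5):
    # central-difference numerical gradient of a scalar loss
    g = []
    for i in range(len(w)):
        wp, wm = list(w), list(w)
        wp[i] += eps
        wm[i] -= eps
        g.append((loss_fn(wp) - loss_fn(wm)) / (2 * eps))
    return g

def sam_step(loss_fn, w, lr=0.1, rho=0.05):
    # 1) ascend to the (approximate) worst-case point in an L2 ball of radius rho
    g = grad(loss_fn, w)
    norm = math.sqrt(sum(x * x for x in g)) + 1e-12
    w_adv = [wi + rho * gi / norm for wi, gi in zip(w, g)]
    # 2) descend using the gradient evaluated at the perturbed point,
    #    which penalizes sharp minima of the loss surface
    g_adv = grad(loss_fn, w_adv)
    return [wi - lr * gi for wi, gi in zip(w, g_adv)]

def total_loss(w, lam=0.1):
    task = (w[0] - 1.0) ** 2   # stand-in for the cross-entropy task loss
    adv = (w[1] + 0.5) ** 2    # stand-in for the domain-adversarial loss
    return task + lam * adv

w = [0.0, 0.0]
for _ in range(200):
    w = sam_step(total_loss, w)
# w settles near the joint minimizer (1.0, -0.5), up to the rho-sized
# jitter that the sharpness-aware perturbation introduces
```

The two-step structure (ascend within a small neighborhood, then descend from the perturbed point) is what steers the optimizer toward flat regions; in the full method the same idea would be applied to network weights rather than to this two-parameter toy.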
Pages: 280-289
Page count: 10