A Closer Look at Classifier in Adversarial Domain Generalization

Cited: 4
Authors
Wang, Ye [1]
Chen, Junyang [2]
Wang, Mengzhu [3]
Li, Hao [1]
Wang, Wei [4,6]
Su, Houcheng [5]
Lai, Zhihui [2]
Chen, Zhenghan [7]
Affiliations
[1] Natl Univ Def Technol, Changsha, Hunan, Peoples R China
[2] Shenzhen Univ, Shenzhen, Guangdong, Peoples R China
[3] Hefei Univ Technol, Hefei, Anhui, Peoples R China
[4] Sun Yat Sen Univ, Shenzhen Campus, Shenzhen, Guangdong, Peoples R China
[5] Univ Macau, Taipa, Macao, Peoples R China
[6] Shenzhen MSU BIT Univ, Shenzhen, Guangdong, Peoples R China
[7] Peking Univ, Beijing, Peoples R China
Keywords
domain generalization; condition-invariant features; smoothing optima;
DOI
10.1145/3581783.3611743
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline codes: 081104; 0812; 0835; 1405
Abstract
The task of domain generalization is to learn a classification model from multiple source domains and generalize it to unknown target domains. The key to domain generalization is learning discriminative domain-invariant features, and adversarial domain generalization is one of the primary techniques for achieving such invariant representations. For example, generative adversarial networks have been widely used, but they suffer from low intra-class diversity, which can lead to poor generalization ability. To address this issue, we propose a new method, auxiliary classifier in adversarial domain generalization (CloCls). CloCls improves the diversity of the source domains by introducing an auxiliary classifier. Combining typical task-related losses, e.g., cross-entropy loss for classification and adversarial loss for domain discrimination, our overall goal is to learn condition-invariant features across all source domains while increasing source-domain diversity. Further, inspired by the observation that smooth optima improve generalization in supervised learning tasks such as classification, we leverage the fact that converging to a smooth minimum with respect to the task loss stabilizes adversarial training, leading to better performance on unseen target domains and effectively enhancing domain adversarial methods. We have conducted extensive image classification experiments on standard domain generalization benchmarks; our model exhibits strong generalization ability and outperforms state-of-the-art DG methods.
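The abstract describes two mechanisms: a combined objective (classification cross-entropy plus an adversarial domain loss) and convergence to a smooth minimum of the task loss. Below is a minimal NumPy sketch of those two ideas, not the authors' implementation; the single linear layer, the uniform-domain-output adversarial surrogate, and the SAM-style perturb-then-descend update are all illustrative assumptions.

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def cross_entropy(logits, labels):
    p = softmax(logits)
    return -np.log(p[np.arange(len(labels)), labels] + 1e-12).mean()

def combined_loss(W, x, y, lam=0.1):
    # A single linear layer stands in for the feature extractor and
    # classifier heads (illustrative only).
    logits = x @ W
    # Toy "adversarial" surrogate: push domain outputs toward uniform,
    # mimicking condition-invariant features. Real adversarial DG uses a
    # domain discriminator trained with gradient reversal.
    p_dom = softmax(logits)
    uniform = np.full_like(p_dom, 1.0 / p_dom.shape[1])
    adv = ((p_dom - uniform) ** 2).mean()
    return cross_entropy(logits, y) + lam * adv

def num_grad(f, W, eps=1e-5):
    # Central-difference numerical gradient (keeps the sketch autograd-free).
    g = np.zeros_like(W)
    for i in np.ndindex(W.shape):
        Wp, Wm = W.copy(), W.copy()
        Wp[i] += eps
        Wm[i] -= eps
        g[i] = (f(Wp) - f(Wm)) / (2 * eps)
    return g

def sam_step(W, f, lr=0.5, rho=0.05):
    # SAM-style "smooth minima" update: first ascend to the worst-case
    # point within radius rho, then descend using the gradient there.
    g = num_grad(f, W)
    eps_w = rho * g / (np.linalg.norm(g) + 1e-12)
    g_sharp = num_grad(f, W + eps_w)
    return W - lr * g_sharp

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 4))
y = (x[:, 0] > 0).astype(int)          # linearly separable toy labels
W = rng.normal(scale=0.1, size=(4, 2))
f = lambda W: combined_loss(W, x, y)
before = f(W)
for _ in range(50):
    W = sam_step(W, f)
after = f(W)
```

The perturb-then-descend step is what biases training toward flat regions of the loss surface; in the paper's setting this stabilizes the adversarial min-max game rather than a plain supervised loss.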
Pages: 280-289
Page count: 10