Grassmannian graph-attentional landmark selection for domain adaptation

Cited by: 0
Authors
Bin Sun
Shaofan Wang
Dehui Kong
Jinghua Li
Baocai Yin
Affiliations
[1] Beijing University of Technology, Beijing Key Laboratory of Multimedia and Intelligent Software Technology, BJUT Faculty of Information Technology
[2] Li Auto
Keywords
Domain adaptation; Transfer learning; Landmark; Manifold
DOI
Not available
Abstract
Domain adaptation aims to leverage information from the source domain to improve classification performance in the target domain. It mainly relies on two schemes: sample reweighting and feature matching. The first scheme allocates different weights to individual samples, while the second matches the features of the two domains using global structural statistics. The two schemes are complementary to each other and are expected to work jointly for robust domain adaptation. Several methods combine the two schemes, but they analyze the underlying relationships among samples insufficiently because they neglect the hierarchy of samples and the geometric properties between samples. To better combine the advantages of the two schemes, we propose a Grassmannian graph-attentional landmark selection (GGLS) framework for domain adaptation. GGLS presents a landmark selection scheme using attention-induced neighbors in the graphical structure of the samples, and performs distribution adaptation and knowledge adaptation over the Grassmann manifold. The former treats the landmarks of each sample differently, and the latter avoids feature distortion and achieves better geometric properties. Experimental results on different real-world cross-domain visual recognition tasks demonstrate that GGLS achieves better classification accuracy than state-of-the-art domain adaptation methods.
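The two schemes the abstract contrasts, and the Grassmann-manifold geometry GGLS builds on, can be sketched in a few lines. This is a minimal conceptual illustration under simplifying assumptions, not the GGLS algorithm itself: `mmd_linear`, `landmark_weights`, `pca_basis`, and `principal_angles` are hypothetical helper names, and the distance-based weighting rule is a toy stand-in for the paper's attention-induced landmark selection.

```python
import numpy as np

def mmd_linear(Xs, Xt):
    """Feature matching via a global statistic: linear-kernel Maximum
    Mean Discrepancy, the squared distance between the domain means."""
    delta = Xs.mean(axis=0) - Xt.mean(axis=0)
    return float(delta @ delta)

def landmark_weights(Xs, Xt, gamma=0.5):
    """Sample reweighting (toy scheme, for illustration only): source
    samples closer to the target mean receive larger weights."""
    d2 = ((Xs - Xt.mean(axis=0)) ** 2).sum(axis=1)
    w = np.exp(-gamma * d2)
    return w / w.sum()          # a probability distribution over source samples

def pca_basis(X, k):
    """Orthonormal basis of the top-k principal subspace of X; such
    k-dimensional subspaces are points on the Grassmann manifold."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Vt[:k].T             # d x k matrix with orthonormal columns

def principal_angles(A, B):
    """Principal angles between span(A) and span(B), the geometry
    underlying distances on the Grassmann manifold."""
    s = np.linalg.svd(A.T @ B, compute_uv=False)
    return np.arccos(np.clip(s, -1.0, 1.0))

rng = np.random.default_rng(0)
Xs = rng.normal(0.0, 1.0, size=(100, 5))   # source features
Xt = rng.normal(0.5, 1.0, size=(80, 5))    # mean-shifted target features

print(mmd_linear(Xs, Xt))                  # positive: the domains differ
print(landmark_weights(Xs, Xt).sum())      # weights normalize to 1
print(principal_angles(pca_basis(Xs, 2), pca_basis(Xt, 2)))
```

The two schemes act at different granularities, which is why they complement each other: the weights adjust each source sample individually, while the MMD term and the subspace angles only see global structure of the two domains.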
Pages: 30243-30266
Page count: 23