Few-Shot Domain Adaptation with Polymorphic Transformers

Cited by: 11
Authors
Li, Shaohua [1 ]
Sui, Xiuchao [1 ]
Fu, Jie [2 ]
Fu, Huazhu [3 ]
Luo, Xiangde [4 ]
Feng, Yangqin [1 ]
Xu, Xinxing [1 ]
Liu, Yong [1 ]
Ting, Daniel S. W. [5 ]
Goh, Rick Siow Mong [1 ]
Affiliations
[1] ASTAR, Inst High Performance Comp, Singapore, Singapore
[2] Univ Montreal, Mila, Montreal, PQ, Canada
[3] Incept Inst Artificial Intelligence, Abu Dhabi, U Arab Emirates
[4] Univ Elect Sci & Technol China, Chengdu, Peoples R China
[5] Singapore Eye Res Inst, Singapore, Singapore
Keywords
Transformer; Domain adaptation; Few-shot
DOI
10.1007/978-3-030-87196-3_31
Chinese Library Classification
TP18 [Artificial intelligence theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Deep neural networks (DNNs) trained on one set of medical images often suffer a severe performance drop on unseen test images, due to domain discrepancies between the training images (source domain) and the test images (target domain); this raises a domain adaptation issue. In clinical settings, it is difficult to collect enough annotated target-domain data in a short period. Few-shot domain adaptation, i.e., adapting a trained model with a handful of annotations, is highly practical and useful in this case. In this paper, we propose a Polymorphic Transformer (Polyformer), which can be incorporated into any DNN backbone for few-shot domain adaptation. Specifically, after the Polyformer layer is inserted into a model trained on the source domain, it extracts a set of prototype embeddings, which can be viewed as a "basis" of the source-domain features. On the target domain, the Polyformer layer adapts by updating only a projection layer that controls the interactions between image features and the prototype embeddings. All other model weights (except BatchNorm parameters) are frozen during adaptation. Thus, the chance of overfitting the annotations is greatly reduced, and the model performs robustly on the target domain after being trained on a few annotated images. We demonstrate the effectiveness of Polyformer on two medical segmentation tasks (optic disc/cup segmentation, and polyp segmentation). The source code of Polyformer is released at https://github.com/askerlee/segtran.
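The adaptation recipe in the abstract, i.e., image features attending to a fixed set of prototype embeddings through a projection that is the only component updated on the target domain, can be sketched as follows. This is a minimal illustrative sketch, not the authors' released implementation (see the linked repository for that); the class name `PolyformerSketch`, the single-head attention form, and the `key_proj` parameter name are all assumptions made here for illustration.

```python
import torch
import torch.nn as nn


class PolyformerSketch(nn.Module):
    """Hypothetical sketch of a Polyformer-style adapter layer.

    Learned prototype embeddings act as a "basis" of source-domain
    features; incoming features attend to the prototypes through a
    projection, which is the only part fine-tuned on the target domain.
    """

    def __init__(self, feat_dim: int, num_prototypes: int = 64):
        super().__init__()
        # Prototype embeddings, learned on the source domain and then frozen.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, feat_dim))
        # Projection controlling feature <-> prototype interactions;
        # updated during few-shot adaptation.
        self.key_proj = nn.Linear(feat_dim, feat_dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, tokens, feat_dim)
        scale = feats.shape[-1] ** 0.5
        attn = torch.softmax(
            self.key_proj(feats) @ self.prototypes.t() / scale, dim=-1
        )
        # Re-express the features in the prototype "basis" (residual form).
        return feats + attn @ self.prototypes


def freeze_for_adaptation(model: nn.Module) -> list:
    """Freeze all weights except the projection, mirroring the paper's
    strategy of training only the projection (plus BatchNorm parameters,
    which a full model would also leave trainable)."""
    trainable = []
    for name, param in model.named_parameters():
        if "key_proj" in name:
            param.requires_grad = True
            trainable.append(name)
        else:
            param.requires_grad = False
    return trainable
```

Because only the projection receives gradients, the number of parameters updated on the few annotated target images stays small, which is the mechanism the abstract credits for reduced overfitting.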
Pages: 330-340
Page count: 11
Related Papers (50 total)
  • [21] DOMAIN ADAPTATION FOR LEARNING GENERATOR FROM PAIRED FEW-SHOT DATA
    Teng, Chun-Chih
    Chen, Pin-Yu
    Chiu, Wei-Chen
    2021 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP 2021), 2021, : 1750 - 1754
  • [22] Perspectives of Calibrated Adaptation for Few-Shot Cross-Domain Classification
    Kong, Dechen
    Yang, Xi
    Wang, Nannan
    Gao, Xinbo
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2025, 35 (03) : 2410 - 2421
  • [23] Few-Shot Object Detection Based on Global Domain Adaptation Strategy
    Gong, Xiaolin
    Cai, Youpeng
    Wang, Jian
    Liu, Daqing
    Ma, Yongtao
    NEURAL PROCESSING LETTERS, 2025, 57 (01)
  • [24] Knowledge-Enhanced Domain Adaptation in Few-Shot Relation Classification
    Zhang, Jiawen
    Zhu, Jiaqi
    Yang, Yi
    Shi, Wandong
    Zhang, Congcong
    Wang, Hongan
    KDD '21: PROCEEDINGS OF THE 27TH ACM SIGKDD CONFERENCE ON KNOWLEDGE DISCOVERY & DATA MINING, 2021, : 2183 - 2191
  • [25] Inductive Unsupervised Domain Adaptation for Few-Shot Classification via Clustering
    Cong, Xin
    Yu, Bowen
    Liu, Tingwen
    Cui, Shiyao
    Tang, Hengzhu
    Wang, Bin
    MACHINE LEARNING AND KNOWLEDGE DISCOVERY IN DATABASES, ECML PKDD 2020, PT II, 2021, 12458 : 624 - 639
  • [26] Few-Shot Learning Meets Transformer: Unified Query-Support Transformers for Few-Shot Classification
    Wang, Xixi
    Wang, Xiao
    Jiang, Bo
    Luo, Bin
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2023, 33 (12) : 7789 - 7802
  • [27] Understanding Cross-Domain Few-Shot Learning Based on Domain Similarity and Few-Shot Difficulty
    Oh, Jaehoon
    Kim, Sungnyun
    Ho, Namgyu
    Kim, Jin-Hwa
    Song, Hwanjun
    Yun, Se-Young
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [28] StyleDomain: Efficient and Lightweight Parameterizations of StyleGAN for One-shot and Few-shot Domain Adaptation
    Alanov, Aibek
    Titov, Vadim
    Nakhodnov, Maksim
    Vetrov, Dmitry
    2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION, ICCV, 2023, : 2184 - 2194
  • [29] Supervised Masked Knowledge Distillation for Few-Shot Transformers
    Lin, Han
    Han, Guangxing
    Ma, Jiawei
    Huang, Shiyuan
    Lin, Xudong
    Chang, Shih-Fu
    2023 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2023, : 19649 - 19659
  • [30] PDA: Proxy-based domain adaptation for few-shot image recognition
    Liu, Ge
    Zhao, Linglan
    Fang, Xiangzhong
    IMAGE AND VISION COMPUTING, 2021, 110