Few-Shot Domain Adaptation with Polymorphic Transformers

Cited by: 11
Authors
Li, Shaohua [1 ]
Sui, Xiuchao [1 ]
Fu, Jie [2 ]
Fu, Huazhu [3 ]
Luo, Xiangde [4 ]
Feng, Yangqin [1 ]
Xu, Xinxing [1 ]
Liu, Yong [1 ]
Ting, Daniel S. W. [5 ]
Goh, Rick Siow Mong [1 ]
Affiliations
[1] ASTAR, Inst High Performance Comp, Singapore, Singapore
[2] Univ Montreal, Mila, Montreal, PQ, Canada
[3] Incept Inst Artificial Intelligence, Abu Dhabi, U Arab Emirates
[4] Univ Elect Sci & Technol China, Chengdu, Peoples R China
[5] Singapore Eye Res Inst, Singapore, Singapore
Keywords
Transformer; Domain adaptation; Few-shot
DOI
10.1007/978-3-030-87196-3_31
CLC classification
TP18 [Theory of artificial intelligence]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Deep neural networks (DNNs) trained on one set of medical images often suffer a severe performance drop on unseen test images, due to various domain discrepancies between the training images (source domain) and the test images (target domain), which raises the problem of domain adaptation. In clinical settings, it is difficult to collect enough annotated target-domain data in a short period. Few-shot domain adaptation, i.e., adapting a trained model with a handful of annotations, is highly practical and useful in this case. In this paper, we propose a Polymorphic Transformer (Polyformer), which can be incorporated into any DNN backbone for few-shot domain adaptation. Specifically, after the polyformer layer is inserted into a model trained on the source domain, it extracts a set of prototype embeddings, which can be viewed as a "basis" of the source-domain features. On the target domain, the polyformer layer adapts by only updating a projection layer that controls the interactions between image features and the prototype embeddings. All other model weights (except BatchNorm parameters) are frozen during adaptation. Thus, the chance of overfitting the annotations is greatly reduced, and the model can perform robustly on the target domain after being trained on a few annotated images. We demonstrate the effectiveness of Polyformer on two medical segmentation tasks (i.e., optic disc/cup segmentation and polyp segmentation). The source code of Polyformer is released at https://github.com/askerlee/segtran.
Pages: 330-340
Number of pages: 11
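
The abstract above describes the adaptation mechanism only at a high level. Below is a minimal, illustrative PyTorch sketch of that idea, not the authors' implementation (which is available at the GitHub link above): the class name PrototypeAdapterLayer, the helper freeze_for_few_shot_adaptation, the prototype count, the single-head dot-product attention, and the residual connection are all assumptions made here for illustration.

```python
import torch
import torch.nn as nn


class PrototypeAdapterLayer(nn.Module):
    """Illustrative layer in the spirit of the abstract: image features attend
    to a small set of prototype embeddings (a learned "basis" of source-domain
    features), and a projection layer mediates that interaction."""

    def __init__(self, feat_dim: int, num_prototypes: int = 64):
        super().__init__()
        # Prototype embeddings learned on the source domain (frozen later).
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, feat_dim))
        # Projection controlling feature-prototype interaction; per the
        # abstract, only this part (plus BatchNorm) is updated on the target.
        self.proj = nn.Linear(feat_dim, feat_dim, bias=False)
        self.out = nn.Linear(feat_dim, feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_tokens, feat_dim) -- flattened spatial features.
        queries = self.proj(x)                                        # (B, N, D)
        attn = torch.softmax(queries @ self.prototypes.t(), dim=-1)   # (B, N, K)
        mixed = attn @ self.prototypes                                # (B, N, D)
        return x + self.out(mixed)                                    # residual


def freeze_for_few_shot_adaptation(model: nn.Module) -> None:
    """Freeze all weights except the projection layer(s) and BatchNorm
    parameters, mirroring the adaptation recipe described in the abstract."""
    for name, param in model.named_parameters():
        param.requires_grad = "proj" in name
    for module in model.modules():
        if isinstance(module, (nn.BatchNorm1d, nn.BatchNorm2d, nn.BatchNorm3d)):
            for param in module.parameters():
                param.requires_grad = True
```

Under this scheme only the projection weights and BatchNorm parameters remain trainable on the target domain, which keeps the number of adapted parameters small and limits overfitting to the handful of annotated target images.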