Adaptive Hybrid Vision Transformer for Small Datasets

Cited: 0
Authors
Yin, Mingjun [1 ]
Chang, Zhiyong [2 ]
Wang, Yan [3 ]
Affiliations
[1] Univ Melbourne, Melbourne, Vic, Australia
[2] Peking Univ, Beijing, Peoples R China
[3] Xiaochuan Chuhai, Beijing, Peoples R China
Keywords
Vision Transformer; Small Dataset; Self-Attention;
DOI
10.1109/ICTAI59109.2023.00132
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recently, vision Transformers (ViTs) have achieved competitive performance on many computer vision tasks. However, when trained from scratch on small datasets, vision Transformers underperform Convolutional Neural Networks (CNNs), which is commonly attributed to their lack of a locality inductive bias. This impedes the application of vision Transformers to small-scale datasets. In this work, we propose the Adaptive Hybrid Vision Transformer (AHVT) to boost the performance of vision Transformers on small-scale datasets. Specifically, along the spatial dimension, we exploit a Convolutional Overlapping Patch Embedding (COPE) layer to inject the desired inductive bias into the model, forcing it to learn local token features. Along the channel dimension, we insert an adaptive channel feature aggregation block into the vanilla feed-forward network to calibrate channel responses. Meanwhile, we append several extra learnable "cardinality tokens" to the patch token sequence to capture cross-channel interactions. We present extensive experiments validating the effectiveness of our method on five small/medium datasets: CIFAR10/100, SVHN, Tiny-ImageNet, and ImageNet-1k. Our approach attains state-of-the-art performance on the above four small datasets when training from scratch.
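The COPE layer described in the abstract embeds patches with a convolution whose stride is smaller than its kernel, so neighbouring tokens share pixels. The following is a minimal NumPy sketch of that idea only; the kernel size, stride, padding, and embedding dimension here are illustrative assumptions, not the values used in the paper, and the random projection stands in for learned convolution weights.

```python
import numpy as np

def overlapping_patch_embed(img, kernel=7, stride=4, pad=2, dim=64, seed=0):
    """Sketch of a convolutional overlapping patch embedding.

    Unlike ViT's non-overlapping split (stride == kernel), stride < kernel
    makes adjacent patches overlap, injecting a locality bias.
    img: array of shape (C, H, W); returns (num_tokens, dim).
    """
    rng = np.random.default_rng(seed)
    C, H, W = img.shape
    x = np.pad(img, ((0, 0), (pad, pad), (pad, pad)))
    out_h = (H + 2 * pad - kernel) // stride + 1
    out_w = (W + 2 * pad - kernel) // stride + 1
    # random projection stands in for the learned conv weights
    weight = rng.standard_normal((dim, C * kernel * kernel)) * 0.02
    tokens = np.empty((out_h * out_w, dim))
    for i in range(out_h):
        for j in range(out_w):
            patch = x[:, i*stride:i*stride+kernel, j*stride:j*stride+kernel]
            tokens[i * out_w + j] = weight @ patch.ravel()
    return tokens

img = np.zeros((3, 32, 32))          # a CIFAR-sized input
tokens = overlapping_patch_embed(img)
print(tokens.shape)                  # (64, 64): an 8x8 grid of overlapping patch tokens
```

With a 32x32 input, kernel 7, stride 4, and padding 2, each spatial axis yields (32 + 4 - 7) // 4 + 1 = 8 positions, so 64 tokens, each 7x7 patch overlapping its neighbours by 3 pixels.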
Pages: 873-880
Page count: 8
Related Papers
50 records
  • [21] VisionNet: An efficient vision transformer-based hybrid adaptive networks for eye cancer detection with enhanced cheetah optimizer
    Akshaya, B.
    Sakthivel, P.
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2024, 97
  • [22] Hybrid UNet transformer architecture for ischemic stoke segmentation with MRI and CT datasets
    Soh, Wei Kwek
    Rajapakse, Jagath C.
    FRONTIERS IN NEUROSCIENCE, 2023, 17
  • [23] Hybrid AI model for power transformer assessment using imbalanced DGA datasets
    Wang, Lin
    Littler, Tim
    Liu, Xueqin
    IET RENEWABLE POWER GENERATION, 2023, 17 (08) : 1912 - 1922
  • [24] IRSTFormer: A Hierarchical Vision Transformer for Infrared Small Target Detection
    Chen, Gao
    Wang, Weihua
    Tan, Sirui
    REMOTE SENSING, 2022, 14 (14)
  • [25] Vision Transformers for Small Histological Datasets Learned Through Knowledge Distillation
    Kanwal, Neel
    Eftestol, Trygve
    Khoraminia, Farbod
    Zuiverloon, Tahlita C. M.
    Engan, Kjersti
    ADVANCES IN KNOWLEDGE DISCOVERY AND DATA MINING, PAKDD 2023, PT III, 2023, 13937 : 167 - 179
  • [26] AViT: Adapting Vision Transformers for Small Skin Lesion Segmentation Datasets
    Du, Siyi
    Bayasi, Nourhan
    Hamarneh, Ghassan
    Garbi, Rafeef
    MEDICAL IMAGE COMPUTING AND COMPUTER ASSISTED INTERVENTION, MICCAI 2023 WORKSHOPS, 2023, 14393 : 25 - 36
  • [27] Comparative Analysis of Vision Transformer Models for Facial Emotion Recognition Using Augmented Balanced Datasets
    Bobojanov, Sukhrob
    Kim, Byeong Man
    Arabboev, Mukhriddin
    Begmatov, Shohruh
    APPLIED SCIENCES-BASEL, 2023, 13 (22):
  • [28] Image-Adaptive Hint Generation via Vision Transformer for Outpainting
    Kong, Daehyeon
    Kong, Kyeongbo
    Kim, Kyunghun
    Min, Sung-Jun
    Kang, Suk-Ju
    2022 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV 2022), 2022, : 4029 - 4038
  • [29] Adaptive Parking Slot Occupancy Detection Using Vision Transformer and LLIE
    Pannerselvam, Karthick
    2021 IEEE INTERNATIONAL SMART CITIES CONFERENCE (ISC2), 2021,
  • [30] HaViT: Hybrid-Attention Based Vision Transformer for Video Classification
    Li, Li
    Zhuang, Liansheng
    Gao, Shenghua
    Wang, Shafei
    COMPUTER VISION - ACCV 2022, PT IV, 2023, 13844 : 502 - 517