Few-Shot Learning on Edge Devices Using CLIP: A Resource-Efficient Approach for Image Classification

Cited: 0
Author
Lu, Jin [1]
Affiliation
[1] Shenzhen Polytech Univ, Guangdong Key Lab Big Data Intelligence Vocat Educ, Shenzhen 518055, Guangdong, Peoples R China
Source
INFORMATION TECHNOLOGY AND CONTROL | 2024 / Vol. 53 / No. 3
Keywords
Few-shot learning; CLIP model; image classification; edge devices; deep learning;
DOI
10.5755/j01.itc.53.3.36943
CLC Number
TP [Automation technology, computer technology];
Discipline Code
0812 ;
Abstract
In the field of deep learning, traditional image classification tasks typically require extensive annotated datasets and complex model training processes, which pose significant challenges for deployment on resource-constrained edge devices. To address these challenges, this study introduces a few-shot learning method based on OpenAI's CLIP model that significantly reduces computational demands by eliminating the need to run a text encoder at the inference stage. By pre-computing the embedding centers of classification text with a small set of image-text data, our approach enables the direct use of CLIP's image encoder and pre-calculated text embeddings for efficient image classification. This adaptation not only allows for high-precision classification tasks on edge devices with limited computing capabilities but also achieves accuracy and recall rates that closely approximate those of the pre-trained ResNet approach while using far less data. Furthermore, our method halves the memory usage compared to other large-scale visual models of similar capacity by avoiding the use of a text encoder during inference, making it particularly suitable for low-resource environments. This comparative advantage underscores the efficiency of our approach in handling few-shot image classification tasks, demonstrating both competitive accuracy and practical viability in resource-limited settings. The outcomes of this research not only highlight the potential of the CLIP model in few-shot learning scenarios but also pave a new path for efficient, low-resource deep learning applications in edge computing environments.
Pages: 324