CONTINUAL LEARNING IN VISION TRANSFORMER

Cited by: 1
Authors
Takeda, Mana [1 ]
Yanai, Keiji [1 ]
Affiliations
[1] Univ Electrocommun, Tokyo, Japan
Keywords
Continual Learning; Vision Transformer
DOI
10.1109/ICIP46576.2022.9897851
CLC Classification
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Continual learning aims to continuously learn new tasks from new data while retaining the knowledge of tasks learned in the past. Recently, the Vision Transformer, which applies the Transformer architecture originally proposed for natural language processing to computer vision, has achieved higher accuracy than Convolutional Neural Networks (CNNs) on image recognition tasks. However, few methods have achieved continual learning with the Vision Transformer. In this paper, we compare and improve continual learning methods that can be applied to both CNNs and Vision Transformers. In our experiments, we compare several continual learning methods and their combinations, showing the differences in accuracy and in the number of parameters.
Pages: 616-620 (5 pages)
Related Papers
50 items in total
  • [1] Continual Learning with Lifelong Vision Transformer
    Wang, Zhen
    Liu, Liu
    Duan, Yiqun
    Kong, Yajing
    Tao, Dacheng
    [J]. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 171 - 181
  • [2] Online Continual Learning with Contrastive Vision Transformer
    Wang, Zhen
    Liu, Liu
    Kong, Yajing
    Guo, Jiaxian
    Tao, Dacheng
    [J]. COMPUTER VISION, ECCV 2022, PT XX, 2022, 13680 : 631 - 650
  • [3] FedViT: Federated continual learning of vision transformer at edge
    Zuo, Xiaojiang
    Luopan, Yaxin
    Han, Rui
    Zhang, Qinglong
    Liu, Chi Harold
    Wang, Guoren
    Chen, Lydia Y.
    [J]. FUTURE GENERATION COMPUTER SYSTEMS-THE INTERNATIONAL JOURNAL OF ESCIENCE, 2024, 154 : 1 - 15
  • [4] Task-Free Dynamic Sparse Vision Transformer for Continual Learning
    Ye, Fei
    Bors, Adrian G.
    [J]. THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 15, 2024, : 16442 - 16450
  • [5] Transformer with Task Selection for Continual Learning
    Huang, Sheng-Kai
    Huang, Chun-Rong
    [J]. 2023 18TH INTERNATIONAL CONFERENCE ON MACHINE VISION AND APPLICATIONS, MVA, 2023,
  • [6] On the Effectiveness of LayerNorm Tuning for Continual Learning in Vision Transformers
    De Min, Thomas
    Mancini, Massimiliano
    Alahari, Karteek
    Alameda-Pineda, Xavier
    Ricci, Elisa
    [J]. 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION WORKSHOPS, ICCVW, 2023, : 3577 - 3586
  • [7] Representation Learning Based on Vision Transformer
    Ran, Ruisheng
    Gao, Tianyu
    Hu, Qianwei
    Zhang, Wenfeng
    Peng, Shunshun
    Fang, Bin
    [J]. INTERNATIONAL JOURNAL OF PATTERN RECOGNITION AND ARTIFICIAL INTELLIGENCE, 2024, 38 (07)
  • [8] CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks
    Srinivasan, Tejas
    Chang, Ting-Yun
    Alva, Leticia Pinto
    Chochlakis, Georgios
    Rostami, Mohammad
    Thomason, Jesse
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 35 (NEURIPS 2022), 2022,
  • [9] Continual Vision-based Reinforcement Learning with Group Symmetries
    Liu, Shiqi
    Xu, Mengdi
    Huang, Peide
    Zhang, Xilun
    Liu, Yongkang
    Oguchi, Kentaro
    Zhao, Ding
    [J]. CONFERENCE ON ROBOT LEARNING, VOL 229, 2023, 229
  • [10] Vision Transformer Adapters for Generalizable Multitask Learning
    Bhattacharjee, Deblina
    Susstrunk, Sabine
    Salzmann, Mathieu
    [J]. 2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023), 2023, : 18969 - 18980