CONTINUAL LEARNING IN VISION TRANSFORMER

Cited by: 1

Authors
Takeda, Mana [1 ]
Yanai, Keiji [1 ]
Affiliations
[1] Univ Electrocommun, Tokyo, Japan
Keywords
Continual Learning; Vision Transformer;
DOI
10.1109/ICIP46576.2022.9897851
CLC Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Continual learning aims to learn new tasks from new data continuously while retaining knowledge of previously learned tasks. Recently, the Vision Transformer, which applies the Transformer originally proposed for natural language processing to computer vision, has achieved higher accuracy than Convolutional Neural Networks (CNNs) on image recognition tasks. However, few methods have achieved continual learning with the Vision Transformer. In this paper, we compare and improve continual learning methods that can be applied to both CNNs and Vision Transformers. In our experiments, we compare several continual learning methods and their combinations, showing the differences in accuracy and in the number of parameters.
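The abstract does not detail which methods the paper compares, but rehearsal (experience replay) is one architecture-agnostic continual-learning ingredient that applies equally to CNNs and Vision Transformers: a small buffer of past-task examples is replayed alongside new data to mitigate catastrophic forgetting. A minimal sketch is below; the class and the `train_step` hook are hypothetical illustrations, not from the paper.

```python
import random

class ReplayBuffer:
    """Fixed-size memory of past-task examples, filled by reservoir sampling
    so every example seen so far has an equal chance of being retained."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.data = []
        self.seen = 0  # total examples offered so far

    def add(self, example):
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append(example)
        else:
            # Replace a random slot with probability capacity / seen
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = example

    def sample(self, k):
        # Draw up to k stored examples for rehearsal
        return random.sample(self.data, min(k, len(self.data)))


def continual_training_loop(tasks, buffer, train_step):
    """Task-incremental sketch: each gradient step mixes the current batch
    with replayed examples from earlier tasks."""
    for task in tasks:
        for batch in task:
            replay = buffer.sample(len(batch))
            train_step(batch + replay)  # hypothetical model update hook
            for ex in batch:
                buffer.add(ex)
```

The same loop works regardless of whether `train_step` updates a CNN or a Vision Transformer, which is what makes rehearsal a natural baseline for the architecture-agnostic comparison the abstract describes.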
Pages: 616-620 (5 pages)
Related Papers (50 total)
  • [41] Residual Continual Learning. Lee, Janghyeon; Joo, Donggyu; Hong, Hyeong Gwon; Kim, Junmo. Thirty-Fourth AAAI Conference on Artificial Intelligence, 2020, 34: 4553-4560.
  • [42] Flashback for Continual Learning. Mahmoodi, Leila; Harandi, Mehrtash; Moghadam, Peyman. IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2023: 3426-3435.
  • [43] Kernel Continual Learning. Derakhshani, Mohammad Mahdi; Zhen, Xiantong; Shao, Ling; Snoek, Cees G. M. International Conference on Machine Learning (ICML), 2021, 139.
  • [44] Reinforced Continual Learning. Xu, Ju; Zhu, Zhanxing. Advances in Neural Information Processing Systems 31 (NIPS 2018), 2018, 31.
  • [45] Open-world continual learning: Unifying novelty detection and continual learning. Kim, Gyuhak; Xiao, Changnan; Konishi, Tatsuya; Ke, Zixuan; Liu, Bing. Artificial Intelligence, 2025, 338.
  • [46] Preventing Zero-Shot Transfer Degradation in Continual Learning of Vision-Language Models. Zheng, Zangwei; Ma, Mingyuan; Wang, Kai; Qin, Ziheng; Yue, Xiangyu; You, Yang. IEEE/CVF International Conference on Computer Vision (ICCV), 2023: 19068-19079.
  • [47] Continual World: A Robotic Benchmark for Continual Reinforcement Learning. Wolczyk, Maciej; Zajac, Michal; Pascanu, Razvan; Kucinski, Lukasz; Milos, Piotr. Advances in Neural Information Processing Systems 34 (NeurIPS 2021), 2021, 34.
  • [48] ViT-LR: Pushing the Envelope for Transformer-Based On-Device Embedded Continual Learning. Dequino, Alberto; Conti, Francesco; Benini, Luca. IEEE 13th International Green and Sustainable Computing Conference (IGSC), 2022: 5-10.
  • [49] Vision Transformer-Based Ensemble Learning for Hyperspectral Image Classification. Liu, Jun; Guo, Haoran; He, Yile; Li, Huali. Remote Sensing, 2023, 15 (21).
  • [50] CrimeNet: Neural Structured Learning using Vision Transformer for violence detection. Rendon-Segador, Fernando J.; Alvarez-Garcia, Juan A.; Salazar-Gonzalez, Jose L.; Tommasi, Tatiana. Neural Networks, 2023, 161: 318-329.