Class-Incremental Learning on Video-Based Action Recognition by Distillation of Various Knowledge

Cited by: 0
Authors
Maraghi, Vali Ollah [1]
Faez, Karim [1]
Affiliations
[1] Amirkabir Univ Technol, Dept Elect Engn, Tehran Polytech, Tehran, Iran
Keywords
DOI
10.1155/2022/4879942
Chinese Library Classification (CLC)
Q [Biological Sciences];
Subject classification codes
07; 0710; 09
Abstract
Recognition of activities in video is an important field in computer vision. Many successful approaches to activity recognition have achieved strong results in recent years. However, their training is entirely static: all classes are taught to the system in a single training step, and the system can subsequently recognize only those classes. The main disadvantage of this type of training is that whenever new classes must be learned, the system has to be retrained from scratch on all classes. This requirement raises several challenges, such as storing and retaining the original data and repeatedly paying the training cost. We propose an approach for training a video action recognition system that can learn new classes without access to previous data. Specifically, we present a class-incremental learning algorithm for video data in which two complementary mechanisms are combined to prevent catastrophic forgetting: network sharing and knowledge distillation. We introduce a neural network architecture for action recognition that represents the video data, and we propose distilling network knowledge at both the classification level and the feature level, where the feature-level term is split into spatial and temporal parts. We also suggest initializing the new classifiers from the previous classifiers. The proposed algorithm is evaluated on the UCF101, HMDB51, and Kinetics-400 datasets. We study various factors such as the amount of distilled knowledge, the number of new classes, and the number of incremental learning stages, and their impact on the final recognition system. Finally, we show that the proposed algorithm can teach new classes to the recognition system without forgetting the previous classes and without requiring previous or exemplar data.
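A minimal sketch of how classification-level and spatial/temporal feature-level distillation can be combined during an incremental stage, assuming a PyTorch-style setup in which a frozen copy of the previous network serves as teacher; the backbone interface (returning spatial features, temporal features, and class logits), the temperature T, and the mean-of-old-weights classifier initialization are illustrative assumptions, not the authors' released implementation:

    # Illustrative sketch only (assumptions: PyTorch; "old_net" is a frozen copy of
    # the network before the new classes were added; both networks return spatial
    # features, temporal features, and class logits for a batch of video clips).
    import torch
    import torch.nn.functional as F

    def distillation_losses(new_net, old_net, clips, T=2.0):
        """Classification-level and feature-level (spatial + temporal) distillation terms."""
        with torch.no_grad():
            old_spat, old_temp, old_logits = old_net(clips)   # teacher: previous model
        new_spat, new_temp, new_logits = new_net(clips)       # student: current model

        # Classification-level distillation: match softened predictions on the old classes.
        n_old = old_logits.size(1)
        loss_cls = F.kl_div(
            F.log_softmax(new_logits[:, :n_old] / T, dim=1),
            F.softmax(old_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)

        # Feature-level distillation, split into spatial and temporal parts.
        loss_spatial = F.mse_loss(new_spat, old_spat)
        loss_temporal = F.mse_loss(new_temp, old_temp)
        return loss_cls, loss_spatial, loss_temporal

    def extend_classifier(old_head: torch.nn.Linear, n_new: int) -> torch.nn.Linear:
        """Initialize the enlarged classifier from the previous one; filling the new
        class slots with the mean of the old class weights is one plausible choice
        (an assumption here, not necessarily the authors' exact scheme)."""
        n_old, dim = old_head.weight.shape
        head = torch.nn.Linear(dim, n_old + n_new)
        with torch.no_grad():
            head.weight[:n_old] = old_head.weight
            head.bias[:n_old] = old_head.bias
            head.weight[n_old:] = old_head.weight.mean(dim=0, keepdim=True)
            head.bias[n_old:] = old_head.bias.mean()
        return head

In such a setup, the distillation terms would typically be added, with weighting coefficients, to the standard cross-entropy loss on the new-class data at each incremental stage.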
Pages: 12
Related papers (50 in total)
  • [1] Park, Jaeyoo; Kang, Minsoo; Han, Bohyung. Class-Incremental Learning for Action Recognition in Videos. 2021 IEEE/CVF International Conference on Computer Vision (ICCV 2021), 2021: 13678-13687.
  • [2] Song, Jingduo; Jia, Hecheng; Xu, Feng. Class-Incremental Learning for Remote Sensing Images Based on Knowledge Distillation. IGARSS 2023 - 2023 IEEE International Geoscience and Remote Sensing Symposium, 2023: 5026-5028.
  • [3] Xu, Meng; Zhao, Yuanyuan; Liang, Yajun; Ma, Xiaorui. Hyperspectral Image Classification Based on Class-Incremental Learning with Knowledge Distillation. Remote Sensing, 2022, 14(11).
  • [4] Kang, Minsoo; Park, Jaeyoo; Han, Bohyung. Class-Incremental Learning by Knowledge Distillation with Adaptive Feature Consolidation. 2022 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2022), 2022: 16050-16059.
  • [5] Page-Fortin, Mathieu. Class-Incremental Learning of Plant and Disease Detection: Growing Branches with Knowledge Distillation. 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), 2023: 593-603.
  • [6] Dong, Songlin; Hong, Xiaopeng; Tao, Xiaoyu; Chang, Xinyuan; Wei, Xing; Gong, Yihong. Few-Shot Class-Incremental Learning via Relation Knowledge Distillation. Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI 2021), 2021, 35: 1255-1263.
  • [7] Liang, Sen; Zhu, Kai; Zhai, Wei; Liu, Zhiheng; Cao, Yang. Hypercorrelation Evolution for Video Class-Incremental Learning. Thirty-Eighth AAAI Conference on Artificial Intelligence (AAAI 2024), Vol. 38, No. 4, 2024: 3315-3323.
  • [8] Luo, Zilin; Liu, Yaoyao; Schiele, Bernt; Sun, Qianru. Class-Incremental Exemplar Compression for Class-Incremental Learning. 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2023), 2023: 11371-11380.
  • [9] Cheraghian, Ali; Rahman, Shafin; Fang, Pengfei; Roy, Soumava Kumar; Petersson, Lars; Harandi, Mehrtash. Semantic-aware Knowledge Distillation for Few-Shot Class-Incremental Learning. 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2021), 2021: 2534-2543.
  • [10] Shi, Yanyan; Shi, Dianxi; Qiao, Ziteng; Wang, Zhen; Zhang, Yi; Yang, Shaowu; Qiu, Chunping. Multi-granularity knowledge distillation and prototype consistency regularization for class-incremental learning. Neural Networks, 2023, 164: 617-630.