PREDICTABILITY ANALYZING: DEEP REINFORCEMENT LEARNING FOR EARLY ACTION RECOGNITION

Cited: 2
Authors
Chen, Xiaokai [1 ,2 ]
Gao, Ke [1 ]
Cao, Juan [1 ]
Affiliations
[1] Chinese Acad Sci, Inst Comp Technol, Beijing, Peoples R China
[2] Univ Chinese Acad Sci, Beijing, Peoples R China
Funding
National Key R&D Program of China;
Keywords
Early action recognition; Predictability; Reinforcement learning;
DOI
10.1109/ICME.2019.00169
CLC number
TP31 [Computer Software];
Subject classification codes
081202; 0835;
Abstract
Early action recognition aims to infer ongoing activities from partially observed videos as early as possible, whereas conventional action recognition relies on fully observed activities. Observations show that the predictability of different activity subsequences varies widely, yet most existing work fails to fully exploit this phenomenon. We define the predictability of an activity subsequence as its capacity to support early and accurate recognition. A predictability-based early action recognition framework (PEAR) is established to exploit predictability information for early recognition. It consists of a predictability evaluator and a classifier. Because fine-grained supervision is lacking, we develop a reinforcement-learning-based strategy to optimize the evaluator, driven by a recognizability reward and an early reward. With the predictability estimated by the evaluator, the classifier learns discriminative representations of subsequences to perform early action recognition without sacrificing much accuracy. Experiments on two benchmark datasets demonstrate that the proposed approach significantly outperforms existing methods.
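The two rewards named in the abstract can be made concrete with a minimal sketch. The weighted combination below, the trade-off weight `alpha`, the threshold-based stopping rule, and all function names are illustrative assumptions, not the paper's exact formulation: the recognizability reward is taken as the classifier's confidence in the correct class, and the early reward as the fraction of the video left unobserved.

```python
def combined_reward(correct_prob, observed_ratio, alpha=0.5):
    """Hypothetical combination of the two rewards described in the
    abstract. correct_prob: classifier confidence in the ground-truth
    class (recognizability). observed_ratio: fraction of the video
    seen so far; stopping earlier yields a larger early reward.
    alpha is an assumed trade-off weight."""
    recognizability = correct_prob
    early = 1.0 - observed_ratio
    return alpha * recognizability + (1.0 - alpha) * early

def early_recognition(prefix_scores, threshold=0.8):
    """Sketch of threshold-based early stopping: the evaluator scores
    each video prefix; return the index of the first prefix deemed
    predictable enough, or the last index if none passes. The
    classifier would then predict from that prefix only."""
    for t, score in enumerate(prefix_scores):
        if score >= threshold:
            return t
    return len(prefix_scores) - 1
```

Under this sketch, stopping early on a confidently recognized prefix maximizes both reward terms at once, which is the trade-off the evaluator is trained to navigate.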
Pages: 958-963
Page count: 6
Related papers
50 records in total
  • [21] Li, Ziqiang; Ge, Yongxin; Feng, Jinyuan; Qi, Xiaolei; Yu, Jiaruo; Yu, Hui. Deep Selective Feature Learning for Action Recognition. 2020 IEEE International Conference on Multimedia and Expo (ICME), 2020.
  • [22] Fujimoto, Scott; Chang, Wei-Di; Smith, Edward J.; Gu, Shixiang Shane; Precup, Doina; Meger, David. For SALE: State-Action Representation Learning for Deep Reinforcement Learning. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023.
  • [23] Servadei, Lorenzo; Lee, Jin Hwa; Arjona Medina, Jose A.; Werner, Michael; Hochreiter, Sepp; Ecker, Wolfgang; Wille, Robert. Deep Reinforcement Learning for Optimization at Early Design Stages. IEEE Design & Test, 2023, 40(1): 43-51.
  • [24] Nikpour, Bahareh; Sinodinos, Dimitrios; Armanfard, Narges. Deep Reinforcement Learning in Human Activity Recognition: A Survey and Outlook. IEEE Transactions on Neural Networks and Learning Systems, 2024: 1-12.
  • [25] Wang, Yifan; Zhang, Qining; Ying, Lei; Zhou, Chuan. Deep Reinforcement Learning for Early Diagnosis of Lung Cancer. Thirty-Eighth AAAI Conference on Artificial Intelligence, Vol 38 No 20, 2024: 22410-22419.
  • [26] Chen, Lei; Lu, Jiwen; Song, Zhanjie; Zhou, Jie. Part-Activated Deep Reinforcement Learning for Action Prediction. Computer Vision - ECCV 2018, Pt III, 2018, 11207: 435-451.
  • [27] Schmid, Kyrill; Belzner, Lenz; Gabor, Thomas; Phan, Thomy. Action Markets in Deep Multi-Agent Reinforcement Learning. Artificial Neural Networks and Machine Learning - ICANN 2018, Pt II, 2018, 11140: 240-249.
  • [28] Zahavy, Tom; Haroush, Matan; Merlis, Nadav; Mankowitz, Daniel J.; Mannor, Shie. Learn What Not to Learn: Action Elimination with Deep Reinforcement Learning. Advances in Neural Information Processing Systems 31 (NIPS 2018), 2018, 31.
  • [29] Wang, Zuowei; Liao, Xiaozhong; Zhang, Fengdi; Xu, Min; Liu, Yanmin; Liu, Xiangdong; Zhang, Xi; Dong, Rui Wei; Li, Zhen. A Deep Reinforcement Learning Method with Action Switching for Autonomous Navigation. 2021 Proceedings of the 40th Chinese Control Conference (CCC), 2021: 3491-3496.
  • [30] Wu, Zheng; Khan, Naimul Mefraz; Gao, Lei; Guan, Ling. Deep Reinforcement Learning with Parameterized Action Space for Object Detection. 2018 IEEE International Symposium on Multimedia (ISM 2018), 2018: 101-104.