Spatio-Temporal Interest Points Chain (STIPC) for Activity Recognition

Cited by: 0
Authors
Yuan, Fei [1 ]
Xia, Gui-Song [2 ]
Sahbi, Hichem [2 ]
Prinet, Veronique [1 ]
Affiliations
[1] Chinese Acad Sci, Inst Automat, Natl Lab Pattern Recognit, Beijing, Peoples R China
[2] TELECOM ParisTech, LTCI CNRS, Paris, France
Keywords
DOI
Not available
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
We present a novel feature, the Spatio-Temporal Interest Points Chain (STIPC), for activity representation and recognition. This feature consists of a set of trackable spatio-temporal interest points corresponding to a series of motion discontinuities within the long-term motion of an object or its parts. With this chain feature, we not only capture the discriminative motion information that space-time interest-point-like features aim to extract, but also model the connections between those points. Specifically, we first extract point trajectories from the image sequences, then partition the points on each trajectory into two different yet closely related kinds: discontinuous-motion points and continuous-motion points. We extract local space-time features around the discontinuous-motion points and represent them with a chain model. Furthermore, we introduce a chain descriptor that encodes the temporal relationships between these interdependent local space-time features. Experimental results on challenging datasets show that our STIPC feature improves on local space-time features and achieves state-of-the-art results.
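The pipeline described in the abstract (track points, split each trajectory into continuous- and discontinuous-motion points, chain the discontinuities, encode their temporal relationships) can be sketched roughly as follows. This is an illustrative sketch only: the function names, the acceleration-based discontinuity score, the threshold, and the gap-histogram descriptor are all assumptions, not details from the paper.

```python
# Hypothetical sketch of the STIPC idea: all names, thresholds, and the
# gap-histogram encoding below are illustrative assumptions, not the
# authors' actual formulation.
import numpy as np

def motion_discontinuity(traj):
    """Per-point motion-change score along one trajectory.

    traj: (T, 2) array of tracked (x, y) positions over T frames.
    Interior points get the magnitude of the change in frame-to-frame
    velocity (a discrete acceleration); endpoints score 0.
    """
    v = np.diff(traj, axis=0)        # (T-1, 2) frame-to-frame velocities
    a = np.diff(v, axis=0)           # (T-2, 2) velocity changes
    score = np.zeros(len(traj))
    score[1:-1] = np.linalg.norm(a, axis=1)
    return score

def extract_chain(traj, thresh=1.0):
    """Partition trajectory points into discontinuous- vs
    continuous-motion points; return the frame indices of the
    discontinuous ones, ordered in time (the 'chain')."""
    score = motion_discontinuity(traj)
    return [t for t, s in enumerate(score) if s > thresh]

def chain_descriptor(chain, n_bins=4, max_gap=30):
    """Toy temporal chain descriptor: a normalized histogram of the
    temporal gaps between consecutive discontinuous points, standing in
    for the paper's encoding of temporal relationships."""
    gaps = np.clip(np.diff(chain), 0, max_gap)
    hist, _ = np.histogram(gaps, bins=n_bins, range=(0, max_gap))
    return hist / max(hist.sum(), 1)

# Example: a point that moves right for 5 frames, then sharply up.
traj = np.array([(t, 0) for t in range(5)] + [(4, t) for t in range(1, 6)],
                dtype=float)
chain = extract_chain(traj)      # the turn shows up as one discontinuity
desc = chain_descriptor([4, 10, 40])
```

In a real system the local space-time features would be appearance/motion descriptors (e.g. HOG/HOF patches) extracted around each chain point, with the chain descriptor encoding their order and spacing.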
Pages: 22 / 26
Number of pages: 5