Learning Sequence Descriptor Based on Spatio-Temporal Attention for Visual Place Recognition

Cited by: 2
Authors
Zhao, Junqiao [1 ,2 ,3 ]
Zhang, Fenglin [1 ,2 ]
Cai, Yingfeng [1 ,2 ]
Tian, Gengxuan [1 ,2 ]
Mu, Wenjie [1 ,2 ]
Ye, Chen [1 ,2 ]
Feng, Tiantian [4 ]
Affiliations
[1] Tongji Univ, Sch Elect & Informat Engn, Dept Comp Sci & Technol, Shanghai 201804, Peoples R China
[2] Tongji Univ, MOE Key Lab Embedded Syst & Serv Comp, Shanghai 201804, Peoples R China
[3] Tongji Univ, Inst Intelligent Vehicles, Shanghai 201804, Peoples R China
[4] Tongji Univ, Sch Surveying & Geoinformat, Shanghai 200092, Peoples R China
Keywords
Transformers; Visualization; Encoding; Computer architecture; Task analysis; Simultaneous localization and mapping; Heuristic algorithms; Recognition; localization; SLAM; visual place recognition;
DOI: 10.1109/LRA.2024.3354627
Chinese Library Classification: TP24 [Robotics]
Discipline codes: 080202; 1405
Abstract
Visual Place Recognition (VPR) aims to retrieve frames from a geotagged database that were captured at the same place as the query frame. To improve the robustness of VPR under perceptual aliasing, sequence-based VPR methods have been proposed. These methods either match frame sequences directly or extract sequence descriptors for retrieval. However, the former usually relies on a constant-velocity assumption that rarely holds in practice, and it is computationally expensive and sensitive to sequence length. Although the latter avoids these problems, existing sequence descriptors are constructed by merely aggregating features of multiple frames, with no interaction along the temporal dimension, and therefore lack spatio-temporal discrimination. In this letter, we propose a sequence descriptor that effectively incorporates spatio-temporal information. Specifically, spatial attention within the same frame learns spatial feature patterns, while attention across corresponding local regions of different frames learns the persistence or change of features over time. We use a sliding window to control the temporal range of attention and relative positional encoding to build sequential relationships between features. This allows our descriptors to capture the intrinsic dynamics in a sequence of frames. Comprehensive experiments on challenging benchmark datasets show that the proposed approach outperforms recent state-of-the-art methods.
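The abstract's two attention stages can be illustrated with a minimal sketch: spatial attention within each frame, then temporal attention over corresponding regions inside a sliding window with a relative positional bias, followed by pooling into a single sequence descriptor. This is a simplified, hypothetical illustration of the idea, not the paper's implementation; the function name, the fixed (rather than learned) positional bias, and the mean-pooling step are all assumptions made for brevity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def spatio_temporal_descriptor(frames, window=2):
    """Toy sequence descriptor (hypothetical simplification of the paper).

    frames: array of shape (T, R, D) — T frames, R local regions per
    frame, D-dimensional features per region.
    """
    T, R, D = frames.shape
    scale = 1.0 / np.sqrt(D)

    # Stage 1: spatial attention — each region attends to all regions
    # of its own frame, learning within-frame feature patterns.
    spatial = np.empty_like(frames)
    for t in range(T):
        scores = frames[t] @ frames[t].T * scale          # (R, R)
        spatial[t] = softmax(scores, axis=-1) @ frames[t]

    # Stage 2: temporal attention — each region attends to the same
    # region index in neighboring frames inside a sliding window.
    # The paper learns a relative positional encoding; here a fixed
    # zero bias indexed by temporal offset stands in for it.
    rel_bias = np.zeros(2 * window + 1)
    temporal = np.empty_like(frames)
    for t in range(T):
        lo, hi = max(0, t - window), min(T, t + window + 1)
        for r in range(R):
            q = spatial[t, r]                             # (D,)
            k = spatial[lo:hi, r]                         # (W, D)
            offsets = np.arange(lo, hi) - t + window      # 0 .. 2*window
            scores = k @ q * scale + rel_bias[offsets]    # (W,)
            temporal[t, r] = softmax(scores) @ k

    # Pool over frames and regions into one sequence descriptor.
    return temporal.mean(axis=(0, 1))                     # (D,)
```

The sliding window (`window` frames on each side) bounds the temporal range of attention, so the cost stays linear in sequence length rather than quadratic.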
Pages: 2351-2358 (8 pages)