SPAN: siampillars attention network for 3D object tracking in point clouds

Cited by: 1
Authors
Zhuang, Yi [1 ]
Zhao, Haitao [1 ]
Affiliations
[1] East China Univ Sci & Technol, Sch Informat Sci & Engn, Automat Dept, Shanghai 200237, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Point clouds; 3D object tracking; Siamese trackers; Attention mechanism; Deformable convolution; Region proposal network;
DOI
10.1007/s13042-022-01508-8
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
3D point clouds provide rich geometric information that helps address the scale variation found in 2D image-based object tracking. Although Siamese trackers are widely used and achieve strong performance in 2D, their application to 3D point clouds has received little attention because of the different data format and structural information. To apply a 2D Siamese tracker to object tracking in raw point clouds, we propose a SiamPillars attention network (SPAN). SPAN first converts 3D point clouds into 2D pseudo-images so that 2D tracking methods can be applied. To cope with the sparsity of raw point clouds, a separate attention module (SAM), consisting of a height-and-width (HW) attention module and a cross-channel attention module, is designed to enrich the extracted features. A modulated deformable convolutional network (MDCN) is further applied to handle object deformations during tracking. Finally, an anchor-based region proposal network (RPN) with depth-wise correlation locates the object and regresses the 3D bounding box, allowing SPAN to work in a single-shot, end-to-end learning manner. Our experiments on the KITTI dataset demonstrate the superiority of SPAN, which runs at 46.6 frames per second (FPS) on a single NVIDIA GTX 1080 Ti GPU. Code is available at https://github.com/ZCHILLAXY/SPAN.
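The depth-wise correlation mentioned in the abstract slides each channel of the search-region feature map over the matching channel of the template feature map, producing one response map per channel (no summation across channels, unlike ordinary convolution). The minimal sketch below illustrates that operation on nested Python lists; the function name and the toy feature maps are hypothetical illustrations, not taken from the paper's code.

```python
def depthwise_correlation(search, template):
    """Correlate each channel of `search` with the matching channel of
    `template`. Shapes: search is C x Hs x Ws, template is C x Ht x Wt
    (nested lists, Ht <= Hs and Wt <= Ws). Returns C response maps of
    size (Hs-Ht+1) x (Ws-Wt+1)."""
    C = len(search)
    Hs, Ws = len(search[0]), len(search[0][0])
    Ht, Wt = len(template[0]), len(template[0][0])
    out = []
    for c in range(C):  # channels are kept separate: depth-wise, not full conv
        resp = []
        for i in range(Hs - Ht + 1):
            row = []
            for j in range(Ws - Wt + 1):
                s = 0.0
                for u in range(Ht):
                    for v in range(Wt):
                        s += search[c][i + u][j + v] * template[c][u][v]
                row.append(s)
            resp.append(row)
        out.append(resp)
    return out

# Toy example: one channel, 3x3 search region, 2x2 template.
search = [[[1, 2, 3], [4, 5, 6], [7, 8, 9]]]
template = [[[1, 0], [0, 1]]]
print(depthwise_correlation(search, template))  # → [[[6.0, 8.0], [12.0, 14.0]]]
```

In practice this is implemented as a grouped convolution (e.g. `groups=C` in a deep-learning framework) so it runs on the GPU; the loop version above is only meant to make the per-channel nature of the operation explicit.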
Pages: 2105–2117
Page count: 13
Related papers
50 records in total
  • [1] SPAN: siampillars attention network for 3D object tracking in point clouds
    Yi Zhuang
    Haitao Zhao
    International Journal of Machine Learning and Cybernetics, 2022, 13 : 2105 - 2117
  • [2] 3D Siamese Transformer Network for Single Object Tracking on Point Clouds
    Hui, Le
    Wang, Lingpeng
    Tang, Linghua
    Lan, Kaihao
    Xie, Jin
    Yang, Jian
    COMPUTER VISION - ECCV 2022, PT II, 2022, 13662 : 293 - 310
  • [3] Point attention network for semantic segmentation of 3D point clouds
    Feng, Mingtao
    Zhang, Liang
    Lin, Xuefei
    Gilani, Syed Zulqarnain
    Mian, Ajmal
    PATTERN RECOGNITION, 2020, 107 (107)
  • [4] P2B: Point-to-Box Network for 3D Object Tracking in Point Clouds
    Qi, Haozhe
    Feng, Chen
    Cao, Zhiguo
    Zhao, Feng
    Xiao, Yang
    2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2020, : 6328 - 6337
  • [5] Graph-Based Point Tracker for 3D Object Tracking in Point Clouds
    Park, Minseong
    Seong, Hongje
    Jang, Wonje
    Kim, Euntai
THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELFTH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 2053 - 2061
  • [6] Point Siamese Network for Person Tracking Using 3D Point Clouds
    Cui, Yubo
    Fang, Zheng
    Zhou, Sifan
    SENSORS, 2020, 20 (01)
  • [7] Enhanced Vote Network for 3D Object Detection in Point Clouds
    Zhong, Min
    Zeng, Gang
    2020 25TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2021, : 6624 - 6631
  • [8] Relation Graph Network for 3D Object Detection in Point Clouds
    Feng, Mingtao
    Gilani, Syed Zulqarnain
    Wang, Yaonan
    Zhang, Liang
    Mian, Ajmal
    IEEE TRANSACTIONS ON IMAGE PROCESSING, 2021, 30 : 92 - 107
  • [9] Learning Deformable Network for 3D Object Detection on Point Clouds
    Zhang, Wanyi
    Fu, Xiuhua
    Li, Wei
    MOBILE INFORMATION SYSTEMS, 2021, 2021
  • [10] Optimisation of the PointPillars network for 3D object detection in point clouds
    Stanisz, Joanna
    Lis, Konrad
    Kryjak, Tomasz
    Gorgon, Marek
    2020 SIGNAL PROCESSING - ALGORITHMS, ARCHITECTURES, ARRANGEMENTS, AND APPLICATIONS (SPA), 2020, : 122 - 127