Neuromorphic Vision-Based Motion Segmentation With Graph Transformer Neural Network

Cited: 0
Authors
Alkendi, Yusra [1 ]
Azzam, Rana [2 ,3 ]
Javed, Sajid
Seneviratne, Lakmal [2 ]
Zweiri, Yahya [3 ,4 ]
Affiliations
[1] Technol Innovat Inst TII, Prop & Space Res Ctr PSRC, Abu Dhabi, U Arab Emirates
[2] Khalifa Univ Sci & Technol, Khalifa Univ Ctr Autonomous Robot Syst KUCARS, Abu Dhabi, U Arab Emirates
[3] Khalifa Univ Sci & Technol, Dept Aerosp Engn, Abu Dhabi, U Arab Emirates
[4] Khalifa Univ Sci & Technol, Adv Res & Innovat Ctr ARIC, Abu Dhabi, U Arab Emirates
Keywords
Motion segmentation; Computer vision; Dynamics; Cameras; Event detection; Transformers; Streams; Heuristic algorithms; Vehicle dynamics; Classification algorithms; Neuromorphic vision; dynamic vision sensor; event camera; graph transformer neural networks; navigation
DOI
10.1109/TMM.2024.3521662
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Moving object segmentation is critical to interpreting scene dynamics for robotic navigation systems in challenging environments. Neuromorphic vision sensors are tailored for motion perception due to their asynchronous nature, high temporal resolution, and reduced power consumption. However, their unconventional output requires novel perception paradigms to leverage its spatially sparse and temporally dense nature. In this work, we propose a novel event-based motion segmentation algorithm using a Graph Transformer Neural Network, dubbed GTNN. Our algorithm processes event streams as 3D graphs through a series of nonlinear transformations to unveil local and global spatiotemporal correlations between events. Based on these correlations, events belonging to moving objects are segmented from the background without prior knowledge of the dynamic scene geometry. The algorithm is trained on publicly available datasets, including MOD, EV-IMO, and EV-IMO2, using the proposed training scheme to facilitate efficient training on extensive datasets. Moreover, we introduce the Dynamic Object Mask-aware Event Labeling (DOMEL) approach for generating approximate ground-truth labels for event-based motion segmentation datasets. We use DOMEL to label our own recorded Event dataset for Motion Segmentation (EMS-DOMEL), which we release to the public for further research and benchmarking. Rigorous experiments conducted on several unseen publicly available datasets reveal that GTNN outperforms state-of-the-art methods in the presence of dynamic background variations, varying motion patterns, and multiple dynamic objects of varying sizes and velocities. GTNN achieves significant performance gains, with an average increase of 9.4% in motion segmentation accuracy (IoU%) and 4.5% in detection rate (DR%).
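The abstract describes processing an event stream as a 3D graph whose nodes are events in (x, y, t) space. As a rough, hedged illustration of that general idea (not the paper's actual construction), the sketch below builds a spatiotemporal k-nearest-neighbor graph over events; the time-scaling factor `beta` and the function name `build_event_graph` are assumptions introduced here for illustration only.

```python
import numpy as np

def build_event_graph(events, beta=1e3, k=4):
    """Sketch of a spatiotemporal event graph.

    events: (N, 3) array of (x, y, t) events.
    beta:   hypothetical scale making temporal distance comparable to pixels.
    Returns a directed edge list of shape (N * k, 2) linking each event to
    its k nearest neighbors in (x, y, beta * t) space.
    """
    pts = events.astype(float)
    pts[:, 2] *= beta                        # scale the time axis
    # pairwise squared distances (brute force; fine for small N)
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)             # exclude self-loops
    nbrs = np.argsort(d2, axis=1)[:, :k]     # k nearest neighbors per event
    src = np.repeat(np.arange(len(pts)), k)
    return np.stack([src, nbrs.ravel()], axis=1)

# toy stream: 10 events with pixel coords and timestamps in seconds
rng = np.random.default_rng(0)
events = np.column_stack([rng.integers(0, 240, 10),
                          rng.integers(0, 180, 10),
                          np.sort(rng.uniform(0.0, 0.01, 10))])
edges = build_event_graph(events, k=3)
print(edges.shape)  # (30, 2)
```

In a graph-transformer pipeline, such an edge list would feed attention or message-passing layers that aggregate per-event features along graph edges; the paper's nonlinear transformations operate on a graph of this general kind.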
Pages: 385-400
Page count: 16