Neuromorphic Vision-Based Motion Segmentation With Graph Transformer Neural Network

Times Cited: 0
Authors
Alkendi, Yusra [1 ]
Azzam, Rana [2 ,3 ]
Javed, Sajid
Seneviratne, Lakmal [2 ]
Zweiri, Yahya [3 ,4 ]
Affiliations
[1] Technol Innovat Inst TII, Prop & Space Res Ctr PSRC, Abu Dhabi, U Arab Emirates
[2] Khalifa Univ Sci & Technol, Khalifa Univ Ctr Autonomous Robot Syst KUCARS, Abu Dhabi, U Arab Emirates
[3] Khalifa Univ Sci & Technol, Dept Aerosp Engn, Abu Dhabi, U Arab Emirates
[4] Khalifa Univ Sci & Technol, Adv Res & Innovat Ctr ARIC, Abu Dhabi, U Arab Emirates
Keywords
Motion segmentation; Computer vision; Dynamics; Cameras; Event detection; Transformers; Streams; Heuristic algorithms; Vehicle dynamics; Classification algorithms; Neuromorphic vision; dynamic vision sensor; event camera; motion segmentation; graph transformer neural networks; NAVIGATION;
DOI
10.1109/TMM.2024.3521662
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812;
Abstract
Moving object segmentation is critical to interpreting scene dynamics for robotic navigation systems in challenging environments. Neuromorphic vision sensors are tailored for motion perception due to their asynchronous nature, high temporal resolution, and reduced power consumption. However, their unconventional output requires novel perception paradigms to leverage their spatially sparse and temporally dense nature. In this work, we propose a novel event-based motion segmentation algorithm using a Graph Transformer Neural Network, dubbed GTNN. Our proposed algorithm processes event streams as 3D graphs through a series of nonlinear transformations to unveil local and global spatiotemporal correlations between events. Based on these correlations, events belonging to moving objects are segmented from the background without prior knowledge of the dynamic scene geometry. The algorithm is trained on publicly available datasets, including MOD, EV-IMO, and EV-IMO2, using the proposed training scheme to facilitate efficient training on extensive datasets. Moreover, we introduce the Dynamic Object Mask-aware Event Labeling (DOMEL) approach for generating approximate ground-truth labels for event-based motion segmentation datasets. We use DOMEL to label our own recorded Event dataset for Motion Segmentation (EMS-DOMEL), which we release to the public for further research and benchmarking. Rigorous experiments are conducted on several unseen publicly available datasets, and the results reveal that GTNN outperforms state-of-the-art methods in the presence of dynamic background variations, varied motion patterns, and multiple dynamic objects of differing sizes and velocities. GTNN achieves significant performance gains, with an average increase of 9.4% in motion segmentation accuracy (IoU%) and 4.5% in detection rate (DR%).
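The abstract describes processing event streams as 3D graphs in (x, y, t) space. As a rough illustration of what the graph-construction step could look like (not the paper's actual pipeline), one might connect events that fall within a spatiotemporal radius after rescaling time to pixel units; the `radius` and `time_scale` values and the edge rule below are illustrative assumptions:

```python
import numpy as np

def build_event_graph(events, radius=3.0, time_scale=1e-3):
    """Connect events that are close in normalized (x, y, t) space.

    events: (N, 3) array of [x, y, t]; t assumed in microseconds.
    Returns a list of undirected edges (i, j) with i < j.
    """
    pts = events.astype(float).copy()
    # Bring timestamps onto a scale comparable to pixel coordinates
    # (the scale factor is an illustrative assumption).
    pts[:, 2] *= time_scale
    edges = []
    for i in range(len(pts)):
        # Distances from event i to all later events only, so each
        # undirected edge is emitted exactly once.
        d = np.linalg.norm(pts[i + 1:] - pts[i], axis=1)
        for j in np.nonzero(d <= radius)[0]:
            edges.append((i, int(i + 1 + j)))
    return edges

# Toy stream: two events close in space-time, one far away.
events = np.array([
    [10, 10, 1000],
    [11, 10, 1500],
    [100, 90, 2000],
])
print(build_event_graph(events))  # → [(0, 1)]
```

A graph built this way would then be the input on which a transformer-style network could learn the local and global correlations the abstract refers to; in practice, event-graph methods typically cap node degree (e.g., k-nearest neighbors) to bound memory on dense streams.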
Pages: 385-400
Page count: 16
Related Papers
50 records in total
  • [41] Convolutional Neural Network for Monocular Vision-based Multi-target Tracking
    Kim, Sang-Hyeon
    Choi, Han-Lim
    INTERNATIONAL JOURNAL OF CONTROL AUTOMATION AND SYSTEMS, 2019, 17 (09) : 2284 - 2296
  • [42] Study of vision-based measurement of weld joint shape incorporating the neural network
    Bae, K.Y.
    Na, S.J.
    Proceedings of the Institution of Mechanical Engineers, Part B: Journal of Engineering Manufacture, 1994, 208 (B1) : 61 - 69
  • [43] A Review of Vision-Based Motion Analysis in Sport
    Barris, Sian
    Button, Chris
    SPORTS MEDICINE, 2008, 38 (12) : 1025 - 1043
  • [45] A vision-based motion sensor for undergraduate laboratories
    Salumbides, EJ
    Maristela, J
    Uy, A
    Karremans, K
    AMERICAN JOURNAL OF PHYSICS, 2002, 70 (08) : 868 - 871
  • [46] Vision-based Autonomous Navigation based on Motion Estimation
    Kim, Jungho
    Kweon, In So
    2008 INTERNATIONAL CONFERENCE ON CONTROL, AUTOMATION AND SYSTEMS, VOLS 1-4, 2008, : 1466 - +
  • [47] Reaching Motion Planning with Vision-Based Deep Neural Networks for Dual Arm Robots
    Hoshino, Satoshi
    Oikawa, Ryota
    INTELLIGENT AUTONOMOUS SYSTEMS 17, IAS-17, 2023, 577 : 455 - 469
  • [48] A recurrent emotional CMAC neural network controller for vision-based mobile robots
    Fang, Wubing
    Chao, Fei
    Yang, Longzhi
    Lin, Chih-Min
    Shang, Changjing
    Zhou, Changle
    Shen, Qiang
    NEUROCOMPUTING, 2019, 334 : 227 - 238
  • [49] Vision-based Traffic Sign Compliance Evaluation using Convolutional Neural Network
    Roxas, Edison A.
    Acilo, Joshua N.
    Vicerra, Ryan Rhay P.
    Dadios, Elmer P.
    Bandala, Argel A.
    PROCEEDINGS OF 4TH IEEE INTERNATIONAL CONFERENCE ON APPLIED SYSTEM INNOVATION 2018 ( IEEE ICASI 2018 ), 2018, : 120 - 123
  • [50] A vision-based inspection system using fuzzy rough neural network method
    Li, Meng-Xin
    Wu, Cheng-Dong
    Jin, Feng
    PROCEEDINGS OF 2006 INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND CYBERNETICS, VOLS 1-7, 2006, : 3228 - +