Asynchronous Event-Based Fourier Analysis

Cited by: 17
Authors
Sabatier, Quentin [1 ,2 ,3 ,4 ]
Ieng, Sio-Hoi [1 ,2 ,3 ]
Benosman, Ryad [1 ,2 ,3 ]
Affiliations
[1] UPMC Univ Paris 06, Sorbonne Univ, F-75252 Paris, France
[2] Inst Vis, UMR S 968, F-75012 Paris, France
[3] CNRS, UMR 7210, F-75012 Paris, France
[4] Gensight Biol, F-75012 Paris, France
Keywords
Address event representation (AER); event-based processing; fast Fourier transform; neuromorphic vision; SIGNAL; PERFORMANCE; VISION; FFT;
DOI
10.1109/TIP.2017.2661702
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
This paper introduces a method to compute the FFT of a visual scene at a high temporal precision of around 1 μs, using the output of an asynchronous event-based camera. Event-based cameras make it possible to move beyond the widespread and ingrained belief that acquiring a series of images at some fixed rate is a good way to capture visual motion. Each pixel adapts its own sampling rate to the visual input it receives and defines the timing of its own sampling points by reacting to changes in the amount of incident light. As a consequence, the sampling process is no longer governed by a fixed timing source but by the signal to be sampled itself, or more precisely by the variations of the signal in the amplitude domain. The acquisition paradigm of event-based cameras allows the FFT to be computed differently from the conventional frame-based method. The event-driven FFT algorithm relies on a heuristic methodology designed to operate directly on incoming gray-level events, updating the FFT incrementally while reducing both computation and data load. We show that, for reasonable levels of approximation at equivalent frame rates beyond the millisecond, the method performs faster and more efficiently than conventional image acquisition. Several experiments are carried out on indoor and outdoor scenes where both conventional and event-driven FFT computations are shown and compared.
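The core idea the abstract describes, updating a Fourier transform incrementally per gray-level event rather than recomputing it per frame, can be illustrated with a minimal sketch. This is not the paper's actual algorithm (which adds heuristic approximations to cut the per-event cost further); it only shows the exact linear-update principle that makes event-driven computation possible: when a single pixel's intensity changes by Δ, the 2-D DFT changes by Δ times that pixel's complex basis term. The function name `apply_event` and the toy sizes are illustrative assumptions.

```python
import numpy as np

# Toy sensor resolution (illustrative).
H, W = 8, 8

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(H, W)).astype(float)
F = np.fft.fft2(frame)  # initial transform of the scene

def apply_event(F, x, y, delta):
    """Exact O(H*W) update of the 2-D DFT when pixel (y, x) changes
    by `delta`, instead of an O(H*W*log(H*W)) full recomputation.

    Follows from linearity of the DFT:
        F[u, v] += delta * exp(-2j*pi*(u*y/H + v*x/W))
    """
    H, W = F.shape
    u = np.arange(H).reshape(-1, 1)
    v = np.arange(W).reshape(1, -1)
    F += delta * np.exp(-2j * np.pi * (u * y / H + v * x / W))
    return F

# A gray-level event: pixel (row 3, col 5) brightens by 40.
frame[3, 5] += 40.0
F = apply_event(F, x=5, y=3, delta=40.0)

# The incremental update matches a full FFT of the new frame.
assert np.allclose(F, np.fft.fft2(frame))
```

With per-event cost proportional to the number of kept frequency bins, sparse event streams update the spectrum far more cheaply than re-running the FFT at a fixed frame rate, which is the trade-off the abstract's complexity claims rest on.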
Pages: 2192 - 2202
Page count: 11
Related papers
50 total
  • [1] Analysis of Asynchronous Programs with Event-Based Synchronization
    Emmi, Michael
    Ganty, Pierre
    Majumdar, Rupak
    Rosa-Velardo, Fernando
    PROGRAMMING LANGUAGES AND SYSTEMS, 2015, 9032 : 535 - 559
  • [2] Event Trojan: Asynchronous Event-Based Backdoor Attacks
    Wang, Ruofei
    Guo, Qing
    Li, Haoliang
    Wan, Renjie
    COMPUTER VISION-ECCV 2024, PT VII, 2025, 15065 : 315 - 332
  • [3] BAYES CLASSIFICATION FOR ASYNCHRONOUS EVENT-BASED CAMERAS
    Fillatre, Lionel
    2015 23RD EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO), 2015, : 824 - 828
  • [4] Asynchronous frameless event-based optical flow
    Benosman, Ryad
    Ieng, Sio-Hoi
    Clercq, Charles
    Bartolozzi, Chiara
    Srinivasan, Mandyam
    NEURAL NETWORKS, 2012, 27 : 32 - 37
  • [5] Asynchronous Event-Based Hebbian Epipolar Geometry
    Benosman, Ryad
    Ieng, Sio-Hoi
    Rogister, Paul
    Posch, Christoph
    IEEE TRANSACTIONS ON NEURAL NETWORKS, 2011, 22 (11): : 1723 - 1734
  • [6] Asynchronous Optimisation for Event-based Visual Odometry
    Liu, Daqi
    Parra, Alvaro
    Latif, Yasir
    Chen, Bo
    Chin, Tat-Jun
    Reid, Ian
    2022 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION, ICRA 2022, 2022, : 9432 - 9438
  • [7] Asynchronous event-based corner detection and matching
    Clady, Xavier
    Ieng, Sio-Hoi
    Benosman, Ryad
    NEURAL NETWORKS, 2015, 66 : 91 - 106
  • [8] Adversarial Attack for Asynchronous Event-Based Data
    Lee, Wooju
    Myung, Hyun
    THIRTY-SIXTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FOURTH CONFERENCE ON INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE / THE TWELVETH SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2022, : 1237 - 1244
  • [9] Spatiotemporal features for asynchronous event-based data
    Lagorce, Xavier
    Ieng, Sio-Hoi
    Clady, Xavier
    Pfeiffer, Michael
    Benosman, Ryad B.
    FRONTIERS IN NEUROSCIENCE, 2015, 9
  • [10] Asynchronous Event-Based Binocular Stereo Matching
    Rogister, Paul
    Benosman, Ryad
    Ieng, Sio-Hoi
    Lichtsteiner, Patrick
    Delbruck, Tobi
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2012, 23 (02) : 347 - 353