Temporal spiking generative adversarial networks for heading direction decoding

Cited: 0
Authors
Shen, Jiangrong [1 ,2 ,3 ]
Wang, Kejun [2 ,3 ]
Gao, Wei [4 ]
Liu, Jian K. [6 ]
Xu, Qi [7 ]
Pan, Gang [3 ]
Chen, Xiaodong [4 ,5 ]
Tang, Huajin [2 ,3 ]
Affiliations
[1] Xi An Jiao Tong Univ, Sch Comp Sci & Technol, Xian, Peoples R China
[2] Zhejiang Univ, State Key Lab Brain Machine Intelligence, Hangzhou, Peoples R China
[3] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou, Peoples R China
[4] Hangzhou Normal Univ, Inst Brain Sci, Sch Basic Med Sci, Hangzhou, Peoples R China
[5] Zhejiang Univ, Interdisciplinary Inst Neurosci & Technol, Sch Med, Hangzhou, Peoples R China
[6] Univ Birmingham, Sch Comp Sci, Birmingham, England
[7] Dalian Univ Technol, Sch Comp Sci & Technol, Dalian, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Spiking neural networks; Spiking generative adversarial networks; Heading direction decoding; INTELLIGENCE;
DOI
10.1016/j.neunet.2024.106975
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The spike-based neuronal responses within the ventral intraparietal area (VIP) of the posterior parietal cortex exhibit intricate spatial and temporal dynamics, which poses decoding challenges such as limited data availability at the biological population level. The practical difficulty of collecting VIP neuronal response data hinders the application of sophisticated decoding models. To address this challenge, we propose a unified spike-based decoding framework that leverages spiking neural networks (SNNs) for both generation and decoding, chosen for their energy efficiency and suitability for neural decoding tasks. We introduce the Temporal Spiking Generative Adversarial Network (T-SGAN), a model built on a spiking transformer, to generate synthetic time-series data reflecting the responses of VIP neurons. T-SGAN incorporates temporal segmentation to shorten the temporal dimension, while spatial self-attention extracts associated information among VIP neurons. The generator is followed by a recurrent SNN decoder equipped with an attention mechanism, designed to capture the intricate spatial and temporal dynamics for heading direction decoding. Experimental evaluations on biological datasets recorded from monkeys demonstrate the effectiveness of the proposed framework. The results indicate that T-SGAN generates realistic synthetic data, leading to an improvement of up to 1.75% in decoding accuracy for the recurrent SNN decoder. Furthermore, the SNN-based decoding framework capitalizes on the low power consumption of SNNs, offering substantial benefits for neuronal response decoding applications.
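As a reading aid for the pipeline outlined in the abstract, the following is a minimal PyTorch sketch of the two stages: a spiking generator that attends over temporal segments and unrolls leaky integrate-and-fire (LIF) dynamics to emit synthetic spike trains, followed by a spiking readout with attention pooling over time that produces heading-direction logits. All class names, layer sizes, the surrogate-gradient LIF neuron, and the eight-direction output are illustrative assumptions; the sketch does not reproduce the authors' spiking-transformer generator, discriminator, adversarial training, or recurrent SNN decoder in detail.

```python
# Minimal illustrative sketch (assumptions throughout): a surrogate-gradient LIF
# layer, a spiking generator with temporal segmentation plus self-attention, and
# an attention-pooled spiking readout. Not the authors' T-SGAN implementation.
import torch
import torch.nn as nn


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, rectangular surrogate gradient in backward."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out * (v.abs() < 0.5).float()


class LIFLayer(nn.Module):
    """Leaky integrate-and-fire dynamics unrolled over the time dimension."""

    def __init__(self, dim, tau=2.0, threshold=1.0):
        super().__init__()
        self.tau, self.threshold = tau, threshold
        self.fc = nn.Linear(dim, dim)

    def forward(self, x):                          # x: (batch, time, dim)
        v, spikes = 0.0, []
        for t in range(x.size(1)):
            v = v / self.tau + self.fc(x[:, t])    # leaky integration of input current
            s = SurrogateSpike.apply(v - self.threshold)
            v = v * (1.0 - s)                      # hard reset of neurons that spiked
            spikes.append(s)
        return torch.stack(spikes, dim=1)          # (batch, time, dim) of 0/1 spikes


class SpikingGenerator(nn.Module):
    """Generator sketch: one latent token per temporal segment, self-attention
    across the tokens, then expansion back to time steps through LIF dynamics."""

    def __init__(self, n_neurons=64, seg_len=5, latent_dim=32):
        super().__init__()
        self.seg_len = seg_len
        self.proj = nn.Linear(latent_dim, n_neurons)
        self.attn = nn.MultiheadAttention(n_neurons, num_heads=4, batch_first=True)
        self.lif = LIFLayer(n_neurons)

    def forward(self, z):                          # z: (batch, segments, latent_dim)
        h = self.proj(z)
        h, _ = self.attn(h, h, h)                  # attention among segment tokens
        h = h.repeat_interleave(self.seg_len, dim=1)  # segments -> full time axis
        return self.lif(h)                         # synthetic spike trains


class SpikingDecoder(nn.Module):
    """Decoder sketch: LIF dynamics as a simple stand-in for the recurrent SNN,
    attention pooling over time, then heading-direction logits."""

    def __init__(self, n_neurons=64, n_directions=8):
        super().__init__()
        self.lif = LIFLayer(n_neurons)
        self.score = nn.Linear(n_neurons, 1)       # per-time-step attention scores
        self.head = nn.Linear(n_neurons, n_directions)

    def forward(self, spikes):                     # spikes: (batch, time, neurons)
        h = self.lif(spikes)
        w = torch.softmax(self.score(h), dim=1)    # normalize scores over time
        return self.head((w * h).sum(dim=1))       # pooled features -> logits


if __name__ == "__main__":
    gen, dec = SpikingGenerator(), SpikingDecoder()
    z = torch.randn(2, 10, 32)                     # 10 latent segments per sample
    fake_spikes = gen(z)                           # (2, 50, 64) synthetic VIP-like activity
    print(dec(fake_spikes).shape)                  # torch.Size([2, 8])
```

In the setting the abstract describes, the generator would be trained adversarially against a discriminator on recorded VIP spike trains, and the resulting synthetic data would augment the training set of the decoder; the demo at the bottom only verifies tensor shapes.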
Pages: 9
Related Papers
50 records in total
  • [1] Spiking generative adversarial network with attention scoring decoding
    Feng, Linghao
    Zhao, Dongcheng
    Zeng, Yi
    NEURAL NETWORKS, 2024, 178
  • [2] Spatio-Temporal Generative Adversarial Networks
    Qin, Chao
    Gao, Xiaoguang
    CHINESE JOURNAL OF ELECTRONICS, 2020, 29 (04) : 623 - 631
  • [3] Pedestrian Walking Direction Prediction Using Generative Adversarial Networks
    He, Bate
    Kita, Eisuke
    2020 IEEE INTERNATIONAL CONFERENCE ON SYSTEMS, MAN, AND CYBERNETICS (SMC), 2020, : 4358 - 4364
  • [4] Distributed spatio-temporal generative adversarial networks
    Qin, Chao
    Gao, Xiaoguang
    JOURNAL OF SYSTEMS ENGINEERING AND ELECTRONICS, 2020, 31 (03) : 578 - 592
  • [5] GANzilla: User-Driven Direction Discovery in Generative Adversarial Networks
    Evirgen, Noyan
    Chen, Xiang 'Anthony'
    PROCEEDINGS OF THE 35TH ANNUAL ACM SYMPOSIUM ON USER INTERFACE SOFTWARE AND TECHNOLOGY, UIST 2022, 2022,
  • [6] GANravel: User-Driven Direction Disentanglement in Generative Adversarial Networks
    Evirgen, Noyan
    Chen, Xiang 'Anthony'
    PROCEEDINGS OF THE 2023 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS (CHI 2023), 2023,
  • [7] Temporal-Spatial Generative Adversarial Networks for Video Inpainting
    Yu B.
    Ding Y.
    Xie Z.
    Huang D.
    Ma L.
    Xie, Zhifeng (zhifeng_xie@shu.edu.cn), Institute of Computing Technology, 32: 769 - 779
  • [8] Generative Adversarial Networks for Spatio-temporal Data: A Survey
    Gao, Nan
    Xue, Hao
    Shao, Wei
    Zhao, Sichen
    Qin, Kyle Kai
    Prabowo, Arian
    Rahaman, Mohammad Saiedur
    Salim, Flora D.
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2022, 13 (02)