Spatio-temporal transformers for decoding neural movement control

Cited: 0
Authors
Candelori, Benedetta [3 ]
Bardella, Giampiero [1 ]
Spinelli, Indro [2 ]
Ramawat, Surabhi [1 ]
Pani, Pierpaolo [1 ]
Ferraina, Stefano [1 ]
Scardapane, Simone [3 ]
Affiliations
[1] Sapienza Univ Rome, Dept Physiol & Pharmacol, Rome, Italy
[2] Sapienza Univ Rome, Dept Comp Sci, Rome, Italy
[3] Sapienza Univ Rome, Dept Informat Engn Elect & Telecommun, Rome, Italy
Keywords
deep learning; motor decoding; macaque; single-neuron recordings; brain-computer interfaces (BCIs); neural dynamics; transformers; LONG-LATENCY STRETCH; PREMOTOR CORTEX; CORTICAL ACTIVITY; ARM MOVEMENTS; POPULATION-DYNAMICS; WORKING-MEMORY; FRONTAL-CORTEX; MOTOR; INFORMATION; DISCHARGE;
DOI
10.1088/1741-2552/adaef0
Chinese Library Classification (CLC): R318 [Biomedical Engineering]
Discipline code: 0831
Abstract
Objective. Deep learning tools applied to high-resolution neurophysiological data have progressed significantly, offering enhanced decoding, real-time processing, and readability for practical applications. However, designing artificial neural networks to analyze neural activity in vivo remains a challenge, requiring a delicate balance between efficiency in low-data regimes and interpretability of the results. Approach. To address this challenge, we introduce a novel specialized transformer architecture to analyze single-neuron spiking activity. The model is tested on multi-electrode recordings from the dorsal premotor cortex of non-human primates performing a motor inhibition task. Main results. The proposed architecture provides an early prediction of the correct movement direction, achieving accurate results no later than 230 ms after Go-signal presentation across animals. Additionally, the model can forecast whether the movement will be generated or withheld before an unattended stop signal is actually presented. To further probe the internal dynamics of the model, we compute the predicted correlations between time steps and between neurons at successive layers of the architecture; the evolution of these correlations mirrors findings from previous theoretical analyses. Significance. Overall, our framework provides a comprehensive use case for the practical implementation of deep learning tools in motor control research, highlighting both the predictive capabilities and the interpretability of the proposed architecture.
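The abstract describes a transformer operating jointly over time steps and neurons of single-unit recordings. As a purely illustrative sketch (not the authors' implementation), the core idea of factorized spatio-temporal self-attention on binned spike counts can be shown with a minimal NumPy example: attention is first applied with time bins as tokens (temporal mixing), then with neurons as tokens (spatial mixing). All names, shapes, and the single-head, weight-free attention here are assumptions for illustration only.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens):
    """Single-head scaled dot-product self-attention; each row is a token.
    (Learned query/key/value projections are omitted for brevity.)"""
    d = tokens.shape[-1]
    scores = tokens @ tokens.T / np.sqrt(d)   # pairwise token similarities
    return softmax(scores, axis=-1) @ tokens  # attention-weighted mixture

def spatiotemporal_block(spikes):
    """spikes: (n_neurons, n_time_bins) array of binned spike counts.
    Temporal attention treats time bins as tokens (features = neurons);
    spatial attention treats neurons as tokens (features = time bins)."""
    temporal = self_attention(spikes.T).T  # mix information across time bins
    return self_attention(temporal)        # mix information across neurons

rng = np.random.default_rng(0)
spikes = rng.poisson(2.0, size=(8, 20)).astype(float)  # 8 neurons, 20 bins
out = spatiotemporal_block(spikes)
print(out.shape)  # (8, 20): the representation keeps the input shape
```

Factorizing attention along the two axes keeps the cost at O(T² + N²) rather than O(T²N²) for full joint attention, which matters in the low-data, moderate-population regime typical of single-neuron recordings.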
Pages: 14