RateML: A Code Generation Tool for Brain Network Models

Cited: 2
Authors
van der Vlag, Michiel [1 ]
Woodman, Marmaduke [2 ]
Fousek, Jan [2 ]
Diaz-Pier, Sandra [1 ]
Martin, Aaron Perez [1 ]
Jirsa, Viktor [2 ]
Morrison, Abigail [1 ,3 ,4 ,5 ,6 ]
Affiliations
[1] Forschungszentrum Julich, Inst Adv Simulat, Julich Supercomp Ctr JSC, Simulat & Data Lab Neurosci,JARA, Julich, Germany
[2] Aix Marseille Univ, Inst Neurosci Syst, Marseille, France
[3] Inst Neurosci & Med INM 6, Julich, Germany
[4] Inst Adv Simulat IAS 6, Julich, Germany
[5] JARA Inst Brain, Julich, Germany
[6] Rhein Westfal TH Aachen, Comp Sci 3 Software Engn, Aachen, Germany
Source
FRONTIERS IN NETWORK PHYSIOLOGY
Funding
EU Horizon 2020;
Keywords
brain network models; domain specific language; automatic code generation; high performance computing; simulation;
DOI
10.3389/fnetp.2022.826345
Chinese Library Classification
Q4 [Physiology];
Discipline code
071003;
Abstract
Whole-brain network models are now an established tool in scientific and clinical research; however, their use in a larger workflow still adds significant informatics complexity. We propose a tool, RateML, that enables users to generate such models from a succinct declarative description, in which the mathematics of the model is described without specifying how its simulation should be implemented. RateML builds on NeuroML's Low Entropy Model Specification (LEMS), an XML-based language for specifying models of dynamical systems, allowing descriptions of neural mass and discretized neural field models as implemented by The Virtual Brain (TVB) simulator: the end user describes their model's mathematics once, then generates and runs code for different languages, targeting both CPUs for fast single simulations and GPUs for parallel ensemble simulations. High-performance parallel simulations are crucial for tuning the many parameters of a model to empirical data such as functional magnetic resonance imaging (fMRI) recordings, with reasonable execution times on small or modest hardware resources. Specifically, while RateML can generate Python model code, it also enables generation of Compute Unified Device Architecture (CUDA) C++ code for NVIDIA GPUs. When a CUDA implementation of a model is generated, a tailored model driver class is produced, enabling the user to tweak the driver by hand and perform the parameter sweep. The model and driver can be executed on any compute-capable NVIDIA GPU with a high degree of parallelization, either locally or in a compute cluster environment. The results reported in this manuscript show that with the CUDA code generated by RateML, it is possible to explore thousands of parameter combinations for different models with a single GPU, substantially reducing parameter exploration times and resource usage for brain network models and, in turn, accelerating the research workflow itself. This provides a new tool for creating efficient and broader parameter-fitting workflows, supporting studies on larger cohorts, and deriving more robust and statistically relevant conclusions about brain dynamics.
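To make the parallelization scheme concrete, here is a minimal hand-written CUDA sketch in the spirit of the parameter-sweep code described above: one GPU thread integrates one complete network simulation for its own value of the global coupling parameter. The Kuramoto-style rate model, the problem sizes, and all identifiers (sweep_kernel, N_NODES, couplings) are illustrative assumptions, not RateML's actual generated output.

```cuda
// Illustrative sketch only (assumed model and identifiers; NOT RateML's
// generated code): each thread runs one full network simulation for one
// value of the global coupling G, so a grid of threads sweeps the parameter.
#include <cstdio>
#include <cuda_runtime.h>

#define N_NODES 68     // network nodes, e.g., cortical regions of a connectome
#define N_STEPS 1000   // Euler integration steps per simulation
#define DT      0.01f  // integration step size

__global__ void sweep_kernel(const float *weights,   // N_NODES x N_NODES coupling matrix
                             const float *couplings, // one coupling value per simulation
                             float *out,             // final state of every simulation
                             int n_sims)
{
    int sim = blockIdx.x * blockDim.x + threadIdx.x;
    if (sim >= n_sims) return;

    float G = couplings[sim];       // this thread's parameter value
    float theta[N_NODES];           // node states (phases), in thread-local memory
    for (int i = 0; i < N_NODES; ++i)
        theta[i] = 0.1f * i;        // deterministic initial condition

    for (int t = 0; t < N_STEPS; ++t) {
        float dtheta[N_NODES];
        for (int i = 0; i < N_NODES; ++i) {
            float input = 0.0f;     // weighted network input to node i
            for (int j = 0; j < N_NODES; ++j)
                input += weights[i * N_NODES + j] * sinf(theta[j] - theta[i]);
            dtheta[i] = 1.0f + G * input;   // natural frequency plus coupling
        }
        for (int i = 0; i < N_NODES; ++i)
            theta[i] += DT * dtheta[i];     // Euler update
    }
    for (int i = 0; i < N_NODES; ++i)
        out[sim * N_NODES + i] = theta[i];
}

int main()
{
    const int n_sims = 4096;  // thousands of parameter combinations on one GPU
    float *h_weights   = new float[N_NODES * N_NODES];
    float *h_couplings = new float[n_sims];
    for (int i = 0; i < N_NODES * N_NODES; ++i)
        h_weights[i] = 1.0f / N_NODES;      // toy all-to-all connectome
    for (int s = 0; s < n_sims; ++s)
        h_couplings[s] = 0.001f * s;        // sweep G over [0, ~4.1]

    float *d_weights, *d_couplings, *d_out;
    cudaMalloc(&d_weights,   N_NODES * N_NODES * sizeof(float));
    cudaMalloc(&d_couplings, n_sims * sizeof(float));
    cudaMalloc(&d_out,       n_sims * N_NODES * sizeof(float));
    cudaMemcpy(d_weights, h_weights, N_NODES * N_NODES * sizeof(float),
               cudaMemcpyHostToDevice);
    cudaMemcpy(d_couplings, h_couplings, n_sims * sizeof(float),
               cudaMemcpyHostToDevice);

    int block = 128;
    int grid  = (n_sims + block - 1) / block;
    sweep_kernel<<<grid, block>>>(d_weights, d_couplings, d_out, n_sims);
    cudaDeviceSynchronize();

    float *h_out = new float[n_sims * N_NODES];
    cudaMemcpy(h_out, d_out, n_sims * N_NODES * sizeof(float),
               cudaMemcpyDeviceToHost);
    printf("sim 0, node 0, final phase: %f\n", h_out[0]);

    cudaFree(d_weights); cudaFree(d_couplings); cudaFree(d_out);
    delete[] h_weights; delete[] h_couplings; delete[] h_out;
    return 0;
}
```

One thread per parameter combination is the design choice that lets thousands of simulations run concurrently; a real sweep would typically vary several parameters jointly (e.g., coupling strength and conduction speed) and record full time series rather than only the final state.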
Pages: 13