Enhancing Neural Decoding with Large Language Models: A GPT-Based Approach

Cited by: 0
Authors
Lee, Dong Hyeok [1 ]
Chung, Chun Kee [2 ]
Affiliations
[1] Seoul Natl Univ, Dept Brain & Cognit Sci, Seoul, South Korea
[2] Seoul Natl Univ, Neurosci Res Inst, Seoul, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Neural decoder; Large language model; GPT; Fine-tuning; Gamma oscillations
DOI
10.1109/BCI60775.2024.10480499
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Most neural decoders specialize in a single function: they provide a task-dependent interpretation of the signal based on what is happening in the subject's brain and environment while a particular task is performed. Decoder performance is typically improved by simplifying paradigms and removing artifacts, but this is far from how humans operate in nature, where we justify and explain what we perceive by searching for plausible reasons and references. To build a decoder that departs from this convention and can interpret more general signals, we turned to large language models, which are becoming increasingly popular. Like other deep learning models, large language models are highly capable of processing natural language using the Transformer architecture, and the fact that they process "language" gives them broad potential. OpenAI's ChatGPT, a service built on large language models, performs a wide variety of tasks. Suppose a large language model is trained on the characteristics of neural signals: it would learn which brain region corresponds to which Brodmann area number and how cognitive functions relate to neural signals. A fine-tuned large language model decoder could then interpret neural signals, and the locations where they occur, using this learned neuroscience knowledge. Such a decoder could interpret neural signals universally and provide guidelines for understanding dynamic brain activity. We fine-tuned the 'GPT-3.5 turbo' model and prompted it with preprocessed neural signals from each region, characterized by the frequency bands they occupied. The model responded with an estimate of the neural state and the features it used to reach that judgment. We propose that GPT can be trained with the knowledge accumulated by the neuroscience community to create a highly reliable neural decoder.
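The paper does not publish its prompt format or training pipeline, so the following Python sketch only illustrates the workflow the abstract describes: characterizing a preprocessed signal by its relative frequency-band powers and packaging the result as a chat-format fine-tuning example for 'GPT-3.5 turbo'. All names (band_powers, to_training_record, decoder_train.jsonl), the band boundaries, the sampling rate, and the prompt wording are illustrative assumptions, not the authors' code.

```python
# Minimal sketch of the pipeline described in the abstract, under the
# assumptions stated above. Requires numpy, scipy; the fine-tune launch
# at the bottom is commented out and requires the openai package.
import json
import numpy as np
from scipy.signal import welch

# Canonical EEG/LFP frequency bands in Hz (assumed; the paper does not list its bands).
BANDS = {
    "delta": (1, 4),
    "theta": (4, 8),
    "alpha": (8, 13),
    "beta": (13, 30),
    "gamma": (30, 80),
}

def band_powers(signal, fs):
    """Characterize one preprocessed channel by its relative band powers."""
    freqs, psd = welch(signal, fs=fs, nperseg=min(len(signal), 2 * fs))
    total = np.trapz(psd, freqs)
    powers = {}
    for name, (lo, hi) in BANDS.items():
        mask = (freqs >= lo) & (freqs < hi)
        powers[name] = float(np.trapz(psd[mask], freqs[mask]) / total)
    return powers

def to_training_record(signal, fs, brodmann_area, state_label):
    """Turn one labeled epoch into a chat-format fine-tuning example."""
    features = ", ".join(f"{band}: {p:.3f}"
                         for band, p in band_powers(signal, fs).items())
    return {"messages": [
        {"role": "system",
         "content": "You are a neural decoder. Given relative band powers "
                    "recorded at a cortical site, estimate the cognitive "
                    "state and name the features supporting the judgment."},
        {"role": "user",
         "content": f"Brodmann area {brodmann_area}. "
                    f"Relative band powers: {features}."},
        {"role": "assistant", "content": state_label},
    ]}

if __name__ == "__main__":
    fs = 1000                                 # sampling rate in Hz (assumed)
    t = np.arange(fs) / fs
    # Toy 40 Hz (gamma-band) burst plus noise, standing in for real data.
    epoch = np.sin(2 * np.pi * 40 * t) + 0.2 * np.random.randn(fs)
    record = to_training_record(
        epoch, fs, brodmann_area=4,
        state_label="Gamma-dominant activity over primary motor cortex; "
                    "judged mainly from the high relative gamma power.")
    with open("decoder_train.jsonl", "w") as f:   # one JSON object per line
        f.write(json.dumps(record) + "\n")
    # Launching the fine-tune (openai>=1.0, API key in OPENAI_API_KEY):
    #   from openai import OpenAI
    #   client = OpenAI()
    #   up = client.files.create(file=open("decoder_train.jsonl", "rb"),
    #                            purpose="fine-tune")
    #   client.fine_tuning.jobs.create(training_file=up.id,
    #                                  model="gpt-3.5-turbo")
```

Under these assumptions, many such records would be written to a single JSONL file, uploaded to OpenAI, and used to create a fine-tuning job against the gpt-3.5-turbo base model; at inference time the fine-tuned model would be prompted with the same feature text and asked for the state estimate and the features behind it, matching the question-and-explanation behavior the abstract reports.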
Pages: 4