Incorporating Context into Language Encoding Models for fMRI

Cited by: 0
Authors
Jain, Shailee [1]
Huth, Alexander G. [1,2]
Affiliations
[1] Univ Texas Austin, Dept Comp Sci, Austin, TX 78751 USA
[2] Univ Texas Austin, Dept Neurosci, Austin, TX 78751 USA
Keywords
DOI
Not available
Chinese Library Classification
TP18 [Artificial intelligence theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Language encoding models help explain language processing in the human brain by learning functions that predict brain responses from the language stimuli that elicited them. Current word embedding-based approaches treat each stimulus word independently and thus ignore the influence of context on language understanding. In this work, we instead build encoding models using rich contextual representations derived from an LSTM language model. Our models show a significant improvement in encoding performance relative to state-of-the-art embeddings in nearly every brain area. By varying the amount of context used in the models and providing the models with distorted context, we show that this improvement is due to a combination of better word embeddings learned by the LSTM language model and contextual information. We are also able to use our models to map context sensitivity across the cortex. These results suggest that LSTM language models learn high-level representations that are related to representations in the human brain.
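The abstract describes the approach only at a high level. The sketch below is a minimal, hypothetical illustration of a voxelwise encoding model of this kind, not the authors' code: it assumes a pretrained word-level LSTM language model exposing a hidden_states method (an assumed interface), word-level stimulus timing, and preprocessed BOLD responses, and it uses ridge regression as the linear mapping from features to voxel responses.

```python
import numpy as np
from sklearn.linear_model import Ridge

def contextual_features(lstm_lm, words, context_len=20):
    # One feature vector per word: the top-layer LSTM hidden state after
    # reading the word plus its preceding `context_len` words.
    # `lstm_lm.hidden_states(word_list)` is a hypothetical interface that
    # returns one hidden-state vector per input word.
    feats = []
    for i in range(len(words)):
        window = words[max(0, i - context_len): i + 1]
        feats.append(lstm_lm.hidden_states(window)[-1])
    return np.stack(feats)                      # (n_words, n_hidden)

def fit_voxelwise_model(word_feats, word_times, tr_onsets, bold, alpha=1.0):
    # Crudely downsample word-level features to one vector per fMRI volume
    # (a real pipeline would use proper resampling and hemodynamic delays),
    # then fit a single ridge regression jointly over all voxels.
    X = []
    for t0, t1 in zip(tr_onsets[:-1], tr_onsets[1:]):
        mask = (word_times >= t0) & (word_times < t1)
        X.append(word_feats[mask].mean(axis=0) if mask.any()
                 else np.zeros(word_feats.shape[1]))
    X = np.stack(X)                             # (n_volumes - 1, n_hidden)
    model = Ridge(alpha=alpha)
    model.fit(X, bold[:len(X)])                 # bold: (n_volumes, n_voxels)
    return model
```

Varying context_len, or shuffling the words inside each window, corresponds to the context-length and distorted-context comparisons described in the abstract; per-voxel prediction performance of such a model is what allows context sensitivity to be mapped across cortex.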
Pages: 10
Related Papers
(50 records in total)
  • [1] Scaling laws for language encoding models in fMRI. Antonello, Richard J.; Vaidya, Aditya R.; Huth, Alexander G. Advances in Neural Information Processing Systems 36 (NeurIPS 2023), 2023.
  • [2] A natural language fMRI dataset for voxelwise encoding models. LeBel, Amanda; Wagner, Lauren; Jain, Shailee; Adhikari-Desai, Aneesh; Gupta, Bhavin; Morgenthal, Allyson; Tang, Jerry; Xu, Lixiang; Huth, Alexander G. Scientific Data, 2023, 10 (1).
  • [3] Incorporating behavioral and sensory context into spectro-temporal models of auditory encoding. David, Stephen V. Hearing Research, 2018, 360: 107-123.
  • [4] Memory detection using fMRI - Does the encoding context matter? Peth, Judith; Sommer, Tobias; Hebart, Martin N.; Vossel, Gerhard; Buechel, Christian; Gamer, Matthias. NeuroImage, 2015, 113: 164-174.
  • [5] HRF estimation improves sensitivity of fMRI encoding and decoding models. Pedregosa, Fabian; Eickenberg, Michael; Thirion, Bertrand; Gramfort, Alexandre. 2013 3rd International Workshop on Pattern Recognition in Neuroimaging (PRNI 2013), 2013: 165-169.
  • [6] Brain Encoding and Decoding in fMRI with Bidirectional Deep Generative Models. Du, Changde; Li, Jinpeng; Huang, Lijie; He, Huiguang. Engineering, 2019, 5 (5): 948-953.
  • [7] An Efficient Approach to Encoding Context for Spoken Language Understanding. Gupta, Raghav; Rastogi, Abhinav; Hakkani-Tur, Dilek. 19th Annual Conference of the International Speech Communication Association (Interspeech 2018), 2018: 3469-3473.
  • [8] Incorporating linguistic structure into statistical language models. Rosenfeld, R. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 2000, 358 (1769): 1311-1324.
  • [9] Computational Models of Language Within Context and Context-Sensitive Language Understanding. Ito, Noriko; Sugimoto, Toru; Takahashi, Yusuke; Iwashita, Shino; Sugeno, Michio. Journal of Advanced Computational Intelligence and Intelligent Informatics, 2006, 10 (6): 782-790.