Incorporating Context into Language Encoding Models for fMRI

Cited: 0
Authors
Jain, Shailee [1]
Huth, Alexander G. [1,2]
Affiliations
[1] Univ Texas Austin, Dept Comp Sci, Austin, TX 78751 USA
[2] Univ Texas Austin, Dept Neurosci, Austin, TX 78751 USA
Keywords
DOI
N/A
CLC Number (Chinese Library Classification)
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Language encoding models help explain language processing in the human brain by learning functions that predict brain responses from the language stimuli that elicited them. Current word embedding-based approaches treat each stimulus word independently and thus ignore the influence of context on language understanding. In this work, we instead build encoding models using rich contextual representations derived from an LSTM language model. Our models show a significant improvement in encoding performance relative to state-of-the-art embeddings in nearly every brain area. By varying the amount of context used in the models and providing the models with distorted context, we show that this improvement is due to a combination of better word embeddings learned by the LSTM language model and contextual information. We are also able to use our models to map context sensitivity across the cortex. These results suggest that LSTM language models learn high-level representations that are related to representations in the human brain.
Pages: 10
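Below is a minimal sketch of the voxelwise encoding-model recipe summarized in the abstract, assuming contextual features (e.g., LSTM hidden states) have already been extracted and temporally aligned with the fMRI data: features are regressed onto each voxel's response with ridge regression and evaluated by held-out correlation. The array sizes, the single regularization strength, the random placeholder data, and the use of scikit-learn are illustrative assumptions, not details taken from the paper.

# Hypothetical sketch of a voxelwise encoding model: contextual word features
# (stand-ins for LSTM hidden states) are regressed onto fMRI responses with
# ridge regression. All data here are random placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_trs, n_features, n_voxels = 1000, 256, 500      # assumed sizes, not from the paper
X = rng.standard_normal((n_trs, n_features))      # stand-in for downsampled contextual features
Y = rng.standard_normal((n_trs, n_voxels))        # stand-in for BOLD responses per voxel

# Hold out the final 20% of timepoints for evaluation (no shuffling of the time series).
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, shuffle=False)

model = Ridge(alpha=10.0)      # one regularization strength for simplicity
model.fit(X_train, Y_train)    # fits all voxels at once: one weight vector per voxel

# Score each voxel by the Pearson correlation between predicted and held-out responses.
pred = model.predict(X_test)
corrs = [np.corrcoef(pred[:, v], Y_test[:, v])[0, 1] for v in range(n_voxels)]
print(f"mean held-out correlation: {np.mean(corrs):.3f}")

In practice one would replace the random arrays with features from the language model and measured BOLD data, and select the ridge penalty per voxel by cross-validation; the per-voxel correlations are what would be compared across feature spaces or mapped onto the cortical surface.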