Inducing brain-relevant bias in natural language processing models

Cited by: 0
Authors
Schwartz, Dan [1 ]
Toneva, Mariya [1 ]
Wehbe, Leila [1 ]
Affiliations
[1] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
Funding
U.S. National Science Foundation; U.S. National Institutes of Health
Keywords
INTERFERENCE; REVEALS; MEG;
DOI
None available
Chinese Library Classification
TP18 [Artificial intelligence theory]
Discipline codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Progress in natural language processing (NLP) models that estimate representations of word sequences has recently been leveraged to improve the understanding of language processing in the brain. However, these models have not been specifically designed to capture the way the brain represents language meaning. We hypothesize that fine-tuning these models to predict recordings of brain activity of people reading text will lead to representations that encode more brain-activity-relevant language information. We demonstrate that a version of BERT, a recently introduced and powerful language model, can improve the prediction of brain activity after fine-tuning. We show that the relationship between language and brain activity learned by BERT during this fine-tuning transfers across multiple participants. We also show that, for some participants, the fine-tuned representations learned from both magnetoencephalography (MEG) and functional magnetic resonance imaging (fMRI) are better for predicting fMRI than the representations learned from fMRI alone, indicating that the learned representations capture brain-activity-relevant information that is not simply an artifact of the recording modality. While the fine-tuning changes the language representations so that the model better predicts brain activity, these changes do not harm the model's ability to perform downstream NLP tasks. Our findings are notable for research on language understanding in the brain.
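The abstract's core idea, fine-tuning a pretrained encoder so its internal representations better predict recorded brain activity, can be sketched as a toy gradient-descent loop. Everything below is an illustrative assumption rather than the authors' actual pipeline: a single linear layer stands in for BERT, the data are synthetic, and the shapes are arbitrary. The point is only to show the joint update that adjusts the encoder (not just the readout head) toward brain-relevant representations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the paper's setup: an "encoder" maps word-sequence features
# to representations, and a linear head predicts per-voxel brain activity.
# Shapes and data are synthetic illustrations, not the authors' configuration.
n_samples, d_in, d_rep, n_voxels = 200, 32, 16, 8
X = rng.normal(size=(n_samples, d_in))                         # sequence features
W_true = rng.normal(size=(d_in, n_voxels))
Y = X @ W_true + 0.1 * rng.normal(size=(n_samples, n_voxels))  # synthetic "fMRI" targets

W_enc = 0.1 * rng.normal(size=(d_in, d_rep))   # "pretrained" encoder weights
W_head = np.zeros((d_rep, n_voxels))           # linear brain-prediction head

def mse(Y_hat, Y):
    return float(np.mean((Y_hat - Y) ** 2))

lr = 1e-2
losses = []
for step in range(300):
    H = X @ W_enc        # encoder representations
    Y_hat = H @ W_head   # predicted voxel activity
    err = (Y_hat - Y) / n_samples
    # Backpropagate through BOTH the head and the encoder: updating W_enc is
    # the analogue of "fine-tuning" the representations toward brain activity,
    # rather than only fitting a readout on frozen features.
    grad_head = H.T @ err
    grad_enc = X.T @ (err @ W_head.T)
    W_head -= lr * grad_head
    W_enc -= lr * grad_enc
    losses.append(mse(X @ W_enc @ W_head, Y))
```

After training, `losses` decreases monotonically on this toy problem; freezing `W_enc` (head-only fitting) would correspond to the standard encoding-model baseline the paper compares against.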
Pages: 11
Related papers
50 items in total
  • [1] A Natural Bias for Language Generation Models
    Meister, Clara
    Stokowiec, Wojciech
    Pimentel, Tiago
    Yu, Lei
    Rimell, Laura
    Kuncoro, Adhiguna
    [J]. 61ST CONFERENCE OF THE THE ASSOCIATION FOR COMPUTATIONAL LINGUISTICS, ACL 2023, VOL 2, 2023, : 243 - 255
  • [2] Informing disease modelling with brain-relevant functional genomic annotations
    Reynolds, Regina H.
    Hardy, John
    Ryten, Mina
    Taliun, Sarah A. Gagliano
    [J]. BRAIN, 2019, 142 : 3694 - 3712
  • [3] Five sources of bias in natural language processing
    Hovy, Dirk
    Prabhumoye, Shrimai
    [J]. LANGUAGE AND LINGUISTICS COMPASS, 2021, 15 (08):
  • [4] Structural bias in inducing representations for probabilistic natural language parsing
    Henderson, J
[J]. ARTIFICIAL NEURAL NETWORKS AND NEURAL INFORMATION PROCESSING - ICANN/ICONIP 2003, 2003, 2714 : 19 - 26
  • [5] An analysis of gender bias studies in natural language processing
    Costa-jussa, Marta R.
    [J]. NATURE MACHINE INTELLIGENCE, 2019, 1 (11) : 495 - 496
  • [6] Editorial: Bias, Subjectivity and Perspectives in Natural Language Processing
    Basile, Valerio
    Caselli, Tommaso
    Balahur, Alexandra
    Ku, Lun-Wei
    [J]. FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2022, 5
  • [8] Natural language processing in the era of large language models
    Zubiaga, Arkaitz
    [J]. FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2024, 6
  • [9] Interpreting and improving natural-language processing (in machines) with natural language-processing (in the brain)
    Toneva, Mariya
    Wehbe, Leila
    [J]. ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
  • [10] On the Explainability of Natural Language Processing Deep Models
    El Zini, Julia
    Awad, Mariette
    [J]. ACM COMPUTING SURVEYS, 2023, 55 (05)