Text Genre and Training Data Size in Human-Like Parsing

Cited by: 0
Authors
Hale, John T. [1 ]
Kuncoro, Adhiguna [1 ]
Hall, Keith B. [2 ]
Dyer, Chris [1 ]
Brennan, Jonathan R. [3 ]
Affiliations
[1] DeepMind, London, England
[2] Google Res, New York, NY USA
[3] Univ Michigan, Ann Arbor, MI 48109 USA
Funding
National Science Foundation (USA);
Keywords
DOI
Not available
Chinese Library Classification (CLC) number
TP18 [Artificial Intelligence Theory];
Discipline classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Domain-specific training typically makes NLP systems work better. We show that this extends to cognitive modeling as well by relating the states of a neural phrase-structure parser to electrophysiological measures from human participants. These measures were recorded as participants listened to a spoken recitation of the same literary text that was supplied as input to the neural parser. Given more training data, the system derives a better cognitive model - but only when the training examples come from the same textual genre. This finding is consistent with the idea that humans adapt syntactic expectations to particular genres during language comprehension (Kaan and Chun, 2018; Branigan and Pickering, 2017).
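The abstract sketches a linking procedure: per-word quantities derived from a neural parser are related to per-word electrophysiological measures recorded while participants listened to the same text, and the quality of that fit is compared across parsers trained on different amounts and genres of data. The Python sketch below illustrates one common version of this idea, regressing simulated EEG amplitudes on a parser-derived surprisal predictor over nuisance covariates; the synthetic data, variable names, and the choice of surprisal as the linking quantity are illustrative assumptions, not the authors' actual pipeline.

# Illustrative sketch only; NOT the paper's analysis code.
# Idea: ask how much variance a parser-derived per-word predictor
# (here, surprisal) explains in per-word EEG amplitudes beyond
# simple nuisance covariates.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Hypothetical per-word data for a listened-to text.
n_words = 500
surprisal = rng.gamma(shape=2.0, scale=1.5, size=n_words)      # parser-derived predictor (bits), assumed
word_length = rng.integers(1, 12, size=n_words).astype(float)  # nuisance covariate
word_freq = rng.normal(0.0, 1.0, size=n_words)                 # nuisance covariate (z-scored log frequency)

# Simulated EEG amplitude in one electrode/time window per word,
# built so that surprisal has a genuine effect.
eeg_amplitude = -0.8 * surprisal + 0.3 * word_length + rng.normal(0.0, 2.0, size=n_words)

# Baseline model: nuisance covariates only.
X_base = np.column_stack([word_length, word_freq])
r2_base = r2_score(eeg_amplitude, LinearRegression().fit(X_base, eeg_amplitude).predict(X_base))

# Full model: nuisance covariates plus the parser-derived predictor.
X_full = np.column_stack([word_length, word_freq, surprisal])
r2_full = r2_score(eeg_amplitude, LinearRegression().fit(X_full, eeg_amplitude).predict(X_full))

print(f"R^2 baseline: {r2_base:.3f}")
print(f"R^2 with parser predictor: {r2_full:.3f}")
print(f"Delta R^2: {r2_full - r2_base:.3f}")

In a study of this kind, the improvement in fit attributable to the parser-derived predictor (the Delta R^2 above, or whatever goodness-of-fit measure is used, computed per electrode and time window with appropriate cross-validation) is the quantity compared across parsers trained on differing amounts and genres of text.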
Pages: 5846 - 5852
Number of pages: 7
Related Papers
50 records in total
  • [1] A practical system for human-like parsing
    Huyck, CR
    ECAI 2000: 14TH EUROPEAN CONFERENCE ON ARTIFICIAL INTELLIGENCE, PROCEEDINGS, 2000, 54 : 436 - 440
  • [2] Training human-like bots with Imitation Learning based on provenance data
    Ramos Cavadas, Lauro Victor
    Clua, Esteban
    Kohwalter, Troy Costa
    Melo, Sidney Araujo
    2022 21ST BRAZILIAN SYMPOSIUM ON COMPUTER GAMES AND DIGITAL ENTERTAINMENT (SBGAMES), 2022, : 55 - 60
  • [3] Broad-Coverage Parsing Using Human-Like Memory Constraints
    Schuler, William
    AbdelRahman, Samir
    Miller, Tim
    Schwartz, Lane
    COMPUTATIONAL LINGUISTICS, 2010, 36 (01) : 1 - 30
  • [4] Refining LLMs with Reinforcement Learning for Human-Like Text Generation
    Harish, Aditya
    Prakash, Gaurav
    Nair, Ronith R.
    Iyer, Varun Bhaskaran
    Kumar, Anand M.
    10TH INTERNATIONAL CONFERENCE ON ELECTRONICS, COMPUTING AND COMMUNICATION TECHNOLOGIES, CONECCT 2024, 2024,
  • [5] Will human-like machines make human-like mistakes?
    Livesey, Evan J.
    Goldwater, Micah B.
    Colagiuri, Ben
    BEHAVIORAL AND BRAIN SCIENCES, 2017, 40
  • [6] Exploring Human-Like Reading Strategy for Abstractive Text Summarization
    Yang, Min
    Qu, Qiang
    Tu, Wenting
    Shen, Ying
    Zhao, Zhou
    Chen, Xiaojun
    THIRTY-THIRD AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE / THIRTY-FIRST INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE / NINTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2019, : 7362 - 7369
  • [7] Human-like Explanation for Text Classification With Limited Attention Supervision
    Zhang, Dongyu
    Sen, Cansu
    Thadajarassiri, Jidapa
    Hartvigsen, Thomas
    Kong, Xiangnan
    Rundensteiner, Elke
    2021 IEEE INTERNATIONAL CONFERENCE ON BIG DATA (BIG DATA), 2021, : 957 - 967
  • [8] Generating human-like soccer primitives from human data
    Calderon, Carlos A. Acosta
    Elara, Mohan Rajesh
    Hu, Lingyun
    Zhou, Changjiu
    Hu, Huosheng
    ROBOTICS AND AUTONOMOUS SYSTEMS, 2009, 57 (08) : 860 - 869
  • [9] Human-like Patient Robot for Injection Training by Chaotic Behavior
    Kitagawa, Yoshiro
    Song, Wei
    Minami, Mamoru
    Mae, Yasushi
    PROCEEDINGS OF THE SECOND INTERNATIONAL SYMPOSIUM ON TEST AUTOMATION & INSTRUMENTATION, VOLS 1-2, 2008, : 136 - 139
  • [10] Text-based robot emotion and human-like emotional transition
    Chae, Yu-Jung
    Jeon, Tae-Hee
    Kim, ChangHwan
    Park, Sung-Kee
    2021 IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS (IROS), 2021, : 838 - 845