Pushdown Layers: Encoding Recursive Structure in Transformer Language Models

Cited by: 0
Authors
Murty, Shikhar [1 ]
Sharma, Pratyusha [2 ]
Andreas, Jacob [2 ]
Manning, Christopher D. [1 ]
Affiliations
[1] Stanford Univ, Dept Comp Sci, Stanford, CA 94305 USA
[2] MIT CSAIL, Cambridge, MA USA
Keywords: (none listed)
DOI: (not available)
CLC Classification: TP18 [Theory of Artificial Intelligence]
Subject Classification Codes: 081104; 0812; 0835; 1405
Abstract
Recursion is a prominent feature of human language, and fundamentally challenging for self-attention due to the lack of an explicit recursive-state tracking mechanism. Consequently, Transformer language models poorly capture long-tail recursive structure and exhibit sample-inefficient syntactic generalization. This work introduces Pushdown Layers, a new self-attention layer that models recursive state via a stack tape that tracks estimated depths of every token in an incremental parse of the observed prefix. Transformer LMs with Pushdown Layers are syntactic language models that autoregressively and synchronously update this stack tape as they predict new tokens, in turn using the stack tape to softly modulate attention over tokens, for instance learning to "skip" over closed constituents. When trained on a corpus of strings annotated with silver constituency parses, Transformers equipped with Pushdown Layers achieve dramatically better and 3-5x more sample-efficient syntactic generalization, while maintaining similar perplexities. Pushdown Layers are a drop-in replacement for standard self-attention. We illustrate this by finetuning GPT2-medium with Pushdown Layers on an automatically parsed WikiText-103, leading to improvements on several GLUE text classification tasks.
Pages: 3233-3247 (15 pages)
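
The abstract above describes self-attention that is softly modulated by per-token stack depths derived from an incremental parse. Below is a minimal, self-contained PyTorch sketch of one way such depth-modulated attention could look. It is not the authors' implementation: the module name DepthModulatedSelfAttention, the relative-depth bias embedding, and the externally supplied `depths` tensor are illustrative assumptions; the actual Pushdown Layers additionally predict stack-tape updates jointly with each new token.

```python
# Minimal sketch of depth-modulated causal self-attention, loosely inspired
# by the Pushdown Layers idea summarized in the abstract. All names and the
# specific biasing scheme are assumptions made for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DepthModulatedSelfAttention(nn.Module):
    """Causal self-attention whose logits are softly biased by an externally
    supplied per-token "stack depth" (e.g. from an incremental parse)."""

    def __init__(self, d_model: int, n_heads: int, max_depth: int = 32):
        super().__init__()
        assert d_model % n_heads == 0
        self.n_heads = n_heads
        self.d_head = d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.out = nn.Linear(d_model, d_model)
        # One scalar bias per (relative-depth bucket, head): the model can
        # learn, for instance, to down-weight tokens buried inside already
        # closed constituents (large depth difference).
        self.depth_bias = nn.Embedding(2 * max_depth + 1, n_heads)
        self.max_depth = max_depth

    def forward(self, x: torch.Tensor, depths: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); depths: (batch, seq) integer stack depths.
        B, T, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        k = k.view(B, T, self.n_heads, self.d_head).transpose(1, 2)
        v = v.view(B, T, self.n_heads, self.d_head).transpose(1, 2)

        logits = q @ k.transpose(-2, -1) / self.d_head**0.5      # (B, H, T, T)

        # Relative depth between query token i and key token j, bucketed.
        rel = (depths.unsqueeze(2) - depths.unsqueeze(1)).clamp(
            -self.max_depth, self.max_depth
        ) + self.max_depth                                        # (B, T, T)
        bias = self.depth_bias(rel).permute(0, 3, 1, 2)           # (B, H, T, T)
        logits = logits + bias

        # Standard causal mask: no attention to future positions.
        causal = torch.triu(
            torch.ones(T, T, dtype=torch.bool, device=x.device), 1
        )
        logits = logits.masked_fill(causal, float("-inf"))

        attn = F.softmax(logits, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, T, -1)
        return self.out(out)


if __name__ == "__main__":
    layer = DepthModulatedSelfAttention(d_model=64, n_heads=4)
    x = torch.randn(2, 10, 64)
    depths = torch.randint(0, 5, (2, 10))   # stand-in for parser-derived depths
    print(layer(x, depths).shape)           # torch.Size([2, 10, 64])
```

In this sketch the depths are assumed to come from an external incremental parser; the learned per-head relative-depth bias is one simple way attention could learn to down-weight tokens inside closed constituents, roughly the "skipping" behavior the abstract mentions.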