DFX: A Low-latency Multi-FPGA Appliance for Accelerating Transformer-based Text Generation

Cited: 22
Authors
Hong, Seongmin [1 ]
Moon, Seungjae [1 ]
Kim, Junsoo [1 ]
Lee, Sungjae [2 ]
Kim, Minsub [2 ]
Lee, Dongsoo [2 ]
Kim, Joo-Young [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Daejeon, South Korea
[2] NAVER CLOVA, Seongnam, South Korea
Source
2022 55TH ANNUAL IEEE/ACM INTERNATIONAL SYMPOSIUM ON MICROARCHITECTURE (MICRO) | 2022
Keywords
Natural Language Processing; GPT; Text Generation; Datacenter; Multi-FPGA Acceleration; Model Parallelism;
DOI
10.1109/MICRO56248.2022.00051
Chinese Library Classification
TP3 [Computing Technology, Computer Technology]
Subject Classification Code
0812
Abstract
Transformer is a deep learning language model widely used for natural language processing (NLP) services in datacenters. Among transformer models, the Generative Pre-trained Transformer (GPT) has achieved remarkable performance in text generation, or natural language generation (NLG), which requires processing a large input context in the summarization stage, followed by a generation stage that produces a single word at a time. Conventional platforms such as GPUs are specialized for the parallel processing of large inputs in the summarization stage, but their performance degrades significantly in the generation stage because of its sequential nature. An efficient hardware platform is therefore required to address the high latency caused by this sequential characteristic of text generation. In this paper, we present DFX, a multi-FPGA acceleration appliance that executes GPT-2 model inference end-to-end with low latency and high throughput in both the summarization and generation stages. DFX uses model parallelism and an optimized, model-and-hardware-aware dataflow for fast simultaneous workload execution across devices. Its compute cores operate on custom instructions and support GPT-2 operations end-to-end. We implement the proposed hardware architecture on four Xilinx Alveo U280 FPGAs, utilizing all channels of the high-bandwidth memory (HBM) and the maximum number of compute resources for high hardware efficiency. DFX achieves a 5.58x speedup and 3.99x higher energy efficiency over four NVIDIA V100 GPUs on the modern GPT-2 model. DFX is also 8.21x more cost-effective than the GPU appliance, suggesting that it is a promising solution for text generation workloads in cloud datacenters.
Pages: 616-630
Page count: 15
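
The abstract hinges on the distinction between the summarization (prefill) stage, which can process the whole input context in parallel, and the generation (decode) stage, which emits one token at a time. Below is a minimal, illustrative Python sketch of that two-stage dataflow under toy assumptions; it is not DFX's implementation, and all names (attention_step, summarize, generate) and sizes are invented for the example.

  # Minimal sketch (not DFX's implementation) of the two text-generation stages
  # described in the abstract: prefill over the full context, then sequential decode.
  import numpy as np

  D = 16            # toy hidden size (assumption)
  VOCAB = 100       # toy vocabulary size (assumption)
  rng = np.random.default_rng(0)
  W_qkv = rng.standard_normal((D, 3 * D)) / np.sqrt(D)
  W_out = rng.standard_normal((D, VOCAB)) / np.sqrt(D)
  embed = rng.standard_normal((VOCAB, D))

  def attention_step(x, kv_cache):
      """Single-token self-attention against all cached keys/values."""
      q, k, v = np.split(x @ W_qkv, 3)
      kv_cache["k"].append(k)
      kv_cache["v"].append(v)
      K = np.stack(kv_cache["k"])          # (t, D)
      V = np.stack(kv_cache["v"])
      scores = K @ q / np.sqrt(D)
      probs = np.exp(scores - scores.max())
      probs /= probs.sum()
      return probs @ V

  def summarize(prompt_ids):
      """Prefill: the whole context is known up front, so every token can be
      handled as one large batched workload (where GPUs excel)."""
      cache = {"k": [], "v": []}
      h = None
      for tok in prompt_ids:               # written as a loop for clarity;
          h = attention_step(embed[tok], cache)  # in practice one large matmul
      return h, cache

  def generate(h, cache, n_new):
      """Decode: each new token depends on the previous one, so the work is
      inherently sequential -- the latency bottleneck DFX targets."""
      out = []
      for _ in range(n_new):
          logits = h @ W_out
          tok = int(np.argmax(logits))     # greedy decoding for the sketch
          out.append(tok)
          h = attention_step(embed[tok], cache)
      return out

  prompt = [3, 17, 42, 8]
  hidden, kv = summarize(prompt)
  print(generate(hidden, kv, n_new=5))

The sequential dependence inside generate (each step's attention consumes the token just produced) is what leaves parallel hardware underutilized in the generation stage and motivates the paper's latency-oriented multi-FPGA design.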