Smart-Infinity: Fast Large Language Model Training using Near-Storage Processing on a Real System

Cited by: 1
Authors
Jang, Hongsun [1 ]
Song, Jaeyong [1 ]
Jung, Jaewon [1 ]
Park, Jaeyoung [2 ,4 ]
Kim, Youngsok [3 ]
Lee, Jinho [1 ]
Affiliations
[1] Seoul Natl Univ, Dept Elect & Comp Engn, Seoul, South Korea
[2] Univ Texas Austin, Dept Elect & Comp Engn, Austin, TX 78712 USA
[3] Yonsei Univ, Dept Comp Sci, Seoul, South Korea
[4] Yonsei Univ, Seoul, South Korea
Funding
National Research Foundation of Singapore;
Keywords
ARCHITECTURE;
DOI
10.1109/HPCA57654.2024.00034
Chinese Library Classification (CLC)
TP3 [Computing technology, computer technology];
Subject classification code
0812;
Abstract
The recent huge advance of Large Language Models (LLMs) is mainly driven by the increase in the number of parameters. This has led to substantial memory capacity requirements, necessitating the use of dozens of GPUs just to meet the capacity. One popular solution to this is storage-offloaded training, which uses host memory and storage as an extended memory hierarchy. However, this comes at the cost of a storage bandwidth bottleneck, because storage devices have orders-of-magnitude lower bandwidth than GPU device memory. Our work, Smart-Infinity, addresses the storage bandwidth bottleneck of storage-offloaded LLM training using near-storage processing devices on a real system. The main component of Smart-Infinity is SmartUpdate, which performs parameter updates on custom near-storage accelerators. We identify that moving parameter updates to the storage side removes most of the storage traffic. In addition, we propose an efficient data transfer handler structure to address the system integration issues for Smart-Infinity. The handler allows overlapping data transfers with fixed memory consumption by reusing the device buffer. Lastly, we propose accelerator-assisted gradient compression/decompression to enhance the scalability of Smart-Infinity. When scaling to multiple near-storage processing devices, the write traffic on the shared channel becomes the bottleneck. To alleviate this, we compress the gradients on the GPU and decompress them on the accelerators, and the reduced traffic provides further acceleration. As a result, Smart-Infinity achieves a significant speedup compared to the baseline. Notably, Smart-Infinity is a ready-to-use approach that is fully integrated into PyTorch on a real system. The implementation of Smart-Infinity is available at https://github.com/AIS-SNU/smart-infinity.
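To make the abstract's traffic argument concrete, the following is a rough back-of-the-envelope sketch (not from the paper) of per-parameter bytes crossing the host-to-SSD link per optimizer step. It assumes mixed-precision training with Adam-style optimizer states kept on storage (fp32 master weights, momentum, and variance) and fp16 gradients/parameters; the byte counts, function names, and the 4x compression ratio below are illustrative assumptions, not values reported by Smart-Infinity.

# Illustrative link-traffic model (assumptions, not figures from the paper):
# mixed-precision training with Adam, fp16 gradients/params (2 B each),
# fp32 master weight + momentum + variance (4 B each) resident on the SSD.

FP16 = 2  # bytes per fp16 value
FP32 = 4  # bytes per fp32 value

def baseline_offload_bytes_per_param() -> float:
    """Host-side update: optimizer states must cross the host<->SSD link."""
    write_grad = FP16        # gradient written out to storage
    read_states = 3 * FP32   # master weight, momentum, variance read to the host
    write_states = 3 * FP32  # updated optimizer states written back
    read_param = FP16        # updated fp16 parameter read back for the next forward pass
    return write_grad + read_states + write_states + read_param

def smartupdate_bytes_per_param(compression_ratio: float = 1.0) -> float:
    """Near-storage update: only (optionally compressed) gradients go down and
    updated fp16 parameters come back; optimizer states never leave the device."""
    write_grad = FP16 / compression_ratio  # gradients, compressed on the GPU
    read_param = FP16                      # updated fp16 parameter read back
    return write_grad + read_param

if __name__ == "__main__":
    base = baseline_offload_bytes_per_param()
    near = smartupdate_bytes_per_param()
    near_c = smartupdate_bytes_per_param(compression_ratio=4.0)
    print(f"baseline offload      : {base:.1f} B/param/step")
    print(f"near-storage update   : {near:.1f} B/param/step ({base / near:.1f}x less link traffic)")
    print(f" + 4x grad compression: {near_c:.1f} B/param/step ({base / near_c:.1f}x less link traffic)")

Under these assumed sizes, keeping the optimizer states on the storage side already removes most of the per-step link traffic, and GPU-side gradient compression mainly helps once several near-storage devices share one write channel, which mirrors the scalability argument in the abstract.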
Pages: 345-360
Page count: 16