LNPU: A 25.3TFLOPS/W Sparse Deep-Neural-Network Learning Processor with Fine-Grained Mixed Precision of FP8-FP16

Cited: 0
Authors
Lee, Jinsu [1 ]
Lee, Juhyoung [1 ]
Han, Donghyeon [1 ]
Lee, Jinmook [1 ]
Park, Gwangtae [1 ]
Yoo, Hoi-Jun [1 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Daejeon, South Korea
Keywords
DOI: not available
Chinese Library Classification: TM (Electrical Engineering); TN (Electronics and Communication Technology)
Discipline codes: 0808; 0809
Abstract
Pages: 142+
Page count: 3
Related Papers (4)
  • [1] An Energy-Efficient Sparse Deep-Neural-Network Learning Accelerator With Fine-Grained Mixed Precision of FP8-FP16
    Lee, Jinsu
    Lee, Juhyoung
    Han, Donghyeon
    Lee, Jinmook
    Park, Gwangtae
    Yoo, Hoi-Jun
    [J]. IEEE SOLID-STATE CIRCUITS LETTERS, 2019, 2 (11): 232-235
  • [2] A 4.27TFLOPS/W FP4/FP8 Hybrid-Precision Neural Network Training Processor Using Shift-Add MAC and Reconfigurable PE Array
    Lee, Sunwoo
    Park, Jeongwoo
    Jeon, Dongsuk
    [J]. IEEE 49TH EUROPEAN SOLID STATE CIRCUITS CONFERENCE, ESSCIRC 2023, 2023: 221-224
  • [3] A Dynamic Execution Neural Network Processor for Fine-Grained Mixed-Precision Model Training Based on Online Quantization Sensitivity Analysis
    Liu, Ruoyang
    Wei, Chenhan
    Yang, Yixiong
    Wang, Wenxun
    Yuan, Binbin
    Yang, Huazhong
    Liu, Yongpan
    [J]. IEEE JOURNAL OF SOLID-STATE CIRCUITS, 2024, 59 (09): 3082-3093
  • [4] A 146.52 TOPS/W Deep-Neural-Network Learning Processor with Stochastic Coarse-Fine Pruning and Adaptive Input/Output/Weight Skipping
    Kim, Sangyeob
    Lee, Juhyoung
    Kang, Sanghoon
    Lee, Jinmook
    Yoo, Hoi-Jun
    [J]. 2020 IEEE SYMPOSIUM ON VLSI CIRCUITS, 2020