共 50 条
- [1] Scalable Methods for 8-bit Training of Neural Networks ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
- [3] Exploring 8-bit Arithmetic for Training Spiking Neural Networks 2024 IEEE INTERNATIONAL CONFERENCE ON OMNI-LAYER INTELLIGENT SYSTEMS, COINS 2024, 2024, : 380 - 385
- [4] Training Deep Neural Networks with 8-bit Floating Point Numbers ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
- [5] DC-MPQ: Distributional Clipping-based Mixed-Precision Quantization for Convolutional Neural Networks 2022 IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE CIRCUITS AND SYSTEMS (AICAS 2022): INTELLIGENT TECHNOLOGY IN THE POST-PANDEMIC ERA, 2022, : 130 - 133
- [6] Sub-8-Bit Quantization Aware Training for 8-Bit Neural Network Accelerator with On-Device Speech Recognition INTERSPEECH 2022, 2022, : 3033 - 3037
- [7] PTMQ: Post-training Multi-Bit Quantization of Neural Networks THIRTY-EIGHTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 38 NO 14, 2024, : 16193 - 16201
- [8] Hybrid 8-bit Floating Point (HFP8) Training and Inference for Deep Neural Networks ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 32 (NIPS 2019), 2019, 32
- [10] Training Deep Neural Networks in 8-bit Fixed Point with Dynamic Shared Exponent Management PROCEEDINGS OF THE 2021 DESIGN, AUTOMATION & TEST IN EUROPE CONFERENCE & EXHIBITION (DATE 2021), 2021, : 1536 - 1541