- [1] Quantized Guided Pruning for Efficient Hardware Implementations of Deep Neural Networks [J]. 2020 18TH IEEE INTERNATIONAL NEW CIRCUITS AND SYSTEMS CONFERENCE (NEWCAS'20), 2020, : 206 - 209
- [2] A Hardware Accelerator Based on Quantized Weights for Deep Neural Networks [J]. EMERGING RESEARCH IN ELECTRONICS, COMPUTER SCIENCE AND TECHNOLOGY, ICERECT 2018, 2019, 545 : 1079 - 1091
- [3] Efficient Hardware Acceleration for Approximate Inference of Bitwise Deep Neural Networks [J]. 2017 CONFERENCE ON DESIGN AND ARCHITECTURES FOR SIGNAL AND IMAGE PROCESSING (DASIP), 2017,
- [4] EBSP: Evolving Bit Sparsity Patterns for Hardware-Friendly Inference of Quantized Deep Neural Networks [J]. PROCEEDINGS OF THE 59TH ACM/IEEE DESIGN AUTOMATION CONFERENCE, DAC 2022, 2022, : 259 - 264
- [5] Hardware for Quantized Mixed-Precision Deep Neural Networks [J]. PROCEEDINGS OF THE 2022 15TH IEEE DALLAS CIRCUITS AND SYSTEMS CONFERENCE (DCAS 2022), 2022,
- [6] Inference and Energy Efficient Design of Deep Neural Networks for Embedded Devices [J]. 2020 IEEE COMPUTER SOCIETY ANNUAL SYMPOSIUM ON VLSI (ISVLSI 2020), 2020, : 36 - 41
- [7] Adaptive learning rule for hardware-based deep neural networks using electronic synapse devices [J]. NEURAL COMPUTING & APPLICATIONS, 2019, 31 (11): 8101 - 8116
- [8] A Pipelined Energy-efficient Hardware Acceleration for Deep Convolutional Neural Networks [J]. 2019 IEEE INTERNATIONAL CONFERENCE ON DESIGN & TEST OF INTEGRATED MICRO & NANO-SYSTEMS (DTS), 2019,
- [9] FLightNNs: Lightweight Quantized Deep Neural Networks for Fast and Accurate Inference [J]. PROCEEDINGS OF THE 2019 56TH ACM/EDAC/IEEE DESIGN AUTOMATION CONFERENCE (DAC), 2019,