共 50 条
- [2] Direct Quantization for Training Highly Accurate Low Bit-width Deep Neural Networks [J]. PROCEEDINGS OF THE TWENTY-NINTH INTERNATIONAL JOINT CONFERENCE ON ARTIFICIAL INTELLIGENCE, 2020, : 2111 - 2118
- [3] Combinatorial optimization for low bit-width neural networks [J]. 2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 2246 - 2252
- [5] Accelerating Low Bit-Width Convolutional Neural Networks With Embedded FPGA [J]. 2017 27TH INTERNATIONAL CONFERENCE ON FIELD PROGRAMMABLE LOGIC AND APPLICATIONS (FPL), 2017,
- [6] MXQN:Mixed quantization for reducing bit-width of weights and activations in deep convolutional neural networks [J]. Applied Intelligence, 2021, 51 : 4561 - 4574
- [9] Towards Accurate Low Bit-Width Quantization with Multiple Phase Adaptations [J]. THIRTY-FOURTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, THE THIRTY-SECOND INNOVATIVE APPLICATIONS OF ARTIFICIAL INTELLIGENCE CONFERENCE AND THE TENTH AAAI SYMPOSIUM ON EDUCATIONAL ADVANCES IN ARTIFICIAL INTELLIGENCE, 2020, 34 : 6591 - 6598
- [10] Accelerating Low Bit-width Neural Networks at the Edge, PIM or FPGA: A Comparative Study [J]. PROCEEDINGS OF THE GREAT LAKES SYMPOSIUM ON VLSI 2023, GLSVLSI 2023, 2023, : 625 - 630