50 items in total
- [1] Energy-efficient deep learning inference on edge devices [J]. Hardware Accelerator Systems for Artificial Intelligence and Machine Learning, 2021, 122: 247-301
- [3] PIE: A Pipeline Energy-efficient Accelerator for Inference Process in Deep Neural Networks [C]. 2016 IEEE 22nd International Conference on Parallel and Distributed Systems (ICPADS), 2016: 1067-1074
- [4] Energy-Efficient Embedded Inference of SVMs on FPGA [C]. 2019 IEEE Computer Society Annual Symposium on VLSI (ISVLSI 2019), 2019: 165-169
- [6] Energy-efficient cooperative inference via adaptive deep neural network splitting at the edge [C]. ICC 2023 - IEEE International Conference on Communications, 2023: 1712-1717
- [7] TIE: Energy-efficient Tensor Train-based Inference Engine for Deep Neural Network [C]. Proceedings of the 2019 46th International Symposium on Computer Architecture (ISCA '19), 2019: 264-277
- [8] Poster Abstract: MicroBrain: Compressing Deep Neural Networks for Energy-efficient Visual Inference Service [C]. 2017 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), 2017: 1000-1001
- [9] SONIC: A Sparse Neural Network Inference Accelerator with Silicon Photonics for Energy-Efficient Deep Learning [C]. 27th Asia and South Pacific Design Automation Conference, ASP-DAC 2022, 2022: 214-219