50 items in total
- [23] Fast Bayesian Inference of Sparse Networks with Automatic Sparsity Determination JOURNAL OF MACHINE LEARNING RESEARCH (Microtome Publishing), 2020, 21
- [24] Work-in-Progress: Flexible Group-Level Pruning of Deep Neural Networks for Fast Inference on Mobile GPUs INTERNATIONAL CONFERENCE ON COMPILERS, ARCHITECTURE, AND SYNTHESIS FOR EMBEDDED SYSTEMS (CASES) 2019, 2019,
- [25] Boosting Mobile CNN Inference through Semantic Memory PROCEEDINGS OF THE 29TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2021, 2021, : 2362 - 2371
- [26] Pantheon: Preemptible Multi-DNN Inference on Mobile Edge GPUs PROCEEDINGS OF THE 2024 THE 22ND ANNUAL INTERNATIONAL CONFERENCE ON MOBILE SYSTEMS, APPLICATIONS AND SERVICES, MOBISYS 2024, 2024, : 465 - 478
- [27] Performance Evaluation of INT8 Quantized Inference on Mobile GPUs IEEE ACCESS, 2021, 9 : 164245 - 164255
- [28] Fast CNN Inference by Adaptive Sparse Matrix Decomposition 2021 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2021,
- [29] PAQSIM: Fast Performance Model for Graphics Workload on Mobile GPUs 21ST ACM SIGPLAN/SIGBED CONFERENCE ON LANGUAGES, COMPILERS, AND TOOLS FOR EMBEDDED SYSTEMS (LCTES '20), 2020, : 3 - 13