With the increasing use of technology in daily life, data privacy has become a critical issue, and technologies must be designed from the outset to protect people's personal information. What is needed are privacy-enhancing technologies (PETs), not merely more technology. Artificial intelligence (AI) and deep learning, now driving forces across society, are no exception. Yet AI practitioners typically design and develop systems without taking privacy into account. To address this gap, we propose a pragmatic privacy-preserving deep learning framework suited to AI practitioners. The framework is designed to satisfy differential privacy, a rigorous standard for privacy protection. It builds on the Private Aggregation of Teacher Ensembles (PATE) setting, to which we make several improvements that yield better accuracy and stronger privacy protection. Specifically, we employ a differentially private aggregation mechanism based on the sparse vector technique and combine it with further improvements such as human-in-the-loop interaction and pre-trained models. Our solution demonstrates that privacy-preserving models approximating the ground-truth models can be produced under a fixed privacy budget. These models can handle a large number of training requests, making them suitable for deep learning training processes. Furthermore, the framework can be deployed in both centralized and distributed training settings. We hope this work encourages AI practitioners to adopt PETs and to build technologies with privacy in mind.
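To make the named aggregation mechanism concrete, the sketch below shows one plausible way the sparse vector technique (SVT) can gate PATE-style teacher voting: a query is answered with a noisy argmax over teacher votes only when its noisy top vote count clears a noisy threshold, and the mechanism abstains otherwise. This is a minimal illustration under assumed conventions, not the paper's actual algorithm; the function name `svt_pate_aggregate`, the parameters `eps_check`, `eps_answer`, and `max_answers`, and the Laplace noise scales are all hypothetical placeholders, and calibrating those scales to the true sensitivities is precisely what a real privacy analysis must do.

```python
import numpy as np

rng = np.random.default_rng(0)

def svt_pate_aggregate(teacher_votes, threshold, eps_check, eps_answer, max_answers):
    """Hypothetical sketch of SVT-gated PATE label aggregation.

    teacher_votes: iterable of 1-D arrays, each holding per-class vote
                   counts from the teacher ensemble for one unlabeled query.
    Returns a list with a predicted class index for answered queries and
    None for queries the mechanism abstained on.
    """
    # SVT noisy threshold, drawn once for the whole query sequence.
    # NOTE: the noise scales below are illustrative placeholders only.
    noisy_threshold = threshold + rng.laplace(scale=2.0 / eps_check)
    answers, spent = [], 0
    for votes in teacher_votes:
        if spent >= max_answers:  # hard cap on answered queries
            answers.append(None)
            continue
        # SVT test: does the noisy max vote clear the noisy threshold?
        if votes.max() + rng.laplace(scale=4.0 / eps_check) >= noisy_threshold:
            # Confident query: release a noisy argmax over the vote counts.
            noisy_votes = votes + rng.laplace(scale=2.0 / eps_answer,
                                              size=votes.shape)
            answers.append(int(noisy_votes.argmax()))
            spent += 1
        else:
            # Abstain; e.g. defer the query to a human in the loop.
            answers.append(None)
    return answers

# Example: two queries, one with strong teacher consensus, one without.
votes = [np.array([8, 1, 1]), np.array([4, 3, 3])]
print(svt_pate_aggregate(votes, threshold=6,
                         eps_check=0.5, eps_answer=0.5, max_answers=10))
```

The design intuition this sketch captures is why a fixed privacy budget can stretch across many training requests: budget is spent only on queries where the teachers largely agree (which leak little about any single teacher's data), while ambiguous queries are abstained on or handed to a human rather than answered at full privacy cost.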