Federated Learning (FL), an emerging distributed machine learning technique, reduces the computational burden on the central server through decentralization while helping to preserve data privacy. In each round, a subset of clients is sampled to perform local training, after which their models are aggregated on a central server. Although this distributed learning approach benefits privacy preservation, it also increases the computational load on local clients. Lightweight and efficient schemes are therefore indispensable for reducing communication and computation costs in FL. In addition, because uploaded models are exposed to model-stealing attacks, stronger privacy protection is urgently needed. In this paper, we propose Federated Adaptive Pruning (FAP), a lightweight method that integrates FL with adaptive pruning by adjusting explicit regularization. Rather than modifying the model, we dynamically prune data from large local datasets during training to reduce computational costs and enhance privacy protection. In each round, selected clients train on their local data and prune a portion of that data before uploading the model for server-side aggregation; the remaining data are reserved for subsequent rounds. With this approach, selected clients can quickly refine their data early in training. We further combine FAP with differential privacy to strengthen data privacy. Through comprehensive experiments, we evaluate FAP on different datasets with basic models such as CNNs and MLPs. Extensive experimental results show that our method significantly prunes the datasets, reducing computational overhead with minimal loss of accuracy. Compared to previous methods, FAP achieves the lowest training error and further improves client-side data privacy.
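To make the per-round procedure concrete, below is a minimal sketch of an FAP-style training round, not the paper's reference implementation. The per-example-loss pruning rule, the Gaussian noise scale used to illustrate differential privacy, the FedAvg-style averaging, and all function names (`local_train`, `prune_client_data`, `fap_round`) are illustrative assumptions; the paper's adaptive criterion is driven by adjusting explicit regularization.

```python
# Conceptual sketch of FAP-style rounds: sampled clients train locally,
# prune part of their data, perturb their update, and the server averages.
import numpy as np

rng = np.random.default_rng(0)

def make_client(n=200, d=20):
    """Synthetic binary-classification data for one client."""
    X = rng.normal(size=(n, d))
    w_true = rng.normal(size=d)
    y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)
    return {"X": X, "y": y}

def local_train(w, client, lr=0.1, epochs=5, lam=1e-3):
    """A few epochs of L2-regularized logistic-regression gradient descent."""
    X, y = client["X"], client["y"]
    w = w.copy()
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        grad = X.T @ (p - y) / len(y) + lam * w
        w -= lr * grad
    return w

def prune_client_data(w, client, keep_frac=0.8):
    """Hypothetical pruning rule: keep the highest-loss examples and drop
    the 'easy' samples the current model already fits well."""
    X, y = client["X"], client["y"]
    p = np.clip(1.0 / (1.0 + np.exp(-X @ w)), 1e-8, 1 - 1e-8)
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    keep = np.argsort(loss)[-int(keep_frac * len(y)):]
    client["X"], client["y"] = X[keep], y[keep]   # remaining data reused later

def fap_round(w_global, clients, sample_size=3, noise_std=0.01):
    """Sample clients, train locally, prune data, add illustrative DP-style
    Gaussian noise to each update, and average on the server."""
    sampled = rng.choice(len(clients), size=sample_size, replace=False)
    updates = []
    for i in sampled:
        w_local = local_train(w_global, clients[i])
        prune_client_data(w_local, clients[i])        # shrink the local dataset
        w_local += rng.normal(scale=noise_std, size=w_local.shape)
        updates.append(w_local)
    return np.mean(updates, axis=0)                   # FedAvg-style aggregation

clients = [make_client() for _ in range(10)]
w = np.zeros(20)
for t in range(5):
    w = fap_round(w, clients)
    print(f"round {t}: client dataset sizes = {[len(c['y']) for c in clients]}")
```

Running the sketch shows the local datasets of sampled clients shrinking round by round while the aggregated model keeps training on the retained data, which is the cost-reduction effect the abstract describes.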