Federated Learning (FL) allows users to train a global model without sharing their original data, keeping data usable yet invisible to other parties. However, not all users are benign; malicious users can corrupt the global model by uploading poisonous parameters. Compared with other machine learning schemes, two factors make poisoning attacks easier to mount in FL: 1) malicious users can poison the model parameters directly, which is more efficient than data poisoning; 2) privacy-preserving techniques, such as homomorphic encryption (HE) and differential privacy (DP), provide cover for poisonous parameters, making it difficult for the server to detect outliers. To resolve this dilemma, in this paper we propose VPPFL, a verifiable privacy-preserving federated learning scheme built on DP. VPPFL defends against poisoning attacks and protects users' privacy at small computation and communication cost. Specifically, we design a verification mechanism that can verify parameters perturbed by DP noise and thereby identify poisonous parameters. In addition, we provide a comprehensive analysis of security, convergence, and complexity. Extensive experiments show that our scheme matches the detection capability of prior works while requiring only 15%-30% of their computation cost and 7%-14% of their communication cost.
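For intuition only, the sketch below illustrates the setting the abstract describes: local updates are clipped and perturbed with Gaussian DP noise before upload, and the server applies a simple distance-to-median check to flag suspicious uploads despite the noise. This is a generic illustration under assumed parameters, not VPPFL's actual verification mechanism; all function names, thresholds, and the detection rule are hypothetical.

```python
# Minimal, hypothetical sketch (not VPPFL's mechanism): DP-noised local updates
# plus a crude server-side outlier check. Names and thresholds are assumptions.
import numpy as np

def clip_and_noise(update, clip=1.0, sigma=0.05, rng=None):
    """Clip a local update to a bounded L2 norm, then add Gaussian DP noise."""
    rng = rng or np.random.default_rng()
    scale = min(1.0, clip / (np.linalg.norm(update) + 1e-12))
    return update * scale + rng.normal(0.0, sigma * clip, size=update.shape)

def flag_suspicious(updates, factor=2.0):
    """Flag noised updates that lie unusually far from the coordinate-wise median
    (a generic stand-in for poisoning detection under DP perturbation)."""
    stacked = np.stack(updates)
    center = np.median(stacked, axis=0)
    dists = np.linalg.norm(stacked - center, axis=1)
    return [i for i, d in enumerate(dists) if d > factor * np.median(dists)]

# Toy round: nine benign users share a similar update; one attacker uploads the
# opposite direction (model poisoning). DP noise is added before upload.
rng = np.random.default_rng(0)
true_update = rng.normal(0.0, 0.1, 100)
benign = [clip_and_noise(true_update + rng.normal(0, 0.02, 100), rng=rng)
          for _ in range(9)]
poison = clip_and_noise(-5.0 * true_update, rng=rng)
print(flag_suspicious(benign + [poison]))  # expected to flag index 9
```

Note that larger DP noise (a bigger `sigma`) quickly drowns out such naive distance checks, which is exactly the tension between privacy protection and poisoning detection that the proposed verification mechanism is designed to address.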