Deepfake technology has recently posed a significant threat to our digital society. It enables the modification of facial identity, expression, and attributes in facial images and videos. Misused, Deepfakes can invade personal privacy, damage individuals' reputations, and cause serious social harm. To counter this threat, researchers have proposed active defense methods that inject adversarial perturbations to distort Deepfake outputs, thereby hindering the dissemination of false information. However, existing methods are largely image-specific, which makes them inefficient for large-scale data. To address this issue, we propose an end-to-end approach that generates universal perturbations for combating Deepfakes. To further cope with diverse Deepfake models, we introduce an adaptive balancing strategy that combats multiple models simultaneously. Specifically, we propose two types of universal perturbations for different scenarios: the Disrupting Universal Perturbation (DUP) drives Deepfake models to generate visibly distorted outputs, whereas the Lapsing Universal Perturbation (LUP) forces the output to remain consistent with the original image, allowing the correct information to continue propagating. Experiments demonstrate the effectiveness and stronger generalization of our proposed perturbations compared with state-of-the-art methods. Consequently, our method offers a powerful and efficient solution for combating Deepfakes, helping to preserve personal privacy and prevent reputational damage.
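To make the two objectives concrete, the following is a minimal toy sketch (not the paper's implementation) of optimizing one universal perturbation, shared across a batch of inputs, against a stand-in "Deepfake model". All names and constants here (`toy_deepfake`, `EPS`, `STEP`, the 1-D inputs) are illustrative assumptions: DUP ascends the distance between the perturbed output and the clean output, while LUP descends the distance between the perturbed output and the original input.

```python
# Toy illustration of universal perturbations (assumed setup, not the paper's code).
EPS = 0.1    # L-infinity budget for the universal perturbation
STEP = 0.01  # signed-gradient step size
ITERS = 200

def toy_deepfake(x):
    # Stand-in for a Deepfake generator: a simple linear map on pixel values.
    return [0.5 * xi + 0.2 for xi in x]

def mse(a, b):
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) / len(a)

def clip(v, lo, hi):
    return max(lo, min(hi, v))

def universal_perturbation(images, mode="DUP"):
    """Optimize one perturbation shared by all images.
    mode="DUP": push outputs away from the clean outputs (disrupt).
    mode="LUP": pull outputs toward the original images (lapse)."""
    n = len(images[0])
    delta = [0.0] * n
    for _ in range(ITERS):
        grad = [0.0] * n
        for x in images:
            x_adv = [clip(xi + di, 0.0, 1.0) for xi, di in zip(x, delta)]
            y_adv = toy_deepfake(x_adv)
            if mode == "DUP":
                target = toy_deepfake(x)  # move away from the clean output
                sign = -1.0               # ascend the distance
            else:
                target = x                # move toward the original image
                sign = 1.0                # descend the distance
            # d/d(delta) of MSE(y_adv, target); toy_deepfake has slope 0.5
            for i in range(n):
                grad[i] += sign * 2.0 * (y_adv[i] - target[i]) * 0.5 / n
        # signed-gradient step, projected back onto the EPS ball
        delta = [clip(di - STEP * (1.0 if gi > 0 else -1.0), -EPS, EPS)
                 for di, gi in zip(delta, grad)]
    return delta
```

In this toy setting the DUP perturbation saturates at the budget boundary to maximize output distortion, while the LUP perturbation settles at values that cancel part of the model's alteration, so the "faked" output stays closer to the original.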