Federated learning is a collaborative machine learning approach that enables distributed users to train diverse models on resource-constrained devices by sharing gradients, thereby mitigating storage and computational burdens. However, because cloud service providers are not fully trusted, users often prefer to outsource sensitive data in encrypted form, which introduces significant complexities in data processing, analysis, and access control. In this context, privacy leakage during federated learning is a critical concern. To address these issues, this paper presents a new federated learning framework based on homomorphic encryption that protects data privacy while enabling collaborative model training. The proposed framework offers two notable benefits. First, it employs proxy homomorphic encryption to ensure the security of gradients, especially when the server's reliability is constrained; this strategy effectively preserves gradient confidentiality under partial trust in the server. Second, the framework allocates gradient weights based on the quality of user data, ensuring privacy preservation even when operating asynchronously. By factoring in data quality, the model accommodates disparities in data contributions and adapts gradient weights correspondingly, which not only enhances overall model performance but also bolsters the privacy of individual data. Through a series of experiments, we validate the efficacy of the proposed framework in both privacy preservation and model performance, demonstrating its capability to uphold excellent model performance while ensuring data privacy.
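The two mechanisms summarized above can be illustrated with a minimal sketch (not the paper's exact protocol): additively homomorphic Paillier encryption lets a partially trusted server aggregate encrypted gradients, weighted by per-user data-quality scores, without ever observing a plaintext gradient. The primes, weights, and quantized gradient values below are illustrative assumptions, not values from the paper.

```python
import math
import random

# --- toy Paillier keypair (demo-sized Mersenne primes; real deployments
# --- require an n of at least 2048 bits) ---
p, q = 2147483647, 2305843009213693951   # 2^31 - 1 and 2^61 - 1
n = p * q
n2 = n * n
g = n + 1
lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)   # lcm(p-1, q-1)
mu = pow(lam, -1, n)                                # valid because g = n + 1

def encrypt(m: int) -> int:
    """Encrypt m under the public key (n, g); c = g^m * r^n mod n^2."""
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    """Recover m via L(c^lam mod n^2) * mu mod n, with L(x) = (x-1)//n."""
    return ((pow(c, lam, n2) - 1) // n * mu) % n

# Users encrypt integer-quantized gradients locally (illustrative values).
gradients = [12, 7, 30]
weights   = [3, 1, 2]     # hypothetical data-quality weights

ciphertexts = [encrypt(gm) for gm in gradients]

# Server side: weighted aggregation performed entirely on ciphertexts.
# Multiplying ciphertexts adds plaintexts; raising to w scales by w.
agg = 1
for c, w in zip(ciphertexts, weights):
    agg = (agg * pow(c, w, n2)) % n2

assert decrypt(agg) == sum(w * gm for w, gm in zip(weights, gradients))
print(decrypt(agg))  # 12*3 + 7*1 + 30*2 = 103
```

In a full protocol, gradients would be quantized to integers (negative values encoded modulo n) and the decryption key would be held by the users or a proxy, never by the aggregating server.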