Federated learning is a privacy-preserving distributed machine learning paradigm. However, owing to client data heterogeneity, the global model trained by the traditional federated averaging algorithm often generalizes poorly. To mitigate the impact of data heterogeneity, prior work has proposed clustered federated learning, which groups clients with similar data distributions to reduce interference from dissimilar clients. However, because client data distributions are unknown, determining the optimal number of clusters is difficult, which degrades convergence efficiency. To address this issue, this paper proposes a personalized federated learning algorithm based on dynamic weight allocation, in which each client obtains a global model tailored to its local data distribution. First, during model aggregation, the server computes the similarity of model updates between clients and dynamically assigns each client's aggregation weights based on these similarities, producing a personalized global model for every client. Second, each client trains its local model from its received personalized global model using the personalized federated learning algorithm. Extensive experiments demonstrate that, compared with other personalized federated learning algorithms, the proposed method improves both model accuracy and convergence speed.
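The abstract does not specify the similarity metric or the weighting scheme; the sketch below illustrates one plausible instantiation of similarity-based dynamic weight allocation, using cosine similarity between flattened client model updates and a row-wise softmax to derive per-client aggregation weights. The function name `personalized_aggregate` and the `temperature` parameter are hypothetical, not from the paper.

```python
import numpy as np

def personalized_aggregate(updates, temperature=1.0):
    """For each client i, build a personalized global update as a
    similarity-weighted average of all clients' updates.

    updates: list of 1-D numpy arrays (flattened model updates).
    Returns: one personalized aggregated update per client.
    """
    U = np.stack(updates)                       # (n_clients, n_params)
    norms = np.linalg.norm(U, axis=1, keepdims=True) + 1e-12
    sim = (U / norms) @ (U / norms).T           # pairwise cosine similarity
    # A softmax over each row turns similarities into aggregation weights,
    # so each client's personalized model favors clients whose updates
    # point in a similar direction (i.e., have similar data distributions).
    w = np.exp(sim / temperature)
    w /= w.sum(axis=1, keepdims=True)
    return [w[i] @ U for i in range(len(updates))]

# Example: two clients with similar updates and one dissimilar client.
rng = np.random.default_rng(0)
base = rng.standard_normal(10)
updates = [base + 0.1 * rng.standard_normal(10),
           base + 0.1 * rng.standard_normal(10),
           -base]
personalized = personalized_aggregate(updates)
```

Under this scheme no cluster count is needed: the weights vary continuously with update similarity, which is one way to sidestep the cluster-number selection problem the abstract identifies in clustered federated learning.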