In multi-turn dialogue, ambiguous phenomena such as pronoun reference and the omission of linguistic components are common, leaving discourse information incomplete and language understanding ambiguous. As a result, the dialogue system is unable to produce accurate responses, and more dialogue history is usually needed to recover the correct intention of the conversation. To address this issue, we propose a dialogue ambiguity resolution model based on contextual knowledge, which draws on commonsense and domain knowledge to obtain relevant knowledge subgraphs through a dynamic-weight parameter model. Using historical dialogue information, the model evaluates the degree of association between knowledge triplets and the current dialogue context, and this association score serves as the basis for retrieving context-associated relational knowledge. Such knowledge can serve as reasoning paths that help resolve ambiguity in conversation. To fully integrate the context-associated relational knowledge with the original dialogue information while preserving the original dialogue features, we design a knowledge gate regulator that blends the last-layer hidden feature vectors of the original dialogue context with the feature vectors produced by the information aggregation layer. On the ambiguity resolution task, we achieve state-of-the-art accuracy on both the MuDoCo and SB-TOP datasets, with F1 scores of 96.07 (+2.95 over previous work) and 97.95 (+1.15 over previous work), respectively. On the coreference resolution task over the WSC dataset, our method improves the performance of the same base model by 4.8%, demonstrating a significant advantage at a comparable parameter count.
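One plausible reading of the knowledge gate regulator is a learned sigmoid gate that interpolates, per dimension, between the dialogue context's last-layer hidden vectors and the knowledge vectors from the information aggregation layer. The following is a minimal NumPy sketch under that assumption; the class name, the gate's exact parameterization, and the toy inputs are all hypothetical, not the paper's actual implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class KnowledgeGateRegulator:
    """Hypothetical sketch: blend the dialogue context's last-layer hidden
    vectors h with knowledge-aggregated vectors k via a learned gate."""

    def __init__(self, dim, seed=0):
        rng = np.random.default_rng(seed)
        # Gate projection over the concatenation [h; k] (assumed form).
        self.W = rng.standard_normal((2 * dim, dim)) * 0.02
        self.b = np.zeros(dim)

    def __call__(self, h, k):
        # g in (0, 1) decides, per dimension, how much of the original
        # dialogue feature to keep versus the injected knowledge feature.
        g = sigmoid(np.concatenate([h, k], axis=-1) @ self.W + self.b)
        return g * h + (1.0 - g) * k

# Toy usage with stand-in feature vectors.
dim = 8
gate = KnowledgeGateRegulator(dim)
h = np.ones((2, dim))    # original dialogue hidden states
k = np.zeros((2, dim))   # aggregated knowledge features
out = gate(h, k)         # convex blend: every value lies strictly in (0, 1)
```

Because the gate is a convex combination, the output always stays between the two input feature vectors element-wise, which is one way to "preserve the original dialogue feature information" while mixing in knowledge.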