Using artificial intelligence to solve medical problems is an intriguing yet challenging topic. In recent years, with the availability of electronic medical records (EMRs), many researchers have focused on diagnosing diseases by mining EMRs. They apply machine learning algorithms to EMR data and refine their models to improve the accuracy of disease diagnosis. However, these studies feed all patient characteristics into the model at once, without multiple rounds of online interaction. In other words, although such methods perform well in the laboratory, they are not suited to a real consultation environment. In the real world, doctors often guide patients to describe their condition step by step and then combine checkup values to diagnose the disease. In this paper, we simulate this process and propose a novel model called DKDR, which combines a knowledge graph and deep reinforcement learning to diagnose diseases. The medical knowledge graph is built by crawling 100K web pages and helps users refine their descriptions of disease characteristics. We use Q-learning to find the combination of symptoms that yields the best diagnosis, and we use convolutional neural networks (CNNs) to train each strategy. Finally, we run experiments on real and synthetic medical datasets. DKDR finds the best combination of symptoms for diagnosing a disease; the diagnostic accuracy rates for pneumonia, hyperlipidemia, and obesity are 80%, 82%, and 91%, respectively.
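To make the interactive setting concrete, the sketch below shows one way a Q-learning agent could learn which symptom to ask about next before committing to a diagnosis. The toy environment, symptom list, reward values, and hyperparameters are all assumptions introduced here for illustration; the actual DKDR agent combines a knowledge graph with CNN-trained strategies rather than this tabular toy.

```python
# Minimal sketch of Q-learning over symptom queries (illustrative only).
# The toy patient model, symptom set, rewards, and hyperparameters are
# assumptions, not the paper's actual implementation.
import random
from collections import defaultdict

SYMPTOMS = ["cough", "fever", "chest_pain", "fatigue"]   # hypothetical symptom set
DISEASES = ["pneumonia", "hyperlipidemia", "obesity"]    # diseases from the abstract
ACTIONS = [("ask", s) for s in SYMPTOMS] + [("diagnose", d) for d in DISEASES]

def toy_patient():
    """Sample a hypothetical patient: a true disease and its symptom pattern."""
    disease = random.choice(DISEASES)
    profile = {s: random.random() < 0.5 for s in SYMPTOMS}
    if disease == "pneumonia":          # crude, made-up correlation
        profile["cough"] = profile["fever"] = True
    return disease, profile

def q_learning(episodes=5000, alpha=0.1, gamma=0.95, eps=0.1):
    Q = defaultdict(float)              # Q[(state, action)] -> estimated value
    for _ in range(episodes):
        disease, profile = toy_patient()
        observed = {}                   # symptom -> answer gathered so far
        done = False
        while not done:
            state = tuple(sorted(observed.items()))
            # epsilon-greedy action selection
            if random.random() < eps:
                action = random.choice(ACTIONS)
            else:
                action = max(ACTIONS, key=lambda a: Q[(state, a)])
            if action[0] == "ask":
                reward = -0.1           # small cost for each extra question
                observed[action[1]] = profile[action[1]]
                next_state = tuple(sorted(observed.items()))
                target = reward + gamma * max(Q[(next_state, a)] for a in ACTIONS)
            else:                       # terminal diagnosis action
                reward = 1.0 if action[1] == disease else -1.0
                target = reward
                done = True
            Q[(state, action)] += alpha * (target - Q[(state, action)])
    return Q

if __name__ == "__main__":
    Q = q_learning()
    print("Q-values at the start of a consultation (no symptoms asked yet):")
    for a in ACTIONS:
        print(a, round(Q[((), a)], 3))
```

In this framing, the learned policy implicitly selects a combination of symptoms to query before diagnosing, which mirrors the step-by-step consultation the abstract describes.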