Deep Neural Networks (DNNs) have demonstrated outstanding performance on various medical image processing tasks. However, recent studies have revealed that medical DNNs are more vulnerable to adversarial attacks than their counterparts trained on natural images. In this work, we present a novel perspective by analyzing the disparities between medical and natural datasets, focusing specifically on the data collection process. Our analysis uncovers differences in the data distribution across image classes in medical datasets, a phenomenon absent in natural datasets. To gain deeper insight, we employ Fourier analysis tools to investigate medical DNNs. Intriguingly, we find that high-frequency components in medical images exhibit stronger associations with the corresponding labels than those in natural images. These high-frequency components distract the attention of medical DNNs, rendering them more susceptible to adversarial examples. To mitigate this vulnerability, we propose a preprocessing technique called Removing High-frequency Components (RH) training. Our experiments demonstrate that RH training significantly enhances the robustness of medical DNNs against adversarial attacks; notably, in certain scenarios, particularly under black-box attacks, it even outperforms traditional adversarial training.
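As a rough illustration of the kind of preprocessing RH training refers to, the sketch below removes high-frequency components from a single-channel image with a circular low-pass mask in the Fourier domain. The function name `remove_high_frequencies` and the `cutoff_ratio` parameter are illustrative assumptions, not the paper's exact implementation or cutoff.

```python
import numpy as np

def remove_high_frequencies(image: np.ndarray, cutoff_ratio: float = 0.25) -> np.ndarray:
    """Low-pass filter a single-channel image in the Fourier domain.

    `cutoff_ratio` is an assumed illustrative parameter: the fraction of the
    half-diagonal of the spectrum that is kept around the center.
    """
    h, w = image.shape
    # Shift the zero-frequency component to the center of the spectrum.
    spectrum = np.fft.fftshift(np.fft.fft2(image))

    # Build a circular mask that keeps only the low-frequency components.
    yy, xx = np.ogrid[:h, :w]
    center_y, center_x = h / 2.0, w / 2.0
    radius = cutoff_ratio * np.hypot(center_y, center_x)
    mask = (yy - center_y) ** 2 + (xx - center_x) ** 2 <= radius ** 2

    # Zero out high frequencies and transform back to image space.
    filtered = np.fft.ifft2(np.fft.ifftshift(spectrum * mask))
    return np.real(filtered)
```

In such a setup, the filter would be applied per channel to every image before it is fed to the network, during both training and inference.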