Deep Learning (DL) based classification of common objects using mm-Wave Frequency Modulated Continuous Wave (FMCW) radars is useful for many real-world automotive applications. These applications require highly precise and accurate embedded setups to prevent undesirable mishaps. In general, adversarial attacks can cause significant degradation in the performance of DL models. Given a specific hardware setup, the system should be equipped with classification algorithms that are robust against adversarial attacks. The goal of this paper is to present an experimental study of the effects of a classical adversarial attack on several state-of-the-art DL algorithms. In the first phase, we acquire experimental data and construct Range-Angle profile datasets using mm-Wave FMCW radars in real-world scenes with and without objects, including a human and a car at distances of 5, 10, and 20 meters. We then develop four DL models: a self-designed Convolutional Neural Network (CNN) called RadarNet, and transfer-learning-based ResNet34, InceptionV3, and GoogleNet2. The models yield an average accuracy of 94%, with the confusion matrix used as the performance metric. We further apply the Fast Gradient Sign Method (FGSM) adversarial attack to all four models and present a comparative study of its effects on classification accuracy. The results demonstrate that the average accuracy of the DL models degrades to 18.87%, 17.53%, and 16.08% for epsilon values of 0.001, 0.005, and 0.009, respectively. This significant degradation in accuracy highlights that adversarial retraining is essential to counter the effects of FGSM attacks.
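
For context, FGSM perturbs an input in the direction of the sign of the loss gradient with respect to that input: x_adv = x + epsilon * sign(grad_x J(theta, x, y)). The following is a minimal sketch of such an attack in PyTorch, not the paper's implementation; the function name fgsm_attack and the choice of cross-entropy loss are assumptions made for illustration.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    """Return x perturbed by epsilon * sign of the input gradient of the loss.

    Illustrative sketch: assumes `model` is a classifier whose output
    logits are compatible with cross-entropy against labels `y`.
    """
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    # Step each input element in the direction that increases the loss
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.detach()

Evaluating a trained model on x_adv for epsilon in {0.001, 0.005, 0.009} yields the kind of accuracy sweep reported above; larger epsilon values produce stronger perturbations and hence lower classification accuracy.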