Gait event detection, often performed on accelerometer data, is a critical task in numerous healthcare applications that identify and recognize a user's gait. In recent years, many deep neural network techniques have been explored and have achieved highly accurate gait event detection. However, there is a need to study the adversarial vulnerability of these deep neural network-based solutions when they are evaluated, across different gait scenarios and environments, with adversarial examples, which can be synthesized by strategically inserting minor perturbations into the original examples. In this paper, we first build a convolutional neural network (CNN)-based model and a long short-term memory (LSTM)-based model as the target gait event detection models. Then, we apply our proposed adversarial example generation approach to produce adversarial examples, which are fed into the target CNN and LSTM networks as testing data to evaluate their adversarial vulnerability. Our experimental results show that, when deceived by the generated adversarial examples, the target CNN model's gait event detection performance drops significantly, by as much as 0.168, 0.115, 0.197, and 0.221 in accuracy, precision, recall, and F1-score, respectively, relative to its original attack-free performance. The target LSTM model also suffers substantial reductions of 0.136, 0.116, 0.167, and 0.186 in these four metrics, respectively. This study highlights the crucial issue of the adversarial vulnerability of deep neural network-based gait event detection frameworks operating on accelerometer data.
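
To make the perturbation idea concrete, below is a minimal sketch of one common gradient-based attack, the fast gradient sign method (FGSM), applied to a toy 1D CNN over accelerometer windows. The model architecture, input shapes, and epsilon value here are illustrative assumptions, and FGSM is only a stand-in: the paper's own adversarial example generation approach is not specified in this abstract.

```python
# Illustrative FGSM-style perturbation of accelerometer windows (hypothetical
# GaitCNN model and shapes; NOT the paper's proposed generation approach).
import torch
import torch.nn as nn

class GaitCNN(nn.Module):
    """Toy 1D CNN over tri-axial accelerometer windows of shape (batch, 3, 128)."""
    def __init__(self, num_events=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(32, num_events)

    def forward(self, x):
        return self.classifier(self.features(x).squeeze(-1))

def fgsm_attack(model, x, y, epsilon=0.05):
    """Add an epsilon-bounded perturbation in the direction of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step each sample by epsilon along the sign of its gradient.
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

model = GaitCNN()
x = torch.randn(8, 3, 128)        # 8 synthetic tri-axial accelerometer windows
y = torch.randint(0, 4, (8,))     # synthetic gait-event labels
x_adv = fgsm_attack(model, x, y)
print((x_adv - x).abs().max())    # perturbation magnitude is bounded by epsilon
```

Keeping epsilon small preserves the overall shape of the gait signal while still shifting the model's predictions, which is the sense in which such perturbations are "minor" yet effective.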