This article presents a wearable sensor system for gesture recognition based on electrical impedance tomography (EIT). The EIT system integrates a MAX30009 module (which generates the excitation current and measures the response voltage), a CD74HC4067 multiplexer (for electrode switching), and an STM32F4 microcontroller (for system control and Bluetooth data transmission). Previous EIT-based gesture recognition studies typically used only the impedance magnitude; in this work, we aim to improve recognition accuracy by exploiting both resistance and reactance data. After verifying the accuracy of the system's resistance and reactance measurements as well as the quality of its EIT imaging, we built a small dataset of American Sign Language (ASL) digit gestures. The dataset contains 3012 samples of resistance and reactance data collected at rest and while performing the digit gestures 0 through 9. To ensure robustness, data were collected over six different time periods, with one period held out entirely for testing. We evaluated several commonly used classification models on this dataset, including support vector machines (SVMs), backpropagation neural networks (BPNNs), and AlexNet, and also explored models that had not previously been applied to this problem, such as ConvNeXt and subspace discriminants. Experimental results show that the subspace discriminant method, with a subspace dimension of 255 and 60 learners, achieved the highest test-set accuracy at 99.0%, surpassing more complex neural network architectures.
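To make the "subspace discriminant" classifier concrete, the sketch below shows the general random-subspace idea: an ensemble of linear discriminant learners, each trained on a random subset of features, combined by majority vote. This is an illustrative NumPy implementation, not the authors' code; the feature counts and subspace size are toy values, whereas the paper's best configuration used a subspace dimension of 255 with 60 learners on the resistance/reactance feature vectors.

```python
# Illustrative sketch of a random-subspace discriminant ensemble.
# Hypothetical class and parameter names; not the paper's implementation.
import numpy as np

class SubspaceDiscriminant:
    def __init__(self, n_learners=60, subspace_dim=8, seed=0):
        self.n_learners = n_learners      # paper's best setting: 60 learners
        self.subspace_dim = subspace_dim  # paper's best setting: 255 features
        self.rng = np.random.default_rng(seed)
        self.models = []

    def _fit_lda(self, X, y):
        # Pooled-covariance LDA: classify by Mahalanobis distance to class means.
        classes = np.unique(y)
        means = np.array([X[y == c].mean(axis=0) for c in classes])
        centered = X - means[np.searchsorted(classes, y)]
        cov = centered.T @ centered / len(X) + 1e-6 * np.eye(X.shape[1])
        return classes, means, np.linalg.inv(cov)

    def fit(self, X, y):
        for _ in range(self.n_learners):
            # Each learner sees a random feature subspace.
            idx = self.rng.choice(X.shape[1], self.subspace_dim, replace=False)
            classes, means, inv_cov = self._fit_lda(X[:, idx], y)
            self.models.append((idx, classes, means, inv_cov))
        return self

    def predict(self, X):
        votes = []
        for idx, classes, means, inv_cov in self.models:
            d = X[:, idx][:, None, :] - means[None, :, :]
            # Squared Mahalanobis distance of each sample to each class mean.
            dist = np.einsum('nkd,df,nkf->nk', d, inv_cov, d)
            votes.append(classes[dist.argmin(axis=1)])
        votes = np.stack(votes)
        # Majority vote across learners.
        return np.array([np.bincount(v).argmax() for v in votes.T])
```

A comparable model is available off the shelf, e.g. MATLAB's subspace-discriminant ensemble or scikit-learn's `BaggingClassifier` over `LinearDiscriminantAnalysis` with `bootstrap=False` and `max_features` set to the subspace dimension.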