In this paper, the design and evaluation of a two-stage neural network that recognizes isolated ASL signs are presented. The input to this network is the hand shape and position data obtained from a DataGlove mounted with a Polhemus sensor. The first stage consists of four backpropagation neural networks that recognize the sign language phonology, namely, the 36 hand shapes, 10 locations, 11 orientations, and 11 hand movements. The recognized phonemes from the beginning, middle, and end of the sign are fed to the second stage, which recognizes the actual signs. Both backpropagation and Kohonen's self-organizing networks were used for the second stage to compare the performance and the expandability of the learned vocabulary. In the current work, six signers with differing hand sizes signed 14 signs, which included hand shape, position, and motion fragile signs as well as triple robust signs. When a backpropagation network was used for the second stage, the results show that the network was able to recognize these signs with an overall accuracy of 86%. Further, the recognition results were linearly dependent on the size of the finger to the metacarpophalangeal (MP) joint and the total length of the hand. When the second stage was a Kohonen's self-organizing network, the network could not only recognize the signs with 84% accuracy, but also expand its learned vocabulary through relabeling.
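To make the data flow of the two-stage architecture concrete, the following is a minimal sketch (not the authors' code) of the backpropagation variant: four first-stage networks, one per phonological channel, whose class distributions for the beginning, middle, and end of a sign are concatenated and passed to a single second-stage sign classifier. The glove feature dimensionality, hidden-layer sizes, softmax outputs, and the concatenation scheme are all assumptions for illustration; the Kohonen second-stage variant and training procedure are not shown.

```python
# Sketch of the two-stage sign recognizer described in the abstract.
# Layer sizes and the number of glove features are assumed, not taken from the paper.
import numpy as np

rng = np.random.default_rng(0)

def make_mlp(n_in, n_hidden, n_out):
    """Randomly initialized one-hidden-layer network (weights only, untrained)."""
    return (rng.normal(0, 0.1, (n_in, n_hidden)), np.zeros(n_hidden),
            rng.normal(0, 0.1, (n_hidden, n_out)), np.zeros(n_out))

def mlp_forward(x, w1, b1, w2, b2):
    """Forward pass of a backpropagation-style network with a softmax output."""
    h = np.tanh(x @ w1 + b1)          # hidden layer
    z = h @ w2 + b2                   # output logits over classes
    e = np.exp(z - z.max())
    return e / e.sum()

# First stage: four networks, one per phonological channel.
N_GLOVE = 16                          # assumed number of DataGlove/Polhemus features
stage1 = {
    "handshape":   make_mlp(N_GLOVE, 32, 36),
    "location":    make_mlp(N_GLOVE, 32, 10),
    "orientation": make_mlp(N_GLOVE, 32, 11),
    "movement":    make_mlp(N_GLOVE, 32, 11),
}

def stage1_phonemes(frame):
    """Concatenated phoneme-class distributions for one glove frame."""
    return np.concatenate([mlp_forward(frame, *w) for w in stage1.values()])

# Second stage: one classifier over phonemes from the beginning, middle,
# and end of the sign (the backpropagation variant of the second stage).
N_SIGNS = 14
phoneme_dim = 36 + 10 + 11 + 11
stage2 = make_mlp(3 * phoneme_dim, 48, N_SIGNS)

def recognize_sign(begin_frame, middle_frame, end_frame):
    feats = np.concatenate([stage1_phonemes(f)
                            for f in (begin_frame, middle_frame, end_frame)])
    return int(np.argmax(mlp_forward(feats, *stage2)))

# Example with synthetic glove frames and untrained weights.
frames = [rng.normal(size=N_GLOVE) for _ in range(3)]
print("predicted sign index:", recognize_sign(*frames))
```

In the self-organizing variant, the second-stage classifier above would be replaced by a Kohonen map whose winning units are labeled with signs, which is what allows the learned vocabulary to be expanded through relabeling.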