It has been observed that feedforward neural nets with a single hidden layer are capable of forming either convex decision regions or nonconvex but connected decision regions in the input space. In this correspondence, we show that such two-layer nets are capable of forming disconnected decision regions as well. In addition to giving examples of the phenomenon, we explain why and how disconnected decision regions are formed. Assuming neural nodes with threshold elements, we first derive an expression for the number of cells that are formed in the input space by the hyperplanes associated with the first (hidden) layer. This expression can be useful in deciding how many nodes to place in the first layer. Each hyperplane in the second layer then determines a decision region in the input space consisting of a number of cells that are typically connected to each other. However, by hypothesizing the existence of additional virtual cells formed by the first layer, we show how the decision regions formed by the second layer can indeed be disconnected. Far from being isolated examples, such disconnected regions can be very numerous, as we demonstrate. Using a recent theoretical result on the sufficiency of two layers for approximating arbitrary decision regions in a finite portion of the space, we give an example of how such approximation is possible through the use of virtual cells.
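The abstract refers to an expression for the number of cells that the first-layer hyperplanes carve out of the input space. The correspondence's exact expression is not reproduced here, but for H hyperplanes in general position in d-dimensional space, the standard arrangement count is the sum of binomial coefficients C(H, k) for k = 0..d; a minimal sketch under that assumption:

```python
from math import comb

def max_cells(num_hyperplanes: int, input_dim: int) -> int:
    """Maximum number of cells into which H hyperplanes in general
    position partition d-dimensional space: sum_{k=0}^{d} C(H, k).
    (Assumed to match the kind of expression the correspondence derives;
    degenerate arrangements yield fewer cells.)"""
    upper = min(input_dim, num_hyperplanes)
    return sum(comb(num_hyperplanes, k) for k in range(upper + 1))

# Example: 3 lines in general position divide the plane into 7 cells.
print(max_cells(3, 2))
```

Such a count bounds how many cells, and hence how many candidate pieces of a decision region, a given number of hidden nodes can produce, which is the sense in which it can guide the choice of first-layer size.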