Learning to read poses a strong challenge to the visual system. Years of expertise lead to a remarkable capacity to separate similar letters and encode their relative positions, thus distinguishing words such as FORM and FROM, invariantly over a large range of positions, sizes, and fonts. How neural circuits achieve invariant word recognition remains unknown. Here, we address this issue by recycling deep neural network models initially trained for image recognition. We retrain them to recognize written words and then analyze how reading-specialized units emerge and operate across the successive layers. With literacy, a small subset of units becomes specialized for word recognition in the learned script, similar to the visual word form area (VWFA) in the human brain. We show that these units are sensitive to specific letter identities and to their ordinal position from the left or the right of a word. The transition from retinotopic to ordinal position coding is achieved by a hierarchy of "space bigram" units that detect the position of a letter relative to a blank space and that pool across low- and high-frequency-sensitive units from early layers of the network. The proposed scheme provides a plausible neural code for written words in the VWFA and leads to testable predictions for reading behavior, error patterns, and the neurophysiology of reading.

Reading is a fundamental skill in modern society, yet the neural mechanisms that allow us to quickly recognize words remain poorly understood. Our research aims to unravel how the brain achieves invariant word recognition: the ability to recognize words regardless of their position, size, or font. We studied artificial neural networks trained to recognize words, mirroring human learning. Our findings reveal that these networks develop specialized units for word recognition, similar to the visual word form area in the human brain. These units are sensitive to specific letters and their positions within a word. Crucially, we discovered that they achieve this by detecting the spaces around words and using them as reference points. This creates a hierarchical system in which early layers detect basic features and spaces, while higher layers combine this information to recognize specific letters at certain positions relative to word edges. This "space bigram" model reconciles previous theories of letter bigrams and letter-position coding. Our results suggest that most written languages may be processed using similar basic principles. This understanding could inform better methods for teaching reading and treating reading disorders.
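To make the "recycling" approach concrete, the sketch below fine-tunes an image-recognition network to classify word images. It is a minimal illustration, not the study's actual pipeline: torchvision's ImageNet-pretrained resnet18 stands in for the recycled network, and VOCAB_SIZE, train_step, and the training hyperparameters are assumptions introduced here.

```python
import torch
import torch.nn as nn
from torchvision import models

VOCAB_SIZE = 1000  # hypothetical size of the trained word vocabulary

# Start from a network pretrained for object recognition, then "recycle" it:
# keep the image-trained features and retrain the readout on written words.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, VOCAB_SIZE)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def train_step(word_images: torch.Tensor, word_labels: torch.Tensor) -> float:
    """One fine-tuning step on a batch of rendered words (N, 3, 224, 224)."""
    optimizer.zero_grad()
    loss = loss_fn(model(word_images), word_labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

After training, reading-specialized units can be sought by comparing each unit's responses to words versus other visual categories across the network's layers.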
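The "space bigram" code itself can be illustrated with a toy example: each unit signals a specific letter at a specific ordinal distance from the blank space flanking the word on the left or the right. The function below and its max_offset cutoff are hypothetical simplifications for exposition, not the network's actual units.

```python
def space_bigram_code(word: str, max_offset: int = 3) -> set:
    """Return the active (letter, anchor, offset) units for a word.

    anchor is 'L' (left space) or 'R' (right space); offset is the
    letter's ordinal distance from that space (1 = adjacent to it).
    """
    units = set()
    n = len(word)
    for i, letter in enumerate(word.upper()):
        left_offset = i + 1    # counted from the left blank space
        right_offset = n - i   # counted from the right blank space
        if left_offset <= max_offset:
            units.add((letter, "L", left_offset))
        if right_offset <= max_offset:
            units.add((letter, "R", right_offset))
    return units

# FORM and FROM share their edge units (F next to the left space, M next
# to the right space) but differ on their interior units:
print(sorted(space_bigram_code("FORM") ^ space_bigram_code("FROM")))
```

Because the outer letters activate identical units while the transposed interior letters do not, such a code distinguishes FORM from FROM while remaining invariant to the word's retinal position.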