The human visual system overcomes ambiguities, collectively known as the aperture problem, in its local measurements of the direction in which visual objects are moving, producing unambiguous percepts of motion. A new approach to the aperture problem is presented, using an adaptive neural network model. The neural network is exposed to moving images during a developmental period and develops its own structure by adapting to statistical characteristics of its visual input history. Competitive learning rules ensure that only connection "chains" between cells of similar direction and velocity sensitivity along successive spatial positions survive. The resultant self-organized configuration implements the type of disambiguation necessary for solving the aperture problem and operates in accord with the direction judgments of human experimental subjects. The system not only adapts its structure to the long-term statistics of visual motion, but also simultaneously uses its acquired structure to assimilate, disambiguate, and represent visual motion events in real time.
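To make the mechanism described above concrete, the following is a minimal sketch, not the paper's actual learning rule, of how a Hebbian update with divisive (competitive) normalization can leave only connection "chains" between like-tuned direction units at successive spatial positions. All names, the network size, and the choice of normalization are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_dirs = 8   # direction-tuned units per spatial position (assumed)
n_pos = 5    # successive spatial positions (assumed)
lr = 0.05    # learning rate (assumed)

# One weight matrix per pair of adjacent positions:
# w[p][i, j] links direction-unit j at position p to unit i at position p+1.
w = [rng.uniform(0.4, 0.6, (n_dirs, n_dirs)) for _ in range(n_pos - 1)]

for _ in range(2000):
    # A coherent motion event: the same direction is signalled at every position.
    d = rng.integers(n_dirs)
    act = np.zeros((n_pos, n_dirs))
    act[:, d] = 1.0

    for p in range(n_pos - 1):
        pre, post = act[p], act[p + 1]
        # Hebbian strengthening of co-active links...
        w[p] += lr * np.outer(post, pre)
        # ...with competitive normalization: each unit's incoming weights
        # sum to 1, so strengthening one link implicitly weakens its rivals.
        w[p] /= w[p].sum(axis=1, keepdims=True)

# After exposure to coherent motion, only chains between similarly tuned
# units remain strong: each weight matrix is approximately diagonal.
print(np.round(w[0], 2))
```

Under these simplified assumptions, repeated exposure to spatially coherent motion drives each weight matrix toward the identity, i.e., only same-direction chains across positions survive the competition, which is the structural outcome the abstract attributes to its competitive learning rules.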