In previous work, the author and Dr. Jer-Nan Juang contributed a new neural net architecture within the framework of "second-generation" neural models. We showed how to implement backpropagation learning in a massively parallel architecture involving only local computations, thereby capturing one of the principal advantages of biological neural nets. Since then, a large body of neurobiological research has given rise to "third-generation" models, namely spiking neural nets, in which the brief, sharp pulses (spikes) produced by neurons are explicitly modeled. Information is encoded not in average firing rates but in the temporal pattern of the spikes. Further, no physiological basis for backpropagation has been found; rather, synaptic plasticity is driven by the timing of spikes. The present paper examines the statistical dynamics of learning processes in spiking neural nets. Equations describing the evolution of the synaptic efficacies and of the probability distributions of the neural states are derived. Although the system is strongly nonlinear, the typically large number of synapses per neuron (~10,000) permits us to obtain a closed system of equations. As in the earlier work, we find that the learning process in this more realistic setting is dominated by local interactions, thereby preserving massive parallelism. It is hoped that the formulation given here will provide a basis for the rigorous analysis of learning dynamics in very large neural nets (the human brain contains some 10^10 neurons!) for which direct simulation is difficult or impractical.
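To make the spike-timing-driven plasticity mechanism concrete, the following is a minimal sketch, in Python, of pairwise spike-timing-dependent plasticity (STDP) acting on a single synapse driven by Poisson pre- and postsynaptic spike trains. It illustrates the general mechanism only, not the formulation derived in this paper; all names and parameter values (tau_plus, tau_minus, A_plus, A_minus, the firing rates) are assumptions chosen for the example.

    # Illustrative pairwise STDP on one synapse; parameters are assumed, not from the paper.
    import numpy as np

    rng = np.random.default_rng(0)

    dt = 1e-3                # time step (s)
    steps = int(10.0 / dt)   # 10 s of simulated time

    rate_pre, rate_post = 20.0, 20.0    # Poisson firing rates (Hz), assumed
    tau_plus, tau_minus = 20e-3, 20e-3  # STDP trace time constants (s), assumed
    A_plus, A_minus = 0.010, 0.012      # potentiation/depression amplitudes, assumed

    w = 0.5        # synaptic efficacy, kept in [0, 1]
    x_pre = 0.0    # exponentially decaying trace of recent presynaptic spikes
    x_post = 0.0   # exponentially decaying trace of recent postsynaptic spikes

    for _ in range(steps):
        # Decay the local spike traces
        x_pre -= dt * x_pre / tau_plus
        x_post -= dt * x_post / tau_minus

        # Draw spikes from independent Poisson processes
        pre_spike = rng.random() < rate_pre * dt
        post_spike = rng.random() < rate_post * dt

        if pre_spike:
            x_pre += 1.0
            w -= A_minus * x_post   # pre arriving after post -> depression
        if post_spike:
            x_post += 1.0
            w += A_plus * x_pre     # post firing after pre -> potentiation

        w = min(max(w, 0.0), 1.0)   # hard bounds on the efficacy

    print(f"final efficacy: {w:.3f}")

Note that each update uses only quantities local to the synapse (the traces x_pre and x_post), consistent with the locality of the learning process emphasized above; with A_minus slightly larger than A_plus, uncorrelated firing depresses the efficacy on average, while causal pre-before-post pairings potentiate it.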