Complex neural networks, such as those found in the human brain, can discriminate and classify external stimuli with remarkable accuracy. Some of their topological and computational properties have been extracted and used to great effect by the artificial intelligence community. However, even our best simulated neural networks are pale abstractions of reality, partly because they generally fail to account for the temporal dynamics and recurrence inherent in natural neural networks, instead employing feed-forward architectures and discrete, simultaneous activity. In this paper we begin to develop an intuitive, geometric framework for exploring how different inputs can be discriminated in recurrent linear dynamical networks, with the eventual goal of facilitating a transition to more realistic and effective artificial networks. We first establish a useful, closed-form measure on the space of minimum-energy inputs to a linear system, which elucidates how discrepancies between inputs affect output trajectories in the state space. We then partially characterize the relationship between input and output differences as it relates to the system dynamics, as manifested in the geometry of the reachable output space. From this characterization we draw principles that may be employed in the design of dynamic, recurrent artificial networks for input discrimination.
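To make the notion of a minimum-energy input concrete, the following sketch computes, for a discrete-time linear system x_{k+1} = A x_k + B u_k, the least-energy input sequence that steers the state from the origin to a target x_f in N steps, using the finite-horizon controllability Gramian. This is a standard construction, not the paper's specific measure; the function name `min_energy_input` and the numerical values of A, B, and x_f are illustrative assumptions.

```python
import numpy as np

def min_energy_input(A, B, x_f, N):
    """Return the N-step input sequence of least energy that drives
    x_{k+1} = A x_k + B u_k from x_0 = 0 to x_N = x_f, along with
    that energy, via the N-step controllability Gramian.

    (Illustrative helper; assumes the system is N-step controllable.)
    """
    # N-step controllability matrix C = [A^{N-1} B, ..., A B, B]
    blocks = [np.linalg.matrix_power(A, N - 1 - k) @ B for k in range(N)]
    C = np.hstack(blocks)
    W = C @ C.T                                  # finite-horizon Gramian
    u_stacked = C.T @ np.linalg.solve(W, x_f)    # least-norm (min-energy) input
    energy = x_f @ np.linalg.solve(W, x_f)       # x_f^T W^{-1} x_f
    m = B.shape[1]
    return u_stacked.reshape(N, m), energy

# Example 2-state recurrent linear system (values chosen arbitrarily)
A = np.array([[0.9, 0.2], [-0.1, 0.8]])
B = np.array([[1.0], [0.5]])
x_f = np.array([1.0, -1.0])
u, E = min_energy_input(A, B, x_f, N=5)
```

The energy expression x_f^T W^{-1} x_f shows how the geometry of the Gramian's ellipsoid governs which target states are cheap or expensive to reach — the kind of geometric picture the framework develops.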