We propose a framework for analyzing and evaluating system-wide algorithmic fairness. The core idea is to use simulation techniques to extend the scope of current fairness assessments by incorporating context and feedback around a phenomenon of interest. By doing so, we expect to better understand the interaction among the social behavior giving rise to discrimination, automated decision-making tools, and fairness-inspired statistical constraints. In particular, we invite the community to use agent-based models as an explanatory tool for the causal mechanisms underlying population-level properties. We also propose embedding these models in a reinforcement learning algorithm to find optimal actions for meaningful change. As an incentive for taking a system-wide approach, we show through a simple model of predictive policing and trials that if we limit our attention to one portion of the system, we may deem some blatantly unfair practices fair and remain blind to the overall unfairness.
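To make the last point concrete, the following is a minimal simulation sketch, not the model developed in this paper, with purely illustrative parameters (population size, offense rate, patrol intensities, and conviction rate are all assumptions). Two groups offend at identical rates, but one is patrolled twice as heavily; the trial stage convicts arrestees at an identical rate in both groups. Evaluated in isolation, the trial stage satisfies a parity criterion, yet per-capita conviction rates diverge because of the biased policing upstream:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative toy parameters (assumptions, not estimates from data).
POP = 100_000                  # population size per group
OFFENSE_RATE = 0.05            # identical true offense rate in both groups
PATROL = {"A": 0.2, "B": 0.4}  # P(offense is detected): group B over-patrolled
CONVICT_GIVEN_ARREST = 0.7     # trial stage: identical conviction rate

for group in ("A", "B"):
    offenses = rng.random(POP) < OFFENSE_RATE
    arrests = offenses & (rng.random(POP) < PATROL[group])
    convictions = arrests & (rng.random(POP) < CONVICT_GIVEN_ARREST)

    # Trial-stage view: conviction rate among arrestees looks "fair"
    # (roughly 0.7 for both groups), while the system-wide view shows
    # group B convicted per capita at about twice the rate of group A.
    print(group,
          "P(convict | arrest) =", round(convictions.sum() / arrests.sum(), 3),
          "per-capita convictions =", round(convictions.sum() / POP, 4))
```

An auditor who measures fairness only at the trial stage would certify this system, which is the blindness the system-wide framing is meant to expose.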