Selection of process and measurement noise statistics, commonly referred to as "filter tuning", is a major implementation issue for the Kalman filter, and it can have a significant impact on filter performance. In practice, Kalman filter tuning is an ad hoc process involving a considerable amount of trial and error. Maybeck [1] and others have suggested that a Kalman filter may instead be tuned with a numerical minimization technique. The numerical minimization technique applied here is the Downhill Simplex Method, a function optimization algorithm available in several programming languages in the popular Numerical Recipes series [4], which uses function evaluations only to locate a local minimum of some objective function. Here, the objective function is the RMS of the state estimation errors (estimate minus truth), which assumes that "true" states are available. This is the case when the filter is applied to simulated data. In practice, a filter designer must tune the filter using a trial-and-error process to obtain desirable performance according to some measure, quantitative or qualitative, of that performance. The idea here is to allow a digital computer to replace the designer in this tedious, repetitive task, the type of task at which digital computers excel. This paper describes the application of the Downhill Simplex Method to a number of example filter tuning problems of increasing order and complexity. The results demonstrate that the Downhill Simplex technique has great utility for tuning the Kalman filter in both linear and nonlinear applications. It can be applied to simulated data, or to real data when a highly accurate "truth" reference is available, as is often the case in post-processing. Although the technique is applied here to an orbit determination problem, it has been applied to other tuning problems [2, 5], and should extend well to other similarly defined filtering applications.
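To make the procedure concrete, the following is a minimal sketch of the tuning loop described above, not the implementation used in this paper. It assumes a toy one-dimensional constant-velocity model with position-only measurements, uses SciPy's Nelder-Mead routine as a stand-in for the Numerical Recipes downhill simplex code, and (as an illustrative choice) searches over the logarithms of the process and measurement noise variances so the candidates stay positive. All variable names and model parameters are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

# --- Simulated "truth" data for a toy constant-velocity target (assumed model) ---
rng = np.random.default_rng(0)
dt, n_steps = 1.0, 200
F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])              # position-only measurement
q_true, r_true = 0.05, 2.0              # "true" noise variances, unknown to the filter

x_true = np.zeros((n_steps, 2))
z = np.zeros(n_steps)
x = np.array([0.0, 1.0])
for k in range(n_steps):
    x = F @ x + np.array([0.0, np.sqrt(q_true) * rng.standard_normal()])
    x_true[k] = x
    z[k] = float(H @ x) + np.sqrt(r_true) * rng.standard_normal()

def rms_estimation_error(log_params):
    """Objective: run the Kalman filter with candidate Q and R and return
    the RMS of the state estimation errors (estimate minus truth)."""
    q, r = np.exp(log_params)            # log parameterization keeps variances positive
    Q = np.array([[0.0, 0.0], [0.0, q]])
    R = np.array([[r]])
    x_hat = np.array([0.0, 0.0])
    P = np.eye(2) * 10.0
    err = np.zeros((n_steps, 2))
    for k in range(n_steps):
        # time update
        x_hat = F @ x_hat
        P = F @ P @ F.T + Q
        # measurement update
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x_hat = x_hat + (K @ (z[k] - H @ x_hat)).ravel()
        P = (np.eye(2) - K @ H) @ P
        err[k] = x_hat - x_true[k]
    return np.sqrt(np.mean(err ** 2))

# Downhill simplex (Nelder-Mead) search over the tuning parameters
result = minimize(rms_estimation_error, x0=np.log([1.0, 1.0]), method="Nelder-Mead")
q_opt, r_opt = np.exp(result.x)
print(f"tuned Q = {q_opt:.4f}, R = {r_opt:.4f}, RMS error = {result.fun:.4f}")
```

Because the simplex search uses function evaluations only, the same loop applies unchanged when the inner filter is an extended Kalman filter for a nonlinear problem; only the objective function's filter code changes.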