The optimality equation of discrete-time dynamic programming is considered in the case where the state space and the action space are finite-dimensional Euclidean spaces. Based on a measurable selection theorem, an elementary derivation is given of sufficient conditions ensuring that the optimal value operator is well behaved. For the author's model these conditions are weaker than those found in the existing literature. Under related conditions it is easy to show that an optimal Markovian strategy exists for a finite-stage Markovian stochastic optimization problem, and that the optimal strategies are completely characterized by the minimum sets of the optimality equations. These results are illustrated with a general N-stage inventory control model.
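To make the finite-stage setting concrete, the following sketch shows backward induction on a discretized N-stage inventory problem. It is an illustrative toy, not the paper's model: the cost parameters, demand distribution, capacity bound, and lost-sales dynamics are all hypothetical, and the continuous Euclidean state and action spaces of the paper are replaced by finite grids. The Markovian policy returned is exactly the pointwise minimizer of each stage's optimality equation, mirroring the characterization by minimum sets.

```python
# Illustrative sketch (hypothetical parameters, NOT the paper's model):
# finite-horizon backward induction for a discretized inventory problem.

N = 3                                 # number of stages
S_MAX = 5                             # inventory capacity; states 0..S_MAX
C_ORDER, C_HOLD, C_SHORT = 1.0, 0.5, 4.0   # assumed unit costs
DEMAND = {0: 0.3, 1: 0.4, 2: 0.3}     # demand value -> probability

def stage_cost(x, q):
    """Expected one-stage cost of ordering q units with inventory x."""
    cost = C_ORDER * q
    for d, p in DEMAND.items():
        y = x + q
        cost += p * (C_HOLD * max(y - d, 0) + C_SHORT * max(d - y, 0))
    return cost

def next_states(x, q):
    """Distribution over next inventory levels (excess demand is lost)."""
    dist = {}
    for d, p in DEMAND.items():
        y = max(x + q - d, 0)
        dist[y] = dist.get(y, 0.0) + p
    return dist

def backward_induction():
    """Solve the optimality equations stage by stage, last stage first."""
    V = {x: 0.0 for x in range(S_MAX + 1)}    # terminal value V_N = 0
    policy = []
    for _ in range(N):
        V_new, pol = {}, {}
        for x in range(S_MAX + 1):
            best_q, best_val = None, float("inf")
            for q in range(S_MAX - x + 1):     # feasible order quantities
                val = stage_cost(x, q) + sum(
                    p * V[y] for y, p in next_states(x, q).items())
                if val < best_val:
                    best_q, best_val = q, val
            V_new[x], pol[x] = best_val, best_q
        V, policy = V_new, [pol] + policy      # prepend: earliest stage first
    return V, policy

V0, policy = backward_induction()
print("optimal expected cost from empty stock:", V0[0])
print("first-stage order when empty:", policy[0][0])
```

The minimizing action at each state and stage defines a Markovian strategy; for this toy instance the minimizer is attained because the action set is finite, whereas the paper's conditions guarantee attainment via measurable selection on Euclidean spaces.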