We propose and analyze a generic mathematical model for optimizing rewards in continuous-space, dynamic environments, called Reward Collecting Markov Processes. Our model is motivated by request-serving applications in robotics, where the objective is to control a dynamical system so that it responds to stochastically generated environment requests while minimizing wait times. Our model departs from the usual discounted reward Markov decision processes in that the reward function is not determined by the current state and action alone. Instead, a background process generates rewards whose values depend on the number of steps between their generation and their collection. For example, a reward is generated whenever a new request for the robot arrives, and the robot collects a higher reward the sooner it serves the request. A policy in this setting is a sequence of control actions that determines a (random) trajectory over the continuous state space. The reward achieved by the trajectory is the cumulative sum of all rewards collected along the way in the finite horizon case, and the long run average of all collected rewards in the infinite horizon case. We study both the finite horizon and the infinite horizon problems of maximizing the expected (respectively, the long run average expected) collected reward. First, we characterize the optimal values of these problems as solutions of dynamic programs over an augmented hybrid space, which yield history-dependent optimal policies. Second, we provide a computational method that abstracts the continuous-space problem into a discrete-space reward collecting Markov decision process. Under assumptions of Lipschitz continuity of the Markov process and uniform bounds on the discounting, we bound the error incurred by computing optimal solutions on this finite-state approximation. Finally, we provide a fixed point characterization of the optimal expected collected reward in the infinite horizon case, and show how the fixed point can be computed by value iteration.
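As a rough illustration of the two objectives (using assumed notation, not necessarily that of the paper: $x$ is the initial state, $\pi$ a policy, and $R_t$ the total value of the rewards collected at step $t$, where each reward's value depends on the number of steps between its generation and its collection), the finite horizon and infinite horizon problems can be sketched as
\[
\sup_{\pi}\; \mathbb{E}^{\pi}_{x}\!\left[\sum_{t=0}^{N-1} R_t\right]
\qquad\text{and}\qquad
\sup_{\pi}\; \liminf_{N\to\infty}\frac{1}{N}\,\mathbb{E}^{\pi}_{x}\!\left[\sum_{t=0}^{N-1} R_t\right],
\]
with the precise limit convention and horizon indexing given by the definitions in the body of the paper.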