This work gives a logical characterization of the (ethical and social) obligations of an agent trained with Reinforcement Learning (RL). An RL agent takes actions by following a utility-maximizing policy. We maintain that the choice of utility function implicitly embeds ethical and social values, and that it is necessary to make these values explicit. This work provides a basis for doing so. First, we propose a probabilistic deontic logic suited to formally specifying the obligations of a stochastic system, including its ethical obligations. We prove some useful validities of this logic and show that its semantics are compatible with those of Markov Decision Processes (MDPs). Second, we show that model checking allows us to prove that an agent has a given obligation to bring about some state of affairs, meaning that by acting optimally it seeks to reach that state of affairs. We develop a model checker for our logic against MDPs. Third, we observe that it is useful for a system designer to obtain a logical characterization of her system's obligations, which is potentially more interpretable, and more helpful in debugging, than the specification of a utility function. Because enumerating all of an agent's obligations is impractical, we propose a Bayesian optimization routine that learns to generate those obligations of the system that the system designer deems interesting. We implement the model checking and Bayesian optimization routines, and demonstrate their effectiveness in an initial pilot study. This work provides a rigorous method for characterizing utility-maximizing agents in terms of the (ethical and social) obligations that they implicitly seek to satisfy.