In this paper, we investigate the use of emotional information in the learning process of autonomous agents. Inspired by four dimensions commonly postulated by appraisal theories of emotion, we construct a set of reward features to guide the learning process and behaviour of a reinforcement learning (RL) agent inhabiting an environment of which it has only limited perception. Much as occurs in biological agents, each reward feature evaluates a particular aspect of the agent's history of interaction with the environment, thereby replicating, in a sense, some aspects of the appraisal processes observed in humans and other animals. Our experiments in several foraging scenarios demonstrate that, by optimising the relative contributions of the reward features, the resulting "emotional" RL agents outperform standard goal-oriented agents, especially given their inherent perceptual limitations. Our results support the claim that evolved biological adaptive mechanisms such as emotions can provide crucial clues for creating robust, general-purpose reward mechanisms for autonomous artificial agents, allowing them to overcome some of the challenges imposed by their inherent limitations.