In general, strategies for spatial navigation can employ one of two spatial reference frames: egocentric or allocentric. Despite intuitive explanations, it remains unclear under what circumstances one strategy is chosen over the other, and how neural representations relate to the chosen strategy. Here, we first use a deep reinforcement learning model to investigate whether a particular type of navigation strategy arises spontaneously during spatial learning, without imposing a bias on the model. We then examine the spatial representations that emerge in the network to support navigation. To this end, we study two tasks that are ethologically valid for mammals: guidance, where the agent must navigate to a goal location fixed in allocentric space, and aiming, where the agent navigates to a visible cue. We find that when both navigation strategies are available to the agent, the solutions it develops for guidance and aiming are heavily biased towards the allocentric and the egocentric strategy, respectively, as one might expect. Nevertheless, the agent can learn both tasks using either type of strategy. Furthermore, we find that place-cell-like allocentric representations emerge preferentially in guidance when an allocentric strategy is used, whereas egocentric vector representations emerge when an egocentric strategy is used in aiming. We thus find that, alongside the type of navigational strategy, the nature of the task plays a pivotal role in determining the type of spatial representations that emerge.

Author summary

Most species rely on spatial navigation to find water, food, and mates, as well as to return home. When navigating, humans and animals can use one of two reference frames: one based on stable landmarks in the external environment, such as moving due north and then east, or one centered on oneself, such as moving forward and turning left.
However, it remains unclear how these reference frames are chosen and how they interact in navigation tasks, as well as how they are supported by representations in the brain. We therefore modeled two navigation tasks, each of which would benefit from one of these reference frames, and trained an artificial agent to solve them through trial and error. Our results show that, when given the choice, the agent leveraged the appropriate reference frame to solve each task, but surprisingly could also use the other reference frame when constrained to do so. We also show that the representations that emerge to enable the agent to solve the tasks exist on a spectrum, and are more complex than commonly thought. These representations reflect both the task and the reference frame being used, and provide useful insights for the design of experimental tasks to study the use of navigational strategies.
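To make the distinction between the two reference frames concrete, the sketch below shows how a goal's position expressed in allocentric (world-fixed) coordinates can be re-expressed in egocentric coordinates, i.e., as a distance and bearing relative to the agent's own position and heading. This is an illustrative example only, not the model used in the study; the function name and the planar-coordinate setup are our own assumptions.

```python
import math

def allocentric_to_egocentric(agent_xy, agent_heading, target_xy):
    """Illustrative conversion from an allocentric (world-frame) target
    position to egocentric coordinates: distance to the target and its
    bearing relative to the agent's current heading (radians)."""
    dx = target_xy[0] - agent_xy[0]
    dy = target_xy[1] - agent_xy[1]
    distance = math.hypot(dx, dy)
    # Bearing relative to the agent's heading, wrapped to [-pi, pi).
    bearing = math.atan2(dy, dx) - agent_heading
    bearing = (bearing + math.pi) % (2 * math.pi) - math.pi
    return distance, bearing

# An agent at the origin facing along +x sees a target straight ahead
# (bearing 0), whereas a target on its left has bearing +pi/2.
print(allocentric_to_egocentric((0.0, 0.0), 0.0, (3.0, 0.0)))
print(allocentric_to_egocentric((0.0, 0.0), 0.0, (0.0, 2.0)))
```

An egocentric (aiming-like) strategy can act directly on such distance-and-bearing inputs, whereas an allocentric (guidance-like) strategy operates on world-fixed coordinates regardless of the agent's heading.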