Due to the rapidly evolving cybersecurity landscape, securing cloud networks and devices is proving to be an increasingly prevalent research challenge. Reinforcement learning (RL) is a subfield of machine learning that has demonstrated its ability to detect cyberattacks, as well as its potential to recognize new ones. Many popular RL algorithms currently rely on deep neural networks, which are computationally expensive to train. An alternative to this class of algorithms is hyperdimensional computing (HDC), a robust, computationally efficient learning paradigm well suited to resource-constrained devices. In this article, we present CyberRL, an HDC-based algorithm for learning cybersecurity strategies for intrusion detection in an abstract Markov game environment. We demonstrate that CyberRL outperforms its deep learning counterpart in computational efficiency, achieving up to a 1.9x training-time speedup across multiple devices, including low-powered ones. CyberRL also delivers higher learning quality and superior defense and attack strategies, with up to 12.8x improvement. We implement our framework on a Xilinx Alveo U50 FPGA and achieve approximately 700x speedup and energy efficiency improvements over CPU execution.
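For readers unfamiliar with hyperdimensional computing, the sketch below illustrates the basic HDC operations (random near-orthogonal hypervectors, binding, bundling, and similarity readout) applied to a one-step temporal-difference update. This is a minimal, generic illustration of the paradigm rather than the CyberRL algorithm itself; the dimensionality, bipolar encoding, and update rule are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch of HDC primitives used to approximate a Q-function in RL.
# NOT the CyberRL algorithm: dimension, encoding, and update are assumptions.

D = 10_000  # hypervector dimensionality; HDC typically uses thousands of dims
rng = np.random.default_rng(0)

def random_hv():
    """Random bipolar hypervector; random HVs are nearly orthogonal in high D."""
    return rng.choice([-1.0, 1.0], size=D)

# Item memory: one random hypervector per discrete state and per action.
n_states, n_actions = 16, 4
state_hvs = [random_hv() for _ in range(n_states)]
action_hvs = [random_hv() for _ in range(n_actions)]

def encode(s, a):
    """Bind a state and an action via elementwise multiplication."""
    return state_hvs[s] * action_hvs[a]

# The Q-model is a single hypervector; Q(s, a) is read out by similarity.
model = np.zeros(D)

def q_value(s, a):
    """Normalized dot-product similarity between model and encoded (s, a)."""
    return np.dot(model, encode(s, a)) / D

def td_update(s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Bundle (add) a scaled (state, action) hypervector into the model so
    that the readout for (s, a) moves toward the one-step TD target."""
    target = r + gamma * max(q_value(s_next, b) for b in range(n_actions))
    error = target - q_value(s, a)
    model[:] += alpha * error * encode(s, a)  # bundling = vector addition
```

Because binding and bundling reduce to elementwise multiplications and additions over fixed-width vectors, updates like the one above avoid backpropagation entirely, which is the source of the efficiency advantage the abstract attributes to HDC on low-powered hardware.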