Neural networks have remained the most widely used learning method for at least a decade, thanks to their flexibility in solving problems across fields. They attempt to model learning on a simplified version of neural activity in the human brain. However, other algorithms exist whose supporting theories have yet to be explored as fully as neural networks, such as hierarchical temporal memory and genetic programming. In this paper, the learning capabilities of common and less common sequence prediction methods from across domains are analyzed. The validation data, integer sequences, appears only rarely in the sequence prediction literature, and its use yields new insight into the learning algorithms' abilities. The author contributes insights through a lens built from research in cognitive psychology, neuroscience, and machine learning. © The Author(s), under exclusive licence to Springer Nature Singapore Pte Ltd 2024.