In the rapidly evolving domain of machine learning, the ability to adapt to unforeseen circumstances and novel data types is of paramount importance. Artificial Intelligence is increasingly deployed in realistic, open scenarios where data, tasks, and conditions are variable and not fully predetermined, and where a closed-set assumption therefore cannot hold. In such evolving environments, machine learning systems are expected to be autonomous, continuous, and adaptive, which requires effective management of uncertainty and the unknown. In response, there is a vigorous effort to develop a new generation of models characterized by enhanced autonomy and a broad capacity to generalize, enabling them to perform effectively across a wide range of tasks. Machine learning in open set environments poses many challenges and brings together different paradigms, some traditional and others emerging, whose overlap and the resulting confusion make it difficult to distinguish them or to give each the relevance it deserves. This work delves into the frontiers of methodologies that thrive in these open set environments by identifying common practices, limitations, and connections between the paradigms of Open-Ended Learning, Open-World Learning, and Open Set Recognition, as well as other related areas such as Continual Learning, Out-of-Distribution detection, Novelty Detection, and Active Learning. We seek to ease the understanding of these fields and their common roots, uncover open problems, and suggest several research directions that may motivate and articulate future efforts towards more robust and autonomous systems.