The introduction of artificial intelligence (AI) into educational institutions is part of a global trend driven by the capabilities of this technology. Due to its disruptive nature, however, AI profoundly changes how teaching and learning take place. It is therefore essential to establish clear guidelines that not only ensure that all competencies required by the curricula are still taught effectively, but also empower students to use the new technology productively. Developing such guidelines for emerging and dynamic technologies is challenging, as rules often struggle to keep pace with rapidly evolving advancements. The European Union addresses this problem in its AI Act by introducing a risk-based approach to regulating organizations' AI applications. Depending on the level of risk, applications may be prohibited, require extensive analysis and safeguards, be subject to transparency obligations, or need no further action. This paper adapts the core structure of the AI Act to the education sector to provide teachers and students with a structured framework for dealing with AI. Various use cases, based on teaching and learning life cycles, are presented to illustrate the versatility of AI in the teaching and learning process. By establishing such a framework, we not only promote competence development in dealing with AI but also contribute to its ethical and responsible use in education.