One of the major challenges in the field of explainable artificial intelligence (XAI) is how to evaluate explainability approaches. Many evaluation methods (EMs) have been proposed, but a gold standard has yet to be established. Several authors have classified EMs for explainability approaches into categories along aspects of the EMs themselves (e.g., heuristic-based, human-centered, application-grounded, functionally-grounded). In this vision paper, we propose that EMs can also be classified according to the aspects of the XAI process they target. Building on models that spell out the main processes in XAI, we propose that there are explanatory information EMs, understanding EMs, and desiderata EMs. This novel perspective is intended to augment the perspective of other authors by focusing less on the EMs themselves and more on what explainability approaches intend to achieve (i.e., provide good explanatory information, facilitate understanding, satisfy societal desiderata). We hope that the combination of the two perspectives will enable a more comprehensive evaluation of the advantages and disadvantages of explainability approaches, helping us make more informed decisions about which approaches to use or how to improve them.