In the current discussion about "evidence" of effectiveness in health promotion and prevention, there is a call to include knowledge about the management processes and context of intervention implementation. Evaluation traditionally takes these aspects into account in judging the value and effectiveness of interventions. Yet evaluation reports are rarely integrated into the evidence-building process because they do not meet the quality criteria of "published" research. We argue that this does not necessarily mean their scientific quality is inferior. This paper examines a specific system of quality assurance and assessment procedures for managing evaluation studies, as a basis for discussing how to broaden the concept of "evidence" to include information gathered through evaluation studies.

The Competence Centre for Evaluation (CCE) of the Swiss Federal Office of Public Health (SFOPH) commissions external evaluation studies of public health interventions. By introducing and using a quality assurance system, the CCE pursues two main objectives. Firstly, the evaluation studies must be of sound scientific quality. Secondly, they must be useful and practicable, i.e. they must produce conclusions that the target group of the study can understand and recommendations that can be implemented. The two main tools for assessing the quality of a report are described, as well as how they are embedded within the wider quality assurance system. Our meta-evaluations (evaluations of the evaluations) take into account the Evaluation Standards of the Swiss Evaluation Society (standards of good practice for conducting evaluations, www.seval.ch). Four quality dimensions of an evaluation are distinguished: Propriety, Accuracy, Utility and Feasibility (each with 3 to 10 standards). They refer to the process as well as the product of an evaluation (the report).

Wider scope for the discussion: In addition to including "quality-assured" evaluations, which other "grey" material could/should we include as "evidence" of effectiveness (e.g. policy papers, guidelines, good practice papers, expert opinion)? Such grey literature provides a wealth of information on implementation processes, management and context, which is important for understanding why and how interventions are "effective". What kind of criteria could be developed to assess such knowledge, or do such criteria already exist? Could/should this type of evidence be graded according to classical concepts of "rating evidence"?