This paper aims to explain the extent of evaluation activity in Swiss cantonal health policy. It presents a quantitative analysis of the determinants that promote evaluation. For the first time, it draws together data on the frequency of health policy evaluations in the Swiss cantons, presents the results of bi- and multivariate analyses and interprets them in light of hypotheses from policy analysis. The investigation supports the conclusion that only a minority of cantons conducts systematic evaluations of health policy measures or regularly commissions research studies. The base for evidence-based cantonal health policy in Switzerland can therefore be considered narrow. The most important factor explaining the differences in evaluation activity among the cantons is their population size.
Growing interest in the institutionalization of evaluation in public administration raises the question of which institutional arrangement offers optimal conditions for the utilization of evaluations. An institutional arrangement denotes the formal organization of processes and competencies, together with procedural rules, that apply independently of individual evaluation projects. It reflects the evaluation practice of an institution and defines the distance between evaluators and evaluees. This article outlines the results of a broad-based study of all 300 or so evaluations that the Swiss Federal Administration completed from 1999 to 2002. On this basis, it derives a theory of the influence of institutional factors on the utilization of evaluations.
The authors conducted a qualitative analysis of a random sample of 12 external evaluation reports from the year 2002. A list of 23 evaluation standards was devised, based on the DAC Minimum Sufficient Evaluation Standards (DAC standards) and the widely used SEVAL standards of the Swiss Evaluation Society. The 23 evaluation standards were divided into the four categories also used by the SEVAL standards: utility, referring to readable, accessible and timely evaluations with a good and useful summary; feasibility, ensuring that an evaluation is executed in a realistic, well-thought-out manner; propriety, referring to ethical aspects; and accuracy, ascertaining that proper methods and procedures are used. Each of the 23 evaluation standards was translated into a set of questions and applied systematically to the evaluation reports. The authors reviewed the relevant documents (evaluation reports, terms of reference, agreements, budgets, financial statements) and interviewed both the SDC desk managers who commissioned and accompanied the evaluations and the evaluators themselves. For all 12 evaluations, the list of criteria was worked through, the findings were recorded in fact sheets, and the results were then synthesized and compared for each evaluation standard.
This approach follows the common understanding of evaluation and takes into account empirical findings showing that the use of evaluations depends decisively on the interest of program managers and decision makers in the results of the study.