The question of which methods are best for evaluating development assistance has been the subject of intense debate for decades.
The Evaluation Department has no policy or position on which methods are best. Rather, we believe the choice of methods should be based on careful consideration of which methods are most appropriate to the evaluation object and the purpose of the evaluation in each case, within what is realistic given the available data, the range of possible methods given the nature of the evaluation object, and cost.
Some of these methods are outlined below. The terms in italics are suggested search terms for further study.
Qualitative methods include a wide range of approaches drawing on many different types of data. They are normally most useful for providing insight into the complex dynamics of interventions in their particular context, and hence have great potential to improve understanding and strengthen interventions.
They can also be used to address causality by investigating actual processes of change in depth. Qualitative methods may be more or less participatory, involving stakeholders closely in different parts of the evaluation process; this can greatly enhance the learning effects, but possibly at the cost of independence and impartiality.
Quantitative methods are most appropriate for analysing large amounts of data, provided the data can be expressed numerically. Both the data and the methods of analysis are transparent, and the analysis is less vulnerable to the evaluators’ own judgement, making quantitative methods well suited to drawing conclusions that can be regarded as ‘objective’. In some cases, provided that large amounts of good data are available, sophisticated statistical methods make it possible to draw conclusions with a high degree of certainty (or at least to indicate the uncertainty precisely).
Experimental and quasi-experimental methods are particularly well suited to drawing conclusions about causality: whether a development intervention is actually the cause of the changes observed. These methods can tell whether an intervention works with a high degree of certainty – or at least the level of certainty can be precisely estimated.
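To illustrate what "the level of certainty can be precisely estimated" means in practice, the sketch below simulates a hypothetical randomised experiment and estimates the intervention effect as the difference in mean outcomes between a treatment and a control group, together with a 95% confidence interval. All numbers are simulated for illustration; they are not drawn from any actual evaluation.

```python
import random
import statistics
import math

# Hypothetical, simulated outcome data (illustration only):
# 200 participants randomly assigned to each group.
random.seed(42)
control = [random.gauss(50, 10) for _ in range(200)]  # outcomes without the intervention
treated = [random.gauss(55, 10) for _ in range(200)]  # outcomes with the intervention

# Estimated average effect: difference in group means.
effect = statistics.mean(treated) - statistics.mean(control)

# Standard error of the difference in means (independent samples).
se = math.sqrt(statistics.variance(treated) / len(treated)
               + statistics.variance(control) / len(control))

# 95% confidence interval: the "precisely estimated" uncertainty.
ci_low, ci_high = effect - 1.96 * se, effect + 1.96 * se
print(f"Estimated effect: {effect:.2f}, 95% CI: [{ci_low:.2f}, {ci_high:.2f}]")
```

Because assignment to the two groups is random, the difference in means is an unbiased estimate of the intervention's effect, and the confidence interval quantifies exactly how certain that estimate is.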
All sets of methods may be more or less rigorous (though quantitative methods are, deservedly, most often associated with rigour). Rigour greatly enhances reliability, as it enables external scrutiny of (or even replication of) most parts of the analysis.
A subset of methods is found in what are called impact evaluations, which are based on experimental or quasi-experimental evaluation designs but may involve other methods as well. When the nature of an intervention makes an impact evaluation possible, and when data is available, impact evaluations enable the most precise estimate of the actual effects of an intervention.
The choice of methods is normally a matter of which questions one wants to answer, since different methods are often not appropriate for answering the same questions; it is also a matter of which data is available, since different methods depend on different types of data. Evaluation of development cooperation very often suffers from poor availability and/or quality of data.
Most evaluations commissioned by the Evaluation Department are based on mixed methods, but are most often dominated by qualitative methods. This is due both to the purpose of the evaluations and to the frequent lack of reliable quantifiable data. Given the nature of Norwegian aid and the multi-purpose nature of evaluations (which go well beyond merely testing whether interventions work), impact evaluations are often not feasible.