Civil society organisations and donors: from independent to credible evaluations

In this Evaluation view Javier Fabra-Mata discusses the standard of independence in evaluations, and asks the question: is the search for independence in evaluations doing more harm than good?
Javier Fabra-Mata, PhD, is senior advisor for programme analysis and research at Norwegian Church Aid. He has worked for Norad and UNDP, among others.
Independence seems to be an indisputable gold standard for evaluations in the development aid system, including for civil society organisations (CSOs). Donors and CSOs alike tend to equate independence with external consultancies. But what if we got it all wrong? Is the search for independence in evaluations doing more harm than good?

Keeping evaluators at arm's length

Accountability and learning are the two main triggers for evaluations. These goals are not necessarily at odds, but they do not automatically complement each other either. More upward accountability to donors usually leads to embracing a positivistic approach, with less (and a different type of) involvement of certain intended and potential evaluation users. Some of these users are, after all, those who were part of the intervention being evaluated. They are thus put under the microscope and must not be allowed to 'contaminate' the evaluation. This triggers mechanistic responses among donors and CSOs alike, all of them detrimental to learning.
In the minds of many, upward accountability rests on independence, and in the name of independence donors expect a third party to carry out an evaluation. NGOs do their best to stay (and to be seen as staying) at arm's length from the intervention being evaluated. Some third parties have fully embraced this role and guard their independence zealously.
But what is problematic about idolising independence? It might seem counterintuitive to want to do evaluations differently. Yet there are at least four arguments against a narrow understanding of independence that equates it with the use of an external third party.
First, third parties are not free from outside forces: it is naïve to believe that external consultants are unbiased and immune to influence. Various forces put pressure on evaluators, paramount among which is the agency commissioning the evaluation. Some commissioning parties may be more prone to pressuring the evaluator than others, and some evaluators may be easier to influence than others.
Second, an exclusive focus on upward accountability comes at the expense of learning. Those who can learn from evaluations might arguably be less inclined to do so if they are alienated, kept in the dark or treated as ‘suspects’ until proven innocent.
Third, evaluations by external third parties are not necessarily of the utmost quality. There are top-notch, good-enough and mediocre evaluators out there, just as there are top-notch, good-enough and mediocre public servants and NGO workers. Good evaluation management must be paired with methodological capacity, contextual and thematic knowledge, and skills.
Fourth (and connected with the third point) there is a value-for-money consideration: external evaluations are not always worth the money spent on them. Not all evaluation commissioners can attract top-notch evaluators, and some evaluations do not meet the minimum quality standards necessary for any instrumental or conceptual use.
Similarly, external evaluators often need to spend (too) much time getting familiar with the organisation and the (organisational) context of the intervention. This often comes at the expense of planned fieldwork, i.e. stakeholder interviews or field site visits that "had to be dropped".
If the shortcomings and side effects are so many and so significant, why do CSOs keep falling into the independence trap? At the core of this lies the mainstream culture of the aid sector: an obsession with upward accountability in which civil society actors must demonstrate the truthfulness of their results to donors by engaging external evaluators and letting them work without 'interference'. Understood in these terms, evaluations become a transactional exercise narrowly focused on compliance and upward accountability.

Shifting to credibility

If the independence paradigm is obsolete, what is the alternative? A shift to credibility. Trust in evaluation findings can take us a long way towards utilisation and learning. An evaluation should be trusted or questioned not on the basis of whether an external expert is leading it, but on whether it is credible. Methodological rigour, transparency and replicability should be the three building blocks of credibility.
How can a civil society organisation turn towards credibility? With an organisational fit, a cultural twist and a professional mindset. An organisational fit, so that the evaluation function within the organisation is autonomous, reporting to senior management at the appropriate level. A cultural twist, from narrow upward accountability to a sincere interest in learning, and from an 'external always' approach to 'external maybe', internal or hybrid models: from 'external is best' to an appreciation of in-house competence. And a professional mindset, approaching evaluations with skill and an interest in rigour.
But let's not fool ourselves: there is only so much a civil society organisation can do on its own. Even a willing organisation can hardly achieve this shift without a change in mindset on the donor side as well.
Published 19.06.2019
Last updated 19.06.2019