Developing a monitoring and evaluation framework

Why do we need an evaluation framework?

A well thought-out monitoring and evaluation framework will guide effective and systematic data collection and form the evidence base for assessment of progress and impact over time. It should be developed up front, as part of the programme design phase, to help clarify assumptions about how investments and initiatives are likely to deliver intended outputs, outcomes and impacts.
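
For example, a hypothetical integrated discharge programme might set out its assumptions as a chain: joint investment in a multidisciplinary discharge team (input) leads to more people being assessed on the day they are ready to leave hospital (output), which leads to shorter lengths of stay and fewer readmissions (outcomes), which in turn contributes to better experiences for people using services and lower overall system costs (impact). Making this chain explicit at the design stage shows what needs to be measured at each step.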

An evaluation framework captures key information about how progress and impact will be evaluated. There are different types of monitoring and evaluation, including formative evaluation, which shapes ongoing improvement, and summative evaluation, which assesses overall impact.

Due to the developmental nature of integrated care, most evaluations will have a formative element – one that helps to understand what works and why – to build evidence for what should be scaled up, what should be improved and what should be decommissioned.

Evaluation planning

An evaluation framework – or matrix – sets out the plan for how outcomes will be measured and how data will be collected and analysed. For each level of the programme – from outputs through to outcomes and impacts – it sets out the aim, the outcomes sought, the measures to be used, and a plan for data collection, analysis and reporting.
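
As a simple illustration, one row of an evaluation matrix for a hypothetical falls-prevention service might look like this (the aim, measure and data source are invented for the example):

Aim: reduce avoidable emergency admissions among people aged 65 and over
Outcome: fewer falls-related emergency admissions
Measure: falls-related emergency admissions per 1,000 people aged 65 and over
Data collection: monthly extract from hospital activity data, compared with the year before the service began
Analysis and reporting: quarterly trend analysis reported to the programme board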

Counterfactual analysis

How do we know what has made a difference? Counterfactual analysis enables judgements to be made about which changes are a direct result of the intervention(s). It helps establish which changes in outcomes are directly attributable to a project or programme and which would have occurred anyway. In other words, an analysis of the counterfactual takes an evaluation beyond simply asking whether outcomes have been achieved, and allows an assessment of the extent to which observed changes are a result of the intervention rather than of other factors.

While counterfactual analysis takes time and resources, it provides a stronger evidence base for decisions about whether there is value in continuing or scaling up an intervention. The Public Service Transformation Academy has an introductory guide on evaluation, a section of which is dedicated to understanding the counterfactual.

It highlights the following questions as a starting point for thinking about the counterfactual.

What would have happened in the absence of your integration programme or project?

What would have happened had you done nothing?
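
As a simple worked illustration (the figures are invented): if falls-related emergency admissions fell by 8 per cent over a year in the area covered by the integration programme, but by 5 per cent in a similar comparison area without it, the counterfactual comparison suggests that roughly 3 percentage points of the reduction – rather than the full 8 per cent – can reasonably be attributed to the programme, with the rest reflecting wider trends.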

When is evidence ‘good enough’?

The quality of evidence varies depending on the evaluation design and methodology. At times it is not possible to carry out a more extensive impact evaluation, yet evidence about progress and impact is still needed to inform reporting and decision-making. While not as robust as more systematic evaluations, less structured forms of evidence can still be useful. These include case studies, surveys, stakeholder interviews, peer or stakeholder review sessions, point-in-time data analysis and literature reviews of learning from the implementation of similar initiatives elsewhere.

The example of a Better Care Fund programme review in Hounslow illustrates how good-quality information can be gathered without undertaking a full impact evaluation.

In summary: top tips for developing a monitoring and evaluation framework
