Developing a monitoring and evaluation framework
Why do we need an evaluation framework?
A well thought-out monitoring and evaluation framework will guide effective and systematic data collection and form the evidence base for assessment of progress and impact over time. It should be developed up front, as part of the programme design phase, to help clarify assumptions about how investments and initiatives are likely to deliver intended outputs, outcomes and impacts.
An evaluation framework captures key information about how progress and impact will be evaluated. There are different types of monitoring and evaluation, including:
- point-in-time assessments through to programmes of ongoing data collection and assessment
- a focus on a single project or on a range of programmes
- summative evaluation (to assess whether outcomes and targets have been met)
- formative evaluation (to identify how and why progress has been made)
- formal audits, external evaluations, internal reviews or performance assessments.
Due to the developmental nature of integrated care, most evaluations will have a formative element – one that helps to understand what works and why – to build evidence for what should be scaled up, what should be improved and what should be decommissioned.
Evaluation planning
An evaluation framework – or matrix – sets out the plan for how outcomes will be measured and how data will be collected and analysed. For each level, it sets out the aim, outcomes, measures, and a plan for data collection, analysis and reporting.
Specific aim
What are you trying to evaluate?
For example, was the programme effective?
Outcomes
What outcome is this regarding?
Measure
What is the measure that you are basing this evaluation on?
Is there an evaluation norm (i.e. a standard to be met)? For example, 80% of those accessing services.
Data collection
Is the data that is being collected relevant to stakeholders?
What source will be used?
Is the data already available? If not, what approach will you take (for example, primary or secondary research)?
Is baseline data available? What is your approach if baseline data isn’t available?
What timeframe is the data collection based on (for example, weekly/monthly)? Is it consistent across all stakeholders?
What method are you going to choose to collect data (for example, survey)?
Who will be accountable?
Have you taken equal opportunities and ethical issues into account?
Do you have permission to use the collected data?
Is the collected data kept safe and confidential?
Analysis and reporting
How will the data be analysed (for example, annual evaluation)?
Are the selected methods manageable regarding resource requirements?
Who will scrutinise the data?
Which meetings/boards will this be reported to (including timeframe/frequency)?
What changes will be made as a result?
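Teams that keep their framework as structured data rather than as a document can record each row of the matrix above as a simple record. The sketch below is purely illustrative; the field names and example values are assumptions, not a prescribed schema:

```python
from dataclasses import dataclass

@dataclass
class FrameworkEntry:
    """One row of an evaluation framework (illustrative field names only)."""
    aim: str                  # what you are trying to evaluate
    outcome: str              # the outcome the measure relates to
    measure: str              # the indicator used to make the judgement
    norm: str                 # evaluation norm / standard to be met, if any
    data_source: str          # where the data will come from
    baseline_available: bool  # is baseline data already held?
    collection_frequency: str # e.g. weekly, monthly
    collection_method: str    # e.g. survey, routine data extract
    accountable_owner: str    # who is accountable for collection
    analysis_plan: str        # how and when the data will be analysed
    reported_to: str          # which meeting/board receives the results

# Example entry with invented values, for illustration only.
example = FrameworkEntry(
    aim="Was the reablement programme effective?",
    outcome="People remain independent at home after hospital discharge",
    measure="% of service users still at home 91 days after discharge",
    norm="80% of those accessing services",
    data_source="Adult social care case management system",
    baseline_available=True,
    collection_frequency="monthly",
    collection_method="routine data extract",
    accountable_owner="Programme analyst",
    analysis_plan="Quarterly trend analysis against baseline",
    reported_to="Integrated care programme board (quarterly)",
)

print(example.measure, "->", example.reported_to)
```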
Counterfactual analysis
How do we know what has made a difference? Counterfactual analysis enables judgements to be made about which changes are a direct result of the intervention(s). This helps establish which changes in outcomes are directly attributable to a project or programme, and which would have occurred anyway. In other words, an analysis of the counterfactual takes an evaluation beyond simply understanding whether outcomes have been achieved and allows an assessment of the extent to which observed changes are a result of the intervention rather than of other factors.
While counterfactual analysis takes time and resources, it provides a stronger evidence base for decisions about whether there is value in continuing or scaling up an intervention. The Public Service Transformation Academy has an introductory guide on evaluation, a section of which is dedicated to understanding the counterfactual.
It highlights the following approaches.
Randomised controlled trials (RCTs)
These are the ‘gold standard’ of evaluation. Participants are randomly assigned to two or more groups to test a specific change. One group (the experimental group) receives the change being tested while the other (the control group) receives an alternative service or the existing service. RCTs are the most methodologically challenging form of evaluation and need expert input to design and deliver.
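As a minimal sketch of the random-allocation step at the heart of an RCT (using invented participant numbers and outcome scores rather than real trial data), the comparison between groups might look like this:

```python
import random
import statistics

random.seed(42)

# Hypothetical participant IDs; in a real trial these come from the trial register.
participants = list(range(1, 101))
random.shuffle(participants)

# Random allocation: half to the experimental group, half to the control group.
experimental = set(participants[:50])
control = set(participants[50:])

# Invented outcome scores purely for illustration (e.g. a wellbeing score),
# with a small built-in effect for the experimental group.
outcomes = {pid: random.gauss(60 + (5 if pid in experimental else 0), 10)
            for pid in participants}

mean_experimental = statistics.mean(outcomes[p] for p in experimental)
mean_control = statistics.mean(outcomes[p] for p in control)
print(f"Experimental group mean: {mean_experimental:.1f}")
print(f"Control group mean:      {mean_control:.1f}")
print(f"Estimated effect:        {mean_experimental - mean_control:.1f}")
```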
Matched group comparisons
Where RCTs identify target and control groups and collect data over time, matched group comparisons use existing data sets to select similar groups or cohorts of patients or service users, some of whom receive the service being evaluated and others who do not. Matched group analysis can also compare a geographical area with a similar area elsewhere. This often involves significant statistical analysis and may be difficult to design and deliver without specialist evaluation expertise.
Experimental or quasi-experimental approaches
The basic structure of a quasi-experimental evaluation involves examining the impact of an intervention by taking measurements before and after it is implemented. This type of ‘time series’ analysis can be a very effective method and may not be as expensive to implement as other methodologies. It can be delivered without significant evaluation expertise.
What would have happened in the absence of your integration programme or project?
What would have happened had you done nothing?
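A minimal sketch of the before-and-after (‘time series’) comparison described above, using invented monthly admissions figures purely for illustration, might look like this:

```python
import statistics

# Invented monthly emergency admissions, purely for illustration.
before = [210, 205, 198, 215, 209, 202]   # six months before the intervention
after = [196, 190, 188, 193, 185, 189]    # six months after the intervention

mean_before = statistics.mean(before)
mean_after = statistics.mean(after)
change = mean_after - mean_before

print(f"Mean monthly admissions before: {mean_before:.1f}")
print(f"Mean monthly admissions after:  {mean_after:.1f}")
print(f"Change: {change:+.1f} ({change / mean_before:+.1%})")

# A simple before/after difference does not by itself rule out seasonal
# effects or wider trends; a longer time series or a comparison area
# strengthens the counterfactual.
```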
Case study: Evaluating Sunderland’s All Together Better Vanguard Programme (MCP)
The evaluation of Sunderland’s All Together Better Multi-Specialty Community Provider Vanguard programme ran between November 2016 and June 2017. The evaluation, which was co-developed with specialist research and consulting organisation Cordis Bright, had a number of aims:
- Consider the context of the Sunderland Care model
- Review the Recovery at Home/Older People’s Assessment and Liaison service
- Review the Community Integrated Teams
- Review the programme of Enhanced Primary Care
- Consider the leadership and governance functions specific to Sunderland
- Review the overall outcomes of the Vanguard programme, performance against expectations, and any unintended outcomes
A collaborative approach, based on the principles of action research, was taken to deliver the evaluation. All research approaches, methods and tools were agreed with an evaluation steering group before use in the field. The diagram below outlines the approach and evaluation activity undertaken.
A collaborative evaluation approach
Phase 1. Baseline report and evaluation framework (Dec 2016)
- Project launch
- Review of documentation
- Literature review of ‘what works’ in implementing an MCP
- In-depth interviews with key senior stakeholders
- Deliver draft baseline report and evaluation framework
- Circulate and meet with the steering group for feedback
- Evaluation framework and baseline report signed off
Phase 2. Implementation of evaluation framework (Jan–Mar 2017)
- Review of documentation, performance management and budget information
- In-depth interviews with stakeholders
- e-Survey of staff and stakeholders
- Analysis and reporting
- Delivery of draft final report
Phase 3. Delivery of final report and dissemination (Apr–Jun 2017)
- Circulate draft report for feedback
- Sense-testing workshop with senior stakeholders
- Finalise and sign off final report
- Presentation of report findings
Key learning
- Collaborative approach meant that the evaluation addressed and answered the key evaluation questions
- Development of a clear SMART evaluation framework meant that all evaluation stakeholders understood the key evaluation questions, activity and outputs as well as how process, impact and outcomes were going to be demonstrated and evidenced
- Sense-testing workshops with stakeholders helped to ensure a set of grounded, practically useful recommendations which also increased ‘ownership’ and ‘buy-in’ among stakeholders
The final report can be accessed here: Sunderland All Together Better (MCP)
When is evidence ‘good enough’?
The quality of evidence varies depending on the evaluation design and methodology. At times, it is not possible to do more extensive impact evaluations, yet evidence about progress and impact is needed to report on progress and drive decision-making. While not as robust as more systematic evaluations, other less structured forms of evidence can be useful. These include case studies, surveys, stakeholder interviews, peer or stakeholder review sessions, point-in-time data analysis and literature reviews (of learning from implementation of similar initiatives elsewhere).
The example of a Better Care Fund (BCF) programme review in Hounslow illustrates how good quality information can be gathered without undertaking a full impact evaluation.
Case study: Point-in-time programme reviews – Hounslow
At times it is not possible to undertake a full evaluation over a longer period. Point-in-time reviews enable leaders to take stock of a project or programme and to identify learning about what works well and what works less well. In 2016, leaders in Hounslow decided to undertake a review of all their BCF schemes. This straightforward but systematic approach can be used in a number of different settings.
The aim of the review was to inform a revised approach to the BCF in 2016/17. Over a period of five weeks, the key stakeholders involved in the BCF came together to explore what was working well and where there were issues and performance challenges.
The review aimed to answer the following key questions:
- Are the BCF programmes delivering what they set out to deliver?
- How does the BCF need to evolve to reflect what is happening on the ground?
- How can we maximise the impact of the schemes that are working for 2016/17?
- How can we ensure that the BCF effectively enables the vision for whole-systems integration?
- How can the BCF be best used to support the Hounslow health and social care economy?
In addition to the workshops with key stakeholders, project leads, staff and other stakeholders were interviewed and performance data and existing reviews were collated. An analytical framework was used to map all the evidence gathered for each project against the overall goals of the BCF plan.
Against the analytical framework below, schemes were rated on a scale of 1 to 10, where 1 is ‘not at all’ and 10 is ‘to a great extent’. Each scheme was scored against the following measures:
- Is working as planned and delivering on outcomes
- Represents value for money in the long term
- Builds long-term capacity for integration locally; enables new models of health and social care
- Evidently supports people effectively, improving patient/service user satisfaction
- Has buy-in from all stakeholders and workforce: frontline staff and political, clinical, managerial leaders
- Reflects a truly whole-system approach
- Supports shift towards prevention/early help and community support/self-help
Scores were totalled for each scheme. The results of the qualitative assessment and the costs of each scheme were then mapped against a cost/impact matrix.
The findings were used to inform decisions about what schemes should continue and be scaled up (schemes with highest impact and lowest cost scores) and which ones should be improved or decommissioned (schemes with lowest impact and highest costs).
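As a purely illustrative sketch of this kind of cost/impact sorting (the scheme names, scores, costs and thresholds below are invented and are not taken from the Hounslow review), the classification logic might look like this:

```python
# Illustrative only: scheme names, scores and costs are invented, and the
# thresholds are assumptions rather than values used in the Hounslow review.
schemes = {
    # name: (total impact score across the seven measures, annual cost in £000s)
    "Scheme one": (58, 120),
    "Scheme two": (31, 340),
    "Scheme three": (49, 90),
}

IMPACT_THRESHOLD = 45  # illustrative cut-off on the 7-measure total (maximum 70)
COST_THRESHOLD = 200   # illustrative dividing line between lower and higher cost

for name, (impact, cost) in schemes.items():
    if impact >= IMPACT_THRESHOLD and cost < COST_THRESHOLD:
        decision = "continue / scale up"
    elif impact < IMPACT_THRESHOLD and cost >= COST_THRESHOLD:
        decision = "improve or decommission"
    else:
        decision = "review further"
    print(f"{name}: impact={impact}, cost=£{cost}k -> {decision}")
```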
In summary: top tips to develop a monitoring and evaluation framework
- Consider evaluation up front. What information is needed about progress and impact? How will the information be used?
- Be systematic. Providers should draw up an evaluation plan or framework that sets out the aims, expected outcomes, measures and targets, and a plan for how they will monitor and analyse the results.
- Be realistic. It is not always possible or practically feasible to implement a best-in-class evaluation framework. Instead, partners should think carefully about which key measures could be used to monitor progress.
- Flexibility is important. Include both qualitative and quantitative measures and consider what the evidence is indicating as the evaluation progresses. It is important to be able to adapt and change an approach over time, if required.
- Take time to understand the data.