Evaluating personalised care

Published February 2020

This guide aims to help practitioners measure and evaluate the impact of personalised care programmes, initiatives or new ways of working. It is for anyone who is involved in delivering a personalised care intervention or initiative at a local level, e.g. commissioners, performance or data managers, operational managers, lead professionals or practitioners in health, local government or voluntary and community sector organisations.

N.B. From this point on, personalised care programmes and initiatives will be referred to as interventions.

Alongside this guide, you can download assets including a directory of activity and outcome measures.

Find out more about evidence and evaluation of personalised care by joining the Personalised Care Collaborative Network. To gain access, please contact england.personalisedcaredemonstrator@nhs.net

What is personalised care?

Personalised care means people have choice and control over the way their care is planned and delivered. It is based on ‘what matters’ to them and their individual strengths and needs. Personalised care is one of the five major, practical changes to the NHS that will take place over the next five years, as set out in the recently published Long Term Plan. Working closely with partners, the NHS will roll out personalised care to reach 2.5 million people by 2023/24 and then aim to double that again within a decade.

Universal Personalised Care [1] sets out the evidence base for these changes, including how personalised care could help to reduce health inequalities. In England the mortality gap between the richest and poorest areas is over seven years for women and nine for men. The evidence shows that the knowledge, skills and confidence people need to manage their own health tend to be lower among those with lower incomes and lower levels of education.

The guide helps you work through a series of steps to plan and carry out an evaluation.

Co-production and evaluating personalised care

Local evaluation leads should strive to work together with people with lived experience in designing, carrying out and analysing the results of evaluation. This is sometimes referred to as co-producing your evaluation.

It is important to include people with lived experience in:

  • Developing your theory of change (see ‘developing interventions in evaluating personalised care’), including the outcomes that you are seeking to achieve for people who use services and carers.
  • Developing research tools, such as interview guides and surveys.
  • Conducting interviews.

Developing interventions in evaluating personalised care

Before thinking about how to measure outcomes or evaluate your work, it is important to understand the nature of the intervention you intend to measure. Only by understanding what the intervention involves, and how it will lead to changes for people and services, can you work out what you need to measure and how.

What is a logic model?

We recommend using a logic model to help develop your intervention. A logic model is a diagram that describes your theory of change – how your interventions will bring about the desired outcomes. It usually describes the five points below.

  1. Context: What needs to be in place locally to support a successful intervention.
  2. Inputs: The things you put into the intervention, such as people’s time, money, infrastructure to make it work.
  3. Activities: What you actually do to make the intervention bring about change.
  4. Outputs: The immediate results of the intervention, e.g. number of people receiving a single care plan, number of staff trained.
  5. Outcomes: The short, medium and long term changes you expect the intervention to bring about for people and services.

The value of the logic model is that it helps you to think through the details of how you expect your intervention to work and explore the assumptions that lie beneath this.
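If it helps to see the five points as a single artefact, below is a minimal sketch in Python of a logic model captured as a simple data structure. The intervention, field names and contents are entirely illustrative assumptions, not a prescribed template.

```python
from dataclasses import dataclass

# A minimal sketch of a logic model as a data structure, using a
# hypothetical social prescribing intervention as the example.
@dataclass
class LogicModel:
    context: list[str]              # what needs to be in place locally
    inputs: list[str]               # resources put into the intervention
    activities: list[str]           # what you actually do
    outputs: list[str]              # immediate, countable results
    outcomes: dict[str, list[str]]  # short/medium/long term changes

model = LogicModel(
    context=["GP practices signed up", "Active VCSE sector"],
    inputs=["Link worker time", "Funding", "Referral IT system"],
    activities=["GP refers person to link worker",
                "Link worker co-produces a personalised plan"],
    outputs=["Number of referrals", "Number of plans produced"],
    outcomes={
        "short":  ["Improved wellbeing (e.g. SWEMWBS score)"],
        "medium": ["More knowledge, skills and confidence"],
        "long":   ["Less unplanned use of services"],
    },
)

# Walking the model front-to-back (or outcomes-first) surfaces the
# assumptions linking each stage to the next.
for stage in ("context", "inputs", "activities", "outputs"):
    print(stage, "->", getattr(model, stage))
print("outcomes ->", model.outcomes)
```

Holding the model in a structured form like this makes it easier to check that every outcome can be traced back through outputs and activities to the inputs that are supposed to produce it.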

We recommend that you work in partnership with your stakeholders and people who use services and carers to develop the logic model. Co-designing the logic model allows you to explore the different views, values and priorities of each stakeholder and what they would like to gain from both the intervention and the measurement or evaluation of it.

A good logic model emerges from asking questions. Some people prefer to start with the question about outcomes and work backwards to talk about inputs, but we think the most important issue is to cover all the key questions.

Key questions

What is the intervention trying to achieve?
This is an important question; it enables you to explore how the intervention intends to bring about positive changes to people’s lives – or outcomes.

Be as specific as possible about what these outcomes are. You may want to achieve ambitious outcomes for people, but it is important to challenge yourself and think hard about whether this intervention will really bring them about.

You also need to be specific about when you expect the outcomes to be achieved. When can they realistically be delivered?

What are the immediate changes you expect to see?
These are called output measures. They are often things you can count which can tell you if certain changes are happening, e.g. the number of staff who have been trained, the number of people who have been referred into a new service, the number of people receiving a single care plan.

What will need to happen to bring about these outcomes?
This question seeks to pin down what the essential features of your intervention are – or activities. This could be about a new approach to assessments, a new way to organise a team, a new approach to working with people, a new IT system. It is important, again, to challenge yourself and really think hard about what activities are associated with this intervention.

What resources are needed to deliver the intervention?
The resources which go into an intervention are sometimes called inputs. They are the human and other resources, such as new equipment or buildings, which are needed to support an intervention. It’s important to be clear what these are from the outset. If you want to work out the costs of the intervention, and whether it brings about savings, you will need to pin down all the inputs, e.g. staff time.

Unintended consequences

When we design a new model of care or service, we are often testing a hypothesis – if I change X, Y will happen. However, people’s lives are complex and so is the health and care system.

Your intervention may have impacts beyond those you intend, such as knock-on effects in the local health economy. Developing a theory of change and considering the various stages in which your programme is intended to operate will help you identify additional effects. These can then be incorporated into your evaluation.

Expected outcome                                 | Unintended consequence
More knowledge, skills and confidence            | Identification of unmet needs and more use of services
Person achieves their goal                       | Long-term dependency is created
Fewer crises and less unplanned use of services  | Demand on VCSE increases

Table 1: Example of expected outcomes and unintended consequences [2]

Practice example – Wessex Academic Health Science Network

The health and social care system has identified a number of actions to prevent ill health and promote healthy choices: education and active support for self-care and self-management; and action to promote mental wellbeing. One of the prioritised projects is a Social Prescribing service that will support local people to stay well, focused on the most vulnerable people in the local population. It is a key component in the delivery of the NHS Long Term Plan and local commissioning strategies.

Designing the evaluation of personalised care

The word evaluation refers to the making of a judgement about why something turned out the way it did. It is a step on from just measuring outcomes and is often used to make decisions about whether to continue a service or scale up a pilot.

Robust evaluation tells us not only whether an intervention worked, but also why and how. This helps us to learn lessons for spreading successful interventions and developing new ones.

In order to know whether you are on the right track to achieve your goals, you will need to decide on a few key questions, and collect evidence to answer them.

Once you know what questions you are seeking to answer, you can then work out the right approach to the evaluation.

Key questions

  • Has the personalised care intervention improved outcomes for people who access care and support?
  • What outcomes have been achieved e.g. improved wellbeing, reduced social isolation, people having greater choice and control over decisions about their care?
  • How is your intervention working for people from different groups in the local population, including people who experience health inequalities?
  • How will you identify people who will benefit from the intervention e.g. through risk stratification, practitioner selection, assessment of frailty?
  • Have you changed the way you deliver services in the ways you expected?
  • What skills and capabilities do staff need to deliver this intervention?
  • Does the system have enough capacity to deliver the interventions?
  • Were people who use services involved effectively in the intervention?
  • Have different professionals worked well together to deliver the intervention?
  • Has the intervention reduced demand for certain services, including more intensive statutory services?

Considerations

Selecting the evaluation approach

There are many approaches to evaluation, sometimes referred to as evaluation design. In thinking about which approach to use, it is helpful to think about the following questions:

  • What is the question you need to answer, and who for?
  • What resources do you have locally to deliver the evaluation? Think about who will co-ordinate the different activities, who will develop data collection tools, who will carry out interviews or administer surveys, and who will analyse the results. Do people locally have the right expertise and capacity? If they don’t, what is missing?
  • What are the timescales for the evaluation? When do you need results by?
  • Are the time and cost of the evaluation justified by the scale of the benefits that you are expecting to show?

Some of the main evaluation methodologies are described below. It is important to note that they all have a use for different circumstances and different audiences. Using several methods – known as a mixed methods approach – will add credibility and boost your confidence in your findings.

N.B. The methodologies are listed in order of robustness – ‘Randomised control trial’ being the most robust.

Options for evaluation design

Practice example – Using a control group in a personalised care evaluation

The social prescribing scheme in City and Hackney [3] has been operating since 2014, with funding for the evaluation from the CCG and The Health Foundation. The study includes a matched control group and evaluates the effects of social prescribing on individuals, on primary care awareness of relevant community issues and resources, and on costs associated with the services. The study followed up with patients 12 weeks and eight months post-referral.

One control group was randomly selected from neighbouring wards based on age and condition, with the second group sourced from anonymised electronic patient record data sets.

Prior to referral to the social prescribing service, the control group had an average of 8.6 GP appointments a year, with those referred an average of 11.5 appointments a year. Eight months post referral, the control group had an average of 14 appointments a year, with those referred an average of 12 appointments a year.
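As a rough illustration of how such figures can be compared, the sketch below applies a simple difference-in-differences calculation to the appointment rates reported above. It is illustrative arithmetic only, not a reconstruction of the study’s actual statistical analysis.

```python
# Illustrative arithmetic using the GP appointment figures reported
# above (not the study's actual analysis method).
control_pre, control_post = 8.6, 14.0     # appointments/year, control group
referred_pre, referred_post = 11.5, 12.0  # appointments/year, referred group

control_change = control_post - control_pre      # +5.4
referred_change = referred_post - referred_pre   # +0.5

# A simple difference-in-differences estimate: how much smaller was the
# rise in appointments for referred people than for the control group?
did = referred_change - control_change           # -4.9
print(f"Control change: {control_change:+.1f}, "
      f"Referred change: {referred_change:+.1f}, DiD: {did:+.1f}")
```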

There were no significant changes in general health, wellbeing, anxiety, depression, social integration or health care resource use over time in either the social prescribing or the control groups.

The study recognises that the impact of the service was limited, notes a number of limitations within the area in question, including a limited number of contacts with link workers, and therefore points to the need for better application of social prescribing in City and Hackney.

Deciding what to measure in your personalised care evaluation

Once you have defined your logic model it is time to start thinking about how you will measure the key elements of it. There are two key areas to think about:

  • Measuring activity and output measures.
  • Measuring outcomes for people, carers, practitioners and the system.

Definitions

Measuring your activity, outputs and outcomes

Measuring activity and outputs

Activity and output measures can provide an early indication of how well the implementation is going but predominantly tell us how quickly the intervention is being rolled out and how many people have accessed it.

For personalised care, there are a number of mandatory activity and output reporting requirements from NHS England. These currently focus on the number of personal health budgets and are submitted by your local Clinical Commissioning Group to NHS Digital on a quarterly basis.

An important part of measuring outputs is to understand who is taking up your intervention, including whether you are reaching groups affected by health inequalities.

It is important to continue to monitor outputs as you implement, and there will be some specific ones that make sense to your work. For example, Gloucestershire has been interested in monitoring the number of pre-payment cards used for personal health budgets. Try to keep monitoring of outputs to a minimum. There can be a tendency to monitor a lot of outputs because this kind of data is fairly easy to obtain.
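As an illustration of checking who is taking up your intervention, the sketch below tabulates uptake by deprivation quintile from a hypothetical referrals extract. The column names and data are assumptions for the example, not a real dataset specification.

```python
import pandas as pd

# A hypothetical referrals extract with illustrative demographic fields.
referrals = pd.DataFrame({
    "person_id": [1, 2, 3, 4, 5, 6],
    "age_band": ["18-39", "40-64", "65+", "65+", "40-64", "18-39"],
    "imd_quintile": [1, 1, 2, 5, 3, 1],  # 1 = most deprived
    "took_up_intervention": [True, False, True, True, True, False],
})

# Uptake rate by deprivation quintile: a quick check on whether the
# intervention is reaching groups affected by health inequalities.
uptake = (referrals.groupby("imd_quintile")["took_up_intervention"]
          .mean()
          .rename("uptake_rate"))
print(uptake)
```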

Measuring outcomes

We collect a lot of information on people every time they go to the GP or see a social worker. Almost all of this is focused on how many times people use services or information about their physical health status e.g. blood pressure or weight.

However, information on whether people feel better, happier or have had a good experience isn’t collected routinely at a local level, so it’s important to think about how to measure this before you start your intervention as you might need frontline practitioners to collect it from the beginning. This means they need to understand why you want them to collect it, because it might not normally be part of their job.

With personalised care we are interested in delivering three key outcomes:

  1. Improving the health and wellbeing of people, families and carers.
  2. Improving the experience of people, carers and practitioners.
  3. Improving use of health and care services.

Choosing measures across the outcomes

There are many tools you can use to measure impact. There are benefits to using validated tools that allow you to compare your data with other areas, but if the tool doesn’t measure what matters to you, then it won’t give you what you need.

There is a balance to be struck between collecting data covering all of the outcomes you think you might achieve and ensuring the collection doesn’t get in the way of the conversation. It can be tempting to require multiple questionnaires or tools so you can understand – for example – loneliness, mental wellbeing and experience. This might not be cost effective and can have an adverse impact on people taking part.

We recommend asking the following questions when deciding on the tool you want to use:

  • Can it work for the whole population, or any specific cohort you are interested in?
  • Is it validated?
  • Is there a licence fee?
  • How long does it take to fill in?
  • Is it in multiple languages?

We have made some suggestions below.

Practice example – Evaluation of health and wellbeing hubs in South Devon and Torbay

In their evaluation of how the health and wellbeing hubs are impacting on personalised care for older people with complex health conditions, South Devon and Torbay were keen to embed evaluation of outcomes and impacts from the beginning [4]. They collaborated with the University of Plymouth who developed a Researchers in Residence model, which takes a participatory, action-orientated approach to evaluation.

The researchers formed an Evaluation Group that brought together senior managers and the voluntary sector providers from both South Devon and Torbay to discuss how to undertake a robust evaluation of the Wellbeing Coordination service, how to make data collection methods more aligned across Torbay and South Devon and how to share information and learning from the evaluation.

Three key achievements of the group have been:

  1. Ensuring a uniform approach to seeking consent from service users, so they could collate health and social care data from across the system.
  2. Ensuring an information governance agreement was in place to allow both providers to share data and learning between organisations across the system.
  3. Strengthening partnership working across the voluntary sector. This enabled the Trust to get some good data across the health and social care system with excellent follow up, particularly for South Devon.

The evaluation included use of the Short Warwick-Edinburgh Mental Wellbeing Scale and Patient Activation Measure© (PAM), alongside analysis of service use.

Bringing it all together – an evaluation framework

An evaluation framework is a summary document which sets out how you are going to do your evaluation.

It can help you and your colleagues to focus on the key questions you are trying to answer and keep on track with collecting data.

It will usually include:

  • What: the evaluation questions you are trying to answer.
  • Where: where the data will be collected, and by whom.
  • When: the time period the evaluation will cover.
  • How: the measurement tools and methods that will be used, e.g. surveys, focus groups.
  • Who: who the data will be collected from, and who will gather and analyse it.

Example: Evaluation framework from Gloucestershire’s Integration Accelerator Pilot

Project Name: Integration Accelerator Pilot

Inputs:

  • Joined up assessment process between health and social care
  • (Initial cohort focus – people with serious mental illness who have funded care packages from 2gether)
  • Integrated budget or personal health budget for some people
  • Pre-paid cards used by people
  • Signposting to voluntary sector organisations

Outputs:

  • Number of personalised plans produced
  • Number of integrated budgets
  • Number of personal health budgets
  • Number of pre-paid cards
  • Number of positive comments from people

Outcomes:

Individuals/families/carers:

  • Improved wellbeing
  • Improved experience of integrated assessment process and care including:
    • Health needs are considered earlier in the assessment process
    • Choice and control over their outcomes, including the offer of an integrated or personal health budget where required
    • Increased knowledge, confidence and skills to manage their condition
    • Carers’ needs are taken into account

Practitioners/staff:

  • Improved job satisfaction/morale

What is being evaluated?

  1. Whether there is an improvement in people’s wellbeing as a result of a joined up and personalised assessment and care planning process
  2. What the experience of the joined up and personalised assessment and care planning process is for:
    1. people and carers
    2. practitioners
  3. Demand and cost of services and whether earlier assessment of health needs reduces demand over time.

How is it being evaluated?

  1. SWEMWBS and (where appropriate) EQ-5D questionnaires at the start of the process, and at 3, 6 and 12 months
  2. Qualitative interviews with people and their carers 6 months after they take up a budget
  3. Qualitative interviews with practitioners
  4. Linking health and (if possible) social care data using the pseudonymised NHS number (see the sketch after this list) to enable understanding of:
    • Cost of social care package
    • Acute, community, mental health and primary care service usage and cost
    • Medication prescribing and cost.
    • Compare 2 years pre-intervention, baseline, 6 months and 12 months post intervention to understand shift in services.
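The sketch below illustrates one common pseudonymisation approach – keyed hashing of the NHS number with HMAC-SHA256 – that lets records be linked across datasets without sharing the identifier itself. The key and function names are assumptions for illustration; real linkage must follow your local information governance agreement and any approved pseudonymisation processes.

```python
import hashlib
import hmac

# Illustrative only: keyed hashing is one common pseudonymisation
# approach, not necessarily the method used in this pilot. The key
# would be held securely and agreed locally.
SECRET_KEY = b"replace-with-locally-held-secret"  # hypothetical key

def pseudonymise(nhs_number: str) -> str:
    """Derive a stable pseudonym from an NHS number for record linkage."""
    return hmac.new(SECRET_KEY, nhs_number.encode(), hashlib.sha256).hexdigest()

# The same NHS number yields the same pseudonym in both datasets, so
# health and social care records can be joined without either party
# sharing the identifier itself.
print(pseudonymise("9434765919"))  # illustrative value, not a real patient
```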

Next steps in evaluating personalised care

Governance

You will need to consider how to keep track of your evaluation and who needs to be involved in making decisions. If you don’t already have a suitable group, you may need to put one in place. There might need to be a risk assessment to help you decide what to do if things don’t go to plan.

There are some specific issues that you should consider in planning the evaluation. These include ethical approval, information sharing and consent.

Reporting – making sense of the evidence

Once you have developed your theory of change, designed your evaluation and gathered the data to support it, it’s time to assemble the evidence. What do the results tell you about your theory of change? Do the results support each other or are there contradictions?

There are no hard and fast rules for drawing the data together. Focus on ensuring that you have answered your evaluation question and presenting a clear and honest narrative about your programme’s impact. It’s really important to make clear any limitations in the evidence, for example if you have only been able to use a less robust design.

Conclusion

We hope this guide helps your thinking on how to go about measuring impact and outcomes for personalised care.

Here are a few final tips:

  • Find out what’s happening locally – there might be other projects, services or pilots already measuring outcomes. Build on local enthusiasm and tools already in use.
  • Develop your logic model – bring together a range of stakeholders including people with lived experience to support collaboration and co-design.
  • Think about how to build in measurement from the beginning, and make it part of normal business.
  • Keep it simple – start small and just focus on measuring one outcome to begin with such as wellbeing or costs.
  • Know your local audience – choose an evaluation approach that meets your local need and answers your local questions. You don’t have to do something academic or extensive if that doesn’t work for you.
  • Don’t be overwhelmed – it’s better to measure something than nothing and if the tool you’re using doesn’t work, choose something else.
  • Don’t forget activity measures – they help to demonstrate spread, scale and identify challenges.
  • Get to know your information governance colleagues – understand where they are coming from, and keep what is best for the person at the centre of the discussion.

Appendix

References

  1. Universal Personalised Care: Implementing the Comprehensive Model (NHS England, 2019) [Accessed 22 August 2019]
  2. Your guide to using logic models (Midlands and Lancashire Commissioning Support Unit, 2016) [Accessed 4 September 2019]
  3. Shine 2014 final report: Social Prescribing – integrating GP and Community Assets for Health (The Health Foundation, 2014) [Accessed 3 September 2019]
  4. Researchers in residence (NIHR, 2018)