Making a difference: measuring the impact of IMHA

An effective Independent Mental Health Advocate (IMHA) service is one that delivers good outcomes for the person (sometimes called the ‘advocacy partner’ or ‘partner’) receiving the advocacy support. A service may look good on paper and meet all of its performance indicators, but if it is making little or no difference to partners, it is not effective. It is vital, therefore, that IMHA services find ways to capture information that shows what difference this advocacy is making.

The trend towards talking about outcomes is driven by the rise in person-centred values and the need for organisations to show that their service offers value for money, moving the focus from simply monitoring what a service does to looking at what difference it makes. This report looks at the difference that IMHA services can make to the lives of people subject to compulsion under the Mental Health Act 1983. It provides service users, IMHA providers, commissioners and mental health services with information to discuss outcomes: what they are, how they will know they have been achieved, what indicators can be used, and how outcomes can be measured.

Key messages

  • It is vital that IMHA services agree ways to capture and analyse information to show what difference their service is making.
  • Setting outcomes and agreeing ways of measuring them can help IMHA organisations set priorities and become more effective.
  • There is a lack of evidence about the difference IMHA services make, and there are gaps in how outcomes are recorded by advocacy organisations.
  • The outcomes of IMHA are hard to measure, partly because advocacy is often about developing people’s potential rather than achieving a final state, and partly because it may not be possible to know for sure how much change is due to IMHA.
  • Research suggests there are ‘process outcomes’ and ‘change outcomes’ from advocacy, and that it is important to measure both.
  • Reaching agreement about what counts as a meaningful outcome, and how outcomes are to be measured, should be done in co-production with people who use services.
  • Using theory-driven approaches such as logic models supports the involvement of different stakeholders in defining outcomes and in deciding methods of data collection.
  • Decisions about what outcomes data is collected will need to take into consideration what resources the organisation has, and be negotiated with commissioners.

Why measure outcomes?

It is important for IMHA services and commissioners to measure the outcomes of advocacy to:

There are lots of terms used in outcomes monitoring:

Input: The resources or things such as people, energy, information, or finance that are put into a system

Output: Outputs are what the service or organisation does with its inputs – essentially its activities and participation. For example, IMHA service outputs include meetings held with advocacy partners or mental health professionals, and attendance at tribunals with advocacy partners.

Outcome: Outcomes are the changes or learning that result from the service’s activities. For example, service users’ voices are heard, and/or they feel empowered. Sometimes used interchangeably with ‘impact’.

Process outcome: Process outcomes are improvements in how things are done and how they feel to individuals, which do not necessarily lead to different decisions or treatment plans.

Change outcome: Change outcomes are measurable impacts on decisions, service quality, treatment or individual wellbeing.

Longitudinal: Longitudinal measurements are taken at different points in time to show what has changed (or not changed).

Logic models: Logic models are graphic representations of a service or project showing the intended relationships between inputs, outputs and outcomes.

Likert scales: The format of a typical five-level Likert scale is: 1. Strongly disagree; 2. Disagree; 3. Neither agree nor disagree; 4. Agree; 5. Strongly agree.

Should only outcomes be measured?

Whilst the focus of understanding the effectiveness of IMHA services should be on measuring outcomes, IMHA services and commissioners will also need a systematic approach to collecting data on inputs and activities. For example, it is particularly important that the demographic characteristics of people using IMHA services are recorded, so that IMHA services and commissioners can evaluate whether particular groups are being disadvantaged in service design and provision.
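
For services that hold monitoring data electronically, this kind of breakdown can be produced very simply. The sketch below (in Python) uses invented records and field names purely for illustration; it is not a required format or schema.

```python
from collections import Counter

# Hypothetical referral records; field names and values are illustrative only.
referrals = [
    {"id": 1, "ethnicity": "White British", "gender": "F", "section": "3"},
    {"id": 2, "ethnicity": "Black Caribbean", "gender": "M", "section": "2"},
    {"id": 3, "ethnicity": "White British", "gender": "M", "section": "3"},
]

# Count referrals by ethnicity so uptake can be compared with the
# demographic profile of qualifying patients locally.
by_ethnicity = Counter(r["ethnicity"] for r in referrals)

for group, count in by_ethnicity.items():
    share = count / len(referrals)
    print(f"{group}: {count} referrals ({share:.0%})")
```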

What difference can advocacy make?

The benefits from advocacy are commonly identified as:

Research, including The right to be heard (Newbigging et al, 2012), finds that the difference advocacy makes can be looked at in terms of process outcomes as well as the change outcomes that are achieved, as illustrated below.

Short-term and long-term impacts of IMHA

Table: process outcomes alongside change outcomes.

It is important for IMHA services to look for evidence of both process and change outcomes.

Challenges with measuring outcomes

There are three key challenges to be aware of:

  1. Problems with people’s understanding of advocacy – it is difficult to compare the impact of different types of advocacy.
  2. Difficulty defining desirable, quantifiable outcomes – this makes gathering evidence difficult.
  3. Methodological challenges – there are problems with establishing cause and effect with advocacy, which presents challenges around what information to collect. The impact on emotions and behaviour is highly personal and hard to compare. This points to the importance of including methods that capture qualitative as well as quantitative information on what difference IMHA services make and how.

Some of the reasons for the lack of reported data on advocacy outcomes are that:

The consequences of this are that the information collected about IMHA services is either minimal, restricted to information about outputs, or consists of individual stories that are not analysed for themes, making it difficult to assess the effectiveness of IMHA services (Newbigging et al, 2012).

Importance of co-production

In 'Lost in translation', Action for Advocacy (2009) recommends developing monitoring systems that incorporate multiple viewpoints. Different stakeholders and their organisations will have different motivations for wanting to understand the impact of IMHA services. Service users might frame the purpose of understanding impact in terms of reducing the number of people detained; IMHA services might seek to understand whether there is a relationship between impact and the length of time advocates spend with individual partners; and for commissioners, the purpose might be to know whether IMHA services are good quality and represent good value for money.

These agendas are not mutually exclusive but, not least because of limited human and financial resources, it is important to develop a shared view of the purpose of evaluating impact. This will also shape the kind of information that will be sought. Using theory-driven approaches, such as logic models, offers a way to plan and express what different stakeholders see as the desired impact of the service by linking inputs, outputs and outcomes (short, medium and long term; process and change). One of the most common formats for logic modelling is the University of Wisconsin logic model.

Other useful sources of help include Evaluation Support Scotland’s guide to developing a logic model.

To help create a logic model of the IMHA service in co-production with others, bring different stakeholders including service users together to discuss the following questions:

Key Questions

Inputs: What resources do we need or are we using? (e.g. IMHAs, equipment, money, building, technology, training)

Outputs: What are we doing (what activities) and who are we reaching (participants)?

Outcomes: What change do we expect as a result of these outputs/activities?
What will happen immediately (short term), along the way (medium term), and what is the longer-term change (long term)?

A logic model of an IMHA service could look like this:

Figure 1: Basic logic model for IMHA

Inputs (resources) → Outputs (activities and participation) → Outcomes (short, medium and long term)

Inputs

e.g.

  • Funding
  • Staff (IMHAs, advocacy coordinator, manager)
  • Staff training and support
  • Service infrastructure

Outputs - Activities

e.g.

  • Meetings with advocacy partners
  • Partners supported at tribunal or CPA meeting
  • Patient records accessed

Outputs - Participation

e.g.

  • Number and type of qualifying patients using IMHA services
  • Number of personal advocacy plans
  • Number and types of activities

Outcomes

  • Short-term: e.g. Changes in partners’ knowledge, attitudes, skills, and behaviour
  • Medium-term: e.g. Improvements in the quality of appeals
  • Long-term: e.g. Increased accountability in mental health system
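
For teams that want to hold their logic model in a structured, shareable form, the sketch below shows one possible way of recording Figure 1 as simple data. The structure and field names are illustrative assumptions, not part of the University of Wisconsin format.

```python
# A minimal sketch of the Figure 1 logic model as a plain data structure.
# The entries mirror the examples above; this is not a required format.
imha_logic_model = {
    "inputs": [
        "Funding",
        "Staff (IMHAs, advocacy coordinator, manager)",
        "Staff training and support",
        "Service infrastructure",
    ],
    "outputs": {
        "activities": [
            "Meetings with advocacy partners",
            "Partners supported at tribunal or CPA meeting",
            "Patient records accessed",
        ],
        "participation": [
            "Number and type of qualifying patients using IMHA services",
            "Number of personal advocacy plans",
            "Number and types of activities",
        ],
    },
    "outcomes": {
        "short_term": "Changes in partners' knowledge, attitudes, skills and behaviour",
        "medium_term": "Improvements in the quality of appeals",
        "long_term": "Increased accountability in mental health system",
    },
}

# Walking the model in order makes the intended inputs -> outputs -> outcomes chain explicit.
for stage in ("inputs", "outputs", "outcomes"):
    print(stage, "->", imha_logic_model[stage])
```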

Outcome indicators and measures

There are several useful guides to identifying outcome indicators and measures. One of these, the Scottish Community Development Centre’s (SCDC) framework known as LEAP (Learning Evaluation and Planning), offers useful and clear advice on its website.

In short, outcome indicators measure how well outcomes are being achieved by a service or project by helping to show whether things are changing in ways that were intended. Outcome indicators point to what to measure to determine whether desired outcomes have been achieved. They can be quantitative (numbers or quantities) such as, the number or percentage of advocacy partners who are satisfied with IMHA; or they can be qualitative (based on people’s experience, perceptions etc) such as people’s statements about how their goals for advocacy have been met.
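
As a simple illustration of a quantitative indicator, the sketch below calculates the percentage of advocacy partners who report being satisfied with the IMHA service. The ratings and the threshold used to count someone as ‘satisfied’ are invented for the example, not prescribed measures.

```python
# Illustrative only: a quantitative indicator – the percentage of advocacy
# partners satisfied with the IMHA service. The 1-5 ratings and the
# threshold of 4 or above are assumptions made for this example.
ratings = [5, 4, 2, 5, 3, 4, 4]  # one end-of-involvement rating per partner

satisfied = sum(1 for r in ratings if r >= 4)
indicator = satisfied / len(ratings)

print(f"Partners satisfied with IMHA: {satisfied}/{len(ratings)} ({indicator:.0%})")
```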

It can be useful to think of these different aspects of outcome indicators:

Selecting appropriate indicators is a complex task that takes time, and they may need to be developed over several stages until they clearly measure the aspect of change that is of interest. There are usually several potential indicators, so identifying and prioritising these in co-production with service users and other stakeholders will help ensure the ones identified are meaningful.

The SCDC’s LEAP framework advises that outcome indicators need to be:

The outcome indicators chosen will determine what information needs to be collected, and what the most appropriate ways to collect this information (tools) are. It is best to be realistic about what can be measured and the amount of information that is needed to make reasonably well informed decisions.

Further guidance on developing outcome indicators and monitoring frameworks, including a bank of outcome indicators, is available from Charities Evaluation Services.

Options for capturing outcomes information

There is no consensus about the best way to capture advocacy outcomes information, but there are useful suggestions in various sources such as Lost in translation (Action for Advocacy, 2009). Outcomes information can be captured, for example, via questionnaires, interviews, focus groups, and online or paper surveys. Using specific question formats, with Likert-scale responses to questions asked at the beginning and end of a service, can be a time-efficient way to collect key information.
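
As an illustration of how beginning-and-end Likert responses might be summarised, the sketch below compares average scores on a single statement using the five-level coding described earlier. The statement and the responses are hypothetical.

```python
from statistics import mean

# Five-level Likert coding from the glossary above:
# 1 Strongly disagree ... 5 Strongly agree.
# The statement and responses below are hypothetical examples.
statement = "I understand my rights under the Mental Health Act"

responses_at_start = [2, 1, 3, 2, 2]   # one response per advocacy partner
responses_at_end = [4, 3, 5, 4, 4]     # the same partners, at the end of involvement

shift = mean(responses_at_end) - mean(responses_at_start)
print(f"'{statement}': average shift of {shift:+.1f} points on a 1-5 scale")
```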

It can also be useful to collect qualitative information, for example, by interviewing advocacy partners or getting their opinions through focus groups; asking advocacy partners to keep written or visual diaries; or inviting them to express what changes happen for them through drawing or creative writing. The important thing is to be flexible, collect a range of information, and collate and analyse this information for themes, building on individual accounts and experiences.

When deciding on which methods are best to use, it is useful to ask:

Decisions about appropriate methods to use also need to take account of advocacy partners’ willingness to participate, especially if seeking to collect information after contact with the IMHA service has ended. Over-reliance on written formats might exclude people who don’t have English as their first language, or people who have learning difficulties. It is also important to consider who will collect the information – should it be done in-house or by an independent body such as a user-led organisation or consultancy group?

A person-centred approach is one that assesses whether advocacy partners’ aspirations have been fulfilled by matching what happened (actual outcome) with the initial request(s) (desired outcome). There may, however, be difficulties with establishing what the advocacy partner wanted. The charity Help and Care involves service users in identifying what it calls ‘position statements’ at the beginning of the advocacy intervention, and then revisits these statements at the end to see what impact advocacy has made (Macadam et al, 2013, p. 21). Another example of this is measuring against the ‘I’ statements developed by Think Local Act Personal (TLAP and NSUN, 2014).

The following ‘I’ statements were developed by a practice site during the IMHA implementation project. Statements such as these could be used to measure whether what service users value about advocacy has been achieved.

Draft ‘I’ statements for an IMHA service

Table: outcome type (process or change) alongside the desired outcome written as an ‘I’ statement.

(Adapted from Newbigging et al, 2015)

These ‘I’ statements can be used in two main ways:

  1. The statements can be used in questionnaires that ask service users to answer yes/no in response to each statement (a simple tallying sketch follows this list).
  2. The statements can be used by organisations as a statement of their commitment to service quality and empowerment.
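
A minimal sketch of the first use is shown below: tallying yes/no answers across respondents for each statement. The statements and answers here are placeholders rather than the practice site’s wording.

```python
# Tally yes/no answers to draft 'I' statements across respondents.
# The statements and answers are invented placeholders for illustration.
answers = [
    {"I was listened to": "yes", "My views changed the care plan": "no"},
    {"I was listened to": "yes", "My views changed the care plan": "yes"},
    {"I was listened to": "no",  "My views changed the care plan": "no"},
]

for statement in answers[0].keys():
    yes = sum(1 for a in answers if a[statement] == "yes")
    print(f"{statement}: {yes}/{len(answers)} answered yes ({yes / len(answers):.0%})")
```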

Another way of measuring outcomes is to look at how much has changed over time: that is, to take measurements at different points, for instance from initial access to the advocacy service through to the end of involvement, to show change or impact (Action for Advocacy, 2009). An example of this is the Advocacy Outcomes Scale Tool developed by the Gateshead Advocacy Information Network (GAIN). See Figure 2.

Practice example

GAIN, together with its seven local advocacy project members, undertook a pilot project (2009–2012) exploring advocacy outcomes relating to Putting People First and developing frameworks to provide a broader picture of advocacy service delivery. Prior to the project, the services largely reported on outputs rather than outcomes. Soft outcomes were measured using a simple radar chart to map the advocacy partner’s sense of choice, control, independence, dignity and respect, and health and wellbeing at the beginning of the advocacy intervention and as their case was closed. Advocates found that introducing the outcome radar sparked useful conversations about service users’ wider lives, their independence and their sense of choice, and supported advocates to reflect on their own practice. A sheet attached to the back of the outcome radar captured additional information that services and commissioners may find useful, and GAIN also piloted its use as a way for advocates to reflect on cases during supervision. As well as collecting data on the outcomes and impact of services, the output data gathered was enhanced: for example, recording time spent on cases, whether the service user felt their issue had been resolved, and whether the issues presented at referral were the same as the key issues worked on during the advocacy intervention.

Figure 2: Advocacy outcomes radar tool

The Advocacy Outcomes Scale Tool plots dignity and respect, choice, health and wellbeing, control, and independence on a scale of 1 (not at all) to 7 (fully); the blue line shows ratings before advocacy and the orange line shows ratings after (Broadbridge, 2012).
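
To illustrate how before-and-after radar scores might be summarised, the sketch below reports the change on each of the five domains. The domains and the 1–7 scale follow the tool described above; the scores themselves are invented for the example.

```python
# Before/after scores on the five radar domains (scale: 1 = not at all, 7 = fully).
# The domains and scale match the tool described above; the scores are invented.
domains = ["Dignity and respect", "Choice", "Health and wellbeing", "Control", "Independence"]
before = {"Dignity and respect": 3, "Choice": 2, "Health and wellbeing": 4,
          "Control": 2, "Independence": 3}
after = {"Dignity and respect": 5, "Choice": 5, "Health and wellbeing": 5,
         "Control": 4, "Independence": 4}

# Reporting the change per domain shows where advocacy appears to have made a difference.
for domain in domains:
    change = after[domain] - before[domain]
    print(f"{domain}: {before[domain]} -> {after[domain]} ({change:+d})")
```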

Conclusion

Measuring outcomes is a complex area and measuring the impact of IMHA services presents some challenges. However, these difficulties are not unique to IMHA services, and contemporary thinking and practice on outcome measurement take account of issues such as establishing cause and effect and monitoring complex interventions. IMHA services increasingly need to measure outcomes in order to understand and improve service quality, and to maintain and expand funding.

References

Action for Advocacy (2009) 'Lost in translation: towards an outcome-focused approach to advocacy provision'.

Broadbridge, A. (2012) 'Developing personalised advocacy outcomes: learning from a local pilot study'. Gateshead: Gateshead Advocacy Information Network.

Macadam, A., Watts, R. and Greig, R. (2013) 'The impact of advocacy for people who use social care services: Scoping Review'. London: School for Social Care Research.

Newbigging, K., Ridley, J., McKeown, M., Machin, K., Poursanidou, K., Able, L., Cruse, K., Grey, P., de la Haye, S., Habte-Mariam, Z., Joseph, D., Kiansumba, M. and Sadd, J. (2012) 'The right to be heard: review of the quality of IMHA services. Report for the Department of Health'. Preston: University of Central Lancashire.

Newbigging, K., Ridley, J., McKeown, M., Sadd, J., Machin, K., Cruse, K., de la Haye, S., Able, L. and Poursanidou, K. (2015) 'The value of independent mental health advocacy: My Voice, My Rights'. London: Jessica Kingsley Publishers.

Townsley, R., Marriott, A. and Ward, L. (2009) 'Access to independent advocacy: an evidence review. Report for the Office for Disability Issues'. London: HM Government, Office for Disability Issues.

Wetherell, R. and Wetherell, A. (2008) ‘Advocacy: Does it really work?’ In C. Kaye and M. Howlett (eds) Mental Health Services Today and Tomorrow: Experiences of Providing and Receiving Care. Abingdon: Oxford. pp. 81–91.
