Participation - finding out what difference it makes
Summary
Introduction
This guide is based on research commissioned by the Social Care Institute for Excellence to develop measures that can be used to help evaluate the impact of service user and carer participation.
With service user and carer participation firmly on the agenda, there is a need to find out what difference it is making. However right participation may be in principle, we also need to know how to measure its effects. Whilst the service user participation movement has achieved much in terms of the principle, it is less clear what changes have resulted in practice.
Purpose
- To find out in what ways service user and carer participation is being evaluated.
- To suggest ways of finding out what difference service user and carer participation is making to social care services.
Methodology
The research that informs the guide focuses on three main areas:
- What does the available literature tell us about how participation is being evaluated?
- What can we learn from examples of service user and carer participation (the 'practice sites')?
- Are there 'toolkits' that can help individuals and organisations to find out what impact participation is having?
The literature review built on SCIE Position Paper 3: Has service user participation made a difference to social care services?, so electronic and manual searches were conducted for reviews published from 2001 onwards. Studies were excluded if they simply reviewed service user involvement without evaluating the participation itself. Thirty key reviews met the criteria for inclusion, and these were analysed using a standard pro-forma developed by the team of academic and service user researchers.
In order to access the 'grey literature' and to identify ten practice sites as examples of evaluation of participation, 1,599 social care organisations were contacted across England, Wales and Northern Ireland in the summer of 2006. Thirty responses were received and, from these and further 'snowballing' techniques, ten practice sites were selected. Four criteria guided the selection: geographical spread; client group spread; recency of evaluation; and variety of evaluation methods used. From the literature review, the practice survey and informal methods, twelve toolkits were studied in more detail.
An Advisory Group of service users and carers, facilitated by a service user researcher, advised the research team on specific elements of the research.
Findings
The research pointed very clearly to a gap between participation of service users and carers (considerable activity) and systematic evaluation of what difference this is making (relatively little). This gap can be seen both in the literature and in practice, and is probably one of the reasons for the very low return rate from the practice survey.
In part, this gap between levels of participation and evaluative activity can be explained by the barriers to evaluation. If we understand these barriers, we can begin to overcome them and make evaluation an essential part of participation. The main themes are listed below.
- Power differences between professionals and service users can make honest evaluations difficult to achieve; power is also important in terms of who sets the stage for the evaluation (who decides what will be evaluated and how?).
- Expectations about what will be evaluated might be unclear; for example, is it the process of participation or the outcomes or, more likely, both? Intrinsic benefits of participation (the value of participation in itself) are linked to, but also separate from, extrinsic benefits (the results of participation).
- Evaluation needs to be built in from the beginning, along with the resources to conduct it. Although project funding might include evaluation, often it does not include evaluation of participation.
- Participation is like breathing for many organisations that are led by service users and carers, so it can be hard to know what aspects to evaluate.
- It is difficult to know whether 'A' caused 'B'; in other words, did this participation here cause that difference there?
- Commitment to the principle of participation can make it difficult to be objective about the difference it is making or, indeed, whether it is making any difference at all.
- Effective evaluation might require training (e.g. for service users to become research interviewers) or support (e.g. to ensure that the experience of evaluation is constructive and not hurtful).
- The culture in an organisation, including staff attitudes, can be hostile to evaluation. There may be fears, real or not, about what evaluation of participation will discover.
- It can be difficult to include people who are seldom heard in evaluations.
- Tokenism occurs when an organisation feels satisfied that it has ticked the boxes, yet the reality is experienced very differently by service users and carers.
- There are different timescales for service users, carers, workers, managers and researchers. One reason for the low response rate to the survey in this research was probably the fact that the timescale was too tight for most service user-led organisations that would want to consult with all their members.
There are different kinds of evaluation for different kinds of purpose. Some of the evaluations included in the practice survey were ongoing, and these fell into two categories: those with a continuing commitment to evaluation as part of a 'quality loop', and those that had moved into a new phase of evaluation, building on the learning from the first phase. Other sites involved the evaluation of a specific, time-limited project, or of one project moving into another, in which evaluation had been built into the terms of the project and, importantly, its budget.
Making use of the findings
It is not possible to declare which methods of evaluation are best for which kinds of participation. However, nine big questions did emerge from the research findings, along with a checklist of twenty pointers. Individuals, groups and organisations that ask themselves these questions and address these pointers will be better equipped to develop a fitting approach, and appropriate measures, for evaluating the difference that service user and carer participation is, or is not, making.