Participation - finding out what difference it makes
Most toolkits, guides and models focus on participation rather than specifically on evaluation. However, the examples below all include sections on evaluation, and the tools they describe to facilitate participation are also useful for evaluation. They are intended to cover many different kinds of evaluation in different settings.
Y. A. Harrison (2004) Patient Carer and Public Involvement Staff Toolkit, Darlington Primary Care Trust
This is a toolkit for staff and is based on the requirement of Section 11 of the Health and Social Care Act (2001) for PCTs (Primary Care Trusts) to progress patient and public involvement in a systematic and coherent way. A section on Recruiting participants suggests ways of reaching groups who are seldom heard. The aspects specifically relating to evaluation are on pp51–2. The toolkit considers the way in which results are presented and who they are presented to. It suggests a series of questions that staff should consider and includes an example procedure for securing patient and public representation on committees, groups and panels. There are suggestions for further reading and useful websites. Ideas box 5 is adapted from this toolkit.
Cathy Street and Barbara Herts (2005)
A guide for practitioners working in services to promote the well-being of children and young people. The central focus is on participation. Of most relevance to the question of evaluation is a sub-section on involving young people in research and evaluation. This usefully reminds us of the ethical issues involved in evaluation. The authors note that ‘there has been a general move away from large scale survey approaches towards methods that more actively involve service users in research about mental health services’. The guide includes case studies and tools for developing participation. Stage 3 of the participation process it describes focuses on feedback, in which a strong message is the need to ensure that participation and evaluation are part of the planning cycle and not just an ‘add-on’. The guide looks at the advantages and disadvantages of different methods of gathering information from young people, families and carers. There is a comprehensive list of references.
Link: YoungMinds website
Waltham Forest Council (2004)
This focuses on consultation as one aspect of participation. A checklist on consultation (p16) includes ‘Make it clear who will manage the process and ensure that contact details are available’ and ‘Coordinate the consultation with any others taking place at the same time or covering similar topics or sections of the community’.
Toolkit 4: Improving Health Services Through Consumer Participation: a resource guide for organisations
Department of Public Health (2000), Flinders University and South Australian Community Health Research Unit
Looking outside the UK context, this provides a useful list of toolkits in the Australian context. Again, it is primarily concerned with participation per se, but Section 5 (pp 111–4) concerns evaluation. The notion of a participation cycle has been incorporated into Ideas box 9. The authors introduce the idea of evaluation as dialogue, so that service users and carers can come to the table to talk about what is of merit, value, worth or significance to them. This toolkit has a useful Evaluation checklist (p113) and an alphabetical ‘Frequently asked questions’ section (pp115–127).
Toolkit 5: A guide to involving public, patients, users and carers in developing Lewisham Primary Care Trust
Marion Gibbon, Lewisham Primary Care Trust (2003)
Although it focuses more on getting people to participate than on evaluating the effect of that participation, this is a useful guide to participation at a community level.
Toolkit 6: Appreciative Inquiry
Not so much a toolkit as a methodology for exploring the strengths in organisations, and it takes its evaluation from this strengths perspective. As the term implies, Appreciative Inquiry (AI) focuses on appreciating and then giving leverage to an organisation’s core strengths, rather than seeking to overcome or minimise its weaknesses. It concentrates on exploring and discovering moments of excellence in the organisation through deep inquiry, and on an openness to seeing new potentials and possibilities in that collective knowledge. ‘Organizations grow in the direction of what they repeatedly ask questions about and focus their attention on.’ Most of us have grown up in organisations that were comfortable with identifying and analysing problems (some were addicted to it!). AI suggests that there is another, more powerful model for organisational change, one that treats organisations as mysteries to be embraced rather than problems to be solved. This alone is a powerful shift in thinking.
Link: AI Practitioner website
McGinn, P., Southern Health and Social Services Council (2006)
This is more of a review than a toolkit, but it is a useful example of an evaluation of service user participation in terms of its impact. The review explains its terms of reference and the methodology used, and includes a literature review and an overview of the policy context. The process and impact of service user participation are illustrated with case examples. A user-centred model is presented to promote participative practices (this has been adapted in Ideas box 4), with specific examples of evidence from practice. The review concludes with some helpful guidelines.
Hutchinson, R. and Stead, K.
This tool is used by one of the example Practice sites (Site 1). It has been created for practitioners by practitioners, with a minimum of recording documentation. It measures ‘distance travelled’: the ‘soft’ outcomes that people achieve, such as dealing with barriers to employment, training or education, by overcoming limiting beliefs, and gaining confidence and self-esteem. The Rickter Scale® is essentially a colourful plastic board with sliders on scales that read from 0 to 10. It is deliberately not paper-based, with the specific intention of providing an experience that appeals to different senses and learning styles.
It is a participative tool which can be used to evaluate participation. Ready-made questions are available for a number of different situations (available on the website); these could be adapted to measure the impact of participation. The Rickter Scale® helps people to make informed choices and set goals which are realistic and achievable, and to take responsibility for their own action plan and determine the level of support they require. It is designed to enable people to take up new perspectives which reflect their capabilities, beliefs, values and sense of identity. The significance of people keeping their fingers in contact with the Rickter Scale® during the questioning is related to what is known in neuro-linguistic programming as ‘anchoring’. At the second or subsequent use of the Rickter Scale®, comparison is made with this first ‘profile’ and thus ‘distance travelled’ is measured.
Gwynneth Strong and Yvonne Hedges, with Welsh translation by Emyr Huws Jones (2000)
This accessible guide, written in English and Welsh, is dotted with cartoons and illustrative quotations (‘if there’s too many pages, I can’t be bothered, so I put it in the bin’). Its main audience is the learning disability sector, but it is relevant to other sectors, too. Like the other guides and toolkits we have mentioned, its main focus is on participation and involvement, but there are also useful ideas about feedback and evaluation. The guide squares up to issues such as representation, for example: ‘nobody ever asks the paid workers if their views are “representative”. Can you imagine asking in the middle of a meeting “Are you sure Mr/Mrs Social Worker that the views you are giving are representative of every employee in social services?”’ Ideas box 2 is adapted from this toolkit.
Toolkit 10: Using Logic Models to Bring Together Planning, Evaluation, & Action: Logic Model Development Guide
W. K. Kellogg Foundation (2004)
Like AI (Toolkit 6), logic models are not so much a toolkit as an aid to planning and evaluation. They set out the logical relationships between needs, goals, services and outcomes, and provide a structure for understanding the process of change, whether for projects or for individual work. Typically, a logic model will focus on the problem and on what is wanted as the end result, identifying that as the goal, but noting also a series of mini-goals or milestones towards achieving it. Indicators that will show whether each mini-goal has been achieved need to be made specific – in other words, ‘how will we know if change has occurred?’ Logic models measure results (the progress at the end of the project or piece of work) and they also measure outcomes (the progress at a later stage, which shows whether the effects have been sustained). There is increasing experience of using logic models at individual and service level.
A closely linked method is task-centred practice, which is a highly participative method of practice in social work and social care (Marsh and Doel, 2006).
Toolkit 11: Measuring the Magic: Evaluating children’s and young people’s participation in public decision making
Perpetua Kirby with Sara Bryson, London: Carnegie Young People Initiative
This publication reviews approaches to participation of young people in decision making and provides guidance on the best practice to follow. The publication details the problems and pitfalls in processes used in participation and provides a range of useful sources of materials (see R22).
National Evaluation of the Children’s Fund (2005)
This resource contains 25 ‘recipes’ for participatory evaluation exercises for use with children and young people. Interactive versions of a number of these exercises are available on the NECF website. The Cookbook now includes ‘templates’ for a number of the exercises. The Cookbook is designed as a resource for anyone working with children to use – either as part of an overall evaluation of children’s services/funds or as a tool for evaluating particular activities (play schemes, single sessions, etc.). Although primarily a resource for those working with 5–13-year-olds, all the exercises have also been piloted for use with adults.