Research designs in social work and social care
There are many different ways to produce research, as you will have seen from what you have read here already. Here we discuss a few of the approaches that you might commonly come across when you are looking for social care research evidence to help you in your work. The SSCR website provides further information.
An evaluation is a way to check how a policy or programme has been run - to look at its value. The government produces an in-depth guide to evaluation called the Magenta Book. It identifies three main types of evaluation, and adds that it is usually beneficial to ask 'Why did it happen this way?' in each case.
- Process evaluation asks: 'How was the policy delivered?'
- Impact evaluation asks: 'What difference did the policy make?'
- Economic evaluation asks: 'Did the benefits of the policy justify the costs?'
The extent to which an evaluation should be called 'research' instead of 'audit' varies according to its design and purpose. Newburn (2001) argues that the difference lies in the purpose of evaluation compared to some other sorts of research: evaluation research, in some way, looks at the value of a programme or intervention.
SCIE has produced a guide called SCIE's approach to economic evaluation in social care, because traditional cost-benefit analysis (CBA) may not include the sorts of contexts seen in social work and social care, nor reflect the value base of social care practitioners. SCIE's guide pays attention to the unpaid nature of care and takes all stakeholders, including those who use services and their families, into account. The guide looks at outcomes defined by service users, transferability between contexts and the 'equity implications of resource allocation' (Frances and Byford 2011: v).
Literature reviews are a way to survey existing literature and use it to explore a topic. They have particular value as a way to assess what is already known about an area of study, providing the opportunity to take the findings from existing projects and reuse them to answer your own questions. They are an integral starting point for most research studies and a popular research method for students. There is more than one kind of review, and they are usually conceptualised as the following (e.g. Orme and Shemmings 2010; Jesson 2011; McLaughlin 2012).
- Traditional literature reviews, sometimes called critical, normative or narrative reviews, are a qualitative approach to literature reviewing, in that they allow leeway for exploration and interpretation, although like all research they should be conducted in a systematic way (see Finding research). Jesson (2011) argues that their strength comes from being both critical and reflective, and from being able to access a wide range of material.
- Systematic reviews gather together all the sources available on a particular topic using a set protocol, and then use a stringent set of criteria to exclude irrelevant studies or those whose quality cannot be assured. Their transparency and replicability are a strength: 'The aim of conducting a research review is to gather together systematically a comprehensive, transparent and replicable review of all the knowledge in a particular area, including the five knowledge sources identified in social care' (Rutter et al. 2010: 14). (The five types of knowledge are Policy, Organisational, Practitioner, User and Carer.) However, they are very time-consuming and, despite their wide search criteria, they have constraints on what they can deliver.
A practice enquiry takes a specific area of practice and examines it, or alternatively takes a group of people and studies practice in relation to them. Practice enquiries use a range of methodologies to investigate their topic, usually using qualitative or mixed methods to do so. Practice enquiries are sometimes called 'practice surveys'.
Outcome studies, or measures, are closely aligned with impact evaluation and look (often statistically) at the end result of an intervention or programme. There is a danger that, in doing this, the important processes that came before will be lost. Further, McLaughlin (2012) points out that outcome measures are often set by official or dominant bodies, and not unpicked to discover what they mean. 'Well-being' and 'quality of life' are two rather fuzzy notions that are allegedly measured by outcome studies. However, it is possible to have outcomes determined by service users or practitioners, which in turn shifts the power and control to those who took part in the intervention.
Ethnography is a research design which came originally from anthropology. To engage in this type of research, researchers need to spend time observing the research participants - this may be as a participant themselves, or as a non-participant observer. Observation is a key method in ethnographic research, although interviews and document analysis may also take place. Ethnographic researchers often interview 'key informants' who have particularly relevant knowledge for the project, rather than drawing a random or snowball sample (a snowball sample is where a small group spreads the word that participants are needed until enough are taking part).
Case studies are useful to explore an area in rich detail: 'The use of case studies can be effective to the extent that it provides a level of detail and a sense of an unfolding process of change which could be lost in more generalized accounts' (Smith 2009: 121). While case studies may not be generalisable across an entire population, Andrew Cooper (2009: 432) writes about how we need to become close to the 'complex particulars' of a case to understand it deeply. Case studies are not always about an individual or individuals; they may be, for instance, about a residential home or a school. What is important is that a case study has clear boundaries. Case studies use a range of methods, both qualitative and quantitative, to explore their area of study.