Theorising Social Work Research
What works as evidence for practice? The methodological repertoire in an applied discipline 27th April 2000 Cardiff
Evidence for practice: the contribution of competing research methodologies Joyce Lishman
Earlier papers in this ESRC seminar series (Lyons, 1999, Trevillion, 1999, Parton, 1999) examine the complex, ambiguous and problematic nature of social work practice, of social work knowledge and of social work research.
In this paper I explore the contribution of the methodological repertoire of social work research to the provision of 'evidence for practice'. I suggest that the range of methods employed and the ensuing methodological debates reflect the tensions in defining the nature of social work and social work knowledge. While recognising that the choice of a specific methodology reflects a particular perspective of social work, its purpose and context, I suggest that any polarisation between a positivist, empirical methodological stance and a qualitative/interpretative approach detracts from the development of evidence for practice which has to encompass:
- knowledge to understand and analyse individuals and society
- knowledge to underpin effective social work interventions
- attention to the meaning and experience of the intervention for the participants.
What is required therefore is a realistic assessment of the relative strengths and weaknesses of contested methodological positions and judicious choice of method appropriate to the purpose of the enquiry.
I begin with a brief exploration of the complexity of social work practice and of social work knowledge. I then examine the relationships and boundaries of research, evaluation and social work and what might constitute 'evidence for practice.' I examine critically the contribution of a range of research methodologies to the development of evidence for practice. Finally I try to identify the potential unique contribution of social work research methodology to the wider development of evidence for practice in other related disciplines, while raising issues in relation to social work research which remain problematic, uncertain and ambiguous.
The complexity of social work practice
Parton (1999) examines and analyses the tensions and complexities which characterise the nature of social work, its 'uncertainty, confusion and doubt' (Jordan, 1978). In this paper I briefly highlight problematic issues in the nature of social work which have relevance to any analysis of the contribution of different research methodologies to 'evidence for practice.'
Lyons (1999) argues: 'Social Work is difficult because the subject matter is problematic, and the form and quality of the response is determined not just by the values, knowledge and skills of workers and managers, but also by the demands of government and the perceptions of other professionals, the press and the public.'
Social work operates in an ideological and political context which currently promotes market forces, consumerism and managerialism in the public sector, remains critical of post war welfarism and the ensuing 'dependency culture' and emphasises the necessity for regulation, audit, performance monitoring and inspection. Social work operates in an economic and financial context of concern about containing public sector spending, given its growth in proportional terms relative to the private sector, and of a consequent emphasis on efficiency savings and ensuring value for money, encapsulated in the Best Value requirements for local authorities (DETR, 1998). Social work also operates in a context of rapid legal and organisational change incorporating significant changes in policy direction (see, for example, the National Health Service and Community Care Act 1990, the Children Act, 1989, the Criminal Justice Act, 1991, local government reorganisation, 1996). Currently the modernising agenda of 'Modernising Social Work Services' (1998) and 'Aiming for Excellence' (1999) seeks to improve the regulation of both social services and of the individual workers providing them, and thereby to enhance public confidence in social work and social care provision.
Within this complex context, social work practice itself is a complex, uncertain and ambiguous activity which involves an ethical base, legal accountability, responsibility for complex assessment and decision making about relative risks, safety, harm and protection, and intervention in the lives of people who are in distress, conflict or trouble.
Ethical issues in social work involve tensions between individual rights and public welfare, between individual responsibilities and structural inequality and oppression. 'They lead to moral dilemmas and a balancing of rights, duties and responsibilities for which there may be no "right answer". An ethical response may conflict with financial accountability and resource availability; it may inform or conflict with legal accountability' (Lishman, 1998, p90).
Parton (1999) in addressing what he considers to be an essential characteristic of social work argues '... this ambiguity arises from its commitment to individuals and their families and their needs on the one hand, and its allegiance to and legislation by the state in the guise of the court and its 'statutory' responsibilities on the other.' Social work practice is required to provide care, protection and control and to ration resource allocation. It is also expected both to work in partnership with users (Marsh and Fisher, 1992, The Children Act, 1989) and to promote the empowerment of individuals, groups and communities to take control of their own lives (Braye and Preston-Shoot, 1995).
The tensions between these different expectations and purposes of social work lead to its ambiguity and uncertainty, particularly in practice, but also render simple definitions of the nature of 'evidence for practice' problematic.
Knowledge and social work practice
Understanding what constitutes the knowledge base of social work is, like the nature of social work, contested, but requires examination for subsequent analysis of the contribution of different research methodologies to 'evidence for practice.'
The Oxford English Dictionary (O.E.D) defines knowledge as:
'Intellectual acquaintance with, or perception of, fact or truth: the fact, state or condition of understanding.
Theoretical or practical understanding of an art, science, language, etc.
The sum of what is known'.
In a sense these definitions highlight some problematic aspects of knowledge in social work. In particular the concept of fact as a universal objective truth, challenged in the natural sciences (Popper, 1969), is more problematic in social work, where individuals' perceptions, judgements, interpretations and meanings contribute critically to developing understanding and knowledge of the field. As Lyons (1999) argues, 'scientific knowledge (concerned with predicting the workings of the natural world and controlling it) has been valued in society at the expense of hermeneutic and emancipatory forms of knowledge (concerned with comprehending and communicating with each other and developing views of the world which lead to changed understanding)' (Habermas, 1978).
In social work 'scientific knowledge' based on the positivist paradigm deals not with 'truth' and certainties but rather with probabilities. For the individual practitioner and user or client there is no set causal link between a problem or situation, a response and an outcome, because individuals, their problems and situations are unique. Further, Lyons (1999), following Henkel (1995), argues that in social work we are 'reflective participants in, rather than privileged observers of particular phenomena and situations.' Schon (1983, 1987) also questions the concept of a knowledge base for professional practice which depends only on positivist research, using techniques which are describable, testable and replicable and which assure objectivity and neutrality. Rather, Schon emphasises the uniqueness, uncertainty and potential ethical conflicts of each new practice encounter and argues for the development of practice knowledge based on reflection on, and in, action.
Conceptions of knowledge in social work inevitably underpin analysis of the strengths and weaknesses of research methodologies in contributing to 'evidence for practice.'
Research, evaluation and social work practice
I have touched on the contested nature of social work practice and of social work knowledge, and will now explore potential implications for social work research. I also want to examine the relationship and boundaries between research and evaluation since for me the title 'What works for practice?' implies a focus on research on the effectiveness of social work practice or its evaluation, rather than the wider research agenda which addresses and analyses the nature and causation of social and individual problems.
Parton (1999) contrasts contesting views of social work practice as a 'rational-technical' or a 'practical moral' activity. I have already argued that the implications for research of how we view social work knowledge and practice require careful consideration. I believe we need to attend to both perspectives of social work practice since social work is a practical and ethical activity which also needs to account for what it does or fails to do, within the legal, political, cultural and economic parameters in which it operates.
We need to attend to both of these perspectives, not simply because of the internally contested nature of social work practice and knowledge but also because of externally driven requirements of accountability and regulation (Everitt and Hardiker, 1996, Shaw, 1996). When the Chief Inspector of Social Work Services in Scotland (1998) argues, quoting T.S. Eliot:
'and we shall judge, as this world does, by results'
the social work community, whether in practice, management, education or research, needs to determine what we mean by 'results'.
I want now to explore the relationship between research and evaluation, and thereafter limit my focus to the evaluation of social work practice and how different research methodologies and paradigms may contribute to it.
The Oxford English Dictionary defines research as:
'An investigation directed to the discovery of some fact by careful study of a subject:
A course of critical or scientific enquiry.'
To evaluate is defined as follows:
'To find out the value of:
To find the numerical expression for'
The definitions highlight debates found in both social work research and evaluation between qualitative and pluralistic 'critical' enquiry and quantitative, experimental 'scientific' research, but do not particularly help to distinguish research and evaluation.
The distinction between research and evaluation seems blurred. While research relevant to social work asks relatively broad questions, for example, about the effect of structural characteristics on individual problems presenting to social work, social work research has frequently examined how social work practice may lead to more effective outcomes e.g. in criminal justice to reduced reoffending (McGuire, 1995), and in child care, using child care research to inform assessment and good practice with children in care (Looking After Children: Good Parenting, Good Outcomes, Scottish Office, 1997). Such research is difficult to distinguish from evaluation. Indeed Shaw (1999) argues that 'the boundaries are indistinct and the range of methods overlap.' 'There are some methodological activities that are likely to be used mainly in research, some mainly in evaluation, and others in evaluating-in-practice. But some methods are used by practitioners in all three contexts' (p18).
My focus is on evaluation of and in practice, as an integral dimension of social work and social care practice. Such evaluation is concerned with how effective practice is in terms of achieving desired outcomes but also how practice is experienced and evaluated by both its users and its practitioners.
Sadly, a gap persists between the practice of social work and social work research and evaluation (Fisher, 1997, Everitt, 1998, Shaw and Lishman, 1999). Why should this be? To put the arguments briefly:
- the criteria for 'success' in social work may be differentially selected by researchers, funders and practitioners.
- evaluation may be perceived as a management tool which highlights and even punishes perceived poor practice but does not necessarily enhance and reward effective practice.
- research knowledge tends to be general and probabilistic: it can therefore be difficult to apply specifically in an individual case (Fisher, 1997).
- research, even in randomised control trials, may examine practice in so broad a way that details of difference in methods of practice and intervention, of importance to the individual practitioner, cannot be correlated with effectiveness (Fisher, 1997).
If we are seriously to consider what works for practice and what constitutes evidence for practice, both research and evaluation have to be incorporated in practice and its development, not 'done' to it. The case for evaluation as an integral part of professional responsibility, accountability and development is reinforced by the need to provide a credible alternative to the assessment of value and effectiveness provided by increasing external review, inspection and audit.
The contributions of competing methodologies to the evaluation of and evidence for social work practice
In examining whether social work is effective, for whom and in what ways there has been a tendency to a polarisation between quantitative and qualitative methodologies, a 'paradigm war,' with quantitative methodologies associated with measurement, causality, experiment and fact, and qualitative methodologies with judgement, values, interpretation, meaning and experience. Waterhouse and Gould in this seminar develop analyses of the contribution of each methodology to social work research and evaluation. My purpose is not to rehearse these arguments but to argue that such polarisation is unhelpful and detracts from the real contribution of different methodologies to developing evidence for practice. Rather, a pluralistic approach, utilising qualitative and quantitative methods, as appropriate, and drawing on the concept of triangulation (Denzin, 1989a, 1989b) may better encapsulate the complexity of social work practice and address the range of stakeholders and competing interests.
As Sherman and Reid (1994) summarise:
"There was the recognition that the controlled and reductive procedures of quantitative research tended to selectively ignore much of the context of any study and thereby miss significant factors in the situation that more holistic qualitative observation and description might identify. There was also a recognition that the study and analysis of what goes on in the actual process of practice had been shortchanged in favour of measurable outcomes. Further, a need existed for more knowledge about the interactive and subjective experience of the client in the clinical change process."
A further approach to evaluation, complementary to reflective, qualitative approaches but relatively ignored in the quantitative/qualitative debate, is the concept of participatory, 'bottom up' evaluation, where evaluation becomes a means of empowerment for users of social work practice. User-led research, for example, which lies within this strand of evaluation, challenges traditional notions of power and expertise, whether of the researcher in experimental evaluation, the practitioner in reflective evaluation, or the managerial and political agenda of external audit and review.
In reviewing these three broad approaches for evaluation in social work it is useful to consider what constitutes evidence for practice.
Macdonald and Sheldon (1998) draw on a definition from Sackett et al (1996). 'Evidence-based social care is the conscientious, explicit and judicious use of current best evidence in making decisions regarding the welfare of individuals' (p71). Few would disagree that social work should draw on current best evidence, conscientiously (from an ethical base), explicitly (clearly and openly) and judiciously (critically, analytically, and carefully balancing and judging the evidence). We should not simply practise on the basis of habit or unchecked practice information or wisdom. However, given the complexity and contested nature of social work practice and knowledge, neither should we oversimplify the nature of evidence and reduce it entirely to the results of a positivist, experimental paradigm. Social work deals with individuals who are unpredictable: positivist research, as I have argued earlier, deals with probabilities but not certainties, or an inevitable causal link between intervention and outcome.
We do, in social work, need to examine, as Macdonald and Sheldon (1998) argue, whether outcomes of social work practice are attributable to our interventions or to other factors, including time and unrelated changes in the lives of service users, which may influence measured outcomes negatively or positively.
Evidence, therefore, should include what methods of assessment and intervention have proved most successful in meeting the specific aims of a particular area of practice or service. Evidence also needs to include how the service user experiences such assessment and intervention, and what other factors have influenced their lives and actions.
Our concept of evidence needs to take account of the range of stakeholders in social work practice and the differential power they can employ. Our concept of evidence also needs to take account of different and conflicting expectations and requirements of social work: it is not possible simultaneously to ensure that no child dies at home through parental violence and neglect and that no child is admitted into care without grounds that meet subsequent legal requirements for proof and 'evidence' (an apparently unproblematic expectation of child care from a civil servant, personal communication).
And so to examine the three methodological approaches relevant to this seminar's focus: what works for practice?
Briefly the positivist, quantitative approach to evaluation in social work focuses on empirical practice (Reid, 1994), also referred to as 'research-based' or 'scientific' practice. The characteristics of this approach have been:
- case monitoring and evaluation - through single system designs
- the application of scientific perspectives and experimental design and methods in practice
- the application of, and the development of a knowledge base from, interventions whose effectiveness has been demonstrated through the research methods identified (i.e. from a scientific, experimental perspective)
The strengths of this approach include:
- the direct linking of evaluation and individual cases and the practitioner's ownership of evaluation in practice
- the explicitness of specifying a client's problem, recording change during intervention, and, as a result, evaluating the success of the intervention
- the more general introduction, to critical analysis of social work practice, of the importance of specifying aims and goals of intervention, working with clients and users within specific and explicit contracts, using time-limited intervention and review, and evaluating intervention against the original specified aims.
The weaknesses include:
- the very specific, clear and measurable outcomes may not reflect the complex and 'messy' problems which social work practice encounters
- the limitations of criteria for success which are based entirely on client change as a measure of the effectiveness of intervention
- the failure to recognise that what social work offers is contingent on the context. Any rigorous analysis of 'what works' has to question the context of the programme, and what elements of it work for some people in particular circumstances.
The danger of the empirical practice movement, interpreted in its positivist extreme, is that it may be viewed politically and managerially as providing 'evidence' based recipes for rather simplified expectations of success.
The arguments for qualitative research have already been rehearsed in examining the complexity of social work practice and knowledge.
The characteristics of this approach have been:
- the utilisation of a range of social science methods, including ethnography, discourse analysis, case studies and narrative enquiry
- the contribution of practitioners to the construction of social work knowledge (Fook, 1996)
- the lack of correlation between formal knowledge and effectiveness in practice
- the use of Schon's model of reflective practice (1983) which criticises the authority of scientific knowledge and practice derived from 'pure' academic research and values applied and performance based models of professional knowledge and research.
The strengths of the approach include:
- recognition of the need for evaluation in social work to include the role of values and judgements about 'good' practices and processes
- recognition of the importance of meaning and perceived experience in social work encounters and not simply of prescribed outcomes
- recognition of the importance of the voice of the consumer, user or client in evaluating the experience of receiving a social work service
- recognition of the social worker's understanding and perception of assessment, process, decision making and intervention, in the light of the professional ethical and knowledge base, and wider organisational and resource influences and constraints.
The potential weaknesses include:
- a lack of clarity about specific purposes of intervention and related outcomes
- a focus on individual, specific experience, rather than data which is generalisable
- an emphasis on individual learning and experience which may be seen as irrelevant, when success is measured at political and programme level by relatively crude indicators, for example, risk of re-offending, reduction of unemployment.
Finally we need to consider the contribution of participatory and empowering research and evaluation (Dullea and Mullender, 1999), not yet a coherent paradigm in social work research and evaluation, but an approach which begins to address the power and authority imbalance between managers, practitioners and users or recipients of services in evaluation of practice and services. Examination of participatory and empowering evaluation needs to take realistic account of the complexity and diverse purposes of social work practice with its divergent purposes of care, protection, control and empowerment.
Participatory research includes the following characteristics:
- people are seen as experts in their own lives
- the strengths of local people are used to plan action for change based on communally owned values
The strengths of the participatory research paradigm lie particularly in its inclusiveness. It draws on:
- feminist theory and methodology
- the social model of disability
- 'people first' and 'equal people' perspectives in the field of learning disabilities
- the psychiatric survivor movement and the challenge to mental health/psychiatric 'knowledge' as derived from medical research and practice
- theories derived from children's rights and perspectives
- theorising and knowledge about gay and lesbian choices, lifestyles and behaviours
- theorising about race, and ethnicity
The strengths of this approach are clear:
- the inclusion in a research/evaluation agenda of the voices of people who may be excluded by race, gender, disability, mental health, age, learning disability or poverty, or a combination of these factors
- the emphasis on, and promotion of, user contribution to, if not control of, the evaluation agenda
- the social inclusion, in policy and practice development, of previously excluded voices
- the recognition of the need for accountability of practitioners to service users, not just to employing organisational hierarchies.
Potential weaknesses are:
- conflicts between user requirements and needs and resource allocation
- conflicts between user perceptions and social work legal requirements in terms of risk assessment and protection (in particular in relation to children).
- conflicts between empowerment and the protection and control purposes of some aspects of social work.
Concluding issues and questions
The strengths and weaknesses of each of these approaches to research and evaluation lie not simply in the methodology involved but also in the application to social work practice, the contested nature of social work knowledge and the complex nature of social work with its multiple purposes, agendas, constraints and stakeholders. How we choose to apply and use our methodological repertoire in research and evaluation must attend to the tensions and complexity involved in practising social work.
Preparing for this seminar has raised for me a number of questions and uncertainties which centre around:
- the boundaries, nature and definitions of social work practice
- the range of competing research methodologies
- the differential evaluation agendas of different stakeholders including users, practitioners, managers, researchers and policy makers
- the importance of power differentials in deciding the relative influence of different research and evaluation contributions
- the need to apply different methodologies in evaluating social work appropriate to different identified purposes, e.g. care, protection and control
- the need to recognise that, however rigorous our methodologies, our findings about what works are probabilistic.
Further issues for this seminar include:
- how the value and ethical base of social work may best contribute to rigorous and conscientious evaluation of practice, rather than constituting and reiterating an ideological base which cannot then be analysed or contested
- how user or client perceptions are routinely incorporated into evaluation and how user led research and evaluation is established as a legitimate method of external audit and review
- how the different research perspectives identified (quantitative/empirical, qualitative/interpretative, and participatory/empowering) may contribute to and develop an extended knowledge base for social work practice which acknowledges and incorporates the complex and contested nature of social work noted earlier in this paper.
Within this context of complexity and uncertainty is there a contribution from social work research to the development of evidence for practice in other related disciplines?
I would suggest that the complexity and uncertainty we experience also applies to our colleagues in health. Paradoxically in health care and pharmacy practice, the concept of evidence based practice is being extended from an empirical experimental base to include consumer/user evaluation of the treatment or service provided and a recognition of the need for partnership with patients to ensure that treatment is effective. For example if patients fail to complete a prescribed drug regime, its experimentally proven clinical effectiveness is of little use. The general practitioner or pharmacist is left with problems common to social work of how to persuade the user to endorse the treatment!
Social Work's historical commitment to examining and attending to the voice of the 'client' (Mayer and Timms, 1970, Rees and Wallace, 1982), its recognition of the importance of the 'meaning' of professional intervention to the recipient (Everitt and Hardiker, 1996) and its current engagement in participatory and empowering research (Dullea and Mullender, 1999) could usefully be applied to patients' experience of health care provision and should contribute to the current debate about the effectiveness of health care practice and service delivery.
I began this discussion with uncertain and problematic definitions of social work practice, knowledge, research and evaluation and lack of clear definitional boundaries. I conclude by suggesting that these issues are not unique to social work but may apply also to evidence based practice and what works in health care. However, to continue the argument with confidence, we do need, in social work, further debate about the research paradigms addressed briefly in this paper, and more extensively in the seminar, their relative contributions to social work knowledge, and, indeed, whether such a model of pluralistic evaluation, with a contribution from different methodologies based on different and competing perspectives of the nature of social work and its knowledge base, represents a logical and intellectually defensible position.
Acknowledgements: To Ian Shaw, Cardiff University, for encouraging me to take part in the publication of Evaluation and Social Work Practice and challenging and developing my understanding of social work evaluation.
To Kath Sharp for careful typing and presentation of this paper.
To my family for putting up with me while I agonised over what to write!
Braye, S and Preston-Shoot, M (1995) 'Empowering Practice in Social Care', Open University Press, Buckingham
Denzin, N K (1989a) 'Interpretative Interactionism', Sage, California
Denzin, N K (1989b) 'The Research Act', Prentice Hall, New Jersey
Dullea, K and Mullender, A (1999) 'Evaluation and Empowerment' in 'Evaluation and Social Work Practice', (eds) Shaw, I and Lishman, J, Sage, London
Evans, C and Fisher, M (1999) 'Collaborative Evaluation with Service Users' in 'Evaluation and Social Work Practice', (eds) Shaw, I, and Lishman, J, Sage, London
Everitt, A and Hardiker, P (1996) 'Evaluating For Good Practice', Macmillan, London and Basingstoke
Everitt, A (1998) 'Research and Development in Social Work' in 'Social Work: Themes, Issues and Critical Debates', (eds) Adams, R, Dominelli, L and Payne, M, Macmillan, London and Basingstoke
Fisher, M (1997) 'Research, knowledge and practice in community care', Issues in Social Work Education, 17 (2): 17-30
Habermas, J (1978) 'Knowledge and Human Interests', Heinemann, London
Henkel, M (1995) 'Conceptions of knowledge in Social Work Education', 'Learning and Teaching in Social Work: Towards Reflective Practice', (eds) Yelloly, M and Henkel, M Jessica Kingsley Publishers, London
Jordan, B (1978) 'A Comment on Theory and Practice in Social Work', British Journal of Social Work', 8 (11), 23-25.
Lishman, J (1998) 'Personal and Professional Development' in 'Social Work: Themes, Issues and Critical Debates', (eds) Adams, R, Dominelli, L, and Payne, M, Macmillan, Basingstoke and London
Lyons, K (1999) 'The Place of Research in Social Work Education', ESRC funded seminar 1, Theorising Social Work Research: What kinds of knowledge?
Lyons, K (1999) 'Social Work in Higher Education: Demise or Development', Ashgate, Aldershot
Macdonald, G and Sheldon, B (1998) 'Changing One's Mind: The Final Frontier? Issues in Social Work Education', Vol 18, No 1, p3-25
McGuire, J (ed) (1995) 'What Works: Reducing Reoffending: Guidelines from Research and Practice', Chichester, Wiley.
Marsh, P and Fisher, M (1992) 'Good Intentions: Developing Partnership in Social Services', Joseph Rowntree Foundation, York
Mayer, J E and Timms, N (1970) 'The Client Speaks', R.K.P., London
Popper, K (1969) 'Conjectures and Refutations: The Growth of Scientific Knowledge', R.K.P., London
Rees, S and Wallace, A (1982) 'Verdicts on Social Work', Edward Arnold, London
Reid, W J (1994) 'The empirical practice movement', Social Service Review, 68 (2): 165-184
Reid, W J and Zettergren, P (1999) 'A Perspective on Empirical Practice' in 'Evaluation and Social Work Practice', (eds) Shaw, I and Lishman, J, Sage, London
Sackett, D L, Rosenberg, W M, Gray, J A M, Haynes, R B, and Richardson, W S (1996) 'Evidence based medicine: what it is and what it isn't', British Medical Journal 312 (7023): 71-72
Schon, D (1987) 'Educating the Reflective Practitioner', Jossey Bass, San Francisco
Scottish Office Home Department (1997) 'Looking After Children: Good Parenting: Good Outcomes', Scottish Office, Edinburgh
Shaw, I (1996) 'Evaluating in Practice', Ashgate, Aldershot
Shaw, I (1999) 'Evidence for Practice', Ashgate, Aldershot
Sherman, E and Reid, W (eds) (1994) 'Quantitative Research in Social Work', Columbia University Press, New York
Trevillion, S (1999) 'Social Work, Social Networks and Network Knowledge', ESRC funded Seminar 1, Theorising Social Work Research: What Kinds of Knowledge?