Theorising Social Work Research


Researching the Social Work Process, 11th July 2000, Luton

Methods and Measurement in Evaluative Social Work

Professor Ian Sinclair, Social Work Research and Development Unit, University of York


Social work research has no agreed definition. In part it is marked out by its subject matter - social work, its context and the services related to it. Nevertheless other researchers - for example, psychiatrists, gerontologists and criminologists - have overlapping interests. So further defining characteristics are needed. One of these is the purpose behind the work. Gerontologists are unlikely to be centrally concerned with the development of social work or related services. Social work researchers are.

Given their interest in the development of social work, researchers are naturally concerned with evaluating 'what works'. This talk is about quantitative methods in evaluative social work research. My focus is on the difficulties of carrying out this kind of research, the kinds of design which may overcome these difficulties, and their potential contribution to the practice and management of social work itself. In this way I will try to tease out what is, if not unique to social work research, at least distinctive of it. By the same token I will not be concentrating on the details of measurement, sampling, statistical techniques and so on, which in my view social work research shares with subjects as diverse as agriculture, psychology and economics.

My analysis of the difficulties of this kind of research is built round the tasks that have to be accomplished if it is to be successful. First, there must be some agreement on the values and criteria against which an intervention is to be evaluated. Second, it must be possible to describe the intervention being evaluated, at least to the degree that it is possible to identify examples of such interventions. Without this it is not possible to judge the circumstances in which success is likely to be repeated. Third, and for precisely similar reasons, it is desirable to have a model of what aspects of an intervention lead to what kind of outcome. I will use these three concepts of values/criteria, description, and model to structure my account of the evaluative process.

The three activities I cover are related - deciding on a particular outcome, for example, makes it logical to describe the aspects of an intervention likely to influence it. However, they are not necessarily carried out in the same project. For example, descriptive projects usually (or at least ideally!) precede analytical studies aiming at more definitive evaluations which can provide a model of what leads to what. So my concern is with a programme of logically related activities which may or may not be incorporated in the same study. My concern also stops short of two further crucial steps - interpreting results and making recommendations. The talk probably ranges too widely as it is.

In focussing on quantitative methods, I am not denying the central role of qualitative methods in this field. Both are required. Even physics does, as I understand it, need the concept of an 'observer'. However, this observer is an abstract individual who does not have purposes or attribute meanings. Social work research deals with real individuals, people centrally concerned with making sense of their worlds and achieving their ends. Qualitative work is the most direct research approach to understanding these persons in the context of their interactions with services and social workers. This talk is concerned with the complementary role of quantitative research. I shall, however, say a little about the role of qualitative research in relation to quantitative work.

One final comment in this preamble. I will not reference anyone's work except - in a gesture of, I hope, understandable politeness - that of one of our hosts, David Berridge. Obviously I have had examples in mind - often my own work and often that of people I know quite well. Equally obviously these examples are sometimes of good practice and sometimes of bad. I have no desire to rubbish my own work or alternatively imply that I am one of the few exemplars of good practice in this field. Nor do I wish to single out the work of others - it would be unfair and risk causing upset to no good purpose. This principled objection to referencing goes conveniently with another less worthy consideration - shortage of time. Hopefully I will be sufficiently concrete to be understood without specific examples of previous research.

Values and Criteria

Social work research takes place against a background of conflicting criteria and uncalculated risks. In part this is because of 'trade-offs' for the same individual. Is it better for a child to be safe in the care system but risk losing contact with her or his family, or to remain at home with the dangers that may involve? In part it reflects the different interests that social workers have to consider - for example, those of the child, the parent(s), the extended family, the disabled or older person, the carers, the neighbours, other users of the service, and the general public, even assuming for the moment that interests within these groupings are the same.

The complexity of the criteria against which social work can be evaluated provides researchers with both challenges and opportunities. It is possible for research to challenge not so much the values themselves, for on these there is quite widespread agreement, but rather the priority that is given to particular values in particular situations. It is possible for it to work within these values, contributing to the technology of routine measurement and monitoring and also developing tools which can be used in further evaluative research. I look below at the role of research in contributing to debates over criteria, informing and critiquing management measures, and developing measures for the evaluation of specific interventions.

Contributing to the debate over values

Much of the contribution to the debate over values comes from qualitative research. Frequently this involves interviews which give a voice to people whose views have been ignored - for example, carers or children in the care system. Other techniques, however, are also relevant. Films of young children going into hospital, careful descriptions of the way the young chronically sick spent their days, non-participant observation of life in therapeutic residential homes, and sound recordings of the way old people were bathed in chronically understaffed homes have all contributed to a change in perceptions of a service, a new and more human understanding of what it is about and how it should be judged.

Quantitative research can also play its part in increasing the priority given to some criteria. This can involve demonstrating the frequency with which certain needs occur - for example, the proportions of children in the care system who get no educational qualification or who become homeless when leaving it at 16 or 17. Frequency counts of this kind have undoubtedly helped to raise the priority given to education in the care system and to support for care leavers. Similarly qualitative studies of the plight of carers may be given added weight by quantitative studies of the proportions of them suffering from ill-health and low morale according to some standard measures.

Less common but still important is the contribution of research to changing the view of an intermediate measure - one that is not obviously good or bad in itself but can be seen as an indicator that some other agreed goal is more or less likely to be attained. An example is provided by running away from children's homes (formerly known as absconding). In the early 1970s it was argued that absconding could be seen as a sign of spirit and psychological health on the part of the child involved. Absconding rates, which varied widely between institutions, therefore cast no light on their performance. Against this it was argued that a) absconding was high in establishments which were performing poorly in other ways, b) absconders did 'worse' than other children in a way which suggested that absconding contributed to this outcome, and c) absconders had to survive while on the run and often resorted to criminal or risky behaviour in order to do so.

On its own research seems rarely to succeed in gaining a shift in priorities. Research on absconding did not change the priority accorded to preventing it. Renewed attention to running away awaited police concern over runaways in London. Similarly studies of care leavers have been accompanied by the development of 'Who Cares' groups and other work intended to give the issue a higher profile. Research by itself did not put care leavers 'on the map'. However, this kind of work is particularly common in quantitative social work research, where it has an honourable history.

Contributing to the evaluation of routine targets and performance measures

No one who has lived under the RAE can be unaware of the potential impact of targets and performance measures. Such measures are now pervasive in social as in other services and are encouraged, for example, by the Quality Protects initiative.

Researchers, particularly perhaps those based in local authorities, are in a good position to develop these measures. For example, there is considerable evidence on the qualities which older people value in their home care service - e.g. that it should be reliable and delivered at a time that suits their routine. Using these criteria it is possible to create measures of performance. The performance of different teams, divisions, and departments can then be monitored using postal or face to face questionnaires. Alternatively other measures - e.g. delays in the delivery of aids and adaptations - can be used on the grounds that these are known to be important to potential recipients. In this way the views of older people can be routinely fed into the operation of the services.
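To make this concrete, here is a minimal sketch of how such questionnaire returns might be turned into a team-level measure. The item names, the 1-5 scoring and the team labels are invented for illustration; any real scheme would rest on the criteria older people themselves have been shown to value.

```python
# Minimal illustrative sketch: turning user-survey items into a
# team-level performance measure. All names and figures are hypothetical.
import pandas as pd

# Each row is one older person's postal questionnaire, scored 1 (poor)
# to 5 (good) on criteria users are known to value.
responses = pd.DataFrame({
    "team":        ["North", "North", "South", "South", "South"],
    "reliability": [4, 5, 2, 3, 3],   # did the service arrive as promised?
    "timing":      [5, 4, 2, 2, 3],   # did it fit the person's own routine?
})

# A simple composite score: the mean of the items each user rated.
responses["score"] = responses[["reliability", "timing"]].mean(axis=1)

# Aggregate to the organisational level for routine monitoring.
by_team = responses.groupby("team")["score"].agg(["mean", "count"])
print(by_team)
```

Even this toy version makes the design choice plain: the composite is only as good as the items users themselves say matter.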

In an ideal world performance measures link the purposes of individual practice to aggregated data on the performance of units. So it would be desirable if social workers could record their work with a case and its success on a form which was then analysed along with others at a higher organisational level. So, for example, records may show that numerous different social workers are tackling particular problems (e.g. provision of cooked meals for Muslim elders) which might benefit from an organised scheme. Efforts were made to achieve this kind of aggregation as far back as the 1960s and continue. To date, however, I am not aware that it has been successfully done. Part at least of the difficulty lies in the kind of description required at different levels. At the level of practice the more detailed the description of objectives the greater the understanding of what is being done. At a higher organisational level more abstract descriptions are required, if only to enable different activities to be aggregated under the same headings. Nevertheless this area continues to attract research attention, with the potential for bridging part of the gap which exists between management and practice.

Equally valuable should be the use of quantitative approaches as a corrective to routine measures. Typically these measures are expensive to collect, difficult to interpret, and as liable to depress performance as to enhance it. For example, many of the measures currently advocated in the Quality Protects initiative are likely to be 'better' if less difficult children are kept within the care system. Such a practice is arguably undesirable but would nevertheless lower turnover, enhance performance in public examinations, increase the proportion of young people who enter employment on leaving the care system and generally make the authority come out well on the Quality Protects indicators. Since none of the measures are to be adjusted for characteristics of service users or the local (as opposed to local authority) employment market, their potential to act as perverse incentives or to lower morale in hard-pressed authorities is obvious.

Contributing to further research

Quantitative evaluative research requires outcome measures. It is part of the business of research to develop these measures. In this field, however, there are three particular dangers.

First, the quantitative criteria selected for study may pay insufficient attention to the process of intervention. For example, those using services often put a high value on the degree to which they have been consulted and involved. These criteria are thought to be associated with other good outcomes. However, they are also of importance in themselves. Quantitative research tends to concentrate the mind on the final outcome, not on the process by which this was reached. This may be right. For example, if it could be shown that removal of a teenager had a very high probability of avoiding highly undesirable outcomes, it might be right to proceed with this irrespective of what the teenager wanted. This does not mean that what the teenager wants is irrelevant.

Second, the quantitative criteria may be insufficiently specific. Often social work interventions aim at some quite small and particular gain - e.g. that a debt is paid off or an accommodation problem resolved. Evaluative researchers may, however, be interested in more general criteria - e.g. that an individual's well-being is enhanced on some scale. This interest in general criteria reflects a need for comparisons to involve relatively large numbers of people. Specific criteria produce very small groups and are unlikely to generate significant results. They may, however, be more appropriate to the activity of social work.

Third, and in part as a consequence of the last point, the criteria set may be too ambitious and out of all proportion to the scale of intervention. As a very rough rule of thumb, social work interventions seem capable of improving mood, morale and satisfaction with service. They have great difficulty in changing the way people behave - e.g. whether they engage in delinquency or suicide attempts. This is particularly so if the people concerned have not asked for the service. However, in these respects all services struggle - psychiatric and psychological services no less than ours. By concentrating on ambitious and inappropriate targets social work research may have given social work an undesirably bad name. Judged against over-ambitious and inappropriate criteria it is set up to fail.
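The sample-size problem behind the second and third difficulties can be made concrete. In the hypothetical sketch below a specific outcome (a debt resolved) is compared across two small groups; even a large apparent difference fails to reach conventional significance. All the figures are invented.

```python
# Illustrative sketch of the power problem with specific criteria.
# The counts below are invented for the example.
from scipy.stats import fisher_exact

# Specific criterion: 12 served cases vs 13 comparison cases with a
# debt problem. 8/12 resolved vs 4/13 looks like a big difference...
table = [[8, 4],   # debts resolved (served, comparison)
         [4, 9]]   # debts not resolved
odds_ratio, p_value = fisher_exact(table)
print(f"p = {p_value:.2f}")  # ...but p is roughly 0.12, well above 0.05
```

The general measure, applied to the whole sample, buys statistical power at the price of fit with what the intervention actually set out to do.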

That said, quantitative researchers have undoubtedly had important successes in developing measures appropriate to the field. These include measures which:

capture the process from the point of view of the recipient (e.g. measures of the supportiveness of social workers as seen by foster carers).

reduce an unmanageably large number of criteria to simpler, robust outcome measures through the use of factor analysis and related techniques (see the sketch after this list)

capture the specificity of interventions and the need to relate them to particular cases (e.g. through goal attainment scaling)

In these ways the sensitive construction of measures may take account of the particular difficulties of the subject.
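As an illustration of the second of these, here is a minimal sketch of factor analysis reducing six outcome items to two underlying measures. The data are simulated so that the structure is known in advance; with real questionnaire data the loadings would have to be inspected and interpreted.

```python
# Illustrative sketch: factor analysis reducing many outcome items to a
# few robust measures. The data are simulated, not from any real study.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n = 200
morale = rng.normal(size=n)      # latent dimension 1
behaviour = rng.normal(size=n)   # latent dimension 2

# Six observed items, each a noisy reflection of one latent dimension.
items = np.column_stack(
    [morale + rng.normal(scale=0.5, size=n) for _ in range(3)] +
    [behaviour + rng.normal(scale=0.5, size=n) for _ in range(3)]
)

fa = FactorAnalysis(n_components=2).fit(items)
print(fa.components_.round(2))   # items 1-3 load on one factor, 4-6 on the other

# Factor scores then serve as the simpler, more robust outcome measures.
scores = fa.transform(items)
```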


Description

A key problem in 'theorising', managing or researching social work lies in the 'slipperiness' of the activities involved. A pill or a surgical procedure is in some sense definite. It may be seen from varied points of view, for example as a means for controlling or placating patients. However, from the point of view of studying its medical effects there is some consensus over how it should be described. In the field of social work, however, what you are told you get may not be what you see. Is assessment more properly described as rationing, negotiation, fobbing off, filling in forms or any of the other terms under which the same set of interactions might be described?

Against this background descriptive studies play an important part in evaluative quantitative research. They do so in two main ways. First, a careful description may in itself amount to an evaluation. Second, description is commonly an essential preliminary to the more analytical approach described in the next section.

Description as evaluation

A common feature of social work research is a kind of study which can be called 'analytic descriptive'. A service or project is described from a variety of points of view. Quite often case studies are included to illustrate the points made by the statistics or to suggest an interpretation. The result leaves the reader with a sense of knowing their way around a new subject. Our host, David Berridge, has provided a number of excellent examples of this kind of work, not least in the fields of foster care and children's homes.

An interesting feature of analytic descriptive studies lies in the evaluative conclusions that can be drawn from apparently descriptive data. To take some examples from services for children:

by describing the costs of children's residential care (in excess of 60K per annum) along with its outcomes it is possible to raise serious questions about the wisdom and equity of spending money in this way

by noting differences between those receiving 'preventive' services and those entering the care system it is possible to argue that the services are missing their target and actually preventing few people from being looked after

by noting the numbers of children who enter the care system who have been abused and the relative infrequency with which they are offered treatment it is possible to suggest that more intensive treatment for their difficulties should be available

by noting that over 90% of children entering the care system go home it is possible to raise equally serious questions about any policy that fails to prepare them for it

by noting the frequency with which there is evidence of abuse on children's files of which their current social workers are unaware, it is possible to argue for a reorganisation of files at the least.

Examples could be multiplied from this and other client groups. Such arguments have two key features. First, they depend on the existence of common values and beliefs - it is only because we believe both that abuse is bad and that its effects are treatable that we can argue for more treatment of those abused. Second, this kind of research gives rational backing to a proposed recommendation but does not guarantee its success. For example, it may well be that if treatment were offered to those who had been abused it would be ineffective or resented or would make matters worse. The analytic descriptive study is not therefore an alternative to more rigorous evaluation of the 'what works' kind. However, as argued below it may be a necessary precursor to it.

Descriptive research as a precursor to full scale comparative evaluation

In preparing for such evaluative studies, descriptive research is almost an essential preliminary.

First, description can bolster or weaken the case for evaluative work - a project which was not implemented as planned or in which few believe may not be worth evaluating. By contrast it may be possible to show that a project reaches the groups for which it was intended, is apparently delivered as intended, is praised by recipients and providers, and has what look like good outcomes. In these circumstances the case for evaluation is strong.

Second, description can sharpen the hypotheses to be tested. It can clarify the different kinds of interventions which are made and the different kinds of service users who receive them. Some interventions may look more promising than others, either for all or for particular groups. In these ways it may suggest the comparisons that need to be made.

Third, a descriptive study may provide the research tools for describing service users, interventions and outcomes. It can identify the factors that characterise those service users likely to have good or bad outcomes. These become the factors on which it is important to match or which need to be taken into account in the analysis. Similarly it may suggest the features which characterise effective social work and which should be included in any description of it. In short it provides the essential tools for the kind of work we describe next.

Model building

Policies in social work depend heavily on models - beliefs about what leads to what. So we train social workers in the belief that training will lead to better performance. We advocate supervision for home carers in the same belief. Such beliefs are not necessarily shared by all involved. A brief examination of social services will show massive variations in the intensity and nature of what is provided. To take the example of children's homes, authorities vary greatly in the staffing ratios, proportions of trained staff and sizes of home they provide. So the question arises of whether these variations make a difference to outcomes as they certainly do to costs. Similar differences are apparent over time - for example, in views over the importance of training and the kind of training required, or the policies which should be pursued in relation to contact between families and children in the care system.

On the face of it, it is irresponsible to ignore questions about the effect of these variations. They have major implications for cost - and hence for services to other groups - and, it must be assumed, major impacts on the lives of service users. The key question, however, is how far rigorous research is possible in this field. I will look first at the difficulties and then at the way these might be overcome.

Problems of comparative evaluative research

In essence research on models involves comparisons. We are concerned with what happens if something is not done as against what happens when it is. It is on the basis of such comparisons that we build a picture of what leads to what. Unfortunately in our field quantitative, comparative, evaluative research is very difficult.

Some of the difficulties are practical. It is difficult to recruit people into studies. Often recruitment depends on the willingness of social workers to introduce the study. They are busy, not necessarily great believers in research, and have a complicated enough agenda to transact with their clients as it is. Consent may well be needed at a variety of levels (e.g. parents, foster carers, children). For these and other reasons it may be difficult to acquire adequate sample sizes.

Small sample sizes matter less when variables can be measured with precision and hypotheses are precise. Unfortunately, as we have seen, the data are sensitive, 'soft' and fugitive. People are not necessarily keen on discussing 'private matters' e.g. abuse. Some key informants (e.g. young children or 'confused' older people) may require particularly skilled interviewing. Hard variables - e.g. ages of foster carers - may be weak predictors of outcome. Softer variables (e.g. the 'warmth' of a foster carer) may be more important but difficult and expensive to measure with precision.

These features argue for heavy funding - to spend time on recruitment and ensure careful measurement of elusive variables. Unfortunately funds are short. As a consequence there are too few studies to cover the ground. Inevitably funders then look for wider and more ambitious outcomes from those they do fund. Projects with precise, manageable questions (e.g. on whether a training programme enables social workers to negotiate more successfully with Social Security) are likely to be rejected. Projects that offer only description may fail to get funding. More ambitious projects which offer evaluation may acquire funding, only to fail because the necessary preliminary work has not been done.

More fundamental problems relate to the difficulty of describing interventions in a replicable way. Social work is a field in which success depends on context. An approach which is relevant in the U.S.A. is not necessarily relevant in Tanzania. A project may succeed given support and backing from related services and senior management. A similar project in a different area with a different history may fail. Much may depend on particular individuals - a feature which works against replication. So too does the key role of beliefs and choices. If the culture is such that workers and clientele believe in a particular intervention and choose it, it is more likely to succeed. For reasons of this kind an evaluation may suggest that an intervention works without providing strong grounds for thinking that its success will generalise.

In illustrating these difficulties it is useful to examine the 'gold standard' in medical research - the randomised controlled trial (RCT). At first sight this approach and its analogues in the field of agriculture are ideally suited to social work. Essentially they are designed to yield robust evaluative conclusions in a situation where outcomes may also depend on factors which are ill-understood. So this, it seems, is what social work research has been waiting for. In this respect, however, all is not necessarily as it seems.

Consider for a moment an 'ideal' RCT in some medical field. A pill of known composition is given according to an agreed protocol in similar hospital environments to a sample of individuals who have been diagnosed by standard methods as having a known disorder and whose response can be compared with that of others picked like them at random from the same patient population. The mechanisms through which this pill is likely to work are known and relevant to the disorder. The effects on these mechanisms can be measured as well as effects on the 'ultimate' outcomes. There is likely to be a dose response relationship with efficacy rising predictably with amount and duration of treatment.

There are also certain ethical and practical considerations. From an ethical point of view there is agreement on the effects of interest and little difficulty in measuring them. Patients are eager to take part in the trial and the patient's choice of treatment is unlikely to be a major factor in the treatment's efficacy. From a practical point of view there is no need to modify the dosage of the pill in the light of the patient's response, so that standard amounts can be given and those administering the pill can be blind to whether it is active or a placebo. Large numbers can be recruited to the trial, which can take place in different countries since there is no reason to suspect that culture or other contextual factors play a major part. The outcomes can be measured on a continuous scale, the effect expected is large, and there is no reason to suspect interactions between patient type and treatment. All this makes it possible to reduce the size of sample required while nevertheless recruiting large numbers, thus making it less likely that effects will be missed. Doctors are interested in taking part in the trial and there are no major practical difficulties in their doing so.
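A rough power calculation shows why these ideal conditions keep trials manageable. The sketch below uses standardised effect sizes (Cohen's d); the particular values are assumptions for illustration, not estimates from any study.

```python
# Illustrative sketch: required group sizes for a two-arm trial at 80%
# power and alpha = 0.05, for large, medium and small expected effects.
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower()
for d in (0.8, 0.5, 0.2):
    n = power.solve_power(effect_size=d, alpha=0.05, power=0.8)
    print(f"d = {d}: about {n:.0f} per group")
# A large effect (d = 0.8) needs roughly 26 per group; a small one
# (d = 0.2) nearer 400. The small, diffuse effects typical of social
# work push the required samples up sharply.
```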

No doubt many of these factors are missing from many medical trials. Nevertheless they are routinely absent from almost all social work ones. As a result these trials commonly lack scientific depth, are sometimes open to ethical attack, and are practically very difficult to mount. Two points in particular are significant.

First, as we have seen, treatments (and their effective ingredients) are very difficult to describe. As a result it is very difficult to unpick what is needed to roll out a successful trial to another setting. So the apparent success of a trial of trained social workers in raising the morale of elderly people may reflect the fact that the two workers were trained, were particularly talented, had lower caseloads than the welfare workers serving the comparison group, were more enthusiastic, or had a particularly charismatic supervisor; it may equally reflect some combination of these factors or something which no one has considered. Second, it is very difficult to acquire samples of the size required to ensure that a genuine effect is unlikely to be missed by chance.

So it is commonplace for large experiments on, say, caseload size to yield results which are not repeated from one experiment to another. Equally it is possible, for example, for a large trial of health visiting with elderly people to show major beneficial effects in the case of one health visitor but none at all for the other.

Overcoming problems of comparison

One response to these problems is to dismiss all thought of RCTs, mutter something about positivism and move on to more exciting and illuminating designs. I find this reaction understandable but mistaken. A more constructive approach is, I think, to take the difficulties seriously but to work on overcoming them. This involves, in my view, attention to description, methodological triangulation, and replication.

Description is an essential element in evaluation research. Ideally four aspects of intervention are described:

context - key features of the context of the intervention which are likely to be relevant to its success.

clientele - who they are, what they want, what they 'need' and what they think of the service

operations - what is done, by whom, for whom, for what purpose, with what rationale, over what period, at what cost

outcome - what happens and how this relates to clientele and operations

It is only if these aspects are covered that it is possible to have a sense of the conditions in which the success of an intervention is likely to generalise (or alternatively to realise that a negative result can be discounted because the conditions for success did not apply).

Methodological triangulation is, as I see it, a fancy name for checking conclusions using different approaches. In contrast to what may be the situation in medical research, in this field there is no 'gold standard' design. RCTs are valuable. We need more of them. But we also need more studies of other kinds.

Other statistical designs avoid some of the problems of RCTs. Quasi-experimental designs are less intrusive and avoid some ethical dilemmas. Cross-institutional designs can exploit natural variation, making it easier to attribute particular outcomes to the particular characteristics of a school or treatment unit. Statistical techniques which take account of context (e.g. the multi-level modelling which is now available) may be particularly valuable. Longitudinal designs can reduce the sample sizes required. Applied to single cases or groups of cases they can take account of natural variation over time, by examining changes in behaviour or other outcomes of interest in relation to the timing of interventions. And in general multi-variate models, informed by a strong theory and employing a small number of variables, can yield plausible causal accounts. There is, however, a trade-off. Conclusions reached by the analysis of natural situations are more plausibly 'rolled out' to the wider world. However, in so far as the conclusions apply to effects they are less certain than would have been the case with an RCT.
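To indicate what the multi-level approach looks like in practice, here is a minimal sketch of a random-intercept model, with children nested within residential units so that shared context is not mistaken for an effect of the intervention. The variable names and simulated data are hypothetical.

```python
# Illustrative sketch: a multi-level (mixed-effects) model with cases
# nested in units. Data are simulated; variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
units = np.repeat([f"home_{i}" for i in range(10)], 20)       # 10 homes, 20 children each
unit_effect = np.repeat(rng.normal(scale=0.6, size=10), 20)   # shared home-level context
trained = rng.integers(0, 2, size=200)                        # keyworker trained? (0/1)
outcome = 0.3 * trained + unit_effect + rng.normal(size=200)  # assumed true effect 0.3

data = pd.DataFrame({"unit": units, "trained": trained, "outcome": outcome})

# Random intercept for each home; fixed effect for training.
model = smf.mixedlm("outcome ~ trained", data, groups=data["unit"]).fit()
print(model.summary())
```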

More appreciative, qualitative methods also have their advantages. In many ways they are more adapted to the complexity of the practitioner's world than the blockbuster RCT. Social workers and users rely on their ability to make sense of the situations facing them. While their analysis may sometimes be wrong (as may that of the researcher observing them), their ability to interact successfully depends on the fact that they are often right. An empathic understanding of their viewpoints leads to plausible, realistic analysis. As argued above, qualitative research draws attention to features of a situation that others may have missed but which, once seen, have major implications for practice. It counteracts a tendency to treat the powerless as creatures with something less than normal human feelings. It contributes to an ethically defensible selection of outcome measures. And in combination with simple statistical description, it can lead to an informed and incisive evaluation of programmes in social services.

Yet such evaluation also has its drawbacks. On the one hand the qualitative researcher may be encouraged by her or his analysis to generalise too far. So some researchers in the residential field have been led by observations of single institutions to formulate somewhat deterministic theories which grossly underestimated the variety of residential establishments. On the other hand researchers, led by an appreciation of the active role of their subjects, may reject entirely any causal or 'deterministic' account of the results they observe. Yet such academics are likely to be interested in statistics on the frequency with which black people are stopped in Brixton or executed in America and in the other regularities which govern oppression and exclusion. The factors which lead to these regularities are sometimes outwith the observation of those subjected to them. In uncovering them statistical methods have their place.

So designs tend to have different drawbacks. An RCT trades relative certainty over the probability that outcomes are affected against uncertainty about what ingredients brought this result about and whether it is repeatable. Other designs and qualitative approaches may be better adapted to teasing out the ingredients of effective interventions but less certain in their conclusions about whether success was achieved. Conclusions supported by a variety of methodological approaches would seem to be most secure.

Finally there is a need for replication. If an intervention works in a variety of situations, it may well work quite generally. Earlier I suggested that social workers can raise morale. This conclusion depends in my mind on a variety of studies. Some relate to findings on analogous services - visiting by lay people and by health visitors. Some relate to randomised controlled studies - e.g. of social work with older people, with people who have tried to commit suicide and with prisoners. Some relate to the vividness and compelling power of the accounts of service users explaining how they have been helped. Some relate to statistical correlations between the visiting frequency of social workers and the attitude of those visited to the social worker (a kind of dose-response relationship). Some relate to a hazier and more theoretical perception of the likely effect of social work visits (e.g. derived from ideas about support and the role of listening and of confidants).

I give this example not to defend it but rather to illustrate the way conclusions can be built up. As in all science the final conclusions are more or less uncertain. Some of them are also quite abstract. For example, I happen to believe that there is growing evidence on the relative roles of different individuals (front line staff, supervisors, higher managers) in different kinds of outcomes. In certain circumstances - for example, in children's homes - supervisors are crucial. In others, for example home care teams, they are important in certain respects (e.g. in ensuring reliability) but not in others (e.g. in relation to the sensitivity with which carers treat the service user). These tentative generalisations cast doubt on the ability of some management approaches (e.g. inspections) to achieve certain ends (e.g. more sensitive handling of clients). However, the generalisation, even if true, leaves open the question of how good frontline workers can be selected, trained and supported. In this field as in others we move towards the truth, slowly and uncertainly, and, if we are wise, with a cautious respect for the wisdom of those in the field.


Conclusion

How far does quantitative evaluative research in social work differ from similar research in other fields? In one way not at all. Researchers in this field need to understand sampling, measurement, and statistical techniques as they do in others. In another way the field is unique, but only, if this is not a contradiction in terms, in the way other fields are unique. Social work research has an agenda set in part by its values, in part by its history and in part by current policy dilemmas. It faces particular practical difficulties (e.g. over recruiting samples) and it has groups of people working on related issues who are familiar with both agenda and difficulties. All this is similar to what occurs in other disciplines. On first entry to the field others may appear as naive as social work researchers moving into, say, educational research.

That said, it is possible to argue that quantitative social work research does face peculiarly acute difficulties arising from the intangible nature of its variables, the fluid, probabilistic way in which these variables are interconnected, and the degree to which outcome criteria are subject to dispute. These difficulties are not restricted to social work. Studies of G.P. practices might raise many of the same issues. However, they are a central feature of social work research and provide it with an opportunity to make a contribution to other disciplines. In my view this contribution is likely to be of two sorts:

a demonstration of the power of careful analytic description in applied social research - in this field evaluative research need not always use a classic evaluative design

approaches to evaluation that value the complementary contribution of different methodologies and do not operate, as say medicine does, according to a strict hierarchy of methodological merit.

In these ways social work research strives, as Weber advocated, for an approach to evaluation which is adequate both at the level of cause and at the level of meaning.