Research mindedness

Misuse of research

The misrepresentation of research findings may arise for a number of reasons: it may be wilful, dishonest, accidental, partisan, political, ignorant, biased or careless, or any combination of these.

Research is about doubt and certainty, about complexity and simplicity. Research evidence may sometimes resolve questions with greater certainty, while just as often it may raise more questions than it answers. Sometimes researchers may claim a degree of certainty that is not warranted by the evidence. It is through understanding how research is 'misused' that you will be able to understand better how it 'should' be used.

Common ways in which research findings are misrepresented are explored under the following sub-headings:

Flawed research

If a piece of research has not been designed or carried out in a professional and ethical manner then this will affect the quality of the findings. Understanding the principles of research design and delivery will help you assess research findings, and suggests a number of questions to ask, although not all will be relevant in every case.

Using findings out of context

This is a particular hazard in the social care arena, especially for research into the effectiveness of particular projects or schemes. For example, a scheme to provide support and advice for lone mothers was extremely successful and produced measurable changes in the quality of life for the women involved across a range of social, personal and economic measures. However, it does not follow that the scheme could be replicated equally successfully in other towns, by other staff, with other groups of lone mothers. What is needed is for the research to identify as accurately as possible the factors that enabled success and those that hindered it, for instance by looking at the success of the project in different ways.

A good example of non-transferability is a small-scale study into the use of Section 47 enquiries in a London borough, which revealed trends in practice heavily influenced by population diversity and the transience of social work staff. The research report's findings were very useful as an internal planning aid for the local authority concerned, but the key messages would have little resonance for, say, a rural authority with a traditionally stable workforce.

Research produced in other countries should also be treated cautiously, as it is not always transferable. For instance, a literature search on children with a psychiatric diagnosis will bring up a great deal of North American research that may not be applicable in a British context.

Stretching findings

This is the opposite of ignoring important research (see below). Researchers can produce cautiously optimistic or positive results that are then 'stretched' or 'talked up' to give them a significance that is not warranted by the evidence. This can happen for a variety of reasons, not all intentional. Research commissioners may be looking for definitive results or findings that lean towards a particular set of outcomes, and as a result place pressure on researchers to highlight these rather than other more important results. If a researcher has an insider stance there may be an inevitable bias in the analysis or presentation of findings. Commercial research organisations may be conscious of their interest in obtaining future contracts and this may inform reporting. It can also happen through indifference to or ignorance of professional standards.

A common problem is that results obtained from small samples are generalised to a wider population than the evidence supports. For example, interviewing three sets of carers, workers and users involved in an older persons' respite scheme for carers might identify the detail of those three individual cases, but to use this sample to report on what works or does not work about the scheme as a whole would be bad practice. This is not to say that small samples are necessarily unreliable, but that care has to be taken in drawing wider conclusions, particularly in the social care arena where individuals and their circumstances are so varied.
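To make this concrete, here is a minimal sketch in Python (the figures are invented purely for illustration): suppose two of the three carers interviewed reported that the respite scheme improved their wellbeing. A standard Wilson score interval shows the range of scheme-wide rates that such a sample leaves open.

    from math import sqrt

    def wilson_interval(successes, n, z=1.96):
        """95% Wilson score confidence interval for a proportion."""
        p = successes / n
        denom = 1 + z**2 / n
        centre = (p + z**2 / (2 * n)) / denom
        half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
        return centre - half, centre + half

    # Two of three interviewed carers reported a benefit (invented figures).
    low, high = wilson_interval(2, 3)
    print(f"Observed 2/3 positive; 95% interval: {low:.0%} to {high:.0%}")
    # Observed 2/3 positive; 95% interval: 21% to 94%

An interval stretching from roughly one in five to more than nine in ten is consistent with the scheme being anything from largely unsuccessful to highly successful, which is exactly why reporting 'what works' from three cases would be bad practice.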

Distorting findings

Example 1

Selective or partial use of data: here are two different views of British cinema attendances up to 1998.

[Charts not reproduced: two views of British cinema attendance figures up to 1998.]

Campaigning groups have often used simple statistics, such as the number of children living in poverty, to summarise issues in a powerful way. Such use is perfectly legitimate and valuable in forcing through important issues of social policy. However, the truth is usually more complex than simple statistics suggest, and it is essential to maintain a critical perspective when statistics are presented to support a viewpoint.

Party politics thrives on the systematic use and misuse of research and monitoring data, and is a rich arena for studying how the same information can be twisted to suit different ends. As with Example 1 above, simply using different starting points (e.g. for inflation, unemployment or crime figures) can wholly change how figures are seen and the interpretations placed upon them.
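As a minimal sketch of this in Python (the attendance figures are rough illustrations, not official statistics), the choice of starting point alone can turn a single series into two opposite stories:

    # British cinema attendances in millions (illustrative approximations).
    attendances = {1946: 1635, 1984: 54, 1998: 135}

    def change_since(base_year, end_year=1998, series=attendances):
        change = (series[end_year] - series[base_year]) / series[base_year]
        return f"Since {base_year}: {change:+.0%}"

    print(change_since(1946))  # Since 1946: -92%  ('cinema in terminal decline')
    print(change_since(1984))  # Since 1984: +150% ('a cinema-going boom')

Both statements are arithmetically correct; only the baseline differs.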

Example 2

A clear area of misuse is the interpretation and spin put on annual crime figures. It is well known that these are largely meaningless as a measure of actual crime, since they represent evidence of police activity rather than of crime itself; the latter is measured far more accurately by the British Crime Survey, which identifies massive under-reporting of crime in the official statistics. One Home Secretary focused optimistically on the 'fall in the rate at which crime is rising'.
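That phrase repays a second look, because it can be perfectly true while crime grows every single year. A minimal sketch with invented figures:

    # Recorded offences in thousands (invented figures): the total rises
    # every year, yet the growth rate falls each year, so a 'fall in the
    # rate at which crime is rising' can be reported truthfully.
    offences = [100, 110, 118, 124, 128]

    for prev, curr in zip(offences, offences[1:]):
        print(f"{prev} -> {curr}: up {(curr - prev) / prev:+.1%}")
    # 100 -> 110: up +10.0%
    # 110 -> 118: up +7.3%
    # 118 -> 124: up +5.1%
    # 124 -> 128: up +3.2%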

Rejecting or ignoring findings

A common technique, much beloved of researchers themselves, is to undermine or reject research studies by deploying contradictory findings. This is often a legitimate and necessary activity that properly tests and critiques research. However, it can also reflect academic nit-picking, or battles between entrenched theoretical perspectives, that prevent useful research from being used to effect action or social policy. For practitioners, the problem is one of balancing complex realities against the necessity of making decisions in the real world.

Another systematic and well-established technique for dealing with research is to just pretend it isn't there.

The medical evidence about the dangers of smoking has been known for 30 years or more, with consequent deaths running into millions in this country and tens of millions worldwide. Commercial and political pressures have meant that this research has largely been ignored in terms of producing effective policies to eradicate or prevent smoking, apart from tokenistic health campaigns and warnings.

In social policy, the research-based conclusion espoused in the White Paper leading up to the Criminal Justice Act 1991, that 'prison is an expensive way of making bad people worse', was transmogrified into the slogan 'prison works' by a later Home Secretary in the same government. During this period many researchers producing work for the Home Office were unable to get results published that did not accord with the political views of the government.

Similarly, some post-World War II research on North American delinquency was financed by major corporations, such as automobile manufacturers. Not surprisingly, they wanted explanations for youthful offending that located it in individual/group pathological terms, rather than, for example, as a product of social breakdown wrought by industrialisation.

Researchers in the area of social care should take on a greater responsibility for enabling the translation of research findings into social policy and action. Conversely, practitioners need to help shape the research agenda more closely to the needs of clients and practice as they experience them.