Learning together to safeguard children: developing a multi-agency systems approach for case reviews

Putting it into practice - Organising and analysing data

Producing a narrative of multi-agency perspectives

  • The conversation structure organises the data so that the review team can draw together the differing accounts of the history of the case. 
  • Reviewers must be transparent about their sources of evidence, whether documentation or conversation.
  • Gaps and disputes need to be highlighted.

The conversation structure creates an initial organisation of the data. This helps the review team reconstruct the differing accounts of the history of the case. Drawing together these potentially disparate narratives is a critical part of the working method. Reviewers need to continually manage the recurrent tendency to want to assert ‘what really happened’, or ‘the reality’ of the situation.

As data is organised, it is important to identify where descriptions come from. This includes noting where key perspectives are missing and where information is unavailable. Any significant discrepancies between sources also need to be highlighted. The review team’s own judgements or responses to participants’ narratives should be kept separate.

Identifying key practice episodes and their contributory factors

  • The narrative of multi-agency perspectives contains various episodes that participants identified as key to the way the case developed or was handled.
  • The review team need to judge the adequacy of practice in these episodes.
  • They then need to identify contributory factors which meant that the practice in question seemed sensible or the right thing to do at the time.

From studying the official records and conversations with participants, the review team can then identify a number of key practice episodes within the narrative. These then need to be analysed in more detail to identify their contributory factors.

The selection of key practice episodes draws strongly on participants’ views of which episodes were significant but also requires the review team’s judgement. The review team needs to be explicit and transparent about the significance of the episodes selected – how each influenced or might have subsequently influenced actions and decisions and the way the case was handled. Ultimately a judgement needs to be made on how a particular episode was linked to outcomes for the child(ren) and family. This will involve the use of hindsight and looking beyond the individual episode to the wider picture of the case as a whole. Each episode should be briefly described, keeping as close as possible to participants’ accounts.

Secondly, the review team need to comment on the adequacy of the judgements and decisions that make up each particular episode. It is helpful, for example, to consider what information was or should have been used to inform the process. The review team needs to consider how the use, or disregard, of available information actually influenced, or might have influenced, subsequent episodes. We found that each key practice episode tended to include both good and problematic elements of practice. Rather than making a single one-off judgement, therefore, it proved more useful to break the episode down into smaller constituent parts and make judgements about each part explicit.

The final aspect involves identifying contributory factors from across the various participants’ accounts.

How to record the analysis on paper

  • A multi-stranded narrative requires a flexible form of recording; a standardised framework would obscure the choice and judgement involved.
  • Microsoft Word’s ‘comment’ function and tabular formats have proved useful tools.

Abandoning the single storyline of a chronology means that decisions are required about how to present the differing perspectives in a way that helps the reader understand the ensuing analysis of practice. We do not offer a standardised framework for structuring different perspectives in a case review. A standardised or preferred model would make it easier to compare across a range of case reviews, and readers would become familiar with the layout; however, it would obscure the fact that there are always other possibilities and that the one finally chosen inevitably reflects aspects of the interpretation of the case.

In our pilots we experimented with using the ‘comment’ function in Microsoft Word to mark emerging questions and issues as we put together the multi-agency narratives. This proved useful and is illustrated below. It helped to keep judgements separate from renditions of people’s ‘local rationalities’. It also encouraged us to make our own input explicit.

Use of Microsoft’s ‘comment’ function – an example

Graphic showing the use of Microsoft's 'comment' function

To record the description and analysis of key practice episodes and their contributory factors we developed a table; this is reproduced in Appendix 4. In comparison with the narrative alternative, we found this format made the distinction between the different parts of the analysis clearer. Listing the contributory factors aided clarity, and repetition across different episodes stood out strongly.

Reviewing the data and analysis

  • There is no absolute truth about a case and putting together the various accounts requires interpretation by the review team.
  • Participants provide a vital check on basic accuracy of the facts.
  • They also need to validate the prioritisation of issues by the reviewers.
  • Draft reports need to be shared for comment and group discussion meetings need to take place.

Neither data source provides a reliable, consensus view. The documentation of different agencies may conflict in the basic factual details presented or may provide a very different focus. Similarly, interviews reveal how people’s different reasons for involvement lead them to focus on different aspects of the family. Putting together the various accounts involves a degree of interpretation by the review team. It is therefore important that reviewers check their work with participants. This includes the accuracy of the adapted chronology, key practice episodes and contributory factors. It also entails getting feedback about the appropriateness of the review team’s emerging analysis of key themes. Have any key details and/or connections been overlooked?

Checking can be done by sending draft reports to participants for comment as well as holding group discussion meetings. This is likely to produce some corrections or challenges to the review team’s interpretation and also some valuable additional insights. These inputs should feed into subsequent drafts of the report. In our pilots we used a three-staged process of dialogue between the review team and participants as detailed below.

Suggested stages of the dialogue with participants

  1. Preliminary report
    Individual comment
    Preliminary group meeting
  2. Interim report
    Individual comment
    Interim group meeting
  3. Final draft report
    Individual comment
    Closing meeting

Creativity and innovation are required in the content and structuring of these different reports and meetings. The review team needs to think about how they can best facilitate these exchanges and be as flexible as possible about the way they accept feedback from participants on draft reports.

In our pilot sites, we held group discussion meetings over lunchtime that ran for two hours. We were delighted with the turn-out to meetings in both sites. People’s willingness to come seemed to indicate that the meetings served an important function in making concrete their joint ownership of the process.

Identifying and prioritising generic patterns of systemic factors

  • The deeper analysis of data identifies underlying patterns of systemic factors that either support good practice or create unsafe conditions in which poor practice is more likely.
  • This involves categorising types of systems issues in non-case-specific language.
  • Not all patterns can be covered so selection is necessary.
  • Different patterns will stand out to differing extents for different people so debate is necessary. There is no magic formula.

Once the multi-agency practice in the case has been analysed, the reviewers need to bring some deeper analysis to the varied and repeated practice episodes and their contributory factors that have been identified. This involves moving from context-specific data to identifying the underlying patterns of systemic factors that are either contributing to good practice or making problematic practice more likely. The six-part categorisation of types of patterns is useful here (see the explanation given in Key concepts and fundamental assumptions; see also Appendix 6):

  1. human–tool operation
  2. human–management system operation
  3. communication and collaboration in multi-agency working in response to incidents/crises
  4. communication and collaboration in multi-agency working in assessment and longer-term work
  5. family–professional interactions
  6. human judgement/reasoning.

These can be used to prompt reviewers’ thinking and to organise the data into non-case-specific language.

In one of our pilot case reviews, for instance, there were several occasions on which social care had presented, and other agencies had accepted, assessments as comprehensive and definitive, rather than seeing them as ongoing works in progress linked to a clear plan that could be evaluated. This raised concerns that, across agencies, assessment was not seen as a continuous, dynamic process but as a discrete stage in work with a service user. The underlying pattern identified here was one of human–tool operation, specifically the influence of the case management framework of assessment, planning, implementation and review (APIR). Under this framework, assessment occupies a fixed box in the flow chart, as does review, which falls towards the end of an intervention. So even though written guidance mentioned the need to review and add to assessments, the basic picture had already been set, so that revision became an interruption in the flow of practice. Input from the participants suggested that the APIR framework encouraged ‘review’ to be understood as checking whether a plan had been implemented, not whether it had been effective or whether, in the light of new information about the family, it was still the appropriate plan.

Any case review is likely to lead to the identification of numerous different patterns of systemic factors that either support good practice or create unsafe conditions in which poor practice is more likely. Trying to cover everything runs the risk of the most important being lost in the blizzard. Judgement is therefore required to prioritise the most important patterns.

In making this prioritisation, reviewers should take into account that, far from being a neutral and objective enterprise, different issues are likely to stand out to differing extents for different members of the review team and for different participants. For example, Woodcock and Smiley’s (1998) study found that the more senior the position of the safety specialist, the more likely they were to focus on frontline issues as opposed to systems issues emanating from further up the hierarchy. This variation between participants underlines the fact that this stage is (a) creative and (b) dependent on good background knowledge of the area.

There can be no mechanical process for formulating deep causes or prioritising them. Therefore, it is crucial to ensure both sufficient methodological consistency and transparency at this stage. A key element of this, we suggest, is recording sufficient detail of the analysis of the whole case in order that the basis from which patterns have been selected is accessible and, in principle, alternative selections can be made.

Making recommendations

  • Not all recommendations can be immediately ‘SMART’ (Specific, Measurable, Achievable, Realistic and Timely).
  • Our pilots suggest that three different kinds of recommendations are usefully distinguished: clear-cut; requiring judgement and compromise; and needing further research.

Identifying the underlying patterns shows what issues need further exploration. It starts to shape ideas about ways of maximising the factors that contribute to good performance and minimising the factors that contribute to poor quality work.

A key lesson from the pilot sites has been appreciating the importance of recognising the difference between overt and covert organisational messages. Workers tend to be strongly influenced by the covert messages and, unless these change, efforts to alter practice are unlikely to be successful. One example was the perceived priority given to throughput over the quality of work, with staff reporting strong covert messages about the importance of meeting performance indicators relative to doing what was necessary to meet a specific child’s needs. Allowing assessment forms to be classed as ‘completed’ when they had serious deficiencies was one example of how such pressure was acted out.

Our pilots also suggest that it helps to distinguish three types of recommendation. First, there are those patterns for which there are clear-cut solutions that can be addressed at a local level and are, therefore, feasible for LSCB member agencies to implement. An example is creating a consistent rule across agencies about when and why to copy someone in to a letter rather than addressing the letter to them directly. In this instance it matters less what the rule is and more that there is one and that it is adhered to consistently.

Secondly, there are recommendations that cannot be so precise because they will highlight weaknesses in practice that need to be considered in the light of the other demands on, and priorities of, the different agencies. This is a task more properly done by senior management than by the review team. An example would be where giving greater attention in supervision to detecting errors in reasoning requires more time to be allocated to the critical review aspects of the supervisory role. Can that time be found by cutting back on some other tasks? How will the agency manage time differently?

The third category of recommendations relates to practice issues that need detailed development research in order to find solutions, although those solutions might then have wide relevance to children’s services. For example, difficulties in capturing risk well when completing core assessments indicate a need to research how widespread this problem is and, if necessary, to experiment with alternative theoretical frameworks, with the structuring and formatting of forms, and possibly with software.

Summary of three different kinds of recommendation

  1. Issues with clear-cut solutions that can be addressed locally and by all relevant agencies.
  2. Issues where solutions cannot be so precise because competing priorities and inevitable resource constraints mean there are no easy answers.
  3. Issues that require further research and development in order to find solutions, including those that would need to be addressed at a national level.
