What’s the link between military intelligence and Learning Together reviews?

Featured article - 09 February 2018
By Sarah Peel, SCIE's Deputy Head of Learning Together


My great aunt Mabel was a code-breaker in the First World War. She was the first woman to be appointed to a secret unit of the Women’s Army Auxiliary Corps (WAAC) in September 1917, based in Northern France with the task of deciphering encrypted German communications. The results were then sent on to the War Office for analysis and use in strategic planning or more immediate action. Thanks to my grandfather and the care he took to keep Mabel’s things after her death, I found myself last week in the Museum of the Military Intelligence Corps in Bedfordshire.

It was in conversation with the Museum Curator that I began to see the similarity between the task of a Learning Together review and that of military intelligence. First base for both is intelligence gathering from a range of sources. Second, and arguably most importantly, comes the analysis of that intelligence – and here’s the rub for a systems review – with a view to predicting what is likely to happen should no action, or the wrong action, be taken. This takes the form of a ‘best guess’, based on a variety of evidence, including anecdotal, pointing to the likely scale of the problem.

Critically, the analyst then makes the stakes clear in terms of what senior officers need to know, as opposed to what they might want to know. The Curator might easily have been describing the way a Learning Together Finding is developed, evidenced and then passed to a Board or Committee with key considerations to think about before deciding the most appropriate way forward.

What would Mabel have made of it?

I don’t want to take the military analogy too far, but there may be mileage in learning from it when considering the dynamic there can be between a lead reviewer and a review’s commissioners. How open are strategists to hearing what they need to know about the functioning of their systems, as opposed to what they might want to know – particularly if that concerns the operation, for example, of a newly restructured service? How able are reviewers to resist the pressure that can be experienced on delivery of a report that attempts to do this, and to remain on message, confident both in their analysis and in the evidence it is based upon? And how likely is the right learning to result if the balance is skewed?

If a reviewer (and their Review Team) is convinced enough of the basis for a Systems Finding, it is not their responsibility to change it under pressure to make it more palatable, but to put the case for it sharply enough for the implications to speak for themselves. Mabel would, I think, have enjoyed this work!

Media Contact

Steve Palmer, Press and Public Affairs Manager
Tel: 020 7766 7419 | Mob: 07739 458 192 | Email: steve.palmer@scie.org.uk