Accelerating police tradecraft
All of us face unexpected situations in our lives. But police officers and analysts have little room for error. It’s their role to identify situations where there is a significant risk of harm and to decide whether, when and how best to respond. If they get it wrong, someone could get hurt and their job could be on the line.
The decision-making itself, assessing what action to take, is the responsibility of police officers, supported by police analysts. But that doesn’t mean technology has no role to play. Quite the opposite.
Cops and robots
Information is the lifeblood of policing. Today’s digital world helps ensure information is available and reduces manual errors. But using this information to understand what is happening out there and which situations should be attracting attention right now is far from easy. Could technologies such as machine learning be trained to spot high-risk situations?
One issue is that situations in the real world vary enormously, and finding or creating a representative dataset to train a machine learning model is hard and fraught with ethical challenges around bias. In the sensitive fields of crime prevention and counter-terrorism, where some ethnic and religious groups already feel they are unfairly singled out by the authorities, getting this wrong has a significant impact.
Machine learning outputs are also hard to validate. And these approaches mostly produce binary outputs, which intrinsically lead to false positives: situations that look high-risk but turn out not to be relevant.
But what if we revisit how expert policing analysts do this work at the moment? Can we learn from them to find a new approach?
Capturing tradecraft
Police analysts rely on advanced tradecraft which enables them to sift through the available information, extract the key factors and infer from them what's going on.
The trouble is that this takes a long time, and analysts can’t possibly get through it all.
But what if we could capture this analyst tradecraft and write it down intuitively for each area of expertise? It could then be managed, challenged, improved and shared. And what if we could use this tradecraft articulation to configure a system that replicates an analyst applying that tradecraft, to identify high-risk situations (see the sketch after the list below)?
- Policing could understand and validate how the system was working: it’s following the tradecraft
- Analysts could understand the output that was produced and why a situation was highlighted as risky, and assess each situation for action as appropriate
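To make this concrete, here is a minimal sketch of what written-down tradecraft might look like as code. Everything in it, the TradecraftRule structure, the indicator names and the example rule, is a hypothetical illustration rather than the actual system:

```python
from dataclasses import dataclass

# Hypothetical illustration: one piece of analyst tradecraft captured as an
# explicit, reviewable rule rather than a learned black-box model.
@dataclass
class TradecraftRule:
    name: str              # the situation the rule looks for
    indicators: list[str]  # the key factors an analyst would extract
    rationale: str         # why these factors matter (open to challenge)

    def matches(self, observed: set[str]) -> list[str]:
        """Return which of the rule's indicators are present in the data."""
        return [i for i in self.indicators if i in observed]

# An example rule an analyst might articulate (entirely illustrative)
rule = TradecraftRule(
    name="possible coercive control",
    indicators=[
        "repeat_callouts_same_address",
        "third_party_reported_concern",
        "victim_withdrew_previous_statement",
    ],
    rationale="Repeat contact plus a withdrawn statement is a known escalation pattern.",
)

observed = {"repeat_callouts_same_address", "victim_withdrew_previous_statement"}
hits = rule.matches(observed)
print(f"{rule.name}: {len(hits)}/{len(rule.indicators)} indicators present")
print("because:", rule.rationale)
```

Because the rule, its indicators and its rationale are explicit data rather than learned weights, they can be reviewed, challenged and improved like any other document, which is exactly the property the two points above depend on.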
Introducing ‘inference’ technology
Let’s take it a step further. What if we could apply the concept of confidence to each situation, based on the contributing information, to provide a sense of how likely it is that the situation is actually playing out and not a coincidence? One simple way such a confidence could be computed is sketched after the list below.
- There would be no more binary false positives; instead, a range of situations with varying confidence levels, based on the contributing information, that the analyst can understand and validate
- We could focus on high-confidence, high-severity situations
- We could also understand where there may be opportunities for early intervention
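As an illustration only, the sketch below attaches a confidence to a situation using a ‘noisy-OR’ combination: each observed indicator carries a weight expressing how strongly it suggests the situation is real, and the situation is scored as real unless every observed indicator is independently a coincidence. The weights here are invented for the example; in practice they would come from the captured tradecraft:

```python
from math import prod

# Hypothetical indicator weights: an analyst's judgement of how strongly each
# factor, on its own, suggests the situation is real rather than coincidence.
INDICATOR_WEIGHT = {
    "repeat_callouts_same_address": 0.4,
    "third_party_reported_concern": 0.3,
    "victim_withdrew_previous_statement": 0.6,
}

def confidence(observed: set[str]) -> float:
    """Noisy-OR: the situation is real unless every observed indicator
    is, independently, just a coincidence."""
    all_coincidence = prod(
        1.0 - INDICATOR_WEIGHT[i] for i in observed if i in INDICATOR_WEIGHT
    )
    return 1.0 - all_coincidence

low = confidence({"third_party_reported_concern"})
high = confidence({"repeat_callouts_same_address",
                   "victim_withdrew_previous_statement"})
print(f"one weak indicator:    {low:.0%} confidence")   # 30%
print(f"two strong indicators: {high:.0%} confidence")  # 76%
```

The specific formula matters less than the property it demonstrates: each indicator’s contribution stays visible, so an analyst can see why one situation scored 76% and another only 30%, and a low but non-zero confidence can itself flag an opportunity for early intervention.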
We refer to this new approach as ‘inference’ technology, as it is ‘inferring’ what’s going on. We’ve built it for policing, and it’s already being trialled in a police force, where the early signs are positive.
We’re on the cusp of a new ‘Augmented Intelligence’ approach that supports police officers and analysts on the frontline in keeping citizens safe, harnessing new inference technology in the race to stay ahead of criminals as they seek to exploit the vulnerable and damage lives.
Here at least, there’s no value in coming second.
About the authors
Matt Boyd is Head of Futures at BAE Systems Digital Intelligence
Richard Thorburn is a Venture Lead at BAE Systems Digital Intelligence