Investigating AI: industry support for AI adoption in criminal justice
Artificial Intelligence (AI) has now become a vital tool to inform decision making and strategy across a wide range of business areas. Organisations that successfully leverage AI have distinct advantages, with AI and Machine Learning (AI/ML) proving invaluable for finding actionable insights within data. For the MoJ, successful adoption of AI could improve prison safety, justice outcomes, citizen services and staff wellbeing – and save money.
So how can the MoJ learn from broader industry experience? Consider how industry is training its algorithms to eliminate bias – this knowledge could be used to deliver fairer sentencing, for example. By adopting techniques similar to those used to predict demand for goods on supermarket shelves, could the Prison Service improve how it predicts demand for cell usage? The calculation of risk in the insurance industry could be adapted to predict risk of violence in inmate populations. Analysis of IoT data used to support smart cities could be applied to prisons to improve prison safety. Healthcare providers use AI to predict peak demand – this approach could help inform resource planning and allocation across the justice sector.
Advanced analytical techniques are already being used within the justice sector in other parts of the world to support decision making and drive improvements to justice outcomes:
- One US Department of Corrections has reduced recidivism rates for the first time in seven years using insight derived from AI driven dashboards.
- Another has used AI and predictive analytics with real-time data to reduce incidents of violence against staff by 50%.
- A US Justice Agency has produced efficiency savings estimated at $12 million annually by providing a holistic view of each offender through a single solution.
Important considerations for adopting AI
As with any organisation leveraging AI today, the MoJ’s approach will need to consider:
Transparency: Justice agencies must be open about their use of AI tools and data sources and be able to understand and explain what they do with the technology. This openness will build public trust and allow informed debates, for example on the scope of generative AI technologies.
Accountability: Developers and users of predictive tools need mechanisms in place to ensure AI predictions do not become the sole basis for justice actions. Rigorous testing on past cases, including miscarriages of justice, will help to ensure AI performs well on edge cases.
Open source integration: A holistic AI platform combines open source technologies with Commercial Off-The-Shelf (COTS) solutions - uniting the agility of open source with the governance, auditability and speed of deployment of COTS.
Auditability: A robust governance and audit process for records and versioning will identify which user and which version of a model made a decision at a particular point in time. Testing and deployment processes including associated sign offs for models in production need to be auditable.
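The audit requirement above can be sketched in code. This is a minimal, hypothetical illustration (the record fields, model names and IDs are assumptions, not the MoJ's actual scheme): each prediction is stamped with the model version, the user, a timestamp, and a hash of the inputs, so that it is always possible to say which version of which model made which decision, and on what basis.

```python
import hashlib
import json
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class DecisionAuditRecord:
    """Immutable record tying one prediction to a model version and user."""
    model_name: str
    model_version: str
    user_id: str
    timestamp: str
    input_hash: str   # hash of the inputs, not the raw data, to limit what the log holds
    prediction: str

def audit_prediction(model_name: str, model_version: str, user_id: str,
                     inputs: dict, prediction) -> DecisionAuditRecord:
    # Canonicalise the inputs before hashing so identical inputs always match.
    payload = json.dumps(inputs, sort_keys=True).encode()
    return DecisionAuditRecord(
        model_name=model_name,
        model_version=model_version,
        user_id=user_id,
        timestamp=datetime.now(timezone.utc).isoformat(),
        input_hash=hashlib.sha256(payload).hexdigest(),
        prediction=str(prediction),
    )

# Hypothetical example values:
record = audit_prediction("risk-model", "2.3.1", "officer-042",
                          {"case_id": "X123", "features": [0.2, 0.7]}, "low")
print(record.model_version)  # → 2.3.1
```

In practice such records would be written to an append-only store, and the same versioning discipline would cover the testing and sign-off steps before a model reaches production.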
Human in the loop: Officers and staff, with their wealth of experience, will always be irreplaceable in justice. The focus needs to be on finding innovative ways to make AI and its results accessible to them.
Bias Mitigation and agility: Regular audits and model updates are essential to identify and mitigate biases in data and algorithmic design, and to align with evolving societal norms and legal standards. Because AI model performance degrades over time, an AI platform should manage the complete model lifecycle, including model development, implementation and use, automation and simplification, ensuring end-to-end traceability and delivering changes promptly and accurately.
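One simple form such a regular audit can take is a demographic parity check: compare the rate at which a model flags individuals across demographic groups, and alert when the gap exceeds an agreed threshold. This is an illustrative sketch only (the data, groups and threshold are invented), not a complete fairness methodology.

```python
from collections import defaultdict

def positive_rates_by_group(predictions, groups):
    """Share of positive (flagged) predictions within each demographic group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for pred, group in zip(predictions, groups):
        totals[group] += 1
        positives[group] += int(pred)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive rates between any two groups."""
    rates = positive_rates_by_group(predictions, groups)
    return max(rates.values()) - min(rates.values())

# Hypothetical audit data: 1 = flagged, 0 = not flagged
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_gap(preds, groups)
# Group "a" is flagged at 0.75, group "b" at 0.25, so gap = 0.5.
# An audit process might raise an alert whenever gap exceeds, say, 0.2.
```

Running this check on every model update, and recording the result alongside the audit trail, turns bias mitigation from a one-off exercise into part of the lifecycle described above.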
Community Engagement: Involving the community can lead to more equitable practices and foster a collaborative approach to justice outcomes.
Performance and cost effectiveness: Proactive steps to reduce AI’s carbon footprint are essential. The energy required to run AI tasks is growing at an estimated annual rate of between 26% and 36%. This means that by 2028, AI could be using more power than the entire country of Iceland used in 2021. See the Futurum report.
Actionable: AI outputs should be accessible and explainable, available through a user-friendly interface with plain-language descriptions that support onward investigation.
In summary
An integrated, holistic AI platform addresses these considerations and aligns with the MoJ’s goal to improve justice outcomes. AI will always need the oversight of experienced professionals to harness its benefits. If the lessons from wider industries translate to the justice sector, the benefits could be widespread.
Georgie Morgan
Georgie joined techUK as the Justice and Emergency Services (JES) Programme Manager in March 2020, then becoming Head of Programme in January 2022.
Cinzia Miatto
Cinzia joined techUK in August 2023 as the Justice and Emergency Services (JES) Programme Manager.
Ella Gago-Brookes
Ella joined techUK in November 2023 as a Markets Team Assistant, supporting the Justice and Emergency Services, Central Government and Financial Services Programmes.