Artificial Intelligence in Offender Management - Could machines be more successful?
Last week the International Corrections and Prisons Association, in collaboration with Europris, organised its bi-annual Global Technology for Corrections Conference. Besides the wide variety of interesting presentations and conversations about how a global pandemic suddenly made it possible to implement non-security-focussed technologies in prisons all over the world, the most popular topic during this three-day event was undoubtedly the use of Artificial Intelligence. With "disrupting corrections" as its theme, we understood that the disruptive character of implementing technology in prisons and probation services mainly comes from the rather conservative and risk-averse nature of many of those institutions, where any change seems disruptive. However, the prevailing perception of Artificial Intelligence and the aspiration to use AI in the context of offender management seemed to be something of an exception to this rule. In several presentations and discussions during the conference we spotted huge enthusiasm for this emerging technology and for how it could help us reduce risk, make better choices, and reduce recidivism.
In a recent article, we analysed the current and potential future use of Artificial Intelligence (AI) in prisons, including the use of algorithms in offender management. The literature review and the results of the survey confirmed this ambition to start using AI, an ambition fed by the desire to improve decision-making about finding the best trajectory for offenders given their needs and minimising their risks. Using algorithms to assess risk already has a long history, so the step of adding machine learning to improve those tools should not surprise us. And even though the success of AI algorithms is still limited, and they do not always produce better outcomes than control groups in which humans did the assessment, increased effort and investment in using more complex sets of parameters seems to be improving AI's capabilities and showing more promising results.
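To make the idea concrete, here is a minimal, purely illustrative sketch of the kind of model that sits behind many ML-based risk assessment tools: a classifier that maps a handful of offender-level parameters to a probability of reconviction. The feature names and data below are invented for illustration; no real tool or dataset is being referenced.

```python
# Illustrative sketch only: a classifier mapping hypothetical offender-level
# parameters to a recidivism risk score. All features and data are synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1000

# Hypothetical parameters an assessment tool might use (synthetic values).
X = np.column_stack([
    rng.integers(18, 65, n),   # age at assessment
    rng.integers(0, 10, n),    # number of prior convictions
    rng.integers(0, 2, n),     # stable housing on release (0/1)
    rng.integers(0, 2, n),     # employment or training in place (0/1)
])
# Synthetic outcome: reconviction within two years (0/1).
y = rng.integers(0, 2, n)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Such tools typically output a probability (risk score), not a yes/no decision.
risk_scores = model.predict_proba(X_test)[:, 1]
print("Example risk scores:", risk_scores[:5].round(2))
print("AUC on held-out data:", round(roc_auc_score(y_test, risk_scores), 2))
```

Because the outcome labels here are random noise, the model scores close to chance, which is precisely the point: a model of this kind can only be as good as the signal contained in the data it is trained on.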
Unravelling the complexity of the parameters involved in understanding human behaviour is key here, both for analysing criminogenic factors and assessing risks and for identifying the elements that would support rehabilitation and a successful re-entry into society. Since it is now widely acknowledged that the progression from persistent offending to desistance from crime is the outcome of a complex interaction between subjective/agency factors and social/environmental factors, any algorithm that aspires to guide our decision-making should be trained on datasets that include those factors. So even without entering into the discussion of whether it would be possible at all to achieve an acceptable level of accuracy to make a difference in advising offender case managers, it already seems almost impossible today to find the necessary datasets containing all those factors that would enable training such an algorithm. Many correctional organisations are still struggling to implement integrated, end-to-end offender management systems that support offender case management in both prisons and the community, as well as the operational work and collaboration in the field.
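As a purely hypothetical illustration of what that would mean in practice, the sketch below outlines the shape of a single training record that combines static criminal-history factors with subjective/agency and social/environmental ones. Every field name is invented; the point is how much of this information currently sits outside structured offender management systems, if it is recorded at all.

```python
# Hypothetical record layout only: illustrates the breadth of factors a
# desistance-aware training dataset would need; none of these fields refer
# to an existing system or data standard.
from dataclasses import dataclass
from typing import Optional

@dataclass
class OffenderRecord:
    # Static / criminal-history factors (commonly available)
    age: int
    prior_convictions: int
    index_offence_category: str

    # Subjective / agency factors (rarely captured in a structured way)
    motivation_to_change: Optional[int]      # e.g. a 1-5 self-report scale
    self_identity_shift: Optional[bool]      # sees themselves as an ex-offender
    hope_for_the_future: Optional[int]       # e.g. a validated hope scale score

    # Social / environmental factors (often spread across agencies)
    stable_housing_on_release: Optional[bool]
    family_or_peer_support: Optional[bool]
    employment_or_education: Optional[bool]

    # Outcome the algorithm would be trained on
    reconvicted_within_two_years: Optional[bool] = None
```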
And even if we did have such large datasets available, they would be based on our existing correctional environments: places that are, on average, still far from the engaging, motivating, and humane environments we would expect them to be. A machine trained on data harvested in those environments does not seem to have much chance of making a fundamental difference and leading to real positive behavioural change. So, without minimising the potential use of AI in the future, I would advocate some modesty at this point: let us first invest in embracing technology to improve our existing correctional systems, based on what we as humans have already known for quite some time, before expecting help from machines or hoping they will be more successful in this domain than we are.