Police use of AI: A Force for good or a public threat?
Artificial intelligence (AI) is one of the fastest-growing and most talked-about digital technologies of the moment. While many industries use AI to supercharge their businesses with insights and agility, it is not yet widely used in law enforcement and public safety, primarily because the rules in this sector are different. In this blog, we look at how AI can help police forces become more effective without damaging public trust.
Endless opportunities with AI
AI has the potential to transform the way police forces prevent, investigate, and solve crime. By rapidly sorting and analyzing huge amounts of data, AI can identify patterns and links between reported crimes faster and more accurately than a human analyst can.
With these insights, AI can potentially reduce the time between a crime and a conviction, and it can be used to better understand the behaviors and chains of events that lead to crime, giving police forces a head start in implementing measures that block those pathways. However, despite these potential benefits, there is widespread public concern about whether police forces will use the technology ethically. To build and maintain trust between the police force and the public it serves, this concern must be fully understood and addressed in the design and implementation of any AI capability.
Key public concerns
One of the public’s key concerns is that the use of AI may serve only to justify existing discrimination or bias, and may even amplify it. In principle, because AI lacks human emotions, it has the potential to be free of such discrimination and bias.
In practice, however, if the data used by an AI system contains a bias, if those who built the algorithm are themselves biased, or if the effects of biased data simply haven’t been considered, there is a real risk that the AI tool will amplify discrimination and inequality within society. Consider this example: if crime data collected largely from minority ethnic neighborhoods is used to train an algorithm that predicts where crime will occur, then without recognizing and correcting for that sampling bias, the AI’s outputs could lead to police interventions that are disproportionately focused on those same communities.
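The feedback loop described above can be made concrete with a small sketch. This is a hypothetical illustration, not a real policing system: the area names, counts, and patrol-intensity figures are invented, and real bias correction is far more involved. The idea is simply that where one area is patrolled more heavily, more of its crime gets recorded, so raw counts measure police attention as much as underlying crime; dividing by patrol intensity is one crude way to correct for that before the data reaches a predictive model.

```python
# Hypothetical sketch: correcting recorded crime counts for uneven
# patrol coverage before they feed a predictive model.
# All names and figures below are illustrative, not real police data.

# Incidents recorded per neighbourhood (what the force observed).
recorded = {"area_a": 120, "area_b": 30}

# Relative patrol intensity: area_a is patrolled 3x as heavily,
# so a larger share of its crime is observed and recorded.
patrol_intensity = {"area_a": 3.0, "area_b": 1.0}

def adjusted_counts(recorded, patrol_intensity):
    """Down-weight counts from heavily patrolled areas so the model
    sees an estimate of underlying crime, not of police attention."""
    return {area: recorded[area] / patrol_intensity[area] for area in recorded}

print(adjusted_counts(recorded, patrol_intensity))
# area_a: 120 / 3 = 40.0, area_b: 30 / 1 = 30.0
```

Without the adjustment, a model trained on the raw counts would send yet more patrols to area_a, generating yet more recorded incidents there, which is exactly the self-reinforcing loop the text warns about.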
Using the right data in the right way
Avoiding bias and discrimination caused by misinformation or misappropriated data requires a concerted effort on the part of the police: they must collect and use the right data in an appropriate, unbiased manner. This means developing and deploying AI capabilities with a diverse team of people who together can ensure that the goal of eliminating bias is achieved. Once a capability is deployed, police forces must communicate clearly, effectively, and transparently about its purpose, how it is used, and how they have mitigated the risk of bias.
Another key public concern is that data will be misused in a way that infringes on people’s rights and freedoms, or that it will not be properly safeguarded. Complying with current legislation, standards, and other relevant regulations, such as Management of Police Information (MoPI), must be a given, but the public will expect police to go further.
For their part, police forces must future-proof their AI capability against new rules and regulations that may emerge during its lifespan. They must also ask whether, and to what extent, they need to use personal data, so that any such use is clearly justified. Where there is a clear public benefit to using certain data to keep people safe, the police should retain that data only for as long as necessary, while ensuring it cannot be misused, whether deliberately or accidentally. When building AI algorithms, it is useful to encode rules governing how different categories of data may be used, to protect against inappropriate usage.
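The idea of encoding data-use rules directly into the system can be sketched as a simple policy check. Everything here is an assumption for illustration: the data categories, permitted purposes, and retention periods are invented placeholders, not MoPI schedules or any force's real policy. The point is only that purpose limitation and retention limits can be enforced in code before an AI component is allowed to read a record.

```python
# Hypothetical sketch of purpose-limitation and retention checks applied
# before an AI component may access a record. All categories, purposes,
# and retention periods below are illustrative placeholders.
from datetime import date, timedelta

# Which processing purposes are permitted for each data category.
ALLOWED_PURPOSES = {
    "custody_image": {"suspect_identification"},
    "witness_statement": {"case_investigation"},
}

# How long each category may be retained (illustrative values).
RETENTION = {
    "custody_image": timedelta(days=365 * 6),
    "witness_statement": timedelta(days=365 * 10),
}

def may_use(category, purpose, collected_on, today):
    """Allow access only if the purpose is permitted for this data
    category and the record is still within its retention period."""
    if purpose not in ALLOWED_PURPOSES.get(category, set()):
        return False
    return today - collected_on <= RETENTION[category]

# A custody image may support suspect identification while in retention...
print(may_use("custody_image", "suspect_identification",
              date(2020, 1, 1), date(2021, 1, 1)))   # True
# ...but not an unrelated purpose, even within the retention period.
print(may_use("custody_image", "case_investigation",
              date(2020, 1, 1), date(2021, 1, 1)))   # False
```

Making the rules explicit like this also supports the transparency the public expects: the policy table can be published and audited independently of the AI model that sits behind it.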
Staying current and relevant
Technology is constantly changing and developing. While criminals harness the latest technology to outwit governments and law enforcement agencies, police can use technology not only to keep up but to stay one step ahead. If they are to realize the significant benefits that AI can deliver for law enforcement, the police need to embrace AI technology fully and transparently. In this way, the public can see how and why AI tools are being used to protect them and safeguard their freedoms, and can be assured that their concerns have been understood and acted upon.
The positive impact of AI in law enforcement
Alarmist headlines often focus on the dangers of machines taking over and the risks to our personal freedoms. In reality, AI is a powerful tool which, if used responsibly and appropriately, will lead to more crimes being solved more quickly, while providing valuable insights into how to prevent further crimes from being committed. With a lower crime rate, communities will be safer, and the police will enjoy greater trust and confidence from the people they serve.
About the author:
Colin Stonelake, Client Consulting Partner, Atos
Colin is Atos’ client consulting partner for the UK Emergency Services Sector. His role includes working with clients on their digital transformation journeys and supporting the implementation of digital capabilities to achieve strategic business goals. Colin has been with Atos since 2002, and outside the UK he has also worked with governments and law enforcement agencies in Afghanistan, Nigeria, Ethiopia, Sudan, South Sudan and Ukraine.
https://www.linkedin.com/in/colin-stonelake-3b27811/
Twitter: @atos