Unleashing the Power of AI: Explainability and Transparency are the Keys to Promoting Ethics, Trust and Inclusion in Digital Policing
Artificial Intelligence is inherently neither good nor bad: it can be used in improper ways, even unintentionally, causing significant harm and infringing on fundamental rights, yet it also holds great promise to promote individual wellbeing and solve societal problems. The development and use of AI require ethical values to ensure that AI tools are developed and used for societal good, to identify bias and mitigate any corresponding discriminatory effects, and to enhance rather than replace human decision making. Ethics unleashes the power of AI to help solve societal issues in a trustworthy way. Recent advances mean that digital policing tools have outpaced the policies that regulate them, making ethics, and especially ethics-by-design, a must-have for the policing sector.
Despite its name, ethics-by-design is not limited to ethical considerations during design and development. The approach also addresses how, when, where and by whom an AI tool will be used. Ethical principles can be both intrinsically valuable (e.g. human dignity) and instrumentally valuable in facilitating societal trust and promoting the inclusion of all individuals affected by the use of an AI tool. Societal trust and inclusivity are critical elements of “policing-by-consent”, the central organising philosophy of policing in the U.K.
Yet, even if we can readily agree that ethics is crucial for AI, we are confronted by a perplexing number of AI ethics guidance documents instructing developers and end users on which ethical principles they ought to promote. Furthermore, how to operationalise abstract ethical principles into concrete, actionable steps is often unclear. Finally, as digital policing tools require collecting and processing diverse kinds of data, there is a high potential for confirmation and reporting bias, as well as for creating proxies for vulnerability or crime through socio-economic status, race or ethnicity. Indeed, a well-known and regularly articulated public concern about policing data is that it contains implicit and explicit biases resulting in discriminatory practices (see, for example, HMICFRS 2020; Burgess 2020; Babuta and Oswald 2019; El-Enany and Bruce Jones 2015).
These challenges can become so complex in the context of digital policing that it is often difficult, yet essential, to know where to start tackling them. In several of our projects and products for police, Trilateral Research found that transparency and explainability are the key starting points for meeting the challenges of mitigating bias and promoting the ethical values of dignity, fairness and human oversight. We fulfil the principle of transparency by assessing and documenting which datasets are ingested into models, from where, and how. We also conduct comprehensive data protection impact assessments and ethical impact assessments, both for clients and on our own tools. In addition, it is essential that our AI applications clearly communicate their scope: what they do, but also what they do not do, so that users have a clear sense of when and how to challenge or accept the output of the AI tool. Transparency is not only a democratic value; it underpins “policing-by-consent” and provides police end users the requisite information to use digital tools in ethical, trustworthy and inclusive ways.
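To make this concrete, the sketch below shows the kind of machine-readable provenance and scope record that such an assessment might produce. It is a hypothetical illustration: the field names and example values are ours, not Trilateral Research's actual documentation schema.

```python
# Hypothetical sketch of a dataset-provenance and scope record; the schema
# and example values are illustrative, not Trilateral Research's format.
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    name: str                       # what is ingested
    source: str                     # from where it originates
    ingestion_method: str           # how it enters the model pipeline
    known_limitations: list = field(default_factory=list)

@dataclass
class ScopeStatement:
    does: list                      # what the tool does
    does_not: list                  # explicit non-uses, so end users know when to challenge output

record = DatasetRecord(
    name="incident-reports-extract",
    source="force records management system (monthly batch)",
    ingestion_method="ETL pipeline with field-level audit logging",
    known_limitations=["reporting bias: only incidents that were recorded appear"],
)

scope = ScopeStatement(
    does=["surfaces records for analyst review, with feature-level explanations"],
    does_not=["makes safeguarding decisions", "replaces professional judgement"],
)

print(record)
print(scope)
```

Keeping the "does not" list explicit is the design point: a scope statement that only advertises capabilities gives end users no basis for deciding when to challenge an output.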
Explainability in AI is likewise a principal vehicle for achieving ethical values. Explanation is a means of communication; it strives to facilitate understanding on the part of the end user. Combining ethical, domain and technical expertise, Trilateral Research has developed essential explainability features for police. For example, the CESIUM application, a research and analytical capability that empowers early intervention and harm reduction approaches to child safeguarding, indicates the relevance of each input feature to the output of the algorithm. This relevance can be reported for an individual, aggregated over all individuals to communicate population-level feature importance, or aggregated over different demographic groups. Value graphs explain how the algorithm works with the body of data it was trained upon. To complement this, CESIUM presents histograms to visualise the variation in algorithm scores across a population, which allows the end user to contextualise any individual result within the broader population. By engaging with these features, police can see whether a specific characteristic (e.g. gender, age, ethnic background) is influencing the algorithmic output, thereby helping to ensure fairness and avoid discrimination. Furthermore, engaging with these features is an unavoidable part of using the application, so end users cannot simply accept a score without seeing how it was produced. This engagement promotes human oversight and helps avoid automation bias.
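CESIUM's internal methods are not described here, so the sketch below uses a simple linear model to illustrate the general pattern the paragraph describes: per-individual feature relevance, aggregation at population and demographic-group level, and a score histogram for context. All data and column names (e.g. `gender`) are synthetic and hypothetical.

```python
# Illustrative sketch only: a linear model where per-individual feature
# relevance is the coefficient-weighted deviation from the population mean.
# None of these fields reflect CESIUM's actual inputs.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.integers(10, 18, n),
    "prior_referrals": rng.poisson(1.5, n),
    "school_absences": rng.poisson(4.0, n),
    "gender": rng.choice(["F", "M"], n),   # hypothetical demographic column
})
y = rng.integers(0, 2, n)                  # synthetic labels

X = pd.get_dummies(df, columns=["gender"], drop_first=True).astype(float)
model = LogisticRegression().fit(X, y)

# Per-individual relevance: each feature's coefficient-weighted deviation
# from the population mean (a crude linear analogue of SHAP values).
relevance = (X - X.mean()) * model.coef_[0]

# Population-level importance: mean absolute relevance per feature.
print(relevance.abs().mean().sort_values(ascending=False))

# Group-level importance: the same quantity per demographic group, which
# surfaces whether a characteristic drives outputs more for one group.
print(relevance.abs().groupby(df["gender"]).mean())

# Score histogram: lets an end user place one individual's score in the
# context of the whole population.
scores = model.predict_proba(X)[:, 1]
counts, edges = np.histogram(scores, bins=10, range=(0.0, 1.0))
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:.1f}-{hi:.1f}: {c}")
```

The linear contribution is the simplest stand-in; in practice, model-agnostic methods such as SHAP values play the same role for non-linear models, and the aggregation and histogram steps are unchanged.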
The ethical landscape of digital policing is complex. Ethical guidance documents are helpful but can also be abstract, obscuring a clear starting point for Ethical AI and thereby risking a deficit of societal trust and inclusivity that hinders the adoption and use of AI tools. As evidenced through work at Trilateral Research, transparency and explainability are key steps to traverse this landscape, promote ethical values and unleash the power of AI to help police solve societal problems in a trustworthy and inclusive manner.
Author:
Dr. Zachary J. Goldberg
Ethics Innovation Manager, Trilateral Research