In Praise of the Canadian Algorithmic Impact Assessment Framework
The AI regulatory environment is evolving rapidly, with key new pieces of legislation under discussion, including the highly anticipated European Union AI Act and the Brazilian AI strategy, which is still in the works. One approach to managing compliance in anticipation of this legislation is the use of algorithmic impact assessment (AIA) frameworks. Many institutions and organisations are seeking to embed responsible AI principles, reflecting common themes including accountability, explainability, transparency, human oversight, and data protection (see, for example, OECD, Deloitte, and Accenture). The principles are designed to ensure the trustworthiness of AI products, and AIA frameworks are tools for operationalising those principles within an organisation. This article explores several publicly available AIA tools, highlights the key questions, and emphasises the value of the Canadian AIA framework as a starter for ten.

Tessa Darbyshire is the Responsible AI & Data Science Program Manager at Elsevier, specialising in the ethical governance of algorithmic and data-intensive systems.
Navigating a crowded space
There are several AIA tools, or frameworks with similar goals, currently publicly available. The UK Central Digital and Data Office’s Algorithmic Transparency Standard goes some of the way to creating an auditable framework for companies providing algorithmic services to government. Twitter has published an Algorithmic Harms Rubric, which offers a method for weighting the significance of certain potential harms. And the EU High-Level Expert Group on AI has released the Assessment List for Trustworthy AI, which guides developers in adhering to its higher-level ethical principles. This is a snapshot of a busy space in which, in the absence of formal legislation, the public and private sectors are iterating on early efforts while the regulatory environment settles. Part of the uncertainty centres on two key questions: which technologies do we want the legislation to govern (or, colloquially, what is AI?), and how do we define and measure risk, impact, and harm? Given that no majority consensus has yet emerged on either question, should we just throw in the towel? Or are there plateaus of sanity amidst the mountains of madness?
Glimmers of hope
The Government of Canada has created an example of a fully operational AIA framework, designed to support the Treasury Board’s Directive on Automated Decision-Making. The approach taken in the framework neatly sidesteps the first key question by shifting the focus to automated decision making, and tackles the second by introducing different grounding concepts for discussing “impact”, which is a nebulous concept at best. A question in the impact section asks whether the system or algorithm supports a human decision maker, automates a decision that would otherwise be made by a human, or automates a decision that requires human judgement or discretion. This tiering is a quick way to assess the degree of human oversight, which is one proxy for ethical risk. Other questions focus on the durability and reversibility of potential impacts, addressing themes including individual rights and freedoms, health and wellbeing, economic interests, and the ongoing sustainability of an environmental system. These different windows onto “impact” give teams a framework for discussing how to put responsible AI principles into practice across a broad range of applications. The framework also has the delightful advantage of limiting free-text options, reducing the likelihood of participants writing something akin to ‘my system has no possible negative impacts’ in every box.
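To make the questionnaire style concrete, the following is a minimal, hypothetical Python sketch of how tiered oversight and impact questions could feed a coarse impact level. The dimension names, weights, and thresholds are assumptions made for illustration only; they are not the official Canadian AIA scoring.

# Illustrative sketch only: a toy impact-tiering exercise inspired by the
# questionnaire style of the Canadian AIA. Dimensions, weights, and
# thresholds are assumptions for demonstration, not the official scoring.
from dataclasses import dataclass

# Higher scores indicate less human oversight and therefore a higher proxy risk.
OVERSIGHT_TIERS = {
    "supports_human_decision": 1,
    "automates_routine_decision": 2,
    "automates_judgement_decision": 3,
}

@dataclass
class ImpactAnswers:
    oversight: str                # one of OVERSIGHT_TIERS
    reversibility: int            # 1 = easily reversed, 3 = irreversible
    duration: int                 # 1 = transient, 3 = long-lasting
    rights_and_freedoms: int      # 1 = negligible, 3 = severe

def impact_level(answers: ImpactAnswers) -> str:
    """Map questionnaire answers to a coarse impact level (toy thresholds)."""
    score = (
        OVERSIGHT_TIERS[answers.oversight]
        + answers.reversibility
        + answers.duration
        + answers.rights_and_freedoms
    )
    if score <= 5:
        return "Level I (little to no impact)"
    if score <= 8:
        return "Level II (moderate impact)"
    if score <= 10:
        return "Level III (high impact)"
    return "Level IV (very high impact)"

if __name__ == "__main__":
    example = ImpactAnswers(
        oversight="automates_routine_decision",
        reversibility=2,
        duration=1,
        rights_and_freedoms=2,
    )
    print(impact_level(example))  # -> Level II (moderate impact)

The point of the sketch is the structure, not the numbers: constrained, tiered answers make it harder to wave impacts away and easier to compare assessments across teams.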
Forging a path ahead
The glory of the Canadian AIA is that it is available under an open licence and can be tailored to a broad range of applications. For example, one might add dimensions such as how easy it is to detect a given effect of an AI system, which the Ada Lovelace Institute introduced when testing the framework with the NHS. Or, in a domain-specific setting, one might add more detailed dimensions, such as the impact on healthcare provider-patient relationships in a medical context, which was introduced in the same test. The risk-mitigation scoring mechanism can also be adapted, and it is easy to interpret for a broad spectrum of audiences, including technical teams, senior leaders, and customers, which is critical to successfully embedding the framework in an organisation. Legislators don’t yet require AIAs, but being ahead of the curve demonstrates a commitment to developing AI and data-intensive applications responsibly and sustainably. I wouldn’t argue that this approach is a magic bullet, but it’s an excellent place to start in creating trustworthy systems that empower individuals and communities.
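To illustrate how such tailoring might look in practice, here is a continuation of the earlier toy sketch: a hypothetical detectability dimension (echoing the Ada Lovelace Institute's NHS pilot) and a simple mitigation adjustment in the spirit of the AIA's risk-mitigation scoring. The thresholds and the "drop one level" rule are illustrative assumptions, not the official mechanism.

# Illustrative sketch only: extending the toy assessment above with a
# domain-specific dimension (detectability) and a simple mitigation
# adjustment. All names, weights, and thresholds are assumptions for
# demonstration, not part of the official AIA.

def adjusted_level(raw_score: int, mitigation_fraction: float) -> int:
    """Return an impact level 1-4, dropping one level when a large share
    of applicable mitigation measures (e.g. documentation, monitoring,
    human review) are in place."""
    if raw_score <= 5:
        level = 1
    elif raw_score <= 8:
        level = 2
    elif raw_score <= 10:
        level = 3
    else:
        level = 4
    # Toy rule: strong mitigation lowers the level by one, but never below 1.
    if mitigation_fraction >= 0.8:
        level = max(1, level - 1)
    return level

# A tailored question set might add detectability: harms that are hard to
# notice warrant a higher score even if they appear mild.
custom_dimensions = {
    "detectability": 3,                     # 1 = obvious to affected users, 3 = hard to detect
    "provider_patient_relationship": 2,     # 1 = no change, 3 = fundamentally altered
}

raw = 7 + sum(custom_dimensions.values())   # extend the earlier toy score
print(adjusted_level(raw, mitigation_fraction=0.85))  # -> 3

Keeping the adapted scoring this simple is deliberate: a level that can be explained in a sentence travels well between technical teams, senior leaders, and customers.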