Responsible Technology Adoption Unit Expands Portfolio of AI Assurance Techniques
On 26 September, the Responsible Technology Adoption Unit (RTA) expanded its Portfolio of AI Assurance Techniques with new use cases. The portfolio, developed by the RTA (a directorate within DSIT) in initial collaboration with techUK, serves as a valuable resource for individuals and organisations involved in the design, development, deployment, or procurement of AI-enabled systems.
The portfolio showcases real-world examples of AI assurance techniques, supporting the development of trustworthy AI. The new additions offer organisations practical insights into assurance mechanisms and standards in action.
techUK members Anekanta and Kainos shared best practice in this update of the Portfolio of AI Assurance Techniques:
This case study describes Anekanta® AI's Facial Recognition Privacy Impact Risk Assessment System™, a tool designed to address the ethical and legal challenges associated with facial recognition technology. The system helps organisations identify and mitigate risks related to the use of facial recognition, ensuring compliance with relevant laws and regulations while promoting responsible and ethical use. Anekanta® specialises in de-risking high-risk AI and contributes to global best practices and standards, including input on the BS9347 British Standard. Their system is based on recognised regulations, principles, and standards for AI technology, including the EU AI Act, and considers specific regional, national, and local requirements. Using a proprietary regulation database, the system provides an independent pre-mitigation report with tailored recommendations for compliance and risk minimisation. The report covers potential risk levels, applicable legislation, EU AI Act requirements, recommended mitigations, and residual risks requiring ongoing management, helping organisations navigate the complex landscape of facial recognition technology implementation.
This case study outlines Kainos' collaboration with the Defence Science and Technology Laboratory (Dstl) on the Defence AI Centre (DAIC) programme, focusing on implementing the UK Ministry of Defence's AI ethics principles in defence-related AI products and services. The approach centred on conducting ethics and harm workshops, inspired by Microsoft's Harms Modelling, which brought together a diverse team of experts to identify potential benefits, harms, and mitigations of AI systems. These workshops, structured around the MoD's AI ethical principles, were integrated into the agile delivery cycle from the start and revisited throughout the project, ensuring an ethics-by-design approach. The process was part of a broader framework addressing safety, legal considerations, and testing, highlighting the critical importance of ethical implementation in AI development, particularly in sensitive areas like defence.
We welcome these enhancements to the Portfolio, as they offer concrete examples of ethical principles in practice and guidance for ensuring responsible AI implementation across various sectors.
If you are interested in learning more about digital ethics, join us at the eighth annual Digital Ethics Summit by registering here.