techUK Hosts Workshop with the Responsible Technology Adoption Unit on Forthcoming AI Management Essentials
On 11 September, techUK held a workshop from 9:30 to 12:30 with DSIT’s Responsible Technology Adoption Unit (RTA), featuring an address from Felicity Burch, Director of the RTA, and facilitation by Nuala Polo, the RTA’s AI Assurance Lead, with members of techUK’s Digital Ethics Working Group in attendance. The session gave members the opportunity to test and provide feedback on a forthcoming assurance tool due for public consultation in Autumn 2024.
Over the last year, an increasing number of AI governance frameworks and standards have emerged, aiming to help organisations develop governance practices that support the development and deployment of responsible and trustworthy AI. However, many organisations, particularly SMEs and startups, have difficulty navigating this landscape.
To address these challenges, the RTA is developing an AI Management Essentials tool. The tool, which was presented at the workshop, will provide open-access, simplified baseline requirements for responsible and trustworthy AI development and deployment.
In this half-day workshop, the RTA shared the draft tool and discussed the research behind its development, followed by an interactive session in which participants engaged with the tool and provided insight on its usability, with the ultimate goal of ensuring it is fit for purpose and meets the needs of startups and SMEs.
Some attendees were impressed to see such a cohesive tool, one that funnels leading frameworks into concise prompts requesting evidenced answers. The tool is expected to find many use cases; one significant application is assessing whether responsible AI management systems are in place within an organisation. This assessment capability recognises that organisations are at different stages in their digital ethics journey and can help them understand their current position and areas for improvement.
Throughout the session, key themes emerged in members' feedback, focusing on the tool's usability and implementation within organisations. Members raised questions about how to interpret the tool and which roles within a company would be best suited to engage with it. They also discussed the importance of identifying the key stakeholders who should be involved in the process and who would be best qualified to respond to the tool's prompts.
The workshop also addressed regulatory and legal considerations, exploring the complexities of development versus deployment and the associated challenges in intellectual property. Participants appreciated the inclusion of references to existing relevant laws, which provided valuable context. However, there were calls for clearer lines of accountability and ownership of the processes outlined in the tool. Additionally, members expressed a desire for more detailed signposting and explanations regarding the origins and rationale behind the questions posed in the assessment.
We look forward to supporting AI Management Essentials and encourage feedback once it is published for public consultation in Autumn 2024.
For more information on joining techUK’s Digital Ethics Working Group, please contact [email protected].
Tess Buckley
Tess is the Programme Manager for Digital Ethics and AI Safety at techUK.