Key Outcomes of the AI Seoul Summit
On 21-22 May 2024, six months after the historic Bletchley Summit hosted by the UK, the international community convened, virtually and in South Korea, for the AI Seoul Summit to build on that momentum and advance global cooperation on AI Safety, Innovation and Inclusion. The two-day summit brought together leaders from governments, industry, civil society, and academia to discuss the responsible development and deployment of frontier AI.
The AI Seoul Summit reaffirmed the international community's commitment to shaping the trajectory of AI development through global cooperation and shared guidelines, setting the stage for continued dialogue and concerted action in the months ahead on the road to the France Summit. This insight outlines the key outcomes of the AI Seoul Summit.
Pre-Seoul Summit: Reports and events to inform Seoul Summit dialogue
- Global Commitment to AI Safety reaffirmed: On 14 May 2024, techUK welcomed the Rt Hon Michelle Donelan MP, Secretary of State for Science, Innovation and Technology, to an industry event focused on bringing together different voices to discuss their expectations and hopes for the upcoming AI Seoul Summit. With over 50 senior representatives from businesses across the techUK membership, the event served as a platform for direct engagement between industry and the UK Government, setting the stage for Seoul.
- Scientific Report on Safety of Advanced AI published: The interim International Scientific Report on the Safety of Advanced AI, published on 17 May, is the product of a collaborative effort following the Bletchley Park AI Safety Summit in November 2023. It is the first international scientific report on advanced AI safety, presenting current and anticipated AI capabilities, the kinds of risks we should expect, and approaches to evaluating and mitigating those risks, in order to better inform public policy.
- UK AI Safety Institute published fourth progress report: The progress report, published on 20 May, highlights several significant developments and initiatives: onboarding over 30 technical researchers, appointing Jade Leung as CTO, launching Inspect (an open-source AI safety evaluation platform), publishing the institute's first technical blog, supporting the interim International Scientific Report on the Safety of Advanced AI, opening a new office in San Francisco, and partnering with the Canadian AI Safety Institute.
Outcomes from Day 1: 21 May
- Industry made new voluntary commitments to promote responsible development of advanced AI systems: 16 leading tech companies signed up to the 'Frontier AI Safety Commitments', a set of principles and practices aimed at promoting the responsible development of advanced AI systems. This collective commitment from major industry players represents a significant step towards establishing global norms and standards for AI safety and responsible innovation.
- 10 countries agreed to launch an international network of AI safety institutes: The goal is to accelerate the advancement of AI safety science by forging a common understanding of AI safety, aligning research efforts and establishing shared standards and testing methodologies. The agreement highlights the need for a multinational approach to ensure the safe and responsible development of AI technologies.
- The UK government unveiled the 10 finalists for the inaugural Manchester Prize: The prize recognises pioneering work in applying artificial intelligence for societal benefit. Each of the 10 finalists will receive £100,000 in seed funding to support innovative projects that harness the power of AI for public good in areas such as transport, manufacturing and agriculture.
Outcomes from Day 2: 22 May
- 27 nations signed up to develop proposals for assessing AI risks over the coming months: Under the 'Seoul Ministerial Statement', these countries agreed to develop shared risk thresholds for frontier AI development and deployment, including agreement on when model capabilities could pose 'severe risks' without appropriate mitigations. Examples of such severe risks include helping malicious actors acquire or use chemical or biological weapons, and an AI model's ability to evade human oversight. By aligning their efforts, these nations aim to foster safer and more responsible development and deployment of AI capabilities globally.
- The UK AI Safety Institute (AISI), partnering with the Alan Turing Institute, UKRI and other institutes, announced £8.5 million in research funding for 'systemic AI safety': Moving beyond the risks of individual AI models, this funding will focus on understanding and mitigating the systemic risks AI poses when integrated into larger systems and infrastructures. The AISI will invite grant proposals that directly address systemic AI safety problems or improve understanding in this area, prioritising applications that offer actionable approaches to significant systemic AI risks. This initiative aims to broaden AI safety efforts to encompass the complex systems and infrastructures in which AI operates, recognising the potential for wide-ranging societal impacts.
Post-Seoul Summit
- The AI Fringe will host a discussion on the AI Seoul Summit, covering topics such as AI safety, innovation and inclusion. The panel discussions will feature prominent members of the UK AI ecosystem as well as representatives from the organisers of the next official AI summit in France. These experts will share insights on the outcomes of the Seoul Summit and the responsible development of AI technologies. You can register to join this event here.
If you found this summary helpful and want to learn more about techUK's programming in AI: for AI Adoption, please contact [email protected]; if you are interested in AI Policy, contact [email protected]; and if you are interested in Digital Ethics and AI Safety, contact [email protected].
Tess Buckley
Tess is the Programme Manager for Digital Ethics and AI Safety at techUK.