OECD incident report: definitions for AI incidents and related terms
As AI is more widely used across industries, the potential for AI systems to cause harm - whether through unintentional bugs, misuse, or malicious attacks - also increases. Common definitions of AI incidents, hazards and related terms help the UK tech industry, regulators and others align on terminology, and that shared understanding makes it easier to identify, prevent and learn from AI incidents across organisations and borders.
The OECD has released a report proposing a draft definition of an "AI incident" as an event where the development, use or malfunction of an AI system directly or indirectly leads to harms such as injury, disruption of critical infrastructure, human rights violations, or property/environmental damage.
Agreed definitions like this allow for consistent governance of AI's risks and preparation for possible technology or application failures. Such governance is crucial to the tech industry's trustworthy and sustainable development of ever more capable AI systems, and this OECD report marks an important first step towards establishing the frameworks needed to ensure AI's safe and trusted development.
Defining through Differentiation: Actual versus Potential Harms
The OECD report takes the differentiation between actual and potential harm as a starting point for defining AI incidents and related terms.
Understanding the distinction between actual and potential harm allows organisations to proactively address risks associated with AI systems and ensure responsible deployment practices. This awareness enables stakeholders to navigate the complexities of AI incidents, fostering transparency and accountability in AI governance.
Actual harm refers to tangible consequences that have already occurred, whereas potential harm denotes the risk or likelihood of harm occurring in the future. Evaluating potential harm is as crucial as assessing actual incidents. This differentiation is essential for effective risk management and the ethical deployment of AI.
Actual Harms: AI Incident, Serious AI Incident and AI Disaster
The report defines three types of actual harm; read more and review examples on pages 11-12 of the report. These are the definitions provided:

- AI Incident: An AI incident is an event, circumstance or series of events where the development, use or malfunction of one or more AI systems directly or indirectly leads to harms (the report provides a list of qualifying harms).

- Serious AI Incident: A serious AI incident is an event, circumstance or series of events where the development, use or malfunction of one or more AI systems directly or indirectly leads to harms (the report provides a list of the more severe harms that qualify, such as the death of a person or serious harm to health).

- AI Disaster: An AI disaster is a serious AI incident that disrupts the functioning of a community or a society and that may test or exceed its capacity to cope using its own resources. The effect of an AI disaster can be immediate and localised, or widespread and lasting for a long period of time.
Potential Harms: AI Hazards and Serious AI Hazards
The report defines two types of potential harm; read more and review examples on pages 13-14 of the report. These are the definitions provided:

- AI Hazard: An AI hazard is an event, circumstance or series of events where the development, use or malfunction of one or more AI systems could plausibly lead to an AI incident (i.e. to the harms listed in the report).

- Serious AI Hazard: A serious AI hazard is an event, circumstance or series of events where the development, use or malfunction of one or more AI systems could plausibly lead to a serious AI incident or an AI disaster (i.e. to the more severe harms listed in the report).
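To make the relationships between these five terms concrete, below is a minimal, illustrative sketch in Python of how an organisation might encode the taxonomy in an internal event log. The record fields (harm_occurred, harm_is_serious, community_level_disruption) and the classify helper are simplifying assumptions for illustration only - they are not part of the OECD report, and a real triage process would assess the specific harm types the report lists.

```python
from dataclasses import dataclass
from enum import Enum


class OECDCategory(Enum):
    """The five categories proposed in the OECD report."""
    AI_HAZARD = "AI hazard"
    SERIOUS_AI_HAZARD = "serious AI hazard"
    AI_INCIDENT = "AI incident"
    SERIOUS_AI_INCIDENT = "serious AI incident"
    AI_DISASTER = "AI disaster"


@dataclass
class EventRecord:
    """A hypothetical internal record of an AI-related event."""
    description: str
    harm_occurred: bool               # actual harm (incident) vs potential harm (hazard)
    harm_is_serious: bool             # e.g. death, serious harm to health, serious disruption
    community_level_disruption: bool  # disruption that may exceed a community's capacity to cope


def classify(event: EventRecord) -> OECDCategory:
    """Map an event to an OECD category, following the actual/potential split."""
    if event.harm_occurred:
        if event.community_level_disruption:
            return OECDCategory.AI_DISASTER
        if event.harm_is_serious:
            return OECDCategory.SERIOUS_AI_INCIDENT
        return OECDCategory.AI_INCIDENT
    # No actual harm yet: the event could plausibly lead to one of the above.
    if event.harm_is_serious or event.community_level_disruption:
        return OECDCategory.SERIOUS_AI_HAZARD
    return OECDCategory.AI_HAZARD


if __name__ == "__main__":
    example = EventRecord(
        description="Model update misroutes emergency calls for several hours",
        harm_occurred=True,
        harm_is_serious=True,
        community_level_disruption=False,
    )
    print(classify(example))  # OECDCategory.SERIOUS_AI_INCIDENT
```

Encoding the definitions this way also makes the actual/potential distinction explicit: the same event description moves from "hazard" to "incident" only once harm has materialised.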
So, what does this mean for UK AI companies and the wider UK AI industry?
As AI risk management regulations emerge (e.g. EU AI Act), UK tech companies need clear definitions to comprehend their compliance obligations around AI incident reporting, risk assessments and other forms of AI assurance. Well-defined terms for AI incidents lay the groundwork for frameworks to systematically report, analyse and respond to such events. This increases accountability and incentives for responsible AI development.
The OECD dimensions of harm outlined can guide how tech companies assess and mitigate risks across the AI system lifecycle - from data issues to model vulnerabilities to real-world deployment hazards.
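As a rough illustration of how those dimensions could be operationalised, the sketch below records a lifecycle-stage risk against the harm dimensions named in the OECD definitions. The lifecycle stage names, the likelihood scale and the flagged_dimensions threshold are assumptions made for this example, not something prescribed by the report.

```python
from dataclasses import dataclass, field
from enum import Enum


class LifecycleStage(Enum):
    """Hypothetical AI lifecycle stages for a risk register."""
    DATA_COLLECTION = "data collection"
    MODEL_DEVELOPMENT = "model development"
    DEPLOYMENT = "deployment"
    MONITORING = "monitoring"


# Harm dimensions drawn from the OECD definitions (health and safety,
# critical infrastructure, human rights, property/communities/environment).
HARM_DIMENSIONS = (
    "health and safety",
    "critical infrastructure",
    "human rights",
    "property, communities and environment",
)


@dataclass
class RiskEntry:
    """One row in a hypothetical AI risk register."""
    stage: LifecycleStage
    description: str
    # Estimated likelihood (0-1) that each harm dimension is affected;
    # keys should come from HARM_DIMENSIONS.
    harm_likelihood: dict = field(default_factory=dict)

    def flagged_dimensions(self, threshold: float = 0.5) -> list:
        """Return harm dimensions whose likelihood meets or exceeds the threshold."""
        return [d for d, p in self.harm_likelihood.items()
                if d in HARM_DIMENSIONS and p >= threshold]


entry = RiskEntry(
    stage=LifecycleStage.DEPLOYMENT,
    description="Chatbot gives unverified medical advice",
    harm_likelihood={"health and safety": 0.7, "human rights": 0.2},
)
print(entry.flagged_dimensions())  # ['health and safety']
```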
Read more here: Defining AI incidents and related terms | OECD
Tess Buckley
Tess is the Programme Manager for Digital Ethics and AI Safety at techUK.