AI in physical security: Opportunities, risks and responsibility
AI, or more accurately its subsets of machine learning (ML) and deep learning (DL), stands to transform the physical security industry.
This brief primer elaborates on the potential and limitations of these subsets of AI in physical security applications for public spaces, helping security professionals better match AI-based technologies to appropriate use cases.
What is AI in a physical security context?
Machine learning and deep learning are the subsets of AI typically used in physical security systems. These algorithms use learned data to detect and classify objects. When working with data collected by physical security devices such as cameras, doors or other sensors, machine learning uses statistical techniques to solve problems, make predictions, or improve the efficiency of specific tasks. Deep learning, a subset of machine learning built on multi-layered neural networks, learns the relationship between inputs and outputs directly from data. Recognising objects, vehicles and people, or sending an alert when a barrier is breached, are all examples of what this technology can do in a physical security context.
Machines are exceptionally good at repetitive tasks and at analysing large data sets (such as video), and this is where the current state of AI brings the biggest gains. The best use of machine and deep learning is as a tool to comb through large amounts of data for patterns and trends that are difficult for humans to identify. The technology can also help people make predictions and draw conclusions.
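To make the idea concrete, the pattern-finding described above can be as simple as flagging event counts that deviate sharply from the historical norm. The sketch below is purely illustrative, it assumes made-up hourly door-sensor counts and a standard z-score threshold, and is not how any particular product implements detection:

```python
# Minimal sketch: flagging unusual activity in access-event counts.
# Data, function name and threshold are illustrative assumptions.
from statistics import mean, stdev

def flag_anomalies(hourly_event_counts, z_threshold=3.0):
    """Return indices of hours whose event count deviates strongly
    from the average, using a simple z-score test."""
    mu = mean(hourly_event_counts)
    sigma = stdev(hourly_event_counts)
    if sigma == 0:
        return []  # no variation, nothing stands out
    return [i for i, c in enumerate(hourly_event_counts)
            if abs(c - mu) / sigma > z_threshold]

# Typical counts per hour, with one unusual burst at index 5.
counts = [12, 14, 11, 13, 12, 95, 13, 12, 11, 14, 13, 12]
print(flag_anomalies(counts))  # → [5]
```

A human reviewing a day of logs could spot this one burst too; the value of automation is applying the same check, tirelessly, across thousands of sensors.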
Physical security technology does not typically incorporate the subset of AI called large language models (LLMs), the model type behind ChatGPT and other generative AI tools. An LLM is designed first and foremost to satisfy the user, so the answers it gives are not necessarily accurate or truthful. In a security context, this is dangerous.
Reality checks
Any manufacturer using AI in its offerings has a responsibility to ensure that the technology is developed and implemented in a responsible and ethical way.
Here are a few of the biggest misconceptions about AI in physical security that must be consistently challenged:
MYTH: AI can replace human security personnel:
The reality: AI technology can automate repetitive and mundane tasks, allowing human security personnel to focus on more complex and strategic activities. However, human judgment, intuition, and decision-making skills are still crucial in most security scenarios. AI can assist in augmenting human capabilities and improving efficiency, but it requires human oversight, maintenance, and interpretation of results.
MYTH: AI-powered surveillance systems are highly accurate and reliable:
The reality: AI systems make mistakes. They are trained based on historical data and patterns, and their accuracy heavily relies on the quality and diversity of the training data. Biases and limitations in the data can lead to biased or incorrect outcomes. Moreover, AI systems can be vulnerable to attacks where malicious actors intentionally manipulate the system's inputs to deceive or disrupt its functioning.
MYTH: AI can predict security incidents:
The reality: AI can analyse large amounts of data and identify patterns that humans might miss, but it cannot reliably predict specific security incidents. AI systems rely on historical data and known patterns, and they may struggle to detect novel or evolving threats. Additionally, security incidents can involve complex social, cultural, and behavioural factors that may be challenging for AI algorithms to fully understand and address.
MYTH: AI technology is inherently secure:
The reality: While AI can be used to enhance security measures, the technology itself is not immune to security risks. AI systems can be vulnerable to attacks, such as data poisoning, model evasion, or unauthorised access to sensitive information. It is crucial to implement robust security measures to protect AI systems and the data they rely on.
Striking a balance
As with any new technology, acknowledging the risks of AI doesn’t eliminate its potential benefits. With judicious application and proper oversight, AI can increase efficiency and security while also minimising negative impact.