How AI innovation can assist cyber defenders
Artificial Intelligence (AI) techniques have been helping cyber defenders protect their networks for a number of years. The pace of innovation over the last 12 months, and AI's subsequent entry into mainstream public consciousness, has understandably caused a mixture of excitement and concern about its potential use and misuse across industry and society. Cyber defence is no exception: opportunities exist to apply both new and established AI approaches to improve cyber hygiene, protection, detection and response.
Prevention is better than cure. Software developers can use AI assistants to help them build applications securely, with auto-generated code passed through analysis to filter out insecure design patterns. Preventing common vulnerabilities from entering the codebase in the first place raises the barrier to entry for attackers. This complements existing security testing techniques, such as static and dynamic analysis and fuzzing, implemented as part of a secure software development lifecycle.
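To make the idea concrete, here is a minimal sketch of the kind of pattern-based gate that could sit between an AI assistant and the codebase. The deny-list and the `review_snippet` helper are hypothetical illustrations; real static analysis (SAST) tools are far more sophisticated than regular-expression matching.

```python
import re

# Hypothetical deny-list of insecure patterns a review gate might flag
# before AI-generated code is accepted into the codebase.
INSECURE_PATTERNS = {
    r"\beval\s*\(": "use of eval() on dynamic input",
    r"subprocess\.\w+\([^)]*shell\s*=\s*True": "shell=True enables command injection",
    r"\bmd5\s*\(": "MD5 is not collision-resistant",
}

def review_snippet(code: str) -> list[str]:
    """Return a list of findings for a generated code snippet."""
    findings = []
    for pattern, reason in INSECURE_PATTERNS.items():
        if re.search(pattern, code):
            findings.append(reason)
    return findings

# A generated snippet containing a classic command-injection risk:
print(review_snippet("subprocess.run(cmd, shell=True)"))
```

In practice such checks would run alongside a full linter and SAST suite in the CI pipeline, so insecure suggestions are rejected before a human ever merges them.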
Understanding your adversary and their tactics, techniques and procedures helps you stay one step ahead and plug gaps before they are exploited. Knowing that a threat actor is active and targeting organisations like yours with specific types of attack enables you to prepare your defences. Threat intelligence is built on large, unstructured and constantly evolving datasets, which can be fed into AI models to improve trend detection and boost the speed at which an analyst can turn data into action. Chat-based interfaces to Large Language Models (LLMs) built on threat intelligence could free an analyst from the drudgery of data gathering, transformation and analysis, allowing them to focus their time where they have the most impact: querying data in natural language, getting links to relevant evidence, developing action plans and communicating with stakeholders.
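As a toy illustration of trend detection over unstructured threat reporting, the sketch below extracts MITRE ATT&CK technique IDs (e.g. T1566, phishing) from free-text reports and counts them per month, so a sudden spike in one technique stands out. The sample reports are invented, and real pipelines use far richer models than a regular expression and a counter.

```python
import re
from collections import Counter

# Invented sample reports: (month, free-text observation).
reports = [
    ("2024-01", "Actor used phishing (T1566) and PowerShell (T1059)."),
    ("2024-02", "Campaigns leaned on T1566 attachments again."),
    ("2024-02", "Observed T1486: data encrypted for impact."),
]

def technique_trend(reports):
    """Count ATT&CK technique IDs (Tnnnn) per month across reports."""
    trend = {}
    for month, text in reports:
        ids = re.findall(r"\bT\d{4}\b", text)
        trend.setdefault(month, Counter()).update(ids)
    return trend

print(technique_trend(reports))
```

A month-over-month view like this is the kind of aggregate an LLM interface could summarise on demand, with links back to the underlying reports as evidence.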
Two areas where Machine Learning (ML) models have been established in cyber security for some time are anomaly detection and biometric authentication. Models that detect anomalous user or device behaviour have been built into detection products for a number of years, flagging potential cyber security events to security analysts and even triggering automated responses, such as requiring additional authentication, launching scans or quarantining devices. Biometric authentication, such as fingerprint or facial recognition on mobile devices, uses ML to authenticate users and to block impersonation attempts that rely on cloned fingerprints, photographs, 3D models or deepfakes.
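The core idea behind behavioural anomaly detection can be shown in a few lines: learn a baseline for a user and flag observations that deviate sharply from it. The sketch below uses a simple z-score on login hour; production detection products learn over many more features with far richer models, so this is illustrative only.

```python
import statistics

def is_anomalous(history_hours, new_hour, threshold=3.0):
    """Flag a login hour that deviates sharply from the user's baseline."""
    mean = statistics.fmean(history_hours)
    stdev = statistics.pstdev(history_hours) or 1.0  # avoid divide-by-zero
    z = abs(new_hour - mean) / stdev
    return z > threshold

usual = [9, 9, 10, 8, 9, 10, 9, 8]  # typical office-hours logins
print(is_anomalous(usual, 3))   # a 03:00 login is flagged
print(is_anomalous(usual, 10))  # within the normal pattern
```

In a real product, a flag like this would feed the automated responses described above, for example stepping up authentication before the session proceeds.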
And then there is the threat that keeps Chief Information Security Officers up at night ... ransomware. Whether it's used to exfiltrate sensitive data and threaten to leak it, or to encrypt crucial systems, rendering them inoperable, ransomware has an outsized impact on the organisations it affects. ML models can be used alongside file scans in operating systems and email applications to detect whether a file is likely to be malware and trigger appropriate action to prevent it from executing.
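One feature such file-scanning models commonly draw on is the Shannon entropy of a file's bytes: encrypted or packed payloads approach the 8 bits-per-byte maximum, while plain text sits much lower. The sketch below computes that single feature; a real classifier combines many features, and a lone entropy threshold is illustrative, not a detection method on its own.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte (0.0 to 8.0)."""
    if not data:
        return 0.0
    counts = Counter(data)
    total = len(data)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

plain = b"the quick brown fox jumps over the lazy dog" * 10
uniform = bytes(range(256)) * 4  # stand-in for encrypted content

print(round(shannon_entropy(plain), 2))    # well below the maximum
print(round(shannon_entropy(uniform), 2))  # 8.0 for uniform bytes
```

High-entropy content that suddenly appears where plain documents used to be is one of the signals that can trip an early warning while ransomware is still encrypting.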
However, many of these same capabilities are available to attackers. As NCC Group's Chief Scientist Chris Anley highlighted in his evidence to the UK Parliament's Digital and Communications Committee last month, while the verifiable increase in cyber risk as a result of AI tools is, today, "small to moderate", the wider fast-evolving cyber threat landscape, particularly when it comes to ransomware and supply chain risk, means that the "increase is noteworthy and worth monitoring." It is therefore critical that cyber defenders not only invest in their own AI capabilities, but also seek to understand and respond to the ways in which AI may transform the threat landscape.
There are many opportunities to innovate at the intersection of AI and cyber security. We need to make sure that we seize this opportunity to get, and stay, ahead of attackers by turning a pipeline of research and innovation into practical solutions for industry and society at large. If you would like to learn more about how AI is relevant to cyber security, including its use in defence and attack and the associated regulatory and safety concerns, NCC Group has published a whitepaper, "Safety, Security, Privacy & Prompts: Cyber Resilience in the Age of Artificial Intelligence (AI)". The paper summarises many years of NCC Group research into the application of AI to cyber security and vice versa; you can download the whitepaper here.