Safeguarding the UK economy: Unlocking the security potential of Generative AI
In an era of deep change, cyber threats are constantly evolving. Demand for Cyber Security skills continues to outstrip supply, yet protecting the UK economy has never been more critical. Our Cyber Security team is pioneering the use of Generative AI to meet this challenge head-on, offering scalable and dynamic solutions that secure services and contribute to the nation's economic stability.
While we're at the forefront of adopting the technology that will help safeguard the future, we are also safeguarding the technology itself by contributing to industry-leading LLM security standards and ensuring it is used responsibly and safely.
Innovative Applications
Leveraging our culture of technical exploration, realised through hackathons, dedicated innovation periods, and domain-specific investigations, we have explored the following applications of Generative AI:
- ChatGPT for OWASP ASVS Tests: Using ChatGPT's code generation capabilities, we've automated the creation of OWASP Application Security Verification Standard (ASVS) tests from threat models and system contexts (sketched after this list). Automating these critical security tests speeds up compliance and fortifies the digital infrastructure the UK economy relies on.
- DASTy, a Chrome Developer Plugin: An intelligent plugin that performs on-the-fly HTTP traffic analysis. Using ChatGPT, it provides real-time assessment of security vulnerabilities, mitigations, and tailored recommendations, whilst offering a chat-based investigation mode (a simplified listener is sketched below). The plugin exemplifies how Generative AI can detect evolving vulnerabilities and help security professionals evaluate them in context.
- Auto-Configuration of Secure Cloud Environments: Our team members are active in the UK cyber community, and one of our Security Engineers recently won first place at the HackTheHub Fintech Hackathon. Their winning proof-of-concept uses ChatGPT to auto-configure secure cloud environments from text-based non-functional requirements (also sketched below). This is a step towards making the cloud services essential to the UK's economy secure by design, reducing the risk of cyber threats.
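
To make the first of these ideas concrete, here is a minimal sketch of how ASVS test generation might be wired up. It assumes the OpenAI chat completions HTTP API in a Node 18+ environment; the prompt wording, the gpt-4o model name, and the Jest output format are illustrative choices, not our production pipeline.

```typescript
// Sketch: ask an LLM to draft OWASP ASVS test stubs from a threat model.
// Assumes Node 18+ (global fetch) and an OPENAI_API_KEY environment variable.

async function draftAsvsTests(threatModel: string, systemContext: string): Promise<string> {
  const response = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o", // illustrative model choice
      messages: [
        {
          role: "system",
          content:
            "You are an application security engineer. Given a threat model and system " +
            "context, propose OWASP ASVS verification tests as Jest test stubs, citing the " +
            "relevant ASVS requirement numbers in comments.",
        },
        {
          role: "user",
          content: `Threat model:\n${threatModel}\n\nSystem context:\n${systemContext}`,
        },
      ],
    }),
  });

  const data = await response.json();
  // Generated stubs are treated as a starting point and reviewed by an engineer
  // before they enter the test suite, in line with our human-in-the-loop approach.
  return data.choices[0].message.content;
}
```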
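The DASTy plugin idea can be illustrated with the Chrome DevTools network API. The sketch below simply forwards each finished HTTP exchange to a chat completion call for a quick assessment; the prompt, model name, and the way the API key is supplied are assumptions for illustration rather than the plugin's real internals.

```typescript
// Sketch: a DevTools extension listener that asks an LLM to assess each HTTP exchange.
// Requires a DevTools extension context (manifest "devtools_page") and @types/chrome.

declare const OPENAI_API_KEY: string; // in practice supplied via extension options/storage

chrome.devtools.network.onRequestFinished.addListener((entry) => {
  entry.getContent((body) => {
    const exchange = {
      summary: `${entry.request.method} ${entry.request.url}`,
      requestHeaders: entry.request.headers.map((h) => `${h.name}: ${h.value}`).join("\n"),
      responseStatus: entry.response.status,
      responseBodySnippet: (body ?? "").slice(0, 2000), // keep the prompt small
    };
    assessExchange(exchange).then((assessment) => console.log(assessment));
  });
});

// Illustrative helper: a single chat completion call per exchange.
async function assessExchange(exchange: object): Promise<string> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: { "Content-Type": "application/json", Authorization: `Bearer ${OPENAI_API_KEY}` },
    body: JSON.stringify({
      model: "gpt-4o",
      messages: [
        {
          role: "system",
          content:
            "Assess this HTTP exchange for likely security vulnerabilities and suggest mitigations.",
        },
        { role: "user", content: JSON.stringify(exchange) },
      ],
    }),
  });
  return (await res.json()).choices[0].message.content;
}
```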
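Finally, the hackathon-winning proof-of-concept could look something like the sketch below, which turns plain-English non-functional requirements into Terraform for review. The prompt, model, and output path are assumptions, and nothing is applied without an engineer checking the plan first.

```typescript
// Sketch: turn text-based non-functional requirements into Terraform for human review.
// Assumes Node 18+ (global fetch) and an OPENAI_API_KEY environment variable.

import { mkdir, writeFile } from "node:fs/promises";

async function nfrsToTerraform(nfrs: string): Promise<void> {
  const res = await fetch("https://api.openai.com/v1/chat/completions", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
    },
    body: JSON.stringify({
      model: "gpt-4o",
      messages: [
        {
          role: "system",
          content:
            "You write Terraform. From the given non-functional requirements, produce a " +
            "secure-by-default configuration: encryption at rest and in transit, " +
            "least-privilege IAM, private networking, and audit logging. Output HCL only.",
        },
        { role: "user", content: nfrs },
      ],
    }),
  });

  const hcl = (await res.json()).choices[0].message.content;
  await mkdir("generated", { recursive: true });
  // An engineer reviews the file and runs `terraform plan` before anything is applied.
  await writeFile("generated/main.tf", hcl);
}

nfrsToTerraform("Data must stay in UK regions; all traffic encrypted in transit; audit logs retained for 90 days.");
```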
Setting the Global Standard for Securing Generative AI
We work as part of the Core Team for the OWASP Top 10 for Large Language Model (LLM) Applications. The recently published version 1 of this list has already become a de facto industry standard, adopted by U.S. departments and agencies and inspiring a wealth of tutorials, including contributions from IBM's distinguished engineers. As part of the OWASP Top 10 for LLM Applications project, we are actively collaborating with NIST's Trustworthy AI researchers and other standards bodies and agencies to share knowledge and findings. This helps ensure the UK both leads and benefits from a collaborative approach to Generative AI security.
Safeguarding Usage, Data, and Customer Privacy
Our use of Generative AI has privacy by default at its heart, preventing the re-use of sensitive data for model training and memorisation. Clear rules and guidelines that protect data and customer privacy help ensure that Generative AI is harnessed securely and does not pose risks to the data-driven UK economy.
We work closely with Suzanne Brink, our Data Ethicist, to foster a culture of Ethical AI. Keeping a human in the loop to review and verify Generative AI output is central to safe usage, protecting us from bias and model hallucinations.
The Future Is Bright, but Only If We Make It So
Generative AI is a technological revolution in the making, bringing both possibility and peril. We are committed to realising the former while mitigating the latter. Through diligent and proactive use, we increase understanding. More importantly, we navigate toward a future where Generative AI is not just powerful but also safely harnessed for the effective delivery of cyber security.