
NCC Group's 'Cyber Resilience in an AI-Driven World' Webinar:

Key takeaways for security leaders

03 June 2025

By NCC Group

Navigating cyber risk in the age of AI

As organisations race to harness the transformative potential of artificial intelligence, they face an evolving threat landscape and a shifting regulatory climate.

Securing AI: Cyber Resilience in an AI-Driven World, the first in NCC Group's webinar series, delivered an expert panel discussion on how cyber security leaders and professionals can approach AI adoption in a secure, strategic, and responsible manner.

With insights from leaders across technical, strategic, policy, and regulatory domains, this session laid a roadmap for responsible AI adoption that protects reputation, reduces risk, and drives value.

 

1. Strategic AI adoption and maximising ROI

"The time for gimmicks is over," noted Olly Howard, Lead Propositions Manager at NCC Group. As the novelty of generative AI fades, organisations are looking beyond pilots to deliver real business value. 

Our Chief Technology Officer, Siân John, cut to the heart of the issue: "You don't need AI. You need outcomes."

AI adoption without a clear objective often leads to project failure. Businesses must define specific, measurable outcomes that AI can help achieve. This includes identifying opportunities to automate repetitive processes, enhance threat detection, or augment human capabilities — not replace them. Equally important is ensuring the use of AI is secure and sustainable.

Key takeaways:

  • Anchor AI initiatives to concrete business goals, not abstract innovation mandates.
  • Differentiate between automation, machine learning, and generative AI to choose the right tool.
  • Embed principles of "green by design" and "secure by design" into AI projects.

"We're going into the trough of despair with AI because people aren't defining the outcomes clearly."

Siân John MBE | NCC Group CTO

 

2. Regulatory readiness: Stay grounded, stay ahead

The regulatory environment for AI is volatile. From the EU AI Act to the UK's pro-innovation stance and the US deregulation pivot, the landscape varies widely by region. However, uncertainty is not an excuse for inaction.

Organisations must stay grounded in current laws like GDPR while preparing to adapt to new standards. This includes monitoring AI-specific frameworks and collaborating with public policy experts to anticipate changes. Whether you're developing AI tools or integrating third-party models, compliance should be considered early and often.

Key takeaways:

  • Prioritise compliance with existing frameworks such as GDPR and sector-specific standards.
  • Keep pace with evolving regulations, especially for high-risk and critical infrastructure applications.
  • Understand whether your organisation must comply with the EU AI Act and other relevant regulations.

"Don't wait for perfect clarity. Work with what exists – and plan for what's coming."

Verona Johnstone-Hulse | NCC Group UK Government Affairs & Global Institutions Engagement Lead

 

3. Risk management: A new security paradigm for AI

AI introduces a host of new attack vectors. From model inversion to adversarial manipulation, the risks are complex and often poorly understood. 

Chris Anley, Chief Scientist, explained: "Trained models can often contain code." Model outputs can also be corrupted by tampering with even small amounts of training data. Therefore, it's essential to protect the supply chain for both models and training data.

David Brauchler, Technical Director, highlighted that traditional "point and patch" strategies are ineffective in an AI context. AI systems are often embedded with massive data sets and black-box logic that can be exploited in ways legacy systems weren't designed to handle.

Key takeaways:

  • Conduct a data security audit to assess your AI systems' worst-case data breach scenarios.
  • Recognise that AI model security is inseparable from data governance.
  • Conduct supply chain risk assessments on AI models and datasets.
  • Treat AI artifacts like software code: sign, audit, and sandbox before deployment.
  • AI introduces new infrastructure to your network: data services, notebook servers, training, and inference components. It's important to be sure this new infrastructure is authenticated, patched, and locked down.
  • Ensure you do the basics: patching, authentication, authorization, auditing, monitoring, and lockdown.
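
As a minimal illustration of the "sign, audit, and sandbox" takeaway above, the sketch below checks a downloaded model file against a known-good SHA-256 digest and refuses pickle-capable formats (which can execute arbitrary code at load time) before the model goes anywhere near production. The file path, suffix list, and expected digest are placeholder assumptions, not a prescribed toolchain.

    import hashlib
    from pathlib import Path

    # Placeholder values: in practice the digest would come from a signed
    # manifest or model registry entry, not be hard-coded here.
    MODEL_PATH = Path("models/classifier.safetensors")
    EXPECTED_SHA256 = "0" * 64

    # Pickle-based formats can run arbitrary code when loaded, so block them.
    BLOCKED_SUFFIXES = {".pkl", ".pickle", ".pt", ".bin"}

    def verify_model_artifact(path: Path, expected_sha256: str) -> None:
        if path.suffix.lower() in BLOCKED_SUFFIXES:
            raise ValueError(f"Refusing pickle-capable model format: {path.suffix}")

        digest = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1024 * 1024), b""):
                digest.update(chunk)

        if digest.hexdigest() != expected_sha256:
            raise ValueError("Model digest does not match the signed manifest; do not load.")

    # Example call (assumes the artifact exists on disk):
    verify_model_artifact(MODEL_PATH, EXPECTED_SHA256)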

"Security is still security. We just need to change the lens."

David Brauchler | NCC Group Technical Director

 

4. Responsible AI: Governance, ethics, and human oversight

AI systems are only as ethical as the humans behind them. Even then, a perfectly functioning model can pose risks if the surrounding data environment is flawed. Siân highlighted that without proper access controls or data protection, you're not just compromising privacy—you're handing sensitive data to the model on a silver platter. 

We saw this in early issues with Large Language Models deployed in organisations, where models inadvertently surfaced internal or confidential project information simply because they had access.

Siân put it bluntly: "You can't rely on security through obscurity. If you haven't gated your data, your AI will expose it."
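
A hedged sketch of what "gating your data" can look like in practice: retrieved documents are filtered against the caller's entitlements before they ever reach the model, rather than relying on the model to keep secrets. The document structure, group names, and filtering function below are illustrative assumptions, not any specific product's API.

    from dataclasses import dataclass

    @dataclass
    class Document:
        text: str
        allowed_groups: frozenset[str]  # groups permitted to read this document

    def filter_for_caller(docs: list[Document], caller_groups: set[str]) -> list[Document]:
        # Drop anything the caller is not entitled to before it reaches the model.
        return [d for d in docs if d.allowed_groups & caller_groups]

    # Illustrative usage: the model only ever sees documents the user could
    # already read, so it cannot surface anything new to them.
    docs = [
        Document("Public handbook excerpt", frozenset({"all-staff"})),
        Document("Confidential M&A memo", frozenset({"exec-team"})),
    ]
    visible = filter_for_caller(docs, caller_groups={"all-staff"})
    prompt_context = "\n\n".join(d.text for d in visible)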

There are also risks with blindly trusting outputs from AI systems trained on biased or opaque data: a hallucination is just a confident wrong answer.

Responsible AI development requires transparency, explainability, and rigorous human oversight. Ethical principles should be woven into governance frameworks from the outset—not bolted on as a compliance afterthought. While regulation may lag, business leaders must define what ethical AI looks like in their context.

Key takeaways:

  • Develop a clear data governance program to control and protect sensitive data used for or accessed by AI models.
  • Build a framework for ethical AI that includes data quality, bias mitigation, and cultural awareness.
  • Incorporate human-in-the-loop oversight for high-risk AI decisions.
  • Evaluate the sustainability of your AI stack: if it's power-hungry and unreliable, it's not resilient.
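
To make the human-in-the-loop takeaway above concrete, here is a minimal sketch of a decision gate that routes high-risk AI outputs to a human reviewer instead of acting on them automatically. The risk threshold, decision names, and review queue are illustrative assumptions to be replaced by your own risk appetite and workflow.

    RISK_THRESHOLD = 0.7  # illustrative cut-off; set according to your own risk appetite

    def act_on_model_output(decision: str, risk_score: float, review_queue: list) -> str:
        # Apply low-risk decisions automatically; escalate anything high-risk to a human.
        if risk_score >= RISK_THRESHOLD:
            review_queue.append({"decision": decision, "risk_score": risk_score})
            return "escalated_to_human_review"
        return f"auto_applied:{decision}"

    queue: list[dict] = []
    print(act_on_model_output("block_account", risk_score=0.92, review_queue=queue))
    print(act_on_model_output("send_reminder_email", risk_score=0.12, review_queue=queue))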

Conclusion: Secure AI, secure the future

AI is not the boogeyman when it comes to cyber security, but it requires a fundamental shift in how we think about security, compliance, and resilience. Our first webinar made one thing clear: successful organisations will integrate cyber security, compliance, and ethical strategy into every layer of AI adoption. 

At NCC Group, we help businesses navigate this complexity through our Securing AI services, which include technical assurance, training, and policy advice. If you're building with AI, build it resiliently.

Discover how NCC Group can secure your AI adoption journey.

Webinar FAQs

Q: In what area has AI increased the risk for security the most?
A: The data layer. Misalignment of access controls and misplaced trust in AI models—particularly when users are given access to data they shouldn't or when attackers exploit system logic in ways the developers didn't anticipate.

While breaches involving training data do occur, the more pressing concern for most organisations is unintended access or actions stemming from insufficient access control, monitoring, or validation. This applies even if you're not doing fine-tuning or custom training.

Q: How do we manage AI hype?
A: Start with use-case clarity. Define the problem before choosing the tool. "AI" is not a strategy – outcomes are.

Q: Is AI impacting real-world attacks?
A: Yes, especially in social engineering. Deepfakes and language precision make phishing and impersonation more convincing.

Q: Should we wait for regulation?
A: No. Focus on current laws like GDPR and the EU AI Act and build adaptive strategies to meet future compliance needs.

Q: What's one key thing to do today?
A: Audit your data governance. If you don’t know where your sensitive data is, your AI might find it – and attackers definitely will.

Q: What new security issues are specific to vector databases?
A: In short, data breaches and Remote Code Execution (RCE). As we discussed in the session, doing the security basics is essential. We are seeing many cases where the focus has been placed on AI rather than security, resulting in poorly protected datastores. 

This is especially relevant for vector databases, which are often used as search back-ends in Retrieval Augmented Generation architectures. The vector database is likely to hold sensitive, organization-specific data, so it's important to limit the "blast radius" for a data breach by data governance – try to limit the amount of sensitive information an attacker can obtain when they breach a particular database. 

It's also important to understand that most vector databases are rich programming environments in their own right and can act as a "beachhead" into a network if an attacker is able to manipulate the queries that are executed. User-defined functions, especially "external" functions, and built-in facilities for HTTP requests are especially dangerous.
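
As a sketch of limiting the "blast radius" described above, the pattern below scopes every vector query to a per-tenant collection and a server-side metadata filter, so a compromised query path only ever reaches one tenant's data. The client class and method names are hypothetical placeholders for whichever vector database you use; the point is the scoping, not the API.

    # Hypothetical vector database client; substitute your own. Separately,
    # disable user-defined / "external" functions and outbound HTTP in the
    # datastore's configuration - that is a server setting, not application code.
    class VectorClient:
        def search(self, collection: str, embedding: list[float], top_k: int, filters: dict) -> list:
            return []  # stub; a real client would query the datastore

    def tenant_search(client: VectorClient, tenant_id: str, embedding: list[float]) -> list:
        # One collection per tenant keeps a breach of one index from exposing all data.
        collection = f"docs_{tenant_id}"
        # A server-side filter restricts results even if the collection name is wrong.
        return client.search(
            collection=collection,
            embedding=embedding,
            top_k=5,
            filters={"tenant_id": tenant_id},
        )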

Q: Given the rapid growth of AI forecasting and infrastructure — and the resulting increase in energy consumption — is this trend even remotely sustainable, especially considering most businesses are striving to reduce their carbon footprints and there appears to be a lack of clear policy addressing AI's environmental impact?

A: This is where the focus on green AI is important. The first thing to consider is whether the outcome or use case you are pursuing actually needs AI, or whether it could be addressed with a script or automation. If it does need AI, consider how renewable the energy source for the computing is; there is a lot of focus on greener energy sources at the moment and a broader move towards sustainability.

Also consider what model or approach you are using: does it need to be generative AI or an LLM, or would machine learning suffice? Do you need the latest models, or will a less powerful one work? How real-time does the response need to be? Can you slow down the response to consume less power, or move the processing to off-peak times when there is less demand on the grid?

This is an emerging area of research aimed at building AI that is green and secure by design. The Software Carbon Intensity (SCI) equation offers one way to measure, and therefore reduce, the carbon intensity of AI:

SCI = ((E * I) + M) per R

Where:

  • E = energy consumed by the software, in kWh
  • I = carbon emitted per kWh of energy (gCO2/kWh)
  • M = embodied carbon of the hardware the software runs on
  • R = the functional unit by which the software scales, e.g. per user or per device
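
A short worked example, using purely illustrative figures, shows how the equation is applied per functional unit:

    # Illustrative SCI calculation with made-up numbers.
    E = 120.0     # energy consumed by the software, kWh
    I = 400.0     # grid carbon intensity, gCO2 per kWh
    M = 50_000.0  # embodied hardware carbon attributed to the workload, gCO2
    R = 10_000    # functional unit: number of users served

    sci = ((E * I) + M) / R  # = 9.8 gCO2 per user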

For those who went to this year's RSA conference, you can view the session Siân delivered on Green and Sustainable AI.

Q: I’d love to hear your thoughts on how IT-OT convergence is set to transform the UK’s Critical National Infrastructure. What role will AI play in enabling real-time, intelligent decision-making at the edge — especially as the explosion of IoT devices drives data generation at an unprecedented scale in human history?

A: This is another emerging area of research and consideration, with the added challenge that OT devices are often running legacy software and hardware. Gaining insights from devices and running an "intelligent edge" is one of the main drivers for IT-OT convergence. Currently, most of the analysis happens separately from the OT infrastructure, but this is likely to change, and building that interaction securely needs to be a key area of focus.

Q: Do you believe that the topics discussed in this session are best addressed and managed through an AI Management System (AI MS) as outlined in ISO 42001 — alongside an ISO 27001 Information Security Management System (ISMS)? Or do you rely on other frameworks to manage AI-related risks?

A: These frameworks are a strong starting point, but the real challenge with AI management systems lies in ensuring they remain agile. It's crucial that AI threat modelling and risk management are tailored to the specific context and needs of your business.