AI threat landscape

The attacker has new tools. So should you.

AI-augmented attacks are not theoretical. Deepfake-enabled BEC, autonomous vulnerability discovery, and shadow-AI data leakage are happening to UK firms this quarter. Here's what's changing in the threat landscape and how to respond.

Threat 01

AI-aided phishing and BEC

Phishing remains the dominant initial-access vector against UK firms, and LLM-drafted lures have measurably increased attacker yield. Deepfake voice and video have moved from research demos into production fraud, with AI-supported social engineering and synthetic-media vishing both rising sharply through 2025. Business email compromise funds continue to route through UK banking infrastructure, which remains a primary intermediary destination for fraudulent transfers.

Citable statistic

$2.77B in BEC losses, 2024

Source: FBI IC3 2024 Annual Report, Apr 2025

Mitigation pattern

Out-of-band callback verification on any payment or credential-change request — regardless of channel — paired with FIDO2 phishing-resistant MFA and DMARC enforced at p=reject.
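
As a quick self-check on the DMARC half of that pattern, here is a minimal sketch in Python (using the dnspython library; the domain list is a placeholder) that flags any sending domain whose policy is weaker than p=reject:

```python
# Check each sending domain's DMARC policy; anything weaker than
# p=reject is flagged. Requires dnspython (pip install dnspython).
import dns.resolver

def dmarc_policy(domain: str) -> str | None:
    """Return the p= tag of a domain's DMARC record, or None if absent."""
    try:
        answers = dns.resolver.resolve(f"_dmarc.{domain}", "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return None
    for rdata in answers:
        record = b"".join(rdata.strings).decode()
        if record.lower().startswith("v=dmarc1"):
            for tag in record.split(";"):
                key, _, value = tag.strip().partition("=")
                if key.lower() == "p":
                    return value.lower()
    return None

for domain in ["example.com"]:  # placeholder: your sending domains
    policy = dmarc_policy(domain)
    status = "OK" if policy == "reject" else f"WEAK ({policy or 'no record'})"
    print(f"{domain}: {status}")
```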

Threat 02

Autonomous vulnerability scanning and exploit generation

AI is now embedded across the attack lifecycle. Threat actors have moved beyond coding assistance to integrating LLM APIs directly into malware for just-in-time code generation that defeats signature-based detection. Offensive AI platforms are now competitive with elite human researchers on public bug-bounty leaderboards, and the NCSC assesses that AI will almost certainly increase both the volume and the impact of cyber attacks through 2027.

Citable statistic

204 nationally significant incidents — a 129% increase year-on-year

Source: NCSC Annual Review 2025, Oct 2025

Mitigation pattern

Behaviour-based detection (EDR/XDR) over signature-based tooling, with CISA KEV-prioritised patching, a 72-hour SLA on critical CVEs affecting externally facing systems, and continuous attack-surface management.
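
A minimal sketch of KEV-prioritised triage, assuming the public CISA KEV JSON feed and a hypothetical internal CVE inventory (the CVE ID and timestamp below are placeholders):

```python
# Cross-reference an externally facing CVE backlog against CISA KEV
# and flag anything breaching a 72-hour patch SLA. Requires `requests`.
# Feed URL and field names reflect the public KEV JSON schema at time of writing.
from datetime import datetime, timedelta, timezone
import requests

KEV_URL = "https://www.cisa.gov/sites/default/files/feeds/known_exploited_vulnerabilities.json"
SLA = timedelta(hours=72)

# Hypothetical inventory: CVE ID -> when your scanner first saw it externally.
open_cves = {"CVE-2024-XXXX": datetime(2025, 1, 10, tzinfo=timezone.utc)}

kev = requests.get(KEV_URL, timeout=30).json()
kev_ids = {v["cveID"] for v in kev["vulnerabilities"]}

now = datetime.now(timezone.utc)
for cve, first_seen in open_cves.items():
    if cve in kev_ids and now - first_seen > SLA:
        print(f"SLA BREACH: {cve} is known-exploited, open for {now - first_seen}")
```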

Threat 03

Prompt injection against agentic systems

Prompt injection is OWASP's top LLM risk and is named by Microsoft as one of four primary attack vectors against AI systems. Indirect prompt injection — where malicious instructions are hidden inside retrieved emails, documents or web pages rather than user input — has been confirmed exploitable in production environments. Blast radius scales with agent capability: once an LLM can browse, retrieve, write or execute code, embedded instructions become real exploit primitives.

Citable statistic

Prompt injection present in 73%+ of production AI deployments assessed in 2025

Source: Redbot Security analysis, 2025

Mitigation pattern

Treat retrieved content as untrusted: enforce least-privilege scoping on agent tools (read-only by default, writes require human confirmation), sandbox external content before it enters the context window, and monitor agent action logs as privileged-user sessions.
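
A minimal sketch of that scoping pattern (tool names and the confirm() flow are illustrative, not any particular agent framework's API):

```python
# Least-privilege tool scoping for an LLM agent: tools are read-only by
# default, and any write-capable tool requires explicit human confirmation
# before it executes.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    name: str
    fn: Callable[..., str]
    writes: bool = False  # read-only unless explicitly marked

def confirm(action: str) -> bool:
    """Human-in-the-loop gate; in production this would page an operator."""
    return input(f"Agent wants to: {action}. Allow? [y/N] ").strip().lower() == "y"

def dispatch(tool: Tool, *args) -> str:
    if tool.writes and not confirm(f"{tool.name}{args}"):
        return "DENIED: write action rejected by operator"
    return tool.fn(*args)

# Example registry: retrieval is read-only; sending email is write-gated.
tools = {
    "search_docs": Tool("search_docs", lambda q: f"results for {q!r}"),
    "send_email": Tool("send_email", lambda to, body: f"sent to {to}", writes=True),
}

print(dispatch(tools["search_docs"], "invoice policy"))        # runs freely
print(dispatch(tools["send_email"], "cfo@example.com", "hi"))  # asks first
```

The same gate generalises to file writes, shell execution and outbound HTTP: anything that changes state goes through the human-in-the-loop path.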

Threat 04

Model and supply-chain weaponisation

The AI model supply chain is a primary attack surface. Public model registries host hundreds of thousands of unsafe or suspicious artefacts, with techniques including pickle deserialisation exploits, namespace hijacking of deleted maintainer accounts, and coordinated uploads of malicious agent skills. Slopsquatting — weaponising LLM hallucination of nonexistent package names — gives attackers predictable, pre-registerable targets in `requirements.txt` and `package.json` files.

Citable statistic

~352,000 unsafe findings across 51,700 Hugging Face models

Source: Protect AI, 2025

Mitigation pattern

Pin models and packages by hash rather than name, maintain an allow-listed internal mirror for both code dependencies and AI model artefacts, and require human review on every AI-suggested dependency before it enters a manifest file.
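
A minimal sketch of hash-pinned artefact verification (the file name and digest are placeholders; the digest shown is the SHA-256 of an empty file):

```python
# Verify a downloaded model artefact against an internal allow-list of
# SHA-256 digests before it is ever loaded.
import hashlib
from pathlib import Path

ALLOWED = {
    # artefact filename -> digest pinned from your internal mirror
    "model.safetensors": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify(path: Path) -> None:
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    pinned = ALLOWED.get(path.name)
    if pinned is None or digest != pinned:
        raise RuntimeError(f"{path.name}: digest {digest[:12]}... not on allow-list")

verify(Path("model.safetensors"))  # raises before the artefact is loaded
print("artefact verified, safe to load")
```

For code dependencies, pip's --require-hashes mode enforces the same discipline at install time.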

Threat 05

AI-assisted insider data exfiltration

AI assistants collapse the friction of insider exfiltration — pasting a contract, customer list or source code into a public LLM is a five-second action with no email attachment, no USB event, and no DLP signature on most current configurations. Shadow-AI breaches now carry a measurable cost premium over baseline incidents, and a majority of breached organisations lack proper AI access controls. The Samsung ChatGPT leaks remain the canonical reference incident in ENISA, NCSC and ICO guidance.

Citable statistic

Shadow-AI breaches cost +$670k on average versus baseline

Source: IBM Cost of a Data Breach 2025, Jul 2025

Mitigation pattern

Sanction enterprise-tier AI tools (M365 Copilot with UK/EU data residency, ChatGPT Enterprise, Claude for Work), block public LLM domains at the egress proxy except through the enterprise broker, and deploy browser-extension DLP for paste events, with an approved alternative for staff so bans do not drive usage underground.
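
A minimal sketch of the egress decision logic (domain lists are illustrative and the broker hostname is hypothetical; in production this rule lives in the proxy policy itself):

```python
# Egress policy check: public LLM domains are blocked unless the request
# goes through the sanctioned enterprise broker.
PUBLIC_AI_DOMAINS = {"chatgpt.com", "chat.openai.com", "claude.ai", "gemini.google.com"}
SANCTIONED_BROKERS = {"copilot.yourcompany.example"}  # hypothetical broker

def egress_decision(host: str) -> str:
    if host in SANCTIONED_BROKERS:
        return "ALLOW"
    if host in PUBLIC_AI_DOMAINS or any(host.endswith("." + d) for d in PUBLIC_AI_DOMAINS):
        return "BLOCK: use the sanctioned enterprise AI tool"
    return "ALLOW"

for host in ["claude.ai", "copilot.yourcompany.example", "news.example.org"]:
    print(f"{host}: {egress_decision(host)}")
```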

Threat 06

Shadow-AI risk inside organisations

Shadow AI is the dominant AI risk for mid-market firms because adoption is outpacing governance. Around a third of UK businesses are using, adopting or considering AI, but only around a quarter of those have any cyber security practices in place to manage AI risk. The majority of paste events to GenAI tools originate from unmanaged personal accounts, placing most data flow outside any DLP control plane — and most organisations report being effectively blind to AI data flows.

Citable statistic

68% year-on-year increase in shadow GenAI usage inside enterprises

Source: Menlo Security 2025 State of Browser Security

Mitigation pattern

Run a four-step control loop: discover existing AI usage via egress logging and SaaS discovery; sanction an approved enterprise tier with UK/EU data residency; block unsanctioned domains at the egress proxy; and train staff on what data is acceptable, with a no-blame route for accidental disclosure.
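
A minimal sketch of the discovery step, assuming a hypothetical CSV proxy log with timestamp, user and host columns (adapt the parser and domain list to your proxy's schema):

```python
# Discover shadow-AI usage: scan egress proxy logs for hits on known GenAI
# domains and count distinct internal users per tool.
import csv
from collections import defaultdict

AI_DOMAINS = {"chatgpt.com", "claude.ai", "gemini.google.com", "perplexity.ai"}

usage: dict[str, set[str]] = defaultdict(set)
with open("proxy_log.csv", newline="") as f:  # columns: timestamp,user,host
    for row in csv.DictReader(f):
        if row["host"] in AI_DOMAINS:
            usage[row["host"]].add(row["user"])

for host, users in sorted(usage.items(), key=lambda kv: -len(kv[1])):
    print(f"{host}: {len(users)} distinct users")
```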

A conversation, no commitment

Worried about any of these?

A 30-minute call with a senior consultant. We'll talk through your specific exposure and what a relevant audit would look like.

Speak to a consultant