AI Governance & Compliance

How do we know our AI use is safe, sanctioned, and defensible?

A board-level question. Most firms can't answer it today. The work of getting to the answer is what we mean by AI Governance & Compliance.

Speak to a consultant
Scope of work

What governance covers

Six work streams. Each stream produces specific artefacts your board and regulators can review.

Policy framework development

Written AI use policy, vendor approval process, role-based usage rules, exceptions process. Drafted to your sector, signed off by your board.
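To give a flavour of what role-based usage rules reduce to once written down, here is a minimal sketch in Python. The roles, tool names and data classifications are illustrative assumptions, not a recommended rule set.

    # Illustrative role-based usage rules: which roles may use which AI
    # tools, and up to which data classification. All names are placeholders.
    USAGE_RULES = {
        "fee_earner": {"tools": {"approved_drafting_ai"}, "max_data": "confidential"},
        "marketing": {"tools": {"approved_drafting_ai", "image_gen"}, "max_data": "public"},
    }
    DATA_ORDER = ["public", "internal", "confidential"]  # least to most sensitive

    def allowed(role: str, tool: str, data_class: str) -> bool:
        """Return True if the role may use the tool with data of this class."""
        rule = USAGE_RULES.get(role)
        if rule is None or tool not in rule["tools"]:
            return False
        return DATA_ORDER.index(data_class) <= DATA_ORDER.index(rule["max_data"])

    print(allowed("marketing", "image_gen", "confidential"))  # False: over the data limit

The point of the sketch: a usable policy answers "who, which tool, what data" in a form that can be checked, not just read.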

Role and accountability mapping

Who owns AI risk? Who signs off vendor approvals? Who responds when an AI tool is misused? Documented before incidents force the answer.

Vendor approval processes

Structured intake for new AI tools. Commercial, security, data residency, and exit-clause review. Approval lands in days, not weeks.
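To make "structured intake" concrete, a minimal sketch of an intake record in Python. The field names, vendor name and pass/fail gates are illustrative placeholders, not our actual intake form.

    from dataclasses import dataclass

    @dataclass
    class VendorIntake:
        """One intake record per proposed AI tool (illustrative fields)."""
        vendor: str
        commercial_ok: bool      # pricing and contract-terms review
        security_ok: bool        # e.g. SOC 2 / ISO 27001 evidence reviewed
        data_residency_ok: bool  # where prompts and outputs are stored
        exit_clause_ok: bool     # data deletion and portability on exit

        def decision(self) -> str:
            # All four reviews must pass before a tool is approved.
            passed = all([self.commercial_ok, self.security_ok,
                          self.data_residency_ok, self.exit_clause_ok])
            return "approved" if passed else "rejected"

    print(VendorIntake("ExampleAI Ltd", True, True, True, False).decision())  # rejected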

Monitoring and reporting cadence

Quarterly board reports. Monthly operational reviews. Anomaly alerting on data egress to AI services. Audit-ready logs.
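A minimal sketch of the sort of egress rule this involves, in Python. The domain list, byte threshold and log format are illustrative assumptions; in practice the rule runs against your own proxy or firewall logs, with a baseline tuned to your traffic.

    # Flag unusually large daily egress to known AI endpoints.
    AI_DOMAINS = {"api.openai.com", "api.anthropic.com"}  # illustrative list
    DAILY_BASELINE_BYTES = 5_000_000  # illustrative per-user threshold

    def flag_egress(log_rows):
        """log_rows: iterable of (user, domain, bytes_sent) from proxy logs."""
        totals = {}
        for user, domain, bytes_sent in log_rows:
            if domain in AI_DOMAINS:
                totals[(user, domain)] = totals.get((user, domain), 0) + bytes_sent
        return [(user, domain, total) for (user, domain), total in totals.items()
                if total > DAILY_BASELINE_BYTES]

    sample = [("jsmith", "api.openai.com", 3_000_000),
              ("jsmith", "api.openai.com", 4_000_000)]
    print(flag_egress(sample))  # jsmith's daily total exceeds the baseline: one alert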

Regulatory alignment

EU AI Act categorisation. ISO 42001 alignment work. Sector overlays (FCA AI Principles, ICO ADM guidance, NHS DSPT). Mapped to controls.
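What "mapped to controls" means in practice, shown as a simplified Python sketch. The requirement labels and control IDs (CTRL-03, CTRL-07, CTRL-12) are hypothetical placeholders; the real artefact is a maintained register, but the gap-check logic is the same.

    # Map named regulatory expectations to internal controls, then report gaps.
    REQUIREMENT_TO_CONTROLS = {
        "EU AI Act Art. 50 (transparency)": ["CTRL-07 AI output labelling"],
        "ISO 42001 (AI system life cycle)": ["CTRL-03 model change log"],
        "ICO ADM guidance (human review)": ["CTRL-12 human sign-off on ADM"],
    }
    IMPLEMENTED = {"CTRL-03 model change log", "CTRL-12 human sign-off on ADM"}

    for requirement, controls in REQUIREMENT_TO_CONTROLS.items():
        missing = [c for c in controls if c not in IMPLEMENTED]
        if missing:
            print(f"GAP: {requirement} -> missing {missing}")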

Training programme advisory

Role-based AI use training. We design the curriculum and identify the right delivery partner. Outcomes-based, not theatre.

The regulatory landscape (as of May 2026)

What you're being asked to demonstrate

The five regulatory threads UK regulated firms most often need to address. Refreshed quarterly.

EU AI Act (Reg. 2024/1689)

Current state (May 2026): High-risk system rules, Article 50 transparency obligations and Commission enforcement powers for GPAI all enter into force 2 August 2026; prohibitions, AI literacy and GPAI provider rules are already live.
Upcoming milestone: 2 August 2027 — Article 6(1) high-risk classification rules apply in full, and GPAI models placed on the market before 2 August 2025 must achieve full compliance.
What it means for a UK regulated firm: UK firms placing AI on the EU market, acting as deployers there, or whose AI output is used in the EU need an AI inventory, risk classification, conformity assessments for high-risk use cases, and Article 50 transparency labelling.

UK pro-innovation framework

Current state (May 2026): Principles-based, sector-led — no horizontal AI Act, with five cross-cutting principles delegated to existing regulators and the AI Growth Lab cross-economy sandbox consultation opened 21 October 2025.
Upcoming milestone: The King's Speech 2026 confirmed intent to legislate for frontier-model providers but no Bill has yet been introduced; the Regulating for Growth Bill is expected to carry sandboxing powers.
What it means for a UK regulated firm: No single UK "AI law" to comply with, but every existing regulator now has explicit AI expectations — the compliance map must run regulator-by-regulator across ICO, FCA, PRA, MHRA, SRA, CMA and Ofcom.

ISO/IEC 42001:2023

Current state (May 2026): Voluntary certifiable AI Management System (AIMS) standard, with major certification bodies (BSI, A-LIGN, Schellman, KPMG, DNV) fully operational and first-wave certifications complete.
Upcoming milestone: Continued accelerating adoption through 2026–2027 as procurement clauses propagate, with ISO/IEC 42006 (audit requirements) maturing in parallel.
What it means for a UK regulated firm: Not a legal requirement but rapidly becoming a procurement gate; firms with ISO 27001 already in place can typically certify in 6–9 months, and it remains the strongest signal that AI risk is under management.

NIST AI RMF (US, voluntary)

Current state (May 2026): The core framework plus Generative AI Profile (NIST AI 600-1) are widely adopted as the reference taxonomy for AI risk, with a Critical Infrastructure Profile concept note released 7 April 2026.
Upcoming milestone: Critical Infrastructure Profile and AI Agent Interoperability Profile both expected late 2026, alongside ongoing harmonisation with ISO 42001 controls.
What it means for a UK regulated firm: Voluntary in the UK but referenced explicitly by the ICO, NCSC and AI Safety Institute as the working risk taxonomy — UK firms with US enterprise or federal customers are increasingly asked for NIST AI RMF mapping.

Sector overlays (UK)

Current state (May 2026): ICO statutory ADM Code in preparation under DUAA s.80; FCA Mills Review reporting summer 2026; PRA AI in 2026 supervisory priorities; MHRA AI-as-Medical-Device framework due 2026; SRA AI policy webinar held February 2026.
Upcoming milestone: Treasury Select Committee has asked the FCA for comprehensive AI guidance by end 2026; MHRA National Commission report due summer 2026; SRA AI public-use research published April 2026.
What it means for a UK regulated firm: Sector regulators are moving from "principles" to specific enforcement signals in 2026 — legal, finance and healthcare firms should expect direct supervisory questions on AI governance, model risk and bias monitoring within current examination cycles.

Source: Dossier D.2 — AI Governance Regulatory State, last updated 2026-05-13. Refreshed quarterly.

How we deliver
Three pillars

Audit-first methodology

Every engagement begins by establishing facts: what AI is in use, who's using it, what data flows through it. No governance work runs on assumption.

Senior-led delivery

The consultant who writes your AI policy is the consultant who briefs your board. No account managers, no junior analysts ghost-writing for senior names.

Sector-specific, not template-led

An AI policy for a law firm regulated by the Solicitors Regulation Authority is not the same document as one for an FCA-authorised broker. We don't pretend otherwise.

AI governance, written for your sector

Most AI policy work on the market is template-driven — a generic document with the firm's name dropped in, lifted from a published template or a generic consultancy library. It reads well in a procurement questionnaire and falls apart the first time a regulator asks for evidence that it's been operationalised.

Our work starts from what your firm actually does, what your regulators actually expect, and what AI is actually in use across your business. The policy that comes out the other side is short, specific, and defensible — because it was built to fit, not adapted from a template.

The same logic carries through every artefact in the engagement: vendor approval criteria, board reporting templates, exception logs, training curricula. Each one tied to a named regulatory expectation, each one operable by your team after we hand it over.

Engagement

How clients engage

Three engagement models. Each one matched to a specific governance posture — ongoing, catch-up, or named accountable owner.

Continuous oversight

Quarterly retainer

Ongoing governance, board-grade reporting

For firms that need AI governance to live as a managed function — quarterly board reports, monthly operational reviews, anomaly alerting on data egress, and a named accountable lead on call between cycles.

Catch-up engagements

Annual programme with quarterly reviews

A defined push, then light-touch ongoing reviews

For firms making a one-time push to bring AI governance current — policy framework, role mapping, vendor process, regulatory alignment delivered as a programme, followed by light-touch quarterly reviews.

Senior-level accountability

Fractional CISO with AI scope

A named accountable owner, part-time

For firms that need a named accountable AI risk owner without hiring full-time — a senior consultant who attends leadership meetings, signs off on AI risk decisions, and represents the firm to regulators when needed.

Earlier in the journey

Start with the audit if you haven't already

Governance built on assumed AI usage falls apart at the first question a regulator asks. The AI Usage Audit establishes the facts your governance programme will be built on.

See the AI Usage Audit
Higher up the agenda

Need the board-level conversation first?

If the question is "what should our AI position be?" rather than "is our AI use safe?", the AI Strategy engagement is the better starting point.

See AI Strategy
Free consultation

Make AI use defensible.

A 30-minute call with a senior consultant. We'll walk through where your firm is today and what a meaningful governance engagement looks like.

Speak to a consultant