AI Usage Audit

Your employees are using AI.
Do you know which tools?

Most teams don't. They don't know what data is flowing to which model providers, what the vendor terms of service actually say, or what their regulator would think.

The AI Usage Audit answers all three — in a written report your board can read in ten minutes.

Speak to a consultant
Scope of work

What's in every AI Usage Audit

Nine findings categories, mapped to your sector's regulatory requirements and calibrated against the current AI threat landscape.

Shadow tool discovery

Network and endpoint scans to identify every AI tool in active use across the organisation — sanctioned and unsanctioned.

Data egress mapping

Trace where employee prompts, attachments, and outputs are going. Identify regulated data flowing to model providers.

Vendor T&Cs review

Read what your employees clicked through. Commercial terms, IP retention, training-data rights, data-residency clauses.

Employee usage patterns

Anonymous behaviour analysis — which roles use AI most, for what tasks, with what data classifications.

Regulatory mapping

EU AI Act categorisation, ICO guidance alignment, sector overlays (FCA, NHS DSPT, SRA) — what each finding triggers.

AI-attack readiness

Where your existing controls handle AI-augmented threats (deepfake BEC, prompt injection against agent integrations) — and where they don't.

Training-gap analysis

Which employee groups need AI safety training first, prioritised by data sensitivity and current usage volume.

Internal policy gap

Compare current AI policies (if any) to what your sector and regulators expect. Concrete drafting recommendations.

Controls inventory

Technical controls in place to constrain AI use — DLP, network egress rules, conditional access. What's working, what's missing.

Who it's for

For firms where the regulator will ask

The AI Usage Audit is built for regulated UK mid-market firms where "we don't know" is not a defensible answer when AI use shows up in an inspection, a client questionnaire, or a contract review.

Regulatory frameworks the audit maps to
EU AI Act
ICO AI Guidance
ISO 42001
NIST AI RMF
FCA AI Principles
NHS DSPT
SRA AI Guidance
Financial services & fintech
Legal & professional services
Healthcare & life sciences
Accountancy & audit firms
Insurance
Regulated manufacturing

"Most boards I speak to could not, today, name three AI tools their employees are using. The AI Usage Audit closes that gap before the next regulator visit does."

— Rhentech advisory team
Process

How an AI Usage Audit works

Six steps. Designed to be minimally disruptive to your operations. Typical duration: five to eight business days from kickoff to debrief.

01

Discovery call

A call with a senior consultant. We map the AI landscape your firm already has — known tools, suspected shadow usage, regulatory drivers.

02

Scoping & agreement

Written scope covering: which systems we inspect, which employee groups we interview, what data classifications are in play.

03

Discovery & inspection

Network telemetry analysis, endpoint scans, an anonymised employee survey, and vendor T&Cs collection.

04

Risk categorisation

Every finding mapped to: data classification involved, regulatory exposure, vendor T&Cs implications, and a severity rating.

05

Report & debrief

An executive summary of AI tools in use, a risk-ranked exposure register, and a live debrief with the consultant who ran the audit.

06

Remediation roadmap

Prioritised next steps. Many clients continue into our AI Governance & Compliance engagement to execute the roadmap.

After the audit

What clients do with the findings

Most clients flow into our ongoing AI Governance & Compliance engagement to execute the remediation roadmap. Some take the report and run it internally. Both work — we'll tell you which we think fits.

See AI Governance & Compliance
Free consultation

Find out which AI tools
your firm is already using.

Book a free consultation. A senior consultant will scope an AI Usage Audit calibrated to your sector.

Speak to a consultant