Technology Risk Assurance

AI Governance and Technology Risk Audit

Regulators expect boards to demonstrate credible oversight of AI and automated decision-making, and they look to internal audit to provide that assurance independently. We help firms get there in practice, not just in policy.

Regulatory direction is clear. The FCA and PRA have both published expectations on AI governance, model risk management, and algorithmic accountability. The Bank of England's AI and machine learning discussion paper and the FCA's guidance on algorithmic systems signal that firms using AI in regulated activities need board-level governance and independent audit coverage. Internal audit functions that have not yet scoped AI into their plans are behind the curve.

Why AI audit is different from IT audit

Traditional IT audit looks at access controls, change management, and system availability. AI governance audit goes further. It asks whether the models driving decisions are fit for purpose, whether the data they rely on is clean and representative, whether the outcomes they produce are explainable and fair, and whether the board has genuine oversight of what the systems are doing on its behalf.

Answering these questions requires a combination of audit discipline and a substantive understanding of how AI systems behave in a regulated environment. Most internal audit teams do not yet have that combination in-house. We provide it as a standalone engagement or as subject-matter-expert support alongside your existing team.

Our approach

We do not apply generic IT checklists to AI. Every engagement starts with understanding how your firm uses AI and automated decision-making, which regulatory obligations attach to those uses, and what your governance and control framework actually looks like in practice rather than what the policy says it should. From there, we scope the right assurance work for your situation.

Four audit components for AI and technology risk

Each component can be commissioned independently, or the four can be combined into a rolling programme of technology risk assurance.

01
AI Governance Gap Analysis
Where your framework stands against regulatory expectations

A structured assessment of your AI and model risk governance framework against FCA expectations, PRA SS1/23, and your own policy commitments. Identifies gaps before the regulator does.

Inventory of AI and automated decision-making systems in scope
Governance structure review: ownership, accountability and board reporting
Model risk management framework assessment against PRA SS1/23
Gap analysis against FCA algorithmic accountability expectations
RAG-rated findings with board-ready summary
Enquire →
02
Model Risk Audit
Independent assurance over model development, validation and use

Tests whether your model risk management process is functioning as the board expects. Particularly relevant for firms using models in credit decisioning, pricing, fraud detection or AML.

Model inventory completeness and tiering review
Model development, validation and approval process audit
Model performance monitoring and outcome testing
Challenger and fallback model governance
Senior manager accountability mapping for model risk
Enquire →
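To give a flavour of what model performance monitoring involves in practice, the sketch below computes a population stability index (PSI), a common drift check comparing a model's current score distribution against the distribution at development. The bin proportions and thresholds here are illustrative assumptions, not figures from any engagement.

```python
import math

def psi(expected, actual):
    """Population Stability Index between two binned score distributions.

    expected/actual: lists of bin proportions, each summing to 1.
    A common rule of thumb: < 0.1 stable, 0.1-0.25 monitor, > 0.25 investigate.
    Bins where either proportion is zero are skipped to avoid log(0).
    """
    return sum(
        (a - e) * math.log(a / e)
        for e, a in zip(expected, actual)
        if e > 0 and a > 0
    )

# Illustrative quartile bins: score mix at development vs. in production.
dev_scores = [0.25, 0.25, 0.25, 0.25]
live_scores = [0.30, 0.25, 0.25, 0.20]
drift = psi(dev_scores, live_scores)  # ~0.02: small, stable shift
```

In a real monitoring framework the bins, reference period, and escalation thresholds would all be defined in the firm's model risk policy; the audit question is whether such checks exist, run regularly, and feed into governance.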
03
Algorithmic Fairness and Consumer Outcomes
Testing whether automated decisions deliver good outcomes

Where AI drives customer-facing decisions, Consumer Duty requires that those decisions deliver good outcomes. This component audits whether your automated systems can evidence that requirement.

Algorithmic decision audit in credit, pricing and eligibility
Outcome testing by customer segment, including vulnerable groups
Explainability review: can decisions be explained to customers and regulators?
Bias and fairness assessment against regulatory expectations
Human override and escalation process review
Enquire →
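As an illustration of outcome testing by customer segment, the sketch below compares approval rates across segments and flags any segment whose rate falls well below the best-served group. The segment names, sample data, and the four-fifths heuristic are illustrative assumptions, not regulatory thresholds or client criteria.

```python
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (segment, approved: bool) pairs.
    Returns approval rate per segment."""
    totals, approved = defaultdict(int), defaultdict(int)
    for segment, ok in decisions:
        totals[segment] += 1
        approved[segment] += int(ok)
    return {s: approved[s] / totals[s] for s in totals}

def adverse_impact_ratios(rates):
    """Ratio of each segment's approval rate to the highest segment rate.
    Ratios below ~0.8 (the 'four-fifths' heuristic) warrant closer review."""
    best = max(rates.values())
    return {s: r / best for s, r in rates.items()}

# Illustrative sample of automated credit decisions.
decisions = [
    ("standard", True), ("standard", True), ("standard", True),
    ("standard", False),
    ("vulnerable", True), ("vulnerable", False),
    ("vulnerable", False), ("vulnerable", False),
]
rates = approval_rates(decisions)        # standard 0.75, vulnerable 0.25
ratios = adverse_impact_ratios(rates)    # vulnerable well below 0.8
```

The audit work is not the arithmetic itself but whether the firm runs tests like this on real outcome data, segments them meaningfully (including vulnerable customers), and escalates disparities through governance.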
04
Generative AI and Third-Party Model Risk
Governance over LLM and vendor AI deployments

Firms deploying generative AI tools, third-party AI platforms, or embedded AI in vendor products face governance gaps that most audit frameworks have not yet caught up with. We provide practical assurance over those deployments.

LLM deployment governance: data inputs, prompt controls, output validation
Third-party AI vendor due diligence and contractual review
Embedded AI in critical systems: risk identification and control assessment
Staff use policy compliance and training adequacy review
Incident response and model failure escalation procedures
Enquire →

Firms using AI in regulated activities

This service is most relevant to:

  • Firms using AI or automated models in credit decisions, pricing, fraud detection or AML systems
  • Internal audit functions that need to build AI into their annual plan but lack in-house expertise
  • Boards and audit committees seeking independent assurance that AI governance is adequate
  • Firms deploying generative AI tools or third-party AI platforms that have not yet been formally audited
  • Firms subject to PRA SS1/23 model risk management expectations

We work with firms of all sizes. The right scope of assurance depends on how materially AI affects your regulated activities, not on firm size.

Need to get AI governance into your audit plan?

Whether you are starting from scratch or need SME support on a specific model risk engagement, we are happy to have a practical conversation about your situation.