AI Governance and Technology Risk Audit
Regulators expect boards to demonstrate credible oversight of AI and automated decision-making, and they look to internal audit to provide that assurance independently. We help firms get there in practice, not just in policy.
Why AI audit is different from IT audit
Traditional IT audit looks at access controls, change management, and system availability. AI governance audit goes further. It asks whether the models driving decisions are fit for purpose, whether the data they rely on is clean and representative, whether the outcomes they produce are explainable and fair, and whether the board has genuine oversight of what the systems are doing on its behalf.
Answering these questions requires both audit discipline and a substantive understanding of how AI systems work in a regulated environment. Most internal audit teams do not yet have that combination in-house. We provide it as a standalone engagement or as subject matter expert support alongside your existing team.
Our approach
We do not provide generic IT checklists applied to AI. Every engagement starts with understanding how your firm uses AI and automated decision-making, which regulatory obligations attach to those uses, and what your current governance and control framework actually looks like rather than what the policy says it looks like. From there, we scope the right assurance work for your situation.
Four audit components for AI and technology risk
Each can be commissioned independently or structured as a rolling programme of technology risk assurance.
- AI governance framework review: a structured assessment of your AI and model risk governance framework against FCA expectations, PRA SS1/23, and your own policy commitments. Identifies gaps before the regulator does.
- Model risk management audit: tests whether your model risk management process is functioning as the board expects. Particularly relevant for firms using models in credit decisioning, pricing, fraud detection, or AML.
- Consumer Duty outcomes assurance: where AI drives customer-facing decisions, Consumer Duty requires that those decisions deliver good outcomes. This component audits whether your automated systems can evidence that requirement.
- Generative and third-party AI assurance: firms deploying generative AI tools, third-party AI platforms, or embedded AI in vendor products face governance gaps that most audit frameworks have not yet caught up with. We provide practical assurance.
Firms using AI in regulated activities
This service is most relevant to:
- Firms using AI or automated models in credit decisions, pricing, fraud detection, or AML systems
- Internal audit functions that need to build AI into their annual plan but lack in-house expertise
- Boards and audit committees seeking independent assurance that AI governance is adequate
- Firms deploying generative AI tools or third-party AI platforms that have not yet been formally audited
- Firms subject to PRA SS1/23 model risk management expectations
We work with firms of all sizes. The right scope of assurance depends on how materially AI affects your regulated activities, not on firm size.
Need to get AI governance into your audit plan?
Whether you are starting from scratch or need SME support on a specific model risk engagement, we are happy to have a practical conversation about your situation.