On 30 March 2026, the FRC published guidance to support audit firms adopting AI technologies. The Generative and Agentic AI Guidance and accompanying Factsheet are notable for their technical depth. They identify the main risks AI poses to audit quality and set out possible mitigating measures.
Jamie Smith KC and Seohyung Kim explain.
Context
Before turning to the Guidance itself, it is worth making four contextual points:
First, the FRC’s overall stance is supportive. It has said that AI, if deployed responsibly, has the potential to enhance audit quality significantly, and it has linked AI adoption in audit to wider UK economic growth.[1]
Secondly, firms are already required to obtain, develop, implement and maintain appropriate technological resources to support quality management and engagements.[2] That may be read as placing firms under a positive obligation to engage seriously with AI.
Thirdly, the Guidance is not limited to generative AI (GenAI). It also addresses agentic AI, i.e. systems capable of coordinating and executing multiple tasks with some degree of autonomy. That matters because audit firms are likely to move increasingly towards more autonomous AI tools, making the governance and quality risks identified by the FRC more significant over time.
Fourthly, although the Guidance is broad enough to cover firms using third-party tools, its centre of gravity is clearly the largest firms with the resources to build AI systems themselves. The Factsheet says it is aimed primarily at central functions responsible for developing AI tools and methodologies.
Against that backdrop, the Guidance – described in the press release as the first of its kind from any audit regulator globally – is divided into three parts.
Part 1 – Risks
The FRC identifies three headline risks to audit quality:
- Deficient output – the AI system produces flawed output.
- Misuse of output – the output is sound, but the auditor uses it wrongly.
- Non-compliant methodology – the firm’s methodology allows AI-enabled approaches that do not meet auditing standards, even if the output itself is appropriate.[3]
Risk A – Deficient output
This is the most detailed part of the Guidance. It covers:
- System performance risk: failures in the GenAI component, other system components, or the way components interact.
- Runtime input risk: failures caused by human inputs at runtime (i.e. while in operation) or poor-quality information accessed by the system.
Some of this is familiar territory: hallucinations, omissions, distortions, faulty reasoning and inconsistencies.[4] But the FRC also highlights risks arising from prompt design, sequencing errors,[5] and problems caused by the interaction of different components within the wider AI system.[6]
The Guidance also focuses on risks created during live operation: for example, by “the human in the loop”,[7] or by the system drawing on external or internal information sources such as workpapers or professional standards.[8]
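What a runtime check might look like in practice is easier to grasp with a concrete (and entirely hypothetical) sketch. The Guidance prescribes no code, and the names, data structures and staleness threshold below are invented purely for illustration; the point is simply that the human prompt and the information sources can be validated before the GenAI component is ever invoked:

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical illustration only: the FRC Guidance does not prescribe code.
# This sketches a pre-flight check on runtime inputs (the human prompt and
# the information sources the system will draw on) before any GenAI call.

@dataclass
class SourceDocument:
    doc_id: str
    title: str
    as_at: date          # date the information was current
    content: str

MAX_SOURCE_AGE_DAYS = 365  # assumed staleness threshold, for illustration

def preflight_checks(prompt: str, sources: list[SourceDocument],
                     today: date) -> list[str]:
    """Return a list of issues; an empty list means the inputs may proceed."""
    issues: list[str] = []
    if not prompt.strip():
        issues.append("Prompt is empty.")
    if not sources:
        issues.append("No information sources supplied.")
    for src in sources:
        if not src.content.strip():
            issues.append(f"{src.doc_id}: source has no content.")
        if (today - src.as_at).days > MAX_SOURCE_AGE_DAYS:
            issues.append(f"{src.doc_id}: source may be stale "
                          f"(as at {src.as_at.isoformat()}).")
    return issues

if __name__ == "__main__":
    docs = [SourceDocument("WP-12", "FY25 board minutes",
                           date(2024, 1, 15), "...")]
    problems = preflight_checks("Summarise the minutes.", docs,
                                today=date(2026, 3, 30))
    for p in problems:
        print("BLOCKED:", p)   # route to the human in the loop, not the model
```

Checks of this kind speak directly to runtime input risk: inputs that fail are routed back to the human in the loop rather than passed silently to the model.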
Risk B – Misuse of output
This is split into misinterpretation of output and misinterpretation of methodology. Both concern how the auditor responds to AI outputs. To give an example: an auditor may misread what the system has produced, or may wrongly treat the output as a conclusion rather than as an aid to professional judgment.
Risk C – Non-compliant methodology
This arises where AI is used in place of a traditional audit procedure, but where the output – albeit accurate and correctly interpreted – is insufficient to satisfy auditing standards.[9]
Part 2 – Mitigating Measures
The FRC deals briefly with Risks B and C. Risk B may be mitigated through training and peer review; Risk C may be reduced by involving methodology teams throughout AI development.[10]
By contrast, the Guidance devotes substantial attention to mitigating Risk A. Those measures are often highly technical, covering matters such as component choice, rules-based protocols and workflow design. That again suggests the Guidance is most immediately useful to firms designing their own AI systems, rather than simply buying third-party tools off the shelf.
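To make that concrete: a “rules-based protocol” of the kind the FRC contemplates might take the form of automated checks applied to GenAI output before it enters the audit file. The sketch below does not come from the Guidance; the citation format, rule set and function names are assumed purely for illustration:

```python
import re

# Hypothetical sketch of a "rules-based protocol" applied to GenAI output
# before it enters the audit file. Nothing here comes from the Guidance
# itself; the rule set and names are invented for illustration.

CITATION_PATTERN = re.compile(r"\[WP-\d+\]")  # assumed workpaper citation form

def apply_output_rules(output: str, known_workpapers: set[str]) -> list[str]:
    """Return violations of the firm's output rules (empty = pass)."""
    violations: list[str] = []
    citations = CITATION_PATTERN.findall(output)
    if not citations:
        violations.append("Output cites no workpapers; cannot be traced.")
    for cite in citations:
        if cite.strip("[]") not in known_workpapers:
            violations.append(f"Citation {cite} does not match any workpaper.")
    return violations

if __name__ == "__main__":
    draft = "Revenue grew 4% [WP-07], driven by new contracts [WP-99]."
    flags = apply_output_rules(draft, known_workpapers={"WP-07", "WP-12"})
    for f in flags:
        print("REVIEW REQUIRED:", f)  # escalate to the engagement team
```

The design choice such a protocol embodies is traceability: output that cannot be tied back to the audit evidence is flagged for human review rather than relied upon.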
Part 3 – Illustrative Examples
The Guidance ends with two worked examples, which are useful in translating the FRC’s abstract risk framework into practical audit scenarios. The first example relates to an AI system that summarises board minutes.[11] The second concerns the sampling of contracts for revenue recognition purposes.[12] Any auditor using or developing AI would do well to read these scenarios carefully, since they provide concrete illustrations of where the principal risks arise and how they may be mitigated.
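The second example lends itself to a brief illustration. The FRC’s scenario contains no code, but one of its recurring themes – that AI-assisted steps should remain capable of re-performance by the human auditor – can be sketched as follows. The seeding approach and all names below are assumptions, not taken from the Guidance:

```python
import random

# Illustrative only: the FRC's worked example does not include code. This
# sketch shows one way a contract-sampling step can be made re-performable:
# the seed is fixed and recorded, so a reviewer can regenerate the sample.

def sample_contracts(contract_ids: list[str], sample_size: int,
                     seed: int) -> list[str]:
    """Draw a documented, reproducible sample of contracts."""
    rng = random.Random(seed)          # seeded RNG; same seed => same sample
    return sorted(rng.sample(contract_ids, sample_size))

if __name__ == "__main__":
    population = [f"C-{n:04d}" for n in range(1, 501)]   # 500 contracts
    selection = sample_contracts(population, sample_size=25, seed=20260330)
    print(len(selection), "contracts selected, e.g.", selection[:3])
    # The seed (20260330) would be recorded in the audit file so that the
    # human auditor can re-perform the selection and verify the output.
```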
Conclusion
The FRC’s 2026 Guidance is dense and technical but its core messages are clear and familiar:
First, AI systems are only as good as the data they use.
Secondly, auditors must understand how AI systems work and where those systems sit within the audit process.
Thirdly, and most importantly, AI does not replace auditor judgment.
As the FRC put it in its 30 March 2026 press release, accountability remains unchanged. However sophisticated the tool, the human auditor remains responsible for the quality of the audit.
© Jamie Smith KC and Seohyung Kim, April 2026
This article is not intended as a substitute for legal advice. Advice about a given set of facts should always be taken.
[1] FRC’s Guidance on AI in Audit, June 2025
[2] The International Standard on Quality Management (UK) 1
[3] Guidance, page 6
[4] Guidance, pages 8-11
[5] Guidance, pages 12-13
[6] Guidance, page 14
[7] Guidance, page 4
[8] Guidance, pages 15-17
[9] Guidance, page 19
[10] Guidance, pages 39-40
[11] Guidance, pages 43-48
[12] Guidance, pages 49-59

