Is your AI Assistant causing a privilege or supervision problem?

For a long time the AI cases hitting the headlines have concerned only “hallucinations”. But that may be starting to change, as the focus shifts to how AI affects confidentiality and privilege, and to the need for supervision when using AI tools. Helen Evans KC and Isabel Barter explain what’s going on.

US v Heppner – no privilege in “Claude”

On 17 February 2026 in New York, a judge concluded in United States v Heppner that documents generated by publicly available AI tools are not privileged and are therefore not protected from disclosure.

The defendant in Heppner had been charged with securities and wire fraud, making false statements to auditors and falsifying corporate records. Agents executed search warrants at his home. The seized materials included 31 documents showing interactions that Mr Heppner had had with “Claude”, a generative AI programme operated by Anthropic, after the grand jury subpoena. Mr Heppner argued that these documents were privileged because he had put into “Claude” information about his dealings with legal counsel or had created documents for the purposes of discussing his case with counsel.

The judge disagreed and held that the documents were not privileged:

  • First, they were not themselves communications between Mr Heppner and his legal advisers.
  • Second, they could not be regarded as confidential because “Claude” collected data from inputs to “train” its large language model, and its terms and conditions made clear that Anthropic reserved the right to disclose data to third parties including “governmental authorities”.
  • Third, the judge decided that Mr Heppner did not communicate with “Claude” for the purposes of obtaining legal advice. He pointed out that “Claude” expressly disclaims giving legal advice, stating (if asked) “I’m not a lawyer and can’t provide formal legal advice or recommendations” and suggesting that users “consult with a qualified attorney who can properly assess your specific circumstances”.

The judge concluded by saying (at p. 12):

“Generative artificial intelligence presents a new frontier in the ongoing dialogue between technology and the law. Time will tell whether, as in the case of other technological advances, generative artificial intelligence will fulfil its promise to revolutionise the way we process information. But AI’s novelty does not mean that its use is not subject to longstanding legal principles, such as those governing the attorney-client privilege …”

UK v Secretary of State for the Home Department – supervise AI users and don’t put materials in the public domain

Closer to home, in a judgment also published in February 2026 at [2026] UKUT 81 (IAC),[1] lawyers have been warned by the Upper Tribunal that:

  • Uploading confidential documents into an open-source AI tool places the information on the internet in the public domain, breaches client confidentiality and waives legal privilege.
  • Such conduct can culminate in difficulties with legal regulators or the Information Commissioner.

In this matter, the Upper Tribunal dealt with two cases together in which documents put before the Court had included “hallucinated” cases (in grounds of appeal and in judicial review proceedings respectively). These problems had culminated in a “Hamid” hearing being arranged to probe the practices of the lawyers, like the procedure adopted in June 2025 in R (Ayinde) v LB Haringey, Al-Haroun v Qatar National Bank [2025] EWHC 1383.

Part of the focus was on “hallucinated” authorities, and the Upper Tribunal made clear that it could not afford to have more of its time absorbed by representatives placing false information before it.

In addition, there were particular concerns about supervision of AI use. In the second case before the Upper Tribunal, a supervisor had failed to check the work of a more junior fee-earner. At [37] the judgment states that:

“It would be easy to think that this is a case about the naïve use of generative AI, but it is not merely about that; it is principally about supervision and the obligation to ensure that the tribunal is not misled.”

The judgment made clear at [58] that:

“A solicitor or other legal professional who delegates their work to another fee-earner remains responsible for the supervision of their work and for ensuring its accuracy. Such supervisors must ensure that fee-earners under their supervision are aware of the dangers of using non-specialist AI for legal research and drafting. Failures to do so, or to undertake appropriate checks on the drafting of fee-earners is likely to result in a referral to the Solicitors Regulation Authority or other regulatory body. A supervisor who fails to ensure that the work of a more junior fee-earner does not contain false cases or citations is likely to be more culpable than a lawyer who fails to ensure that his own work is free from such “hallucinations”.”

However the truly “new” issue (in this jurisdiction) was the privilege and confidentiality ramifications of AI use. Having noted that some of the solicitors concerned had lacked understanding of how AI operates, the Upper Tribunal pointed out (at [60]) that:

“Uploading confidential documents into an open-source AI tool, such as ChatGPT, is to place this information on the internet in the public domain, and thus to breach client confidentiality and waive legal privilege, and any such conduct might itself warrant referral to the regulatory body and should, in any event, be referred to the Information Commissioner’s Office.”

Outcome foreseen?

Both judgments bear out a warning in the Judicial Guidance on AI in England & Wales dated October 2025:

“Do not enter any information into a public AI chatbot that is not already in the public domain. Do not enter information which is private or confidential. Any information that you input into a public AI chatbot should be seen as being published to all the world.”

The Bar Council’s updated Guidance dated November 2025, entitled “Considerations when using ChatGPT and generative artificial intelligence software based on large language models”, also suggests that barristers should be cautious about sharing any legally privileged or confidential information, or personal data, with a generative LLM system. Even as regards more bespoke systems, the Guidance warns (at [28]) that:

“barristers need fully to understand how the tool they are using operates in this respect, including any relevant protective setting”.

Of course, whilst lawyers have been warned about AI and its impact on privilege, the same awareness cannot necessarily be expected of clients. Anecdotal evidence suggests that users of legal services are increasingly using AI to generate instructions, or even running legal advice through AI to check it once received. Many lawyers’ client care letters contain paragraphs about privilege and how it can be lost; it could be time for these materials to be updated.

© Helen Evans KC and Isabel Barter, 4 New Square Chambers, February 2026

This article is not intended as a substitute for legal advice. Advice about a given set of facts should always be taken.

[1] Although it was promulgated in November 2025.

