Legal AI is developing at a dizzying rate. In January 2025 the Law Society reported that 75% of larger law firms were using it, a figure that has doubtless increased in the six months since. Its uses already include reviewing and generating contracts, cutting through conveyancing searches and other document-heavy tasks, conducting legal research, devising chronologies, and drafting letters. The pace of its development and adoption has not so far been matched by updated guidance from either the Solicitors Regulation Authority (“SRA”) or the Bar Council, although the Law Society appears to have updated its material more recently.[1]
In this article, Helen Evans KC, Ben Smiley and Melody Hadfield consider some of the lessons learned so far, and the likely future flashpoints in terms of both regulation and court procedure.
AI and research
Most of the notable issues with generative AI to date, in both this country and North America, have related to whether AI is prone to “hallucinating” cases. The most notorious US case, Mata v Avianca Inc, concerned fake authorities being not only cited in argument but also attached to an affidavit. The lawyer in question even conducted what appeared to be a conversation with ChatGPT in which he asked whether a case he had cited was real; the AI answered “yes” and assured him that the source for the information was “legal databases”.[2]
Mata brings to life one of the main problems with generative AI. The fact that it talks like a human tempts users to trust it. The Bar Council’s Guidance calls this “anthropomorphism” and warns that large language models (“LLMs”) “do not have human characteristics in any relevant sense”. Hammering the point home, the Bar Council explains that LLMs operate:
“a very sophisticated version of the sort of predictive text systems that people are familiar with from email and chat apps on smart phones, in which the algorithm predicts what the next word is likely to be”.
Viewed in this way, the risks of carrying out legal research using AI are clear. It is hard to see how predictive text systems could sort what is real from what is not. There is therefore no substitute for a human reading the results of any AI search, including going through the underlying cases suggested. If the lawyer in Mata had done so, he would have found (as the US court concluded) that the legal analysis in the fake cases was “gibberish”.
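To make the Bar Council’s predictive-text analogy concrete, the following is a deliberately toy sketch in Python (our own illustration, not any real legal AI product): a model that learns only which word tends to follow which can stitch together fluent “citations” that never existed.

```python
import random
from collections import defaultdict

# A deliberately tiny "predictive text" model: it learns only which word tends
# to follow which. It has no concept of whether a case actually exists.
corpus = (
    "in Mata v Avianca the court held that the claim failed . "
    "in Smith v Jones the court held that the appeal succeeded . "
    "in Brown v Board the court held that the claim succeeded ."
).split()

next_words = defaultdict(list)
for current, following in zip(corpus, corpus[1:]):
    next_words[current].append(following)

def generate(start: str, length: int = 12) -> str:
    word, output = start, [start]
    for _ in range(length):
        if word not in next_words:
            break
        word = random.choice(next_words[word])  # statistically plausible, nothing more
        output.append(word)
    return " ".join(output)

# May print e.g. "in Smith v Board the court held that the appeal failed ."
# "Smith v Board" appears nowhere in the training data: the output is fluent
# and law-like, but plausibility is the only test the model ever applies.
print(generate("in"))
```

Real LLMs are vastly more sophisticated, but the underlying point stands: fluency is no guarantee of truth.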
Closer to home, in the recent conjoined cases of Ayinde v LB Haringey and Al Haroun v Qatar National Bank [2025] EWHC 1383 the Divisional Court stated that searches on freely available AI tools can:
“produce apparently coherent and plausible responses to prompts, but those coherent and plausible responses may turn out to be entirely incorrect. The responses may make confident assertions that are simply untrue. They may cite sources that do not exist”.
The Divisional Court therefore pointed out the need for users of AI to check the accuracy of their searches against authoritative sources. But it did not think it enough for lawyers merely to ensure that their own research was valid. Instead, it identified a broader training and supervision issue (at [9]):
“There are serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused. In those circumstances, practical and effective measures must now be taken by those within the legal profession with individual leadership responsibilities (such as heads of chambers and managing partners) and by those with the responsibility for regulating the provision of legal services”.
We note that the refreshed Judicial Guidance on the Use of AI (published in April 2025) has identified legal research and analysis as a particularly dangerous area. In fact, it features on the list of “tasks not recommended” for AI use.
For our part, we see the recent learning about “hallucinations” as merely the start of the process of identifying and managing risk. Legal practices need to ensure that they know what their lawyers are using AI for, and whether those lawyers understand its limitations and pitfalls.
This is not to knock AI or to suggest that it is too dangerous to use. AI clearly has great potential to streamline tasks and cut costs. It is instead to suggest that its use must be deliberate and monitored, much as the courts encouraged parties to be open about their adoption of technology-assisted review in disclosure exercises when those tools emerged some ten years ago.[3] We note that in the disclosure context, practices have developed which encourage transparency in advance of using technology and allow humans to be involved more heavily if required.[4] The Bar Council’s Guidance points out that a similar approach has been taken to the use of generative AI in court materials in Manitoba, Canada, and we would not be surprised if it develops here too.
AI and court procedure
Any review of how AI is being used for court work will necessarily need to focus on whether there are any “red lines” that cannot be crossed. This is because misuse of AI in relation to court documents carries with it not only disciplinary risk for lawyers but, in extreme cases, the possibility of contempt charges.[5]
For our part, we find it difficult to see how generative AI could be used to produce a witness statement without falling foul of court rules (most particularly the requirements that statements be in the witness’s own words and set out their own recollections)[6]. However, this does not mean that AI can have no role at all in relation to witness statements. AI notetakers may, for instance, have a part to play in transcribing meetings, with appropriate checks of course.
The trouble is that once fee earners get into a pattern of using AI to generate first drafts of documents, their “radar” will not always identify that some types of document cannot properly be generated in that way. This danger therefore needs to be headed off by information gathering about AI use within firms, the development of policies, and supervision of the type identified in Ayinde. We note that the surveyors’ regulator, RICS, has produced a draft Professional Standard on the Responsible Use of AI (on which it is currently consulting) which suggests that firms could be required to carry out formal risk assessments of AI use and to operate a risk register. We commend RICS’s draft document to those dealing with risk in legal practices: it synthesises numerous issues that apply to lawyers as well as surveyors, and gives a revealing glimpse of a regulator’s more recent thinking.
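By way of illustration only, and emphatically not a format prescribed by RICS or any legal regulator, a simple risk-register entry for an AI tool might capture fields such as the following (all names are our own hypothetical choices):

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical sketch of a risk-register entry for an AI tool. The field names
# are our own illustrative choices, not anything prescribed by RICS or the SRA.
@dataclass
class AIToolRiskEntry:
    tool_name: str
    permitted_uses: list[str]        # e.g. first-pass document summaries
    prohibited_uses: list[str]       # e.g. drafting witness statements
    confidentiality_assessed: bool   # has the provider's data handling been reviewed?
    human_review_required: bool      # must a fee earner check all output?
    owner: str                       # who supervises use of this tool
    last_reviewed: date

register = [
    AIToolRiskEntry(
        tool_name="Example contract-review tool",  # hypothetical
        permitted_uses=["first-pass lease review"],
        prohibited_uses=["unchecked client-facing advice"],
        confidentiality_assessed=True,
        human_review_required=True,
        owner="Risk and Compliance Partner",
        last_reviewed=date(2025, 7, 1),
    ),
]
```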
Another area where we see AI having the potential to disrupt existing practices is disclosure. As noted above, AI has in fact been used for some time to conduct “technology-assisted reviews” of which documents are disclosable. What is new, however, is the ability of generative AI to allow unscrupulous litigants to create documents. It seems to us that some areas of law will be particularly susceptible to forgeries, potential examples being fraud cases or bitter financial remedy disputes between divorcing spouses. More is likely to be asked of lawyers in identifying whether documents are genuine. This may lead to a more suspicious mindset becoming the norm when reviewing disclosure, and to more widespread analysis of metadata and other digital “tells”.[7]
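As a minimal sketch of the kind of digital “tell” we have in mind, assuming the open-source pypdf library and a hypothetical file name, a disclosed document’s embedded metadata can be checked against the history a litigant asserts for it:

```python
from pypdf import PdfReader  # assumes the open-source pypdf library is installed

# Sketch: inspect the metadata embedded in a disclosed PDF. A "2019 contract"
# whose creation date is last month, or whose producer software post-dates the
# events described, invites further forensic questions. File name is hypothetical.
reader = PdfReader("disclosed_contract.pdf")
meta = reader.metadata

if meta is not None:
    print("Author:   ", meta.author)
    print("Producer: ", meta.producer)        # the software that generated the file
    print("Created:  ", meta.creation_date)   # compare with the date the document claims
    print("Modified: ", meta.modification_date)
else:
    print("No embedded metadata - that absence may itself be worth noting.")
```

Metadata can of course itself be manipulated, which is precisely why specialist forensic analysis may increasingly be needed.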
There is currently no guidance in the Civil Procedure Rules or from any legal regulators in England and Wales about evaluating “deepfake” evidence in court proceedings. This is one of many areas where we think more assistance from regulators or trade bodies may be required.
AI and transactional work
AI is increasingly being used in the transactional sphere. In the conveyancing context, for example, tools are already available to review leases, summarise the content of such documents[8] and purchase relevant Land Registry materials. Other legal tools advertise that they can compare contracts against market norms.
Although the concerns identified below apply across the board, to our mind the use of AI on transactions brings to the fore at least the following questions:
- Have we done enough due diligence on the AI tool our firm is using?
- Are we doing enough checks that the AI is spotting what it needs to spot?
- Do our clients need to know and consent to the fact that we are using AI to replace work that would otherwise be done by a human?
- Is the firm’s insurer aware of and happy with this?
- What is the AI doing with the material we are inputting? Does our client need to know and consent to this?
At present, whilst the SRA has stated in its Risk Outlook on AI that clients should be informed of how AI is involved in their matters, neither the Bar Standards Board nor the SRA has published clear guidance about the extent to which this needs to be done.
In its draft Professional Standard on the Responsible Use of AI, RICS makes numerous observations about the use of AI which may also be relevant for lawyers to consider, including:
- Have firms carried out an assessment of whether AI is the most appropriate tool to use, including considering the risk of error and/or bias in AI systems?
- What is being done to scrutinise the output of AI?
The Law Society’s materials on AI end with a helpful checklist covering the lifecycle of embarking on, and reviewing, the use of AI.
Confidentiality and privilege
It is in the nature of LLMs that they learn from and use the information put into them. By feeding a client’s information into them, a lawyer therefore runs the risk of disseminating that material to a wider audience, a potential pitfall that is particularly pronounced if widely available tools like ChatGPT are used. As an illustration of the point, the SRA’s Risk Outlook on AI refers to an incident in 2023 in which information about ChatGPT users’ credit card details mistakenly became visible to other users.[9] There is also a risk of AI providers being hacked.[10]
We note that the refreshed Judicial Guidance on the Use of AI suggests that judges should “treat all AI tools as being capable of making public anything entered into them”. Likewise, the Bar Council’s Guidance says “be extremely vigilant not to share with a generative AI system any legally privileged or confidential information”.[11]
Both the SRA and the Bar Council have pointed to the need to protect client information. The Law Society has flagged the same issue in its article “Generative AI: the Essentials”, and has also identified a question about who owns both input and output data.
While AI products designed for the legal market are likely to have more stringent inbuilt safeguards than public tools, it is the responsibility of lawyers to understand those safeguards and how confidential data will be kept safe. In this regard, we point out that the information put into AI models goes far beyond questions asked for legal research: it can include uploading suites of documents to be summarised or turned into chronologies. AI tools can therefore end up holding a great deal of client information. The RICS draft Professional Standard on the Responsible Use of AI explains the need for firms to ensure that their AI systems are properly secured and encrypted, and for due diligence about confidentiality and data protection to be carried out when negotiating with providers. The Law Society material makes similar points.
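As a crude sketch of the data-minimisation principle, our own hypothetical example rather than anything prescribed by the regulators, and no substitute for proper due diligence, obvious client identifiers can be stripped before material leaves a firm’s systems:

```python
import re

# Hypothetical sketch: redact obvious identifiers before any text is sent to an
# external AI service. Real safeguards need far more than pattern matching;
# this only illustrates the principle of minimising what leaves the firm.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "UK PHONE": re.compile(r"\b(?:0|\+44)\d[\d ]{8,12}\d\b"),
}

def redact(text: str, client_names: list[str]) -> str:
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    for name in client_names:  # names known to the matter, supplied by the fee earner
        text = text.replace(name, "[CLIENT REDACTED]")
    return text

print(redact("Contact Jane Doe at jane@example.com or 020 7946 0958.",
             client_names=["Jane Doe"]))
# -> Contact [CLIENT REDACTED] at [EMAIL REDACTED] or [UK PHONE REDACTED].
```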
AI and retainer letters
The first “AI law firm”, Garfield AI, states on its website that it will not offer advice on the merits of a case and will instead use AI to generate documents. Its website even goes so far as to say:
“Garfield is not a solicitor or a barrister. In fact, Garfield is not even human! It hasn’t sat any law degrees. It is a legal assistant that is a machine learning application”.
Although this is an extreme example, it does demonstrate the need for firms to think about where AI use needs to be declared and agreed. Thought needs to be given to the position where AI use deviates from what a client would usually expect.[12] We note that whilst the Court of Appeal has in the past expressed a wish to support the “unbundling” of legal services[13], arguments about limiting work do not always survive in front of judges considering the scope of a lawyer’s duties, particularly if the position has not been made clear to a client in advance.
Again, the RICS draft Professional Standard on the Responsible Use of AI suggests that surveyors should be transparent with their clients about AI and disclose key information in their engagement terms about how it will be used. RICS suggests that surveyors may be required to give clients numerous details, including what AI tools are used, how they have been tested internally and what insurance cover there is for their use. The Law Society material points out that the SRA has not yet produced specific guidance about the use of AI from a client care perspective.
Another example of where AI might lead to arguments over the scope of any duties assumed is the use of a chatbot to provide advice or assistance. Similar issues could arise to those in the recent case of Miller v Irwin Mitchell [2024] EWCA Civ 53, where it was held that solicitors offering a free legal helpline did owe some duties, particularly to ensure that the advice in fact given was correct.[14]
The message is this: is AI affecting your practice in such a way that clients need to understand its use, and its possible limitations?
AI and working with others
One further issue to consider is how other people you are working with are using AI. In Al Haroun v Qatar National Bank, a solicitor lodged witness statements at court (one in his own name) which included numerous “fake” cases that were the fruits of legal research carried out by his client using AI.
The message remains that work done by others has to be checked, and it may be that, increasingly, other people’s use of AI has to be understood. Even where work is carried out by counsel, solicitors cannot fully delegate responsibility for the final product.[15]
Conclusions
It is hard to summarise the position better than the Divisional Court did in Ayinde v LB Haringey and Al Haroun v Qatar National Bank:
“Artificial intelligence is a tool that carries with it risks as well as opportunities. Its use must take place therefore with an appropriate degree of oversight, and within a regulatory framework that ensures compliance with well-established professional and ethical standards if public confidence in the administration of justice is to be maintained”.
In circumstances where the most recent SRA and Bar Council guidance is some 18 months old, and despite the Law Society’s more helpful and more recent input, it currently falls on the shoulders of law firms and sets of chambers to think through how AI is being used, and what policies, changes to practice and changes to client terms it requires.
© Helen Evans KC, Ben Smiley and Melody Hadfield, 4 New Square Chambers, July 2025
Disclaimer: this article is not to be relied on as legal advice. The circumstances of each case differ and legal advice specific to the individual case should always be sought.
[1] See the Law Society’s Guidance “Generative AI: the Essentials”, updated August 2024 (and possibly May 2025 as well).
[2] In fact, AI has not just hallucinated cases: it has also cited existing authorities as support for propositions not in fact contained in them. Checking the mere existence of an authority is therefore not enough.
[3] See e.g. cases such as Triumph Controls UK Limited v Primus International Holding Co [2018] EWHC 176 (TCC).
[4] See for instance the wording of the e-disclosure questionnaire. Part of the reason for this is the difficulty of unpicking what has happened after the event. We note that AI uses “black box” models, which are hard to probe: see para [18] of the Bar Council’s Guidance on AI.
[5] Ayinde v LB Haringey and Al Haroun v Qatar National Bank [2025] EWHC 1383.
[6] See e.g. CPR 32 PD, para 18.
[7] See for instance the document analysis in Crypto Open Patent Alliance v Craig Steven Wright [2024] EWHC 1198.
[8] The refreshed Judicial Guidance on the Use of AI does identify “summarising large bodies of text” as a potentially useful application for AI tools.
[9] By reference to “Sharing sensitive business data with ChatGPT could be risky” (CSO Online) and “The ChatGPT bug exposed more private data than previously thought” (Mashable).
[10] The Law Society’s materials cover cyber security concerns.
[11] It also refers to data protection concerns, which are beyond the scope of this paper. The Law Society materials also address this issue.
[12] The RICS draft Professional Standard on the Responsible Use of AI has useful material on this.
[13] Minkin v Landsberg [2015] EWCA Civ 1152.
[14] In Miller, the firm was held not liable for failing to give further nuanced advice beyond that in fact given. This is very much a fact-specific area, and one can imagine similar problems arising in the chatbot sphere.
[15] See eg Ridehalgh v Horsefield [1994] Ch 205 in the wasted costs context.