Lessons from Ayinde v London Borough of Haringey [2025] EWHC 1383 (Admin)
Artificial intelligence (“AI”) tools, such as ChatGPT, are becoming increasingly popular in everyday life and have found their way into the legal profession, where tasks such as drafting and research are now commonly supported by AI. In this article, we discuss the recent case of Ayinde v London Borough of Haringey, which illustrates the risks and implications of using AI-generated content for legal drafting and research.
In the Ayinde case, a pupil barrister submitted legal documents to the High Court in support of their client’s case. These documents cited multiple legal authorities, including cases and statutory provisions, that turned out to be entirely fictitious. The judge noted several warning signs: inaccurate case law, misapplied legal principles and even American spelling, a common and telling indicator that the content may have come from an AI tool.
Although the barrister denied using AI, the court found the errors serious enough to make a wasted costs order and to refer the matter to the legal regulators. Importantly, the judge emphasised that anyone submitting documents to court must ensure they are accurate, well-founded and properly sourced, whether AI-generated or not.
While the case involved a pupil barrister, AI tools are also commonly used by self-represented individuals to save time and avoid legal costs. These tools can quickly draft documents, suggest arguments or summarise legal principles.
However, the significant risk of using AI, as emphasised in Ayinde, is that it often generates content that sounds credible but is not based on real or accurate information. This is known as “hallucination” and remains a major issue with AI: a tool can produce fake case names, misstate the law or cite non-existent legislation. If you do not verify what AI gives you, you risk misleading the court, with consequences that can include contempt proceedings, your claim being struck out and financial penalties such as being ordered to pay the other side’s legal costs.
In 2023, the Solicitors Regulation Authority (“SRA”) published its “Risk Outlook” report on the use of AI in the legal industry. In this report, the SRA set out a number of ways in which firms can manage the risks of AI, including appropriately training and supervising staff in AI safety and in where it should and should not be used. The most significant of these is the warning to “not trust an AI system to judge its own accuracy”.
The SRA also recommends that clients be made aware that a solicitor will be using AI on their case and how it will operate.
These measures, along with the other recommendations set out in the SRA’s report under the headings of safety and security, transparency, fairness and accountability, should together go a long way towards mitigating the risks.
Undoubtedly, AI has benefited the legal sector by expediting many aspects of its work and will continue to do so. It is, however, paramount that laypersons and solicitors alike recognise that AI, while helpful, can produce significant errors, as the Ayinde case demonstrates.
We recommend that AI be used safely and under supervised conditions, with any work produced using AI thoroughly checked by the individual or solicitor before it is submitted.
This article is for general information only and does not constitute legal or professional advice. Please note that the law may have changed since this article was published. We do not accept responsibility or liability for any actions taken based on the information in this article.