Guidance for lawyers seeking Intelligent and not Artificial results

Francesca Whitelaw KC highlights key points from recent guidance and authorities on the use of AI in legal practice.

As Artificial Intelligence rapidly becomes embedded in society, lawyers are seeking clear and practical advice about how to use AI tools efficiently in legal practice while guarding against professional, ethical and legal risk. The following recent guidance and case law may provide a useful starting point.

The latest guidance for the Judiciary

In October 2025, updated guidance was issued to judicial office holders and associated staff, replacing the previous April 2025 guidance. It also provides assistance to the legal profession more widely. The guidance articulates the overarching principle that any use of AI must protect the integrity of the administration of justice, and it emphasises personal responsibility for ensuring this. It helpfully contains a glossary of key AI-related terms and breaks down responsible use into several key principles, which may be summarised as follows:

a. Understand AI and Its Limitations

    • Recognise that public AI chatbots generate responses probabilistically, not from authoritative legal databases.
    • Outputs may be influenced by the quality and bias of the training data.
    • AI is not reliable for discovering new, unverified legal information — it is better used for confirmation or preliminary exploration.
    • There is a risk of inaccuracy, incompleteness, and bias.
    • Some LLMs are trained on material heavily weighted toward U.S. or older law, which may distort the presented “view” of English and Welsh law.

b. Confidentiality and Privacy

    • Do not input non-public, private, or confidential material into public AI systems.
    • Treat anything entered into public AI chatbots as though it may become publicly available.
    • If the chatbot allows disabling chat history, do so; but even with history off, assume data may be disclosed.
    • Be cautious about app permissions: refuse any that grant the AI tool access to device data.
    • If confidential data is inadvertently disclosed, report it via the Judicial Office’s data-incident processes.

c. Accuracy and Accountability

    • Always verify any AI-generated information before relying on it.
    • Recognise risks of hallucination: AI can invent cases, misquote, or misstate law, making factual errors.
    • Judicial office holders remain personally responsible for all material produced under their name.
    • Judges must still read and engage with underlying documents; AI is a support, not a substitute.
    • Use secure / approved devices and tools; avoid untrusted or insecure AI systems.
    • If staff (clerks, legal officers, etc.) are using AI, discuss and oversee their usage to ensure compliance with these principles.

d. Awareness of Use by Others

    • Be aware that legal representatives and unrepresented litigants may use AI in preparing documents; whether they need to refer to this depends on the context.
    • Judges may ask parties whether they used AI, and what checks they made on its outputs.
    • There is a risk of deepfakes or manipulated content (text, images, video) generated by AI — judicial office holders should remain vigilant.
    • Hidden / “white text” prompts may be embedded in documents; such practices increase risk and underscore the need for personal responsibility.

The guidance goes on to provide examples of potentially useful tasks that may be undertaken by AI tools, such as summarising long texts, drafting presentations or administrative tasks; tasks that are not recommended, such as legal research to find new, unverified information or deep legal analysis or reasoning; and red flags indicating AI use, such as references to unfamiliar cases or odd citations.

The latest Bar Council guidance

On 25 November 2025, the Bar Council published its own updated guidance, Considerations when using ChatGPT and generative artificial intelligence software based on large language models, which is explicitly not “guidance” for the purposes of the BSB Handbook I6.4, but rather principles and warnings reflecting current professional expectations, particularly in light of recent High Court judgments on professional responsibility. The stated purpose of the guidance is ‘To provide barristers with a summary of considerations if using ChatGPT or any other generative AI software based on large language models (LLMs)’. As well as OpenAI’s ChatGPT, it names Google’s Gemini, Perplexity, Harvey and Microsoft Copilot (also based on OpenAI technology) as general examples, and Lexis+ AI, Clio Duo and Thomson Reuters CoCounsel as law-specific examples, while recognising that these technologies and software are advancing rapidly and that professionals need to be flexible and adaptable.

The Bar Council guidance begins not just by defining LLMs but by explaining them: how they differ from traditional research tools and how they work. It highlights and explains ChatGPT specifically, as it remains the most widely known LLM and shares technology with Microsoft Copilot. This introduction sets up the focus of the guidance, which is the risks of LLMs: anthropomorphism; hallucinations; information disorder; bias in training data; mistakes and confidential training data; and cyber security vulnerabilities. The guidance then translates these risks into challenges for barristers, which are equally applicable to all lawyers using LLMs:

  1. Verification of outputs and human oversight are mandatory. These tools are aids, not substitutes for independent legal research, verification, analysis and judgment.
  2. ‘Black box syndrome’: the lack of explainability of the underlying training materials and internal decision-making processes again means these tools are no substitute for the exercise of professional judgement, quality legal analysis and expertise.
  3. Legal professional privilege (LPP) and confidential information must be respected and there must be adherence to data protection.
  4. Intellectual property (IP) must not be infringed.

The Solicitors Regulation Authority has yet to issue equivalent AI-specific guidance, but on 1 October 2025 the Law Society published an article entitled Generative AI: the essentials, which is intended to be a living document and makes useful reading alongside both of the above sets of guidance. Like the Bar Council guidance, it specifically considers the recent decision of the Divisional Court in R (on the application of Ayinde) v Haringey LBC [2025] EWHC 1383 (Admin).

Authorities    

Ayinde highlighted the key principles and risks associated with using generative AI tools such as ChatGPT in legal research and drafting and also explicitly demanded guidance for the legal profession:

  • Freely available generative AI tools are not capable of conducting reliable legal research. They can produce apparently coherent and plausible responses that may be entirely incorrect, cite non-existent sources, or fabricate quotations [6].
  • Lawyers who use AI for legal research have a professional duty to check the accuracy of such research by reference to authoritative sources (which include the Government’s database of legislation, the National Archives database of court judgments, the official Law Reports published by the Incorporated Council of Law Reporting for England and Wales and the databases of reputable legal publishers) before using it in their professional work [7].
  • This duty applies whether the research is conducted personally or by reliance on another (for example, a trainee solicitor, pupil barrister or via an internet search) [8].
  • The Court emphasised serious implications for the administration of justice and public confidence if AI is misused, and called for leadership within the legal profession to ensure that all legal service providers understand and comply with professional and ethical obligations regarding AI use [9 and passim].

Ayinde made clear that where a legal representative relies on false authorities as a result of unverified legal research, the Court’s decision in that case not to initiate contempt proceedings should not be treated as a precedent, and that referrals may also be made to professional regulators such as the Bar Standards Board (BSB) or the Solicitors Regulation Authority.

In MS (Bangladesh) (Professional Conduct: AI Generated Documents) [2025] UKUT 00305 (IAC), the Upper Tribunal applied the Ayinde guidance and referred a barrister to the BSB after they cited a false case generated by ChatGPT, having failed to check the citation’s authenticity.

There has now been a series of judgments highlighting the dangers of relying on AI tools for legal research without human checks, but lawyers should not be deterred from using AI responsibly as a useful tool. In Evans v Revenue and Customs Commissioners [2025] UKFTT 01112 (TC), a judge of the First-tier Tribunal (Tax Chamber) concluded his judgment by saying “I have used AI in the production of this decision”, referring to the previous April 2025 guidance to judicial office holders and describing AI as a “tool” that was “well-suited” to the application before him. He explained the way in which he had used it to summarise documents in the case before satisfying himself that the summaries were accurate. He cited Medpro Healthcare v HMRC [2025] UKUT 255 (TCC) at [43] in confirming that “the critical underlying principle is that it must be clear from a fair reading of the decision that the judge has brought their own independent judgment to bear in determining the issues before them”.

The key messages for lawyers that have emerged from this recent guidance and case law, then, are:

  • AI can be a useful secondary tool for summarising documents, drafting, and administrative tasks, but not for unverified legal research or legal analysis.
  • Lawyers must always independently verify AI-generated outputs against authoritative sources.
  • Placing unverified or false AI-generated material before a tribunal or court may result in disciplinary or regulatory action, or even contempt proceedings.
  • Confidentiality, data protection, and client privilege must be strictly observed when using AI tools.

AI offers great opportunity: it comes with equally great responsibility.

Francesca Whitelaw KC is a barrister at 5 Essex Chambers.

Further reading:

  • Guidance for the Judiciary
  • Updated guidance on generative AI for the Bar
  • Generative AI: the essentials
