Bar Council guidance says there is “nothing inherently improper” about using artificial intelligence

There is nothing "inherently improper" about barristers using artificial intelligence programmes like ChatGPT in their work, as long as the practitioner understands the software and uses it responsibly, guidance from the Bar Council has said.

The guidance, issued on Tuesday (30 January), added that: "The best placed barristers will be those that make the effort to understand these systems and, if appropriate, use them as tools in their practice, while maintaining control and integrity in their use."

The document is aimed at barristers who are using or are considering using ChatGPT and other generative artificial intelligence (AI) large language model systems (LLMs) like Google's Bard in their work.

LLMs are a form of artificial intelligence that can create coherent written responses to questions. They are trained on vast amounts of text data.

The guidance sets out the key risks with LLMs, such as anthropomorphism, hallucinations, information disorder, bias in the training data, mistakes, and breaches of confidentiality.

It said it is important for barristers to verify the output of LLM software, given the risk of 'hallucinations' and biased answers from LLMs.

It also warned of 'black box syndrome', which refers to difficulty understanding the internal decision-making process or providing a clear explanation for the LLM's output.

On this point, the guidance said LLMs "should not be a substitute for the exercise of professional judgment, quality legal analysis and the expertise which clients, courts and society expect from barristers".

It also emphasised that barristers should be "extremely vigilant" not to share with an LLM system any legally privileged or confidential information.

Barristers should also critically assess whether content generated by LLMs might violate intellectual property rights and be careful not to use words which may breach trademarks, the guidance said.

Lastly, it noted the importance of keeping up to date with relevant Civil Procedure Rules, "which in the future may implement rules/practice directions on the use of LLMs, for example, requiring parties to disclose when they have used generative AI in the preparation of materials, as has been adopted by the Court of the King's Bench in Manitoba".

The guidance added: "In conclusion, technical progress and the pressures of competition may lead to the increasing adoption of AI, including LLMs. The best-placed barristers will be those that make the effort to understand these systems and, if appropriate, use them as tools in their practice, while maintaining control and integrity in their use.

"There is nothing inherently improper about using reliable AI tools for augmenting legal services; but they must be properly understood by the individual practitioner and used responsibly, ensuring accuracy and compliance with applicable laws, rules and professional codes of conduct."

Launching the guidance, Sam Townend KC, Chair of the Bar Council, said: "The growth of AI tools in the legal sector is inevitable and, as the guidance explains, the best-placed barristers will be those who make the efforts to understand these systems so that they can be used with control and integrity. Any use of AI must be done carefully to safeguard client confidentiality and maintain trust and confidence, privacy, and compliance with applicable laws.

"This Bar Council guidance sets out the key risks and considerations and will support barristers using LLMs to adhere to legal and ethical standards. It will be kept under review and practitioners will need to be vigilant and adapt as the legal and regulatory landscape changes."

Adam Carey