05.11.2025

British ethics guide on the use of artificial intelligence in the judicial system

On October 31, 2025, the "Artificial Intelligence (AI) Guidance for Judicial Office Holders" (the "Guidance"), an internal set of rules for courts on the use of artificial intelligence, came into force in the United Kingdom. Although tailored to the common law system, its general principles are applicable to any judicial system that adopts AI or interacts with it.

The first chapter of the Guidance is devoted to defining technical terms, the second sets out the principles governing the interaction between the judicial system and AI, and the final chapter addresses the risks and benefits of its use. The principles contained in the Guidance are: (I) Understanding AI and its applications, (II) Respecting confidentiality and privacy, (III) Ensuring accountability and accuracy, (IV) Being aware of bias, and (V) Taking responsibility.

We will go through each of these in detail below.

I. Understanding AI and its applications

Before using any AI tools, make sure you have a basic understanding of their capabilities and potential limitations. A key limitation is that public AI chatbots do not draw their answers from authoritative databases. They generate new text using an algorithm, based on the prompts they receive and the data they have been trained on. This means that the output of an AI chatbot is what the model predicts to be the most likely combination of words (based on the documents and data it holds as source information); it is not necessarily the most accurate answer.
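
To make this concrete, the short sketch below is a purely illustrative toy example in Python (the prompt, candidate words, and probabilities are invented and do not correspond to any real chatbot). It shows the kind of next-word selection described above: the statistically most likely continuation is chosen, and nothing in that step consults an authoritative source or checks whether the result is legally or factually correct.

# Purely illustrative: real language models use neural networks over huge
# vocabularies; the candidate words and probabilities below are invented.

# Hypothetical model estimates for the next word after the prompt
# "The leading English case on negligence is ..."
next_word_probabilities = {
    "Donoghue": 0.40,  # plausible continuation that happens to be correct
    "Hedley": 0.35,    # plausible-sounding but wrong for this prompt
    "Smith": 0.25,     # plausible-sounding and possibly fabricated
}

# The model simply selects (or samples) the most probable continuation;
# nothing here verifies accuracy against an authoritative legal database.
most_likely = max(next_word_probabilities, key=next_word_probabilities.get)
print(most_likely)  # prints "Donoghue" - chosen by probability, not by legal authority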

As with other information available on the internet, AI tools can be useful for finding material that you would recognize as correct but do not have at your fingertips; they are, however, an imprecise way of conducting research to find new information that you cannot verify. They are best viewed as a means of obtaining tentative confirmation of something, rather than as a source of immediately accurate facts.

II. Respecting confidentiality and privacy

Do not enter any information into a public AI chatbot that is not already in the public domain. Do not enter private or confidential information. Any information you enter into a public AI chatbot should be considered as being published worldwide. Current public AI chatbots remember every question you ask them, as well as any other information you enter. This information is then available to be used to answer questions from other users. As a result, anything you enter into them could become public knowledge.

You should disable chat history in public AI chatbots where this option is available, as doing so should prevent your data from being used to train the chatbot, and your conversations will then be permanently deleted after 30 days. This option is currently available in ChatGPT and Google Gemini, but not in other chatbots. Even with history disabled, you should assume that anything you enter may still be disclosed.

Keep in mind that some AI platforms, especially when used as smartphone apps, may request various permissions that give them access to information on your device; such requests should be refused.

III. Ensuring accountability and accuracy

The accuracy of any information provided to you by an AI tool must be verified before it is used or relied upon. Information provided by AI tools may be inaccurate, incomplete, misleading, or out of date. Even if it purports to state the law of England and Wales, it may not do so. The output may also include citations to source material that is itself fabricated.

AI tools may "hallucinate": they may invent fictitious cases, citations, or quotes; refer to legislation, articles, or legal texts that do not exist; provide incorrect or misleading information about the law or how it might apply; and make factual errors.

IV. Being aware of bias

AI tools based on large language models (LLMs) generate responses from the dataset they are trained on. Information generated by AI will therefore inevitably reflect errors and biases present in its training data; alignment strategies may mitigate these, but only partially.

V. Taking responsibility

Judicial office holders are personally responsible for materials produced on their behalf. Judges should always read the underlying documents. AI tools can be helpful, but they cannot replace the court's direct involvement in the analysis of evidence. Judges are generally not required to describe the research or preparatory work that may have been done to reach a judicial decision. Provided that these instructions are properly followed, there is no reason why generative artificial intelligence cannot be a potentially useful secondary tool.

Follow best practices to maintain your own and the court's security. Use work devices (rather than personal devices) to access AI tools. Before using an AI tool, ensure that it is secure. If there has been a potential security breach, refer to (II) above.

The Guidance goes on to recognize the advantages of AI for the work of the judiciary, while also highlighting the concrete risks that are emerging.

Among the potentially useful tasks:

· Artificial intelligence tools can summarize large bodies of text. As with any summary, care must be taken to ensure that the summary is accurate.

· Artificial intelligence tools can be used in writing presentations, for example, to provide suggestions for topics to cover.

· Administrative tasks can be performed by artificial intelligence, including composing, summarizing, and prioritizing emails, transcribing and summarizing meetings, and drafting memos.

The Guidance does not recommend using AI for the following tasks:

· Legal research: Artificial intelligence tools are an imprecise way to conduct research to find new information that you cannot independently verify. They can be useful as a way to remind you of materials that you would recognize as correct, although the final material should always be verified against authoritative legal sources;

· Legal analysis: Current public AI chatbots do not produce compelling analysis or reasoning.

The Guidance also lists indications that materials submitted to the court in a case file may have been produced by AI:

· References to cases that do not sound familiar or that contain unfamiliar citations (sometimes from abroad);

· Parties citing different case law on the same legal issues;

· Submissions that do not accord with a general understanding of the law in the relevant area;

· References that use American spelling or refer to foreign cases;

· Content that (at least superficially) appears to be very convincing and well written, but upon closer inspection contains obvious factual errors; and

· Accidental inclusion of an AI prompt, or of a leftover refusal message such as "as an artificial intelligence language model, I cannot..."

We believe that all these simple but effective guidelines would be useful to the Romanian judicial system in combating the errors that AI-generated materials could propagate in court proceedings. They are also a reminder that the Romanian judicial system should adopt its own guidance in this area as soon as possible, given the explosive growth, in both scale and capability, of chatbots in recent years.

An article by Dr. Victor Dobozi (vdobozi@stoica-asociatii.ro), Senior Partner, STOICA & ASOCIAȚII.
