The perils of artificial intelligence

4th April 2023

Much has been written about the opportunities and threats that artificial intelligence presents to lawyers and the law, but our recent experience highlights the dangers for a litigant without a lawyer of relying on AI, and what it is like to be up against a party who is using it.

“This must be the first time my opponent has been an AI!” were our barrister’s opening words in a recent telecon. The formatting of the emails we had been receiving from our opponent, and the absence of the spelling and grammatical errors that had been evident in his earlier correspondence with our client, had already led our team to suspect he was using an AI chatbot. Counsel clearly agreed. The new technology will be enticing for any litigant-in-person (LiP) acting without legal representation.

Firstly, the good news from a lawyer’s perspective. AI is lucid and polite. It thanks you for your correspondence, “appreciates your concerns” and hopes for an expeditious resolution to the matter. It doesn’t accuse you of lying, threaten to report you to the SRA or offer to meet you late at night in a dark car park.

However, what it also often doesn’t do is get the law right or apply it correctly to the facts. It cites the wrong sections of statutes and has a blind spot when it comes to complying with the Civil Procedure Rules and understanding the costs consequences of not doing so.

A practitioner might think: ‘So what? That’s no different to dealing with many LiPs directly.’ This may be true. Perhaps the biggest challenge a litigator faces with an LiP is persuading them that your exposition of the defects in their claim, and of the risks they face in taking the matter further, is correct and not just legal sabre-rattling. Well-made points that would hit home with a solicitor can, frustratingly, be dismissed by a lay individual.

AI, however, makes this process more challenging. You have to persuade your LiP that you, their opponent, have it right, and not the cutting-edge AI tool they are relying on. We were only able to make any real progress with our opponent when he stepped out from behind the AI and engaged directly with us, spelling mistakes and all. It was an arduous process which increased both time and costs.

This also reveals a risk to the LiP and raises ethical issues for lawyers. To what extent should a lawyer be obliged to inform the court if they consider their opponent has been using AI?

The courts have long acknowledged that some litigants may have little option but to represent themselves, and that their lack of representation will often justify making allowances, for instance in case management decisions.

But what if a judge, when exercising his or her discretion, is at risk of overestimating an LiP on the basis of AI-generated correspondence? The technology allows an individual, at the touch of a button, to quote from statutes they have not read and to articulate arguments they do not understand. A judge may rely on the correspondence as evidence of an LiP’s legal knowledge or state of mind, to the LiP’s detriment if they are held to know and understand more than they actually do.

And is a lawyer at risk of breaching their paramount duty not to mislead the court by inviting a judge to impute knowledge of the law to an LiP on the basis of correspondence the lawyer strongly suspects has been written by a chatbot and that the LiP does not understand? The SRA Handbook, after all, requires lawyers not to take unfair advantage of those they deal with and to act in a manner which promotes the proper operation of the legal system.

These issues will become more prevalent as LiPs, faced with the restrictions on legal aid and conditional fee agreements, turn to AI for legal guidance. It can only be a question of time before the courts have to grapple with the consequences.