Before delving into the details of the subject of this Article, I’d like to reassure you that, although the topic is Artificial Intelligence, it has been written by a real person, not a computer… at least for the time being!
A Supreme Court Judge in Canberra has just confronted, for the first time in an Australian Courtroom, the ramifications of ChatGPT. How did this “confrontation” between AI and the law play out?
The case involved the illegal sale of vapes. After being charged, the offender pleaded guilty. As part of the sentencing process (in which the Court decides the penalty to be imposed on the offender), written statements were tendered to the Judge, stated to be written by people who could comment on the offender’s character. One of the references given to the Judge was purportedly written by the offender’s brother, yet several aspects of the text within the reference appeared non-authentic and triggered concern in the Judge. For example, the Judge was perplexed that the reference did not identify the author as the offender’s brother or explain his close association with the offender. The statement also repeatedly expressed a positive opinion about the offender’s character with specific reference to cleaning (meaning, literally interpreted, “house cleaning”), which further perplexed the Judge. Justice Mossop, in raising concern about the reference, said: “It is certainly possible that something has been lost in translation. He [the defendant] may well be committed to cleanliness. However, the non-specific repetitive praise within the paragraph which places such an emphasis on his proactive attitude towards cleaning and strong aversion to disorder is strongly suggestive of the involvement of a large language model.”
It was then revealed in the Courtroom that the reference document had been generated with “the assistance of computer translation.”
Justice Mossop ruled that the “use of language within the document [referring to the reference] is consistent with an artificial intelligence generated document” and he determined to place “little weight” on the reference when considering the sentencing of the offender. In other words, because the Judge considered that the reference had been generated via AI, he was entitled to largely disregard it, giving it no influence on his determination of the penalty to be imposed in the case.
The lesson to be learnt from this groundbreaking case is that any use of AI in the conduct of a legal case must be approached with extreme care. If the content of an AI-generated document doesn’t accord with the particular situation or context of the case, it may be disregarded or, even worse, prove detrimental (rather than helpful) to the case.
Although it wasn’t stated by the Judge in this case, it’s easy to imagine a situation where a computer-generated document damages a case rather than assists it. For example, in both criminal and civil cases the credit of the individual involved is often highly influential on the Court’s decision. In the context of a Court case, “credit” refers to a person’s truthfulness, so it would be highly damaging to use a computer-generated document which (even innocently) creates a question in the Judge’s mind about the person’s truthfulness, simply because the document has a non-authentic appearance.
With the explosion in AI technologies, one type – large language models (LLMs) – has the potential to be extremely useful in the practice of law. LLMs can generate text at approximately 300 words per minute, roughly 7.5 times faster than the average human typing speed of about 40 words per minute. A 3,000-word legal document would take a human over an hour to type, while an LLM could generate it in about 10 minutes. Of course, writing legal documents involves more than typing speed, so much attention is now being given to how models can be optimised for the provision of legal services. Chatbots are one example of an LLM application, and such applications rely heavily on prompting techniques. For example, a prompt might include some examples of haikus before asking the LLM to generate its own haiku, a technique known as “few-shot” prompting (sketched below).
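For readers curious what such a prompt looks like in practice, here is a minimal Python sketch of a few-shot haiku prompt. The example haikus, the topic, and the `build_few_shot_prompt` helper are purely illustrative; the assembled text would then be sent to an LLM of your choice.

```python
# A minimal sketch of "few-shot" prompting: the prompt shows the model
# some example haikus before asking it to write a new one.

EXAMPLE_HAIKUS = [
    "An old silent pond...\nA frog jumps into the pond,\nsplash! Silence again.",
    "Autumn moonlight -\na worm digs silently\ninto the chestnut.",
]

def build_few_shot_prompt(topic: str) -> str:
    """Assemble a prompt containing example haikus, then the request."""
    parts = ["Here are some examples of haikus:\n"]
    for i, haiku in enumerate(EXAMPLE_HAIKUS, start=1):
        parts.append(f"Example {i}:\n{haiku}\n")
    parts.append(f"Now write your own haiku about {topic}.")
    return "\n".join(parts)

if __name__ == "__main__":
    # Print the assembled prompt; in a real application this string
    # would be submitted to an LLM rather than printed.
    print(build_few_shot_prompt("the law"))
```

The examples steer the model towards the desired form and style, which is why chatbot applications invest so much effort in prompt design.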
A word of caution (just like in the recent Court case in Canberra): LLMs often produce “hallucinations”, meaning text emitted by an LLM that seems plausible and coherent yet is factually incorrect. LLMs rely on statistical patterns in their training data, not verified facts. The danger in the legal field is that an LLM might produce citations, with proper formatting and syntax, for a Court case that does not exist! The LLM, much like a parrot, can repeat phrases it has “heard” in its training data without any grasp of their accuracy or relevance.
Just after Christmas, in late December last year, The New York Times commenced a case in a US Court suing ChatGPT-maker OpenAI and Microsoft, alleging that the companies’ powerful AI models used millions of its articles for training without permission. Having invested massively in its journalism, The Times says the AI chatbots free-ride on that work, building competing products without permission or payment.
Legal issues relating to copyright have become a major battleground with AI. In this eagerly followed Court case, high interest has been placed on the way the AI models that power ChatGPT were trained for years on content available on the internet, on the assumption that it was fair to use that content without compensation. The Times contends that such use was unlawful, not least because the new products create a potential rival to news publishers such as The Times.
Over 8 months of negotiation between The New York Times and OpenAI and Microsoft failed to produce a resolution, and so the publisher is seeking a jury Trial in its landmark legal claim. If successful, this case would have dramatic consequences, including in Australia. It is all about setting the guardrails for AI: defining its legal boundaries and providing clarity about the compensation owed for using other people’s work.
Another Court case commanding interest in this sector is a class-action lawsuit against OpenAI, commenced last year by several best-selling fiction writers, including “Game of Thrones” author George R. R. Martin, accusing the start-up of violating their copyrights to fuel ChatGPT.
This leads me to share an explanation of two legal expressions.
“Bona fide” is often used by lawyers (and in legal TV shows). It is the phrase which describes acting “in good faith”, without an intention to deceive. This expression is closely linked to the topic of “credit” in Court cases, including in the context of the use of AI.
Do you know the expression which represents the opposite position? It is “mala fide”, which captures actions or intentions in “bad faith”. It is a way of describing a situation where someone has acted with an intention to deceive.
To finish on a lighter note: “Why did the AI go on a diet? Because it had too many bytes!”