Lawyer's use of AI program confirms its unreliability

A New York lawyer was embarrassed to admit he had used the artificial intelligence (AI) program ChatGPT to prepare his courtroom legal argument, after it emerged that the program had cited several court cases that didn't exist.

When the judge and opposing lawyers couldn't find the cases, the lawyer discovered ChatGPT had simply made up six judgments that favoured his argument.

The judge was damning, condemning the lawyer for citing "bogus judicial decisions, with bogus quotes and bogus internal citations".

The US lawyer, who had 30 years' experience but had never used ChatGPT before, was apologetic, saying he had even asked the program if the cases were real, and it had replied "yes". (Please see A lawyer used ChatGPT and now has to answer for its 'bogus' citations, The Verge, 28 May 2023.)

Threat of losing control to AI

The lawyer's ChatGPT research is one of a growing number of instances in which ChatGPT and similar programs have generated so-called "hallucinations", proving themselves at times unreliable, inaccurate and even capable of lying. And possibly deadly.

The Guardian reported that a US Air Force colonel described a simulated test in which an AI-powered drone was ordered to destroy an enemy target. When the operator sought to call the drone back, it turned and attacked the operator who was keeping it from completing its mission. (Please see US air force denies running simulation in which AI drone 'killed' operator, The Guardian, 2 June 2023.)

The US Defence Department denied the drone experiment took place, and the colonel later withdrew the comment, saying it was hypothetical. Nevertheless, the US is experimenting with AI control of fighter planes to supplement human pilots.

Tech leaders call for regulations to tame AI

Could we be on the verge of losing control of AI to the point where it threatens humans? The former chief executive of Google, Eric Schmidt, warned that AI has the potential to harm or kill people "in the near future".

Artificial intelligence pioneers and experts, including Elon Musk, have urged major companies to pause training AI systems for at least six months, so that protocols and laws can be developed to govern their development and deployment, arguing these tools present "profound risks to society and humanity". (Please see Musk, scientists call for halt to AI race sparked by ChatGPT, AP News, 20 March 2023.)

Hundreds of tech leaders signed an open letter on the website of the US-based Future of Life Institute, warning that AI labs are "locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one - not even their creators - can understand, predict, or reliably control".

Australian government taking action to develop responsible AI

The Australian government has established the National AI Centre to help develop responsible AI and digital practice nationally. On 1 June 2023 it released a discussion paper on regulating the technology, with submissions open until 26 July 2023. (Please see Supporting responsible AI: discussion paper.)

The paper is part of an eight-week public consultation to determine possible legislative changes, including whether high-risk uses of artificial intelligence should be banned. Changes could affect consumer, corporate, criminal, privacy, copyright, administrative, intellectual property and online safety laws.

The paper warns of many concerns about AI being used for harmful purposes, such as generating deepfakes to influence public opinion, spread misinformation, sow public distrust and anxiety, or encourage violence.

Regulating AI in a fast-developing technological landscape

Laws have always been slow to catch up with technological developments, and regulators will need to move quickly, as AI is advancing rapidly despite its faults.

Professionals such as lawyers, doctors and engineers, as well as students, should be wary of relying on AI in their work. While extremely useful, these programs can make mistakes, so "facts" should be carefully checked before being used.

Australian scholars writing in The Conversation say there is a growing impetus to regulate AI, but laws will struggle to keep pace with such a fast-developing technology. (Please see Calls to regulate AI are growing louder. But how exactly do you regulate a technology like this?, The Conversation, 5 April 2023.)

The European Union is framing laws that assign uses of AI to three risk categories: systems that create an "unacceptable risk", which will be banned; "high-risk" applications, which will be subject to specific legal requirements; and other applications, which will be left largely unregulated. (Please see The Artificial Intelligence Act, Future of Life Institute.)

Anneka Frayne
Business disputes and litigation
Stacks Law Firm

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.