ChatGPT and other AI tools promise vast improvements in research and efficiency, but users must tread cautiously to ensure accuracy, protect confidentiality, and avoid other pitfalls

ARTIFICIAL INTELLIGENCE (AI) is a revolutionary force reshaping our everyday lives and professional environments. ChatGPT and other AI-powered chatbots are prime examples of this shift. An AI chatbot provides a conversational interface to a large language model (LLM) trained to respond to user prompts with natural-language responses. The LLM itself is a statistical model of word patterns built from a very large corpus of text. Stated simply, an LLM is good at predicting "what comes next" after a sequence of words. ChatGPT gives users access to GPT-4, an LLM released by OpenAI in March 2023.
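To make the "what comes next" idea concrete, the toy sketch below builds a simple bigram model from a single short sentence. It is purely illustrative: real LLMs such as GPT-4 use neural networks trained on enormous corpora, but the underlying prediction task is similar in spirit.

```python
# Toy illustration of "predicting what comes next": a bigram model
# built from a tiny corpus. Real LLMs are vastly more sophisticated,
# but the core task is the same: estimate the likely next word.
from collections import Counter, defaultdict

corpus = "the court granted the motion and the court dismissed the case".split()

# Count which word follows each word in the corpus.
following = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    following[prev_word][next_word] += 1

def predict_next(word):
    """Return the most likely next word and its estimated probability."""
    counts = following[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict_next("the"))  # ('court', 0.5) — "court" follows "the" in 2 of 4 occurrences
```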

AI chatbots can catalyze research and innovation. ChatGPT has been used to accelerate academic research, software development, product design, and work in other sectors. However, users should be wary of relying too heavily on AI chatbots. ChatGPT accepts prompts (e.g., general knowledge questions, essay topics, the beginning of a report to be continued, or code development requests) and outputs text responses. Users can engage in a "conversation" with the chatbot through successive prompts and responses, and the quality of the output is closely tied to the quality of the user's prompts and the flow of the conversation.
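As a rough illustration of this prompt-and-response pattern, the sketch below uses OpenAI's Python client to hold a two-turn "conversation" with a chat model. The model name and prompts are placeholders, and an API key is assumed to be configured; the key point is that the full message history is resent with each request, which is how the "conversation" is maintained.

```python
# Minimal sketch of a multi-turn "conversation" with a chat model,
# using OpenAI's Python client (pip install openai). Model name and
# prompts are illustrative; OPENAI_API_KEY must be set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The conversation state is simply the list of prior messages;
# the entire history is sent with every request.
messages = [{"role": "user", "content": "Summarize the fair use doctrine in two sentences."}]

reply = client.chat.completions.create(model="gpt-4", messages=messages)
messages.append({"role": "assistant", "content": reply.choices[0].message.content})

# A follow-up prompt builds on the earlier exchange.
messages.append({"role": "user", "content": "Now explain it for a software developer."})
reply = client.chat.completions.create(model="gpt-4", messages=messages)
print(reply.choices[0].message.content)
```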

A fundamental problem with relying on an AI chatbot's output is that it is often factually wrong. AI chatbots struggle with precise facts and calculations. They also tend to "hallucinate", that is, to fabricate information that sounds plausible but does not exist.

"A fundamental problem with relying on an AI chatbot's output is that it is often factually wrong. AI chatbots seem to struggle with precise facts and calculations"

An egregious example of hallucination in the legal context occurred in Mata v. Avianca, a 2023 case in the US District Court for the Southern District of New York. A lawyer for plaintiff Roberto Mata was responding to a motion by defendant Avianca to dismiss the case on the ground that the limitations period had elapsed. Mata's lawyer filed a brief citing six non-existent cases, complete with quotations from them. Under questioning from the judge, the lawyer admitted that he had used ChatGPT to write the brief and had not verified ChatGPT's output in a reliable legal research database. ChatGPT's underlying GPT-4 model was trained on an enormous dataset collected from publicly available internet content, including extensive case law and legal information. The breadth and depth of this training data, however, does not guarantee error-free output. Careful prompt engineering helps, for example by instructing the model not to improvise when it does not know the answer, as sketched below. Even with good prompts, there remains a risk that the model will hallucinate, producing answers that sound correct but contain factual errors.
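One common defensive technique is to pair a restrictive system prompt with low-temperature sampling. The sketch below assumes the same OpenAI Python client as above; the model name and the prompt wording are illustrative only, and this approach reduces, but does not eliminate, the risk of hallucination.

```python
# Sketch of defensive prompting: a system message that tells the model
# to refuse rather than invent, plus temperature=0 to curb "creative"
# output. Mitigates hallucination risk; it does not eliminate it.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a careful legal research assistant. Answer only from the "
    "source text supplied by the user. If the answer is not in that text, "
    "reply exactly: 'I cannot verify this.' Never invent case names or citations."
)

def ask(source_text: str, question: str) -> str:
    reply = client.chat.completions.create(
        model="gpt-4",
        temperature=0,  # deterministic, least "creative" sampling
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Source text:\n{source_text}\n\nQuestion: {question}"},
        ],
    )
    return reply.choices[0].message.content
```

Even with such guardrails, every citation in the output must still be checked against a reliable legal research database before it is relied on.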

Users must also be mindful of the licenses they may grant when using an AI chatbot. OpenAI's Terms of Use grant OpenAI a license to use ChatGPT input prompts and outputs to improve model performance. This allows OpenAI to store sensitive and confidential information a user has entered, such as product specifications, customer data, and business strategies. One user's information may be incorporated into ChatGPT's output for another user. This could be particularly problematic if that information discloses technology development secrets, such as an invention, business strategy, or other confidential information, potentially resulting in a public disclosure that can render a patent invalid. A user may request to opt out of OpenAI's use of user content to train its models; however, OpenAI can still retain the user's data for monitoring purposes for up to 30 days.

"As the AI revolution progresses, lawyers must be careful to avoid using unreliable tools and to help clients protect themselves from unintended consequences of using AI when researching and developing their products"

Chatbots and other AI tools will revolutionize technology law, both in the challenges clients bring and in how lawyers work day to day to serve their clients' needs. As the AI revolution progresses, lawyers must be careful to avoid using unreliable tools and must help clients protect themselves from the unintended consequences of using AI when researching and developing their products.

Originally published in Lexpert's special edition on Technology and Health Sciences.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.