OpenAI, the creator of the well-known ChatGPT, recently announced that its chatbot can now draw on more up-to-date information: it now has knowledge of the world up to April 2023. When the chatbot was first released in November 2022, it only had access to information up to September 2021. Despite that "informational black hole", ChatGPT still reached one million users within five days of its launch. With OpenAI's most recent announcement, ChatGPT's popularity is likely to climb once again. Although exciting, the increasing popularity and use of chatbots is also something that employers should monitor in their workplaces.

Why should employers be worried?

In a rapidly evolving digital landscape, safeguarding sensitive data has become a critical concern for most employers. Cybersecurity Ventures' Official Cybercrime Report 2022 predicts that the global annual cost of cybercrime will reach USD8-trillion in 2023. Conversational AI leaks have also become more frequent, exacerbating employers' data privacy concerns.

What are conversational AI leaks?

"Conversational AI leaks" is a phrase used to describe a loss of data where a chatbot is involved. These leaks involve incidents where sensitive data/information, which is fed into chatbots like ChatGPT, is unintentionally exposed. When information is disclosed to chatbots, the information is sent to a third-party server and is used to train the chatbot further. What that means in simple terms is that the information input into the chatbot may be used by the chatbot in the future generation of responses. This becomes particularly problematic when the chatbot has access to and is using confidential or sensitive information that should not be publicly available.

In the past few months alone, multiple tech giants have prohibited staff from making use of generative AI tools following conversational AI leaks. In those incidents, employees had accidentally disclosed sensitive company data whilst using chatbots to (i) identify errors in source code; (ii) optimise source code; and (iii) generate meeting notes from an uploaded recording. There have also been incidents where employees shared internal documents with chatbots.

How can employers regulate and control the use of generative AI in the workplace?

Historically, an employer's only option would have been to prohibit the use of ChatGPT for specific work-related tasks or queries. However, there may now be an alternative or additional option. OpenAI has launched new technology that allows individuals and employers alike not only to create their own chatbots but also to ring-fence the information on which those chatbots are trained. This innovation is valuable for many reasons but, most relevantly here, it may reduce the risk of conversational AI leaks.
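
As a rough illustration of the ring-fencing idea, the sketch below grounds a company chatbot in a vetted, employer-controlled knowledge base and instructs the model to answer only from that material. All names, the endpoint and the response format are hypothetical placeholders, not OpenAI's actual product interface:

import requests

# Hypothetical, employer-controlled knowledge base of vetted material.
VETTED_DOCS = {
    "leave-policy": "Employees accrue 1.5 days of annual leave per month...",
    "it-security": "Never share customer data with external services...",
}

def ask_company_bot(question: str) -> str:
    """Answer a question using only pre-approved internal documents."""
    context = "\n\n".join(VETTED_DOCS.values())
    payload = {
        "model": "example-chat-model",  # hypothetical model name
        "messages": [
            {
                "role": "system",
                "content": (
                    "You are an internal company assistant. Answer ONLY from "
                    "the documents below; if the answer is not there, say so."
                    "\n\n" + context
                ),
            },
            {"role": "user", "content": question},
        ],
    }
    response = requests.post(
        "https://api.example-ai-provider.com/v1/chat",  # hypothetical endpoint
        headers={"Authorization": "Bearer <API_KEY>"},
        json=payload,
        timeout=30,
    )
    # Hypothetical response shape; real providers differ.
    return response.json()["choices"][0]["message"]["content"]

print(ask_company_bot("How much annual leave do I accrue each month?"))

The ring-fence in this sketch is twofold: only vetted material enters the prompt, and the system instruction discourages the model from answering on any other basis.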

It is important to note, though, that these chatbots are only as good as the information given to them. They are unlikely to answer every question an employee might have or to provide every shortcut an employee might be looking for, which means there is always a risk that employees will turn to other chatbots for assistance. Employers and employees alike should therefore remain cautious about where they source their information and what information they share in the process.

Accordingly, the lessons from the conversational AI leaks we have seen to date are:

For employers:

  • Ensure data security is your top priority;
  • Customise any organisation-specific chatbots responsibly;
  • Train employees on how to use chatbots responsibly; and
  • Monitor chatbots' compliance with privacy regulations and data protection measures; and

For employees:

  • Be mindful of the sensitivity of the information shared with chatbots;
  • Confirm the accuracy of chatbot responses, particularly where the responses may influence critical decisions;
  • Familiarise yourself with your employer's policies on acceptable chatbot usage; and
  • Report any security or privacy concerns when using chatbots.

The above takeaways are useful when attempting to harness the potential of the ever-evolving generative AI space whilst simultaneously preserving data privacy and security.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.