OpenAI has rolled out a publicly available ChatGPT application programming interface ("ChatGPT API"), enabling companies to incorporate customised ChatGPT functionality (such as content generation or summarisation) into their platforms or systems. However, using the ChatGPT API, or any API for that matter, comes with certain risks concerning cybersecurity, availability and functionality, and data protection, which will need to be carefully monitored and mitigated.

Cybersecurity

The ChatGPT API can serve as an additional attack vector into the organisation, with cybercriminals seeking to gain access to the company's API key. According to OpenAI, a compromised API key may allow a person to gain access to the API, which could not only deplete the company's API credit balance and lead to unexpected charges, but could also result in data loss and disruption to ChatGPT access. Furthermore, if a cybercriminal gains access to the ChatGPT API key, the cybercriminal may be able to unlawfully harvest or exfiltrate company data by accessing the company's databases. Cybersecurity therefore remains one of the key risks that a company will need to mitigate when using the ChatGPT API.

Availability and functionality

The availability and functionality of the ChatGPT API are dependent on a third party (ie, OpenAI). If the ChatGPT API experiences downtime, the company will likely experience disruption to the functionality of the ChatGPT component incorporated in its platform. Companies should evaluate how critical ChatGPT's functionality will be to the business and assess this against the extent to which they wish to rely on a third-party tool.
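One practical mitigation is to treat the ChatGPT component as an optional dependency rather than a hard one. The sketch below is illustrative only, assuming the REST endpoint is called directly with the requests library; the summarise helper, model name and fallback message are our own assumptions, not part of OpenAI's documentation.

```python
# Illustrative sketch: wrapping a ChatGPT API call so that downtime on
# OpenAI's side degrades the feature gracefully instead of breaking
# the company's platform.
import os
import requests

API_URL = "https://api.openai.com/v1/chat/completions"

def summarise(text: str, fallback: str = "Summary temporarily unavailable.") -> str:
    api_key = os.environ["OPENAI_API_KEY"]  # see the key-handling notes below
    try:
        response = requests.post(
            API_URL,
            headers={"Authorization": f"Bearer {api_key}"},
            json={
                "model": "gpt-3.5-turbo",
                "messages": [{"role": "user", "content": f"Summarise: {text}"}],
            },
            timeout=10,  # fail fast rather than hanging the platform
        )
        response.raise_for_status()
        return response.json()["choices"][0]["message"]["content"]
    except (requests.RequestException, KeyError, ValueError):
        # OpenAI is unreachable, slow, or returned an unexpected response:
        # fall back to degraded but still-working behaviour.
        return fallback
```

Designing the integration around a timeout and a fallback of this kind keeps the third-party dependency from becoming a single point of failure for the platform.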

Data protection and confidential information

To the extent that users disclose confidential or personal information on the platform, there is a risk that the API incorporated into the platform may expose the company to data privacy risks, including unauthorised access to, or disclosure of, sensitive information stored in third-party databases.

Although OpenAI recently updated its Data Usage Policy to reduce the retention period for personal data to 30 days before deletion and to exclude the use of such data for model improvement purposes, these changes may not entirely mitigate the risks, and companies may still be susceptible to disclosed company data being used as an input to further train ChatGPT.
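A common technical safeguard is to redact obvious personal data before a prompt leaves the company's environment. The following is a minimal sketch using two simple regular expressions; a real deployment would rely on a dedicated PII-detection tool rather than hand-written patterns.

```python
# Illustrative sketch: stripping obvious personal data from a prompt
# before it is sent to a third-party API.
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s()-]{7,}\d")

def redact(prompt: str) -> str:
    prompt = EMAIL.sub("[EMAIL REDACTED]", prompt)
    prompt = PHONE.sub("[PHONE REDACTED]", prompt)
    return prompt

print(redact("Contact Jane at jane@example.com or +27 11 555 0100."))
# -> Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED].
```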

Illegal and malicious conduct

Cybercriminals may be able to bypass the ChatGPT API's anti-abuse restrictions and use the API as a means to execute cyberattacks by generating malware code, phishing emails, and the like. Cybercriminals have become sophisticated in their approach to cyberattacks, and there is a risk that they will set up Telegram bots which are linked to the ChatGPT API and capable of prompting ChatGPT to generate such illegal and malicious content.

Security is one of the biggest concerns regarding the use of the ChatGPT API, and a company should ensure that it applies best practices and measures for API key safeguarding (a brief sketch follows the list below), which include:

  • always using a unique API key for each team member on the company's account;
  • not sharing API keys (which is prohibited by OpenAI's terms of use);
  • never committing API keys to repositories;
  • using a key management service; and
  • monitoring token usage and rotating API keys when needed.
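As a sketch of the first few practices, the key can be read from the environment at runtime (populated by a key management service at deploy time) rather than hard-coded in source files. The helper name and error message below are illustrative assumptions.

```python
# Illustrative sketch: keeping the API key out of source code and
# repositories by reading it from the environment.
import os

def load_api_key() -> str:
    # Hard-coded keys end up in version control; environment variables
    # populated from a key management service at deploy time do not.
    key = os.environ.get("OPENAI_API_KEY")
    if not key:
        raise RuntimeError(
            "OPENAI_API_KEY is not set; fetch it from your key "
            "management service instead of embedding it in code."
        )
    return key
```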

As an alternative to using the ChatGPT API, a company can develop its own AI language model. Large language models ("LLMs") can be replicated and deployed within an insulated enterprise environment. Recent examples, such as Stanford University's Alpaca, show that LLMs may be less costly to develop and may offer functionality and advantages similar to ChatGPT's. This approach may mitigate the company's exposure to risks surrounding its intellectual property, data privacy, and the disclosure of confidential information.
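To illustrate what "insulated" deployment can look like, the sketch below loads an open-source model from a local path using the Hugging Face transformers library, so that no prompt or output leaves the company's network. The model path is a placeholder assumption; any suitably licensed open model hosted internally would serve.

```python
# Illustrative sketch: running an open-source LLM entirely inside the
# enterprise environment with the Hugging Face transformers library.
from transformers import pipeline

# Loading from a local path keeps prompts and outputs on-premises.
generator = pipeline("text-generation", model="/models/local-llm")

result = generator("Summarise our incident-response procedure:", max_new_tokens=200)
print(result[0]["generated_text"])
```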

As a further alternative, OpenAI has recently started releasing ChatGPT plugins: tools that allow ChatGPT to connect to a company's API in order to retrieve real-time information or to assist users with certain actions. Since ChatGPT plugins are connected to a company's system and granted access to certain company information in real time, the risks are akin to those identified for the ChatGPT API, and could expose the company to security vulnerabilities, performance impacts and delays, and compatibility issues causing reduced functionality of the system.

The risks associated with the use of ChatGPT plugins can be mitigated by implementing the following measures (an illustrative sketch of an access-controlled endpoint follows the list):

  • evaluation and curation of plugins;
  • conducting security assessments;
  • updating plugins regularly;
  • setting up user access control;
  • establishing a contingency plan; and
  • training users on the acceptable use of the ChatGPT plugin.
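As a sketch of the user-access-control measure, the company-side endpoint that a plugin calls can require a per-plugin credential and return only the fields the plugin genuinely needs. The endpoint path, token handling and the use of Flask are assumptions for illustration, not OpenAI requirements.

```python
# Illustrative sketch: a minimal access-controlled company endpoint of
# the kind a ChatGPT plugin might call.
import hmac
import os

from flask import Flask, abort, jsonify, request

app = Flask(__name__)
PLUGIN_TOKEN = os.environ["PLUGIN_SERVICE_TOKEN"]  # provisioned per plugin

@app.get("/orders/<order_id>")
def get_order(order_id: str):
    supplied = request.headers.get("Authorization", "").removeprefix("Bearer ")
    # Constant-time comparison; reject any request lacking the expected token.
    if not hmac.compare_digest(supplied, PLUGIN_TOKEN):
        abort(401)
    # Expose only the fields the plugin genuinely needs.
    return jsonify({"order_id": order_id, "status": "shipped"})
```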

Therefore, if a company plans to integrate the ChatGPT API or plugins into its systems, it should ensure that it has implemented all necessary safeguards to mitigate the potential risks identified in this article. Most importantly, the company should store its API keys securely, maintain their confidentiality, and deploy measures to counteract security vulnerabilities arising from the use of the ChatGPT API or plugins.

Regardless of whether a company is looking to use the free or premium version of ChatGPT within its organisation, the ChatGPT API, or ChatGPT plugins, it should have a formal policy in place to ensure and promote the responsible use of artificial intelligence and associated tools within the organisation, and should provide adequate training to users on the associated risks.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.