In the recent years, numerous companies are incorporating Generative AI (GenAI) capabilities into existing applications by forming interconnected GenAI ecosystems consisting of semi/fully autonomous agents powered by GenAI services such as ChatGPT and Google Gemini.

For instance, start-ups and companies have started building AI agents who assist in completion of simple human tasks such as calendar booking, ordering groceries and products in real time. A group of researchers and developers have begun contemplating the possibility of attackers developing malwares to exploit the GenAI component of an agent and launching cyber-attacks on the entire GenAI ecosystem. The objective of the research is to find vulnerabilities in the AI ecosystem, since they anticipate the development and proliferation of new wave of cyberattacks like prompt injections attacks, jailbreaks and use of malwares like AI worms to become rampant in the AI ecosystem in the foreseeable future. Like traditional computer viruses which are rampant on the internet, there is a risk of AI malwares taking over the AI ecosystem. These AI malwares exploit the bad architecture design in the GenAI ecosystem, and not the GenAI services provided by any specific company. For instance, for the sake of drawing parallels with Web 2.0, Google or Meta may not be directly responsible for the proliferation of computer viruses which are openly operating on the world wide web, however it is their responsibility to protect their applications from any form of viruses. This begs the question; how well can we prepare the GenAI ecosystem from attackers who are deploying newer forms of cyberattacks?

A group of researchers have already created a computer worm called Morris II (paying homage to Morris worm developed in 1988) that targets GenAI ecosystem through the use of adversarial self-replicating prompts. A traditional computer worm is a type of malware that automatically propagates or self-replicates without the need for human interaction. The researchers demonstrated the capabilities of Morris II by launching it against GenAI powered email assistants in controlled testing environments. The AI worm has shown to be capable of breaching security measures in GenAI systems by attacking AI email assistants with the intent of stealing email data and sending spams. Through such demonstration, the researchers sought to prove that although some safety measures in ChatGPT and Gemini were broken, it was meant to be a warning sign about the bad architecture design within the AI ecosystem. After reporting their findings to OpenAI and Google, OpenAI commented saying it is striving to make its system more resilient. This raises an important question; can these companies be trusted with data if they cannot even guarantee that their systems are immune to cyberattacks? And how will this tie in with the accountability principles enshrined in the data protection laws around the world?

Let us keep the e-commerce and productivity enhancing applications powered by AI aside for a second and consider data sensitive industries like finance and banking. It is not untrue to state that such sectors are also familiarising themselves with AI and employing AI powered tools to increase efficiency and productivity. Although, one might argue that there is a net positive effect generated by employment of AI powered applications in general, companies and individuals alike must use these powerful tools with a sense of vigilance and sense of circumspection.
For instance, JPMorgan is cracking down the use of ChatGPT in workplace and the concern is not limited to inaccurate results as a product of hallucination but genuine concern over leakage of confidential and proprietary information. Add AI malwares into the picture, and it completely expedites the need for building countermeasures from such attacks and stronger data security practices against loss or theft of confidential and proprietary information.

It is worth mentioning that GenAI worms are not the only concern for developers, as security researchers have been able to jailbreak large language models (LLMs) like ChatGPT. Jailbreaking involves making GenAI services like ChatGPT and Gemini disregard safety rules and produce results which normally would be against the content and moderation policies of such services which further exasperates the prevailing issue of bias and content moderation within AI models. Additionally, security researchers have demonstrated the possibility of prompt injections against LLMs, which allows attackers to override and manipulate the original instructions in the prompt and replace it with special instructions, this detracting from an AI agent's ability to perform its allocated role.

While one may argue that the research on the vulnerabilities in AI ecosystem have been timely undertaken before deployment of the technology for large-scale commercial purposes, specifically in industries that deal with sensitive data, it may be best to err on the side of caution until such vulnerabilities are plugged-in specially in jurisdictions with weak or no data protection laws.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.