Angelika Hellweger of Rahman Ravelli details how artificial intelligence was used to defraud a multinational company.

An employee mistakenly paid out $25 million when fraudsters created a deepfake of his chief financial officer (CFO) in a video conference call.

The employee, who works for a multinational company in Hong Kong, was persuaded to part with the funds after a video call with what appeared to be his London-based CFO and several of his colleagues.

He had initially been suspicious after receiving an email requesting a secret transaction. But after seeing what appeared to be his CFO and fellow employees on the call, he followed the instructions and made 15 transfers into five local bank accounts, totalling HK$200 million (approximately US$25 million). He only realised what had happened when he contacted his company's head office.

Hong Kong police have confirmed that everyone on the video call was fake. The AI-generated video is believed to have been created from genuine online conferences.

According to reports, the case is the latest in a string of incidents in Hong Kong where fraudsters have used deepfake technology to modify existing video footage in order to cheat people out of money. The genuine video footage is downloaded, with AI then used to add fake voices. The fraudsters also use WhatsApp, email and one-to-one video conferences to boost their chances of defrauding company employees.

Six arrests have reportedly been made in connection with these incidents. These cases show how AI is being used by those looking to make large, illegal financial gains. And this is only likely to increase.

There are currently three times as many deepfake videos and pieces of AI-generated content online as there were in 2022, and eight times as many voice deepfakes. This fast-growing threat to those in business can be used to perpetrate fraud (as in Hong Kong), misrepresent brands, companies or individuals, and damage reputations through targeted attacks.

It is important, therefore, that the business world adopts appropriate methods for detecting deepfakes and preventing them from having a harmful effect. Staff need to be trained to recognise and report the risks, and identity authentication practices need to be fit for purpose. Unnatural body movements, audio that does not sync perfectly with a person's lips, and strange colouring or shadows are just some of the signs of a deepfake.

The success of deepfakes is dependent on their ability to trick the intended target. As they become increasingly widespread, those in business have to ensure they are aware of this and take the necessary precautions.

The content of this article is intended to provide a general guide to the subject matter. Specialist advice should be sought about your specific circumstances.