Deepfake Hoax: When AI Technology Becomes a Fraud Tool


Your supervisor invites you to a video conference attended by several finance personnel. During the meeting, he says he needs to make a confidential transaction and asks you to transfer money to a specified bank account. The faces and voices of the executives on the call match those of the real people exactly. Would you be suspicious?

This is a video scam that an employee at the Hong Kong branch of a multinational company actually experienced earlier this year. According to the information disclosed so far, the fraudsters used the company's YouTube videos and media materials obtained from other public channels to imitate the appearance and voice of the British company's senior executives, then used Deepfake (deep forgery) technology to produce counterfeit video, creating the effect of a multi-participant video conference. In fact, only one participant in the meeting, the targeted employee, was a real person.

Because the fraudster, posing as a supervisor, was issuing orders throughout the meeting, the employee had no opportunity to converse with the other participants. He was asked only to briefly introduce himself once, after which the fraudster ended the meeting and continued issuing instructions over instant messaging software.

The defrauded employee made 15 transfers totaling HK$200 million to five bank accounts, and only discovered the fraud after inquiring with the head office. Given the enormous sum involved, this has become the costliest "face-swapping" case in Hong Kong's history.

With the development of generative artificial intelligence, simulating another person's real appearance by swapping their face and voice with AI is no longer difficult. The technique is known as "deep forgery" (deepfake): neural networks are trained on large samples so that machine learning models can splice together an individual's voice, facial expressions and body movements to synthesize fake content.

Searching some open-source technology communities for keywords such as "GAN", an Interface News reporter could easily find open-source projects related to deep synthesis. Mainstream domestic e-commerce platforms have now blocked the keyword "AI face-swapping", but on some social platforms sellers of the relevant tools can still be reached through keyword searches.

The development and popularization of the technology has further blurred the boundary between the real and the virtual, and this gray zone where real and fake cannot be distinguished has become a breeding ground for crime. According to incomplete statistics from Reliance Wisdom, a company long focused on AI security, the number of AI fraud cases has grown rapidly in recent years, with at least 16 cases occurring nationwide in 2023 alone.

The underlying technology is mature

Tracing the technology back to its origins, the techniques behind deep forgery are already quite mature.

In 2014, the Generative Adversarial Network (GAN), proposed at the University of Montreal, raised the fidelity of data generation while also significantly lowering the threshold for deep synthesis. In recent years, besides GANs, technical routes such as diffusion models have also proven their value in improving generation fidelity.
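The adversarial idea behind GANs can be illustrated with a toy sketch. The example below is purely hypothetical (it is not the code behind any face-swapping tool): a two-parameter linear "generator" learns to mimic one-dimensional data drawn from N(4, 1), while a logistic "discriminator" tries to tell real samples from generated ones; each side is updated against the other with hand-derived gradients.

```python
# Toy 1-D GAN: generator G(z) = a*z + c mimics data from N(4, 1);
# discriminator D(x) = sigmoid(w*x + b) tries to separate real from fake.
import numpy as np

rng = np.random.default_rng(0)

w, b = 0.0, 0.0      # discriminator parameters
a, c = 1.0, 0.0      # generator parameters
lr, batch = 0.05, 64

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(2000):
    x_real = rng.normal(4.0, 1.0, batch)   # real samples
    z = rng.normal(0.0, 1.0, batch)        # generator noise input
    x_fake = a * z + c                     # generated samples

    # Discriminator: gradient ascent on E[log D(real)] + E[log(1 - D(fake))]
    d_real = sigmoid(w * x_real + b)
    d_fake = sigmoid(w * x_fake + b)
    w += lr * (np.mean((1 - d_real) * x_real) - np.mean(d_fake * x_fake))
    b += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator: gradient ascent on E[log D(fake)] (non-saturating loss)
    d_fake = sigmoid(w * x_fake + b)
    g_up = (1 - d_fake) * w                # d/dx_fake of log D(x_fake)
    a += lr * np.mean(g_up * z)            # chain rule through x_fake = a*z + c
    c += lr * np.mean(g_up)

# After training, the generator's output mean should have drifted toward 4.
gen_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + c))
print(gen_mean)
```

Production deepfake systems use deep convolutional networks over images rather than linear functions over scalars, but the training dynamic is the same: generation quality improves exactly because the generator is rewarded for fooling an ever-improving discriminator.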

A search for "GAN" on a paper preprint site returns 10,694 related papers; searching the open-source community GitHub with deep-synthesis-related keywords such as "GAN", "NeRF" and "TTS" yields more than 120,000 open-source projects.

The open-source atmosphere accelerates technical exchange and breakthroughs, but it also, objectively, means that technologies such as AI face-swapping and voice imitation are no longer mysterious.

Searching a video platform for the keyword "AI face-swapping", an Interface News reporter easily found related tutorials. Meanwhile, a face-swapping tool seller found through a social platform claimed that with only a few photos the tool could complete a one-click face swap in a video, and that the full set of tools and tutorials cost only 20 yuan.

The ever-lower barrier to use has turned deep forgery into a tool for many illegal and criminal activities. In April 2023, "Caro Lai Lai _", a blogger with millions of followers, posted that lawbreakers were using face-swapping technology to produce pornographic videos and selling them at clearly marked prices for profit. On social media platforms such as Xiaohongshu, many bloggers have posted that their faces have been "swapped" into strangers' photos.

Some industry insiders told Interface News that because the raw materials for deep forgery are personal audio, pictures and video, the more of this data that exists, the more realistic the trained videos become; bloggers and celebrities with abundant data on social media therefore easily become targets of deep forgery.
