Expert: Criminals Increasingly Interested in Deep Fake Technology

Published: 11 January 2021

Artificial Intelligence

Science fiction is becoming reality. Deep fake video and audio technologies are becoming increasingly sophisticated, and criminal groups are adopting them for their own activities. (Photo: Gerd Altmann, Pixabay, License)

By Zdravko Ljubas

A San Francisco-based cyber analytics firm warned last week that the use of deep fake video and audio technologies could become a major cyber threat to businesses within the next two years.

Deep fake technology creates convincing fictional video or audio content that can easily be used to spread misinformation. Such videos typically show digitally created likenesses of well-known people saying or doing things they never actually said or did.

CyberCube underlined in its latest report, Social Engineering: Blurring reality and fake, that the “ability to create realistic audio and video fakes using AI (artificial intelligence) and machine learning has grown steadily.”

It noted that recent technological advances and businesses’ increased dependence on video-based communication have accelerated these developments, prompting criminals to invest in technology to exploit the trend.

“New and emerging social engineering techniques like deep fake video and audio will fundamentally change the cyber threat landscape and are becoming both technically feasible and economically viable for criminal organizations of all sizes,” the author of the report, CyberCube’s head of cyber security strategy, Darren Thomson, warned.

Thanks to these technical developments and the increased use of video communication spurred by the COVID-19 pandemic, more and more video and audio samples of business people are now accessible online. Cyber criminals therefore “have a large supply of data from which to build photo-realistic simulations of individuals, which can then be used to influence and manipulate people,” according to CyberCube.

The data and analytics firm also pointed to mouth mapping, a technology developed at the University of Washington that can be used to “mimic the movement of the human mouth during speech with extreme accuracy.”

“Imagine a scenario in which a video of Elon Musk giving insider trading tips goes viral – only it’s not the real Elon Musk. Or a politician announces a new policy in a video clip, but once again, it’s not real,” CyberCube’s Darren Thomson said.

He said that such deep fake videos have already been seen in political campaigns.

“It’s only a matter of time before criminals apply the same technique to businesses and wealthy private individuals,” Thomson warned.

He added that it could be “as simple as a faked voicemail from a senior manager instructing staff to make a fraudulent payment or move funds to an account set up by a hacker.”

Besides deep fake video and audio technology, CyberCube said the report also examines the “growing use of traditional social engineering techniques – exploiting human vulnerabilities to gain access to personal information and protection systems.”

“One facet of this is social profiling, the technique of assembling the information necessary to create a fake identity for a target individual based on information available online or from physical sources such as refuse or stolen medical records,” the report read.

It stressed that the “blurring of domestic and business IT systems created by the pandemic combined with the growing use of online platforms is making social engineering easier for criminals.”

CyberCube therefore warned insurers that there is little they can do to combat the development of deep fake technologies but stressed that risk selection will become increasingly important for cyber underwriters.

“There is no silver bullet that will translate into zero losses. However, underwriters should still try to understand how a given risk stacks up to information security frameworks,” the report’s author, Darren Thomson, said.

An initial remedy, he said, could be for companies to train employees to recognize and respond to deep fake attacks.

CyberCube stressed that deep fake technology has the potential to create large losses, as it could be used to destabilise political systems or financial markets.

One such case was already recorded in 2019, when “cyber criminals used AI-based software to impersonate a chief executive’s voice to demand the fraudulent transfer of US$243,000,” according to CyberCube.