
Cyber criminals have a new target: your mind

Humans have always been a weak link in cybersecurity strategies, and now threats are bypassing systems and going straight for people.

It was January 2024 when a Hong Kong-based finance employee of a multinational got an email from the company’s chief financial officer in the UK. The CFO was asking about making confidential transactions, which seemed odd, but a video call would clarify the situation.

The call included several senior figures from the organization, so the Hong Kong worker went ahead with making 15 payments, totaling HK$200 million (about US$25.6 million), to five local bank accounts. It was only when they mentioned the transactions to head office that things unraveled.

The CFO had never asked for the transfers, it turned out. The people on the call were not even real. The whole affair had been set up by a cybercriminal.

“I believe the fraudster downloaded the videos in advance and then used artificial intelligence to add fake voices to use in the video conference,” police senior superintendent Baron Chan Shun-ching later told Radio Television Hong Kong.

Nor was this the only example of hackers using AI. The Hong Kong police had encountered at least 20 instances where machine learning had been used to create deepfakes and obtain money by deception, CNN reported. Experts say the trend is barely getting started.

“It is scaling,” reports information security expert Todd Wade. “Criminal gangs are putting up call centers around the world. They are running them like businesses. And they are growing.”

Already, says Luke Secrist, CEO of the ethical hacking firm BuddoBot, “The sophistication in types of attacks and types of technology to aid attacks is getting pretty hairy.”

Several factors are driving the evolution of these threats. One is that AI can be used to develop scams that bypass traditional defenses and go straight for the weakest link in any cybersecurity strategy: humans.

“Social engineering is taking a bigger and bigger part of this landscape,” says Nick Biasini, head of outreach at Cisco Talos. “You’re starting to see more and more threat actors that aren’t necessarily technically sophisticated, but are good at manipulating people.”

“Because of that, they’ve become very successful. They have a lot of money. And when you have money, you can add a lot of sophistication.”

This sophistication is a second driver of AI-based threats. In the last year, the technology has advanced to the point where it is increasingly difficult to tell a deepfake from the real thing.

While it used to be easy to spot a deepfake through strange speech patterns or oddly drawn hands, these problems are rapidly being overcome. Even more worryingly, AI can now create realistic deepfakes based on vanishingly small training sets.

“There’s a lot of call centers that will call just so they can record your voice,” says Secrist. “The phone calls that you get, with no answer—they’re trying to record you saying ‘Hello, who is this?’ They just need a snippet.”

According to cybersecurity expert Mark T. Hofmann, “Thirty seconds of raw material—voice or video—is now enough to create deepfake clones in a quality that even your wife, husband or children could not tell from you. Nobody is safe anymore.”

In many cases, a cybercriminal may not even need to call you up. People’s social media feeds are full of audio and video material. Plus, “You also have this huge amount of data breaches happening,” says Wade.

“What people don’t realize is these data breaches can include your personal information, like your address, your phone number, email, social security number…. For social engineering attacks, they can use this information to credibly claim to be someone of authority.”

Once they have initiated a social engineering attack, cybercriminals play on mental weaknesses to get what they want. They could make you think your child has been kidnapped, for instance. Or that your job is on the line if you don’t do your boss a favor. 

There is not much that standard cyber defenses can do to prevent this. Hence, “when we talk about social engineering and deepfakes, the human firewall is more important than ever,” says Hofmann. “We need to inform people about new risks, without scaring them off.”

A good rule of thumb for the deepfake world is to be wary of any out-of-the-ordinary request, no matter who it seems to come from. Hofmann says families might like to agree on a code word to use over the phone in case of doubt.

In corporate environments, meanwhile, “asking security questions or calling back the real number is a good piece of advice,” says Hofmann. “They can steal your voice, but they can’t steal your knowledge.”

Biasini agrees that knowledge is the best way to defeat the deepfake threat, at least until authentication technology finds a way of distinguishing real identities from fake ones. “When we find this type of activity, we’re going to make sure that it gets exposed,” he says.

“One of the most important things you can do is bring this knowledge to the masses, because not everybody is aware of these types of threats.”