5 Emerging GenAI Security Threats In 2024

From personalized attacks and malware evasion to audio deepfakes, here’s what you need to know about five of the GenAI-powered threats that security experts are watching right now.

GenAI Threats To Know

While it’s not always possible to pinpoint exactly where generative AI has played a role in a cyberattack, organizations can assume it’s now ubiquitous in phishing and social engineering attacks, according to cybersecurity experts. Given the arrival of tools such as OpenAI’s ChatGPT, and their obvious usefulness to attackers, “you can assume the increase in sophistication and accuracy, and the linguistics modification that comes into phishing and social engineering,” said MacKenzie Brown, vice president of security at managed detection and response provider Blackpoint Cyber.


But while this application of GenAI is no doubt “the most useful” for attackers currently, a number of emerging threats use the technology to intensify the pace and sophistication of attacks, experts told CRN.

As part of CRN’s Cybersecurity Week 2024, we’ve been speaking with researchers about the newer GenAI security threats that security teams and business leaders should know about. For most organizations, the bottom line is that the threat environment will continue to see growing impacts from AI-powered attacks even with the advancements that GenAI provides on the defense side, experts said.

Ultimately, for organizations with mature security programs, the arrival of GenAI is “just an acceleration and an improvement of an attack vector that they've known about for years,” said Randy Lariar, big data and analytics practice director at Denver-based Optiv, No. 25 on CRN’s Solution Provider 500 for 2024.

However, “I think there's probably greater issues where companies are late to the security game, or have not fully aligned security with the business mission,” Lariar said. “If it's to check a box or to satisfy an insurance requirement, there's probably some gaps in there.”

What follows are the details on five emerging GenAI security threats to know in 2024.

Accelerated Attacks

Security experts told CRN that the largest threat they see from GenAI is not so much a new tactic or technique — but rather an acceleration of the existing methods used by cybercriminals and nation-state hackers. GenAI allows threat actors “to do the same thing they've already been doing — but do it much faster,” said Chester Wisniewski, global field CTO at cybersecurity giant Sophos.

“What was previously a one-day window, might now be a four-hour window. What was a four-hour window might be a 10-minute window,” Wisniewski said. And the amount of work an attacker previously needed to attack 100,000 victims might now be enough to attack 10 million victims in the same amount of time, he said.

As one example, China-linked attackers operating so-called “pig butchering” scams have been able to use new AI capabilities to expand their scope, according to Sophos X-Ops research. Previously, “human beings had to text each victim and then figure out which people were responding and respond to them. It was a much slower, smaller-scale process,” Wisniewski said. Now, however, “we have evidence that ChatGPT is being used to automate the initial stages of those conversations,” he said.

Ultimately, “the scale and scope of those [attacks] through automation is exposing many more people to a much-higher quantity of those types of threats than we were seeing before AI,” Wisniewski said. “My back of the envelope math says, we went from one person being able to try to scam 1,000 people a day, to one person being able to scam 100,000 people a day by automating it.”

Easier Malware Evasion

AI-generated code is by and large derivative of existing malware, which generally makes it easy for security tools to detect, according to experts. That means the usefulness of GenAI tools for generating novel malware is limited, Sophos’ Wisniewski said.

At the same time, GenAI can be more useful for refining existing malware to bypass detection, according to Blackpoint Cyber’s Brown. “We see threat actors now using AI to help with modifying existing malware that doesn't have signatures attached necessarily,” she said. “They can evade [antivirus] through this, if they can modify the way that a signature would be seen. They're using it to leverage malware in a way to now recreate malware.”
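To see why small modifications can defeat signature-based detection, consider a minimal sketch of hash-based matching, the crudest form of an antivirus signature. Everything here is illustrative: the “known sample” is a placeholder byte string, and real engines layer far more robust pattern and behavioral detections on top of simple file hashes.

```python
import hashlib

# Naive signature matching: flag a file if its hash appears in a database of
# hashes from previously catalogued samples. (Placeholder bytes, not malware.)
known_sample = b"...bytes of a previously catalogued payload..."

def sha256_signature(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

signature_db = {sha256_signature(known_sample)}

def is_flagged(data: bytes) -> bool:
    return sha256_signature(data) in signature_db

print(is_flagged(known_sample))            # True: exact match against the database
print(is_flagged(known_sample + b"\x00"))  # False: one appended byte yields a new hash
```

Modern engines add heuristics and behavioral analysis precisely because of this brittleness, but the sketch shows the gap that automated, GenAI-assisted rewriting of a known sample is designed to slip through.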

More-Personalized Attacks

There’s a wide array of activities where threat actors can utilize GenAI for improving the personalization of targeted attacks, experts said. GenAI is “huge on the research and reconnaissance side,” Brown said. “The same way we use AI for efficiency, threat actors use AI for efficiency.”

In other words, attackers can cut out a significant portion of the research they would otherwise have to conduct manually across the internet, she said. “They're going to be able to do it in a much more effective manner.”

“Now they can tailor attacks based on the actual organization, the industry in and of itself,” Brown said. “They're using everything that they can gather so that they can decrease that amount of research and reconnaissance, so initial access goes faster. They know what external systems to hit. They know what targets to hit. They know exactly what is going to allow them to socially engineer their way in, or actually be able to get a foothold and create some scripting and automation so they can spread across faster through the environment once they've been in there.”

Improved Audio Deepfakes

As one indicator of how commonplace voice-clone attacks have become, nearly half of surveyed businesses say they’ve been targeted with audio deepfake attacks over the past two years, according to a recent report from identity verification vendor Regula. There’s no doubt that voice-clone technologies have advanced to a point where they are both easy to use and able to produce fake audio that is convincing, according to Wisniewski.

By contrast, “a year ago, [the cloning technology] wasn't good enough without a lot of effort,” Wisniewski said. Recently, however, he was able to create a convincing audio deepfake of himself in five minutes.

“Now that that is possible, that makes it much more accessible to the criminals,” Wisniewski said.

While the technology is not yet good enough to generate deepfake audio in real time, it will continue to improve, according to Kyle Wilhoit, technical director for threat research at Palo Alto Networks’ Unit 42 division. “I would say down the road, that's going to be more of a possibility,” Wilhoit said.

Easier Vulnerability Exploits

Wisniewski also believes that GenAI could play a role in enabling attackers to develop new vulnerability exploits. While existing tools can already automate the discovery of potentially exploitable vulnerabilities in software, humans still have to determine whether exploitation is actually possible, he noted.

Now, however, GenAI tools can expedite matters by helping to analyze variants of potential vulnerability exploits, Wisniewski said. “The AI might say, ‘Here's the promising one,’” he said.
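As a rough illustration of that division of labor, the sketch below runs a crude random fuzzer against a deliberately buggy parser: finding crashing inputs is the cheap, automatable part, while deciding which crashes are actually exploitable is the expensive triage step that Wisniewski suggests GenAI could accelerate. Both the parser and the fuzzer here are hypothetical toys, not real tooling.

```python
import random

# Deliberately buggy stand-in target: it trusts a declared length field,
# so it indexes past the end of payloads that are shorter than declared.
def parse_record(data: bytes) -> int:
    length = data[0]                  # first byte declares the payload length
    payload = data[1:1 + length]
    return payload[length - 1]        # IndexError when payload is too short

# Crude random fuzzer: automated discovery of crashing inputs is the easy part.
def fuzz(trials: int = 10_000) -> list:
    crashes = []
    for _ in range(trials):
        sample = bytes(random.randrange(256) for _ in range(random.randrange(1, 8)))
        try:
            parse_record(sample)
        except IndexError:
            crashes.append(sample)    # a candidate bug; exploitability still unknown
    return crashes

crashes = fuzz()
print(f"{len(crashes)} crashing inputs found; exploitability triage comes next")
```

Everything after that last print — sifting the crash pile for the handful of bugs that can actually be turned into working exploits — is the human-intensive analysis that, in Wisniewski’s view, GenAI-assisted variant ranking could speed up.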

Ultimately, “that would be the place I would be watching — is there a service economy that emerges for the high-end actors, where they can pay somebody $50,000 to give them 25 variations of an exploit?” Wisniewski said.