Accenture Expert: Companies To Face New Phase Of Deepfake Threat In 2025
The rapid improvement in deepfake quality and decline in production cost are likely to result in more attacks impersonating mid-level personnel, according to Accenture Security’s Rob Boyce.
Organizations are likely to face increased threats from deepfakes of mid-level personnel in 2025 as attackers evolve beyond AI-generated impersonations of the corporate CEO, Accenture’s Robert Boyce told CRN.
The rapid improvement in deepfake quality and decline in production costs — combined with the fact that many companies are now on guard for CEO impersonation tactics — points toward a new, expanded phase of the threat coming around the corner, said Boyce (pictured), senior managing director and global lead for cyber-resilience services at Accenture, No. 1 on the CRN 2024 Solution Provider 500.
[Related: Audio Deepfake Attacks: Widespread And ‘Only Going To Get Worse’]
In response to the rise of deepfake CEO attacks in recent years, organizations moved quickly to raise awareness among their employees about AI-generated impersonations, Boyce noted.
Now, however, security teams are starting to see a shift to AI impersonations involving less-obvious personnel and tactics, such as deepfakes of IT administrators asking an employee to change their password, he said.
As a result, “I think there's going to be another round of awareness required from organizations to just educate more broadly on deepfakes at all levels — not just about senior executives being targeted,” Boyce said.
Several other factors also point to an intensified deepfake threat starting in 2025.
While voice clone attacks have represented the majority of attacks so far, video deepfakes could become more commonplace thanks to falling costs and reduced requirements for producing the videos, Boyce said.
For instance, while producing a convincing video deepfake previously required at least several minutes of high-quality video, there are signs now that as little as 16 seconds may be sufficient in some cases, he said.
Meanwhile, advances in the capabilities needed for dynamic deepfake attacks — in which attackers can respond in near-real time rather than relying on pre-produced clips — are also a concern, according to Boyce.
When it comes to deepfake attacks, “we are starting to see a shift to both dynamic video and dynamic vocal,” he said. “And I think dynamic vocal will have the bigger impact in 2025.”
Still, despite all these developments, the world hasn’t yet seen the “explosion” of deepfakes that many have feared — and this is unlikely to happen next year, Boyce noted.
“The reality is, it’s still much cheaper for threat actors to buy credentials than it is to go through the trouble of procuring all the equipment and software to make a deepfake,” he said. “There are cheaper, more-effective, faster ways for threat actors to be able to gain access or achieve their objectives.”