GenAI Risks To Software Security On The Rise: Experts

Left unchecked, ‘generative AI in the software development process is going to produce worse outcomes from a software security perspective,’ says software security pioneer and Veracode Co-Founder Chris Wysopal.

While generative AI continues to deliver massive boosts to workplace productivity as adoption climbs, the security trade-offs are becoming increasingly evident as well, according to software security pioneer Chris Wysopal.

“I think what we're going to find is that in a lot of places where generative AI is used, there's these negative outcomes that aren't really thought about until much later,” said Wysopal, an early member of the high-profile L0pht hacker collective who went on to co-found application security vendor Veracode, where he is currently chief security evangelist.


For instance, while one of the biggest uses of GenAI tools is enabling developers to generate software code, there are undeniable downsides for security in that process, Wysopal told CRN.

The reality is that AI-generated code is “not any more secure than human-written code,” Wysopal said. “Multiple studies have shown that it's about as secure as human-written code.”

The problem? The tools are enabling a massive increase in the volume of code being written, which means far more code that needs to be analyzed for vulnerabilities and remediated, he said.

“Even if you've automated your testing now, you have more things to fix,” Wysopal said.

An additional issue is that developers tend to over-trust the code generated by AI tools, making them more likely to skip the step of assessing it for flaws, he said.

“So you get these two factors — developers are using these tools and creating more vulnerabilities than they had before, per developer, and they're over-trusting it,” Wysopal said. “Without a counter to this, generative AI in the software development process is going to produce worse outcomes from a software security perspective.”

Security experts have told CRN that GenAI poses numerous other potential threats to software security, beyond the well-known risks of sensitive data exposure and the use of tools such as ChatGPT to write more-convincing phishing emails and craft other social engineering lures.

For example, attackers are undoubtedly attempting to use GenAI capabilities to find previously unknown zero-day vulnerabilities, cybersecurity luminary Kevin Mandia said in a recent interview with CRN.

“The number of zero days every year has gone up tremendously,” Mandia said, indicating that AI advancements are possibly, but not conclusively, a factor there.

“Zero-day discovery may be aided by some AI engines. We just don’t know,” he said. “But someone’s finding more.”

At the same time, the benefits for security are numerous, and Mandia is highly optimistic about the possibilities for using GenAI to aid cyber defense teams.

“I think GenAI is going to help the defender more,” said Mandia, who stepped down as CEO of Mandiant in May and is now serving on boards including at MDR vendor Expel.

Notable benefits include enabling greater security operations capabilities as well as expediting the training of security professionals, he said.

Broader Risks

And yet, when it comes to GenAI usage by hackers, organizations have every reason to believe that it will continue to amplify the already relentless pace of cyberattacks going forward, said Optiv’s James Turgal.

“It’s allowing threat actors to be more specific in their targeting, and it’s allowing them to do this type of activity at scale,” said Turgal, vice president for cyber risk, strategy and board relations at Denver-based Optiv, No. 25 on CRN’s Solution Provider 500 for 2024.

Ultimately, the arrival of GenAI-related threats “puts more pressure on our clients to understand, ‘What does your tech stack look like? Is it optimized?’” he said. “It’s upping the ante.”