Which Side Wins With GenAI: Cybercrime Or Cyberdefense?
Security experts weigh in on the potential use cases for GenAI technology and whether those cases are benefiting security teams or giving an edge to malicious actors.
While threat actors are getting a boost from ChatGPT and other generative AI tools, many cybersecurity experts believe cyberdefense teams stand to gain more from GenAI over time, thanks to an abundance of promising uses for the technology that are expected to emerge.
A flood of GenAI-powered cybersecurity tools has been introduced, and many security teams have been experimenting with available GenAI capabilities for automating Security Operations Center (SOC) work. Security teams and MSSPs are already finding ways to uncover known security issues faster and more efficiently with the tools, according to experts.
On the other side, hackers are able to use OpenAI’s ChatGPT to craft more convincing phishing emails, including by improving grammar for non-native English speakers. All of this is just the beginning.
But in looking at ChatGPT and the wave of GenAI technologies that has followed it, which side benefits more? Cybercrime or cyberdefense?
CRN has posed this question to a variety of cybersecurity and threat experts in recent months. Many believe that in the short term, the arrival of GenAI seems to be a bigger win for the threat actors.
“Certainly the attackers have a big advantage at the moment,” said Dave DeWalt, a security industry luminary who’s now founder and CEO of San Francisco-based venture firm Night Dragon. “If you look at all the attack vectors on these generative AI platforms, I mean it’s just too easy.”
On the other hand, according to Robert Boyce, a managing director and global lead for cyber-resilience services at Dublin, Ireland-based Accenture, the industry hasn’t even started to think about “what the possibilities are” over the longer term for GenAI-powered cyberdefense.
The ultimate potential for GenAI in cybersecurity, he said, is to not just do a better job at finding known issues but to actually uncover the lurking, high-risk issues that no one knows about yet.
Even in the shorter term, GenAI is already beginning to yield major benefits for security professionals in a number of areas, according to Boyce. When it comes to generative AI and cyberdefense, “I think this year is the year to move from experimentation to adoption,” he said.
Bringing Automation To The SOC
Among the most promising opportunities in the short term is using GenAI for automating routine activities in the SOC, such as gathering threat information and automatically creating queries—or searches—within a security information and event management (SIEM) system to retrieve information that might be relevant.
One common use case will be leveraging GenAI tools in the event of a data breach to synthesize relevant information and expedite a response, according to Randy Lariar, big data and analytics practice director at Denver-based Optiv, No. 24 on CRN’s Solution Provider 500 for 2023.
“I might say, ‘Act as an experienced cybersecurity analyst—review reputable news sources, and [tell me] what are the indicators of compromise that were found associated with the data breach? What was the methodology of the attacker? What information can I pull out that might be useful for me as a defender?’” Lariar said.
“You can’t rely on the AI 100 percent to answer it for you. But you can work with it, and it can help you to review a couple dozen articles to find the [indicators of compromise] that matter,” he said. “And then you can turn around and say, ‘Now write that into a Splunk query’ so I can put that into my SIEM.”
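In practice, the workflow Lariar describes could look something like the sketch below, which assumes the OpenAI Python client; the model name, file name and prompt wording are illustrative, and any generated query still needs analyst review before it touches a production SIEM.

```python
# Minimal sketch of the workflow Lariar describes: ask an LLM to extract
# indicators of compromise (IOCs) from breach reporting, then turn them into
# a Splunk search. Assumes the OpenAI Python client ("pip install openai")
# and an OPENAI_API_KEY in the environment; model name and prompts are
# illustrative, and output still needs analyst review.
from openai import OpenAI

client = OpenAI()

# Text pasted from reputable news sources covering the breach.
articles = open("breach_reporting.txt").read()

extraction = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "system", "content": "Act as an experienced cybersecurity analyst."},
        {"role": "user", "content": (
            "Review the following reporting on the breach. List the indicators "
            "of compromise (IPs, domains, file hashes) and summarize the "
            "attacker's methodology:\n\n" + articles
        )},
    ],
)
iocs = extraction.choices[0].message.content

# Turn the extracted IOCs into a Splunk search the defender can adapt.
query = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {"role": "user", "content": (
            "Write a Splunk SPL search over index=proxy and index=endpoint that "
            "flags any events matching these indicators of compromise:\n\n" + iocs
        )},
    ],
)
print(query.choices[0].message.content)  # review before running in the SIEM
```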
According to DeWalt, there’s no question that GenAI will have a huge impact on the work of SOC analysts. GenAI technology will enable security teams to “collect every piece of data to help them understand their situational awareness faster,” he said.
Ultimately, “SOC automation looks highly, highly disruptive,” DeWalt said.
It’s also probable that GenAI will be able to unlock some of the automation capabilities of existing tools that have never been utilized to their fullest, such as security orchestration, automation and response (SOAR), Boyce said.
While SOAR was once touted as the automation answer for SOCs, there was still significant process engineering required to use the tool. However, “GenAI allows us to do that work much faster,” Boyce said.
“It will allow us to create those processes at a pace that humans were not willing to do it,” he said. “And so I do see that the automation within a Security Operations Center will absolutely accelerate with GenAI. I truly do believe that because it will be able to help orchestrate the technologies without needing that human intervention. The human intervention is what really slowed it down.”
As with any new technology, the pace of adoption of GenAI for cyberdefense will vary among organizations, Optiv’s Lariar noted.
“Some teams are adopting it quickly and will start to see some serious improvements and efficiencies,” he said. “Other organizations are going to be more deliberate and careful. And it could be years before you see the full impact.”
One aspect that the team at Optiv frequently discusses with customers is the potential for automating simple, repeatable tasks in a SOC that are usually handled by a Tier 1 analyst, Lariar said. And with the help of GenAI, “I really think that we’re approaching very soon a day where the best SOCs will have that about 80 percent or more automated,” he said.
“Then what you’re going to be doing in the SOC is supervising those automations, making sure that they’re running as you think they are,” Lariar said. SOC teams can then be freed up to focus on “working on higher-level things and thinking about bigger-picture threats or working with the business to help mitigate risks that a machine can’t handle,” he said.
Stress Reducer For Critical Employees
Increased usage of GenAI in security roles, such as in the SOC, should also have benefits for stress and mental health in the field, experts said.
The high stress level associated with working in many cybersecurity roles is well-documented: Twenty percent of cybersecurity professionals reported having plans to leave the field within two years, according to a survey by cybersecurity company Trellix. Meanwhile, an even higher percentage of cybersecurity leaders—25 percent—is planning to do the same by 2025, research firm Gartner has found.
The SOC can be a mentally taxing environment, making it crucial for organizations to “focus heavily on a SOC analyst’s mental health,” said Jordan Hildebrand, practice director for detection and response at St. Louis-based World Wide Technology, No. 9 on CRN’s Solution Provider 500 for 2023.
But along with boosting productivity and efficiency for SOC analysts using automation, there’s another angle to consider: the possibility that the tools could “give them a better life,” Hildebrand said.
After seeing a demonstration of CrowdStrike’s Charlotte AI technology last fall, for instance, Hildebrand said he sees the potential for the tool to remove some of the mental stress and strain associated with jobs in a SOC. The GenAI-powered assistant has the potential not just to present analysts with “more” data, but also with the “right” data, he said.
And given how difficult it can be for SOC analysts to find what they’re looking for—particularly in a high-pressure situation such as a security incident—this could make a huge difference for their mental well-being.
Ultimately, the promise of GenAI tools such as Charlotte is that SOC analysts “are going to be able to control their own destinies more,” Hildebrand said.
Tackling The Talent Shortage
The massive talent shortage in cybersecurity—a key factor contributing to burnout—could also be eased with GenAI, according to experts.
Estimates vary on how many unfilled jobs there are in cybersecurity, but what is universally agreed upon is that the cybersecurity talent pool needs to expand.
With GenAI capabilities, however, there’s a strong potential for reducing the technical barrier to entry that has been required for many cybersecurity roles, according to Boyce.
GenAI could “lessen the requirement that they need to be so technical,” he said. Among other things, the steep technical requirement for many roles in the field is “what frightens people away a lot of times,” Boyce said.
As just one example, to be able to create threat detection rules in Splunk, a person would need to learn Splunk’s Search Processing Language, SPL. With the help of GenAI, however, this significant undertaking may no longer be necessary for creating and applying detection rules in Splunk, according to Boyce.
“Now we can say to GenAI, ‘Create me a detection rule in Splunk for this [threat].’ And it will create the detection rule. And it will apply the detection rule,” he said. At Accenture, “we have tried this and it’s very, very accurate.”
For the record, Boyce said he’s not knocking Splunk and that this is just one of many potential ways GenAI could lower technical barriers to working in cybersecurity. Splunk itself also seems to be on the same page: In mid-2023—prior to Cisco Systems’ $28 billion acquisition deal for the company—Splunk released a GenAI-powered tool that helps users work with SPL through natural language.
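A hypothetical version of the “create and apply a detection rule” flow Boyce describes might look like the sketch below, which drafts an SPL rule with an LLM and then registers it as a Splunk saved search through the management REST API on port 8089; the model name, credentials, host and example threat are assumptions, and a generated rule should be reviewed by a human before it is enabled.

```python
# Hypothetical sketch of drafting and applying a Splunk detection rule with
# GenAI. The /services/saved/searches endpoint on the management port (8089)
# follows Splunk's documented REST API; everything else (model, credentials,
# host, example threat) is illustrative.
import requests
from openai import OpenAI

client = OpenAI()

threat = "repeated failed logins followed by a success from the same source IP"

draft = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": (
        "Create a Splunk SPL detection rule (search only, no commentary) "
        f"for this threat: {threat}. Use index=auth."
    )}],
)
spl = draft.choices[0].message.content.strip()

# After human review, register the rule as a saved search in Splunk.
requests.post(
    "https://splunk.example.com:8089/services/saved/searches",
    auth=("admin", "changeme"),  # illustrative credentials
    data={"name": "GenAI - suspicious auth pattern", "search": spl},
    verify=False,                # lab setting only
)
```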
Keeping Tabs On Vulnerabilities
Management of software vulnerabilities is another key use case for GenAI in security. And the idea of using large language models (LLMs) to find common software vulnerabilities was quickly seized upon by security vendors.
Columbia, Md.-based vulnerability management platform provider Tenable, for instance, noted in an April report that it had already built LLM-powered tools to identify vulnerabilities in applications more quickly.
GenAI can also help prioritize which patches to deploy first, ultimately speeding up the remediation of vulnerabilities, experts said.
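As an illustration, a patch-prioritization helper along these lines might feed scanner findings and basic business context to an LLM and ask for a ranked remediation order; the export format, model name and prompt below are assumptions, and the output is a starting point for the vulnerability team rather than a final decision.

```python
# Illustrative sketch of LLM-assisted patch prioritization: rank open CVE
# findings using CVSS, exposure, and business criticality. The scanner export
# format, model name, and prompt are assumptions.
import json
from openai import OpenAI

client = OpenAI()

# e.g. [{"cve": "CVE-2024-1234", "asset": "payroll-db", "cvss": 9.8,
#        "internet_facing": false}, ...]
findings = json.load(open("scanner_export.json"))

resp = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": (
        "Rank these vulnerabilities by remediation priority, weighing CVSS, "
        "exposure, and business criticality, and briefly explain each ranking:\n"
        + json.dumps(findings, indent=2)
    )}],
)
print(resp.choices[0].message.content)  # input to the vulnerability team's review
```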
But discovering previously unknown, zero-day vulnerabilities is another story. There’s a reason why zero days—responsible for enabling so many major breaches and ransomware attacks—can fetch as much as $10 million.
Still, the idea that GenAI technology will eventually be capable of discovering zero-day vulnerabilities in software does not appear to be far-fetched. With the help of LLMs, it’s feasible that zero days could someday be eliminated from software before the code is ever shipped, according to Michael Sikorski, CTO and vice president of engineering at Santa Clara, Calif.-based Palo Alto Networks’ Unit 42 division.
In this potential scenario, “the developers can then leverage that [capability] ahead of time before they ship their code—find those [zero days], plug them up, before they ship,” Sikorski said.
Pre-empting the release of zero days? That would be monumental for the cyberdefense side. And so, based on this potential use of GenAI, Sikorski said “there’s an argument that I’ve started to believe in—that actually this technology will benefit the defense more than it will the offense.”
It’s also likely that the most significant advantages for cyberdefense from GenAI are still yet to be discovered. When used in concert with large and varied data sets, GenAI has the potential to help with “predicting what we don’t know yet,” Boyce said.
The result is that defenders should ultimately be enabled to “find things that we haven’t even thought about—these ‘unknown unknowns’ that we’ve been talking about for years but that we’ve never been able to figure out how to [find],” he said.
“I don’t think we’re asking the right questions of our security data now as a community. We don’t even know what to be asking,” Boyce said. “We’re just asking stuff that we already know. [This is] not going to get us to a protection strategy that’s adding a really high level of confidence for your cyber resilience.”
WWT’s Hildebrand said he also expects GenAI will prove to be a huge help in figuring out what the starting hypothesis is about a threat, thanks to the technology’s ability to sift through massive amounts of data and surface the context that’s important. In security operations, “one of the hardest things to do is hypothesize,” he said. “That’s what takes time.”
Ultimately, according to Boyce, GenAI could help security teams achieve their full potential. The capabilities should be able to enable “more of the true analyst model—of being able to ask the right questions of the information,” he said, “and have the machines do the machine work.”