Here’s What 15 Top Cybersecurity Execs Are Saying About AI: RSAC 2025

CRN spoke with C-level executives at leading players in cybersecurity—including SentinelOne, Palo Alto Networks and CrowdStrike—about their biggest AI-related discussions during RSAC 2025. Here’s what they had to say.

The usefulness of GenAI for cybersecurity has grown massively over just the past year, even as demand for enabling AI usage by employees—rather than simply blocking it—has also surged.

At the same time, cybersecurity vendors are already well along in exploring how to protect the next big leap in AI technologies with the emergence of agentic capabilities, top cybersecurity industry executives told CRN at RSAC 2025 this week.

[Related: Top Execs At RSAC 2025: Embracing AI Is Now ‘Not Optional’]

The question of the year, according to CrowdStrike Chief Business Officer Daniel Bernard, is, “Can I trust an agent to do something for me?”

The reality is that in just two years, AI has moved from being largely a buzzword in cybersecurity to the point where many organizations are considering, “Can I trust an AI agent to operate a security program or part of a security program for me?” Bernard told CRN. “That’s the evolution—the crawl, the walk and the run—that I see happening in security as [it] relates to AI.”

The CEOs of companies including SentinelOne, Optiv, SailPoint, Proofpoint, Akamai, Trellix and NightDragon, as well as the top technology and product leaders at companies including Palo Alto Networks, Wiz and Trend Micro—plus top executives from a number of other companies—also spoke with CRN about the future of AI and security this week.

What follows are comments from CRN’s interviews with 15 top cybersecurity executives, focused on their biggest AI-related discussions during RSAC 2025.

Tomer Weingarten, Co-Founder, CEO, SentinelOne

I almost think that in six months, in nine months, you’re going to see, again, a shift in how people onboard AI to the enterprise. If you’d asked that question a year ago, ‘Hey, how are you using AI in your enterprise?’ Then people would say, ‘Oh, the chatbot, the LLM’—[but] now we’re talking about agentic. In a year it’s going to be something else [in terms of] these systems that are going to be onboarded. So the models of how you secure them you have to believe are going to change. [That] is why I think if people are kind of jumping [on] a bandwagon today where they say, ‘This is what I’m going to do. I’m going to go with this technology or that technology’—I think that might lock them into a place where it might not be agile and flexible enough to support what comes next.

Kevin Lynch, CEO, Optiv

Last year [at RSAC] there was certainly a lot of buzz about [AI], but it was companies sandboxing, piloting—a lot of evaluation. This year, it’s about, ‘We’re in mainstream production, and we have a lot to protect and a lot to think about doing here.’ … You can see it show up in traditional security categories like credentialing and vaulting and identity governance, which is already an area where the average enterprise client is probably behind the curve a little bit. Now, every time you’re putting an AI agent in place, you have a credential creation and management issue. … I think this notion of credential management—this is the battlefront right here. This is where we’re going to fight the war.

Lee Klarich, Chief Product Officer, Palo Alto Networks

How are we using AI to protect our customers? We have data [showing] how many more new attacks per day we are seeing created year over year. [It’s a] 300 percent increase. So we can tell that AI is being used by attackers in order to build and launch more new attacks every day. At the same time, we have data that shows that the time from the initial attack to breach is getting shorter and shorter. So you have more attacks that are happening faster. It’s very clear that the answer to this [is that] we have to leverage AI much, much more than we traditionally have as an industry.

Daniel Bernard, Chief Business Officer, CrowdStrike

‘Can I trust an agent to do something for me?’—I think that’s the question of this year. ‘Can I trust an AI agent to operate a security program or part of a security program for me?’ Two years ago, it was like ‘AI, AI, buzz, buzz.’ Last year was, ‘I’m starting to see different things where I can glean information. It’s better than a Google search. It’ll talk back to me. I can tell it to do things.’ And this year, I think it’s all, ‘Can this thing run some process from start to finish without me having to be involved?’ That’s the evolution—the crawl, the walk and the run—that I see happening in security as [it] relates to AI.

Ami Luttwak, Co-Founder, CTO, Wiz

I think there is an understanding [and] expectation in every security team, including Fortune 500, that they will have SecOps agents, AI agents, running with their security tools. … With GenAI, it was all about taking the Wiz interface and making it more natural language—‘Show me all of the attacks. Show me all the vulnerabilities.’ Now, in the agentic discussion, it’s a different discussion. It says, ‘I have a security team. I want to start embedding an agent that will do things for me. How can this agent automatically do things in Wiz?’ That’s a very different discussion. So I think that’s a much bigger revolution. Implementing natural language interfaces is nice, but it’s not revolutionary. Right now agents feel like [they are] a much bigger revolution because [they] can impact your team.

Dave DeWalt, Founder, CEO, NightDragon

You’re seeing a seminal moment, I believe, in cybersecurity that has only come along in my career two or three times [over] 25 years. And the seminal moment is a new technology set that creates a massive amount of risk and a massive amount of opportunity. It reminds me of 1999 to some degree, or any year right before dotcom blew up. But what did you see in the aftermath of the dotcom? You saw massive new threats. You saw massive new cyber opportunities. You saw cyber companies become real. And I ran one of them, McAfee. And you saw cloud computing change the landscape. And what came out of that? A $32 billion acquisition of Wiz by Google. I think the next few years you’re going to see something like that, maybe bigger this time because AI has got the legs to be an integrated part of the fabric of everything we do. That’s exciting. That’s where I felt all the optimism this week.

Mark McClain, Founder, CEO, SailPoint

I think [agents] are going to help a lot of people do things faster and better. [I don’t believe] a third of the American workforce is going to get wiped out by AI. I think that’s crazy. Remember the ‘four-hour work week’—we were going to get to a four-hour work week because we were going to get so productive with technology? We keep finding more stuff to do. So I think that will be the case here. AI will make us more productive, make people more effective. It will absolutely just do some stuff that people used to do—just like factory automation took some jobs away. You don’t have to hire a guy to turn lug nuts anymore. A robot does that—but you had to hire people to run the machine that does the lug nuts. [In AI] there’s going to be people driving a lot of this intelligence.

Sumit Dhawan, CEO, Proofpoint

What is the potential of this? Will AI effectively become this extended layer of humans? You can call them ‘virtual humans.’ And that spurs some interesting questions [around] how human-centric security extends to virtual humans. As humans, we all get socially engineered in attacks. Well, AI gets prompt engineering attacks. That’s a form of social engineering for AI. Humans leak information and we lose our credentials, or people steal credentials from us. Well, in the world of AI, they can also lose information, and AI technologies also can lose tokens. So there are some similarities that extend from humans to virtual humans. And as AI comes into production, and they become sort of these copilots of people who are already doing certain functions, there are some interesting cyber questions that come in [such as] how do you ensure what you did for humans get[s] extended to these virtual humans that are coming into the workplace?

Tom Leighton, Co-Founder, CEO, Akamai Technologies

With customers, I would say it’s about first identifying where they have AI apps being used and exposed [that] they didn’t know about. You could imagine a big enterprise, same problem they have with APIs—everybody is doing something with AI, and it’s not really coordinated or controlled. And so the first issue is visibility—what have you got out there? And then the next issue is, you’ve got to secure it—and it’s not the normal firewall or API security rules. It’s different kinds of exploits [that] are taking place.

Jill Popelka, CEO, Darktrace

In all the ways that AI is helping our business, AI will be helping bad actors as well. How do we stay in front of novel threats that are coming into our environment? Because that’s truly what many of our customers are starting to think more about. It’s not just the standard threats that we’ve always known are coming. Because AI is starting to super-power the bad actors as well, how do we make sure that we continue to remain one step in front of them? And it’s very much aligned to how we protect cloud environments. So the big topics are stopping novel [threats], especially through email, and then protecting cloud environments.

Kevin Simzer, COO, Trend Micro

We do these road-show experiences, and I was at one of them last summer. I asked the 50 CISOs in the room, ‘What are you doing around AI?’ All 50 of them were just blocking [AI]. Ten months later, we’re having a lot more discussions about enterprises embracing AI. They figured out that shadow AI is everywhere. Blocking doesn’t work, and they really need to figure out how to enable this. So they’re actually using it now, and they’re looking for some controls [to put] in place. So I feel like just in 10 months the conversations [shifted to], ‘How do I do this in a safe and secure way?’

Vishal Rao, CEO, Skyhigh Security and Trellix

Cyber no longer is about defending against people. It’s about defending against machines. It’s about defending against the scale at which AI operates. We also see AI as a massive opportunity to drive productivity to our customers. So it’s that two-pronged approach, and that is consistent with every conversation you’ve been having at the conference. One, ‘Help us protect at the pace and scale of AI.’ And the second is, ‘We know we’re moving to a world where AI is going to be used, is going to be operationalized—help us do that in a safe way.’

Rick Fitz, CEO, Contrast Security

What customers are wondering about in our world—and recognize that our world has APIs, custom code and libraries, and there’s thousands of vulnerabilities in there—so they’re asking, ‘How can we possibly fix those thousands of vulnerabilities? And can we leverage AI to do that?’ You would think that that’s possible because they’ve been exposed now for a year to things like Copilot—their developers are using these tools to assist in their coding, and they’ve seen some productivity gains as a result of that. The issue is that, if you know how to prompt, that’s how you get [the AI] to do things for you. So the question is, how do you create a prompt, as a vendor, to serve up to the developer something to go do? That’s an interesting conversation, and we’re having a lot of that conversation.

Idan Plotnik, Co-Founder, CEO, Apiiro

The CTO is saying, ‘I don’t know what I have. I’ve lost control. I don’t have visibility. I don’t have any inventory of my AI in code.’ The CISO problem—and I hear it again and again—is, ‘How can I govern my developers without impacting their development philosophy?’ This is the huge pain for the CISO. Every security program starts with visibility and putting controls in place before they even think about the advanced attacks. So I think it’s [the same in] the AI era. [Ultimately] it’s not only an AI problem—it’s a ‘software development in the AI era’ problem.

Khiro Mishra, CEO, Shieldient

One of the key concepts that keeps getting discussed in this whole agent[ic] AI ecosystem is about role-based access control. What kind of roles are you assigning to these agents—whether it’s just a detection action, whether it’s a response action, whether it’s an analysis action—what role are you providing? And based on that, what kind of access are you providing—either to access the data or to act on a detection? That’s a big area.
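The role-to-access mapping Mishra describes can be illustrated with a minimal sketch in Python. This is not from any vendor's product; all role names, action names and the `Agent` class are hypothetical, chosen only to mirror the detection, analysis and response roles mentioned above.

```python
# Illustrative sketch (hypothetical names throughout): role-based access
# control for AI agents, where each role grants a fixed set of actions.
from dataclasses import dataclass, field

# Assumed role -> permitted-action mapping, mirroring the detection,
# analysis and response roles described in the quote.
ROLE_PERMISSIONS = {
    "detector": {"read_telemetry", "raise_alert"},
    "analyst": {"read_telemetry", "read_alerts", "annotate_alert"},
    "responder": {"read_alerts", "isolate_host", "revoke_credential"},
}

@dataclass
class Agent:
    name: str
    roles: set = field(default_factory=set)

    def can(self, action: str) -> bool:
        # An action is allowed if any assigned role grants it.
        return any(action in ROLE_PERMISSIONS.get(r, set()) for r in self.roles)

# A triage agent holds detection and analysis roles but not response,
# so it can raise alerts yet cannot act on hosts.
triage_bot = Agent("triage-bot", roles={"detector", "analyst"})
print(triage_bot.can("raise_alert"))   # True
print(triage_bot.can("isolate_host"))  # False
```

The point of the pattern is that the scope of an agent's access follows from the role it was assigned, rather than being granted action by action.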
