Human Security CEO: Agentic AI Will Require ‘Trusted Agents’
CEO Stu Solomon tells CRN that the increased autonomy afforded to AI agents will mean a greater need for verifying that the agentic applications are legitimate and secure.
The increased autonomy afforded to AI agents will mean a greater need for verifying that the agentic applications are legitimate and secure, according to Human Security CEO Stu Solomon.
In an interview with CRN, Solomon, a cybersecurity industry veteran and former Optiv executive, said that his company is working to apply its technology for detecting and managing bot-based activity to the emerging space of AI agents. It’s an issue that must be addressed for agentic AI to truly take off, he said.
[Related: Okta CEO On What Everyone’s Missing About AI And Software Development]
Solomon was previously the president at threat intelligence giant Recorded Future before joining Human Security in January 2024. Before that, he spent three years at cybersecurity powerhouse Optiv, including as its chief technology and strategy officer, and earlier roles in his career included executive positions at iSIGHT Partners and Bank of America.
What follows is CRN’s interview with Solomon.
How does Human Security differentiate from Cloudflare or Akamai?
[For] a Cloudflare and an Akamai, their core mission is to condition traffic and optimize availability and capability. There are lots of things that could affect that. Bot mitigation is one component that could affect the optimal performance they strive for. So they have a very different focus; they're coming at it more from that angle. We are coming at it purely through the lens of risk and security reduction. It's much more of a security-based lens into how to mitigate potential fraud and security scenarios perpetrated at scale by bot activity, versus how to optimize traffic and performance that may be impacted by something like a bot. It's just a completely different mindset and approach. And while the problem is generally similar, the impetus, the activities and the way that you would detect at scale are just completely different.
What are some key focus areas for Human Security over the next 12 months?
We see and touch about 20 trillion digital interactions a week. [We have] a relatively ubiquitous position that people just don’t contemplate or understand—that’s the scale that we’re touching today—a little over 20 trillion digital interactions a week and growing every single day. We’re doing that and making those determinations in tenths of a millisecond in most cases. It’s a massive, massive data set generated at the beginning and end of any digital transaction. The opportunity is to start to build a more unified data layer that allows us to look at the various observables and behavioral patterns—to do even higher and more interesting second- and third-order analytics against those. [And] then also to be able to put more reporting and/or mitigating stopgaps in place at every step in that journey. That’s something we’re constantly developing. You can’t tell the story of Human if you don’t tell the data story, and I want to get that out into the market in a more thoughtful way. This is, frankly, the reason I joined here from Recorded Future. [It’s a] very analogous data journey of turning massive amounts of seemingly disparate structured and unstructured data into normalized data that you can find a second- and third-order analytic for—a very similar opportunity here at Human.

But of course, as with anything else, data is great, action is better. I don’t want to just be interesting. I want to be actionable—also a lesson learned at Recorded Future. And so a lot of the 2025 focus is making sure that we build actionable frameworks that move closer and faster to automating the logical next step in the decision loop—so that you can make determinations around putting profiles around specific attack patterns that the human eye wouldn’t otherwise be able to pick up on—and then adding context as to, what am I looking at? Why is this important? Why do I care? Should I block? And doing that as quickly as possible into this broader notion of attack profiling that we’re building. So it’s not just building the second- and third-order analytics, it’s then applying them to be able to proactively start to contextualize an action and build an action framework around the data we’re seeing. So that’s area No. 1.

Area No. 2 is really simple. You’re only relevant in the space if you’re good at detecting. So making sure that detection quality and detection efficacy continues to be at the fore of everything that we do, particularly in an age of a dynamic adversary that’s accelerated through all of the generative AI components—the barrier to entry for bad guys to be able to create outsized impacts is lower than it’s ever been. So just trying to stay ahead of the pace of the adversary, from an overall detection quality perspective, is a second area.
How do you see agentic AI impacting what you do at Human?
I feel we’re uniquely suited to [enable] the rise of generative AI scenarios. Agentic AI becomes a use case that’s right in our wheelhouse. The idea of deciding whether to trust a particular bot-based activity that’s carried out proactively, that is agentic in the first place, is exactly the thing that we can help to protect, and the type of thing that will help to accelerate our growth as a company. So that’s going to be a pretty big theme as we develop a new product this year that will specifically focus on that.
We have early signals that we’ve been able to collect inside of the telemetry that allow us to see agentic AI’s use of scraping to feed the LLMs that keep it going, and we’re able to detect that already, off the shelf, with what we can do today. It’s an application of what you do to a use case that’s emerging, and so you’re going to see that this year as well.
Can you provide any examples of how you might be able to enable agentic usage?
Imagine a scenario where—[whereas] in the old days [you had a] VeriSign-certified website—it’s now a Human-certified agent that’s going out and acting in an authorized fashion, with the right infrastructure [that’s] not being operated by malicious actors. So it becomes a trusted agent. And if you can trust your agent, you can trust the outcome of the agentic activity that it’s just accomplished on your behalf. So if I’m a retailer and I want somebody to have a personal shopper, I want people who go and use [the retailer’s] personal shopper to know it’s a trusted agent doing the work. That’s where it becomes a great tool for commerce.
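To make the "trusted agent" idea more concrete, the sketch below shows one way a site could verify a signed attestation presented by an agent before treating its actions as authorized. This is a minimal, hypothetical illustration: the token format, the shared-secret scheme, and the issue_agent_attestation and verify_agent_attestation helpers are assumptions for demonstration, not Human Security's product or API.

```python
# Hypothetical sketch of agent attestation: an issuer signs a short claims
# payload, and a site verifies the signature and freshness before trusting
# the agent. Names and formats here are illustrative assumptions only.
import base64
import binascii
import hashlib
import hmac
import json
import time

SHARED_SECRET = b"example-attestation-key"  # would be provisioned out of band


def issue_agent_attestation(agent_id: str, operator: str) -> str:
    """Create a signed attestation token for an agent (illustrative format)."""
    payload = json.dumps(
        {"agent_id": agent_id, "operator": operator, "issued_at": time.time()}
    ).encode()
    signature = hmac.new(SHARED_SECRET, payload, hashlib.sha256).digest()
    return (
        base64.urlsafe_b64encode(payload).decode()
        + "."
        + base64.urlsafe_b64encode(signature).decode()
    )


def verify_agent_attestation(token: str, max_age_seconds: int = 300):
    """Return the agent's claims if the attestation is authentic and fresh, else None."""
    try:
        payload_b64, sig_b64 = token.split(".")
        payload = base64.urlsafe_b64decode(payload_b64)
        signature = base64.urlsafe_b64decode(sig_b64)
    except (ValueError, binascii.Error):
        return None  # malformed token
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(signature, expected):
        return None  # signature mismatch: not a certified agent
    claims = json.loads(payload)
    if time.time() - claims.get("issued_at", 0) > max_age_seconds:
        return None  # stale attestation
    return claims


if __name__ == "__main__":
    token = issue_agent_attestation("personal-shopper-01", "example-retailer.com")
    print(verify_agent_attestation(token))        # claims dict: agent is trusted
    print(verify_agent_attestation(token + "x"))  # None: tampered token is rejected
```

In practice a scheme like this would more likely use public-key signatures and short-lived credentials tied to the agent's infrastructure, but the basic flow is the same: verify who the agent is and who operates it before trusting what it does on a user's behalf.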
