F5 CEO On Why The Growth Of AI Apps Will ‘Accelerate’ API, Edge Security

In an interview, F5 CEO François Locoh-Donou tells CRN that ‘securing these APIs, to secure AI, is absolutely key.’

The surge in AI-powered applications driven by the arrival of generative AI is poised to create massive new opportunities for solution and service providers around securing APIs and edge deployments, F5 CEO François Locoh-Donou told CRN.

Locoh-Donou said in a recent interview that APIs are increasingly important for enabling communications and other functionality in GenAI apps — meaning that API security, an area F5 specializes in, will get a boost as well.

“AI is going to accelerate that,” he said. “For all of these AI applications, they communicate with the backend of these AI applications — meaning the AI model or the AI factories or the enterprise data stores — and all of those communications are via API. So securing these APIs, to secure AI, is absolutely key.”

[Related: The 20 Coolest Web, Email and Application Security Companies Of 2024]

Likewise, deployments of AI apps and large language models (LLMs) in edge environments are expected to see a major increase as more organizations seek to run AI technologies close to where they’re needed, improving performance and reducing latency, according to Locoh-Donou.

That, too, means major new opportunities for both F5 and its channel partners to provide security services, he told CRN. “Wherever we're running these things, we will need to secure them,” Locoh-Donou said.

What follows is a condensed and edited portion of CRN’s interview with Locoh-Donou.

What are the biggest things you want partners to know right now about F5?

Firstly, let me talk about what has not changed and is not changing. Our business model at F5 is very channel- and partner-centric. More than 90 percent of our business is done through partners. And even though there's a lot of evolution in the portfolio of F5 and the technology assets that we have, that part of our business model isn't changing. In fact, it is absolutely at the heart of how we win in the marketplace.

Of course, the world around us has changed quite a lot in the last few years. ADCs (application delivery controllers) were the market that F5 pioneered and came to really dominate. By the time I joined F5, the cloud had gone mainstream, and there was a belief among CIOs that they would move all of their applications to a single cloud — and it would be simple, fast and very reliable. In the last six or seven years, things have gone very differently, as people now live in a hybrid and multi-cloud environment. All of our customers have multiple infrastructure environments. Because of that, the attack surface that attackers can take advantage of is growing exponentially. People have more containers, more APIs, more applications that are distributed. And that, of course, creates a ton of complexity. Our customers have ended up using appliances when they're on-prem, cloud ADCs when they're in the cloud, and different security vendors in different environments.

We have made it a focus, in terms of the investments the company has made over the last three years, to really fight that complexity — and build a platform, a set of capabilities, that can truly simplify our customers’ environments. Specifically, we are focused on enabling our customers to secure, deliver and optimize every application and API anywhere, in any infrastructure environment — whether it's in the cloud or private cloud or on-prem or at the edge. Or even, increasingly, at the far edge, with some AI applications. The goal is for our customers to be able to use the same security engine, the same delivery engine, a single policy across all these environments — essentially a set of application security and delivery services that are abstracted from the underlying infrastructure and can be this simplification layer for our customers.

What are some of the key moves that you’d say F5 has made to build out this platform?

To do that, we made some acquisitions. We acquired NGINX in 2019. We acquired Shape [Security] to beef up our security capabilities. And we acquired Volterra, which was the foundation of our SaaS platform. And of course, to be able to do what I've just said, we needed to have a form factor that was SaaS. So now we can deliver all of these capabilities in hardware, in software or in SaaS. And the SaaS business has been growing quite rapidly. This is another area where I am pretty happy with our channel partners, because a substantial portion of the opportunities that we're closing in SaaS is coming from our partners. For a company that has its history in hardware appliances — and that has a lot of partners that grew up with us in hardware appliances — it wasn't necessarily a given that our partner community would make the transition with us and would also be with us in selling software, selling SaaS and nurturing this relationship with the end customers.

Are most of your partners as all-in on SaaS, at this point, as you are?

Of course, it varies. We have a couple of thousand partners around the world, and not all of them pick things up at the same pace or make the investment in skillsets at the same pace. But one of the metrics that we look at is what we call PIOs — partner-initiated opportunities. And the percentage of our SaaS business that comes from PIOs is at least as high as, if not higher than, the percentage of the traditional business that comes from PIOs. So that tells you that the multiplier effect that our partners are having on landing new deals in SaaS is at least as good as it has been on the traditional business, which is very encouraging.

What are some emerging areas you’re looking to bring more of your partners into?

I would touch on one key aspect of it, which is API security. For a long time, API security was kind of this niche thing that was struggling to get real attention. Now we have brought to market a whole solution, which is organically built, but we also made some acquisitions. So we're now able to go to customers and say, “We will detect and catalog all your APIs. We will look at all the vulnerabilities and catalog them. And we will mitigate those vulnerabilities with our WAF [web application firewall] or bot protection or API security protection.” Customers love that the problem is going away. And that is combined with customers saying, more and more, “API security is a big issue for me.” And it's because the number of APIs is exploding.
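To make that detect, catalog and mitigate flow concrete, here is a minimal sketch in Python. The traffic records and the flagging rule are hypothetical illustrations, not F5's product behavior: it builds an inventory of observed API endpoints and flags any that accept unauthenticated calls as candidates for a WAF rule.

```python
from collections import defaultdict

# Hypothetical traffic records; a real discovery tool would tap gateway
# or proxy logs rather than an in-memory list.
traffic = [
    {"method": "GET",  "path": "/api/v1/orders",  "auth": "Bearer ..."},
    {"method": "POST", "path": "/api/v1/orders",  "auth": None},
    {"method": "GET",  "path": "/api/v2/profile", "auth": "Bearer ..."},
]

def catalog_apis(records):
    """Build an inventory of observed API endpoints, counting how many
    requests to each arrived without any authentication header."""
    inventory = defaultdict(lambda: {"calls": 0, "unauthenticated": 0})
    for r in records:
        key = (r["method"], r["path"])
        inventory[key]["calls"] += 1
        if r["auth"] is None:
            inventory[key]["unauthenticated"] += 1
    return inventory

def flag_vulnerable(inventory):
    """Treat any endpoint seen accepting unauthenticated calls as a
    candidate for a block-or-require-auth rule at the WAF."""
    return [endpoint for endpoint, stats in inventory.items()
            if stats["unauthenticated"] > 0]

if __name__ == "__main__":
    inv = catalog_apis(traffic)
    for method, path in flag_vulnerable(inv):
        print(f"mitigation candidate: {method} {path} -> require auth at WAF")
```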

And frankly, AI is going to accelerate that. For all of these AI applications, they communicate with the backend of these AI applications — meaning the AI model or the AI factories or the enterprise data stores — and all of those communications are via API. So securing these APIs, to secure AI, is absolutely key. And that's why we're starting to see more and more demand.
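A toy sketch of that "secure the APIs to secure AI" idea, using an invented token store and backend URL rather than any real F5 or model-vendor API: an API layer that authenticates the calling service before anything is forwarded to the model backend.

```python
import hmac

# Hypothetical shared-secret token store; a production gateway would
# validate signed tokens (e.g., JWTs) against an identity provider.
VALID_TOKENS = {"svc-chat-frontend": "s3cr3t-token"}

# Assumed, illustrative backend address for the AI model.
MODEL_BACKEND = "https://models.internal.example/v1/generate"

def authorize(service: str, presented_token: str) -> bool:
    """Constant-time token check before any request is proxied
    to the AI model backend."""
    expected = VALID_TOKENS.get(service)
    if expected is None:
        return False
    return hmac.compare_digest(expected, presented_token)

def proxy_to_model(service: str, token: str, prompt: str) -> str:
    if not authorize(service, token):
        raise PermissionError(f"{service}: rejected at the API layer")
    # A real gateway would now forward `prompt` to MODEL_BACKEND,
    # ideally over mTLS; stubbed out to keep the sketch self-contained.
    return f"[would POST prompt to {MODEL_BACKEND}]"

print(proxy_to_model("svc-chat-frontend", "s3cr3t-token", "hello"))
```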

What other opportunities is F5 seeing in terms of AI at the moment?

We're starting to see large enterprises build these AI factories — where they're putting a number of GPU clusters into their data centers to be able either to build sovereign AI models or to deal with very large volumes of data that they want to process in their models. And that's creating a new opportunity for F5. Or rather, it’s creating a new opportunity that's an old opportunity. When we first came to market, what really allowed F5 to grow initially was load balancing for websites. The dotcoms were our early customers. We pioneered load balancing to route traffic to the right server and back. And so the same use case is now emerging for AI — where you have all these GPU clusters, but if your request is going into a cluster that is fully busy, you're going to have a lot of wait time. And load balancing — fast, high-performance load balancing between these GPU clusters — is emerging as a big need to make them more efficient. It's a net new opportunity, but it's for technology that we have mastered and built over 20 years. That's potentially really interesting for our partners that are working with customers building AI factories.
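As a rough illustration of the routing problem he describes (a toy sketch, not F5's implementation, and the cluster names are made up), a balancer that always dispatches to the GPU cluster with the fewest in-flight requests might look like this:

```python
import heapq

class LeastBusyBalancer:
    """Route each inference request to the GPU cluster with the fewest
    in-flight requests -- a least-connections policy applied to GPU
    clusters, so work avoids clusters that are fully busy."""

    def __init__(self, clusters):
        # Min-heap of (in_flight_count, cluster_name); each cluster
        # appears exactly once.
        self._heap = [(0, c) for c in clusters]
        heapq.heapify(self._heap)

    def dispatch(self):
        # Pop the least-busy cluster, count the new request against it.
        in_flight, cluster = heapq.heappop(self._heap)
        heapq.heappush(self._heap, (in_flight + 1, cluster))
        return cluster

    def complete(self, cluster):
        # A request finished: decrement that cluster's in-flight count.
        self._heap = [(n - 1 if c == cluster else n, c)
                      for n, c in self._heap]
        heapq.heapify(self._heap)

lb = LeastBusyBalancer(["gpu-cluster-a", "gpu-cluster-b", "gpu-cluster-c"])
for _ in range(5):
    print("request ->", lb.dispatch())
```

Production balancers weigh more signals than queue depth (GPU memory pressure, batch occupancy, health), but the shape of the problem is the same one F5 solved for dotcom-era web servers.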

The other phenomenon with AI is that we're expecting, and starting to see, that AI applications are going to be very distributed. There are a couple of reasons for that. One is data gravity. People don't want to keep moving their data. An AI app can be in a public cloud or on-prem, but it's accessing data that is in multiple locations. The other reason is that for inferencing — the running of these AI applications — people want to do it as close as possible to where the decision needs to be made, to minimize latency. And so that means manufacturers may want to run inferencing right where machines are running. Retailers will want to run AI in retail shops, right where the action is. And so forth.

And so that's creating the need for application security and delivery services that are also highly distributed and can run in all these locations. And today there isn't really a company that was built to deliver on that use case. Because even the edge players, the large CDN players — they can bring their services to their PoP locations, where they have a point of presence. But they can't do it in far-edge use cases. We have built a technology on our SaaS platform that is deployable virtually anywhere there is compute infrastructure. It could be in the trunk of your car, it could be in a retail shop, it could be in a critical care unit. It could be in a military vehicle in the field. We're seeing that AI is going to have a need for these highly distributed capabilities. And we think that as companies move to inferencing for distributed AI applications, that capability is going to become a key differentiator.
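A simplified sketch of that placement logic (the site names and latency figures are invented for illustration): route each inference request to the lowest-latency location that is currently healthy, falling back outward when the nearest site is down.

```python
# Hypothetical edge sites with round-trip latencies (in ms) measured
# from where the inference decision is needed; values are made up.
EDGE_SITES = {
    "factory-floor-gateway": 2.0,
    "regional-pop": 18.0,
    "public-cloud-region": 65.0,
}

def pick_inference_site(sites, healthy):
    """Choose the lowest-latency healthy site -- a toy version of
    'run inference where the decision is made'."""
    candidates = {site: ms for site, ms in sites.items() if site in healthy}
    if not candidates:
        raise RuntimeError("no healthy inference site available")
    return min(candidates, key=candidates.get)

# The factory gateway is down, so the request fails over to the
# next-closest site.
print(pick_inference_site(EDGE_SITES,
                          healthy={"regional-pop", "public-cloud-region"}))
# -> regional-pop
```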

So it sounds almost like you’re expecting another expansion of the edge as a result of AI?

We started seeing this with modern applications, but we think AI is just going to accelerate that. You're already hearing phone manufacturers talking about how you're going to run small LLMs on your phones. Now translate that to the business world. We're going to want to run our LLMs wherever we need to run them. And that's not going to be limited to the public cloud or a few big data centers. Wherever we're running these things, we will need to secure them. We will need to make sure they're delivered properly, and they're working 24-by-7. And so the software stack that does all the security and delivery will need to live wherever these things are. That's the capability that we’ve built.