AWS AI-Powered Security To ‘Accelerate’ Over Next Year: CISO Chris Betz
In an interview with CRN, Betz also shares why GenAI will ‘unlock’ new security advancements for the tech industry such as dramatically improved detection of software vulnerabilities.
The use of generative AI has already given a productivity boost to the internal cybersecurity team at Amazon Web Services, and the cloud giant expects that this is just the beginning, according to AWS CISO Chris Betz.
Betz, who joined AWS as CISO in August, recently spoke with CRN on how GenAI is already making a difference when it comes to the security capabilities of customers and partners, as well as for AWS itself.
“We’re already seeing improvements, and I’m expecting that to further accelerate over the next year,” Betz said in the interview with CRN.
Prior to joining AWS, Betz had most recently served as executive vice president and CISO at Capital One, and earlier had held security roles at companies including Apple and Microsoft. He took over the security chief role at AWS from CJ Moses, who was promoted to become the CISO for Amazon.
During the interview, Betz also discussed the areas of cyberdefense that he believes are most likely to be shaken up by GenAI in the future. Finding software vulnerabilities is a foremost example, he said.
And while this is already happening to some degree, the capability of GenAI to offer humanlike reasoning for detecting software flaws is expected to dramatically improve going forward, according to Betz.
“The approximation that generative AI does of [human] reasoning across data—in this case, reasoning across code—really unlocks a next step of ability to do analysis on software,” he said. “It really helps identify [pieces of code] that look like a potential vulnerability.”
What follows is a condensed and edited portion of CRN’s interview with Betz.
What are you hearing most from customers when it comes to GenAI?
It’s one of the most frequent conversations that I have with CFOs, CTOs, CIOs. One of the questions I get is, ‘How do we use generative AI securely?’ They have concerns around how to safely and securely use commercial, off-the-shelf AI services. They’re looking at both how they do that in a way that protects them and their companies, and how they [handle] sensitive company data as well as unreliable outputs. Accuracy and verification [are] just really crucial in what they’re looking for.
What’s your advice to customers and partners about how to handle all of these security and reliability concerns?
My first advice to them is to start from data. In order to get the most value out of a generative AI large language model, you bring your data to it. And so, how you train or tune that model to do the job you need is a really important first step. For us, I spend a lot of time talking to people about the way we’ve designed our generative AI models. First of all, much like everything at AWS, we start with the premise that your data is just that—it’s your data. Our generative AI technologies—things like Bedrock—continue that model. We haven’t changed anything under the covers. And so for many of our customers, it’s the act of bringing massive amounts of data out of S3 buckets, databases, etc., and bringing that together with the model to tune it. So we spend time talking about how we secure a customer’s data. And we’ve got things like PrivateLink that link up the connection between S3 and the large language model in a secure way. And that allows us to bring that data to the model in a way that maintains the company’s control over some of their most precious assets—the data that they have.
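In practice, the PrivateLink pattern Betz describes (keeping the S3-to-Bedrock data path on the AWS network rather than the public internet) can be sketched roughly as follows with boto3. The VPC, subnet, security group, and route table IDs below are placeholders, and the endpoint service names should be verified against the AWS documentation for your region.

```python
# Hedged sketch: private connectivity for Bedrock and S3. All resource IDs are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Interface endpoint so calls to the Bedrock runtime stay inside the VPC.
ec2.create_vpc_endpoint(
    VpcEndpointType="Interface",
    VpcId="vpc-0123456789abcdef0",                       # placeholder
    ServiceName="com.amazonaws.us-east-1.bedrock-runtime",
    SubnetIds=["subnet-0123456789abcdef0"],              # placeholder
    SecurityGroupIds=["sg-0123456789abcdef0"],           # placeholder
    PrivateDnsEnabled=True,
)

# Gateway endpoint so tuning data read from S3 also avoids the public internet.
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId="vpc-0123456789abcdef0",                       # placeholder
    ServiceName="com.amazonaws.us-east-1.s3",
    RouteTableIds=["rtb-0123456789abcdef0"],             # placeholder
)
```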
The second thing we talk about after bringing the data back to the model is protection of data as they’re interacting with the model. Once you’ve got the model up and running, and customers are looking to use that model, they want to bring some of their most sensitive data to that model. They want to bring information about the people that they’re interacting with, their accounts, their clients, everything else—and put that into a prompt in order to get a response that is well-tuned and fits the situation. So my second advice to those customers is, pay attention to security around the data that you bring to that model in the prompt, and how you get a response.
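A minimal sketch of that second step, assuming the private endpoint above is in place: sensitive context goes into the prompt, the model is invoked through the Bedrock runtime, and the response stays inside the customer's own account. The model ID and request body follow the Anthropic Claude messages format on Bedrock and are placeholders to adapt to whichever model is enabled.

```python
# Hedged sketch: sending sensitive context to a model reachable only through the private endpoint.
import json
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

sensitive_context = "Account 1234: recent transactions ..."  # illustrative only

response = runtime.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # placeholder model ID
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 512,
        "messages": [{
            "role": "user",
            "content": f"Summarize this account activity:\n{sensitive_context}",
        }],
    }),
)

result = json.loads(response["body"].read())
# Keep the raw response inside your own account boundary (e.g., encrypted storage)
# rather than logging it to shared systems.
print(result["content"][0]["text"])
```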
What are the most important things for customers and partners to focus on in terms of the security around their data?
My recommendation to them is to look at the application that they’re building. Models don’t stand alone; they sit within an application. And so, [organizations need to] look at that environment. What we’ve found is that the choice that we offer—of a bunch of different models riding on Bedrock, for example—really gives customers the ability to pick the right model that they need. And the fact that that operates within AWS—where they have builders who have a ton of experience in building secure applications on top of AWS—gives them those mechanisms. Using things like PrivateLink helps provide an encrypted connection between Bedrock and the other parts of their AWS accounts. The fact that they have the AWS account structure and the identity and access management that they’re familiar with—all of these technologies that their builders already know—really helps them to put that security around the product. So the most important things we end up talking about are how to think about the model not just in isolation, but the model as a part of an end-to-end application. And how all of the different technologies that they are already using within AWS really help them to deliver that secure, end-to-end capability.
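One hedged sketch of wrapping those familiar IAM mechanisms around the model: an identity policy that allows invoking only one approved foundation model, and only through the VPC endpoint created earlier. The policy name, model ARN, and endpoint ID are placeholders, and the exact condition keys and ARN format should be confirmed in the Bedrock and IAM documentation.

```python
# Hedged sketch: restrict model invocation to an approved model over PrivateLink.
import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "InvokeApprovedModelViaPrivateLinkOnly",
        "Effect": "Allow",
        "Action": ["bedrock:InvokeModel"],
        # Placeholder foundation-model ARN.
        "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-sonnet-20240229-v1:0",
        "Condition": {
            # Only allow calls that arrive through this VPC endpoint (placeholder ID).
            "StringEquals": {"aws:SourceVpce": "vpce-0123456789abcdef0"}
        },
    }],
}

iam.create_policy(
    PolicyName="bedrock-invoke-private-only",   # placeholder name
    PolicyDocument=json.dumps(policy_document),
)
```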
How are things going with Guardrails for Amazon Bedrock? How are partners and customers using it?
Guardrails conversations are going very well. The third thing I usually talk with customers about is, is the response trustworthy enough for the function that you’re trying to use it for? Guardrails plays very, very deeply into that. One of the coolest things about large language models is that real-life feel that they give to communication. That also can be an issue when you’re looking to make decisions based on that data. Guardrails came out of technologies that we had to build for ourselves as we were implementing large language models. We realized we really need the capabilities to filter prompts as they come in to make sure that they’re what we expect to reach the model, and to filter the response to make sure that it matches well. I think that’s one of the spaces that we—and frankly, our customers and the world at large—are rapidly innovating in. We’re constantly learning new and different ways to help increase the robustness of those defenses. And we’re codifying that in Guardrails.
Customers don’t want to reinvent how to do that time and time again. That’s one of the things I’m most concerned about—how do I not have to go reinvent these protections over and over again? And that’s where I think Guardrails gives them a really good starting place—a good foundation of capabilities that allow them to rapidly prototype and then move into production. Certainly there may be additional custom things that they choose to do, but Guardrails gives them a good starting place to put in those protections.
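A minimal sketch of that starting place, assuming the Guardrails APIs exposed through boto3's bedrock client: define input and output content filters once, create a version, then reference the guardrail on every model invocation. The name, filter choices, and blocked-content messages are placeholders.

```python
# Hedged sketch: create a guardrail once and reference it at invocation time.
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

guardrail = bedrock.create_guardrail(
    name="customer-support-guardrail",            # placeholder
    description="Filters prompts and responses for the support assistant",
    contentPolicyConfig={
        "filtersConfig": [
            {"type": "HATE", "inputStrength": "HIGH", "outputStrength": "HIGH"},
            {"type": "INSULTS", "inputStrength": "HIGH", "outputStrength": "HIGH"},
        ]
    },
    blockedInputMessaging="Sorry, I can't help with that request.",
    blockedOutputsMessaging="Sorry, I can't share that response.",
)

version = bedrock.create_guardrail_version(
    guardrailIdentifier=guardrail["guardrailId"],
)

# At invocation time the same guardrail filters both the prompt and the response.
# The invoke_model call follows the earlier sketch, with two extra fields:
#   guardrailIdentifier=guardrail["guardrailId"],
#   guardrailVersion=version["version"],
```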
When you say they don’t want to reinvent protections, you mean across the different apps that they’re building?
It’s [across] different applications, but many of them also worry about keeping up with all of the active research in this area. You can imagine, within a company, not just application by application, needing to put things in place but also, as the world gets smarter, do they have to dedicate people to keeping up with what everybody else is figuring out? Guardrails gives them a place that already is taking in a bunch of that knowledge and building it into tools, which makes it a lot easier for companies to start in a better place to have secure generative AI solutions.
What can you say about where you’d like to go next in these areas?
The thing that I’m most excited about is building security into the technologies. We’re spending time working with some of the top security researchers in the world as they’re looking at the security of large language models. My plan is to have those delivered seamlessly wherever possible as part of the solutions. The more I can make it easy for customers to be secure, the better off I am.
It’s not directly in the Guardrails space, but something else I’m excited about, and have been talking with CISOs about, is the technology we’ve embedded in Amazon CodeWhisperer. CodeWhisperer is our AI coding companion, and it is the only one that I’m aware of with built-in security scanning for finding and suggesting remediations for hard-to-detect vulnerabilities. That’s an area as well that we keep on innovating [in]. This fits into the mental model that we have—we want security to be built in wherever possible for easy and seamless operations. When we can make security easy for our builders, that’s the right way to do it. It is a case where every developer using CodeWhisperer just automatically gets AI-powered code suggestions, tailored to their code, to remediate security and code quality issues. Guardrails is another example of the same kinds of technology—deploy Guardrails and it raises the security capabilities of our tools. That’s where I think you’re going to see us continuing to innovate—things where it becomes a natural part of how builders build their tools, and security comes through there.
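For illustration only (this is not CodeWhisperer output), the kind of hard-to-detect issue such scanning targets, and the remediation a security-aware suggestion would typically offer, looks like this:

```python
# Illustrative example of a flagged pattern and its remediation.
import sqlite3

def find_user_unsafe(conn: sqlite3.Connection, username: str):
    # Flagged pattern: untrusted input concatenated into the query string (SQL injection).
    return conn.execute(
        "SELECT id, email FROM users WHERE name = '" + username + "'"
    ).fetchall()

def find_user_safe(conn: sqlite3.Connection, username: str):
    # Typical remediation: bind the value as a parameter instead.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```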
Are there more examples you could share?
Another good example is Amazon Inspector, where we offer remediation for Lambda functions that helps patch vulnerabilities. Amazon Detective has group summaries that allow customers to quickly locate and get key insights. These are examples where you take an existing capability, which has a whole bunch of data on the back end, and use generative AI to help customers look at the right places, provide useful suggestions, synthesize a ton of data. It’s really exciting. And we see that internally as much as externally. I’ve got a large number of internal projects that do the same kinds of things for our security teams—help them synthesize large amounts of information, make suggestions on actions to take that allow us to be so much faster. That’s where I think defenders and security experts across the industry are going to benefit from that injection of generative AI into capabilities that we see every day.
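The pattern Betz describes (existing security data synthesized by a model) can be approximated in a few lines. This is an editorial sketch, not AWS's internal tooling or the Inspector and Detective features themselves: it pulls a page of Amazon Inspector findings and asks a Bedrock-hosted model, identified by a placeholder model ID, to summarize and prioritize them.

```python
# Hedged sketch: synthesize security findings with a Bedrock-hosted model.
import json
import boto3

inspector = boto3.client("inspector2", region_name="us-east-1")
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

findings = inspector.list_findings(maxResults=25).get("findings", [])
digest = "\n".join(f"- {f.get('severity')}: {f.get('title')}" for f in findings)

response = runtime.invoke_model(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # placeholder model ID
    contentType="application/json",
    accept="application/json",
    body=json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 400,
        "messages": [{
            "role": "user",
            "content": "Summarize and prioritize these findings for a security "
                       "on-call engineer:\n" + digest,
        }],
    }),
)
print(json.loads(response["body"].read())["content"][0]["text"])
```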
How else are you using GenAI internally for security?
We’re using AI and machine learning to automatically identify and resolve issues. So you can imagine the large set of data that a security expert needs to look at in order to make a decision—generative AI has an ability to synthesize that and make recommendations based on that. Other machine learning technologies as well are really, really powerful. The ability to help communicate with our builders is amazingly powerful. Like any large company, we have a ton of data and a ton of capabilities. When our builders are able to ask quick security questions—and get pointed to the right place with the right information, get those answers quickly—that is very, very useful.
And certainly, with the ability to look for hard-to-find vulnerabilities and suggest secure options, that’s already starting to take the load off my security teams. [With] the ability to speed up things, synthesize large data, help produce content that communicates things to people, that’s where generative AI has become really powerful.
How much of a difference do you think GenAI will ultimately make for your internal security teams?
We’re already seeing improvements, and I’m expecting that to further accelerate over the next year. We’ve got some really interesting plans. Some of the pilots are looking really positive. And so, we’re continuing to test and learn. We’ve got some places already where we’re seeing impact. And I think over the next year, there’s a number of these that I think are going to take off in a significant way above and beyond what we’re already seeing, which is impactful for sure.
Is GenAI ultimately more than just another technology or tool and really on another level in terms of what it will lead to?
Generative AI is really good at some problems that we’ve struggled with solving for a long, long time. Part of what we’re seeing is generative AI’s ability to unlock other areas that we’ve been struggling with for a while. So for example, the ability to do natural language communication, in an intelligible way, is something that generative AI is significantly better at than many other things. And there are a ton of other things that you can unlock once you make it easy for somebody to express a problem and receive a tailored answer. And so I think that’s where part of the excitement is coming from. And so, yes, I think generative AI is really powerful. And I do think it’s helping unlock a number of other technologies.
Specific to security, what is something you think GenAI will unlock in the future?
There’s a whole bunch of them. One of them that comes to mind is as we have an ability to process even more data in generative AI—with larger sets of data that we’re able to tie together—I think how we look at code and software vulnerabilities may change dramatically over the next five years. I think that’s going to be really important.
In terms of the ability to spot these vulnerabilities, why does GenAI excel at that?
Generative AI does things that approximate reasoning over a body of data. One of the hardest parts of finding vulnerabilities is the ability to reason across code. Historically, the systems that we’ve used to automatically find vulnerabilities do a really good job of algorithmically looking for patterns that are known to be bad or [are] trying to find those different patterns. But we still use humans in this space because the ability to reason over code remains unique and very powerful. The approximation that generative AI does of [human] reasoning across data—in this case, reasoning across code—really unlocks a next step of ability to do analysis on software. It really helps identify [pieces of code] that look like a potential vulnerability, which then in turn can be backed by something that does other analysis—a human or another system—to find it. So it’s that reasoning-like capability that provides the step-function increase here.
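A rough sketch of that two-stage workflow: a model reasons over a code snippet and flags candidate weaknesses, then a conventional analyzer, or a human reviewer, confirms them. This assumes the Bedrock setup sketched earlier, uses a placeholder model ID, and uses the open-source Bandit scanner as the confirming system, which is an editorial choice rather than anything named in the interview.

```python
# Hedged sketch: LLM triage of candidate vulnerabilities, confirmed by a second system.
import json
import subprocess
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

def llm_triage(source_code: str) -> str:
    """Ask the model for suspected vulnerabilities, with line references and reasoning."""
    response = runtime.invoke_model(
        modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # placeholder model ID
        contentType="application/json",
        accept="application/json",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 600,
            "messages": [{
                "role": "user",
                "content": "List potential security vulnerabilities in this code, "
                           "with line numbers and reasoning:\n\n" + source_code,
            }],
        }),
    )
    return json.loads(response["body"].read())["content"][0]["text"]

def static_check(path: str) -> str:
    """Second-pass confirmation with a conventional scanner (Bandit, if installed)."""
    result = subprocess.run(
        ["bandit", "-r", path, "-f", "json"], capture_output=True, text=True
    )
    return result.stdout

# Candidates from the model are then cross-checked against the scanner's findings
# (or handed to a human reviewer) before anything is treated as a confirmed vulnerability.
```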