NetApp, Google Cloud Partner On GenAI, Hybrid Cloud
Leading independent storage and cloud technology developer NetApp Wednesday unveiled an expansion to its Google Cloud partnership aimed at helping partners and customers better leverage data for generative AI and other hybrid cloud workloads.
At this week’s Google Cloud Next ’24 conference, NetApp and Google Cloud launched a new Flex service level for Google Cloud NetApp Volumes, which the two companies developed together. NetApp also used the event to preview a new GenAI toolkit reference platform for RAG (retrieval-augmented generation) operations on the Google Cloud Vertex AI platform.
The moves at Google Cloud Next ’24 are just the latest in a long-term partnership between NetApp and Google, said Eric Han, NetApp’s vice president of product management.
[Related: NetApp, Nvidia Intro New A.I. Architecture With Strong Cloud Tie]
“We became a Google first-party service in August of last year,” Han told CRN. “And since then, it's grown quite a bit. We've seen customers onboard and move to Google that before were very interested in moving to Google but had certain workloads that couldn't run until NetApp and our enterprise storage was made available. We're seeing that across retail, financial services, as well as healthcare.”
That Google first-party service, called Google Cloud NetApp Volumes, lets both existing NetApp customers and customers new to the storage vendor get NetApp cloud storage as a native Google service that is operated, run and experienced as part of Google Cloud, Han said.
Google Cloud NetApp Volumes supports workloads ranging from file shares and file services to VMware, and is now being extended with the new Flex offering, he said.
“Today, we already offer multiple tiers of service that are publicly documented: standard, premium, and extreme,” he said. “They offer different throughputs. And they're all backed by ONTAP and NetApp’s enterprise storage. With Flex, we're launching this as a cloud service operated by Google. It's going to run on Google's infrastructure. And it's going to offer the ability to create very small volumes, hence the name ‘Flex,’ from one gigabyte all the way to 100 terabytes.”
With this move, customers will be able to pick from four service levels based on performance and type of workload, and have their service scale and grow with them, Han said. Those levels include the standard service, which offers throughput of 16 MBps. The new Flex level will also broaden the range of workloads customers can run on Google Cloud, he said.
Flex is initially launching in 15 Google Cloud regions, but by year’s end is expected to be available in all of Google’s regions, Han said.
NetApp also used Google Cloud Next ’24 to preview its GenAI toolkit to support NetApp Volumes. The demonstration showed Google Cloud NetApp Volumes working with the latest version of Google’s Vertex AI unified development platform.
“But we're actually going to make it multi-cloud,” Han said. “So from that perspective, you can imagine that every customer who has unstructured data wants to be able to use cloud AI services, but to do so it's not easy. This toolkit, which will be freely available, will help them instantiate and set up this pipeline that brings their data into the cloud AI services. It's an opportunity for us to work with customers to get their feedback about what they want to see, both from their use cases as well as in the cloud services we're building. And we're going to build this across multiple clouds.”
The NetApp GenAI toolkit ingests data and then uses RAG, or retrieval augmented generation, as the pattern to spin up the components customers need to take their data to AI and bring AI to their data, Han said.
“Just like any other AI system, businesses can summarize content,” he said. “Maybe they have doctor-patient records and they want to see what the last visit was about. … Or maybe you're about to go on a call and you have a lot of products and you want to get a sense of what you should highlight for your audience and prepare a solutions brief. So it takes all the content that I have and creates that summary.”
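To make the pattern concrete, here is a minimal, self-contained sketch of a retrieval-augmented generation flow of the kind Han describes. It is purely illustrative and is not NetApp's toolkit: the RagPipeline class, the toy bag-of-words "embedding," and the stubbed generation step are hypothetical stand-ins for the embedding, vector search, and Vertex AI model calls a production pipeline would use.

```python
"""Illustrative RAG sketch (hypothetical; not NetApp's GenAI toolkit)."""

import math
import re
from collections import Counter


def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term-frequency vector.
    # A real pipeline would call a hosted embedding model instead.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))


def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


class RagPipeline:
    """Ingest documents, retrieve the most relevant ones for a question,
    and assemble a grounded prompt for a generative model."""

    def __init__(self) -> None:
        self.index: list[tuple[str, Counter]] = []

    def ingest(self, documents: list[str]) -> None:
        # Ingestion step: in Han's description, this is where unstructured
        # data (for example, files on NetApp Volumes) enters the pipeline.
        self.index = [(doc, embed(doc)) for doc in documents]

    def retrieve(self, question: str, k: int = 2) -> list[str]:
        # Retrieval step: rank stored documents by similarity to the question.
        q = embed(question)
        ranked = sorted(self.index, key=lambda item: cosine(q, item[1]), reverse=True)
        return [doc for doc, _ in ranked[:k]]

    def answer(self, question: str) -> str:
        # Generation step (stubbed): build a prompt grounded in the retrieved
        # documents; a real system would send this to an LLM endpoint.
        context = "\n".join(self.retrieve(question))
        return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"


if __name__ == "__main__":
    rag = RagPipeline()
    rag.ingest([
        "Visit notes: patient reported improved mobility after physical therapy.",
        "Product brief: Flex volumes scale from 1 GB to 100 TB.",
        "Quarterly summary: retail and healthcare workloads moved to Google Cloud.",
    ])
    print(rag.answer("What was the last patient visit about?"))
```

In a production version of this pattern, the retrieval index would live in a vector database and the final prompt would be sent to a model on a platform such as Vertex AI; the sketch only shows how the ingest, retrieve, and generate stages fit together.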
Partner Point Of View
NetApp continues to tell an important story about storage and AI, said John Woodall, vice president and CTO of hybrid cloud at General Datatech, a Dallas-based solution provider and channel partner to both NetApp and Google Cloud.
“NetApp continues to tell the story of supporting its tight relationship with Nvidia and that ecosystem,” Woodall told CRN. “Performance for some of those workloads is extreme. At last month’s Nvidia GTC, the announcements there were kind of mind boggling in terms of scale and performance. And if the storage doesn't keep up, that becomes the bottleneck in these projects. GPUs are very hungry.”
Woodall said that a lot of the people he talks with think of Google as being very mature in AI, which is good news for Google in terms of attracting and retaining customers.
“Whether it's a startup, a midsize, or a large-scale, heavy GPU compute customer, cloud is almost always an element of their pipeline or DevOps environment,” he said. “So being able to provide not just cloud native storage, but high-performant, highly available, feature-rich storage is a differentiator that makes what NetApp used to call its Data Fabric story much tighter. Customers can extend their observability, management, and automation from on-prem to cloud at the compute layer and at the network layer. But you really can't do a hybrid cloud correctly if you don't have data services and similar data storage in terms of feature functionality. That favors NetApp in a hybrid and a cloud-native model.”
The amount of focus and interest on AI is “insane,” Woodall said.
“People want answers, but a lot of people don't know where to start, so a lot of them will start in the cloud,” he said. “Giving them something like Flex is what they need. They can grow into it without rearchitecting. And they can move to shorten time-to-delivery or whatever their value metric is. These are the kinds of innovations that, as part of a larger ecosystem, make the whole ecosystem better.”