VMware Outlines Software-Defined Data Center Strategy, Intros VMware NSX SDN Tech


 

Raghu Raghuram of VMware

VMware and EMC presented their take on the growing importance of software-defined data centers and software-defined storage, arguing that the move from physical to software-defined architecture is a necessary step in the development of the cloud.

VMware, as part of the move toward software-defined data centers, this week also unveiled its new VMware NSX software-defined networking technology, which combines its organically developed vCloud Networking and Security (vCNS) with the technology it gained from last year's acquisition of Nicira, the industry's leading open source SDN developer.

Raghu Raghuram, VMware's executive vice president of cloud infrastructure and management, told analysts at Wednesday's EMC and VMware 2013 Strategic Forum for Institutional Investors that an application running in a virtual machine tied to traditional enterprise networking and storage takes away the advantage of using a virtual machine in the first place.


[Related: EMC, VMware To Launch Pivotal Initiative As Separate $300M Firm]

"The operational model is still the physical infrastructure," Raghuram said. "For the cloud, you need to virtualize the entire data center."

Virtualizing a server using VMware technology has three primary parts, Raghuram said. First, the application is abstracted from the server infrastructure. "That gives you mobility [and] freedom of choice," he said.

The second part is to pool server resources to ensure that required resources are available to the application. The third is to automate the pooling of those resources.

If the abstraction, pooling and automation can be applied to storage and networking resources as well as compute resources, those parts become the basis of virtualized data centers, Raghuram said.
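The abstract-pool-automate model Raghuram describes can be pictured in a few lines of code. The sketch below is purely illustrative and assumes nothing about VMware's actual software; all class and method names are hypothetical. It models three resource pools (compute, storage, network) and an automation layer that places an application only when every pool can satisfy its request.

```python
# Illustrative toy model of the abstract/pool/automate idea.
# Not VMware code; all names here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ResourcePool:
    """Pools capacity of one resource type (compute, storage, or network)."""
    kind: str
    capacity: int          # abstract units contributed by underlying hardware
    allocated: int = 0

    def available(self) -> int:
        return self.capacity - self.allocated

    def allocate(self, units: int) -> bool:
        if units > self.available():
            return False
        self.allocated += units
        return True

@dataclass
class DataCenter:
    """Automation layer: places an app against pooled resources of all types."""
    pools: dict = field(default_factory=dict)

    def add_pool(self, pool: ResourcePool) -> None:
        self.pools[pool.kind] = pool

    def place(self, app: str, needs: dict) -> bool:
        # Check every pool first so placement is all-or-nothing.
        if any(units > self.pools[kind].available() for kind, units in needs.items()):
            return False
        for kind, units in needs.items():
            self.pools[kind].allocate(units)
        return True

dc = DataCenter()
dc.add_pool(ResourcePool("compute", 100))
dc.add_pool(ResourcePool("storage", 500))
dc.add_pool(ResourcePool("network", 40))

print(dc.place("app1", {"compute": 20, "storage": 100, "network": 10}))  # True
print(dc.place("app2", {"compute": 90, "storage": 50, "network": 5}))    # False: only 80 compute units left
```

The point of the toy is the separation Raghuram draws: applications ask for abstract units rather than specific hardware, capacity is pooled across devices, and placement decisions are automated rather than configured by hand per server, switch or array.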

"When you do that, the operational profile of the data center substantially changes," he said.

While traditional physical data center infrastructures work for traditional applications such as Oracle, SAP and Microsoft Dynamics, or those written using Java, enterprises are increasingly turning to either new applications such as Hadoop or those written with the Python programming language or frameworks such as VMware's Spring, Raghuram said.

Furthermore, while the old client-server model was tied to the hardware, making it hard to scale and automate, Web 2.0 companies have learned to take advantage of industry-standard hardware, any IP network, and scale-out storage and flash technology. "This gives them an amazing ability to automate data centers, gives them an amazing ability to scale," he said.

The idea of a software-defined data center carries this further to work with both traditional and new applications, Raghuram said. "Because this works independent of the underlying hardware, it works for any application and for any hardware," he said.

NEXT: The Three Requirements Of A Software-Defined Data Center