10GigE: Fast, Pricey and Coming to a Data Center Near You
Pundits have kicked around the future of 10-Gbps Ethernet in the data center for some time. Although there's no consensus on the route deployment will take, one fact is clear: At some point, you'll purchase 10GigE gear. Anyone who says differently didn't live through the excruciating death of Token Ring or, more recently, the Fast Ethernet upgrade cycle.
Seeing as those who forget history are doomed to repeat it and all that, remember that new network technologies generally follow a well-trodden path:
• Very expensive, seen as backbone-only, considered ultra-high bandwidth.
• Still expensive but making its way into nonbackbone roles; nice to have if you can swing it. This is where 10GigE gear is now.
• Prices in free fall; widely used.
• Cheaper than the technology it replaced; pervasive.
One significant difference between 10GigE and its forebears could affect this cycle: With Fast and Gigabit Ethernet, we just upgraded switches and cards, and possibly cables if they were really old, and voilà! But 10GigE runs best over fiber. Vendors will pitch you on copper--under the proposed 10GigE-over-copper standard, Cat 6 cable will run up to 55 meters, while Cat 6a and Cat 7 will run up to 100 meters--but we're not sold. Plus, the copper implementations we're aware of today require upgraded connectors. Expect to pull wire, or better yet, pull fiber.
We foresee a slower uptake of 10 Gig given the required upgrades. In 2001, we were shocked to discover a Fortune 1000 company we have a relationship with was still running 10 Mbps to the desktop. Its response to our queries? "It's expensive to replace, and we don't have a driving need yet." Of course, that was the management line. Employees, particularly the IT staffers still on a 10-Mbps subnet, had a much different view.
Who's in the Sandbox?
When it comes to 10GigE, most of the usual suspects are involved. Cisco, Extreme Networks, Force 10 Networks, Foundry Networks, Hewlett-Packard and Nortel Networks make up the core of the switching market, while Neterion Technologies (formerly S2io) and Myricom are the early NIC suppliers. When the standard is released--this summer or fall--more vendors will move into the NIC arena, and possibly the switch market. 3Com, for example, currently uses 10GigE only for switch interconnects. With a standard in hand and adoption mounting, we expect it to join the other primary switch vendors.
We have a 10GigE network in our Green Bay, Wis., Real-World Labs®, and it works as advertised. There are some gotchas, like the fact that the Windows IP stack is too slow to fill a full 10 Gbps. Will that improve? Maybe with better hardware and Longhorn, but we're not holding our breath. Still, the 6.4 or so Gbps we do get sure beats 1 Gbps when we need extra bandwidth, and Linux and most Unix variants don't suffer these software problems. Although 10GigE is still expensive compared with Gigabit Ethernet--and that's not counting the fiber or special copper wiring required--prices are falling. But even if 10GigE achieves cost-per-port parity with GigE, you'll still need to replace your NICs, switches, routers and cabling.
When should you make the move? "Trust historical trends in data volumes and capacities. Your data center and backbone network utilization growth will provide a reasonable benchmark for when you'll cross the 10-gigabit threshold," says Joel Conover, research director at Current Analysis. "Remember, you generally don't want your network operating at more than 50 percent utilization because spikes and peak traffic can cause performance issues when the load creeps up." Conover adds that we won't likely see 10GigE cards built into PCs for at least two years.
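To make Conover's benchmark concrete, here's a minimal back-of-the-envelope sketch that projects backbone utilization from a historical growth rate and flags when it crosses the 50 percent comfort line on a GigE link. The starting load and growth figures below are hypothetical placeholders, not measurements; plug in your own trend data.

```python
# Rough capacity-planning sketch based on Conover's advice: project
# utilization from historical growth and flag the year it crosses the
# 50 percent comfort threshold on a 1-Gbps backbone link.

LINK_CAPACITY_MBPS = 1_000      # current GigE backbone
COMFORT_THRESHOLD = 0.50        # keep headroom for spikes and peak traffic

current_load_mbps = 220         # hypothetical average backbone load today
annual_growth = 0.35            # hypothetical 35% year-over-year growth

year = 0
load = current_load_mbps
while load / LINK_CAPACITY_MBPS < COMFORT_THRESHOLD:
    year += 1
    load *= 1 + annual_growth

print(f"At {annual_growth:.0%} growth, a {current_load_mbps}-Mbps load "
      f"crosses 50% of GigE in roughly {year} year(s) ({load:.0f} Mbps).")
```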
I Want It Now
Near-term, using 10GigE as a backbone makes perfect sense when aggregating the throughput of multiple networks. That's the first place to look for return on investment. Another often-overlooked use for 10GigE is in server consolidation. We recently ran tests on an HP machine running eight dual-core AMD Opteron chips and found it to be network-bound. Considering the machine used four 1-gigabit network cards, we'd say we've reached the point where servers can be limited by their Gigabit network connectivity. We sure hit a throughput wall in our testing, which was admittedly over-the-top compared with what you're likely to see in regular data-center use.
"10GigE is the next logical step for high-performance data centers," Conover says. "With massively scalable multiprocessor systems, say eight-core and up, eliminating the CPU ceiling for network performance, it's time to start thinking about 10-gigabit connectivity direct to hosts. Remember this simple rule of thumb: It takes 1 MHz of processing power to deliver 1 Mbps of throughput. With eight core systems available, running at 2 GHz each, it's feasible to support 10-Gbps interfaces on next-generation server computing platforms."
This trend will continue. Our apps are now truly distributed, which means the more traffic they can handle, the more the network becomes the bottleneck. Placing a 10-gigabit NIC into a monster server being used for consolidation, or into one that is constantly hitting the database for large record sets, or even into the database server itself, will deliver a high return in very busy environments.
Other ROI opportunities abound:
• Local replication. Another huge gain for 10GigE will be in local replication. When replicating a group of servers to a single target--say, a server that will later stream the data to tape--10 Gbps of throughput will pay off. Replication sends a lot of data over the wire, so more is always better. Say you're replicating 15 servers, each bursting at 10 Mbps: The replication target alone would be receiving 150 Mbps of traffic, and you could lose replication data. Multiply that by 250 servers in your average data center, and you've got an impossible task--a 1-Gbps connection at the replication server trying to handle 2.5 Gbps of throughput (see the sketch after this list).
• Streaming video. Any method of disseminating information that does not require employees to leave their desks makes sense. Although video servers have gotten much more bandwidth-efficient by using multicasting to keep throughput down, a video that is viewed by a large number of employees can still put a drain on the corporate network. This is particularly true if you're offering video on demand, because multicasting is not viable in an on-demand environment: Each individual needs only a small percentage of the whole. Putting a 10GigE card into your video server and connecting that server to a 10GigE backbone can ensure it responds to requests for streams as fast as they come in.
• Core business servers. If your business is primarily an online entity or has unpredictable peaks and valleys in your ordering process, consider having your core business servers run on a 10GigE core network. Opening up bandwidth between different tiers of your apps is much easier than using load balancing to achieve effectively the same result. Of course, this may push the bottleneck back to the servers and still require load balancing, but that, too, can be eliminated as multicore servers proliferate.
• Link aggregation. If you're joining several 1-gig ports to make a larger pipe, it might make sense to pay a bit extra and deploy a single port at 10 gigabits. Then you won't have to play around with trunking, aggregation tools and interfaces.
• Mail consolidation. Consider a 10GigE backbone and a 10GigE card in a monster server as an alternative to the utterly ugly Exchange clustering that many organizations have adopted. With a 10-Gbps backbone, even the high-overhead MAPI protocol should perform well with a large number of clients hitting it. Fewer boxes equals less management, and you'll also save the cost and effort of implementing clustering while still improving the performance of your mail system.
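Here's the replication arithmetic from the first bullet spelled out as a rough sketch. The per-server burst rate mirrors the 10-Mbps figure used above; the server counts and link speeds are the same illustrative numbers, not measurements from any particular data center.

```python
# Aggregate burst traffic at a single replication target and compare it
# with the target's link speed. Illustrative figures only.

burst_per_server_mbps = 10

for server_count, link_mbps in [(15, 1_000), (250, 1_000), (250, 10_000)]:
    offered_load = server_count * burst_per_server_mbps
    verdict = "fits" if offered_load <= link_mbps else "drops likely"
    print(f"{server_count:>3} servers -> {offered_load:>6,} Mbps offered "
          f"on a {link_mbps:,}-Mbps link: {verdict}")
```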
Face it, 10GigE is on its way to your infrastructure. The question is, will it come in a planned manner, or in fits and starts, with mismatched pieces of network proliferating around you? All but the smallest companies should plan for a 10GigE backbone in the next 18 to 24 months. Focus on solving the big problems, and reap the benefits of more bandwidth and fewer new servers.
Don MacVittie is a senior technology editor at Network Computing. Previously he worked as an application engineer at WPS Resources, a Green Bay, Wis., utility-holding company. Write to him at dmacvittie@nwc.com.
