
Now Is the Time to Upgrade to 10 Gigabit Ethernet

Intel® 10GbE White Paper


10 Gigabit Ethernet (10GbE) Overview

Intel and other vendors have been shipping 10 Gigabit Ethernet (10GbE) adapters for a decade. What has changed to make users broadly adopt 10GbE now? Why has this powerful technology taken so long to ramp? And why will deployments of 10GbE grow even faster in the coming years? This white paper answers these questions and, in doing so, explains why now is the time for you to migrate to 10GbE, if you haven’t already.

Four key technology and market forces have been driving the need for 10GbE: server virtualization, Moore’s Law server performance increases, unified networking, and Flash* in the datacenter. Three key trends have eased the transition: adapter I/O virtualization, improvements in server I/O performance, and, most important, tumbling deployment costs as 10GBASE-T goes mainstream.

Four Key Forces Driving 10GbE Adoption

Server Virtualization

Prior to server virtualization, most servers were dedicated to a single application. In that era, Gigabit Ethernet (1GbE) offered sufficient bandwidth to satisfy the performance requirements of most business applications, including basic file serving and applications such as email and databases.

The world changed with the broad adoption of server virtualization in the last decade. Now, servers support multiple Virtual Machines (VMs), each supporting one or more applications, and each requiring its own network bandwidth. Since 2010, according to IDC,1 there have been more VMs deployed than physical servers, with VMs continuing to grow at a far faster rate than physical servers.

It is simply uneconomical to support a virtualized server with the multiple 1GbE connections needed to serve all those VMs. Sharing a “fatter” 10GbE pipe dramatically reduces cabling costs, power consumption, and overall infrastructure costs. Environmental considerations are also becoming a more prominent factor in IT decision making: reductions in equipment translate into less power and cooling, and fewer cables mean improved airflow and less chance of human error during setup and maintenance.

Hypervisors have also evolved to take advantage of 10GbE, which they didn’t initially. Now, 10GbE is fully exploited for VM migration and storage, and hypervisors provide bandwidth controls to manage the traffic over the shared 10GbE pipe.



Moore’s Law Server Performance Increases

Gordon Moore predicted in 1965 that the number of transistors contained in mass-produced integrated circuits would double at a regular cadence, roughly every two years. Remarkably, the semiconductor industry has maintained this exponential growth to the present, and most experts believe it will continue through at least 2020. Today, the effects of Moore’s Law on servers are most apparent in the continuing increase in the number of cores per processor, as well as in the performance of each core. For example, the Intel® Xeon® processor E7 family offers up to 15 cores in a single processor. These ever-more powerful server processors support more VMs and place ever-greater demands on network I/O.

Unified Networking

10GbE creates the opportunity to run all network storage traffic over an Ethernet pipe. Network Attached Storage (NAS) and iSCSI can already run over 1GbE, though at substantially reduced speeds compared to 10GbE. But many datacenters continue to run Fibre Channel (FC) storage traffic over a completely separate network with its own expensive host bus adapters (HBAs) and switches. This is clearly not the most economical way to run storage networking. Even if there are reasons for keeping the storage network isolated with an air gap, running it over an Ethernet infrastructure is far less expensive than over a Fibre Channel infrastructure.

10GbE makes it possible to run Fibre Channel protocols over an Ethernet wire (FCoE), along with NAS and iSCSI. But 10GbE itself had to evolve to support the demanding requirements expected of FC. This evolution in Ethernet technology is called “Data Center Bridging” (DCB), a set of IEEE specifications. DCB enables support of FCoE and converged data storage traffic over an Ethernet wire, which can even be shared with LAN traffic. These enhancements make it possible to allocate bandwidth and to improve traffic flow management to ensure lossless transmission characteristics. The most commonly deployed DCB technologies are Enhanced Transmission Selection (ETS) and Priority-based Flow Control (PFC). Data Center Bridging Exchange (DCBX) is implemented alongside DCB to facilitate discovery and configuration of the features enabled between endpoints. Today, DCB is supported as a standard feature by all 10GbE adapters and by enterprise-class switches introduced within the last few years.
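To make the bandwidth-allocation idea concrete, here is a minimal Python sketch of how ETS-style percentage weights divide a shared 10GbE link among traffic classes. It is an illustration only, not a DCB implementation; the traffic-class names and percentages are assumed examples, and real ETS configuration is done through the adapter, switch, and operating-system DCB tooling.

# Illustrative sketch of ETS-style bandwidth allocation on a shared 10GbE link.
# The traffic classes and percentage weights below are hypothetical examples,
# not values taken from the DCB specifications.

LINK_GBPS = 10.0

# ETS assigns each traffic class a share of the link; shares must total 100%.
ets_weights = {
    "LAN": 40,            # general TCP/IP traffic
    "FCoE storage": 50,   # lossless storage class, typically paired with PFC
    "Management": 10,
}

def guaranteed_bandwidth(weights, link_gbps):
    """Return the minimum bandwidth (Gbps) each class is guaranteed under contention."""
    assert sum(weights.values()) == 100, "ETS shares must sum to 100%"
    return {tc: link_gbps * pct / 100 for tc, pct in weights.items()}

if __name__ == "__main__":
    for tc, gbps in guaranteed_bandwidth(ets_weights, LINK_GBPS).items():
        print(f"{tc}: guaranteed {gbps:.1f} Gbps (may use more when the link is idle)")

Note that the weights only govern behavior under contention; a class may use more than its share when the link is otherwise idle.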

The savings from deploying unified networking can be significant. Intel’s IT group published an analysis showing it could reduce network costs per rack by more than 50% by converting from FC to FCoE.3 Storage SAN administrators are a conservative group, and SAN refresh cycles are much longer than server refresh cycles, so the adoption of FCoE has started slowly, as you’d expect. However, market researchers expect FCoE adoption to accelerate in the coming years, now that the technology has proven robust and as current FC SANs reach end of life. Not only are the cost savings from FCoE compelling, but the Ethernet performance roadmap has left Fibre Channel behind: FC is transitioning to 16Gbps now, just as 40GbE, which supports FCoE, emerges. The gap is only likely to widen over time.

Figure: Intel® Xeon® processor historical two-socket integer throughput performance (SPECint*_rate_base2006, higher is better), showing a 15x increase since 2006: from the Xeon 5160 (2 cores, 3GHz, 80W, 2006), through the X5272, X5365, X5492, X5570, X5680, and E5-2690, to the E5-2697 v2 (12 cores, 2.7GHz, 130W, 2013).2



Flash in the Datacenter

Flash non-volatile memory offers twice the read performance of hard disk drives (HDDs), and it is the hard disk drives that are often the critical bottleneck in today’s datacenters. So it should come as no surprise that the use of Flash in the form of solid-state drives (SSDs) can move a system’s critical choke point from storage to the network. For example, in an Intel performance study of Hadoop* running on SSDs, moving from 1GbE to 10GbE reduced the time to execute the TeraSort benchmark by 50%.4
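A quick back-of-the-envelope calculation shows why. The short Python sketch below uses an assumed, representative SSD read rate (roughly 500 MB/s for a SATA SSD), not a measured figure, to compare one drive’s output against 1GbE and 10GbE link capacity:

# Rough illustration of why SSD-backed storage can shift the bottleneck to the network.
# The SSD throughput figure is an assumed, representative value, not a benchmark result.

ssd_read_mb_per_s = 500                       # assumed SATA SSD sequential read rate
ssd_read_gbps = ssd_read_mb_per_s * 8 / 1000  # roughly 4 Gbps of data off one drive

for link_gbps in (1.0, 10.0):
    share = ssd_read_gbps / link_gbps
    print(f"{link_gbps:.0f}GbE link: a single SSD can supply {share:.0%} of the link capacity")

Even a single SSD can offer several times the data rate a 1GbE link can carry, while a 10GbE link absorbs it with room to spare.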

Improvements this large from 10GbE aren’t specific to Hadoop. Principled Technologies found similarly dramatic results for Microsoft SQL Server* and Exchange Server* when upgrading to 10GbE on a new Flash-enabled server.6 The 10GbE upgrade more than tripled the number of VMs the server could support (from 8 to 28), and it also more than tripled database performance. The number of Exchange users a single server could support increased from 2,000 to 7,000.

Flash technology is still too expensive to be used broadly in datacenters, so a natural reaction to results like the Hadoop and Principled Technologies studies is that they are interesting, but not particularly relevant. However, Flash is increasingly being used in tiered storage systems as the high-performance tier, the idea being to deliver much of the benefit of Flash without the cost of using it everywhere. In a recent SNIA Ethernet Storage Forum webcast,7 NetApp disclosed results showing that adding Flash to a SAN as a storage tier increased IOPS by more than 28%, even though the added Flash represented far less than 1% of the total SAN storage. In other words, even a little bit of Flash could shift your system’s performance bottleneck from storage to the network if you are still on 1GbE.

Three Primary Enablers Easing 10GbE Adoption

Adapter I/O Virtualization

10 Gigabit Ethernet adapters have also evolved to support the virtualized datacenter, with new capabilities and higher performance in virtualized environments. Two key technologies that facilitate networking for virtualized servers are demand queuing and SR-IOV.

Demand queuing is a technology that sorts packets coming into an Ethernet controller into queues corresponding to their target VMs. This relieves the hypervisor of having to process every incoming packet individually; instead, it moves blocks of data from these queues to their intended VMs, significantly improving performance and reducing hypervisor overhead. Demand queuing goes by different names among Ethernet controller vendors; Intel, for instance, calls its implementation Virtual Machine Device Queues (VMDq). All of these implementations are supported by VMware vSphere* and Microsoft Hyper-V*, though not by Linux*.
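The following minimal Python sketch illustrates the concept only; real demand queuing such as VMDq is implemented in the adapter hardware and hypervisor drivers, and the MAC addresses and VM names here are hypothetical.

# Conceptual sketch of demand queuing: the adapter sorts incoming packets into
# per-VM queues keyed by destination MAC address, so the hypervisor can move
# whole blocks of packets to each VM instead of touching every packet.
# This is an illustration only; real VMDq-style queuing happens in hardware.

from collections import defaultdict

# Hypothetical mapping of VM virtual-NIC MAC addresses to VM names.
vm_by_mac = {
    "52:54:00:00:00:01": "vm-web",
    "52:54:00:00:00:02": "vm-db",
}

queues = defaultdict(list)   # one receive queue per VM

def classify(packets):
    """Place each incoming packet into the queue for its destination VM."""
    for dst_mac, payload in packets:
        vm = vm_by_mac.get(dst_mac, "default")   # unknown MACs go to a default queue
        queues[vm].append(payload)

def deliver():
    """The hypervisor drains each queue as a block and hands it to the VM."""
    for vm, pkts in queues.items():
        print(f"delivering {len(pkts)} packets to {vm}")
        pkts.clear()

classify([("52:54:00:00:00:01", b"GET /"), ("52:54:00:00:00:02", b"SELECT 1")])
deliver()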

Single Root I/O Virtualization (SR-IOV) is a PCI Special Interest Group (PCI-SIG) standard that was developed for virtualized servers. SR-IOV goes beyond demand queuing to support a full “shared device” model. The SR-IOV specification allows an I/O device to appear as multiple physical and virtual devices to support device sharing. There is at least one physical function (PF) for each physical port on an adapter. PFs can be managed and configured like physical devices. Virtual functions (VFs) of the physical adapter can be assigned by the hypervisor to particular VMs.

Figure: Improvements in server I/O performance, measured as Hadoop* 1TB TeraSort processing time.5 The baseline configuration (Xeon 5600, HDD, 1GbE) takes more than 4 hours. Upgrading the processor cuts processing time by roughly 50%, upgrading to SSDs by a further ~80%, upgrading to 10GbE by a further ~50%, and the Intel® Distribution by a further ~40%, for a total of about 7 minutes with the complete Intel-based solution (Xeon E5-2600, SSD, 10GbE).



The operating systems in the VMs can enumerate the VFs assigned to them and thus “believe” they own the whole device, when in fact they own just a slice. The number of supported VFs varies by adapter, though 64 or more VFs per physical adapter port is typical. VFs assigned directly to VMs bypass the hypervisor for data flow. Performance generally increases because of the reduced hypervisor overhead: the hypervisor no longer needs to manage this flow itself, and it doesn’t even need to get involved in data transfer between the queue in each function and the corresponding VM. To further enhance performance, network traffic between VFs on the same PF can be processed by the adapter itself using an internal Layer 2 switch, eliminating the trip through a physical switch altogether.

SR-IOV is currently supported with Kernel Virtual Machine (KVM) in Red Hat Enterprise Linux 6 and SUSE Enterprise Linux 11 (and later). Microsoft supports SR-IOV in Windows Server 2012 Hyper-V. VMware introduced support for SR-IOV with vSphere 5.1.
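As a concrete example of the shared-device model, the short Python sketch below shows one common way VFs are enabled on a Linux/KVM host, through the kernel’s standard PCI sysfs interface. It assumes an SR-IOV-capable adapter and driver, and the interface name enp3s0f0 is a placeholder for your actual 10GbE port; consult your adapter’s documentation for the supported VF count and any driver-specific steps.

# Sketch of enabling SR-IOV virtual functions on Linux via the PCI sysfs interface.
# Assumes an SR-IOV-capable adapter whose driver exposes sriov_totalvfs/sriov_numvfs;
# the interface name "enp3s0f0" is a placeholder for your actual 10GbE port.
# Run as root; VFs can then be assigned to VMs by the hypervisor (e.g., via libvirt).

from pathlib import Path

IFACE = "enp3s0f0"
dev = Path(f"/sys/class/net/{IFACE}/device")

total_vfs = int((dev / "sriov_totalvfs").read_text())   # max VFs the adapter supports
requested = min(8, total_vfs)                           # ask for up to 8 VFs

(dev / "sriov_numvfs").write_text("0")                  # count must be 0 before changing it
(dev / "sriov_numvfs").write_text(str(requested))

print(f"{IFACE}: enabled {requested} of {total_vfs} possible virtual functions")

Once created, the VFs appear as additional PCI devices that the hypervisor can pass through to individual VMs.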

Improvements in Server I/O Performance

The latest Intel® server and workstation platforms, based on the Intel Xeon processor E5 family, provide another compelling reason to take advantage of 10GbE: they deliver substantially improved I/O performance that better realizes the advantages of high-speed Ethernet. These processors introduce three major advancements that facilitate high-bandwidth, low-latency Ethernet traffic. For the first time, the PCI Express* interface is on the processor itself, rather than on a separate I/O hub, eliminating a bottleneck and a bus hop for Ethernet data moving to and from the processor. Next, the Intel Xeon processor E5 family implements PCI Express 3.0 (PCIe3), which doubles the bandwidth per pin compared to PCI Express 2.0; PCIe3 can support four channels of 10GbE or one channel of 40GbE on a single PCIe x8 connection. Lastly, the Intel Xeon processor E5 family introduces Intel® Data Direct I/O (Intel® DDIO) technology, a radical rearchitecting of the flow of I/O data in the processor, with tremendous benefits for Ethernet traffic in terms of increased bandwidth and lower latency. One of the key innovations of Intel DDIO is that it allows Ethernet NICs and controllers to talk directly to the processor’s last-level cache, without a detour through main memory. All these new capabilities will continue in future Intel server processors.
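As a quick sanity check on the PCIe claim, the arithmetic below is a back-of-the-envelope sketch that considers only the line encoding and ignores transaction-layer overhead. It compares the usable bandwidth of a PCIe 3.0 x8 link with the 40 Gbps needed for four 10GbE ports or one 40GbE port, and with the older PCIe 2.0.

# Back-of-the-envelope check that a PCIe 3.0 x8 slot can feed 4x10GbE or 1x40GbE.
# Ignores transaction-layer packet overhead; only the line encoding is modeled.

GT_PER_S = 8.0              # PCIe 3.0 raw signaling rate per lane (gigatransfers/s)
ENCODING_3 = 128 / 130      # PCIe 3.0 line-encoding efficiency (128b/130b)
LANES = 8

pcie3_gbps = GT_PER_S * ENCODING_3 * LANES
pcie2_gbps = 5.0 * (8 / 10) * LANES          # PCIe 2.0: 5 GT/s per lane, 8b/10b encoding

print(f"PCIe 3.0 x8 usable bandwidth: ~{pcie3_gbps:.0f} Gbps per direction")
print(f"PCIe 2.0 x8 usable bandwidth: ~{pcie2_gbps:.0f} Gbps per direction")
print("Four 10GbE ports (or one 40GbE port) need 40 Gbps, which fits within PCIe 3.0 x8")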

Deployment Costs Tumbling as 10GBASE-T Goes Mainstream

The evolution of the 10GbE market can be viewed as occurring in three stages.6 These stages are closely correlated with the evolution of the physical connectors for 10GbE. Mainstream Ethernet has long run over twisted-pair copper cables terminated in RJ-45 jacks, and 10GbE would certainly have ramped much faster a decade ago if a cost-effective, low-power, backward-compatible twisted-pair option had been available then. It wasn’t: achieving 10GbE signaling speeds over the expected 100m reach of twisted-pair cable was a tremendous technical challenge. So the first mainstream adopters of 10GbE (circa 2008) were in blade servers, where the KX4 and later KR backplane interfaces could be used, effectively avoiding connectors altogether. 10GbE LAN on motherboard (LOM) for blade servers delivers substantial savings in space and component costs.

The second stage for mainstream 10GbE adoption, peaking now, correlates with the standardization and broad adoption of the SFP+ (enhanced small form-factor pluggable) transceiver. SFP+ has been adopted on Ethernet adapters and switches and supports both copper and fiber modules that interface to the SFP+ shell. Crehan Research correlates the rise of the public cloud, Web 2.0, and massively scalable datacenters with the mainstream adoption of SFP+. SFP+ supports Direct Attach (DA) copper cables up to 7m, and both short-range (300m) and long-range (10km) optical cables, all relatively expensive.

The biggest problem with SFP+ is simply that it is not backward compatible with the twisted-pair 1GbE broadly deployed throughout the datacenter. SFP+ represents a revolutionary, not evolutionary, transition from the RJ-45 jack. Enterprise customers cannot just start adding SFP+ 10GbE to an existing RJ-45 1GbE infrastructure. New switches and new cables are required. This is a big hurdle except for “green field” datacenters.

This discontinuity in connectors and cables has also had economic consequences for the cost of 10GbE switch ports, further slowing 10GbE acceptance. To support 10GbE, switch vendors had to add SFP+ receptacles that were not backward compatible with existing ports. Revolutionary market transitions generally require strong market demand to get over such hurdles, but until recently 10GbE was a relatively small part of the Ethernet market, so switch vendors naturally had to price 10GbE switches with SFP+ receptacles high enough to recover their development costs over comparatively small volumes. As a consequence, 10GbE switches have been expensive.



Crehan’s third stage of 10GbE adoption is driven by the emergence of cost-effective, low-power, backward-compatible 10GBASE-T technology: 10GbE over twisted-pair copper terminated with RJ-45 jacks. The correlated mainstream market is enterprise rack and tower. The third stage is the big one.

10GBASE-T is the key to an evolutionary transition to 10GbE in the datacenter because of its familiar, low-cost cabling and its backward compatibility with 1GbE and even 100 Megabit Ethernet. 10GBASE-T supports lengths of up to 100m over CAT-6a or CAT-7 twisted-pair cables. The vast majority of today’s datacenters have already pulled CAT-6a cable, eliminating the potentially expensive cabling issue from consideration in the move to 10GBASE-T.

10GBASE-T has been available for several years from multiple vendors. What is new is the emergence of single-chip 10GBASE-T controllers, which make it possible to build 10GBASE-T adapters, and to place 10GBASE-T controllers directly on motherboards, without fans. Fans are a point of failure that system administrators are happy to live without.

Even single-chip 10GBASE-T adapters consume a watt or two more than the SFP+ alternatives, and more power consumption is never a good thing in the datacenter. However, a cost-benefit analysis shows that the expected incremental power cost over the life of a typical datacenter is far less than the money saved from reduced cabling costs.8 Relative to SFP+ alternatives, 10GBASE-T also adds about a microsecond of latency at each transition because of the extensive digital signal processing required in a 10GBASE-T PHY (physical layer). For some specialized applications, such as high-frequency stock trading, that matters. In the vast majority of datacenter applications, however, added latency on that order has no significant impact.
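To put the power trade-off in perspective, the sketch below estimates the incremental energy cost of 10GBASE-T for one link over a multi-year service life. Every input is an assumption chosen for illustration (the roughly two-watt premium comes from the paragraph above; the energy price, PUE, and service life are placeholders), so substitute your own figures and compare the result against your per-link cabling savings.

# Back-of-the-envelope estimate of the extra power cost of 10GBASE-T vs. SFP+.
# All inputs are assumptions for illustration; substitute your own figures.

extra_watts_per_port = 2.0      # assumed 10GBASE-T power premium per port
ports_per_link = 2              # adapter port plus switch port
electricity_usd_per_kwh = 0.10  # assumed energy price
pue = 1.8                       # assumed datacenter power usage effectiveness
years_in_service = 5

kwh = extra_watts_per_port * ports_per_link * 24 * 365 * years_in_service / 1000
cost = kwh * electricity_usd_per_kwh * pue
print(f"Extra energy cost per link over {years_in_service} years: ${cost:.2f}")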

Some have been skeptical about whether 10GBASE-T could ever deliver the reliability and low bit-error rate of SFP+, with particular skepticism about whether the high demands of FCoE could be met. However, after extensive testing, Cisco announced at Cisco Live 2013 that it had successfully qualified FCoE over 10GBASE-T and supports it on its newer 10GBASE-T switches. (To simplify the qualification process, Cisco’s tests were limited to 30m or less, which the company believes covers the vast majority of usages.)

The economics of 10GbE overall are increasingly attractive. Adapter prices have dropped over the past three years, and 10GbE switch port prices have come down relative to 1GbE, delivering better performance at a lower cost. 10GBASE-T will accelerate the pace of dropping 10GbE costs by driving substantially larger volumes.

All the major switch vendors support 10GBASE-T in their newest switches, including Arista Networks, Cisco, Dell, Extreme Networks, Huawei, HP, IBM, Juniper Networks, and Oracle. Equally important, the per-port cost of 10GBASE-T switch ports is approximately the same as that of SFP+ ports for most vendors. In fact, Netgear, a vendor focused on small and medium-sized businesses, is today selling an 8-port 10GBASE-T switch at approximately $100 USD per port. So 10GBASE-T is not just for large datacenters.

The stars are aligning for a strong transition to 10GbE in the datacenter, and a strong transition to 10GBASE-T in particular. That’s why market researchers are predicting that 10GBASE-T will be the dominant connector for 10GbE within three or four years.

Figure: 10GbE physical media mix in thousands of ports, 2008 through 2017 (actuals through 2012, estimates thereafter), broken out by SFP+, KX (internal blade), and BASE-T.9



Conclusion

With four key technology and market forces driving the need for 10GbE, and with the emergence of three key enablers to ease the transition, especially the reduced costs, now is the time to adopt 10GbE technology in your datacenter.

For more information, visit www.intel.com/go/10GbE

1. IDC Server Virtualization Forecast Deliverable, November 2010.

2. Performance tests and ratings are measured using specific computer systems and/or components and reflect the approximate performance of Intel® products as measured by those tests. Any difference in system hardware or software design or configuration may affect actual performance. Buyers should consult other sources of information to evaluate the performance of systems or components they are considering purchasing. For more information on performance tests and on the performance of Intel products, visit Intel Performance Benchmark Limitations.

3. https://www-ssl.intel.com/content/www/us/en/it-management/intel-it-best-practices/converged-network-adapters-paper.html

4. http://www.intel.com/performance

5. http://www.intel.com/content/dam/www/public/us/en/documents/white-papers/big-data-apache-hadoop-technologies-for-results-whitepaper.pdf

6. http://www.principledtechnologies.com/Intel/IVB_server_upgrades_1013_v2.pdf

7. http://www.brighttalk.com/webcast/663/56547

8. Credits to Crehan Research, 2013.

9. Credits to Crehan Research, 2013.

© 2014, Intel Corporation. All rights reserved. Intel, the Intel logo, and Intel Xeon are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others.

