Presented during VMworld 2014 in San Francisco, CA by Tom Thirer, Mellanox
VMworld 2014
Mellanox VXLAN Acceleration
© 2014 Mellanox Technologies - Mellanox Confidential, Internal Use Only -
Leading Supplier of End-to-End Interconnect Solutions
Virtual Protocol Interconnect
Storage (Front/Back-End) | Server/Compute | Switch/Gateway
56G IB & FCoIB 56G InfiniBand
10/40/56GbE & FCoE 10/40/56GbE
Virtual Protocol Interconnect
ICs | Adapter Cards | Switches/Gateways | Cables/Modules | Host/Fabric Software
Comprehensive End-to-End InfiniBand and Ethernet Portfolio
Metro / WAN
ConnectX-3 Pro is The Next Generation Cloud Competitive Asset
World’s first Cloud offload interconnect solution
Provides hardware offloads for Overlay Networks – enables mobility, scalability, serviceability
Dramatically lowers CPU overhead, reduces cloud application cost and overall CAPEX and OPEX
Highest throughput (10/40GbE, 56Gb/s InfiniBand), SR-IOV, PCIe Gen3, low power
The Foundation of Cloud 2.0
More users
Mobility
Scalability
Simpler Management
Lower Application Cost
Overlay Networks

[Diagram: three servers, each hosting four VMs, attached to a physical switch. The VMs form three virtual domains (Domain 1, Domain 2, Domain 3) connected by Overlay Network protocols.]
What is it?
Overlay Networks provide a method for "creating" virtual domains
Enable large-scale multi-tenant isolation

The Challenge:
Software implementation leads to performance degradation
CPU overhead increases
Fewer virtual machines can be supported

Protocols:
VXLAN – VMware and Linux
NVGRE – Windows
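As an illustrative aside (not part of the deck): the VXLAN header these offloads parse is only 8 bytes, carrying a 24-bit VNI that identifies the tenant segment — roughly 16 million isolated domains versus 4K VLANs. A minimal Python sketch of the header layout per RFC 7348, using a hypothetical helper name:

```python
import struct

VXLAN_UDP_PORT = 4789  # IANA-assigned VXLAN destination port

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header (RFC 7348).

    Flags byte 0x08 sets the I bit (VNI is valid); the 24-bit VNI
    selects the tenant segment. Layout:
    flags (1B) + reserved (3B) + VNI (3B) + reserved (1B).
    """
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Pack VNI into the high 3 bytes of the final 32-bit word.
    return struct.pack("!B3xI", 0x08, vni << 8)

hdr = vxlan_header(5000)
assert len(hdr) == 8 and hdr[0] == 0x08
```

On the wire this header sits between an outer UDP header (destination port 4789) and the encapsulated inner Ethernet frame.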
VXLAN and NVGRE Hardware Offload
VXLAN & NVGRE: scalable multi-tenant isolation for the cloud
ConnectX-3 Pro: overlay-network hardware offload
Lowering CPU Overhead
Higher Throughput
OPEX and CAPEX Savings
VXLAN Hardware Offload

Improved CPU utilization
More VMs per server
Higher networking throughput

[Diagram: two servers compared, each running a hypervisor (+ vSwitch) with VMs. With a Mellanox ConnectX-3 Pro, stateless offload handles the VXLAN-encapsulated frames (payload + VXLAN header + CRC) in the NIC, leaving CPU headroom for more VMs. With a legacy NIC, the hypervisor processes the encapsulated frames in software and supports fewer VMs.]
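Why encapsulation hurts a legacy NIC: its checksum and segmentation engines only understand the outer headers, so the hypervisor must segment and checksum every inner frame in software. A toy Python model (not from the deck; the MTU and overhead constants are illustrative assumptions) of how many packets the CPU must build per send:

```python
MTU = 1500            # assumed outer Ethernet MTU
VXLAN_OVERHEAD = 50   # assumed outer Eth + IP + UDP + VXLAN headers, no VLAN tag

def sw_packets_per_send(payload_bytes: int, hw_offload: bool) -> int:
    """Packets the hypervisor must segment and checksum in software.

    With stateless hardware offload the NIC segments the inner frame
    and computes checksums, so software hands off one large buffer;
    a legacy NIC forces the CPU to emit every MTU-sized packet itself.
    """
    if hw_offload:
        return 1
    inner_mtu = MTU - VXLAN_OVERHEAD
    return -(-payload_bytes // inner_mtu)  # ceiling division
```

For a typical 64 KB send, that is one software handoff with offload versus dozens of per-packet operations without it — which is where the CPU savings on the next slides come from.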
VXLAN Throughput with Hardware Offload
40GbE VXLAN Throughput (Gb/s)

                                   1 VM pair   2 VM pairs   4 VM pairs   8 VM pairs   16 VM pairs
ConnectX-3 Pro VXLAN HW offload      20.51       36.25        36.11        36.21        36.12
ConnectX-3 Pro VXLAN no offload       4.85        7.62        11.39        17.62        17.04
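The gain in the chart above can be reduced to a per-point speedup with a few lines of Python over the slide's numbers:

```python
vm_pairs = [1, 2, 4, 8, 16]
hw_offload_gbps = [20.51, 36.25, 36.11, 36.21, 36.12]
no_offload_gbps = [4.85, 7.62, 11.39, 17.62, 17.04]

# Throughput ratio of the hardware-offload path over the software path
for n, hw, sw in zip(vm_pairs, hw_offload_gbps, no_offload_gbps):
    print(f"{n:2d} VM pairs: {hw / sw:.1f}x throughput with offload")
```

The offload delivers roughly a 2.1x to 4.8x throughput gain, with the hardware path saturating near 36 Gb/s from two VM pairs onward.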
VXLAN CPU Utilization (Receive)
40GbE VXLAN Receive CPU% per 1 Gb/s

                                   1 VM pair   2 VM pairs   4 VM pairs   8 VM pairs   16 VM pairs
ConnectX-3 Pro VXLAN HW offload       0.39        0.43         0.64         0.72         1.00
ConnectX-3 Pro VXLAN no offload       1.78        2.49         2.42         3.48         3.72
CAPEX and OPEX Savings

Example: 1 Gb/s traffic per VM

                     Without VXLAN Offload   With VXLAN Offload
VMs supported                17                     36
CPU utilization              61%                    26%
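The savings example can be reproduced from the two measurement charts (using the 8-VM-pair data points), assuming each VM drives 1 Gb/s of VXLAN traffic:

```python
# 8-VM-pair data points from the throughput and receive-CPU charts
data = {
    "without offload": {"gbps": 17.62, "cpu_per_gbps": 3.48},
    "with offload":    {"gbps": 36.21, "cpu_per_gbps": 0.72},
}
for name, d in data.items():
    vms = int(d["gbps"])                  # VMs supported at 1 Gb/s each
    cpu = d["gbps"] * d["cpu_per_gbps"]   # total CPU% consumed
    print(f"{name}: ~{vms} VMs at ~{cpu:.0f}% CPU")
# → without offload: ~17 VMs at ~61% CPU
# → with offload: ~36 VMs at ~26% CPU
```

The per-server VM count is capped by achievable VXLAN throughput, and the offload more than doubles it while cutting CPU utilization by more than half — the source of the CAPEX and OPEX savings claimed above.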
Thank You
[email protected]