HPE Cloud-First reference architecture guide – VMware NSX for vSphere
Contents
Introduction
Brownfield Network
Greenfield Network
Product recommendations
Appendix: Sample BOM
Resources
Architecture guide
Introduction This guide provides a reference architecture for anyone interested in deploying VMware® NSX for vSphere together with HPE Networking equipment. VMware NSX for vSphere can utilize HPE Networking infrastructure as the underlying transport network. In addition, HPE Networking equipment can integrate with VMware NSX for vSphere to dynamically provide L2 network connectivity between Virtual Machines attached to NSX logical switches and bare metal (BM) servers on the physical network.
Refer to the VMware Compatibility Guide (URL provided at the end of this document) for supported hardware platforms and software versions.
Brownfield Network Customers do not need to swap out their existing network to gain the benefits of the VMware NSX for vSphere and HPE Networking integration. The topology shown in Figure 1 will be used to describe the brownfield network:
• The existing network will be used to transport traffic between servers/hypervisors/ESXi hosts
• New HPE FlexFabric 5930/5940 switches can be added when there is a requirement to provide L2 network connectivity between Virtual Machines attached to NSX logical switches and devices on the physical network
• Virtual Extensible Local Area Network (VXLAN) tunnels will be dynamically created between the HPE FlexFabric 5930/5940 switches functioning as hardware VXLAN Tunnel End Points (VTEPs) and hypervisors/ESXi hosts/vDS (software VTEPs)
• HPE FlexFabric 5930/5940 switches communicate with the NSX controllers via Open vSwitch Database Management Protocol (OVSDB) to share local/remote MAC addresses and create VXLAN tunnels
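Both the hardware VTEPs and the software VTEPs encapsulate Ethernet frames with the 8-byte VXLAN header defined in RFC 7348, carried over UDP (the IANA-assigned destination port is 4789, though a deployment may use a different configured port). As an illustrative sketch only (not HPE or VMware code; the VNI value 5001 is an arbitrary example), the header can be built and parsed like this:

```python
import struct

# VXLAN header (RFC 7348): 8 bits of flags (the I flag, 0x08, marks a valid
# VNI), 24 reserved bits, a 24-bit VNI, then 8 reserved bits: 8 bytes total.
VXLAN_FLAG_VALID_VNI = 0x08

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header for a given VXLAN Network Identifier."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    return struct.pack("!II", VXLAN_FLAG_VALID_VNI << 24, vni << 8)

def vni_from_header(header: bytes) -> int:
    """Extract the VNI from a received VXLAN header."""
    word1, word2 = struct.unpack("!II", header)
    if not (word1 >> 24) & VXLAN_FLAG_VALID_VNI:
        raise ValueError("I flag not set; no valid VNI present")
    return word2 >> 8
```

The hardware VTEP performs this encapsulation in the switch ASIC; the NSX vDS performs it in the hypervisor kernel.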
Figure 1. Brownfield network topology
[Figure 1 shows the existing underlay network with new HPE FlexFabric 5930/5940 switches added as a hardware VTEP: a Bare Metal Server (101.1.0.10/24) attaches to the hardware VTEP, while VM1 (101.1.0.11/24) and VM2 (101.1.0.12/24) run on hypervisors acting as software VTEPs; VXLAN overlay tunnels span the underlay, an OVSDB session runs between the hardware VTEP and the NSX Controller, and the NSX Manager and vCenter provide management.]
Figure 2 provides a visual of the control plane interactions between the VTEPs and the NSX controllers when a BM server and a VM on the same subnet communicate.
Figure 2. Control plane between VTEPs and NSX controllers
Figure 3 provides a visual of the data plane interactions between the VTEPs when a BM server and a VM on the same subnet communicate.
Figure 3. BM and VM traffic across VXLAN data plane between VTEPs
[Figures 2 and 3 use the same topology as Figure 1 and annotate the following steps:
1. The BM server sends a broadcast ARP request for the VM1 IP
2. The hardware VTEP sends the BM MAC learnt on its switchport to the NSX controllers
3. The NSX vDS (software VTEP) shares the local VM MAC with the NSX controllers
4. The hardware VTEP learns the remote VM MAC from the NSX controllers
5. The NSX vDS learns the remote BM MAC from the NSX controllers
6. VM1 sends the ARP response back to the BM server via the VXLAN tunnel between the NSX vDS (software VTEP) and the hardware VTEP
7. The hardware VTEP forwards the unicast ARP response from VM1 to the BM server]
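The control plane steps above can be sketched as a toy model: each VTEP publishes locally learnt MACs to a central controller, which redistributes them so every other VTEP can map a remote MAC to a tunnel endpoint. This is an illustrative simulation only, not the NSX or OVSDB implementation; all class names, IPs, and MACs are invented:

```python
class Controller:
    """Toy central MAC table mapping MAC -> (VTEP IP, VNI)."""
    def __init__(self):
        self.mac_table = {}
        self.vteps = []

    def register(self, vtep):
        self.vteps.append(vtep)

    def publish(self, mac, vtep_ip, vni):
        # Steps 2/3: a VTEP reports a locally learnt MAC.
        self.mac_table[mac] = (vtep_ip, vni)
        # Steps 4/5: push the mapping to every other VTEP.
        for vtep in self.vteps:
            if vtep.ip != vtep_ip:
                vtep.remote_macs[mac] = (vtep_ip, vni)

class Vtep:
    def __init__(self, ip, controller):
        self.ip = ip
        self.remote_macs = {}
        self.controller = controller
        controller.register(self)

    def learn_local(self, mac, vni):
        # Step 1/2: a MAC learnt on a local port is shared with the controller.
        self.controller.publish(mac, self.ip, vni)

ctl = Controller()
hw_vtep = Vtep("10.0.0.1", ctl)   # hardware VTEP (5930/5940)
sw_vtep = Vtep("10.0.0.2", ctl)   # software VTEP (NSX vDS)

hw_vtep.learn_local("aa:aa:aa:aa:aa:10", vni=5001)  # BM server MAC
sw_vtep.learn_local("aa:aa:aa:aa:aa:11", vni=5001)  # VM1 MAC
```

After both publish calls, each VTEP holds the other's MAC with the tunnel endpoint needed to forward the unicast ARP response in steps 6 and 7.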
Greenfield Network The topology shown in Figure 4 will be used to describe the new Greenfield Network:
• HPE FlexFabric switches should be deployed in a Clos leaf and spine physical topology to provide a distributed, scale-out, high-performing, resilient Ethernet fabric with all leaf switch network ports having equal latency
• 40G links should be deployed between leaf and spine switches
• Leaf switches can provide 10G connectivity to hypervisors/ESXi hosts or other devices, e.g., WAN routers and bare metal servers
• The Greenfield Network will be used to transport traffic between hypervisors/ESXi hosts
• HPE FlexFabric 5930/5940 switches functioning as hardware VTEPs can be added when there is a requirement to provide L2 network connectivity between Virtual Machines attached to NSX logical switches and devices on the physical network
• HPE FlexFabric 5930/5940 switches communicate with the NSX controller via Open vSwitch Database Management Protocol (OVSDB) to share local/remote MAC addresses and create VXLAN tunnels
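OVSDB (RFC 7047) is JSON-RPC over TCP (IANA port 6640). As a hedged illustration of the wire format only, not the NSX or Comware implementation, the following sketch serializes two requests a hardware VTEP and controller might exchange: an echo keepalive and a monitor subscription against the hardware_vtep schema (the monitor parameters here are simplified):

```python
import json

def ovsdb_request(method: str, params: list, msg_id: int) -> bytes:
    """Serialize an OVSDB JSON-RPC request (RFC 7047) for the wire."""
    return json.dumps({"method": method, "params": params, "id": msg_id}).encode()

# Keepalive exchanged on an idle OVSDB session:
echo = ovsdb_request("echo", [], 0)

# Subscribe to changes in a hardware_vtep schema table (illustrative params;
# a real monitor request would select specific columns and operations):
monitor = ovsdb_request("monitor", ["hardware_vtep", None, {"Ucast_Macs_Remote": {}}], 1)
```

In the integration described here, the 5930/5940 acts as the OVSDB server and the NSX controllers connect as clients to read and write the hardware_vtep tables.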
Figure 4. Greenfield Network topology – Part 1 (Physical network connectivity)
As described in the VMware NSX-V design guide (URL provided at the end of this document), logical separation and grouping of racks to provide specific functions such as compute, management, and edge services should be implemented as shown in Figure 5.
[Figure 4 shows the Greenfield underlay network as a leaf and spine fabric: leaf layer switches connect the hypervisors/ESXi hosts, the Bare Metal Server (101.1.0.10/24), and the new HPE FlexFabric hardware VTEP; spine layer switches interconnect the leaves; VXLAN overlay tunnels carry traffic for VM1 (101.1.0.11/24) and VM2 (101.1.0.12/24); the hardware VTEP runs an OVSDB session to the NSX Controller, with the NSX Manager and vCenter providing management.]
Figure 5. VMware recommended deployment for various NSX components
The topology shown in Figure 6 is used to describe the remaining network configuration:
• The underlay network design should be kept as simple as possible; this speeds up troubleshooting during outages, as most of the complexity and services are moved into the servers
• An L3 IP network fabric is recommended, as NSX for vSphere allows hypervisors/ESXi hosts to be on different subnets; this also removes the need for Spanning Tree
• Traceroute and ping tests between directly connected switches are possible, simplifying underlay network troubleshooting
• Routed links between leaf and spine switches should utilize unique /30 subnets
• Hypervisors/ESXi hosts/servers should utilize their directly connected leaf switch as their default gateway
• VMs and bare metal servers will utilize the NSX logical routers as their default gateway (refer to VMware NSX documentation to confirm whether DLR is supported with hardware VTEPs)
• Customers can start off with a two-spine deployment, monitor link utilization, then add more spine switches as required to increase bandwidth and lower oversubscription rates
• By default, Comware supports eight-way ECMP, i.e., up to eight spine switches (this can be modified to a higher value if necessary)
• Either OSPF or BGP can be used as the routing protocol between leaf/spine switches and between NSX edge gateways and WAN routers
• Most enterprise customers should be able to utilize OSPF in their deployments (the final decision depends on the size of the fabric, the comfort level of the operations staff, etc.)
• Refer to VMware NSX documentation to determine the maximum hosts/clusters supported; this determines how large the underlay network needs to grow
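Two of the planning points above, oversubscription and /30 link addressing, are simple to quantify. The sketch below is illustrative only (the port counts and the 192.0.2.0/24 documentation supernet are assumptions, not recommendations from this guide):

```python
import ipaddress

def oversubscription(host_ports: int, host_gbps: int,
                     uplinks: int, uplink_gbps: int) -> float:
    """Ratio of southbound host bandwidth to northbound fabric bandwidth."""
    return (host_ports * host_gbps) / (uplinks * uplink_gbps)

# A leaf with 48 x 10G host ports and 4 x 40G uplinks (one per spine in a
# four-spine fabric) is 3:1 oversubscribed; adding spines lowers the ratio.
ratio = oversubscription(48, 10, 4, 40)

# Carve unique /30 subnets for the routed leaf-spine links out of one supernet:
fabric = ipaddress.ip_network("192.0.2.0/24")
p2p_links = list(fabric.subnets(new_prefix=30))  # 64 point-to-point subnets
```

Each /30 holds exactly two usable host addresses, one per end of a routed leaf-spine link, which is why unique /30s are the conventional choice here.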
[Figure 5 shows racks grouped by function under the spine: Compute Racks, Management Racks (NSX Controller Cluster, NSX Manager, Cloud Management System, Storage), and Edge Racks (L2 and L3 Gateways, Service Node Cluster) with edge leaf switches connecting to the WAN and Internet; L3 runs down to the leaf switches, with L2 within each rack.]
Figure 6. Greenfield Network topology – Part 2 (Network configuration)
In addition to the underlay network mentioned above, an Out Of Band (OOB) network using separate OOB switches is recommended as shown in Figure 7 to provide HPE Integrated Lights-Out (iLO) connectivity if HPE servers are used. These OOB switches also provide remote console access to the switches if in-band management network connectivity fails.
Figure 7. Greenfield Network topology – Part 3 (OOB)
Depending on the storage array selected, a separate Fibre Channel (FC) Storage Area Network (SAN) might be required for storage traffic in addition to the Ethernet network fabric, as shown in Figure 8.
Refer to HPE SPOCK (Single Point of Connectivity Knowledge) for a better understanding of the available SAN deployment options.
[Figure 6 shows the fabric running OSPF Area 0 with unique /30 routed links between the leaf and spine layer switches; each leaf switch is the default gateway for its directly connected ESXi hosts and bare metal servers, and WAN routers attach to the fabric alongside the hardware VTEP.]
[Figure 7 adds OOB switches that connect to the iLO ports of the servers and the M0/0/0 management ports of the leaf, spine, and WAN devices.]
Figure 8. Greenfield Network topology – Part 4 (SAN)
Product recommendations The following HPE products, as shown in Figure 9, are recommended for this reference architecture.
Figure 9. Product recommendations
[Figure 8 shows the ESXi hosts and bare metal servers attached to the leaf and spine Ethernet fabric, with redundant SAN A and SAN B fabrics connecting the servers to the storage arrays.]
[Figure 9 shows the recommended products in place: leaf layer switches (5900/5930), spine layer switches (7900/12900E), OOB switches (5700), SAN A and SAN B with storage arrays, the hardware VTEP, and the iLO and M0/0/0 management connections.]
Spine Switch The HPE FlexFabric 12900E Switch Series is the next-generation modular data center core switch, while the HPE FlexFabric 7900 Switch Series is the next-generation compact modular data center core switch; both are designed to support virtualized data centers and the evolution needs of private and public cloud deployments.
Either the 7900 or the 12900E Switch Series can be deployed, depending on the port density, throughput, connectivity, and MAC/ARP requirements of the access switches.
Refer to the product datasheets to identify port density, throughput, connectivity, and MAC/ARP capabilities; a summary follows.
Table 1. HPE FF 12900E FX table scalability and support
Feature 12900E FX module
MAC Address Up to 256K *
ARP (host) 16K/128K (Uni mode) *
Link aggregation: ports/group 64/1024
IPv4 LPM 32K
IPv6 LPM 8K **
IPv4/IPv6 MC 8K/1K **
Ingress/Egress ACL 18K/9K **
VXLAN Yes
Native FC No
DCB Yes
SPB Yes
* Shared resource - up to 256K table entries ** Shared resource of 24K
Table 2. HPE FF 7900 FX table scalability and support
Feature 7900 FX module
MAC Address Up to 256K *
ARP (host) 16K/128K (Uni mode) *
Link aggregation: ports/group 64/1024
IPv4 LPM 32K
IPv6 LPM 8K **
IPv4/IPv6 MC 8K/1K **
Ingress/Egress ACL 18K/9K **
VXLAN Yes
Native FC No
DCB Yes
SPB Yes
* Shared resource - up to 256K table entries ** Shared resource of 24K
The following table provides an idea of the maximum number of access switches and the total port density possible when either 7900s or 12900Es are deployed as spine switches.
Table 3. Mid and large size network switch recommendations
Greenfield Network design #1 (mid size) Greenfield Network design #2 (large size)
Spine switches HPE 7900 HPE 12900E
Leaf switches HPE 5930/5940 HPE 5930/5940
Maximum leaf switch count 24 to 48 leaf switches 72 to 144 leaf switches
Maximum port density on leaf switches 1152 to 2304 ports on leaf switches 3456 to 6912 ports on leaf switches
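The port density figures in Table 3 follow from a simple relationship: each leaf consumes one 40G port on every spine, so the spine 40G port count caps the leaf count, and access density scales with the number of leaves. A minimal sketch, assuming 48 access ports per 5930/5940 leaf (the spine port counts below are taken from the leaf counts in Table 3):

```python
def fabric_capacity(spine_40g_ports: int, access_ports_per_leaf: int = 48):
    """Return (max leaf switches, total access ports) for a two-tier Clos
    fabric where every leaf has one 40G uplink to every spine."""
    max_leaves = spine_40g_ports
    return max_leaves, max_leaves * access_ports_per_leaf
```

For example, 24 spine-facing leaf positions yield 1,152 access ports and 144 positions yield 6,912, matching the mid-size and large-size columns of Table 3.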
Leaf Switch The HPE FlexFabric 5940 Switch Series is a family of high-performance, low-latency 10GbE and 40GbE top-of-rack (ToR) data center switches. The HPE FlexFabric 5940 Switch Series is available as a 1RU switch in three configurations: 48 x 10GbE ports with six 40G ports, 48 x 10GbE ports with six 100G ports, or 32 x 40GbE ports. The HPE FlexFabric 5930 Switch Series is a family of high-density, ultra-low-latency, spine and top-of-rack (ToR) switches.
Both the 5930 and 5940 support hardware VTEP functionality. Refer to the product datasheets to identify port density, throughput, connectivity, and MAC/ARP capabilities; a summary follows.
Table 4. HPE FF 5930 table scalability and support
Feature Support
MAC Address 288K
ARP (host) 120K
Link aggregation: ports/group 32/512
IPv4 LPM 120K
IPv6 LPM 64K
IPv4/IPv6 MC 4K/4K
Ingress/Egress ACL 3K/1K
VXLAN Yes
Native FC Yes – CP modules
DCB Yes
SPB Yes
Table 5. HPE FF 5940 table scalability and support
Feature 5940
MAC Address Up to 288K
ARP (host) 16K/120K
Link aggregation: ports/group 32/1024
IPv4 LPM 128K
IPv6 LPM 8K/64K
IPv4/IPv6 MC 4K/4K
Ingress/Egress ACL 10K
VXLAN Yes
Native FC No
DCB Yes
SPB Yes
OOB Switch The HPE FlexFabric 5700 Switch Series is a family of cost-effective, high-density, ultra-low-latency, Light Layer 3, top-of-rack (ToR) switches.
SAN Switch The HPE FlexFabric 5900CP Switch Series provides a converged, top-of-rack, data center switch architecture that offers a wire-once solution for FCoE converged environments. 5930M switches now also have the capability to add a module to support FC/FCoE.
Refer to SPOCK for SAN details.
Appendix: Sample BOM This section provides sample Bills of Materials (BOMs) which should prove useful during the planning stage; Figure 9 can be referenced if required.
Spine Switch (12904E) Table 6. Sample 12904E BOM
SKU Description Quantity for 2 spine switches Comments
JH262A HPE 12904E Switch Chassis 2
JH108A HPE 12900E 2400W AC PSU 8
JH108A ABA Included: Power Cord—U.S. localization 8
JH265A HPE 12904E Fan Tray Assy 4
JH107A HPE 12900E LPU Adapter 8
JC665A HPE X421 Chassis Universal 4-post RM Kit 2
JH264A HPE 12904E 2.5Tbps Type F Fabric Mod 12
JH263A HPE 12904E Main Processing Unit 4
JH045A HPE 12900 36p 40GbE QSFP+ FX Mod 8
JG661A HPE X140 40G QSFP+ LC LR4 SM XCVR 288
Spine Switch (7904) Table 7. Sample 7904 BOM
SKU Description Quantity for 2 spine switches Comments
JG682A HPE 7904 Switch Chassis 2
JC665A HPE X421 Chassis Universal 4-post RM Kit 2
JG840A HPE 7900 1800w AC PSU 4
JG840A ABA Included: Power Cord - U.S. localization 4
JG839A HP FF 7904 B-F Fan Tray 4
JG683B HPE 7900 12p 40GbE QSFP+ FX Mod 8
JG661A HPE X140 40G QSFP+ LC LR4 SM XCVR 96
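The transceiver quantities in Tables 6 and 7 can be cross-checked from the module counts, assuming the eight fabric modules in each BOM are split evenly across the two chassis and every 40G port is populated with an X140 optic:

```python
def spine_xcvr_count(spine_switches: int, modules_per_spine: int,
                     ports_per_module: int) -> int:
    """Optics needed to populate every 40G port across all spine chassis."""
    return spine_switches * modules_per_spine * ports_per_module

# 12904E BOM: 4 x JH045A (36p 40GbE) per chassis across 2 chassis -> 288 optics
xcvr_12904e = spine_xcvr_count(2, 4, 36)

# 7904 BOM: 4 x JG683B (12p 40GbE) per chassis across 2 chassis -> 96 optics
xcvr_7904 = spine_xcvr_count(2, 4, 12)
```

A real order would subtract any ports left empty for future growth; the sketch simply shows where the 288 and 96 figures come from.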
Leaf Switch (5930) Table 8. Sample 5930 BOM
SKU Description Quantity For 1 Leaf Switch Comments
JH178A HPE 5930 2-slot 2QSFP+ Switch 1
JC680A HPE 58x0AF 650W AC Power Supply 2
JC680A ABA Included: Power Cord—U.S. localization 2
JG552A HPE X711 Frt(prt) Bck(pwr) HV Fan Tray 2
JH182A HPE 5930 24p 10GBASE-T/2p MCsc QSFP+ Mod 2
JG661A HPE X140 40G QSFP+ LC LR4 SM XCVR 2 2 x 40G to each spine switch
Leaf Switch (5940) Table 9. Sample 5940 BOM
SKU Description Quantity For 1 Leaf Switch Comments
JH394A HPE FF 5940 48XGT 6QSFP+ Switch 1
JG552A HPE X711 Frt(prt) Bck(pwr) HV Fan Tray 2
JC680A HPE 58x0AF 650W AC Power Supply 2
JC680A ABA Included: Power Cord—U.S. localization 2
JG661A HPE X140 40G QSFP+ LC LR4 SM XCVR 2 2 x 40G to each spine switch
© Copyright 2016 Hewlett Packard Enterprise Development LP. The information contained herein is subject to change without notice. The only warranties for Hewlett Packard Enterprise products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. Hewlett Packard Enterprise shall not be liable for technical or editorial errors or omissions contained herein.
VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions.
4AA6-7409ENW, October 2016
Resources HPE FlexFabric
VMware Compatibility Guide
VMware NSX for vSphere (NSX-V) Network Virtualization Design Guide
HPE SPOCK
VMware NSXv & 5930 Integration Demo on the HPE YouTube Channel
Learn more at HPE.com/networking