#vmworld
CNET1474BU
NSX and Cisco ACI: Running Your SDDC on a Cisco Underlay
Paul Mancuso, VMware, Inc.
#CNET1474BU
VMworld 2019 Content: Not for publication or distribution
©2019 VMware, Inc.
Disclaimer
This presentation may contain product features or functionality that are currently under development.
This overview of new technology represents no commitment from VMware to deliver these features in any generally available product.
Features are subject to change, and must not be included in contracts, purchase orders, or sales agreements of any kind.
Technical feasibility and market demand will affect final delivery.
Pricing and packaging for any new features, functionality, or technology discussed or presented have not been determined.
The information in this presentation is for informational purposes only and may not be incorporated into any contract. There is no commitment or obligation to deliver any items presented herein.
VMware NSX Data Center has proven its ability for deployment on any switch fabric. Customers have asked for simplification of application deployment on a Cisco infrastructure. VMware developed architectural guides for deploying an NSX Data Center platform on Cisco's switching fabrics, including a Cisco ACI underlay. In this session, you will learn best practices for implementing a software-defined data center on any Cisco underlay. You will find out what is required to set up Cisco ACI and the steps to normalize ACI's fabric for an NSX Data Center for VMware vSphere or VMware NSX-T Data Center deployment.
Session Objective
Virtual Cloud Network
Tied Together—Everywhere: data centers, branches, edge/IoT, and telco/NFV sites connected on vSphere by the Virtual Cloud Network.
• vRNI: clear visibility
• NSX Intelligence: deep insight
Agenda
• NSX over Any Underlay: the role of physical and virtual network infrastructure
• Design Primer: NSX Data Center and Cisco ACI overview and terms
• NSX Data Center on an ACI Infrastructure: design guide discussion
• Operations: how a good design reduces the pitfalls
• Summary
Extract Simplicity vs. Abstracting Complexity
VMware NSX over Any Underlay: The role of physical and virtual network infrastructure
Physical Network Infrastructure: Optimized for price, throughput, and latency; built to meet the needs of specific infrastructure environments (DC, Campus, Branch).
Virtual Network Infrastructure: Optimized for application scale, IT operations, and efficiency; built to meet the needs of applications and data across the end-to-end IT delivery model (DC, Cloud, Edge).
NSX Data Center over Any Underlay
NSX and Cisco underlay interoperability.
NSX (security, overlay, services) provides:
• A consistent API for networking and security across clouds
• Operation over any underlay
The switch fabric (here, an ACI switch fabric) provides:
• Fabric management
• A programmable physical network
Built-in vs. Bolt-on
NSX is the only native networking and security platform for ESXi (ESXi w/NSX: service insertion, distributed firewall, networking; vCenter supported).
• High-speed, scalable security platform
• Context-aware and Layer 7 firewall capabilities
• Supports and extends the life of the switch fabric
• Supported management infrastructure
Stable, high-speed network infrastructure + consistent networking and security built in = a ubiquitous policy platform across data center and cloud, covering VM and bare-metal workloads.
Benefits: high performance, operational simplicity, native service platform, agile network deployment.
Intrinsic Security
Building the de facto firewall used within the data center:
• Unique, ubiquitous enforcement
• Distributed for cloud scale
• Advanced security features
• Zone firewalling requirements and distributed firewalling requirements
• Analytics and visualization
A unified management plane spans data center, branch, VMC on AWS, and cloud:
• Real-time visibility and net-sec analytics
• Zone firewalling: Layer 4-7 edge appliance, URL classification (NEW)
• Micro-segmentation: Layer 4-7, identity firewalling, URL whitelisting
• Endpoint protection: VMs, bare metal, containers, and public clouds (AWS, Azure)
Design Primer: NSX-T Architecture Overview and Value
NSX-T Data Center Components
Management/control plane (NSX Management Cluster):
• Converged management and control plane cluster
• 3-node cluster for scale and high availability
• UI/API for interacting with users, automation, and CMP platforms (GUI/REST/CMP consumption)
• Validates and stores desired configuration
• Maintains and propagates dynamic state
• Integrates with the Cloud Service Manager, NSX Container Plug-in, and vCenter(s)
Distributed data plane:
• Hosts workloads (VMs, containers) and services on ESXi hosts, KVM hosts, bare-metal servers, NSX Edge, and the NSX Cloud Gateway (Windows and Linux VMs in private and public cloud), each running NSX with an N-VDS
• Implements distributed routing and firewalling
• Connects to the physical network
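Since the converged 3-node cluster carries both management and control planes, a health probe should gate on both. A minimal sketch of parsing the NSX-T Manager cluster-status response; the field names are assumed from the NSX-T REST API (`GET /api/v1/cluster/status`) and should be verified against your version's API guide:

```python
import json

# Hypothetical sample of an NSX-T Manager cluster-status response body
# (field names assumed; verify against your NSX-T API documentation).
SAMPLE_STATUS = json.loads("""
{
  "mgmt_cluster_status":    {"status": "STABLE"},
  "control_cluster_status": {"status": "STABLE"}
}
""")

def cluster_is_stable(status_doc, required_planes=("mgmt_cluster_status",
                                                   "control_cluster_status")):
    """Return True only if every required plane reports STABLE.

    NSX-T converges management and control onto one 3-node cluster,
    so the check covers both planes, not just one.
    """
    return all(
        status_doc.get(plane, {}).get("status") == "STABLE"
        for plane in required_planes
    )

ok = cluster_is_stable(SAMPLE_STATUS)
```

In practice the document would come from an authenticated GET against the NSX Manager VIP; the parse logic is the same either way.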
Networking and Security for the Multi-Cloud Era
The industry's first and only network and security platform running today where your apps are: VM, container, physical, private, public — spanning bare metal, VMware Cloud, public cloud, Outposts, telco cloud, and physical switching.
Network infrastructure as code. Operationally simple and consistent. Intrinsic security on a cloud-scale platform.
NSX Data Center Deployment on Standard Cisco Infrastructure: Nexus NX-OS switches and Cisco UCS
NSX Data Center Design Overview
Standard NSX data center infrastructure topology (prototypical design): compute clusters, edge cluster(s), and a management cluster attached to the physical network. The NSX Manager cluster and CCP live in the management cluster on a management VLAN; NSX Edge nodes connect over a transit VLAN; application VMs (web, app, database; VM1-VM6) attach to tiered logical switches within a transport zone (transport subnet /22).
NSX over NX-OS: Overview
NX-OS fabric infrastructure:
• Supports attachment of hosts: define the VLANs, switch interfaces, ACLs, and policies in use
• Infrastructure VLANs carry the VMkernel traffic types: Mgmt, vMotion, Storage, and Transport
• Border leaves terminate the transit and peering VLANs for NSX Edge routing
NSX overlay:
• Compute and edge clusters host the tiered logical switches (Web LS, App LS, DB LS) and their VMs (VM1-VM6), decoupled from the fabric
NSX Data Center Infrastructure Independence
The NSX platform runs unchanged over traditional data center fabrics (L2/L3 pods over an L3 core) and proprietary data center fabrics such as Cisco ACI, with any L2/L3 data center interconnect between pods.
NSX Data Center Design on Cisco L2 and L3 Topologies
Pod components can be any mix of Nexus 9k / 7k / 6k / 5k / 2k. In each pod, 95xx aggregation switches sit above 93xx leaves with UCS B-Series compute clusters (Compute Cluster A and B), connected to an L3 core.
• L2 topology: infrastructure VLAN IDs 100, 101, and 102 span both pods; VXLAN VLAN ID 103 (transport zone scope) extends across all pods/clusters.
• L3 topology: infrastructure VLANs are local to each pod (100-102 in Pod A, 200-202 in Pod B); only the transport zone (VXLAN VLAN ID 103) extends across all pods/clusters.
Cisco DC Topologies – VXLAN
Spine-leaf can be 9xxx, 7xxx, 6xxx, or 56xx. Compute clusters A and B (UCS B-Series) attach to leaves; the management/edge cluster attaches to border leaves toward the L3 spine, DC core, and Internet/DMZ. Cisco Prime or DCNM may also provide underlay and VXLAN management.
VLANs and IP subnets are defined at each ToR; VLAN IDs 100-102 are local in scope, while VXLAN VLAN ID 103 (transport zone scope) extends across all pods/clusters.

SVI Interface | VLAN ID | IP Subnet
Management    | 100     | 10.100.100.x/24
vMotion       | 101     | 10.101.101.x/24
Storage       | 102     | 10.102.102.x/24
VXLAN         | 103     | 10.103.103.x/24
NSX Data Center and UCS Connectivity
vSphere host and UCS interconnectivity:
UCS Fabric Interconnects
• Run in End-Host mode
• vPC connectivity to Nexus switching
vSphere compute connectivity
• UCS vNICs can be shared or dedicated
• vSphere dvUplinks equal the number of vmnics
• Use multiple VTEPs with SRC_ID teaming
• 1:1 mapping of VTEPs to UCS vNICs
vSphere edge connectivity
• Edge cluster preferably UCS C-Series
• Separate straight-through connections that bypass the FIs, landing on edge leaves (93xx under 95xx) at the L2/L3 boundary toward the DC core and Internet/DMZ
• More to follow for UCS B-Series
Cisco UCS Connectivity: vDS Design, Uplink, and Traffic Flow
Recommended UCS B-Series setup: blades connect through 2204 FEX pairs to 6248 (A/B) Fabric Interconnects, uplinked to Nexus 93xx in NX-OS mode; UCS vNICs 1-4 map to ESXi VMNIC 0-3.
VDS-1 carries MGMT, vMotion, NFS, VXLAN, and bridging:

Traffic Type          | Teaming Mode
VMkernel VXLAN VTEP-1 | SRC_ID
VMkernel VXLAN VTEP-2 | SRC_ID
VMkernel vMotion      | LBT
VMkernel Mgmt         | LBT
VMkernel IP Storage   | Explicit Failover

VDS-2 carries routing: routing VLAN portgroups 10 and 20 plus the bridging portgroup, each with SRC_ID teaming.
UCS vNIC Connectivity for NSX Data Center
Three UCS service profiles:
UCS Management Service Profile Template
• 2 UCS vNICs (optionally 4)
• All traffic east/west (optionally, management routed outside the fabric)
• VLAN pinning for all VLANs to vPC ports (separates edge VLANs)
UCS Compute Service Profile Template
• 2 UCS vNICs
• All traffic east/west
• VLAN pinning for all VLANs to vPC ports
Recommended UCS B-Series setup: blades connect through 23xx FEX pairs to 63xx (A/B) Fabric Interconnects, uplinked to Nexus 93xx in NX-OS mode.
vDS dvPortgroups for compute nodes (NSX Data Center for vSphere) on VDS or N-VDS-1, carrying MGMT, vMotion, vSAN, and NSX overlay:

Traffic Type  | Teaming Mode
Overlay TEP-1 | SRC_ID
Overlay TEP-2 | SRC_ID
vMotion       | LBT
Mgmt          | LBT
IP Storage    | Failover

Uplink models: a 2-vDS model (vNIC 1/2 on VMNIC0/1 for ESXi infrastructure, vNIC 3/4 on VMNIC2/3 for NVDS-1) or 1 vDS + 1 N-VDS.
Cisco UCS Network Uplink Recommendations
UCS B-Series: use 'soft pinning' for Layer 2 uplinks (source: Cisco UCS Network Management Guide 4.0, chapter "Upstream Soft Pinning, Layer 2 Upstream").
VLAN pinning for edge node VLANs: two sets of VLANs on separate port channels (PC and vPC) from the UCS Fabric Interconnects (63xx A/B) and UCS B-Series chassis FEX (23xx) up to Nexus 93xx leaves in NX-OS or ACI mode (no leaf channel in ACI mode):
• vSphere/NSX infrastructure: east-west VLANs on the virtual port channel
• NSX Edge nodes: north-south VLANs on a dedicated port channel
Deterministic traffic:
• Avoid complex traffic patterns for routed traffic
• Easy alignment of adjacencies
NSX Edge Node Connectivity
UCS B-Series reference configuration (UCS B-Series: 4 vNICs; UCS C-Series: 4 NICs):
NSX Edge node
• 4-vNIC design
• One-to-one alignment of edge node vNICs with vSphere dvPortgroups (Mgmt, NVDS-1/2/3)
Soft pinning and UCS vNIC profile
• East/west vNICs: Mgmt (failover, active/standby) and Transport (SRC_ID, active/active)
• Edge node routing vNICs (one per FI): active/unused per side (Ext-Pg-FI-A, Ext-Pg-FI-B portgroups)
vSphere dvPortgroups
• VLAN tagged for external alignment
Edge node vNICs
• No VLAN tag required
Optional: one or two vSphere VDS. ESXi edge cluster hosts connect through 23xx FEX pairs to the 63xx (A/B) Fabric Interconnects.
NSX Data Center on a Cisco ACI Underlay: Design and deployment discussion
ACI Fabric Object Alignment
ACI infrastructure terminology:
Switch and leaf profiles
• Interface profile: interface selector, port range, interface policy group
AEP (Attachable Entity Profile)
• Selectors for attachment points: ports, PC, vPC
• Associated using the interface policy group object
• Associated to domain(s), which link the VLAN range
Domains (physical, L2, L3)
• Specify how devices are connected to the fabric
• Virtual, bare metal, external L2, or external L3
VLAN pools
• Identify the VLAN IDs used for encapsulation between the ACI fabric and attached devices
• Pools are associated to domains
Object chain: leaf ports + policies → AEP → domains (physical and external) → VLAN pools (pools 1, 2, 3).
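These objects are typically created through the APIC REST API, which first requires a session token. A minimal sketch of the login request that precedes the object POSTs; the hostname and credentials are placeholders, while the `aaaLogin` endpoint and `aaaUser` payload shape come from the APIC REST API:

```python
import json

def apic_login_request(apic_host, username, password):
    """Build the APIC REST login request (POST /api/aaaLogin.json).

    Returns the URL and JSON body. Sending it (e.g. with requests.post)
    yields a session token used for the subsequent object POSTs that
    create the pools, domains, and AEP described above.
    """
    url = f"https://{apic_host}/api/aaaLogin.json"
    body = {"aaaUser": {"attributes": {"name": username, "pwd": password}}}
    return url, json.dumps(body)

# Placeholder host/credentials for illustration only.
url, body = apic_login_request("apic.example.com", "admin", "secret")
```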
ACI Tenant Overview
Terminology: application profile, network, and external domain objects.
Tenant X contains an L3/VRF with bridge domains BD Subnet1, BD Subnet2, and BD Subnet3, backing the Web, App, and DB EPGs. An L3Out (public) with its L3Out EPG provides external connectivity. Each tier pair is joined by a provide/consume contract: the Web Contract, App Contract, and DB Contract. The common tenant provides shared infrastructure services (DNS, syslog, AD, etc.) that Tenant X consumes.
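A provide/consume contract like the ones above is itself just an APIC object. A minimal sketch of its JSON payload; the class names (`vzBrCP`, `vzSubj`, `vzRsSubjFiltAtt`) are assumed from the ACI management information model, and the filter name is purely illustrative:

```python
def build_contract(name, filter_name):
    """Sketch of an APIC JSON payload for a contract (vzBrCP) with one
    subject (vzSubj) referencing a filter.

    This is the object an EPG provides (fvRsProv) or consumes
    (fvRsCons); class/attribute names should be verified against
    your APIC version.
    """
    return {
        "vzBrCP": {
            "attributes": {"name": name},
            "children": [{
                "vzSubj": {
                    "attributes": {"name": f"{name}-subj"},
                    "children": [{
                        "vzRsSubjFiltAtt": {
                            "attributes": {"tnVzFilterName": filter_name}
                        }
                    }],
                }
            }],
        }
    }

# Illustrative names only -- e.g. the Web tier's contract.
web_contract = build_contract("Web-Contract", "http-filter")
```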
Streamline ACI Ops – NSX Data Center on an ACI Underlay
NSX Data Center on any underlay = a single set of fabric policies + a single ACI tenant policy.
Without NSX, an ACI deployment typically carries multiple fabric policies of varying complexity — VMM domains, physical/external domains, AAEP policies, interface policy groups and interface policy settings, VLAN/VXLAN pools, interface port selectors, interface profiles, switch profiles, switch selectors, and switch policy groups — plus multiple ACI tenant policies (tenants A through D, each with multiple application profiles, bridge domains, VRFs, and L3Outs). Multiple per-site fabric policies, multiple tenant policies, and ACI Multi-Site policy manager settings all add up to longer MTTR.
With NSX Data Center on an ACI underlay, a single set of fabric policies and a single ACI tenant policy suffice.
NSX over ACI: Overview
ACI infrastructure requirements:
Supports attachment of hosts
• Define the physical domain of host attachment
• VLANs, switch interfaces, and policies in use
• Domains, physical and external
Create an application profile
• Defines the EPGs — infrastructure traffic (Mgmt, vMotion, Storage, Transport) and transit EPGs toward the border leaves — with ACI contracts between them on the Layer 2 ACI fabric
• Networks: private networks, bridge domains, external L2 and L3 connectivity
NSX overlay:
• Compute and edge clusters host the tiered logical switches (Web LS, App LS, DB LS) and their VMs (VM1-VM6), with peering VLANs toward the border leaves
©2019 VMware, Inc. 32
NSX over ACI DeploymentInfrastructure design
ACI Fabric IdealsFabric Policies
Fabric Access PoliciesNSX over ACI
Tenant Configuration
Fabric PoliciesFabric Access Policies
Overall infrastructure design for a supported NSX data center deployment on an ACI underlay
Layer 2 fabric logistics:
• Single tenant
• Fewer contract needs
• Map static vSphere Eps
• Map NSX Edge to ACI border
Minimum requirements:
• 1 Physical domain
• 1 External routing domain
• 2 VLAN pools (Int & Ext)
• 1 AEP (Leaf & switch policies, Int & Int sel policies, etc..)
NSX data center deployment
• Separate tenant (not common)
• 1 Application (network) profile
• 4 EPGs (base EPGs)
• 4 bridge domains, 1 VRF(ea VLAN = ea EPG = ea BD)L3Out; South (NSX Overlay)
VMworld 2019 Content: Not for publication or distribution
©2019 VMware, Inc. 34
Creating the Fabric Policies
Fabric and fabric access policies required of ACI: this small set of policies is required of any ACI deployment, and the NSX over ACI underlay design requires a substantially smaller subset of the fabric objects ACI can define.
Create a single set of fabric abstractions:
• Domains (where): NSX-Phy-Domain with the NSX-Infra VLAN pool; NSX-L3-Domain with the NSX-Ext VLAN pool
• Attachable Access Entity Profile (how): NSX-AEP
• Interface policies (interface settings): NSX-Host-Int-Profile (leaf interface profile), NSX-Host-Ports (access port selector), NSX-Port-Policy-Group (interface policy group), and interface policies such as the LLDP and CDP defaults
• Switch policies (switch settings): NSX-Leaf-Profile (leaf profile), NSX-Leaf-Sel (leaf selector), NSX-Sw-Pol-Group (leaf policy group)
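The single set of fabric abstractions above can be expressed as APIC JSON payloads. A minimal sketch using the slide's own object names; the class names (`fvnsVlanInstP`, `physDomP`, `infraAttEntityP`) and DN formats are assumed from the ACI object model, and the VLAN range is illustrative:

```python
def vlan_pool(name, start, end):
    # Static VLAN pool (fvnsVlanInstP) with one encap block.
    return {"fvnsVlanInstP": {
        "attributes": {"name": name, "allocMode": "static"},
        "children": [{"fvnsEncapBlk": {"attributes": {
            "from": f"vlan-{start}", "to": f"vlan-{end}"}}}]}}

def phys_domain(name, pool_name):
    # Physical domain (physDomP) bound to its VLAN pool.
    return {"physDomP": {
        "attributes": {"name": name},
        "children": [{"infraRsVlanNs": {"attributes": {
            "tDn": f"uni/infra/vlanns-[{pool_name}]-static"}}}]}}

def aep(name, domain_dns):
    # Attachable Access Entity Profile tying domains to interfaces.
    return {"infraAttEntityP": {
        "attributes": {"name": name},
        "children": [{"infraRsDomP": {"attributes": {"tDn": dn}}}
                     for dn in domain_dns]}}

pool = vlan_pool("NSX-Infra", 100, 103)       # infra VLANs, illustrative
dom = phys_domain("NSX-Phy-Domain", "NSX-Infra")
nsx_aep = aep("NSX-AEP", ["uni/phys-NSX-Phy-Domain"])
```

Each payload would be POSTed to the APIC under `uni` after authenticating; the same pattern extends to the L3 domain and external pool.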
NSX on ACI Tenant
Reducing complexity and increasing operational simplicity.
The "NSX on ACI" tenant contains four infrastructure EPGs, each mapped to its own bridge domain and subnet under a single L3/VRF:
• Management EPG → Management BD / Management subnet (public)
• vMotion EPG → vMotion BD / vMotion subnet
• IP Storage EPG → IPS BD / IP Storage subnet
• Overlay EPG → Overlay BD / Overlay subnet
A routed external L3Out EPG provides external connectivity. Shared infrastructure services (DNS, syslog, AD, etc.) are provided by the common tenant and consumed by the NSX on ACI tenant; optionally, place them either in the common tenant or within the NSX on ACI tenant.
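The one-VRF, four-EPG/four-BD layout above is regular enough to generate in a loop. A sketch of the tenant payload; ACI class names (`fvTenant`, `fvCtx`, `fvBD`, `fvAp`, `fvAEPg`) are assumed from the ACI object model, and the gateway addresses reuse the illustrative subnets from the earlier ToR table:

```python
INFRA_NETS = {  # EPG name -> (BD name, gateway subnet); illustrative values
    "Management": ("Management-BD", "10.100.100.1/24"),
    "vMotion":    ("vMotion-BD",    "10.101.101.1/24"),
    "IP-Storage": ("IPS-BD",        "10.102.102.1/24"),
    "Overlay":    ("Overlay-BD",    "10.103.103.1/24"),
}

def nsx_tenant(name="NSX-on-ACI", vrf="NSX-VRF"):
    """Sketch of the single-tenant layout: one VRF, one BD + subnet per
    infrastructure EPG, one application profile holding the four EPGs."""
    bds = [{"fvBD": {
                "attributes": {"name": bd},
                "children": [
                    {"fvRsCtx": {"attributes": {"tnFvCtxName": vrf}}},
                    {"fvSubnet": {"attributes": {"ip": gw}}},
                ]}}
           for bd, gw in INFRA_NETS.values()]
    epgs = [{"fvAEPg": {
                "attributes": {"name": f"{epg}-EPG"},
                "children": [{"fvRsBd": {"attributes": {"tnFvBDName": bd}}}]}}
            for epg, (bd, _) in INFRA_NETS.items()]
    return {"fvTenant": {
        "attributes": {"name": name},
        "children": [{"fvCtx": {"attributes": {"name": vrf}}}]
                    + bds
                    + [{"fvAp": {"attributes": {"name": "NSX-Infra-AP"},
                                 "children": epgs}}]}}

tenant = nsx_tenant()
```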
NSX on ACI Tenant Overview
Single ACI tenant:
• Adds simplicity to fabric management
• Reduces dynamic management of the ACI fabric
• Adds stability
Single tenant for all application workloads:
• NSX on ACI application profile with four EPGs, mapped statically or as app EPG to AEP
• NSX on ACI networking: bridge domains (one per EPG) and a single VRF
• NSX on ACI external networking: L3Out and external EPG for DC connectivity and the NSX overlay
NSX Edge:
• The application platform is disaggregated from the physical fabric
NSX on ACI L3Out Detail
NSX external connectivity:
• ECMP or HA edge deployment; the ACI setup is the same either way, which keeps the L3Out simple
• The NSX overlay is a stub network requiring only a default route — again, simplicity for the L3Out
• The NSX Edge cluster acts as the provider edge; NSX T1 routers act as tenant routers
ACI configuration:
• Routed external domain
• Node profiles (border leaves) with BGP peers using SVIs; configured node profiles display the BGP neighbors
• L3Out networks (external EPG)
• Default route leak policy
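On the NSX side of this peering, each border-leaf SVI becomes a BGP neighbor of the Tier-0 gateway. A minimal sketch of the NSX-T Policy API request; the URL path, `locale-services/default`, and field names are assumed from the NSX-T Policy API and should be checked against your version, and all IDs/addresses are illustrative:

```python
def bgp_neighbor_request(t0_id, neighbor_id, peer_ip, remote_as):
    """Sketch of an NSX-T Policy API call registering one ACI border
    leaf SVI as a BGP neighbor of the Tier-0 gateway.

    Returns the HTTP method, path, and body for the intent PATCH;
    sending it against the NSX Manager is left to the caller.
    """
    path = (f"/policy/api/v1/infra/tier-0s/{t0_id}"
            f"/locale-services/default/bgp/neighbors/{neighbor_id}")
    body = {"neighbor_address": peer_ip,
            "remote_as_num": str(remote_as)}  # string per the Policy API
    return "PATCH", path, body

# Illustrative values: one peer per transit VLAN, per border leaf.
method, path, body = bgp_neighbor_request("t0-provider", "aci-bl1-red",
                                          "10.0.100.1", 65001)
```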
NSX Edge Routing – NSX on ACI Underlay
Connecting the NSX overlay to the switch fabric: the edge cluster provides both east/west and north/south connectivity.
• Standard ACI physical domain for E/W (the four infrastructure EPGs)
• Routed external domain for N/S
• The same or separate physical uplinks can be used
• 2- or 4-pNIC (4-vNIC UCS) designs
• Best practice: use separate fabric access and pool policy objects — VLAN pools, domains, port channels, and interface, leaf, and switch policy objects
NSX Edge to ACI SVI Peer Connectivity
Example physical and logical connectivity: ECMP edge nodes peer with the ACI border leaves over transit VLANs (VLAN red and VLAN blue, one interface SVI per VLAN).
• A pair of SVIs per border leaf: one SVI per VLAN interface
• Two peer connections per edge node: one peer per VLAN, per NSX Edge
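The peering math above scales linearly, so it is worth sanity-checking before configuration. A small sketch that counts the expected sessions from the "one peer per VLAN, per edge node" rule; all input values are illustrative:

```python
def bgp_peer_sessions(edge_nodes, transit_vlans, border_leaves):
    """Count expected BGP sessions for the SVI peering design:
    each edge node peers once per transit VLAN, and each border
    leaf carries one SVI per VLAN."""
    per_edge = transit_vlans              # 1 peer per VLAN, per edge node
    total = edge_nodes * per_edge         # fabric-wide session count
    svis = border_leaves * transit_vlans  # SVIs configured on the ACI side
    return per_edge, total, svis

# Two edge nodes, two transit VLANs (red/blue), two border leaves.
per_edge, total, svis = bgp_peer_sessions(edge_nodes=2, transit_vlans=2,
                                          border_leaves=2)
```

With two transit VLANs, `per_edge` comes out to 2, matching the slide's "2x peer connections per edge node".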
Operations: A good design reduces the risks
Leverage Operational Value
In a VMware NSX Data Center design on an ACI underlay, operational features and scaling are inherent and easily manageable.
Isolation of infrastructure traffic:
• Infrastructure traffic is inherently isolated
• Infrastructure traffic scales within the fabric
Application deployment scalability:
• Increasing workload deployment within the NSX overlay requires no changes to the physical underlay
• Scaling infrastructure has minimal overhead
Infrastructure operations are minimized:
• Minimal need for infrastructure changes
• Reduced hardware replacement; the hardware infrastructure is stabilized
Infrastructure Traffic Isolation
Leveraging operational simplicity for NSX Data Center infrastructure traffic:
• The fabric provides a high-speed, scalable underlay
• Traffic types can be inherently isolated by ACI EPGs: for example, a management group and a transport group can share VLAN 100 while access is permitted within each group and denied between them
• vSphere provides inherent advanced security for VMkernel traffic
• No need to manage isolation manually, which simplifies underlay operations
NSX Logical Routing Enhancements
Scale and deploy applications abstracted from the switch fabric. NSX Edge cluster advantages:
• Applications decoupled from the physical fabric
• Homogenous policy operation across multi-site, multi-cloud, and heterogeneous switch fabrics
• Agile distributed services deployment: LB, NAT, tenant FW, DNS, DHCP, etc.
• Micro-segmentation completeness: logical overlay separation, stateful distributed FW, context-aware micro-segmentation, Layer 7 guest and network service insertion
Topology: the edge cluster provides ECMP routing and the physical-to-virtual on/off ramp into the ACI switch fabric (VLAN 700, 10.0.100.0/24), while distributed routing connects the logical networks VNI 5001 (192.168.100.0/24), VNI 5002 (192.168.101.0/24), and VNI 5003 (192.168.102.0/24). Virtual workload flows may stay local on the host when source and destination share the same host.
Software Defined Data Center
Infrastructure operations are minimized: a stable, scalable hardware underlay beneath a software-defined managed network.
• Physical topology: a spine-leaf fabric with hypervisors (HV1-HV11, ESXi/KVM) hosting web, app, and edge workloads plus the infrastructure clusters
• Deployed application view: a software-defined logical application topology with Web1/Web2 and App1/App2 on Logical Switch 1 and Logical Switch 2, joined by a logical router; logical routers run in kernel on every hypervisor
• Only two fabric access policies — the leaf and interface selector policies — are required to scale out
NSX Transport and Edge Performance
East/west and north/south high-performance capability. Throughput measured on NSX Data Center 2.2 using Intel XL710 (40Gbps) NICs with iPerf 2.
Methodology:
• iPerf 2.0.5, 4 threads per VM pair, 4-12 VM pairs
• Average throughput: 35 Gbps at 1500 MTU
• E/W (with firewall): logical switch; logical router (T1 and T0)
• N/S: routing (overlay to VLAN and VLAN to overlay), routing with firewall, SNAT, DNAT
[Chart: throughput in Gbps (0-40) across LS, LR (T1), LR (T0), N/S routing, N/S routing + firewall, SNAT, and DNAT scenarios.]
See NSX-T Data Center Performance Deep Dive [NET1855BU], VMworld 2018 live demonstration.
©2019 VMware, Inc. 46
Wireshark caption of NSX overlay traffic in ACI iVXLAN fabric
Visibility of Encapsulated Traffic in ACI Fabric
VMworld 2019 Content: Not for publication or distribution
©2019 VMware, Inc. 47
VMware Network Insight
Accelerate application security and networking across private, public, and hybrid clouds.
Use cases:
Plan application security and migration
• Accelerate micro-segmentation deployment
• Troubleshoot security for SDDC, native AWS, and hybrid applications
• Minimize business risk during application migration
Optimize and troubleshoot virtual and physical networks
• Reduce mean time to resolution for application connectivity issues
• Optimize application performance by eliminating network bottlenecks
• Audit network and security changes over time
Manage and scale NSX
• Scale across multiple NSX Managers
• Boost uptime by proactively detecting misconfiguration errors
• Ensure compliance for NSX
vRealize Network Insight: NSX Overlay on an ACI Switch Fabric
Complete virtual-through-physical visibility. vRNI features for ACI fabrics include:
• ACI/APIC integration
• EPG, bridge domain, ANP, EPG-to-VM, and EPG-to-DVPG association
• Layer 2 path from VM to leaf nodes
• VM-to-VM path visibility (ACI VRF shown in the VM-VM path)
• ACI leaf nodes and their ports in the path through the spine (ACI fabric)
Customer Perspective: Jesse Ryski, AAA/ACG (Fiat Chrysler Automotive) — "I'm an NSX Ninja"
• ACI Multi-Pod deployment, 70+ leaves
• Greenfield colo with ACI as the underlay for NSX
• Brownfield 7, 5, 2+ vCenter migration to ACI/NSX
Use case: secure PCI data and deploy new applications
• Consistent, repetitive infrastructure model irrespective of the application being deployed
• Micro-segmentation for new apps
• Reduced DC footprint
• Visibility into physical-layer underlay packet flow
• Prepare for future NSX micro-segmentation in the existing brownfield DC
• Prepare for automation and containerization
ACI as Underlay for NSX
• The spine-leaf model scales easily for NSX: predictable latency, scalable, high bandwidth
• Management: centralized configuration, rapid deployment/expansion
• Rapid transfer of encapsulated traffic: agnostic to overlay traffic, forwards encapsulated traffic efficiently
• Security: bare-metal micro-segmentation, default deny
• Strategic partners: supported model, Day 2 support
NSX as Overlay
• Underlay agnostic: hardware lifecycle management, strategic direction
• East-west traffic: opportunistic local-host switching and routing; simple underlay switching and routing requirements; secured by NSX micro-segmentation; reduced overhead
• Existing vCenter: largely virtualized in the primary DC (60%-70%); new greenfield 99% virtual
• Highly virtualized future: net-new apps deployed with NSX micro-segmentation; refresh into virtual; slowly re-platform
Resources: How to Get Started
• LEARN: design guides and demos at nsx.techzone.vmware.com
• TRY: take a Hands-on Lab
• CONNECT: join VMUG and the VMware Communities (VMTN); @VMwareNSX #runNSX