
Architecture and Design for VMware NSX-T Workload Domains

18 JUL 2019
VMware Validated Design 5.1
VMware NSX-T 2.4.1


You can find the most up-to-date technical documentation on the VMware website at:

https://docs.vmware.com/

If you have comments about this documentation, submit your feedback to [email protected]

VMware, Inc.
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com

Copyright © 2018-2019 VMware, Inc. All rights reserved. Copyright and trademark information.


Contents

About Architecture and Design for VMware NSX-T for Workload Domains

1 Applying the Guidance for VMware NSX-T Workload Domains

2 Architecture Overview
  Physical Network Architecture for NSX-T Workload Domains
    Network Transport for NSX-T Workload Domains
  Virtual Infrastructure Architecture for NSX-T Workload Domains
    Virtual Infrastructure Overview for NSX-T Workload Domains
    Network Virtualization Components for NSX-T Workload Domains
    Network Virtualization Services for NSX-T Workload Domains

3 Detailed Design
  Physical Infrastructure Design for NSX-T Workload Domains
    Physical Networking Design for NSX-T Workload Domains
  Virtual Infrastructure Design for NSX-T Workload Domains
    vSphere Cluster Design for NSX-T Workload Domains
    Virtualization Network Design for NSX-T Workload Domains
    NSX Design for NSX-T Workload Domains

About Architecture and Design for VMware NSX-T for Workload Domains

Architecture and Design for VMware NSX-T for Workload Domains provides detailed information about the requirements for software, tools, and external services to implement VMware NSX-T™ Data Center in a shared edge and compute cluster in an SDDC that is compliant with VMware Validated Design™ for Software-Defined Data Center.

Prerequisites

Deploy the management cluster according to VMware Validated Design for Software-Defined Data Center at least in a single region. See the VMware Validated Design documentation page.

Intended Audience

This design is intended for architects and administrators who want to deploy NSX-T in a virtual infrastructure workload domain for tenant workloads.

Required VMware Software

In addition to the VMware Validated Design for Software-Defined Data Center 5.1 deployment, you must download NSX-T 2.4.1. You then deploy and configure NSX-T in the shared edge and compute cluster according to this guide. See the VMware Validated Design Release Notes for more information about supported product versions.


1 Applying the Guidance for VMware NSX-T Workload Domains

The content in Architecture and Design for VMware NSX-T for Workload Domains replaces certain parts of Architecture and Design in VMware Validated Design for Software-Defined Data Center, also referred to as the Standard SDDC.

Before You Design the Virtual Infrastructure Workload Domain with NSX-T

Before you follow this documentation, you must deploy the components for the SDDC management cluster according to VMware Validated Design for Software-Defined Data Center at least in a single region. See Architecture and Design, Planning and Preparation, and Deployment for Region A in the VMware Validated Design documentation. The following management components must be deployed:

• VMware ESXi™

• VMware Platform Services Controller™ pair and Management vCenter Server®

• VMware NSX® Data Center for vSphere®

• VMware vRealize® Lifecycle Manager™

• vSphere® Update Manager™

• VMware vRealize® Operations Manager™

• VMware vRealize® Log Insight™

• VMware vRealize® Automation™ with embedded vRealize® Orchestrator™

• VMware vRealize® Business™ for Cloud


Designing a Virtual Infrastructure Workload Domain with NSX-T

Next, follow the guidance to design a virtual infrastructure (VI) workload domain with NSX-T deployed in this way:

• In general, use the guidelines about the VI workload domain and shared edge and compute cluster in the following sections of Architecture and Design in VMware Validated Design for Software-Defined Data Center:

  - Architecture Overview > Physical Infrastructure Architecture

  - Architecture Overview > Virtual Infrastructure Architecture

  - Detailed Design > Physical Infrastructure Design

  - Detailed Design > Virtual Infrastructure Design

• For the sections that are available in both Architecture and Design for VMware NSX-T for Workload Domains and Architecture and Design, follow the design guidelines in Architecture and Design for VMware NSX-T for Workload Domains.

First-Level Chapter: Architecture Overview
Places to Use the Guidance for NSX-T:
• Physical Infrastructure Architecture
  - Workload Domain Architecture
  - Cluster Types
  - Physical Network Architecture
    - Network Transport
    - Infrastructure Network Architecture
    - Physical Network Interfaces
• Virtual Infrastructure Architecture

First-Level Chapter: Detailed Design
Places to Use the Guidance for NSX-T:
• Physical Infrastructure Design
  - Physical Design Fundamentals
  - Physical Networking Design
  - Physical Storage Design
• Virtual Infrastructure Design
  - vCenter Server Design
    - vCenter Server Deployment
    - vCenter Server Networking
    - vCenter Server Redundancy
    - vCenter Server Appliance Sizing
    - vCenter Server Customization
    - Use of TLS Certificates in vCenter Server
  - vSphere Cluster Design
  - Virtualization Network Design
  - NSX-T Design


2 Architecture Overview

VMware Validated Design for NSX-T enables IT organizations that have deployed VMware Validated Design for Software-Defined Data Center 5.1 to create a shared edge and compute cluster that uses NSX-T capabilities.

This chapter includes the following topics:

• Physical Network Architecture for NSX-T Workload Domains

• Virtual Infrastructure Architecture for NSX-T Workload Domains

Physical Network Architecture for NSX-T Workload Domains

VMware Validated Designs can use most physical network architectures.

Network Transport for NSX-T Workload Domains

You can implement the physical layer switch fabric of an SDDC by offering Layer 2 or Layer 3 transport services. For a scalable and vendor-neutral data center network, use a Layer 3 transport.

VMware Validated Design supports both Layer 2 and Layer 3 transports. To decide whether to use Layer 2 or Layer 3, consider the following factors:

• NSX-T service routers establish Layer 3 routing adjacency with the first upstream Layer 3 device to provide equal cost routing for workloads.

• The investment you have today in your current physical network infrastructure.

• The benefits and drawbacks of both Layer 2 and Layer 3 designs.

Benefits and Drawbacks of Layer 2 Transport

A design using Layer 2 transport has these considerations:

• In a design that uses Layer 2 transport, top of rack switches and upstream Layer 3 devices, such as core switches or routers, form a switched fabric.

• The upstream Layer 3 device terminates each VLAN and provides default gateway functionality.

• Uplinks from the top of rack switch to the upstream Layer 3 devices are 802.1Q trunks carrying all required VLANs.

Using a Layer 2 transport has the following benefits and drawbacks:


Table 2-1. Benefits and Drawbacks for Layer 2 Transport

Benefits:
• More design freedom.
• You can span VLANs across racks.

Drawbacks:
• The size of such a deployment is limited because the fabric elements must share a limited number of VLANs.
• You might have to rely on a specialized data center switching fabric product from a single vendor.
• Traffic between VLANs must traverse an upstream Layer 3 device to be routed.

Figure 2-1. Example Layer 2 Transport
[Figure: Each ESXi host connects over two 25 GbE links to a pair of ToR switches (Layer 2), which uplink over 802.1Q trunks to redundant upstream Layer 3 devices.]

Benefits and Drawbacks of Layer 3 Transport

A design using Layer 3 transport requires these considerations:

• Layer 2 connectivity is limited within the data center rack up to the top of rack switches.

• The top of rack switch terminates each VLAN and provides default gateway functionality. The top of rack switch has a switch virtual interface (SVI) for each VLAN.

• Uplinks from the top of rack switch to the upstream layer are routed point-to-point links. You cannot use VLAN trunking on the uplinks.

• A dynamic routing protocol, such as BGP, connects the top of rack switches and upstream switches. Each top of rack switch in the rack advertises a small set of prefixes, typically one per VLAN or subnet. In turn, the top of rack switch calculates equal cost paths to the prefixes it receives from other top of rack switches.


Table 2-2. Benefits and Drawbacks of Layer 3 Transport

Benefits:
• You can select from many Layer 3 capable switch products for the physical switching fabric.
• You can mix switches from different vendors because of general interoperability between their implementations of BGP.
• This approach is typically more cost effective because it uses only the basic functionality of the physical switches.

Drawbacks:
• VLANs are restricted to a single rack. The restriction can affect vSphere Fault Tolerance and storage networks.

Figure 2-2. Example Layer 3 Transport
[Figure: Each ESXi host connects over two 25 GbE links to a pair of ToR switches (the Layer 2 boundary). The ToR switches have routed (Layer 3) uplinks to redundant upstream switches, which connect to the WAN/MPLS/Internet.]

Virtual Infrastructure Architecture for NSX-T Workload Domains

The virtual infrastructure is the foundation of an operational SDDC. It contains the software-defined infrastructure, software-defined networking, and software-defined storage.

In the virtual infrastructure layer, access to the underlying physical infrastructure is controlled and allocated to the management and compute workloads. The virtual infrastructure layer consists of the hypervisors on the physical hosts and the control of these hypervisors. The management components of the SDDC consist of elements in the virtual management layer itself.


Figure 2-3. Virtual Infrastructure Layer in the SDDC
[Figure: The SDDC layer model. The Cloud Management layer (Service Catalog, Self-Service Portal, Orchestration) sits on the Virtual Infrastructure layer (Hypervisor, Pools of Resources, Virtualization Control), which runs on the Physical layer (Compute, Storage, Network). Cross-cutting concerns include Service Management, Portfolio Management, Operations Management, Business Continuity (Fault Tolerance and Disaster Recovery, Backup & Restore), and Security (Replication, Compliance, Risk, Governance). The Virtual Infrastructure layer is highlighted.]

Virtual Infrastructure Overview for NSX-T Workload Domains

The SDDC virtual infrastructure consists of workload domains. The SDDC virtual infrastructure includes a management workload domain that contains the management cluster and a virtual infrastructure workload domain that contains the shared edge and compute cluster.

Management Cluster

The management cluster runs the virtual machines that manage the SDDC. These virtual machines host vCenter Server, vSphere Update Manager, NSX Manager, and other management components. All management, monitoring, and infrastructure services are provisioned to a vSphere cluster which provides high availability for these critical services. Permissions on the management cluster limit access only to administrators. This limitation protects the virtual machines that are running the management, monitoring, and infrastructure services from unauthorized access. The management cluster leverages software-defined networking capabilities in NSX for vSphere.

The management cluster architecture and design is covered in the VMware Validated Design for Software-Defined Data Center. The NSX-T validated design does not include the design of the management cluster.

Shared Edge and Compute Cluster

The shared edge and compute cluster runs the NSX-T edge virtual machines and all tenant workloads. The edge virtual machines are responsible for North-South routing between compute workloads and the external network. This is often referred to as the on-off ramp of the SDDC. The NSX-T edge virtual machines also enable services such as load balancers.

The hosts in this cluster provide services such as high availability to the NSX-T edge virtual machines and tenant workloads.


Figure 2-4. SDDC Logical Design
[Figure: The management cluster, managed by the Management vCenter Server on the internal SDDC network, runs the virtual infrastructure management components: both vCenter Server instances (Mgmt and Compute), the NSX-V Manager, NSX-V Controllers, and NSX-V Edges for the management domain, the NSX-T Managers for the compute domain, and other management applications, over NSX-V transport zones on a vDS. The shared edge and compute cluster, managed by the Compute vCenter Server and facing the external (Internet/corporate) network, runs the NSX-T Edges and the SDDC tenant payload on ESXi transport nodes in NSX-T transport zones over an N-VDS.]

Network Virtualization Components for NSX-T Workload Domains

The NSX-T platform consists of several components that are relevant to the network virtualization design.

NSX-T Platform

NSX-T creates a network virtualization layer, which is an abstraction between the physical and virtual networks. You create all virtual networks on top of this layer.

Several components are required to create this network virtualization layer:

• NSX-T Managers

• NSX-T Edge Nodes

• NSX-T Distributed Routers (DR)

• NSX-T Service Routers (SR)

• NSX-T Segments (Logical Switches)

These components are distributed in different planes to create communication boundaries and provide isolation of workload data from system control messages.

Data plane
Performs stateless forwarding or transformation of packets based on tables populated by the control plane, reports topology information to the control plane, and maintains packet-level statistics.

The following traffic runs in the data plane:

• Workload data

• N-VDS virtual switch, distributed routing, and the distributed firewall in NSX-T

The data is carried over designated transport networks in the physical network.

Control plane
Contains messages for network virtualization control. You place the control plane communication on secure physical networks (VLANs) that are isolated from the transport networks for the data plane.

The control plane computes the runtime state based on configuration from the management plane. The control plane propagates topology information reported by the data plane elements, and pushes stateless configuration to forwarding engines.

The control plane in NSX-T has two parts:

• Central Control Plane (CCP). The CCP is implemented as a cluster of virtual machines called CCP nodes. The cluster form factor provides both redundancy and scalability of resources.

  The CCP is logically separated from all data plane traffic, that is, a failure in the control plane does not affect existing data plane operations.

• Local Control Plane (LCP). The LCP runs on transport nodes. It is near to the data plane it controls and is connected to the CCP. The LCP is responsible for programming the forwarding entries of the data plane.

Management plane
Provides a single API entry point to the system, persists user configuration, handles user queries, and performs operational tasks on all management, control, and data plane nodes in the system.


For NSX-T, all querying, modifying, and persisting of user configuration is in the management plane. Propagation of that configuration down to the correct subset of data plane elements is in the control plane. As a result, some data belongs to multiple planes. Each plane uses this data according to its stage of existence. The management plane also queries recent status and statistics from the control plane, and under certain conditions directly from the data plane.

The management plane is the only source of truth for the logical system because it is the only entry point for user configuration. You make changes using either a RESTful API or the NSX-T user interface.

For example, responding to a vSphere vMotion operation of a virtual machine is the responsibility of the control plane, but connecting the virtual machine to the logical network is the responsibility of the management plane.
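To make the management plane's role as the single API entry point concrete, the following minimal sketch reads the persisted segment (logical switch) configuration over the RESTful API. It assumes an NSX-T 2.4 manager at a placeholder address with basic authentication and the management API path /api/v1/logical-switches; the address, credentials, and certificate handling are illustrative, not part of the validated design.

    # Minimal sketch: query the NSX-T management plane over its RESTful API.
    # Assumptions: NSX-T 2.4, manager at a placeholder address, basic auth,
    # and certificate verification disabled for a lab environment only.
    import requests

    NSX_MANAGER = "https://nsx.example.local"   # placeholder address
    AUTH = ("admin", "password")                # placeholder credentials

    response = requests.get(
        f"{NSX_MANAGER}/api/v1/logical-switches",
        auth=AUTH,
        verify=False,   # lab only; verify certificates in production
    )
    response.raise_for_status()

    # List APIs wrap their payload in a "results" array; each segment
    # carries a display name and its virtual network identifier (VNI).
    for segment in response.json()["results"]:
        print(segment["display_name"], segment.get("vni"))

The same configuration is visible in the NSX-T user interface; both paths terminate at the management plane, which then propagates changes through the control plane.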

Network Virtualization Services for NSX-T Workload Domains

Network virtualization services include segments, gateways, firewalls, and other components of NSX-T.

Segments (Logical Switch)
Reproduces switching functionality, broadcast, unknown unicast, and multicast (BUM) traffic in a virtual environment that is decoupled from the underlying hardware.

Segments are similar to VLANs because they provide network connections to which you can attach virtual machines. The virtual machines can then communicate with each other over tunnels between ESXi hosts. Each segment has a virtual network identifier (VNI), like a VLAN ID. Unlike VLANs, VNIs scale beyond the limits of VLAN IDs.

Gateway (Logical Router)
Provides North-South connectivity so that workloads can access external networks, and East-West connectivity between logical networks.

A Logical Router is a configured partition of a traditional network hardware router. It replicates the functionality of the hardware, creating multiple routing domains in a single router. Logical Routers perform a subset of the tasks that are handled by the physical router, and each can contain multiple routing instances and routing tables. Using logical routers can be an effective way to maximize router use, because a set of logical routers within a single physical router can perform the operations previously performed by several pieces of equipment.

• Distributed router (DR)

  A DR spans ESXi hosts whose virtual machines are connected to this gateway, and edge nodes the gateway is bound to. Functionally, the DR is responsible for one-hop distributed routing between segments and gateways connected to this gateway.


• One or more (optional) service routers (SR)

  An SR is responsible for delivering services that are not currently implemented in a distributed fashion, such as stateful NAT.

A gateway always has a DR. A gateway has SRs when it is a Tier-0 gateway, or when it is a Tier-1 gateway and has services configured such as NAT or DHCP.

NSX-T Edge Node
Provides routing services and connectivity to networks that are external to the NSX-T domain through a Tier-0 gateway over BGP or static routing.

You must deploy an NSX-T Edge for stateful services at either the Tier-0 or Tier-1 gateways.

NSX-T Edge Cluster
Represents a collection of NSX-T Edge nodes that host multiple service routers in highly available configurations. At a minimum, deploy a single Tier-0 SR to provide external connectivity.

An NSX-T Edge cluster does not have a one-to-one relationship with a vSphere cluster. A vSphere cluster can run multiple NSX-T Edge clusters.

Transport Node
Participates in NSX-T overlay or NSX-T VLAN networking. A node that contains an NSX-T Virtual Distributed Switch (N-VDS), such as an ESXi host or an NSX-T Edge node, can be a transport node.

If an ESXi host contains at least one N-VDS, it can be a transport node.

Transport Zone
A transport zone can span one or more vSphere clusters. Transport zones dictate which ESXi hosts and which virtual machines can participate in the use of a particular network.

A transport zone defines a collection of ESXi hosts that can communicate with each other across a physical network infrastructure. This communication happens over one or more interfaces defined as Tunnel Endpoints (TEPs).

When you create an ESXi host transport node and then add the node to a transport zone, NSX-T installs an N-VDS on the host. For each transport zone that the host belongs to, a separate N-VDS is installed. The N-VDS is used for attaching virtual machines to NSX-T Segments and for creating NSX-T gateway uplinks and downlinks.
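The zone-to-node relationship is visible directly in the API objects. The sketch below, assuming the same placeholder manager and credentials as the earlier example and the NSX-T 2.4 management API paths /api/v1/transport-zones and /api/v1/transport-nodes, prints which transport zones each transport node participates in.

    # Sketch: correlate transport nodes with the transport zones they joined.
    import requests

    NSX_MANAGER = "https://nsx.example.local"   # placeholder address
    AUTH = ("admin", "password")                # placeholder credentials

    def get_results(path):
        # NSX-T list APIs return their payload in a "results" array.
        response = requests.get(f"{NSX_MANAGER}{path}", auth=AUTH, verify=False)
        response.raise_for_status()
        return response.json()["results"]

    # Map transport zone IDs to display names.
    zones = {tz["id"]: tz["display_name"]
             for tz in get_results("/api/v1/transport-zones")}

    # Each transport node lists its zone memberships as transport zone endpoints.
    for node in get_results("/api/v1/transport-nodes"):
        member_of = [zones.get(ep["transport_zone_id"], "unknown")
                     for ep in node.get("transport_zone_endpoints", [])]
        print(node["display_name"], "->", ", ".join(member_of) or "none")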

NSX-T Controller
As a component of the control plane, the controllers control virtual networks and overlay transport tunnels.

For stability and reliability of data transport, NSX-T deploys the NSX-T Controller as a role in the Manager cluster, which consists of three highly available virtual appliances. They are responsible for the programmatic deployment of virtual networks across the entire NSX-T architecture.


Logical Firewall
Responsible for traffic handling in and out of the network according to firewall rules.

A logical firewall offers multiple sets of configurable Layer 3 and Layer 2 rules. Layer 2 firewall rules are processed before Layer 3 rules. You can configure an exclusion list to exclude segments, logical ports, or groups from firewall enforcement.

The default rule, located at the bottom of the rule table, is a catch-all rule. The logical firewall enforces the default rule on packets that do not match other rules. After the host preparation operation, the default rule is set to the allow action. Change this default rule to a block action and enforce access control through a positive control model, that is, only traffic defined in a firewall rule can flow on the network.

Logical Load Balancer
Provides high-availability service for applications and distributes the network traffic load among multiple servers.

The load balancer accepts TCP, UDP, HTTP, or HTTPS requests on the virtual IP address and determines which pool server to use.

The logical load balancer is supported only in an SR on the Tier-1 gateway.


3 Detailed Design

The NSX-T detailed design considers both physical and virtual infrastructure design. It includes numbered design decisions and the justification and implications of each decision.

This chapter includes the following topics:

• Physical Infrastructure Design for NSX-T Workload Domains

• Virtual Infrastructure Design for NSX-T Workload Domains

Physical Infrastructure Design for NSX-T Workload Domains

The physical infrastructure design includes design decision details for the physical network.

Figure 3-1. Physical Infrastructure Design
[Figure: The SDDC layer model from Figure 2-3, with the Physical layer (Compute, Storage, Network) highlighted.]

• Physical Networking Design for NSX-T Workload Domains

  Design of the physical SDDC network includes defining the network topology for connecting the physical switches and the ESXi hosts, determining switch port settings for VLANs and link aggregation, and designing routing. You can use the VMware Validated Design guidance for design and deployment with most enterprise-grade physical network architectures.


Physical Networking Design for NSX-T Workload Domains

Design of the physical SDDC network includes defining the network topology for connecting the physical switches and the ESXi hosts, determining switch port settings for VLANs and link aggregation, and designing routing. You can use the VMware Validated Design guidance for design and deployment with most enterprise-grade physical network architectures.

• Switch Types and Network Connectivity for NSX-T Workload Domains

  Follow the best practices for physical switches, switch connectivity, VLANs and subnets, and access port settings.

• Physical Network Design Decisions for NSX-T Workload Domains

  The physical network design decisions determine the physical layout and use of VLANs. They also include decisions on jumbo frames and on other network-related requirements such as DNS and NTP.

Switch Types and Network Connectivity for NSX-T Workload Domains

Follow the best practices for physical switches, switch connectivity, VLANs and subnets, and access port settings.

Top of Rack Physical Switches

When configuring top of rack (ToR) switches, consider the following best practices:

• Configure redundant physical switches to enhance availability.

• Configure switch ports that connect to ESXi hosts manually as trunk ports.

• Modify the Spanning Tree Protocol (STP) on any port that is connected to an ESXi NIC to reduce the time to transition ports over to the forwarding state, for example by using the Trunk PortFast feature found in a Cisco physical switch.

• Provide DHCP or DHCP Helper capabilities on all VLANs used by TEP VMkernel ports. This setup simplifies the configuration by using DHCP to assign IP addresses based on the IP subnet in use.

• Configure jumbo frames on all switch ports, inter-switch link (ISL), and switched virtual interfaces (SVIs).

Top of Rack Connectivity and Network Settings

Each ESXi host is connected redundantly to the ToR switches of the SDDC network fabric by two 25 GbE ports. Configure the ToR switches to provide all necessary VLANs using an 802.1Q trunk. These redundant connections use features in vSphere Distributed Switch and NSX-T to guarantee that no physical interface is overrun and that available redundant paths are used.


Figure 3-2. Host to ToR Connectivity
[Figure: An ESXi host connects to two ToR switches over two 25 GbE links.]

VLANs and Subnets

Each ESXi host uses VLANs and corresponding subnets.

Follow these guidelines:

• Use only /24 subnets to reduce confusion and mistakes when handling IPv4 subnetting.

• Use the IP address .254 as the (floating) interface with .252 and .253 for Virtual Router Redundancy Protocol (VRRP) or Hot Standby Routing Protocol (HSRP).

• Use the RFC 1918 IPv4 address space for these subnets and allocate one octet by region and another octet by function.

Note The following VLANs and IP ranges are samples. Your actual implementation depends on your environment.

Table 3-1. Sample Values for VLANs and IP Ranges

Function          Sample VLAN   Sample IP Range
Management        1641          172.16.41.0/24
vSphere vMotion   1642          172.16.42.0/24
vSAN              1643          172.16.43.0/24
Host Overlay      1644          172.16.44.0/24
Uplink01          1647          172.16.47.0/24
Uplink02          1648          172.16.48.0/24
Edge Overlay      1649          172.16.49.0/24
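The sketch below illustrates the conventions from the guidelines above, using Python's standard ipaddress module and the sample values from Table 3-1: each function receives a /24, the .254 address is the floating first-hop gateway, and .252 and .253 are the physical router addresses for VRRP or HSRP.

    # Sketch: derive gateway addressing for the sample /24 subnets in Table 3-1.
    import ipaddress

    functions = {
        "Management": (1641, "172.16.41.0/24"),
        "vSphere vMotion": (1642, "172.16.42.0/24"),
        "vSAN": (1643, "172.16.43.0/24"),
        "Host Overlay": (1644, "172.16.44.0/24"),
        "Uplink01": (1647, "172.16.47.0/24"),
        "Uplink02": (1648, "172.16.48.0/24"),
        "Edge Overlay": (1649, "172.16.49.0/24"),
    }

    for name, (vlan, cidr) in functions.items():
        network = ipaddress.ip_network(cidr)
        hosts = list(network.hosts())           # .1 through .254 for a /24
        floating = hosts[-1]                    # .254: floating gateway address
        peer_a, peer_b = hosts[-3], hosts[-2]   # .252 and .253: VRRP/HSRP peers
        print(f"VLAN {vlan} ({name}): {network}, "
              f"gateway {floating}, peers {peer_a}/{peer_b}")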


Access Port Network Settings

Configure additional network settings on the access ports that connect the ToR switches to the corresponding servers.

Spanning Tree Protocol (STP)
Although this design does not use the Spanning Tree Protocol, switches usually include STP configured by default. Designate the access ports as trunk PortFast.

Trunking
Configure the VLANs as members of an 802.1Q trunk with the management VLAN acting as the native VLAN.

MTU
Set MTU for all VLANs and SVIs (Management, vMotion, Geneve, and Storage) to jumbo frames for consistency purposes.

DHCP Helper
Configure a DHCP helper (sometimes called a DHCP relay) on all TEP VLANs.

Physical Network Design Decisions for NSX-T Workload Domains

The physical network design decisions determine the physical layout and use of VLANs. They also include decisions on jumbo frames and on other network-related requirements such as DNS and NTP.

Physical Network Design Decisions

Routing protocols
NSX-T supports only the BGP routing protocol.

DHCP Helper
Set the DHCP helper (relay) to point to a DHCP server by IPv4 address.

Table 3-2. Physical Network Design Decisions

Decision ID: NSXT-PHY-NET-001
Design Decision: Implement the following physical network architecture:
• One 25 GbE (10 GbE minimum) port on each ToR switch for ESXi host uplinks.
• No EtherChannel (LAG/LACP/vPC) configuration for ESXi host uplinks.
• Layer 3 device that supports BGP.
Design Justification:
• Guarantees availability during a switch failure.
• Uses BGP as the only dynamic routing protocol that is supported by NSX-T.
Design Implication:
• Might limit the hardware choice.
• Requires dynamic routing protocol configuration in the physical network.

Decision ID: NSXT-PHY-NET-002
Design Decision: Use a physical network that is configured for BGP routing adjacency.
Design Justification:
• Supports flexibility in network design for routing multi-site and multi-tenancy workloads.
• Uses BGP as the only dynamic routing protocol that is supported by NSX-T.
Design Implication: Requires BGP configuration in the physical network.

Decision ID: NSXT-PHY-NET-003
Design Decision: Use two ToR switches for each rack.
Design Justification: Supports the use of two 10 GbE (25 GbE recommended) links to each server, provides redundancy, and reduces the overall design complexity.
Design Implication: Requires two ToR switches per rack, which can increase costs.

Decision ID: NSXT-PHY-NET-004
Design Decision: Use VLANs to segment physical network functions.
Design Justification:
• Supports physical network connectivity without requiring many NICs.
• Isolates the different network functions of the SDDC so that you can have differentiated services and prioritized traffic as needed.
Design Implication: Requires uniform configuration and presentation on all the trunks made available to the ESXi hosts.

Additional Design Decisions

Additional design decisions deal with static IP addresses, DNS records, and the required NTP time source.


Table 3-3. IP Assignment, DNS, and NTP Design Decisions

Decision ID: NSXT-PHY-NET-005
Design Decision: Assign static IP addresses to all management components in the SDDC infrastructure except for NSX-T TEPs. NSX-T TEPs are assigned by using a DHCP server. Set the lease duration for the TEP DHCP scope to at least 7 days.
Design Justification: Ensures that interfaces such as management and storage always have the same IP address. In this way, you provide support for continuous management of ESXi hosts using vCenter Server and for provisioning IP storage by storage administrators. NSX-T TEPs do not have an administrative endpoint. As a result, they can use DHCP for automatic IP address assignment. IP pools are an option, but the NSX-T administrator must create them. If you must change or expand the subnet, changing the DHCP scope is simpler than creating an IP pool and assigning it to the ESXi hosts.
Design Implication: Requires accurate IP address management.

Decision ID: NSXT-PHY-NET-006
Design Decision: Create DNS records for all ESXi host management interfaces to enable forward (A), reverse (PTR), short, and FQDN resolution.
Design Justification: Ensures consistent resolution of management nodes using both IP address (reverse lookup) and name resolution.
Design Implication: None.

Decision ID: NSXT-PHY-NET-007
Design Decision: Use an NTP time source for all management nodes.
Design Justification: Maintains accurate and synchronized time between management nodes.
Design Implication: None.
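As an illustration of decision NSXT-PHY-NET-006 above, this sketch uses Python's standard socket module to confirm that an ESXi host management interface resolves forward (A), reverse (PTR), and by short name. The host name is a placeholder; short-name resolution additionally assumes a matching DNS search domain on the machine running the check.

    # Sketch: verify forward, reverse, and short-name DNS resolution
    # for an ESXi host management interface (placeholder name).
    import socket

    fqdn = "sfo01w01esx01.sfo01.rainpole.local"   # placeholder FQDN
    short_name = fqdn.split(".")[0]

    ip_address = socket.gethostbyname(fqdn)       # forward (A) lookup

    resolved_fqdn = socket.gethostbyaddr(ip_address)[0]   # reverse (PTR) lookup
    assert resolved_fqdn == fqdn, f"PTR mismatch: {resolved_fqdn}"

    # Short-name lookup relies on the local DNS search domain.
    assert socket.gethostbyname(short_name) == ip_address

    print(f"{fqdn} <-> {ip_address}: forward, reverse, and short lookups agree")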

Jumbo Frames Design Decisions

IP storage throughput can benefit from the configuration of jumbo frames. Increasing the per-frame payload from 1500 bytes to the jumbo frame setting improves the efficiency of data transfer. You must configure jumbo frames end-to-end. Select an MTU that matches the MTU of the physical switch ports.

According to the purpose of the workload, determine whether to configure jumbo frames on a virtual machine. If the workload consistently transfers large amounts of network data, configure jumbo frames, if possible. In that case, confirm that both the virtual machine operating system and the virtual machine NICs support jumbo frames.

Using jumbo frames also improves the performance of vSphere vMotion.

Note The Geneve overlay requires an MTU value of 1600 bytes or greater.
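The 1600-byte figure follows from the encapsulation overhead that Geneve adds to a standard 1500-byte frame. The arithmetic below uses standard protocol header sizes; because Geneve options are variable length, the requirement is stated as 1600 bytes or greater rather than an exact value.

    # Back-of-the-envelope Geneve overhead for a 1500-byte inner frame.
    inner_frame = 1500      # standard MTU of the encapsulated payload
    inner_ethernet = 14     # inner Ethernet header carried inside the tunnel
    outer_ipv4 = 20         # outer IPv4 header between tunnel endpoints (TEPs)
    outer_udp = 8           # outer UDP header; Geneve runs over UDP
    geneve_header = 8       # fixed Geneve header, before any options

    required = inner_frame + inner_ethernet + outer_ipv4 + outer_udp + geneve_header
    print(required)   # 1550 bytes before variable-length Geneve options

A 1600-byte MTU leaves headroom for Geneve options; the design decision below standardizes on 9000 bytes so that overlay, storage, and vMotion traffic all benefit from jumbo frames.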


Table 3-4. Jumbo Frames Design Decisions

Decision ID: NSXT-PHY-NET-008
Design Decision: Configure the MTU size to at least 9000 bytes (jumbo frames) on the physical switch ports, vSphere Distributed Switches, vSphere Distributed Switch port groups, and N-VDS switches that support the following traffic types:
• Geneve (overlay)
• vSAN
• vMotion
• NFS
• vSphere Replication
Design Justification: Improves traffic throughput. To support Geneve, increase the MTU setting to a minimum of 1600 bytes.
Design Implication: When adjusting the MTU packet size, you must also configure the entire network path (VMkernel ports, virtual switches, physical switches, and routers) to support the same MTU packet size.

Virtual Infrastructure Design for NSX-T Workload Domains

The virtual infrastructure design includes the NSX-T components that make up the virtual infrastructure layer.

Figure 3-3. Virtual Infrastructure Layer in the SDDC
[Figure: The SDDC layer model from Figure 2-3, with the Virtual Infrastructure layer (Hypervisor, Pools of Resources, Virtualization Control) highlighted.]

• vSphere Cluster Design for NSX-T Workload Domains

  The cluster design must consider the workload that the cluster handles. Different cluster types in this design have different characteristics.


• Virtualization Network Design for NSX-T Workload Domains

  Design the virtualization network according to the business goals of your organization. Also prevent unauthorized access, and provide timely access to business data.

• NSX Design for NSX-T Workload Domains

  This design implements software-defined networking by using VMware NSX-T. By using NSX-T, virtualization delivers for networking what it has already delivered for compute and storage.

vSphere Cluster Design for NSX-T Workload Domains

The cluster design must consider the workload that the cluster handles. Different cluster types in this design have different characteristics.

vSphere Cluster Design Decision Background

When you design the cluster layout in vSphere, consider the following guidelines:

• Use fewer, larger ESXi hosts, or more, smaller ESXi hosts.

  - A scale-up cluster has fewer, larger ESXi hosts.

  - A scale-out cluster has more, smaller ESXi hosts.

• Compare the capital costs of purchasing fewer, larger ESXi hosts with the costs of purchasing more, smaller ESXi hosts. Costs vary between vendors and models.

• Evaluate the operational costs of managing a few ESXi hosts with the costs of managing more ESXi hosts.

• Consider the purpose of the cluster.

• Consider the total number of ESXi hosts and cluster limits.

• vSphere High Availability Design for NSX-T Workload Domains

  VMware vSphere High Availability (vSphere HA) protects your virtual machines in case of ESXi host failure by restarting virtual machines on other hosts in the cluster when an ESXi host fails.

• Shared Edge and Compute Cluster Design for NSX-T Workload Domains

  Tenant workloads run on the ESXi hosts in the shared edge and compute cluster. Because of the shared nature of the cluster, NSX-T Edge appliances also run in this cluster. To support these workloads, you must determine the number of ESXi hosts, vSphere HA settings, and several other characteristics of the shared edge and compute cluster.

• Compute Cluster Design for NSX-T Workload Domains

  As the SDDC expands, you can add compute clusters.

vSphere High Availability Design for NSX-T Workload Domains

VMware vSphere High Availability (vSphere HA) protects your virtual machines in case of ESXi host failure by restarting virtual machines on other hosts in the cluster when an ESXi host fails.


vSphere HA Design Basics

During configuration of the cluster, the ESXi hosts elect a master ESXi host. The master ESXi host communicates with the vCenter Server system and monitors the virtual machines and secondary ESXi hosts in the cluster.

The master ESXi host detects different types of failure:

• ESXi host failure, for example an unexpected power failure

• ESXi host network isolation or connectivity failure

• Loss of storage connectivity

• Problems with virtual machine OS availability

Table 3-5. vSphere HA Design Decisions

Decision ID: NSXT-VI-VC-001
Design Decision: Use vSphere HA to protect all clusters against failures.
Design Justification: vSphere HA supports a robust level of protection for both ESXi host and virtual machine availability.
Design Implication: You must provide sufficient resources on the remaining hosts so that virtual machines can be migrated to those hosts in the event of a host outage.

Decision ID: NSXT-VI-VC-002
Design Decision: Set vSphere HA Host Isolation Response to Power Off.
Design Justification: vSAN requires that you set HA Isolation Response to Power Off to restart the virtual machines on the available ESXi hosts.
Design Implication: VMs are powered off in case of a false positive, when an ESXi host is declared isolated incorrectly.

vSphere HA Admission Control Policy Configuration

The vSphere HA Admission Control Policy allows an administrator to configure how the cluster determines available resources. In a smaller vSphere HA cluster, a larger proportion of the cluster resources are reserved to accommodate ESXi host failures, based on the selected policy.

The following policies are available:

Host failures the cluster tolerates
vSphere HA ensures that a specified number of ESXi hosts can fail and sufficient resources remain in the cluster to fail over all the virtual machines from those ESXi hosts.

Percentage of cluster resources reserved
vSphere HA reserves a specified percentage of aggregate CPU and memory resources for failover.

Specify Failover Hosts
When an ESXi host fails, vSphere HA attempts to restart its virtual machines on any of the specified failover ESXi hosts. If restart is not possible, for example, the failover ESXi hosts have insufficient resources or have failed as well, then vSphere HA attempts to restart the virtual machines on other ESXi hosts in the cluster.
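A short worked example of the percentage-based policy, assuming the four-host minimum that this design uses for the shared edge and compute cluster: to tolerate one ESXi host failure, reserve 1/N of the aggregate cluster resources.

    def failover_capacity_percent(hosts: int, failures_to_tolerate: int = 1) -> float:
        # Percentage of aggregate CPU and memory that vSphere HA reserves.
        return 100.0 * failures_to_tolerate / hosts

    print(failover_capacity_percent(4))   # 25.0 for the four-host minimum
    print(failover_capacity_percent(8))   # 12.5 as the cluster grows

Growing the cluster lowers the reserved share, which is one reason the percentage-based policy scales better than dedicating explicit failover hosts.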


Shared Edge and Compute Cluster Design for NSX-T Workload Domains

Tenant workloads run on the ESXi hosts in the shared edge and compute cluster. Because of the shared nature of the cluster, NSX-T Edge appliances also run in this cluster. To support these workloads, you must determine the number of ESXi hosts, vSphere HA settings, and several other characteristics of the shared edge and compute cluster.

Table 3-6. Shared Edge and Compute Cluster Design Decisions

Decision ID: NSXT-VI-VC-003
Design Decision: Create a shared edge and compute cluster that contains tenant workloads and NSX-T Edge appliances.
Design Justification: Limits the footprint of the design by saving the use of a vSphere cluster specifically for the NSX-T Edge nodes.
Design Implication: In a shared cluster, the VLANs and subnets between the VMkernel ports for ESXi host overlay and the overlay ports of the edge appliances must be separate.

Decision ID: NSXT-VI-VC-004
Design Decision: Configure admission control for a failure of one ESXi host and percentage-based failover capacity.
Design Justification: vSphere HA protects the tenant workloads and NSX-T Edge appliances in the event of an ESXi host failure. vSphere HA powers on the virtual machines from the non-responding ESXi hosts on the remaining ESXi hosts.
Design Implication: Only a single ESXi host failure is tolerated before resource contention occurs.

Decision ID: NSXT-VI-VC-005
Design Decision: Create a shared edge and compute cluster that consists of a minimum of four ESXi hosts.
Design Justification: Allocating four ESXi hosts provides full redundancy within the cluster.
Design Implication: Four ESXi hosts is the smallest starting point for the shared edge and compute cluster for redundancy and performance, and as a result increases cost.

Decision ID: NSXT-VI-VC-006
Design Decision: Create a resource pool for the two large-sized edge virtual machines with a CPU share level of High, a memory share level of Normal, and a 64-GB memory reservation.
Design Justification: The NSX-T Edge appliances control all network traffic in and out of the SDDC. In a contention situation, these appliances must receive all the resources required.
Design Implication: During contention, the NSX-T components receive more resources than the other workloads. As a result, monitoring and capacity management must be a proactive activity. The resource pool memory reservation must be expanded if you plan to deploy more NSX-T Edge appliances.

Decision ID: NSXT-VI-VC-007
Design Decision: Create a resource pool for all tenant workloads with a CPU share value of Normal and a memory share value of Normal.
Design Justification: Running virtual machines at the cluster level has a negative impact on all other virtual machines during contention. To avoid an impact on network connectivity, in a shared edge and compute cluster, the NSX-T Edge appliances must receive resources with priority over the other workloads. Setting the share values to Normal increases the resource shares of the NSX-T Edge appliances in the cluster.
Design Implication: During contention, tenant workloads might have insufficient resources and poor performance. Proactively perform monitoring and capacity management, and add capacity or dedicate an edge cluster before contention occurs.

Decision ID: NSXT-VI-VC-008
Design Decision: Create a host profile for the shared edge and compute cluster.
Design Justification: Using host profiles simplifies the configuration of ESXi hosts and ensures that settings are uniform across the cluster.
Design Implication: After NSX-T has been deployed, you must update the host profile. Any time an authorized change to an ESXi host is made, you must update the host profile to reflect the change, or the status will show non-compliant.
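As a sketch of how decisions NSXT-VI-VC-006 and NSXT-VI-VC-007 could be applied programmatically, the following pyVmomi fragment creates the edge and tenant resource pools with the stated share levels and the 64-GB memory reservation. The vCenter Server address, credentials, cluster name, and pool names are placeholders, and the fragment is an illustration under those assumptions rather than a step from the validated design.

    # Sketch: create the edge and tenant resource pools with pyVmomi.
    # Placeholder connection details and object names throughout.
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    si = SmartConnect(host="vcenter.example.local",          # placeholder
                      user="administrator@vsphere.local",    # placeholder
                      pwd="password",                        # placeholder
                      sslContext=ssl._create_unverified_context())  # lab only
    content = si.RetrieveContent()

    # Find the shared edge and compute cluster by its (placeholder) name.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.ClusterComputeResource], True)
    cluster = next(c for c in view.view if c.name == "sfo01-w01-comp01")
    view.Destroy()

    def allocation(level, reservation_mb=0):
        # Shares at the given level; reservation in MB (memory) or MHz (CPU).
        return vim.ResourceAllocationInfo(
            reservation=reservation_mb, expandableReservation=True, limit=-1,
            shares=vim.SharesInfo(level=level, shares=0))

    # Edge pool: CPU shares High, memory shares Normal, 64-GB memory reservation.
    cluster.resourcePool.CreateResourcePool(
        name="sfo01-w-rp-edge",
        spec=vim.ResourceConfigSpec(
            cpuAllocation=allocation(vim.SharesInfo.Level.high),
            memoryAllocation=allocation(vim.SharesInfo.Level.normal, 64 * 1024)))

    # Tenant pool: CPU and memory shares Normal, no reservation.
    cluster.resourcePool.CreateResourcePool(
        name="sfo01-w-rp-tenant",
        spec=vim.ResourceConfigSpec(
            cpuAllocation=allocation(vim.SharesInfo.Level.normal),
            memoryAllocation=allocation(vim.SharesInfo.Level.normal)))

    Disconnect(si)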

Compute Cluster Design for NSX-T Workload Domains

As the SDDC expands, you can add compute clusters.

Tenant workloads run on the ESXi hosts in the compute cluster instances. One Compute vCenter Server instance manages multiple compute clusters. The design determines vSphere HA settings for the compute cluster.

Decision ID: NSXT-VI-VC-009
Design Decision: Configure vSphere HA to use percentage-based failover capacity to ensure n+1 availability.
Design Justification: Using explicit host failover limits the total available resources in a cluster.
Design Implication: The resources of one ESXi host in the cluster are reserved, which can cause provisioning to fail if resources are exhausted.

Virtualization Network Design for NSX-T Workload Domains

Design the virtualization network according to the business goals of your organization. Also prevent unauthorized access, and provide timely access to business data.


This network virtualization design uses vSphere and NSX-T to implement virtual networking.

• Virtual Network Design Guidelines for NSX-T Workload Domains

  This VMware Validated Design follows high-level network design guidelines and networking best practices.

• Virtual Switches for NSX-T Workload Domains

  Virtual switches simplify the configuration process by providing a single pane of glass view for performing virtual network management tasks.

• NIC Teaming for NSX-T Workload Domains

  You can use NIC teaming to increase the network bandwidth available in a network path, and to provide the redundancy that supports higher availability.

• Geneve Overlay for NSX-T Workload Domains

  Geneve provides the overlay capability in NSX-T to create isolated, multi-tenant broadcast domains across data center fabrics, and enables customers to create elastic, logical networks that span physical network boundaries.

• vMotion TCP/IP Stack for NSX-T Workload Domains

  Use the vMotion TCP/IP stack to isolate traffic for vSphere vMotion and to assign a dedicated default gateway for vSphere vMotion traffic.

Virtual Network Design Guidelines for NSX-T Workload Domains

This VMware Validated Design follows high-level network design guidelines and networking best practices.

Design Goals

You can apply the following high-level design goals to your environment:

• Meet diverse needs. The network must meet the diverse needs of many different entities in an organization. These entities include applications, services, storage, administrators, and users.

• Reduce costs. Server consolidation alone reduces network costs by reducing the number of required network ports and NICs, but you should determine a more efficient network design. For example, configuring two 25 GbE NICs with VLANs might be more cost effective than configuring a dozen 1 GbE NICs on separate physical networks.

• Boost performance. You can achieve performance improvements and decrease the time required to perform maintenance by providing sufficient bandwidth, which reduces contention and latency.

• Improve availability. You usually improve availability by providing network redundancy.

• Support security. You can support an acceptable level of security through controlled access where required and isolation where necessary.

• Improve infrastructure functionality. You can configure the network to support vSphere features such as vSphere vMotion, vSphere High Availability, and vSphere Fault Tolerance.


Best Practices

Follow the networking best practices throughout your environment.

• Separate network services from one another for greater security and better performance.

• Use Network I/O Control and traffic shaping to guarantee bandwidth to critical virtual machines. During network contention, these critical virtual machines receive a higher percentage of the bandwidth.

• Separate network services on an NSX-T Virtual Distributed Switch (N-VDS) by attaching them to segments with different VLAN IDs.

• Keep vSphere vMotion traffic on a separate network. When migration with vMotion occurs, the contents of the memory of the guest operating system are transmitted over the network. You can place vSphere vMotion on a separate network by using a dedicated vSphere vMotion VLAN.

• When using pass-through devices with a Linux kernel version 2.6.20 or an earlier guest OS, avoid MSI and MSI-X modes. These modes have a significant performance impact.

• For best performance, use VMXNET3 virtual machine NICs.

• Ensure that physical network adapters connected to the same virtual switch are also connected to the same physical network.

Network Segmentation and VLANs

You separate different types of traffic for access security and to reduce contention and latency.

High latency on a network can impact performance. Some components are more sensitive to high latency than others. For example, reducing latency is important on the IP storage and the vSphere Fault Tolerance logging network, because latency on these networks can negatively affect the performance of multiple virtual machines.

According to the application or service, high latency on specific virtual machine networks can also negatively affect performance. Use information gathered from the current state analysis and from interviews with key stakeholders and SMEs to determine which workloads and networks are especially sensitive to high latency.

Virtual Networks

Determine the number of networks or VLANs that are required according to the type of traffic.

• vSphere operational traffic.

  - Management

  - Geneve (overlay)

  - vMotion

  - vSAN

  - NFS Storage

  - vSphere Replication

• Traffic that supports the services and applications of the organization.

Virtual Switches for NSX-T Workload Domains

Virtual switches simplify the configuration process by providing a single pane of glass view for performing virtual network management tasks.

• Virtual Switch Design Background for NSX-T Workload Domains

  vSphere Distributed Switch and NSX-T Virtual Distributed Switch (N-VDS) provide several advantages over vSphere Standard Switch.

• Virtual Switch Design Decisions for NSX-T Workload Domains

  The virtual switch design decisions determine the use and placement of specific switch types.

• Shared Edge and Compute Cluster Switches for NSX-T Workload Domains

  The shared edge and compute cluster uses a single N-VDS with a certain configuration for handled traffic types, NIC teaming, and MTU size.

Virtual Switch Design Background for NSX-T Workload Domains

vSphere Distributed Switch and NSX-T Virtual Distributed Switch (N-VDS) provide several advantages over vSphere Standard Switch.

Centralized management
• A distributed switch is created and centrally managed on a vCenter Server system. The switch configuration is consistent across ESXi hosts.

• An N-VDS is created and centrally managed in NSX-T Manager. The switch configuration is consistent across ESXi and edge transport nodes.

Centralized management saves time and reduces mistakes and operational costs.

Additional features
Some of the features of distributed switches can be useful to the applications and services running in the organization's infrastructure. For example, NetFlow and port mirroring provide monitoring and troubleshooting capabilities to the virtual infrastructure.

Consider the following caveats for distributed switches:

• Distributed switches are manageable only when the vCenter Server instance is available. As a result, vCenter Server becomes a Tier-1 application.

• N-VDS instances are manageable only when the NSX-T Manager cluster is available. As a result, the NSX-T Manager cluster becomes a Tier-1 application.

Virtual Switch Design Decisions for NSX-T Workload Domains

The virtual switch design decisions determine the use and placement of specific switch types.


Decision ID: NSXT-VI-NET-001
Design Decision: Use N-VDS for the NSX-T based shared edge and compute cluster, and for additional NSX-T based compute clusters.
Design Justification: The N-VDS is required for overlay traffic.
Design Implication: Management is shifted from the vSphere Client to the NSX Manager.

Shared Edge and Compute Cluster Switches for NSX-T Workload Domains

The shared edge and compute cluster uses a single N-VDS with a certain configuration for handled traffic types, NIC teaming, and MTU size.

Figure 3-4. Virtual Switch Design for ESXi Hosts in the Shared Edge and Compute Cluster
[Figure: A sample shared edge and compute ESXi host with two physical NICs (nic0, nic1) attached to the sfo01-w-nvds01 N-VDS, which carries VLANs for ESXi management, vMotion, vSAN, NFS, and ESXi overlay (host TEP), and VLAN trunking for Uplink01, Uplink02, and the edge overlay (edge TEP).]

Architecture and Design for VMware NSX-T Workload Domains

VMware, Inc. 30

Page 31: Architecture and Design for VMware NSX-T …...Architecture Overview 2 VMware Validated Design for NSX-T enables IT organizations that have deployed VMware Validated Design for Software-Defined

Table 3-7. Virtual Switches for the Shared Edge and Compute Cluster

N-VDS Switch Name: sfo01-w-nvds01
Function:
n ESXi Management
n vSphere vMotion
n vSAN
n NFS
n Geneve Overlay (TEP)
n Uplink trunking (2) for the NSX-T Edge instances
Number of Physical NIC Ports: 2
Teaming Policy:
n Load balance source for the ESXi traffic
n Failover order for the edge VM traffic
MTU: 9000

N-VDS Switch Name: sfo01-w-uplink01
Function: Uplink to enable ECMP
Number of Physical NIC Ports: 1
Teaming Policy: Failover order
MTU: 9000

N-VDS Switch Name: sfo01-w-uplink02
Function: Uplink to enable ECMP
Number of Physical NIC Ports: 1
Teaming Policy: Failover order
MTU: 9000

Table 3-8. Virtual Switches in the Shared Edge and Compute Cluster by Physical NIC

N-VDS Switch vmnic Function

sfo01-w-nvds01 0 Uplink

sfo01-w-nvds01 1 Uplink

Figure 3-5. Segment Configuration on an ESXi Host That Runs an NSX-T Edge Node

The figure shows an NSX-T Edge virtual machine running on an ESXi host whose vmnic0 and vmnic1 are attached to sfo01-w-nvds01. The edge virtual machine uses eth0 (management) on the sfo01-w-nvds01-management segment (VLAN 1641), fp-eth0 (overlay) on the sfo01-w-overlay VLAN trunking segment, fp-eth1 (uplink 1) on the sfo01-w-nvds01-uplink01 VLAN trunking segment, and fp-eth2 (uplink 2) on the sfo01-w-nvds01-uplink02 VLAN trunking segment. The switches shown are sfo01-w-nvds01 (overlay) and the sfo01-w-uplink01 and sfo01-w-uplink02 VLAN switches.

Table 3-9. Segments in the Shared Edge and Compute Cluster

N-VDS Switch Segment Name

sfo01-w-nvds01 sfo01-w-nvds01-management

sfo01-w-nvds01 sfo01-w-nvds01-vmotion

sfo01-w-nvds01 sfo01-w-nvds01-vsan

sfo01-w-nvds01 sfo01-w-nvds01-overlay


sfo01-w-nvds01 sfo01-w-nvds01-uplink01

sfo01-w-nvds01 sfo01-w-nvds01-uplink02

sfo01-w02-uplink01 sfo01-w02-uplink01

sfo01-w02-uplink02 sfo01-w02-uplink02

Table 3-10. VMkernel Adapters for the Shared Edge and Compute Cluster

N-VDS Switch Segment Name Enabled Services

sfo01-w-nvds01 sfo01-w-nvds01-management Management Traffic

sfo01-w-nvds01 sfo01-w-nvds01-vmotion vMotion Traffic

sfo01-w-nvds01 sfo01-w-nvds01-vsan vSAN

sfo01-w-nvds01 sfo01-w-nvds01-nfs --

sfo01-w-nvds01 auto created (Host TEP) --

sfo01-w-nvds01 auto created (Host TEP) --

sfo01-w-nvds01 auto created (Hyperbus) --

Note

When the NSX-T Edge appliance is on an N-VDS, it must use a different VLAN ID and subnet from the ESXi hosts' overlay (TEP) VLAN ID and subnet.

ESXi host TEP VMkernel ports are automatically created when you configure an ESXi host as a transport node.

NIC Teaming for NSX-T Workload Domains

You can use NIC teaming to increase the network bandwidth available in a network path, and to provide the redundancy that supports higher availability.

Benefits and Overview

NIC teaming helps avoid a single point of failure and provides options for load balancing of traffic. To further reduce the risk of a single point of failure, build NIC teams by using ports from multiple NIC and motherboard interfaces.

Create a single virtual switch with teamed NICs across separate physical switches.

NIC Teaming Design Background

For a predictable level of performance, use multiple network adapters in one of the following configurations.

n An active-passive configuration that uses explicit failover when connected to two separate switches.

n An active-active configuration in which two or more physical NICs in the server are assigned the active role.


This validated design uses a non-LAG active-active configuration using the route based on physical NIC load algorithm for vSphere Distributed Switch and the load balance source algorithm for N-VDS. By using this configuration, network cards remain active instead of remaining idle until a failure occurs.

Table 3-11. NIC Teaming and Policy

n Availability (Active-Active: ↑, Active-Passive: ↑). Using teaming regardless of the option increases the availability of the environment.
n Manageability (Active-Active: o, Active-Passive: o). Neither design option impacts manageability.
n Performance (Active-Active: ↑, Active-Passive: o). An active-active configuration can send traffic across either NIC, thereby increasing the available bandwidth. This configuration provides a benefit if the NICs are being shared among traffic types and Network I/O Control is used.
n Recoverability (Active-Active: o, Active-Passive: o). Neither design option impacts recoverability.
n Security (Active-Active: o, Active-Passive: o). Neither design option impacts security.

Legend: ↑ = positive impact on quality; ↓ = negative impact on quality; o = no impact on quality.

Table 3-12. NIC Teaming Design Decisions

Decision ID: NSXT-VI-NET-002
Design Decision: In the shared edge and compute cluster, use the Load balance source teaming policy on N-VDS.
Design Justification: NSX-T Virtual Distributed Switch (N-VDS) supports Load balance source and Failover order teaming policies. When you use the Load balance source policy, both physical NICs can be active and carry traffic.
Design Implication: None.

Geneve Overlay for NSX-T Workload Domains

Geneve provides the overlay capability in NSX-T to create isolated, multi-tenant broadcast domains across data center fabrics, and enables customers to create elastic, logical networks that span physical network boundaries.

The first step in creating these logical networks is to isolate and pool the networking resources. By using the Geneve overlay, NSX-T isolates the network into a pool of capacity and separates the consumption of these services from the underlying physical infrastructure. This model is similar to the model vSphere uses to abstract compute capacity from the server hardware to create virtual pools of resources that can be consumed as a service. You can then organize the pool of network capacity in logical networks that are directly attached to specific applications.


Geneve is a tunneling mechanism which provides extensibility while still using the offload capabilities of NICs for performance improvement.

Geneve works by creating Layer 2 logical networks that are encapsulated in UDP packets. A Segment ID in every frame identifies the Geneve logical networks without the need for VLAN tags. As a result, many isolated Layer 2 networks can coexist on a common Layer 3 infrastructure using the same VLAN ID.
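Because each guest frame is wrapped in outer Ethernet, IP, UDP, and Geneve headers, the underlay MTU must exceed the guest MTU. The following minimal Python sketch, assuming standard IPv4 header sizes and a base Geneve header without options, shows why transport networks require at least 1600 bytes:

# Rough Geneve encapsulation overhead (IPv4 underlay, no Geneve options).
INNER_MTU = 1500       # standard guest VM MTU
INNER_ETHERNET = 14    # encapsulated guest Ethernet header
GENEVE_BASE = 8        # Geneve base header; variable-length options add more
OUTER_UDP = 8          # outer UDP header
OUTER_IPV4 = 20        # outer IPv4 header (40 for IPv6)

required_underlay_mtu = (INNER_MTU + INNER_ETHERNET + GENEVE_BASE
                         + OUTER_UDP + OUTER_IPV4)
print(required_underlay_mtu)  # 1550; 1600 leaves headroom for Geneve options

This design goes further and uses an MTU of 9000 on all transport networks, which also benefits the vSAN and NFS traffic on the same N-VDS.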

In the vSphere architecture, the encapsulation is performed between the virtual NIC of the guest VM and the logical port on the virtual switch, making the Geneve overlay transparent to both the guest virtual machines and the underlying Layer 3 network. The Tier-0 Gateway performs gateway services between overlay and non-overlay hosts, for example, a physical server or the Internet router. The NSX-T Edge virtual machine translates overlay segment IDs to VLAN IDs, so that non-overlay hosts can communicate with virtual machines on an overlay network.

The edge cluster hosts all NSX-T Edge virtual machine instances that connect to the corporate network for secure and centralized network administration.

Table 3-13. Geneve Overlay Design Decisions

Decision ID: NSXT-VI-NET-003
Design Decision: Use NSX-T to introduce overlay networks for workloads.
Design Justification: Simplifies the network configuration by using centralized virtual network management.
Design Implication:
n Requires additional compute and storage resources to deploy NSX-T components.
n Might require more training in NSX-T.

Decision ID: NSXT-VI-NET-004
Design Decision: To provide virtualized network capabilities to workloads, use overlay networks with NSX-T Edge virtual machines and distributed routing.
Design Justification: Creates isolated, multi-tenant broadcast domains across data center fabrics to deploy elastic, logical networks that span physical network boundaries.
Design Implication: Requires configuring transport networks with an MTU size of at least 1600 bytes.

vMotion TCP/IP Stack for NSX-T Workload Domains

Use the vMotion TCP/IP stack to isolate traffic for vSphere vMotion and to assign a dedicated default gateway for vSphere vMotion traffic.

By using a separate TCP/IP stack, you can manage vSphere vMotion and cold migration traffic according to the topology of the network, and as required for your organization.

n Route the traffic for the migration of virtual machines by using a default gateway that is different from the gateway assigned to the default stack on the ESXi host.

n Assign a separate set of buffers and sockets.

n Avoid routing table conflicts that might otherwise appear when many features are using a common TCP/IP stack.

n Isolate traffic to improve security.


Table 3-14. vMotion TCP/IP Stack Design Decision

Decision ID: NSXT-VI-NET-005
Design Decision: Use the vMotion TCP/IP stack for vSphere vMotion traffic.
Design Justification: By using the vMotion TCP/IP stack, vSphere vMotion traffic can be assigned a default gateway on its own subnet and can go over Layer 3 networks.
Design Implication: The vMotion TCP/IP stack is not available in the VMkernel adapter creation wizard of vSphere Distributed Switch. You must create the VMkernel adapter directly on the ESXi host, as shown in the sketch after this table.

NSX Design for NSX-T Workload Domains

This design implements software-defined networking by using VMware NSX-T. By using NSX-T, virtualization delivers for networking what it has already delivered for compute and storage.

In much the same way that server virtualization programmatically creates, takes snapshots of, deletes, and restores software-based virtual machines (VMs), NSX network virtualization programmatically creates, takes snapshots of, deletes, and restores software-based virtual networks. As a result, you follow a simplified operational model for the underlying physical network.

NSX-T is a nondisruptive solution. You can deploy it on any IP network, including existing traditional networking models and next-generation fabric architectures, regardless of the vendor.

When administrators provision workloads, network management is a time-consuming task. You spend most time configuring individual components in the physical infrastructure and verifying that network changes do not affect other devices that are using the same physical network infrastructure.

The need to pre-provision and configure networks is a constraint to cloud deployments where speed, agility, and flexibility are critical requirements. Pre-provisioned physical networks enable fast creation of virtual networks and faster deployment times of workloads using the virtual network. If the physical network that you need is already available on the ESXi host to run a workload, pre-provisioning physical networks works well. However, if the network is not available on an ESXi host, you must find an ESXi host with the available network and allocate capacity to run workloads in your environment.

Decouple virtual networks from their physical counterparts. In the virtualized environment, you must recreate all physical networking attributes that are required by the workloads. Because network virtualization supports the creation of virtual networks without modification of the physical network infrastructure, you can provision the workload networks faster.

n NSX-T Design for NSX-T Workload Domains

NSX-T components are not dedicated to a specific vCenter Server or vSphere construct. You can share them across different vSphere environments.

n NSX-T Components for NSX-T Workload Domains

The following sections describe the components in the solution and how they are relevant to the network virtualization design.

n NSX-T Network Requirements and Sizing for NSX-T Workload Domains

NSX-T requirements impact both physical and virtual networks.


n NSX-T Network Virtualization Conceptual Design for NSX-T Workload Domains

This conceptual design for NSX-T provides the network virtualization design of the logical components that handle the data to and from tenant workloads in the environment.

n Cluster Design for NSX-T Workload Domains

The NSX-T design uses management, and shared edge and compute clusters. You can add more compute clusters for scale-out, or different workload types or SLAs.

n Replication Mode of Segments for NSX-T Workload Domains

The control plane decouples NSX-T from the physical network, and handles the broadcast, unknown unicast, and multicast (BUM) traffic in the segments (logical switches).

n Transport Zone Design for NSX-T Workload Domains

Transport zones determine which hosts can participate in the use of a particular network. A transport zone identifies the type of traffic, VLAN or overlay, and the N-VDS name. You can configure one or more transport zones. A transport zone does not represent a security boundary.

n Network I/O Control Design for NSX-T Workload Domains

When a Network I/O Control profile is attached to an N-VDS, during contention the switch allocates available bandwidth according to the configured shares, limit, and reservation for each vSphere traffic type.

n Transport Node and Uplink Policy Design for NSX-T Workload Domains

A transport node can participate in an NSX-T overlay or NSX-T VLAN network.

n Routing Design by Using NSX-T for NSX-T Workload Domains

The routing design considers different levels of routing in the environment, such as number and type of NSX-T routers, dynamic routing protocol, and so on. At each level, you apply a set of principles for designing a scalable routing solution.

n Virtual Network Design Example Using NSX-T for NSX-T Workload Domains

Design a setup of virtual networks where you determine the connection of virtual machines to Segments and the routing between the Tier-1 Gateway and Tier-0 Gateway, and then between the Tier-0 Gateway and the physical network.

n Monitoring NSX-T for NSX-T Workload Domains

Monitor the operation of NSX-T for identifying failures in the network setup by using vRealize Log Insight and vRealize Operations Manager.

n Use of SSL Certificates in NSX-T for NSX-T Workload Domains

By default, NSX-T Manager uses a self-signed Secure Sockets Layer (SSL) certificate. This certificate is not trusted by end-user devices or Web browsers.

NSX-T Design for NSX-T Workload Domains

NSX-T components are not dedicated to a specific vCenter Server or vSphere construct. You can share them across different vSphere environments.


NSX-T, while not dedicated to a vCenter Server, supports only single-region deployments in the current release. This design is focused on compute clusters in a single region.

Table 3-15. NSX-T Design Decisions

Decision ID: NSXT-VI-SDN-001
Design Decision: Deploy three NSX-T Manager appliances to configure and manage all NSX-T based compute clusters in a single region.
Design Justification: Software-defined networking (SDN) capabilities offered by NSX, such as load balancing and firewalls, are required to support the required functionality in the compute and edge layers. As of NSX-T 2.4, the NSX-T Manager also serves the role of the NSX-T Controllers. You deploy three nodes in the cluster for availability of services.
Design Implication: You must install and configure NSX-T Manager in a highly available management cluster.

Decision ID: NSXT-VI-SDN-002
Design Decision: In the management cluster, add the NSX-T Manager to the NSX for vSphere Distributed Firewall exclusion list.
Design Justification: Ensures that the management plane is still available if a misconfiguration of the NSX for vSphere Distributed Firewall occurs.
Design Implication: None.

NSX-T Components for NSX-T Workload Domains

The following sections describe the components in the solution and how they are relevant to the network virtualization design.

NSX-T Manager

NSX-T Manager provides the graphical user interface (GUI) and the RESTful API for creating, configuring, and monitoring NSX-T components, such as segments and gateways.

NSX-T Manager implements the management and control plane for the NSX-T infrastructure. NSX-T Manager provides an aggregated system view and is the centralized network management component of NSX-T. It provides a method for monitoring and troubleshooting workloads attached to virtual networks. It provides configuration and orchestration of the following services:

n Logical networking components, such as logical switching and routing

n Networking and edge services

n Security services and distributed firewall

NSX-T Manager also provides a RESTful API endpoint to automate consumption. Because of this architecture, you can automate all configuration and monitoring operations using any cloud management platform, security vendor platform, or automation framework.
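For example, a short Python sketch such as the following can read the segments in a deployment through the NSX-T Policy REST API. The manager FQDN and credentials are placeholders; GET /policy/api/v1/infra/segments is part of the NSX-T Policy API introduced in NSX-T 2.4.

import requests

NSX_MANAGER = "sfo01wnsx01.example.local"  # placeholder FQDN
AUTH = ("admin", "***")                     # basic authentication

# List all segments known to the NSX-T Policy API.
resp = requests.get(
    f"https://{NSX_MANAGER}/policy/api/v1/infra/segments",
    auth=AUTH,
    verify=False,  # lab only; supply the CA bundle in production
)
resp.raise_for_status()
for segment in resp.json().get("results", []):
    print(segment["id"], segment.get("display_name"))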


The NSX-T Management Plane Agent (MPA) is an NSX-T Manager component that is available on each ESXi host. The MPA is in charge of persisting the desired state of the system and for communicating non-flow-controlling (NFC) messages such as configuration, statistics, status, and real-time data between transport nodes and the management plane.

NSX-T Manager also contains the NSX-T Controller component. NSX-T Controllers control the virtual networks and overlay transport tunnels. The controllers are responsible for the programmatic deployment of virtual networks across the entire NSX-T architecture.

The Central Control Plane (CCP) is logically separated from all data plane traffic, that is, a failure in the control plane does not affect existing data plane operations. The controller provides configuration to other NSX-T Controller components such as the segments, gateways, and edge virtual machine configuration.

Table 3-16. NSX-T Manager Design Decisions

Decision ID: NSXT-VI-SDN-003
Design Decision: Deploy a three-node NSX-T Manager cluster using the large-size appliance.
Design Justification: The large-size appliance supports more than 64 ESXi hosts. The small-size appliance is for proof of concept and the medium size only supports up to 64 ESXi hosts.
Design Implication: The large size requires more resources in the management cluster.

Decision ID: NSXT-VI-SDN-004
Design Decision: Create a virtual IP (VIP) for the NSX-T Manager cluster.
Design Justification: Provides HA for the NSX-T Manager UI and API.
Design Implication: The VIP provides HA only; it does not load balance requests across the manager cluster.

Decision ID: NSXT-VI-SDN-005
Design Decision:
n Grant administrators access to both the NSX-T Manager UI and its RESTful API endpoint.
n Restrict end-user access to the RESTful API endpoint configured for end-user provisioning, such as vRealize Automation or VMware Enterprise PKS.
Design Justification: Ensures that tenants or non-provider staff cannot modify infrastructure components. End-users typically interact only indirectly with NSX-T from their provisioning portal. Administrators interact with NSX-T using its UI and API.
Design Implication: End users have access only to end-point components.

NSX-T Virtual Distributed Switch

An NSX-T Virtual Distributed Switch (N-VDS) runs on ESXi hosts and provides physical traffic forwarding. It transparently provides the underlying forwarding service that each segment relies on. To implement network virtualization, a network controller must configure the ESXi host virtual switch with network flow tables that form the logical broadcast domains the tenant administrators define when they create and configure segments.

NSX-T implements each logical broadcast domain by tunneling VM-to-VM traffic and VM-to-gateway traffic using the Geneve tunnel encapsulation mechanism. The network controller has a global view of the data center and ensures that the ESXi host virtual switch flow tables are updated as VMs are created, moved, or removed.


Table 3-17. NSX-T N-VDS Design Decision

Decision ID: NSXT-VI-SDN-006
Design Decision: Deploy an N-VDS instance to each ESXi host in the shared edge and compute cluster.
Design Justification: ESXi hosts in the shared edge and compute cluster provide tunnel endpoints for Geneve overlay encapsulation.
Design Implication: None.

Logical Switching

NSX-T Segments create logically abstracted segments to which you can connect tenant workloads. A single Segment is mapped to a unique Geneve segment that is distributed across the ESXi hosts in a transport zone. The Segment supports line-rate switching in the ESXi host without the constraints of VLAN sprawl or spanning tree issues.
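As a hedged illustration of how a segment can be created programmatically, the following Python sketch uses the NSX-T Policy API call PATCH /policy/api/v1/infra/segments/<segment-id>. The segment name is a hypothetical example, and the transport zone path must match the overlay transport zone in your deployment.

import requests

NSX_MANAGER = "sfo01wnsx01.example.local"  # placeholder FQDN
AUTH = ("admin", "***")

# Create (or update) an overlay segment; the transport zone UUID is a
# placeholder that must be replaced with the overlay transport zone ID.
segment = {
    "display_name": "sfo01-w-seg-example",
    "transport_zone_path": "/infra/sites/default/enforcement-points/default/transport-zones/<overlay-tz-uuid>",
}
resp = requests.patch(
    f"https://{NSX_MANAGER}/policy/api/v1/infra/segments/sfo01-w-seg-example",
    json=segment,
    auth=AUTH,
    verify=False,  # lab only
)
resp.raise_for_status()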

Table 3-18. NSX-T Logical Switching Design Decision

Decision ID: NSXT-VI-SDN-007
Design Decision: Deploy all workloads on NSX-T Segments (logical switches).
Design Justification: To take advantage of features such as distributed routing, tenant workloads must be connected to NSX-T Segments.
Design Implication: You must perform all network monitoring in the NSX-T Manager UI, vRealize Log Insight, vRealize Operations Manager, or vRealize Network Insight.

Gateways (Logical Routers)

NSX-T Gateways provide North-South connectivity so that workloads can access external networks, and East-West connectivity between different logical networks.

A Logical Router is a configured partition of a traditional network hardware router. It replicates the functionality of the hardware, creating multiple routing domains in a single router. Logical routers perform a subset of the tasks that are handled by the physical router, and each can contain multiple routing instances and routing tables. Using logical routers can be an effective way to maximize router use, because a set of logical routers within a single physical router can perform the operations previously performed by several pieces of equipment.

n Distributed router (DR)

A DR spans ESXi hosts whose virtual machines are connected to this Gateway, and edge nodes the Gateway is bound to. Functionally, the DR is responsible for one-hop distributed routing between segments and Gateways connected to this Gateway.

n One or more (optional) service routers (SR).

An SR is responsible for delivering services that are not currently implemented in a distributed fashion, such as stateful NAT.

A Gateway always has a DR. A Gateway has SRs when it is a Tier-0 Gateway, or when it is a Tier-1 Gateway and has services configured such as NAT or DHCP.


Tunnel Endpoint

Tunnel endpoints enable ESXi hosts to participate in an NSX-T overlay. The NSX-T overlay deploys a Layer 2 network on top of an existing Layer 3 network fabric by encapsulating frames inside packets and transferring the packets over an underlying transport network. The underlying transport network can be another Layer 2 network or it can cross Layer 3 boundaries. The Tunnel Endpoint (TEP) is the connection point at which the encapsulation and decapsulation take place.

NSX-T Edges

NSX-T Edges provide routing services and connectivity to networks that are external to the NSX-T deployment. You use an NSX-T Edge for establishing external connectivity from the NSX-T domain by using a Tier-0 Gateway with BGP or static routing. Additionally, you deploy an NSX-T Edge to support network address translation (NAT) services at either the Tier-0 or Tier-1 Gateway.

The NSX-T Edge connects isolated, stub networks to shared uplink networks by providing common gateway services such as NAT and dynamic routing.

Logical Firewall

NSX-T handles traffic in and out of the network according to firewall rules.

A logical firewall offers multiple sets of configurable Layer 3 and Layer 2 rules. Layer 2 firewall rules are processed before Layer 3 rules. You can configure an exclusion list to exclude segments, logical ports, or groups from firewall enforcement.

The default rule, which is at the bottom of the rule table, is a catchall rule. The logical firewall enforces the default rule on packets that do not match other rules. After the host preparation operation, the default rule is set to the allow action. Change this default rule to a block action and apply access control through a positive control model, that is, only traffic defined in a firewall rule can flow on the network.

Logical Load Balancer

The NSX-T logical load balancer offers high-availability service for applications and distributes the network traffic load among multiple servers.

The load balancer accepts TCP, UDP, HTTP, or HTTPS requests on the virtual IP address and determines which pool server to use.

The logical load balancer is supported only on the Tier-1 Gateway.

NSX-T Network Requirements and Sizing for NSX-T Workload Domains

NSX-T requirements impact both physical and virtual networks.

Physical Network Requirements

Physical requirements determine the MTU size for networks that carry overlay traffic, dynamic routing support, time synchronization through an NTP server, and forward and reverse DNS resolution.


Requirement: Provide an MTU size of 1600 or greater on any network that carries Geneve overlay traffic.
Comments: Geneve packets cannot be fragmented. The MTU size must be large enough to support extra encapsulation overhead. This design uses an MTU size of 9000 for Geneve traffic. See Table 3-4. Jumbo Frames Design Decisions.

Requirement: Enable dynamic routing support on the upstream Layer 3 devices.
Comments: You use BGP on the upstream Layer 3 devices to establish routing adjacency with the Tier-0 SRs.

Requirement: Provide an NTP server.
Comments: The NSX-T Manager requires NTP settings that synchronize it with the rest of the environment.

Requirement: Establish forward and reverse DNS resolution for all management VMs.
Comments: Enables administrators to access the NSX-T environment via FQDN instead of memorizing IP addresses.

NSX-T Component Specifications

When you size the resources for NSX-T components, consider the compute and storage requirements for each component, and the number of nodes per component type.

The size of NSX Edge services gateways might be different according to tenant requirements. Consider all options in such a case.

Table 3-19. Resource Specification of the NSX-T Components

Virtual Machine: NSX-T Manager
vCPU: 12 (Large)
Memory (GB): 48 (Large)
Storage (GB): 200 (Large)
Quantity per NSX-T Deployment: 3

Virtual Machine: NSX-T Edge virtual machine
vCPU: 2 (Small, PoC only), 4 (Medium), or 8 (Large)
Memory (GB): 4 (Small, PoC only), 8 (Medium), or 32 (Large)
Storage (GB): 200 (all sizes)
Quantity per NSX-T Deployment: Numbers are different according to the use case. At least two edge devices are required to enable ECMP routing.

Table 3-20. Design Decisions on Sizing the NSX-T Edge Virtual Machines

Decision ID: NSXT-VI-SDN-008
Design Decision: Use large-size NSX-T Edge virtual machines.
Design Justification: The large-size appliance provides the required performance characteristics if a failure occurs. Virtual edges provide simplified lifecycle management.
Design Implication: Large-size edges consume more CPU and memory resources.

NSX-T Network Virtualization Conceptual Design for NSX-T Workload Domains

This conceptual design for NSX-T provides the network virtualization design of the logical components that handle the data to and from tenant workloads in the environment.


The network virtualization conceptual design includes a perimeter firewall, a provider logical router, and the NSX-T Gateway. It also considers the external network, internal workload networks, and the management network.

Figure 3-6. NSX-T Conceptual Overview

The figure shows external networks (Internet and MPLS) behind a perimeter firewall, upstream Layer 3 devices, the NSX-T Service Router (SR) and NSX-T Distributed Router (DR), internal tenant networks (segments) whose virtual machines are protected by the vNIC-level distributed firewall, and the management network.

The conceptual design has the following components.

External Networks: Connectivity to and from external networks is through the perimeter firewall.

Perimeter Firewall: The firewall exists at the perimeter of the data center to filter Internet traffic.

Upstream Layer 3 Devices: The upstream Layer 3 devices are behind the perimeter firewall and handle North-South traffic that is entering and leaving the NSX-T environment. In most cases, this layer consists of a pair of top of rack switches or redundant upstream Layer 3 devices such as core routers.

NSX-T Service Router (SR): The SR component of the NSX-T Tier-0 Gateway is responsible for establishing eBGP peering with the upstream Layer 3 devices and enabling North-South routing.

NSX-T Distributed Router (DR): The DR component of the NSX-T Gateway is responsible for East-West routing.


Management Network: The management network is a VLAN-backed network that supports all management components such as NSX-T Manager and NSX-T Controllers.

Internal Tenant Networks: Internal tenant networks are NSX-T Segments and provide connectivity for the tenant workloads. Workloads are directly connected to these networks. Internal tenant networks are then connected to a DR.

Cluster Design for NSX-T Workload Domains

The NSX-T design uses management, and shared edge and compute clusters. You can add more compute clusters for scale-out, or different workload types or SLAs.

The logical NSX-T design considers the vSphere clusters and defines the place where each NSX component runs.


Figure 3-7. SDDC Logical Design

The figure shows the management cluster and the shared edge and compute cluster. The management cluster, managed by the Management vCenter Server and attached to the external (Internet/corporate) network, runs the virtual infrastructure management components: the Management and Compute vCenter Server instances, the NSX-V Manager, Controllers, and Edges for the management domain, the NSX-T Managers for the compute domain, and other management applications, with the management ESXi hosts participating in the NSX-V transport zones through the management vDS. The shared edge and compute cluster, managed by the Compute vCenter Server and attached to the internal SDDC network, runs the NSX-T Edges and the SDDC payload workloads on ESXi transport nodes that participate in the NSX-T transport zones through the compute N-VDS.

Management Cluster

The management cluster contains all components for managing the SDDC. This cluster is a core component of the VMware Validated Design for Software-Defined Data Center. For information about the management cluster design, see the Architecture and Design documentation in VMware Validated Design for Software-Defined Data Center.

NSX-T Edge Node Cluster

The NSX-T Edge cluster is a logical grouping of NSX-T Edge virtual machines. These NSX-T Edge virtual machines run in the vSphere shared edge and compute cluster and provide North-South routing and network services for workloads in the compute clusters.


Shared Edge and Compute Cluster

In the shared edge and compute cluster, ESXi hosts are prepared for NSX-T. As a result, they can be configured as transport nodes and can participate in the overlay network. All tenant workloads and NSX-T Edge virtual machines run in this cluster.

Table 3-21. Cluster Design Decisions

Decision ID: NSXT-VI-SDN-009
Design Decision: For virtual infrastructure workload domains, do not dedicate an edge cluster in vSphere.
Design Justification: Simplifies configuration and minimizes the number of ESXi hosts required for initial deployment.
Design Implication: The NSX-T Edge virtual machines are deployed in the shared edge and compute cluster. Because of the shared nature of the cluster, to avoid an impact on network performance, you must scale out the cluster as you add tenant workloads.

Decision ID: NSXT-VI-SDN-010
Design Decision: Deploy at least two large-size NSX-T Edge virtual machines in the shared edge and compute cluster.
Design Justification: Creates the NSX-T Edge cluster, and meets availability and scale requirements.
Design Implication: When additional edge VMs are added, the resource pool memory reservation must be adjusted.

Decision ID: NSXT-VI-SDN-011
Design Decision: Apply VM-VM anti-affinity rules in vSphere Distributed Resource Scheduler (vSphere DRS) to the NSX-T Manager appliances.
Design Justification: Keeps the NSX-T Manager appliances running on different ESXi hosts for high availability.
Design Implication: Requires at least four physical hosts to guarantee that the three NSX-T Manager appliances continue to run if an ESXi host failure occurs. Additional configuration is required to set up anti-affinity rules.

Decision ID: NSXT-VI-SDN-012
Design Decision: Apply VM-VM anti-affinity rules for vSphere DRS to the virtual machines of the NSX-T Edge cluster.
Design Justification: Keeps the NSX-T Edge virtual machines running on different ESXi hosts for high availability.
Design Implication: You must perform additional configuration to set up the anti-affinity rules.

High Availability of NSX-T Components

The NSX-T Managers run on the management cluster. vSphere HA protects the NSX-T Managers by restarting the NSX-T Manager virtual machine on a different ESXi host if a primary ESXi host failure occurs.

The data plane remains active during outages in the management and control planes, although the provisioning and modification of virtual networks is impaired until those planes become available again.

The NSX-T Edge virtual machines are deployed on the shared edge and compute cluster. vSphere DRS anti-affinity rules prevent NSX-T Edge virtual machines that belong to the same NSX-T Edge cluster from running on the same ESXi host.
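As an illustration only, the following pyVmomi sketch adds a VM-VM anti-affinity rule for two edge virtual machines to a vSphere cluster. It assumes an established ServiceInstance connection (see the earlier sketch) and already-resolved managed objects; the rule name is hypothetical.

from pyVmomi import vim

def add_anti_affinity_rule(cluster, edge_vm1, edge_vm2):
    # VM-VM anti-affinity rule that keeps the two edge VMs on different hosts.
    rule = vim.cluster.AntiAffinityRuleSpec(
        name="anti-affinity-rule-nsxt-edges",  # hypothetical rule name
        enabled=True,
        vm=[edge_vm1, edge_vm2],
    )
    spec = vim.cluster.ConfigSpecEx(
        rulesSpec=[vim.cluster.RuleSpec(operation="add", info=rule)]
    )
    # modify=True merges the rule into the existing cluster configuration.
    return cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)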

NSX-T SRs for North-South routing are configured in equal-cost multi-path (ECMP) mode that supports route failover in seconds.


Replication Mode of Segments for NSX-T Workload Domains

The control plane decouples NSX-T from the physical network, and handles the broadcast, unknown unicast, and multicast (BUM) traffic in the segments (logical switches).

The following options are available for BUM replication on segments.

Table 3-22. BUM Replication Mode of NSX-T Segments

Hierarchical Two-Tier: In this mode, the ESXi host transport nodes are grouped according to their TEP IP subnet. One ESXi host in each subnet is responsible for replication to an ESXi host in another subnet. The receiving ESXi host replicates the traffic to the ESXi hosts in its local subnet. The source ESXi host transport node knows about the groups based on information it has received from the NSX-T control cluster. The system can select an arbitrary ESXi host transport node as the mediator for the source subnet if the remote mediator ESXi host node is available.

Head-End: In this mode, the ESXi host transport node at the origin of the frame to be flooded on a segment sends a copy to every other ESXi host transport node that is connected to this segment.

Table 3-23. Design Decisions on Segment Replication Mode

Decision ID: NSXT-VI-SDN-013
Design Decision: Use hierarchical two-tier replication on all segments.
Design Justification: Hierarchical two-tier replication is more efficient because it reduces the number of ESXi hosts to which the source ESXi host must replicate traffic.
Design Implication: None.

Transport Zone Design for NSX-T Workload Domains

Transport zones determine which hosts can participate in the use of a particular network. A transport zone identifies the type of traffic, VLAN or overlay, and the N-VDS name. You can configure one or more transport zones. A transport zone does not represent a security boundary.


Table 3-24. Transport Zones Design Decisions

Decision ID: NSXT-VI-SDN-014
Design Decision: Create a single transport zone for all overlay traffic across all workload domains.
Design Justification: Ensures all Segments are available to all ESXi hosts and edge virtual machines configured as transport nodes.
Design Implication: None.

Decision ID: NSXT-VI-SDN-015
Design Decision: Create a VLAN transport zone for ESXi host VMkernel ports.
Design Justification: Enables the migration of ESXi host VMkernel ports to the N-VDS.
Design Implication: The N-VDS name must match the N-VDS name in the overlay transport zone.

Decision ID: NSXT-VI-SDN-016
Design Decision: Create two transport zones for edge virtual machine uplinks.
Design Justification: Enables the edge virtual machines to use equal-cost multi-path routing (ECMP).
Design Implication: You must specify a VLAN range, that is, use VLAN trunking, on the segment used as the uplinks.

Network I/O Control Design for NSX-T Workload Domains

When a Network I/O Control profile is attached to an N-VDS, during contention the switch allocates available bandwidth according to the configured shares, limit, and reservation for each vSphere traffic type.

How Network I/O Control Works

Network I/O Control enforces the share value specified for the different traffic types only when there is network contention. When contention occurs, Network I/O Control applies the share values set to each traffic type. As a result, less important traffic, as defined by the share percentage, is throttled, granting access to more network resources to more important traffic types.

Network I/O Control also supports the reservation of bandwidth for system traffic according to the overall percentage of available bandwidth.
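The following minimal sketch, using share values from Table 3-25, a hypothetical 10 Gbps uplink, and an assumed set of traffic types contending at the same time, shows how shares translate into bandwidth during contention:

# Share values taken from the design decisions in Table 3-25.
shares = {
    "virtual_machine": 100,
    "vsan": 100,
    "management": 50,
    "vmotion": 25,
    "nfs": 25,
}
uplink_gbps = 10  # hypothetical physical NIC speed

# During contention, each active traffic type receives bandwidth in
# proportion to its share of the total shares on the uplink.
total = sum(shares.values())
for traffic, share in shares.items():
    print(f"{traffic}: {uplink_gbps * share / total:.2f} Gbps")
# virtual_machine and vsan each get about 3.33 Gbps, management about
# 1.67 Gbps, and vmotion and nfs about 0.83 Gbps each.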

Network I/O Control Heuristics

The following heuristics can help with design decisions.

Shares vs. Limits: When you use bandwidth allocation, consider using shares instead of limits. Limits impose hard limits on the amount of bandwidth used by a traffic flow even when network bandwidth is available.

Limits on Network Resource Pools: Consider imposing limits on a resource pool. For example, set a limit on vSphere vMotion traffic to avoid oversubscription at the physical network level when multiple vSphere vMotion data transfers are initiated on different ESXi hosts at the same time. By limiting the available bandwidth for vSphere vMotion at the ESXi host level, you can prevent performance degradation for other traffic.

Network I/O Control Design Decisions

Based on the heuristics, this design has the following decisions.


Table 3-25. Network I/O Control Design Decisions

Decision ID: NSXT-VI-SDN-017
Design Decision: Create and attach a Network I/O Control policy on all N-VDS switches.
Design Justification: Increases resiliency and performance of the network.
Design Implication: If configured incorrectly, Network I/O Control might impact network performance for critical traffic types.

Decision ID: NSXT-VI-SDN-018
Design Decision: Set the share value for vSphere vMotion traffic to Low (25).
Design Justification: During times of network contention, vSphere vMotion traffic is not as important as virtual machine or storage traffic.
Design Implication: During times of network contention, vMotion takes longer than usual to complete.

Decision ID: NSXT-VI-SDN-019
Design Decision: Set the share value for vSphere Replication traffic to Low (25).
Design Justification: During times of network contention, vSphere Replication traffic is not as important as virtual machine or storage traffic.
Design Implication: During times of network contention, vSphere Replication takes longer and might violate the defined SLA.

Decision ID: NSXT-VI-SDN-020
Design Decision: Set the share value for vSAN traffic to High (100).
Design Justification: During times of network contention, vSAN traffic needs guaranteed bandwidth to support virtual machine performance.
Design Implication: None.

Decision ID: NSXT-VI-SDN-021
Design Decision: Set the share value for management traffic to Normal (50).
Design Justification: By keeping the default setting of Normal, management traffic is prioritized higher than vSphere vMotion and vSphere Replication but lower than vSAN traffic. Management traffic is important because it ensures that the hosts can still be managed during times of network contention.
Design Implication: None.

Decision ID: NSXT-VI-SDN-022
Design Decision: Set the share value for NFS traffic to Low (25).
Design Justification: Because NFS is used for secondary storage, such as backups and vRealize Log Insight archives, its priority is lower than the priority of the vSAN traffic.
Design Implication: During times of network contention, backups are slower than usual.

Decision ID: NSXT-VI-SDN-023
Design Decision: Set the share value for backup traffic to Low (25).
Design Justification: During times of network contention, the primary functions of the SDDC must continue to have access to network resources with priority over backup traffic.
Design Implication: During times of network contention, backups are slower than usual.

Decision ID: NSXT-VI-SDN-024
Design Decision: Set the share value for virtual machines to High (100).
Design Justification: Virtual machines are the most important asset in the SDDC. Leaving the default setting of High ensures that they always have access to the network resources they need.
Design Implication: None.


Decision ID: NSXT-VI-SDN-025
Design Decision: Set the share value for vSphere Fault Tolerance to Low (25).
Design Justification: This design does not use vSphere Fault Tolerance. Fault tolerance traffic can be set to the lowest priority.
Design Implication: None.

Decision ID: NSXT-VI-SDN-026
Design Decision: Set the share value for iSCSI traffic to Low (25).
Design Justification: This design does not use iSCSI. iSCSI traffic can be set to the lowest priority.
Design Implication: None.

Transport Node and Uplink Policy Design for NSX-T Workload Domains

A transport node can participate in an NSX-T overlay or NSX-T VLAN network.

Several types of transport nodes are available in NSX-T.

ESXi Host Transport Nodes: ESXi host transport nodes are ESXi hosts prepared and configured for NSX-T. The N-VDS provides network services to the virtual machines running on these ESXi hosts.

Edge Nodes: NSX-T Edge nodes are service appliances that run network services that cannot be distributed to the hypervisors. They are grouped in one or several NSX-T Edge clusters. Each cluster represents a pool of capacity.

Uplink profiles define policies for the links from ESXi hosts to NSX-T Segments or from NSX-T Edge virtual machines to top of rack switches. By using uplink profiles, you can apply consistent configuration of capabilities for network adapters across multiple ESXi hosts or edge virtual machines. Uplink profiles are containers for the properties or capabilities for the network adapters.

Uplink profiles can use either load balance source or failover order teaming. If using load balance source, multiple uplinks can be active. If using failover order, only a single uplink can be active.
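As a hedged sketch, an uplink profile such as the one in NSXT-VI-SDN-027 can be created through the NSX-T management API (POST /api/v1/host-switch-profiles). The display name and transport VLAN ID are placeholders for this design's values.

import requests

NSX_MANAGER = "sfo01wnsx01.example.local"  # placeholder FQDN
AUTH = ("admin", "***")

# Uplink profile with two active uplinks and load balance source teaming,
# as used for ESXi host transport nodes in this design.
payload = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "sfo01-w-uplink-profile-esxi",  # placeholder name
    "mtu": 9000,
    "transport_vlan": 1644,  # placeholder host overlay (TEP) VLAN ID
    "teaming": {
        "policy": "LOADBALANCE_SRCID",  # FAILOVER_ORDER for edge VM profiles
        "active_list": [
            {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
            {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
        ],
    },
}
resp = requests.post(
    f"https://{NSX_MANAGER}/api/v1/host-switch-profiles",
    json=payload,
    auth=AUTH,
    verify=False,  # lab only
)
resp.raise_for_status()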


Table 3-26. Design Decisions on Transport Nodes and Uplink Policy

Decision ID: NSXT-VI-SDN-027
Design Decision: Create an uplink profile with the load balance source teaming policy with two active uplinks for ESXi hosts.
Design Justification: For increased resiliency and performance, supports the concurrent use of both physical NICs on the ESXi hosts that are configured as transport nodes.
Design Implication: You can use this policy only with ESXi hosts. Edge virtual machines must use the failover order teaming policy.

Decision ID: NSXT-VI-SDN-028
Design Decision: Create an uplink profile with the failover order teaming policy with one active uplink and no standby uplinks for edge virtual machine overlay traffic.
Design Justification: Provides a profile according to the requirements for edge virtual machines. Edge virtual machines support uplink profiles only with a failover order teaming policy. A VLAN ID is required in the uplink profile. Hence, you must create an uplink profile for each VLAN used by the edge virtual machines.
Design Implication:
n You create and manage more uplink profiles.
n The VLAN ID used must be different than the VLAN ID for ESXi host overlay traffic.

Decision ID: NSXT-VI-SDN-029
Design Decision: Create two uplink profiles with the failover order teaming policy with one active uplink and no standby uplinks for edge virtual machine uplink traffic.
Design Justification: Enables ECMP because the edge virtual machine can uplink to the physical network over two different VLANs.
Design Implication: You create and manage more uplink profiles.

Decision ID: NSXT-VI-SDN-030
Design Decision: Create a Transport Node Policy with the VLAN and overlay transport zones, N-VDS settings, physical NICs, and network mappings to allow NSX-T to migrate the VMkernel adapters to the N-VDS.
Design Justification: Allows the policy to be assigned directly to the vSphere cluster and ensures consistent configuration across all ESXi host transport nodes.
Design Implication: You must create all required transport zones and segments before creating the Transport Node Policy.

Decision ID: NSXT-VI-SDN-031
Design Decision: Add as transport nodes all ESXi hosts in the shared edge and compute cluster by applying the Transport Node Policy to the vSphere cluster object.
Design Justification: Enables the participation of ESXi hosts and the virtual machines on them in NSX-T overlay and VLAN networks.
Design Implication: ESXi host VMkernel adapters are migrated to the N-VDS. Ensure the VLAN IDs for the VMkernel segments are correct so that host communication is not lost.

Decision ID: NSXT-VI-SDN-032
Design Decision: Add as transport nodes all edge virtual machines.
Design Justification: Enables the participation of edge virtual machines in the overlay network and the delivery of services, such as routing, by these machines.
Design Implication: None.

Decision ID: NSXT-VI-SDN-033
Design Decision: Create an NSX-T Edge cluster with the default Bidirectional Forwarding Detection (BFD) settings containing the edge transport nodes.
Design Justification: Satisfies the availability requirements by default. Edge clusters are required to create services such as NAT, routing to physical networks, and load balancing.
Design Implication: None.


Routing Design by Using NSX-T for NSX-T Workload Domains

The routing design considers different levels of routing in the environment, such as number and type of NSX-T routers, dynamic routing protocol, and so on. At each level, you apply a set of principles for designing a scalable routing solution.

Routing can be defined in the following directions: North-South and East-West.

n North-South traffic is traffic leaving or entering the NSX-T domain, for example, a virtual machine on an overlay network communicating with an end-user device on the corporate network.

n East-West traffic is traffic that remains in the NSX-T domain, for example, two virtual machines on the same or different segments communicating with each other.

Table 3-27. Design Decisions on Routing Using NSX-T

Decision ID: NSXT-VI-SDN-034
Design Decision: Create two VLANs to enable ECMP between the Tier-0 Gateway and the Layer 3 device (ToR or upstream device). The ToR switches or upstream Layer 3 devices have an SVI on one of the two VLANs, and each edge virtual machine has an interface on each VLAN.
Design Justification: Supports multiple equal-cost routes on the Tier-0 Gateway and provides more resiliency and better bandwidth use in the network.
Design Implication: Extra VLANs are required.

Decision ID: NSXT-VI-SDN-035
Design Decision: Deploy an Active-Active Tier-0 Gateway.
Design Justification: Supports ECMP North-South routing on all edge virtual machines in the NSX-T Edge cluster.
Design Implication: Active-Active Tier-0 Gateways cannot provide services such as NAT. If you deploy a specific solution that requires stateful services on the Tier-0 Gateway, such as VMware Enterprise PKS, you must deploy a Tier-0 Gateway in Active-Standby mode.

Decision ID: NSXT-VI-SDN-036
Design Decision: Use BGP as the dynamic routing protocol.
Design Justification: Enables dynamic routing by using NSX-T. NSX-T supports only BGP.
Design Implication: In environments where BGP cannot be used, you must configure and manage static routes.

Decision ID: NSXT-VI-SDN-037
Design Decision: Configure the BGP Keep Alive Timer to 4 and the Hold Down Timer to 12 between the ToR switches and the Tier-0 Gateway.
Design Justification: Provides a balance between failure detection between the ToR switches and the Tier-0 Gateway and overburdening the ToRs with keep alive traffic.
Design Implication: By using longer timers to detect if a router is not responding, the data about such a router remains in the routing table longer. As a result, the active router continues to send traffic to a router that is down.


Decision ID: NSXT-VI-SDN-038
Design Decision: Do not enable Graceful Restart between BGP neighbors.
Design Justification: Avoids loss of traffic. Graceful Restart maintains the forwarding table, which in turn forwards packets to a down neighbor even after the BGP timers have expired, causing loss of traffic.
Design Implication: None.

Decision ID: NSXT-VI-SDN-039
Design Decision: Deploy a Tier-1 Gateway to the NSX-T Edge cluster and connect it to the Tier-0 Gateway.
Design Justification: Creates a two-tier routing architecture that supports load balancers and NAT. Because the Tier-1 is always Active/Standby, creation of services such as load balancers or NAT is possible.
Design Implication: A Tier-1 Gateway can only be connected to a single Tier-0 Gateway. In scenarios where multiple Tier-0 Gateways are required, you must create multiple Tier-1 Gateways.

Decision ID: NSXT-VI-SDN-040
Design Decision: Deploy Tier-1 Gateways with the Non-Preemptive setting.
Design Justification: Ensures that when the failed edge transport node comes back online, it does not move services back to itself, which would result in a small service outage.
Design Implication: None.

Virtual Network Design Example Using NSX-T for NSX-T Workload Domains

Design a setup of virtual networks where you determine the connection of virtual machines to Segments and the routing between the Tier-1 Gateway and Tier-0 Gateway, and then between the Tier-0 Gateway and the physical network.


Figure 3-8. Virtual Network Example

The figure shows an Active/Active Tier-0 Gateway that connects to the physical Layer 3 devices over two uplink VLANs using ECMP. The Tier-0 Gateway runs on an NSX-T Edge cluster of two edge virtual machines (Edge VM 1 and Edge VM 2) and has a Tier-1 to Tier-0 connection to a Tier-1 Gateway. The Tier-1 Gateway serves two segments with Linux and Windows virtual machines: Segment 1 (192.168.100.0/24) and Segment 2 (192.168.200.0/24).

Monitoring NSX-T for NSX-T Workload Domains

Monitor the operation of NSX-T for identifying failures in the network setup by using vRealize Log Insight and vRealize Operations Manager.

n vRealize Log Insight saves log queries and alerts, and you can use dashboards for efficient monitoring.

n vRealize Operations Manager provides alerts, capacity management, and custom views and dashboards.


Table 3-28. Design Decisions on Monitoring NSX-T

Decision ID: NSXT-VI-SDN-041
Design Decision: Install the content pack for NSX-T in vRealize Log Insight.
Design Justification: Adds a dashboard and metrics for granular monitoring of the NSX-T infrastructure.
Design Implication: Requires manually installing the content pack.

Decision ID: NSXT-VI-SDN-042
Design Decision: Configure each NSX-T component to send log information over syslog to the vRealize Log Insight cluster VIP.
Design Justification: Ensures that all NSX-T component log files are available for monitoring and troubleshooting in vRealize Log Insight.
Design Implication: Requires manually configuring syslog on each NSX-T component.

Use of SSL Certificates in NSX-T for NSX-T Workload Domains

By default, NSX-T Manager uses a self-signed Secure Sockets Layer (SSL) certificate. This certificate is not trusted by end-user devices or Web browsers.

As a best practice, replace self-signed certificates with certificates that are signed by a third-party or enterprise Certificate Authority (CA).
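As a hedged sketch, after a CA-signed certificate is imported into NSX-T, it can be applied to a manager node and to the cluster virtual IP address with the following API calls. The certificate ID is a placeholder for the ID returned by the certificate import.

import requests

NSX_MANAGER = "sfo01wnsx01.example.local"  # placeholder FQDN
AUTH = ("admin", "***")
CERT_ID = "<imported-certificate-uuid>"     # placeholder

# Apply the CA-signed certificate to the manager node API endpoint.
requests.post(
    f"https://{NSX_MANAGER}/api/v1/node/services/http"
    f"?action=apply_certificate&certificate_id={CERT_ID}",
    auth=AUTH,
    verify=False,  # lab only, until the trusted certificate is in place
).raise_for_status()

# Apply the certificate to the NSX-T Manager cluster virtual IP address.
requests.post(
    f"https://{NSX_MANAGER}/api/v1/cluster/api-certificate"
    f"?action=set_cluster_certificate&certificate_id={CERT_ID}",
    auth=AUTH,
    verify=False,
).raise_for_status()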

Table 3-29. Design Decisions on the SSL Certificate of NSX-T Manager

Decision ID: NSXT-VI-SDN-043
Design Decision: Replace the certificate of the NSX-T Manager instances with a certificate that is signed by a third-party Public Key Infrastructure.
Design Justification: Ensures that the communication between NSX-T administrators and the NSX-T Manager instance is encrypted by using a trusted certificate.
Design Implication: Replacing and managing certificates is an operational overhead.

Decision ID: NSXT-VI-SDN-044
Design Decision: Replace the NSX-T Manager cluster certificate with a certificate that is signed by a third-party Public Key Infrastructure.
Design Justification: Ensures that the communication between the virtual IP address of the NSX-T Manager cluster and administrators is encrypted by using a trusted certificate.
Design Implication: Replacing and managing certificates is an operational overhead.
