
Architecture and Design for the Management Domain

Modified on 28 JUL 2020
VMware Validated Design 6.0
VMware Cloud Foundation 4.0


You can find the most up-to-date technical documentation on the VMware website at:

https://docs.vmware.com/

VMware, Inc.
3401 Hillview Ave.
Palo Alto, CA 94304
www.vmware.com

Copyright © 2016-2020 VMware, Inc. All rights reserved. Copyright and trademark information.


Contents

About Architecture and Design for the Management Domain

1 Architecture Overview for the Management Domain

2 Detailed Design for Management Domain

  Physical Infrastructure Design

    Availability Zones and Regions for the Management Domain

    Workload Domains and Racks for the Management Domain

  Virtual Infrastructure Design

    ESXi Design for the Management Domain

    vCenter Server Design for the Management Domain

    vSphere Networking Design for the Management Domain

    Software-Defined Networking Design for the Management Domain

    Shared Storage Design for the Management Domain

  Cloud Operations Design

    SDDC Manager Detailed Design

  Security and Compliance Design

    Region-Specific Workspace ONE Access Design


About Architecture and Design for the Management Domain

The Architecture and Design for the Management Domain document contains a validated design model for the management domain of the Software-Defined Data Center (SDDC).

Chapter 1 Architecture Overview for the Management Domain discusses the building blocks and the main principles of the management domain. Chapter 2 Detailed Design for Management Domain provides the available design options according to the design objectives, and a set of design decisions to justify the path that you select for building each component.

Intended Audience

The Architecture and Design for the Management Domain document is intended for cloud architects who are familiar with VMware software and want to use it to deploy and manage an SDDC that meets the requirements for capacity, scalability, backup and restore, and extensibility for disaster recovery support.

Required VMware Software

Architecture and Design for the Management Domain is validated with specific versions of the required VMware products. For the supported product versions, see the VMware Validated Design Release Notes.

Before You Apply This Guidance

The sequence of the VMware Validated Design documentation follows the stages for implementing and maintaining an SDDC.

If you plan to deploy an SDDC by following the prescriptive path of VMware Validated Design, to apply Architecture and Design for the Management Domain, you must be acquainted with Introducing VMware Validated Design. See Guided Documentation Map of VMware Validated Design.

If you plan to deploy an SDDC by following the VMware Cloud Foundation documentation, you must be acquainted with Introducing VMware Cloud Foundation. See the VMware Cloud Foundation documentation page.


Update History

This Architecture and Design for the Management Domain is updated with each release of the product or when necessary.

Revision / Description

28 JUL 2020

n To size the vSAN storage for the management domain, you now consider the requirements for four Workspace ONE Access virtual machines: one for the region-specific Workspace ONE Access instance and three for the cross-region Workspace ONE Access instance. See vSAN Physical Design for the Management Domain.

n Improved wording to promote a more inclusive culture in accordance with VMware values.

25 JUN 2020

n The MTU value for management VLANs and SVIs changed to 1500 bytes for the access port network configurations. See Physical Network Infrastructure Design for NSX-T Data Center for the Management Domain.

n The management VLAN for the second availability zone with ID 1641 is no longer required to be stretched. See Physical Network Infrastructure Design for NSX-T Data Center for the Management Domain.

14 APR 2020

Initial release.


1 Architecture Overview for the Management Domain

By implementing the design for the SDDC, an IT organization can automate the provisioning of common repeatable requests and respond to business needs with agility and predictability. This SDDC design provides an IT solution with features across many areas such as operations management, cloud management, business continuity, and security and compliance.

Figure 1-1. Architecture Overview of the SDDC Management Domain in a Region

[Figure: The management domain contains ESXi, vCenter Server, NSX-T, SDDC Manager, the region-specific Workspace ONE Access instance, and vSAN. Each workload domain contains shared storage (vSAN, NFS, or VMFS), a vCenter Server instance, NSX-T (in a 1:1 or 1:N mapping), and the VMware solution for Kubernetes. Solution add-ons, such as the cloud operations and automation add-on, contain vRealize Suite Lifecycle Manager, vRealize Operations Manager, vRealize Log Insight, vRealize Automation, and the cross-region Workspace ONE Access instance. The legend distinguishes the consolidated SDDC architecture from the standard SDDC architecture.]


SDDC Architectures

You start SDDC deployment from the management domain and extend it with more virtual infrastructure and solutions. You select a deployment architecture according to the number of tenant workloads you plan to support and the available virtual infrastructure.

Standard SDDC Architecture

In a standard deployment, the management domain consists of workloads supporting the virtual infrastructure, cloud operations, cloud automation, business continuity, and security and compliance components for the SDDC. You allocate separate workload domains to tenant or containerized workloads. Each workload domain is managed by a separate vCenter Server instance and a dedicated or shared NSX-T Manager cluster for scalability. The workload domain construct also has autonomous licensing and life cycle management. The vCenter Server and NSX-T Manager components for these workload domains also run in the management domain.

The scope of this validated design is the standard architecture.

Consolidated SDDC Architecture

In a consolidated deployment, the management domain runs both the SDDC management workloads and tenant workloads.

Management Domain Architecture

The management domain runs all management components of the SDDC for both the management domain and workload domains, except for workload NSX-T Edge nodes and vSphere with Kubernetes components. You start with an initial management domain configuration, which is extended with each workload domain deployment. To extend the capabilities of the SDDC, you can also deploy additional solutions in the management domain, for example, solutions for cloud operations and cloud automation.

Table 1-1. Initial Component Configuration of the Management Domain in Each Region

Management Component / Services

ESXi Virtual infrastructure for running the SDDC management components. See ESXi Design for the Management Domain.

vCenter Server Central management and protection of the ESXi hosts and the management appliances running on the hosts. See vCenter Server Design for the Management Domain.

NSX-T Logical switching, dynamic routing, and load balancing for the SDDC management components. In the initial component configuration of the management domain, the management NSX-T instance provides virtual network segments to the region-specific Workspace ONE Access instance. See Software-Defined Networking Design for the Management Domain.

vSAN Primary software-defined storage for all SDDC management components. See Shared Storage Design for the Management Domain.


SDDC Manager n Virtual infrastructure provisioning and life cycle management of workload domains.

n Life cycle management and provisioning of additional virtual infrastructure to the management domain for ESXi, vCenter Server, NSX-T, and vRealize Suite Lifecycle Manager. In the initial component configuration of the management domain, SDDC Manager performs life cycle management for ESXi, vCenter Server, and NSX-T.

See SDDC Manager Detailed Design.

Region-specific Workspace ONE Access instance

Centralized identity and access management. In the initial component configuration of the management domain, the region-specific Workspace ONE Access instance is connected to the management NSX-T Manager cluster. See Region-Specific Workspace ONE Access Design.

Availability Zones and Regions

The SDDC design consists of one region that includes at least one management domain but can also include one or more workload domains. Clusters within a region can use two availability zones.

This design uses a single region, with the option to use one or two availability zones in Region A.

Figure 1-2. Component Location in a Single Availability Zone

[Figure: Management workloads run in a management cluster of four ESXi hosts that is managed by the management domain vCenter Server and uses a vSphere Distributed Switch with NSX-T and vSAN.]


Figure 1-3. Component Location in Multiple Availability Zones

[Figure: Management workloads run in a management cluster that spans Availability Zone 1 and Availability Zone 2, with four ESXi hosts in each zone. The cluster is managed by the management domain vCenter Server and uses a vSphere Distributed Switch with NSX-T and vSAN.]


2 Detailed Design for Management Domain

The management domain detailed design considers components for physical infrastructure, virtual infrastructure, cloud operations, and security and compliance. It includes numbered design decisions, and the justification and implications of each decision.

This design provides two design options for availability zone setup. In certain areas, configurations and design decision alternatives are specific to a single availability zone setup and to a multiple availability zone setup.

This chapter includes the following topics:

n Physical Infrastructure Design

n Virtual Infrastructure Design

n Cloud Operations Design

n Security and Compliance Design

Physical Infrastructure Design

The physical infrastructure design includes deciding on the configuration of availability zones and regions, and the cluster layout in data center racks.


Figure 2-1. Physical Infrastructure in the SDDC

[Figure: The SDDC layers and their capabilities: cloud operations (monitoring, logging, life cycle management), business continuity (fault tolerance and disaster recovery, backup and restore, replication), security and compliance (security policies, industry regulations, identity and access management), cloud automation (service catalog, self-service portal, orchestration), virtual infrastructure (hypervisor, pools of resources, virtualization control), and physical infrastructure (compute, storage, network).]

n Availability Zones and Regions for the Management Domain

Availability zones and regions have different purposes. Availability zones protect against failures of individual hosts. You can consider regions to place workloads closer to your customers, comply with data privacy laws and restrictions, and support disaster recovery solutions for the entire SDDC.

n Workload Domains and Racks for the Management Domain

Availability Zones and Regions for the Management Domain

Availability zones and regions have different purposes. Availability zones protect against failures of individual hosts. You can consider regions to place workloads closer to your customers, comply with data privacy laws and restrictions, and support disaster recovery solutions for the entire SDDC.

This design uses a single region for SDDC management components with two availability zones.

Availability zones

An availability zone is the fault domain of the SDDC. Multiple availability zones can provide continuous availability of an SDDC, minimize downtime of services, and improve SLAs.


Availability Zone Characteristic / Description

Outage prevention

You avoid outages and improve SLAs. An outage that is caused by external factors, such as power supply, cooling, and physical integrity, affects only one zone. These factors do not cause outage in other zones except in the case of major disasters.

Reliability Each availability zone runs on its own physically distinct, independent infrastructure, and is engineered to be highly reliable. Each zone should have independent power, cooling, network, and security. Do not share common points of failures in a physical data center, like generators and cooling equipment, across availability zones. Additionally, these zones should be physically separate so that even uncommon disasters affect only one zone.

Availability zones are either two distinct data centers in a metro distance, or two safety or fire sectors (data halls) in the same large-scale data center.

Distance between zones

Multiple availability zones belong to a single region. The physical distance between availability zones is short enough to offer low, single-digit latency (less than 5 ms) and large bandwidth (10 Gbps or greater) between the zones.

You can operate workloads across multiple availability zones in the same region as if they were part of a single virtual data center. This architecture supports high availability that is suitable for mission critical applications. If the distance between two locations of equipment becomes too large, these locations can no longer function as two availability zones in the same region and must be designed as separate regions.

Regions

Regions provide disaster recovery across different SDDC instances or a location that is closer to your customers. Each region is a separate SDDC instance. The regions have a similar physical layer and virtual infrastructure designs but different naming.

Regions are geographically separate, but latency between them must be 100 ms or lower.

The identifiers follow United Nations Code for Trade and Transport Locations (UN/LOCODE) and also contain a numeric instance ID.

Table 2-1. Availability Zones and Regions in the SDDC

Region: SFO
Region Identifier and Availability Zone: SFO01
Region-Specific Domain Name: sfo.rainpole.io
Region Description: Availability Zone 1 in the San Francisco, CA, USA based data center

Region: SFO
Region Identifier and Availability Zone: SFO02
Region-Specific Domain Name: sfo.rainpole.io
Region Description: Availability Zone 2 in the San Francisco, CA, USA based data center


Table 2-2. Design Decisions on Availability Zones and Regions

Decision ID: SDDC-MGMT-PHY-001

Design Decision: In Region SFO, that is Region A, deploy one or two availability zones to support all SDDC management components and their SLAs.

Design Justification:

n Supports all SDDC management and compute components for a region.

n Supports stretched clusters and application-aware failover for high availability between two physical locations.

Design Implication:

n Using a single availability zone results in limited redundancy of the overall solution.

n A single availability zone can become a single point of failure and prevent high-availability design solutions in a region.

n Implementing two availability zones increases the solution footprint and can complicate the operational procedures.

Workload Domains and Racks for the Management Domain

The SDDC functionality is distributed across multiple workload domains and clusters. A workload domain and its initial cluster can occupy one rack or multiple racks. You determine the total number of racks for each cluster type according to your scalability needs.

Workload Domain to Rack Mapping

The relationship between workload domains and data center racks is not one-to-one. While a workload domain is an atomic unit of repeatable building blocks, a rack is a unit of size. Because workload domains can have different sizes, you map workload domains to data center racks according to the use case.

When using a Layer 3 network fabric, the clusters in the management domain cannot span racks. NSX-T Manager instances and other virtual machines rely on VLAN-backed networks. The physical network configuration terminates Layer 2 networks in each rack at the Top of the Rack (ToR) switch. Therefore, you cannot migrate a virtual machine to a different rack because the IP subnet is available only in the rack where the virtual machine currently runs.

Table 2-3. Workload Domain to Rack Configuration Options

Workload Domain to Rack Configuration Description

One Workload Domain in One Rack One workload domain can occupy exactly one rack.

Multiple Workload Domains in One Rack Two or more workload domains can occupy a single rack, for example, one management workload domain and one virtual infrastructure workload domain can be deployed to a single rack.


Single Workload Domain Across Multiple Racks A single workload domain can stretch across multiple adjacent racks. For example, a virtual infrastructure workload domain that has more ESXi hosts than a single rack can support.

Stretched Workload Domain Across Availability Zones A cluster in a single workload domain can span across two availability zones by using VMware vSAN™ stretched clustering.

Table 2-4. Design Decisions on Clusters and Racks

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-PHY-002 Use two separate power feeds for each rack.

Redundant power feeds increase availability by ensuring that failure of a power feed does not bring down all equipment in a rack.

Combined with redundant network connections to a rack and in a rack, redundant power feeds prevent a failure of the equipment in an entire rack.

All equipment used must support two separate power feeds. The equipment must keep running if one power feed fails.

If the equipment of an entire rack fails, the cause, such as flooding or an earthquake, also affects neighboring racks.


Single Availability Zone

Figure 2-2. SDDC Cluster Architecture for a Single Availability Zone

[Figure: A management cluster of four ESXi hosts in one rack, connected to a pair of ToR switches that provide the external connection.]

Table 2-5. Design Decisions on Clusters and Racks for a Single Availability Zone

Decision ID: SDDC-MGMT-PHY-003

Design Decision: Mount the compute resources (minimum of 4 ESXi hosts) for the first cluster in the management domain together in one rack.

Design Justification:

n Mounting the compute resources for the first cluster in the management domain together can ease physical data center design, deployment, and troubleshooting.

n You only need to provide on-ramp and off-ramp connectivity to physical networks, for example, north-south Layer 3 routing for NSX-T Data Center, to a single rack.

n NSX-T Edge nodes require external connectivity to physical network devices. Placing edge nodes in the same rack minimizes VLAN spread.

Design Implication:

n Data centers must have sufficient power and cooling to operate the server equipment according to the selected vendor and products.

n If the equipment in the entire rack fails, you must have a second availability zone to reduce the downtime associated with such an event.


Two Availability Zones

Figure 2-3. SDDC Cluster Architecture for Two Availability Zones

[Figure: A stretched management cluster with four ESXi hosts in Availability Zone 1 and four ESXi hosts in Availability Zone 2. Each availability zone has a pair of ToR switches and its own external connection.]

Table 2-6. Design Decisions on Clusters and Racks for Two Availability Zones

Decision ID: SDDC-MGMT-PHY-004

Design Decision: When using two availability zones, in each availability zone, mount the compute resources (minimum of 4 ESXi hosts) for the first cluster in the management domain together in one rack.

Design Justification:

n Mounting the compute resources for the first cluster in the management domain together can ease physical data center design, deployment, and troubleshooting.

n You only need to provide on-ramp and off-ramp connectivity to physical networks, for example, north-south Layer 3 routing for NSX-T Data Center, to a single rack.

n NSX-T Edge nodes require external connectivity to physical network devices. Placing edge nodes in the same rack minimizes VLAN spread.

Design Implication:

n Data centers must have sufficient power and cooling to operate the server equipment according to the selected vendor and products.

n If the equipment in the entire rack fails, you must have a second availability zone to reduce the downtime associated with such an event.


Virtual Infrastructure Design

The virtual infrastructure design includes the software components that make up the virtual infrastructure layer for providing software-defined storage, networking, and compute.

These components include the software products that provide the virtualization platform hypervisor, virtualization management, storage virtualization, and network virtualization. The VMware products in this layer are vSphere, vSAN, and NSX-T Data Center.

Figure 2-4. Virtual Infrastructure in the SDDC

[Figure: The SDDC layers as in Figure 2-1: cloud operations, business continuity, security and compliance, cloud automation, virtual infrastructure (hypervisor, pools of resources, virtualization control), and physical infrastructure (compute, storage, network). This section covers the virtual infrastructure layer.]

For an overview of the setup of the virtual infrastructure layer in the management domain, see Chapter 1 Architecture Overview for the Management Domain.

ESXi Design for the Management Domain

The compute layer of the virtual infrastructure in the SDDC is implemented by ESXi, a bare-metal hypervisor that you install directly onto your physical server. With direct access to and control of the underlying resources, ESXi logically partitions hardware to consolidate applications and cut costs.

Logical Design for ESXi for the Management Domain

In the logical design for ESXi, you determine the high-level integration of the ESXi hosts with the other components of the SDDC for providing virtual infrastructure to the SDDC management components.

To provide the resources required to run the management components of the SDDC according to the design objectives, each ESXi host consists of the following elements:

n Out of band management interface

n Network interfaces


n Storage devices

Figure 2-5. ESXi Logical Design

[Figure: An ESXi host with compute resources (CPU and memory), storage (local vSAN storage and non-local SAN/NAS storage), NIC 1 and NIC 2 network uplinks, and an out-of-band management uplink.]

Deployment Specification for ESXi for the Management Domain

You determine the size of the compute resources, start-up configuration, and patching and upgrade support for the ESXi hosts for the management domain according to the design objectives and aggregated requirements of the management components of the SDDC.

Sizing Compute Resources for ESXi for the Management Domain

You size the compute resources of the ESXi hosts in the management domain according to the system requirements of the management components and the requirements for managing customer workloads according to the design objectives.

ESXi Server Hardware

The configuration and assembly process for each system should be standardized, with all components installed in the same manner on each ESXi host. Because standardization of the physical configuration of the ESXi hosts removes variability, the infrastructure is easily managed and supported. ESXi hosts are deployed with identical configuration across all cluster members, including storage and networking configurations. For example, consistent PCIe card slot placement, especially for network controllers, is essential for accurate alignment of physical to virtual I/O resources. By using identical configurations, you have an even balance of virtual machine storage components across storage and compute resources.


In this design, the primary storage system for the management domain is vSAN. As a result, the sizing of physical servers that run ESXi requires these special considerations.

n An average-size virtual machine has two virtual CPUs and 4 GB of RAM.

n A typical 2U ESXi host can run 60 average-size virtual machines, as the sizing sketch after this list illustrates.
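To make these consolidation assumptions concrete, the following Python sketch estimates how many average-size virtual machines fit on a host for a given CPU and memory configuration. The host specification in the example (36 physical cores, a 4:1 vCPU-to-core ratio, 512 GB RAM) is an illustrative assumption, not part of this design.

```python
# Rough consolidation estimate for average-size management virtual machines.
# Assumptions (illustrative only): a 2U host with 36 physical cores, a 4:1
# vCPU-to-core overcommit ratio, and 512 GB RAM with no memory overcommitment.

AVG_VM_VCPUS = 2    # an average-size VM has two vCPUs
AVG_VM_RAM_GB = 4   # an average-size VM has 4 GB RAM

def vms_per_host(physical_cores: int, vcpus_per_core: float, host_ram_gb: int) -> int:
    """Return the number of average-size VMs, limited by CPU or by memory."""
    by_cpu = (physical_cores * vcpus_per_core) // AVG_VM_VCPUS
    by_ram = host_ram_gb // AVG_VM_RAM_GB
    return int(min(by_cpu, by_ram))

# Example host: 72 VMs by this estimate, in line with roughly 60 VMs after
# reserving headroom for vSAN, vSphere HA, and management overhead.
print(vms_per_host(physical_cores=36, vcpus_per_core=4, host_ram_gb=512))
```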

This design uses vSAN ReadyNodes for the physical servers that run ESXi. For information about the models of physical servers that are vSAN-ready, see vSAN Compatibility Guide for vSAN ReadyNodes.

Table 2-7. Design Decisions on Server Hardware for ESXi

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-ESXi-001 Use vSAN ReadyNodes with vSAN storage for each ESXi host in the management domain.

Your SDDC is fully compatible with vSAN at deployment.

Hardware choices might be limited.

SDDC-MGMT-VI-ESXi-002 Allocate hosts with uniform configuration across the first cluster of the management domain.

A balanced cluster has these advantages:

n Predictable performance even during hardware failures

n Minimal impact of resync or rebuild operations on performance

You must apply vendor sourcing, budgeting, and procurement considerations for uniform server nodes, on a per cluster basis.

ESXi Host Memory

When sizing memory for the ESXi hosts in the management domain, consider certain requirements.

n Requirements for the workloads that are running in the cluster

When sizing memory for the hosts in a cluster, set the vSphere HA admission control policy to N+1, which reserves the resources of one host for failover or maintenance.

n Number of vSAN disk groups and disks on an ESXi host

To support the maximum of five disk groups per host in vSphere 7, you must provide 32 GB of RAM. For more information about disk groups, including design and sizing guidance, see Administering VMware vSAN in the vSphere documentation. The sketch below combines these memory considerations into a simple cluster-level check.
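The following Python sketch is a minimal cluster-level check of the considerations above: it subtracts one host for N+1 admission control and a per-host vSAN disk group reservation, then compares the remainder against a management workload footprint. The 453 GB figure matches the design decision that follows; the function itself is illustrative.

```python
# Minimal memory sizing check for the first management cluster.
# Assumptions: four hosts, N+1 admission control (one host reserved), 32 GB per
# host reserved for vSAN disk groups, and a 453 GB aggregate requirement for the
# management and NSX-T Edge appliances, as used in the design decision below.

def usable_cluster_memory_gb(hosts: int, ram_per_host_gb: int,
                             reserved_hosts: int = 1,
                             vsan_overhead_gb: int = 32) -> int:
    """Memory left for workloads after the N+1 reservation and per-host vSAN overhead."""
    return (hosts - reserved_hosts) * (ram_per_host_gb - vsan_overhead_gb)

required_gb = 453
available_gb = usable_cluster_memory_gb(hosts=4, ram_per_host_gb=256)
print(f"available={available_gb} GB, required={required_gb} GB, "
      f"fits={available_gb >= required_gb}")
```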


Table 2-8. Design Decisions on Host Memory for ESXi

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-ESXi-003 Install each ESXi host in the first, four-node, cluster of the management domain with a minimum of 256 GB RAM.

The management and NSX-T Edge appliances in this cluster require a total of 453 GB RAM.

You allocate the remaining memory to additional management components that are required for new capabilities, for example, for new virtual infrastructure workload domains.

In a four-node cluster, only 768 GB is available for use because the host redundancy that is configured in vSphere HA is N+1.

Host Boot Device and Scratch Partition Design for the Management Domain

Determining the boot device type and size for each ESXi host in the management domain is important for the creation of system storage volumes. You also plan the location of the scratch partition according to the selected boot device type so that system log information is available even if a storage failure occurs.

ESXi requires a boot disk of at least 8 GB for USB or SD devices, and 32 GB for other device types such as HDD, SSD, or NVMe. A boot device must not be shared between ESXi hosts.

ESXi can boot from a disk larger than 2 TB if the system firmware and the firmware on any add-in card support it. See vendor documentation.

The ESXi system storage volumes occupy up to 128 GB of disk space. A local VMFS datastore is only created if the local disk device has at least 4 GB additional free space. To share a boot device with a local VMFS datastore, you must use a local disk of 240 GB or larger. If a local disk cannot be found, ESXi operates in a mode with limitations, where certain functionality is disabled, and the scratch partition is on the RAM disk of the ESXi host, linked to the /tmp folder. As a result, log information and stack traces are lost on host reboot. You can reconfigure the scratch partition to use a separate disk or LUN.

An ESXi installation process on a USB or SD device does not configure a default scratch partition on the local disk. Place the scratch partition on a shared datastore and configure remote syslog logging for the ESXi host.
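As a quick way to apply these rules, the following Python sketch classifies a candidate boot device against the thresholds described above (8 GB for USB or SD devices, 32 GB for other device types, 128 GB for the full ESXi system storage layout, and 240 GB to also host a local VMFS datastore). The helper function is an illustrative sketch, not a VMware tool.

```python
# Classify a candidate boot device against the ESXi 7 thresholds described above.
# The thresholds come from this section; the helper itself is an illustrative sketch.

def check_boot_device(size_gb: int, is_usb_or_sd: bool) -> str:
    minimum = 8 if is_usb_or_sd else 32
    if size_gb < minimum:
        return "too small to boot ESXi"
    if is_usb_or_sd:
        return "bootable; place the scratch partition on a shared datastore"
    if size_gb >= 240:
        return "bootable; can also share the device with a local VMFS datastore"
    if size_gb >= 128:
        return "bootable; full system storage fits, but no local VMFS datastore"
    return "bootable; system storage is reduced to fit the device"

for size, usb in [(16, True), (32, False), (256, False)]:
    print(f"{size} GB (USB/SD={usb}): {check_boot_device(size, usb)}")
```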


Table 2-9. Design Decisions for Host Boot Device and Scratch Partition of ESXi

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-ESXi-004 Install and configure all ESXi hosts in the first cluster of the management domain to boot using a 32-GB device or greater.

Provides hosts that have large memory, that is, greater than 512 GB, with enough space for the scratch partition when using vSAN.

When you use SATA-DOM or SD devices, ESXi logs are not retained locally.

SDDC-MGMT-VI-ESXi-005 Use the default configuration for the scratch partition on all ESXi hosts in the first cluster of the management domain.

The ESXi hosts remain responsive and log information is still accessible if a failure in the vSAN cluster occurs.

Additional storage capacity on the primary datastore is required to store the system logs for ESXi and it must be factored in when sizing storage.

Virtual Machine Swap File Design for the Management Domain

When you decide on the placement of the VMkernel swap file of the virtual machines running in the management domain, consider the configuration efforts and traffic related to transferring virtual machine system data across the data center.

When a virtual machine is powered on, the system creates a VMkernel swap file to serve as a backing store for the contents of the virtual machine's RAM. By default, the swap file is stored in the same location as the configuration file of the virtual machine. The co-location simplifies the configuration. However, it can lead to generating additional replication traffic that is not needed.

You can reduce the amount of traffic that is replicated by changing the swap file location on the ESXi host. However, the completion of vSphere vMotion operations might take longer when the swap file must be recreated because pages swapped to a local swap file on the source host must be transferred across the network to the destination host.

Table 2-10. Design Decisions on the Virtual Machine Swap Configuration of ESXi

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-ESXi-006 For workloads running in the first cluster in the management domain, save the virtual machine swap file at the default location.

Simplifies the configuration process.

Increases the amount of replication traffic for management workloads that are recovered as part of the disaster recovery process.

Life Cycle Management Design for ESXi for the Management Domain

When you decide on a life cycle management approach for the ESXi software, you consider the effort and time required for preparing the environment and performing the patch, upgrade, or update operation.


Life cycle management of ESXi is the process of applying patches, updates, or upgrades to the underlying ESXi operating system. In a typical vSphere environment, you perform life cycle management by using vSphere Lifecycle Manager, which runs in vCenter Server. When you implement a solution by using VMware Cloud Foundation, you use SDDC Manager for life cycle management, which includes additional components as part of the life cycle management process.

Table 2-11. Design Decisions on Life Cycle Management of ESXi

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-ESXi-007 Use SDDC Manager to perform the life cycle management of ESXi hosts in the management domain.

Because the deployment scope of SDDC Manager covers the full SDDC stack, SDDC Manager performs patching, update, or upgrade of the management domain as a single process.

The operations team must understand and be aware of the impact of a patch, update, or upgrade operation by using SDDC Manager.

Network Design for ESXi for the Management Domain

In the network design for the ESXi hosts in the management domain, you place the hosts on a VLAN for traffic segmentation. You decide on the IP addressing scheme and name resolution for connectivity to the SDDC management components and maintenance of the hosts.

Network Segments

To perform system functions in a virtual infrastructure in addition to providing network connectivity to the virtual machines, the ESXi hosts in the management domain are connected to several dedicated networks. See Overlay Design for NSX-T Data Center for the Management Domain.

Management network

Carries traffic for management of the ESXi hosts and communication to and from vCenter Server. Also on this network, the hosts exchange heartbeat messages when vSphere HA is enabled.

vSphere vMotion network

Carries traffic for relocating virtual machines between ESXi hosts with zero downtime.

vSAN network

Carries the communication between ESXi hosts in the cluster to implement a vSAN shared storage.

Software-defined networks

Carries overlay traffic between the management components in the management domain, traffic for software-defined network services such as load balancing and dynamic routing, and traffic for communication to the external network.
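These dedicated networks map to VMkernel adapters on each ESXi host. The following pyVmomi sketch lists how many VMkernel adapters are tagged for the management, vMotion, and vSAN services on each host, which can help verify the traffic separation described above. The vCenter Server FQDN and credentials are placeholders, and the sketch assumes the pyVmomi library is installed.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- replace with your environment.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="sfo-m01-vc01.sfo.rainpole.io",
                  user="administrator@vsphere.local", pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        nic_mgr = host.configManager.virtualNicManager
        summary = []
        for service in ("management", "vmotion", "vsan"):
            cfg = nic_mgr.QueryNetConfig(service)
            tagged = list(cfg.selectedVnic or []) if cfg else []
            summary.append(f"{service}={len(tagged)}")
        print(host.name, "tagged VMkernel adapters:", ", ".join(summary))
    view.Destroy()
finally:
    Disconnect(si)
```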


Table 2-12. Design Decisions on Network Segments for ESXi

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-ESXi-008 Place the ESXi hosts in the first cluster of the management domain on the VLAN-backed management network segment.

Reduces the number of VLANs needed because a single VLAN can be allocated to the ESXi hosts, vCenter Server, and NSX-T Data Center management components.

The ESXi hosts and the other management components are not separated on different VLANs, which might conflict with security requirements.

IP Addressing

The IP addresses of the ESXi hosts can be assigned by using DHCP or statically. Assign static IP addresses and host names across all ESXi hosts in the management domain.

Table 2-13. Design Decisions on the IP Addressing Scheme for ESXi

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-ESXi-009 Allocate statically assigned IP addresses and host names across all ESXi hosts in the first cluster of the management domain.

Ensures stability across the SDDC, makes the environment simpler to maintain and track, and simplifies the DNS configuration.

Requires precise IP address management.

Name Resolution

The host name of each ESXi host in the management domain is allocated to a specific domain for name resolution according to the region the host is in.

n The IP address and host name of each ESXi host are associated with a fully qualified name ending with a region-specific suffix, such as, sfo.rainpole.io.
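A minimal way to verify the forward and reverse records that the following design decision requires is a short check with the Python standard library. The ESXi host names below are placeholders that follow the region-specific suffix convention described above.

```python
import socket

# Placeholder ESXi host FQDNs following the region-specific suffix convention.
esxi_hosts = [f"sfo01-m01-esx{i:02d}.sfo.rainpole.io" for i in range(1, 5)]

for fqdn in esxi_hosts:
    try:
        ip = socket.gethostbyname(fqdn)            # forward (A) record
        reverse, _, _ = socket.gethostbyaddr(ip)   # reverse (PTR) record
        status = "OK" if reverse.lower() == fqdn.lower() else f"PTR mismatch: {reverse}"
    except OSError as err:                         # covers gaierror and herror
        status = f"lookup failed: {err}"
    print(f"{fqdn}: {status}")
```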

Table 2-14. Design Decisions on Name Resolution for ESXi

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-ESXi-010 Configure forward and reverse DNS records for each ESXi host in the first cluster of the management domain, assigning the records to the child domain of the region.

All ESXi hosts are accessible by using a fully qualified domain name instead of by using IP addresses only.

You must provide DNS records for each ESXi host.


Time Synchronization

Time synchronization provided by the Network Time Protocol (NTP) is important to ensure that all components in the SDDC are synchronized to the same time source. For example, if the clocks on the physical machines in your vSphere network are not synchronized, SSL certificates and SAML Tokens, which are time-sensitive, might not be recognized as valid in communications between network machines. Time inconsistencies in vSphere can cause first-boot to fail at different services depending on where in the environment time is not accurate and when the time is synchronized.
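As an illustration of auditing the NTP configuration that the following design decisions call for, this pyVmomi sketch reads the configured NTP servers and the policy of the ntpd service on each host. The vCenter Server FQDN and credentials are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- replace with your environment.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="sfo-m01-vc01.sfo.rainpole.io",
                  user="administrator@vsphere.local", pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        ntp_config = host.config.dateTimeInfo.ntpConfig
        servers = list(ntp_config.server) if ntp_config else []
        # The NTP daemon is the "ntpd" host service; the design decision below
        # expects its policy to be "on" (start and stop with host).
        ntpd = next((s for s in host.configManager.serviceSystem.serviceInfo.service
                     if s.key == "ntpd"), None)
        policy = ntpd.policy if ntpd else "not found"
        print(f"{host.name}: NTP servers={servers}, ntpd policy={policy}")
    view.Destroy()
finally:
    Disconnect(si)
```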

Table 2-15. Design Decisions on Time Synchronization for ESXi

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-ESXi-011 Configure time synchronization by using an internal NTP time source across all ESXi hosts in the management domain for the region.

Prevents failures in the deployment of the vCenter Server appliance on an ESXi host if the host is not using NTP.

An operational NTP service must be available in the environment.

SDDC-MGMT-VI-ESXi-012 Set the NTP service policy to Start and stop with host across all ESXi hosts in the first cluster of the management domain.

Ensures that the NTP service is available right after you restart an ESXi host.

None.

Information Security and Access Design for ESXi for the Management Domain

You design authentication access, controls, and certificate management for ESXi according to industry standards and the requirements of your organization.

Host Access

After installation, you add ESXi hosts to a vCenter Server system for host management.

Direct access to the host console is still available and most commonly used for troubleshooting purposes. You can access ESXi hosts directly by using one of these four methods.

Table 2-16. Accessing ESXi Hosts

Method for ESXi Host Access Description

Direct Console User Interface (DCUI) Graphical interface on the console. Provides basic administrative controls and troubleshooting options.

ESXi Shell A Linux-style bash login to the ESXi console itself.

Secure Shell (SSH) Access Remote command-line console access.

VMware Host Client HTML5-based client that has a similar interface to the vSphere Client but for managing individual ESXi hosts only. You use the VMware Host Client for emergency management when vCenter Server is temporarily unavailable.


You can enable or disable each method. By default, the ESXi Shell and SSH are disabled to protect the ESXi host. The Direct Console User Interface is disabled if Strict Lockdown Mode is enabled.

Table 2-17. Design Decisions on Host Access for ESXi

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-ESXi-013 Configure the SSH service policy to Start and stop with host across all ESXi hosts in the management domain.

Ensures that the SSH service is started after an ESXi host reboot so that access from SDDC Manager is maintained.

Might be in a direct conflict with your corporate security policy.

SDDC-MGMT-VI-ESXi-014 Set the advanced setting UserVars.SuppressShellWarning to 1 across all ESXi hosts in the management domain.

Ensures that only critical messages appear in the VMware Host Client and vSphere Client by suppressing the warning message about enabled local and remote shell access.

Might be in a direct conflict with your corporate security policy.
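The following pyVmomi sketch is one possible way to apply the two decisions above to the hosts: it sets the SSH service (key TSM-SSH) to start and stop with the host and suppresses the shell warning through the advanced setting. It is an illustration rather than the SDDC Manager workflow; connection details are placeholders.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- replace with your environment.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="sfo-m01-vc01.sfo.rainpole.io",
                  user="administrator@vsphere.local", pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        service_system = host.configManager.serviceSystem
        # SDDC-MGMT-VI-ESXi-013: SSH service starts and stops with the host.
        service_system.UpdateServicePolicy(id="TSM-SSH", policy="on")
        ssh = next(s for s in service_system.serviceInfo.service if s.key == "TSM-SSH")
        if not ssh.running:
            service_system.StartService(id="TSM-SSH")
        # SDDC-MGMT-VI-ESXi-014: suppress the shell warning. The option expects an
        # integer value; adjust the type if vCenter Server rejects it.
        option = vim.option.OptionValue(key="UserVars.SuppressShellWarning", value=1)
        host.configManager.advancedOption.UpdateOptions(changedValue=[option])
        print(f"{host.name}: SSH policy set, shell warning suppressed")
    view.Destroy()
finally:
    Disconnect(si)
```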

User Access

By default, you can log in to an ESXi host only by using the root account. To have more accounts that can access the ESXi hosts in the management domain, you can add the hosts to an Active Directory domain. After the ESXi host is added to an Active Directory domain, you can grant access by using Active Directory groups. Auditing logins to the ESXi hosts becomes easier too.
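As an illustration of joining a host to Active Directory, the following pyVmomi sketch performs the join for a single host. The domain name, host FQDN, and join credentials are placeholders, and in practice you follow your standard host configuration process rather than an ad hoc script.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVim.task import WaitForTask
from pyVmomi import vim

# Placeholder connection details, host FQDN, domain, and join credentials.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="sfo-m01-vc01.sfo.rainpole.io",
                  user="administrator@vsphere.local", pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    host = content.searchIndex.FindByDnsName(dnsName="sfo01-m01-esx01.sfo.rainpole.io",
                                             vmSearch=False)
    auth_mgr = host.configManager.authenticationManager
    # Locate the Active Directory authentication store of the host.
    ad_auth = next(s for s in auth_mgr.supportedStore
                   if isinstance(s, vim.host.ActiveDirectoryAuthentication))
    task = ad_auth.JoinDomain_Task(domainName="sfo.rainpole.io",
                                   userName="svc-domain-join@sfo.rainpole.io",
                                   password="***")
    WaitForTask(task)
    print(f"{host.name}: joined the Active Directory domain")
finally:
    Disconnect(si)
```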


Table 2-18. Design Decisions on User Access for ESXi

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-ESXi-015 Join each ESXi host in the management domain to the Active Directory domain of the region in which the ESXi host resides.

n Using Active Directory membership provides greater flexibility in granting access to ESXi hosts.

n Ensuring that users log in with a unique user account provides greater visibility for auditing.

Adding ESXi hosts to the domain can add some administrative overhead.

SDDC-MGMT-VI-ESXi-016 Change the default ESX Admins group to the ug-esxi-admins group in Active Directory.

Changing the Active Directory group improves security by removing a known administrative access point.

Additional changes to the ESXi hosts advanced settings are required.

SDDC-MGMT-VI-ESXi-017 Add ESXi administrators to the ug-esxi-admins group in Active Directory following standard access procedures.

Adding ESXi administrator accounts to the Active Directory group provides these benefits.

n Direct control on the access to the ESXi hosts by using Active Directory group membership

n Separation of management tasks

n More visibility for access auditing

Administration of direct user access is controlled by using Active Directory.

Password Management and Account Lockout Behavior

ESXi enforces password requirements for access from the Direct Console User Interface, the ESXi Shell, SSH, or the VMware Host Client. By default, when you create a password, you have to include a mix of characters from four character classes: lowercase letters, uppercase letters, numbers, and special characters such as underscore or dash. By default, the required password length is between 7 and 40 characters. Passwords cannot contain a dictionary word or part of a dictionary word.

Account locking is supported for access by using SSH and the vSphere Web Services SDK. By default, a maximum of five failed attempts is allowed before the account is locked. The account is unlocked after 15 minutes by default. The Direct Console Interface and the ESXi Shell do not support account lockout.

This design applies a password policy according to security best practices and standards.


Table 2-19. Example Password Policy Specification

Password Setting / Value

Minimum length: 1

Maximum lifetime: 60 days

Remember old passwords: 5 (that is, remember the previous five passwords so that they do not get reused)

Allow similar passwords: Deny

Complexity: At least 1 uppercase letter, 1 lowercase letter, 1 number, and 1 special character

Max failed login attempts: 3

Time interval between failures: 900 seconds

Unlock time: 0
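Values such as those in Table 2-19 map to ESXi advanced settings, for example Security.AccountLockFailures, Security.AccountUnlockTime, Security.PasswordHistory, and Security.PasswordQualityControl. The pyVmomi sketch below shows how such settings could be applied to one host. The specific values and the pam_passwdqc rule string are illustrative assumptions; align them with the standard of your organization.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details and host FQDN.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="sfo-m01-vc01.sfo.rainpole.io",
                  user="administrator@vsphere.local", pwd="***", sslContext=ctx)
try:
    content = si.RetrieveContent()
    host = content.searchIndex.FindByDnsName(dnsName="sfo01-m01-esx01.sfo.rainpole.io",
                                             vmSearch=False)
    # Illustrative values only -- align them with your own password and lockout standard.
    settings = [
        vim.option.OptionValue(key="Security.AccountLockFailures", value=3),
        vim.option.OptionValue(key="Security.AccountUnlockTime", value=900),
        vim.option.OptionValue(key="Security.PasswordHistory", value=5),
        # pam_passwdqc-style rule: only passwords with all four character classes
        # and a minimum length of 7 are accepted.
        vim.option.OptionValue(key="Security.PasswordQualityControl",
                               value="retry=3 min=disabled,disabled,disabled,disabled,7"),
    ]
    host.configManager.advancedOption.UpdateOptions(changedValue=settings)
    print(f"{host.name}: password and account lockout settings applied")
finally:
    Disconnect(si)
```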

Table 2-20. Design Decisions on Password and Account Lockout Behavior for ESXi

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-ESXi-018 Configure a policy for ESXi host passwords and account lockout according to the industry standard for security and compliance of your organization.

Aligns with the industry standard across your organization.

None.

vCenter Server Design for the Management Domain

The vCenter Server design includes determining the number of vCenter Server instances in the management domain, their size, networking configuration, cluster layout, redundancy, and security configuration.

By using vCenter Server, you manage your vSphere infrastructure from a centralized location. It acts as a central administration point for ESXi hosts and their respective virtual machines. Implemented within the same appliance is the Platform Services Controller which provides a set of infrastructure services including vCenter Single Sign-On, License service, Lookup Service, and VMware Certificate Authority (VMCA).

n Logical Design for vCenter Server for the Management Domain

For the management domain in each region, you deploy a vCenter Server appliance that manages the ESXi hosts that are running the management components of the SDDC and supports integration with other solutions for monitoring and management of the virtual infrastructure.


n Deployment Specification of vCenter Server for the Management Domain

You determine the size of the compute resources, high availability implementation, and patching and upgrade support for the management domain vCenter Server according to the design objectives and aggregated requirements of the management components of the SDDC.

n Network Design for vCenter Server for the Management Domain

In the network design for the management domain vCenter Server, you place vCenter Server on a VLAN for traffic segmentation, and decide on the IP addressing scheme and name resolution for optimal support for the SDDC management components and host management.

n vSphere Cluster Design for the Management Domain

The cluster design must consider the characteristics of the management workloads that the cluster handles in the management domain.

n Information Security and Access Control Design for vCenter Server for the Management Domain

You design authentication access, controls, and certificate management for the management domain vCenter Server according to industry standards and the requirements of your organization.

Logical Design for vCenter Server for the Management Domain

For the management domain in each region, you deploy a vCenter Server appliance that manages the ESXi hosts that are running the management components of the SDDC and supports integration with other solutions for monitoring and management of the virtual infrastructure.

A vCenter Server deployment can consist of one or more vCenter Server instances with an embedded Platform Services Controller according to the scale, number of virtual machines, and availability requirements for your environment.

vCenter Server is deployed as a preconfigured virtual appliance that is running the VMware Photon™ operating system. vCenter Server is required for some advanced vSphere features, such as vSphere High Availability (vSphere HA), vSphere Fault Tolerance, vSphere Distributed Resource Scheduler (vSphere DRS), vSphere vMotion, and vSphere Storage vMotion.


Figure 2-6. Logical Design of vCenter Server in the Management Domain

[Figure: The management domain vCenter Server virtual appliance in Region A runs on ESXi in the virtual infrastructure and relies on supporting infrastructure (shared storage, DNS, and NTP). Access is provided through a user interface and an API. Solution and user authentication uses the vCenter Single Sign-On domain with Active Directory as the identity source.]


Table 2-21. vCenter Server Configuration

Single Availability Zone:

n One vCenter Server instance that is allocated to the management domain and the SDDC management components, such as the NSX-T Manager cluster, SDDC Manager, and other solutions.

Multiple Availability Zones:

n One vCenter Server instance that is allocated to the management domain and the SDDC management components, such as the NSX-T Manager cluster, SDDC Manager, and other solutions.

n A should-run VM-Host affinity rule in vSphere DRS specifies that the virtual machines in each availability zone reside in their initial availability zone unless an outage occurs in that zone.

Deployment Specification of vCenter Server for the Management Domain

You determine the size of the compute resources, high availability implementation, and patching and upgrade support for the management domain vCenter Server according to the design objectives and aggregated requirements of the management components of the SDDC.

Deployment Model for vCenter Server for the Management Domain

You determine the number of vCenter Server instances for the management domain and the amount of compute and storage resources for them according to the scale of the environment, the plans for deployment of virtual infrastructure workload domains, and the requirements for isolation of management workloads from tenant workloads.

Deployment Type

You allocate a vCenter Server instance in the management domain for the region, using Enhanced Linked Mode to connect, view, and search across all linked vCenter Server systems.


Table 2-22. Design Decisions on the Deployment Model for the Management Domain vCenter Server

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-VC-001 Deploy a dedicated vCenter Server instance in the first availability zone of the region for the management domain.

n Isolates vCenter Server failures to management or tenant workloads.

n Isolates vCenter Server operations between management and tenants.

n Supports a scalable cluster design where you can reuse the management components as more tenant workloads are added to the SDDC.

n Simplifies capacity planning for tenant workloads because you do not consider management workloads for the workload domain vCenter Server.

n Improves the ability to upgrade the vSphere environment and related components by providing for explicit separation of maintenance windows:

n Management workloads remain available while you are upgrading the tenant workloads

n Tenant workloads remain available while you are upgrading the management nodes

n Supports clear separation of roles and responsibilities to ensure that only administrators with granted authorization can control the management workloads.

n Facilitates quicker troubleshooting and problem resolution.

n Simplifies disaster recovery operations by supporting a clear separation between recovery of the management components and tenant workloads.

n Provides isolation of potential network issues by introducing network separation of the clusters in the SDDC.

Requires a license for the vCenter Server instance in each virtual infrastructure workload domain.


Sizing Compute and Storage Resources

When you deploy the vCenter Server appliance, you select to deploy an appliance that is suitable for the size of your environment. The option that you select determines the number of CPUs and the amount of memory for the appliance.

Table 2-23. Compute Resource Specifications of vCenter Server

X-Large environment: up to 2,000 hosts or 35,000 virtual machines; 24 vCPUs; 56 GB memory

Large environment: up to 1,000 hosts or 10,000 virtual machines; 16 vCPUs; 37 GB memory

Medium environment: up to 400 hosts or 4,000 virtual machines; 8 vCPUs; 28 GB memory

Small environment: up to 100 hosts or 1,000 virtual machines; 4 vCPUs; 19 GB memory

Tiny environment: up to 10 hosts or 100 virtual machines; 2 vCPUs; 12 GB memory

When you deploy the vCenter Server appliance, the ESXi host or cluster on which you deploy the appliance must meet minimum storage requirements. You determine the required storage not only according to the size of the environment and the storage size, but also according to the disk provisioning mode.

Table 2-24. Storage Resource Specifications of vCenter Server

Appliance Size Management Capacity Default Storage Size Large Storage Size X-Large Storage Size

X-Large environment Up to 2,000 hosts or 35,000 virtual machines

1805 GB 1905 GB 3665 GB

Large environment Up to 1,000 hosts or 10,000 virtual machines

1065 GB 1765 GB 3525 GB

Medium environment Up to 400 hosts or 4,000 virtual machines

700 GB 1700 GB 3460 GB

Small environment Up to 100 hosts or 1,000 virtual machines

480 GB 1535 GB 3295 GB

Tiny environment Up to 10 hosts or 100 virtual machines

415 GB 1490 GB 3245 GB
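The sizing in Table 2-23 and Table 2-24 can be expressed as a simple lookup. The following is a minimal Python sketch, assuming you only need to map expected host and virtual machine counts to an appliance size; the function name is illustrative and the thresholds are taken from the tables above.

    # Appliance sizes from Table 2-23 and Table 2-24:
    # (name, max hosts, max VMs, vCPUs, memory GB, default storage GB)
    APPLIANCE_SIZES = [
        ("tiny",    10,    100,    2, 12,  415),
        ("small",   100,   1000,   4, 19,  480),
        ("medium",  400,   4000,   8, 28,  700),
        ("large",   1000,  10000, 16, 37, 1065),
        ("x-large", 2000,  35000, 24, 56, 1805),
    ]

    def select_appliance_size(hosts, vms):
        """Return the smallest appliance size that covers the given inventory."""
        for name, max_hosts, max_vms, vcpus, memory_gb, storage_gb in APPLIANCE_SIZES:
            if hosts <= max_hosts and vms <= max_vms:
                return {"size": name, "vcpus": vcpus,
                        "memory_gb": memory_gb, "default_storage_gb": storage_gb}
        raise ValueError("Inventory exceeds a single vCenter Server appliance")

    # A management domain with 4 ESXi hosts and about 40 management virtual machines
    # resolves to the tiny size; SDDC-MGMT-VI-VC-002 still selects at least the small
    # size to leave headroom for growth.
    print(select_appliance_size(hosts=4, vms=40))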


Table 2-25. Design Decisions on Sizing the Management Domain vCenter Server

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-VC-002 Deploy an appliance for the management domain vCenter Server of a small deployment size or larger.

A vCenter Server appliance of a small-deployment size is sufficient to manage the management components that are required for achieving the design objectives.

If the size of the management environment increases, you might have to increase the vCenter Server appliance size.

SDDC-MGMT-VI-VC-003 Deploy the appliance of the management domain vCenter Server with the default storage size.

The default storage capacity assigned to a small appliance is sufficient to manage the management appliances that are required for achieving the design objectives.

None.

Enhanced Linked Mode Design for the Management Domain

By using Enhanced Linked Mode of vCenter Server, you can log in to all vCenter Server instances across the SDDC that are joined to the same vCenter Single Sign-on domain and access their inventories.

You can join up to 15 vCenter Server instances to a single vCenter Single Sign-On domain.

Table 2-26. Design Decisions on Enhanced Linked Mode for the Management Domain

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-VC-004 Join all vCenter Server instances to a single vCenter Single Sign-On domain.

When all vCenter Server instances are in the same vCenter Single Sign-On domain, they can share authentication and license data across all components and regions.

Only one vCenter Single Sign-On domain exists.

SDDC-MGMT-VI-VC-005 Create a ring topology for the vCenter Single Sign-On domain that is running in the vCenter Server instances.

By default, one vCenter Server instance replicates only with another vCenter Server instance. This setup creates a single point of failure for replication. A ring topology ensures that each vCenter Server instance has two replication partners and removes any single point of failure.

None.
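The ring topology in SDDC-MGMT-VI-VC-005 gives every vCenter Server instance exactly two replication partners. The following is a minimal Python sketch of the resulting pairing, assuming only that you know the list of instances in the vCenter Single Sign-On domain; the instance names are illustrative.

    def ring_replication_partners(vcenters):
        """Pair each vCenter Server instance with the next one to close the ring,
        so that every instance has two replication partners."""
        n = len(vcenters)
        return [(vcenters[i], vcenters[(i + 1) % n]) for i in range(n)]

    # Example with three instances (illustrative names):
    print(ring_replication_partners(["sfo-m01-vc01", "sfo-w01-vc01", "sfo-w02-vc01"]))
    # [('sfo-m01-vc01', 'sfo-w01-vc01'), ('sfo-w01-vc01', 'sfo-w02-vc01'),
    #  ('sfo-w02-vc01', 'sfo-m01-vc01')]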


High Availability Design for vCenter Server for the Management Domain

Protecting the management domain vCenter Server is important because it is the central point of management and monitoring for the SDDC. You protect vCenter Server according to the maximum downtime tolerated and whether failover automation is required.

You can use the following methods for protecting the vCenter Server appliance:

Table 2-27. Methods for Protecting the vCenter Server Appliance

High Availability Method Protects vCenter Server Appliance

Automated protection by using vSphere HA Yes

Manual configuration and manual failover, for example, by using a cold standby.

Yes

vSphere HA cluster with external load balancer Not Available

vCenter Server HA Yes

Table 2-28. Design Decisions on High Availability of the Management Domain vCenter Server

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-VC-006 Protect the appliance of the management domain vCenter Server by using vSphere HA.

Supports the availability objectives for vCenter Server without requiring manual intervention if a failure occurs.

vCenter Server becomes unavailable during a vSphere HA failover.

SDDC-MGMT-VI-VC-007 In vSphere HA, set the restart priority policy for the vCenter Server appliance to high.

vCenter Server is the management and control plane for the physical and virtual infrastructure. In an HA event, vCenter Server should become available before other management components come online to ensure that the rest of the SDDC management stack comes up cleanly.

If the restart priority for other virtual machines is set to highest, the connectivity delays for management components will be longer.

Life Cycle Management Design of vCenter Server for the Management Domain

You decide on the life cycle management of the vCenter Server appliance according to the amount of time and effort to perform a deployment, upgrade, or patch operation. You also consider the impact such an operation has on the management solutions that are connected to the management domain vCenter Server.

Life cycle management of vCenter Server includes the process of performing patch updates or upgrades to the vCenter Server appliance. In a typical vSphere environment, life cycle management is performed by using vSphere Lifecycle Manager that is running in vCenter Server. When you implement a solution by using VMware Cloud Foundation, you use SDDC Manager for life cycle management where additional components are included as part of the life cycle management process.


Table 2-29. Design Decisions on Life Cycle Management of the Management Domain vCenter Server

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-VC-008 Use SDDC Manager to perform the life cycle management of the appliance for the management domain vCenter Server.

Because the deployment scope of SDDC Manager covers the full SDDC stack, SDDC Manager performs patching, update, or upgrade of the management domain as a single process.

The operations team must understand and be aware of the impact of a patch, update, or upgrade operation by using SDDC Manager.

Network Design for vCenter Server for the Management Domain

In the network design for the management domain vCenter Server, you place vCenter Server on a VLAN for traffic segmentation, and decide on the IP addressing scheme and name resolution for optimal support for the SDDC management components and host management.

Network Segments

For secure access to the vSphere Client and vCenter Server APIs, in each region, the management domain vCenter Server is connected to the management VLAN network segment.


Figure 2-7. vCenter Server Network Design

(The figure shows the management domain vCenter Server, sfo-m01-vc01.sfo.rainpole.local, connected to the management VLAN sfo01-m01-cl01-vds01-pg-mgmt, 172.16.11.0/24, in Region A (SFO - San Francisco), and reachable by data center users, Active Directory, and the Internet or enterprise network through the physical upstream router and the NSX-T Tier-0 and Tier-1 gateways.)

Table 2-30. Design Decisions on the Network Segment for the Management Domain vCenter Server

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-VC-009 Place the appliance of the management domain vCenter Server on the management VLAN network segment of the region, that is, sfo01-m01-cl01-vds01-pg-mgmt for Region A.

Reduces the number of required VLANs because a single VLAN can be allocated to both vCenter Server and the NSX-T Data Center management components.

None.

IP Addressing

You can assign the IP address of the management domain vCenter Server by using DHCP or statically according to the network configuration in your environment.


Table 2-31. Design Decisions on the IP Addressing Scheme for the Management Domain vCenter Server

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-VC-010 Allocate a statically assigned IP address and host name to the appliance of the management domain vCenter Server.

Ensures stability across the SDDC, simplifies maintenance and tracking, and supports a straightforward DNS configuration.

Requires precise IP address management.

Name Resolution

vCenter Server systems must be connected to the following components:

n Systems running vCenter Server add-on modules

n Each ESXi host

Table 2-32. Design Decisions on Name Resolution for the Management Domain vCenter Server

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-VC-011 Configure forward and reverse DNS records for the appliance of the management domain vCenter Server, assigning the record to the child domain for the region.

The vCenter Server appliance is accessible by using a fully qualified domain name instead of by using an IP address only.

You must provide DNS records for the vCenter Server appliance in the management domain in each region.
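Before deployment, you can verify that the forward and reverse records required by SDDC-MGMT-VI-VC-011 agree. The following is a minimal Python sketch that uses only the standard library; the host name is the illustrative one from the network design above.

    import socket

    def verify_dns_records(fqdn):
        """Check that the forward (A) and reverse (PTR) records for an FQDN agree."""
        ip_address = socket.gethostbyname(fqdn)                 # forward lookup
        reverse_name, _, _ = socket.gethostbyaddr(ip_address)   # reverse lookup
        return reverse_name.lower().rstrip(".") == fqdn.lower().rstrip(".")

    print(verify_dns_records("sfo-m01-vc01.sfo.rainpole.local"))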

Time Synchronization

Time synchronization provided by the Network Time Protocol (NTP) is important to ensure that all components within the SDDC are synchronized to the same time source.

Table 2-33. Design Decisions on Time Synchronization for the Management Domain vCenter Server

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-VC-012 Configure time synchronization by using an internal NTP time source for the appliance of the management domain vCenter Server.

n Prevents failures in the deployment of the vCenter Server appliance on an ESXi host if the host is not using NTP.

n Removes the requirement to provide Internet connectivity to an external NTP server.

n An operational NTP service must be available in the environment.

n All firewalls between the vCenter Server appliance and the NTP servers must allow NTP traffic on the required network ports.
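You can also confirm that the internal NTP service referenced in SDDC-MGMT-VI-VC-012 is reachable and reasonably accurate before deployment. The following is a minimal Python sketch, assuming the third-party ntplib package is installed; the NTP server name is illustrative.

    import ntplib  # third-party package, assumed to be installed

    def check_ntp_offset(server):
        """Query an NTP server and return the clock offset, in seconds,
        between this system and the time source."""
        response = ntplib.NTPClient().request(server, version=3, timeout=5)
        return response.offset

    print(check_ntp_offset("ntp.sfo.rainpole.local"))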

vSphere Cluster Design for the Management Domain

The cluster design must consider the characteristics of the management workloads that the cluster handles in the management domain.


When you design the cluster layout in vSphere, consider the following guidelines:

n Decide whether to use fewer, larger ESXi hosts or more, smaller ESXi hosts.

n A scale-up cluster has fewer, larger ESXi hosts.

n A scale-out cluster has more, smaller ESXi hosts.

n Compare the capital costs of purchasing fewer, larger ESXi hosts with the costs of purchasing more, smaller ESXi hosts. Costs vary between vendors and models.

n Evaluate the operational costs for managing a few ESXi hosts with the costs of managing more ESXi hosts.

n Consider the purpose of the cluster.

n Consider the total number of ESXi hosts and cluster limits.

Figure 2-8. vSphere Logical Cluster Layout for the Management Domain

(The figure shows the management domain vCenter Server managing a single management cluster of four ESXi hosts in Region A.)


Figure 2-9. vSphere Logical Cluster Layout for Multiple Availability Zones for the Management Domain

(The figure shows the management domain vCenter Server managing a single management cluster in Region A that spans Availability Zone 1 and Availability Zone 2, with four ESXi hosts in each availability zone.)

Table 2-34. Number of Hosts in the First Cluster of the Management Domain

Attribute Specification

Number of ESXi hosts that is required to support management virtual machines with no memory over-commitment

3

Number of ESXi hosts recommended to handle operational constraints, that is, to be able to take a host offline without losing high availability capabilities.

4

Number of ESXi hosts recommended to handle operational constraints while using vSAN, that is, to be able to take a host offline without losing high availability capabilities.

Single availability zone 4

Two availability zones 8

Reserved capacity for handling ESXi host failures per cluster Single availability zone 25% CPU and RAM

Two availability zones 50% CPU and RAM
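The reserved capacity values in the table follow directly from percentage-based admission control. The following is a minimal Python sketch of the calculation; the function name is illustrative.

    def reserved_failover_capacity(total_hosts, host_failures_to_tolerate):
        """Percentage of cluster CPU and memory that vSphere HA reserves for failover
        when percentage-based admission control is used."""
        return 100.0 * host_failures_to_tolerate / total_hosts

    # Single availability zone: 4 hosts, tolerate 1 host failure -> 25% reserved.
    print(reserved_failover_capacity(total_hosts=4, host_failures_to_tolerate=1))
    # Two availability zones: 8 hosts, reserve half of the hosts -> 50% reserved.
    print(reserved_failover_capacity(total_hosts=8, host_failures_to_tolerate=4))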


Table 2-35. Design Decisions on the Host Configuration for the First Cluster in the Management Domain

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-VC-013 Create a cluster in the management domain for the initial set of ESXi hosts.

n Simplifies configuration by isolating management workloads from tenant workloads.

n Ensures that tenant workloads have no impact on the management stack.

You can add ESXi hosts to the cluster as needed.

Management of multiple clusters and vCenter Server instances increases operational overhead.

SDDC-MGMT-VI-VC-014 In Region SFO, create the first cluster in the management domain with the following configuration:

n A minimum of 4 ESXi hosts for a single availability zone

n A minimum of 8 ESXi hosts for two availability zones, that is, minimum of 4 ESXi hosts in each availability zone.

n Allocating 4 ESXi hosts provides full redundancy for each availability zone in the cluster.

n Having 4 ESXi hosts in each availability zone guarantees vSAN and NSX redundancy during availability zone outages or maintenance operations.

To support redundancy, you must allocate additional ESXi host resources.

n vSphere HA Design for the Management Domain

The vSphere HA configuration protects the virtual machines of the management components of the SDDC whose operation is critical for the operation of your environment. You consider the varying and sometimes significant CPU or memory reservations for the management virtual machines and the requirements of vSAN.

n vSphere DRS Design for the Management Domain

vSphere Distributed Resource Scheduling (vSphere DRS) provides load balancing in a cluster by migrating workloads from heavily loaded ESXi hosts to ESXi hosts with more available resources in the cluster. vSphere DRS supports manual and automatic modes.

n vSphere EVC Design for the Management Domain

If you enable vSphere Enhanced vMotion Compatibility (EVC) in the management domain, the virtual machines of the SDDC management components can be migrated between ESXi hosts containing older CPUs. You can use EVC for a rolling upgrade of all hardware with zero downtime.

vSphere HA Design for the Management Domain

The vSphere HA configuration protects the virtual machines of the management components of the SDDC whose operation is critical for the operation of your environment. You consider the varying and sometimes significant CPU or memory reservations for the management virtual machines and the requirements of vSAN.

If an ESXi host failure occurs, vSphere HA restarts virtual machines on other hosts in the cluster. A primary ESXi host communicates with the management domain vCenter Server, and monitors the virtual machines and secondary ESXi hosts in the cluster. vSphere HA uses admission control to ensure that sufficient resources are reserved for virtual machine recovery when a host fails.

You configure several vSphere HA features to provide high availability for the management components of the SDDC.

Table 2-36. vSphere HA Features Configured for the SDDC

vSphere HA Feature Description

Admission control policy Configure how the cluster determines available resources. In a smaller vSphere HA cluster, a larger proportion of the cluster resources are reserved to accommodate ESXi host failures according to the selected admission control policy.

VM and Application Monitoring Have the VM and Application Monitoring service restart a virtual machine if a failure occurs. The service uses VMware Tools to evaluate whether each virtual machine in the cluster is running.

Table 2-37. Admission Control Policies in vSphere HA

Policy Name Description

Host failures the cluster tolerates vSphere HA ensures that a specified number of ESXi hosts can fail and sufficient resources remain in the cluster to fail over all the virtual machines from those ESXi hosts.

Percentage of cluster resources reserved vSphere HA reserves a specified percentage of aggregate CPU and memory resources for failover.

Specify Failover Hosts When an ESXi host fails, vSphere HA attempts to restart its virtual machines on any of the specified failover ESXi hosts. If restart is not possible, for example, the failover ESXi hosts have insufficient resources or have failed as well, then vSphere HA attempts to restart the virtual machines on other ESXi hosts in the cluster.

Table 2-38. Design Decisions on the Admission Control Policy for the First Cluster in the Management Domain

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-VC-015 Use vSphere HA to protect all virtual machines against failures.

vSphere HA supports a robust level of protection for both ESXi host and virtual machine availability.

You must provide sufficient resources on the remaining hosts so that virtual machines can be migrated to those hosts in the event of a host outage.

SDDC-MGMT-VI-VC-016 Set host isolation response to Power Off in vSphere HA.

vSAN requires that the host isolation response be set to Power Off and to restart virtual machines on available ESXi hosts.

If a false positive event occurs, virtual machines are powered off and an ESXi host is declared isolated incorrectly.


Table 2-38. Design Decisions on the Admission Control Policy for the First Cluster in the Management Domain (continued)

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-VC-017 When using a single availability zone, configure admission control for 1 ESXi host failure and percentage-based failover capacity.

Using the percentage-based reservation works well in situations where virtual machines have varying and sometimes significant CPU or memory reservations.

vSphere automatically calculates the reserved percentage according to the number of ESXi host failures to tolerate and the number of ESXi hosts in the cluster.

In a cluster of 4 ESXi hosts, only the resources of 3 ESXi hosts are available for use.

SDDC-MGMT-VI-VC-018 When using two availability zones, configure admission control for percentage-based failover based on half of the ESXi hosts in the cluster.

Allocating only half of a stretched cluster ensures that all VMs have enough resources if an availability zone outage occurs.

In a cluster of 8 ESXi hosts, only the resources of 4 ESXi hosts are available for use.

If you add more ESXi hosts to the first cluster in the management domain, add them in pairs, one in each availability zone.

SDDC-MGMT-VI-VC-019 When using two availability zones, set the isolation addresses for the cluster to the gateway IP address for the vSAN network in both availability zones.

Allows vSphere HA to validate complete network isolation if a connection failure between availability zones occurs.

You must manually configure the isolation address.

SDDC-MGMT-VI-VC-020 When using two availability zones, set the advanced cluster setting das.usedefaultisolationaddress to false.

Ensures that vSphere HA uses the manual isolation addresses instead of the default management network gateway address.

None.
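The isolation-address settings in SDDC-MGMT-VI-VC-019 and SDDC-MGMT-VI-VC-020 are applied as advanced vSphere HA options. The following is a minimal pyVmomi sketch of one way to set them, assuming an established vCenter Server connection, that cluster already references the first cluster in the management domain, and that the two addresses are the vSAN gateways of the availability zones; all values are illustrative.

    from pyVmomi import vim

    # Advanced vSphere HA options: use the vSAN gateway in each availability zone
    # as an isolation address and ignore the default management gateway.
    das_options = [
        vim.option.OptionValue(key="das.isolationaddress0", value="172.17.11.253"),  # AZ1 vSAN gateway (illustrative)
        vim.option.OptionValue(key="das.isolationaddress1", value="172.17.21.253"),  # AZ2 vSAN gateway (illustrative)
        vim.option.OptionValue(key="das.usedefaultisolationaddress", value="false"),
    ]

    spec = vim.cluster.ConfigSpecEx(dasConfig=vim.cluster.DasConfigInfo(option=das_options))

    # Apply the settings to the existing cluster configuration.
    task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)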

Table 2-39. Design Decisions on VM and Application Monitoring Service for the Management Domain

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-VC-021 Enable VM Monitoring for each cluster.

VM Monitoring provides in-guest protection for most VM workloads. The application or service running on the virtual machine must be capable of restarting successfully after a reboot; otherwise, restarting the virtual machine is not sufficient.

None.


vSphere DRS Design for the Management Domain

vSphere Distributed Resource Scheduling (vSphere DRS) provides load balancing in a cluster by migrating workloads from heavily loaded ESXi hosts to ESXi hosts with more available resources in the cluster. vSphere DRS supports manual and automatic modes.

Table 2-40. vSphere DRS Automation Modes

Automation Mode Description

Manual vSphere DRS provides recommendations but an administrator must confirm the changes.

Automatic Automatic mode can be set to one of five migration thresholds. At the lowest setting, workloads are placed automatically at power-on and are migrated only to fulfill certain criteria, such as entering maintenance mode. At the highest level, any migration that can provide even a slight improvement in balancing is performed.

When using two availability zones, enable vSphere DRS and create VM-Host group affinity rules for the initial placement of virtual machines. This approach improves read locality by ensuring a local copy of the virtual machine data in each availability zone, and avoids unnecessary vSphere vMotion migration of virtual machines between availability zones.

Because the vSAN stretched cluster is still one cluster, vSphere DRS is unaware that the cluster stretches across different physical locations. As a result, vSphere DRS might decide to move virtual machines between these locations. By using VM-Host group affinity rules, you can constrain virtual machines to an availability zone. Otherwise, if a virtual machine, VM1, that resides in Availability Zone 1 moves across availability zones, VM1 might eventually be running in Availability Zone 2. Because vSAN stretched clusters implement read locality, the cache for the virtual machine in Availability Zone 1 is warm whereas the cache in Availability Zone 2 is cold. This situation might impact the performance of VM1 until its cache in Availability Zone 2 is warmed up.
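The following is a minimal pyVmomi sketch of creating one host group, one virtual machine group, and the corresponding should-run VM-Host affinity rule for Availability Zone 1, assuming an established vCenter Server connection and that cluster, az1_hosts, and az1_vms have already been looked up; all names are illustrative. You repeat the same pattern for Availability Zone 2.

    from pyVmomi import vim

    spec = vim.cluster.ConfigSpecEx(
        groupSpec=[
            vim.cluster.GroupSpec(
                info=vim.cluster.HostGroup(name="az1-hosts", host=az1_hosts),
                operation="add"),
            vim.cluster.GroupSpec(
                info=vim.cluster.VmGroup(name="az1-vms", vm=az1_vms),
                operation="add"),
        ],
        rulesSpec=[
            vim.cluster.RuleSpec(
                info=vim.cluster.VmHostRuleInfo(
                    name="az1-vms-should-run-on-az1-hosts",
                    enabled=True,
                    mandatory=False,                 # should-run rule, not must-run
                    vmGroupName="az1-vms",
                    affineHostGroupName="az1-hosts"),
                operation="add"),
        ])

    task = cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)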

Table 2-41. Design Decisions on vSphere DRS for the Management Domain

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-VC-022 Enable vSphere DRS (Distributed Resource Scheduling) on all clusters, using the default fully automated mode (medium).

Provides the best trade-off between load balancing and unnecessary migration with vSphere vMotion.

If a vCenter Server outage occurs, mapping from virtual machines to ESXi hosts might be difficult to determine.

SDDC-MGMT-VI-VC-023 Create virtual machine groups for use in startup rules in the first cluster in the management domain.

By creating virtual machine groups, you can use rules to configure the startup order of the SDDC management components.

Creating the groups is a manual task and adds administrative overhead.


Table 2-41. Design Decisions on vSphere DRS for the Management Domain (continued)

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-VC-024 Create virtual machine rules to set the startup order of the SDDC management components.

Rules enforce the startup order of virtual machine groups, hence, the startup order of the SDDC management components.

Creating the rules is a manual task and adds administrative overhead.

SDDC-MGMT-VI-VC-025 When using two availability zones, create a host group and add the ESXi hosts in Availability Zone 1 in Region A to it.

Makes it easier to manage which virtual machines run in which availability zone.

You must create and maintain VM-Host DRS group rules.

SDDC-MGMT-VI-VC-026 When using two availability zones, create a host group and add the ESXi hosts in Availability Zone 2 in Region A to it.

Makes it easier to manage which virtual machines run in which availability zone.

You must create and maintain VM-Host DRS group rules.

SDDC-MGMT-VI-VC-027 When using two availability zones, create a virtual machine group and add the virtual machines in Availability Zone 1 in Region A to it.

Ensures that virtual machines are located only in the assigned availability zone.

You must add virtual machines to the allocated group manually.

SDDC-MGMT-VI-VC-028 When using two availability zones, create a virtual machine group and add the virtual machines in Availability Zone 2 in Region A to it.

Ensures that virtual machines are located only in the assigned availability zone.

You must add virtual machines to the allocated group manually.

SDDC-MGMT-VI-VC-029 When using two availability zones, create a should-run VM-Host affinity rule to run the group of virtual machines in Availability Zone 1 on the group of hosts in the same zone.

Ensures that virtual machines are located only in the assigned availability zone.

Creating the rules is a manual task and adds administrative overhead.

SDDC-MGMT-VI-VC-030 When using two availability zones, create a should-run VM-Host affinity rule to run the group of virtual machines in Availability Zone 2 on the group of hosts in the same zone.

Ensures that virtual machines are located only in the assigned availability zone.

Creating the rules is a manual task and adds administrative overhead.

vSphere EVC Design for the Management Domain

If you enable vSphere Enhanced vMotion Compatibility (EVC) in the management domain, the virtual machines of the SDDC management components can be migrated between ESXi hosts containing older CPUs. You can use EVC for a rolling upgrade of all hardware with zero downtime.


vSphere Enhanced vMotion Compatibility (EVC) works by masking certain features of newer CPUs to allow migration between ESXi hosts containing older CPUs. If you set EVC during cluster creation, you can add ESXi hosts with newer CPUs later without disruption. You can use EVC for a rolling upgrade of all hardware with zero downtime.

EVC works only with CPUs from the same manufacturer and there are limits to the version difference gaps between the CPU families.

Table 2-42. Design Decisions on Enhanced vMotion Compatibility for the Management Domain

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-VC-031 Enable Enhanced vMotion Compatibility (EVC) on all clusters in the management domain.

Supports cluster upgrades without virtual machine downtime.

You can enable EVC only if clusters contain hosts with CPUs from the same vendor.

SDDC-MGMT-VI-VC-032 Set the cluster EVC mode to the highest available baseline that is supported for the lowest CPU architecture on the hosts in the cluster.

Supports cluster upgrades without virtual machine downtime.

None

Information Security and Access Control Design for vCenter Server for the Management Domain

You design authentication access, controls, and certificate management for the management domain vCenter Server according to industry standards and the requirements of your organization.

Identity Management

Users can log in to vCenter Server only if they are in a domain that was added as a vCenter Single Sign-On identity source. vCenter Single Sign-On administrator users can add identity sources, or change the settings for identity sources that they added. An identity source can be a native Active Directory (Integrated Windows Authentication) domain or an OpenLDAP directory service. For backward compatibility, Active Directory as an LDAP server is also available.


Table 2-43. Design Decisions on Identity Management in the Management Domain vCenter Server

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-VC-033 Join the management domain vCenter Server to the Active Directory domain for the region that vCenter Server resides in.

n Using Active Directory membership provides greater flexibility in granting access to vCenter Server.

n Ensuring that users log in with a unique user account provides greater visibility for auditing.

Joining vCenter Server to the domain adds some administrative overhead.

SDDC-MGMT-VI-VC-034 Assign global permissions to the vCenter Server inventory to an Active Directory group, such as ug-vc-admin, by using the Administrator role.

By assigning the Administrator role to an Active Directory group, you can easily create user accounts that have administrative rights on vCenter Server.

The Active Directory group must be created before you assign it the Administrator role.

Password Management and Account Lockout Behavior

vCenter Server enforces password requirements for access to the vCenter Server Management Interface. By default, the password must contain at least six characters and must differ from your previous five passwords. Account locking is supported for access to the vCenter Server Management Interface. By default, passwords are set to expire after 90 days.

This design applies a password policy according to security best practices and standards.

Table 2-44. Example Password Policy Specification for vCenter Server

Password Setting Value

Minimum Length 15

Maximum lifetime 60 days

Remember old passwords 5 (the previous five passwords cannot be reused)

Allow similar passwords Deny

Complexity At least 1 upper case, 1 lower case, 1 number, and 1 special character

Max failed login attempts 3

Time interval between failures 900 seconds

Unlock time 0
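The following is a minimal Python sketch of validating a candidate password against the example policy above; only the length, history, and complexity rules are modeled, and the function name is illustrative.

    import re

    def meets_example_policy(password, previous_passwords):
        """Validate a password against the example policy in Table 2-44: at least
        15 characters, one upper case, one lower case, one number, one special
        character, and not one of the previous five passwords."""
        if len(password) < 15 or password in previous_passwords[-5:]:
            return False
        required_classes = [r"[A-Z]", r"[a-z]", r"[0-9]", r"[^A-Za-z0-9]"]
        return all(re.search(pattern, password) for pattern in required_classes)

    print(meets_example_policy("VMware1!VMware1!", previous_passwords=[]))  # True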


Table 2-45. Design Decisions on Password and Account Lockout for the Management Domain vCenter Server

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-VC-035 Configure a password and account lockout policy for the appliance of the management domain vCenter Server according to the industry standard for security and compliance of your organization.

Aligns with the industry standard across your organization.

None.

Certificate Management

Access to all vCenter Server interfaces must use a Secure Sockets Layer (SSL) connection. By default, vCenter Server uses a certificate for the appliance that is signed by the VMware Certificate Authority (VMCA). To provide secure access to the vCenter Server appliance, replace the default certificate with a CA-signed certificate.

Table 2-46. Design Decisions on Certificate Management for the Management Domain vCenter Server

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-VC-036 Replace the default VMCA-signed certificate of the appliance of the management domain vCenter Server with a certificate that is signed by a certificate authority.

Ensures that the communication to the externally facing Web user interface and API of vCenter Server, and between vCenter Server and other management components, is encrypted.

Replacing the default certificates with trusted CA-signed certificates from a certificate authority might increase the deployment preparation time because you must generate and submit certificate requests.

SDDC-MGMT-VI-VC-037 Use a SHA-2 algorithm or stronger for signed certificates.

The SHA-1 algorithm is considered less secure and has been deprecated.

Not all certificate authorities support SHA-2.

vSphere Networking Design for the Management Domain

The network design prevents unauthorized access, and provides timely access to business data. This design uses vSphere Distributed Switch and VMware NSX-T Data Center for virtual networking.

Virtual Network Design Guidelines

The high-level design goals apply regardless of your environment.


Table 2-47. Goals of the vSphere Networking Design

Design Goal Description

Meet diverse needs The network must meet the diverse needs of many different entities in an organization. These entities include applications, services, storage, administrators, and users.

Reduce costs Reducing costs is one of the simpler goals to achieve in the vSphere infrastructure. Server consolidation alone reduces network costs by reducing the number of required network ports and NICs, but a more efficient network design is desirable. For example, configuring two 25-GbE NICs might be more cost effective than configuring four 10-GbE NICs.

Improve performance You can achieve performance improvement and decrease the time that is required to perform maintenance by providing sufficient bandwidth, which reduces contention and latency.

Improve availability A well-designed network improves availability, usually by providing network redundancy.

Support security A well-designed network supports an acceptable level of security through controlled access and isolation, where required.

Enhance infrastructure functionality

You can configure the network to support vSphere features such as vSphere vMotion, vSphere High Availability, and vSphere Fault Tolerance.

Follow networking best practices throughout your environment.

n Separate network services from one another to achieve greater security and better performance.

n Use Network I/O Control and traffic shaping to guarantee bandwidth to critical virtual machines. During network contention, these critical virtual machines will receive a higher percentage of the bandwidth.

n Separate network services on a single vSphere Distributed Switch by attaching them to port groups with different VLAN IDs.

n Keep vSphere vMotion traffic on a separate network.

When a migration using vSphere vMotion occurs, the contents of the memory of the guest operating system are transmitted over the network. You can place vSphere vMotion on a separate network by using a dedicated vSphere vMotion VLAN.

n When using pass-through devices with Linux kernel version 2.6.20 or an earlier guest OS, avoid MSI and MSI-X modes. These modes have significant performance impact.

n For best performance, use VMXNET3 virtual machine NICs.

n Ensure that physical network adapters that are connected to the same vSphere Standard Switch or vSphere Distributed Switch are also connected to the same physical network.

Network Segmentation and VLANs

Separating different types of traffic is required to reduce contention and latency, and for access security.


High latency on any network can negatively affect performance. Some components are more sensitive to high latency than others. For example, reducing latency is important on the IP storage and the vSphere Fault Tolerance logging network because latency on these networks can negatively affect the performance of multiple virtual machines. According to the application or service, high latency on specific virtual machine networks can also negatively affect performance. Use information gathered from the current state analysis and from interviews with key stakeholders and SMEs to determine which workloads and networks are especially sensitive to high latency.

Virtual Networks

Determine the number of networks or VLANs that are required depending on the type of traffic.

n vSphere system traffic

n Management

n vSphere vMotion

n vSAN

n TEP

n Traffic that supports the services and applications in the organization

n Virtual Switch Type Design for the Management Domain

Virtual switches simplify the configuration process by providing a single pane of glass for performing virtual network management tasks.

n vSphere Distributed Switch Design for the Management Domain

Each cluster in the management domain uses a single vSphere Distributed Switch whose design includes traffic types on the switch, the number of required NICs, and MTU configuration.

n Distributed Port Group and VMkernel Adapter Design for the Management Domain

A distributed port group specifies port configuration options for each member port on a vSphere Distributed Switch. Distributed port groups define how a connection is made to a network.

n vMotion TCP/IP Stack Design for the Management Domain

Use the vMotion TCP/IP stack to isolate traffic for vSphere vMotion and to assign a dedicated default gateway for vSphere vMotion traffic.

n vSphere Network I/O Control Design for the Management Domain

Use vSphere Network I/O Control to allocate network bandwidth to management applications and to resolve situations where several types of traffic compete for common resources.


Virtual Switch Type Design for the Management Domain

Virtual switches simplify the configuration process by providing a single pane of glass for performing virtual network management tasks.

vSphere supports two types of virtual switch:

n vSphere Standard Switch

n vSphere Distributed Switch

A distributed switch offers several enhancements over a standard switch, such as a centralized control plane and support for traffic monitoring features.

Table 2-48. Advantages and Disadvantages of vSphere Distributed Switch

Component Description

Centralized management Because distributed switches are created and managed centrally on a vCenter Server system, switch configuration is more consistent across ESXi hosts. Centralized management saves time, reduces mistakes, and reduces operational costs.

Additional features n NetFlow and port mirroring provide monitoring and troubleshooting capabilities to the virtual infrastructure.

n To guarantee that traffic types with high priority have enough network capacity, you can assign shares to these traffic types by using Network I/O Control.

n To ensure that every network adapter is used when the network traffic is high, you can use the Route Based on Physical NIC Load teaming policy. The distributed switch directs the traffic from one physical network adapter to another if the usage of an adapter remains at or above 75% for 30 seconds.

Disadvantages If vCenter Server is unavailable, distributed switches are not manageable. You should therefore consider vCenter Server a Tier 1 application.

Table 2-49. Design Decisions on Virtual Switch Type

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-NET-001 Use vSphere Distributed Switches.

Simplifies management. Migration from a standard switch to a distributed switch requires a minimum of two physical NICs to maintain redundancy.

SDDC-MGMT-VI-NET-002 Use a single vSphere Distributed Switch per cluster.

n Reduces the complexity of the network design.

n Reduces the size of the fault domain.

Increases the number of vSphere Distributed Switches that must be managed.


vSphere Distributed Switch Design for the Management Domain

Each cluster in the management domain uses a single vSphere Distributed Switch whose design includes traffic types on the switch, the number of required NICs, and MTU configuration.

Table 2-50. sfo-m01-cl01-vds01 vSphere Distributed Switch Configuration

Number of Physical NIC Ports Network I/O Control MTU Size

2 Enabled 9000

Table 2-51. Physical Uplinks on sfo-m01-cl01-vds01 vSphere Distributed Switch

Physical NIC Function

vmnic0 Uplink

vmnic1 Uplink

Table 2-52. Design Decisions for vSphere Distributed Switch

Design ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-NET-003 Enable Network I/O Control on the vSphere Distributed Switch of the management domain cluster.

Increases resiliency and performance of the network.

If configured incorrectly, Network I/O Control might impact network performance for critical traffic types.

SDDC-MGMT-VI-NET-004 Configure the MTU size of the vSphere Distributed Switch to 9000 for jumbo frames.

n Supports the MTU size required by system traffic types.

n Improves traffic throughput.

When adjusting the MTU packet size, you must also configure the entire network path (VMkernel ports, virtual switches, physical switches, and routers) to support the same MTU packet size.
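The following is a minimal pyVmomi sketch of applying the 9000-byte MTU to an existing switch, assuming an established vCenter Server connection and that dvs already references the sfo-m01-cl01-vds01 distributed switch. The physical network path must carry the same MTU for jumbo frames to work end to end.

    from pyVmomi import vim

    # Reconfigure the distributed switch for jumbo frames.
    spec = vim.dvs.VmwareDistributedVirtualSwitch.ConfigSpec()
    spec.configVersion = dvs.config.configVersion  # required when reconfiguring
    spec.maxMtu = 9000                             # jumbo frames for system traffic

    task = dvs.ReconfigureDvs_Task(spec)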

vSphere Distributed Switch Health Check Design

The health check service helps identify and troubleshoot the following configuration errors in vSphere distributed switches:

n Mismatching VLAN trunks between an ESXi host and the physical switches to which it is connected.

n Mismatching MTU settings between physical network adapters, distributed switches, and physical switch ports.

n Mismatching virtual switch teaming policies for the physical switch port-channel settings.

Health check monitors VLAN, MTU, and teaming policies. Health check is limited to the access switch port to which the NICs of the ESXi hosts are connected.


Table 2-53. Health Check in vSphere Distributed Switch

Monitored Parameter Description

VLANs Checks whether the VLAN settings on the distributed switch match the trunk port configuration on the connected physical switch ports.

MTU For each VLAN, determines whether the MTU size configuration for jumbo frames on the physical access switch port matches the distributed switch MTU setting.

Teaming policies Determines whether the connected access ports of the physical switch that participate in an EtherChannel are paired with distributed ports whose teaming policy is Route based on IP hash.

Table 2-54. Design Decisions for vSphere Distributed Switch Health Check

Design ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-NET-005 Enable vSphere Distributed Switch Health Check on all distributed switches.

vSphere Distributed Switch Health Check verifies that all VLANs are trunked to all ESXi hosts attached to the vSphere Distributed Switch and that the MTU sizes match the physical network.

n In a multiple availability zone configuration, some VLANs are not available to all ESXi hosts in the cluster, which triggers alarms.

n You must have a minimum of two physical uplinks to use this feature.
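The following is a minimal pyVmomi sketch of enabling the VLAN/MTU and teaming health checks on a distributed switch, assuming an established vCenter Server connection and that dvs already references the switch; the interval is in minutes.

    from pyVmomi import vim

    health_check_config = [
        vim.dvs.VmwareDistributedVirtualSwitch.VlanMtuHealthCheckConfig(
            enable=True, interval=1),   # VLAN and MTU checks
        vim.dvs.VmwareDistributedVirtualSwitch.TeamingHealthCheckConfig(
            enable=True, interval=1),   # teaming and failover checks
    ]

    task = dvs.UpdateDVSHealthCheckConfig_Task(healthCheckConfig=health_check_config)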

Distributed Port Group and VMkernel Adapter Design for the Management Domain

A distributed port group specifies port configuration options for each member port on a vSphere Distributed Switch. Distributed port groups define how a connection is made to a network.

vSphere Distributed Switch introduces two abstractions that you use to create consistent networking configuration for physical NICs, virtual machines, and VMkernel traffic.

Uplink port group

An uplink port group or dvuplink port group is defined during the creation of the distributed switch and can have one or more uplinks. An uplink is a template that you use to configure physical connections of hosts as well as failover and load balancing policies. You map physical NICs of hosts to uplinks on the distributed switch. You set failover and load balancing policies over uplinks and the policies are automatically propagated to the host proxy switches, or the data plane.

Distributed port group


Distributed port groups provide network connectivity to virtual machines and accommodate VMkernel traffic. You identify each distributed port group by using a network label, which must be unique to the current data center. You configure NIC teaming, failover, load balancing, VLAN, security, traffic shaping, and other policies on distributed port groups. As with uplink port groups, the configuration that you set on distributed port groups on vCenter Server (the management plane) is automatically propagated to all hosts on the distributed switch through their host proxy switches (the data plane).

Table 2-55. Distributed Port Group Configuration

Parameter Setting

Failover detection Link status only

Notify switches Yes

Failback Yes

Failover order Active uplinks: Uplink1, Uplink2

Figure 2-10. vSphere Distributed Switch Design for Management Domain

(The figure shows a sample ESXi management host with two physical NICs, nic0 and nic1, connected to the sfo-m01-cl01-vds01 distributed switch, which carries VLANs for ESXi management, vMotion, NFS, host overlay (host TEP), uplink01, uplink02, and vSAN.)


Table 2-56. Port Group Binding and Teaming

vSphere Distributed Switch Port Group Name Port Binding Teaming Policy Active Uplinks

sfo-m01-cl01-vds01 sfo01-m01-cl01-vds01-pg-mgmt

Ephemeral Port Binding

Route based on physical NIC load

1, 2

sfo-m01-cl01-vds01 sfo01-m01-cl01-vds01-pg-vmotion

Static Port Binding Route based on physical NIC load

1, 2

NIC Teaming

For a predictable level of performance, use multiple network adapters in one of the following configurations.

n An active-passive configuration that uses explicit failover when connected to two separate switches.

n An active-active configuration in which two or more physical NICs in the server are assigned the active role.

This design uses an active-active configuration.

Table 2-57. Design Decisions on Distributed Port Groups

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-NET-006 Use ephemeral port binding for the management port group.

Using ephemeral port binding provides the option for recovery of the vCenter Server instance that is managing the distributed switch.

Port-level permissions and controls are lost across power cycles, and no historical context is saved.

SDDC-MGMT-VI-NET-007 Use static port binding for all non-management port groups.

Static binding ensures that a virtual machine connects to the same port on the vSphere Distributed Switch. This allows for historical data and port-level monitoring.

None

SDDC-MGMT-VI-NET-008 Use the Route based on physical NIC load teaming algorithm for the management port group.

Reduces the complexity of the network design and increases resiliency and performance.

None.

SDDC-MGMT-VI-NET-009 Use the Route based on physical NIC load teaming algorithm for the vMotion Port Group.

Reduces the complexity of the network design and increases resiliency and performance.

None.

VMkernel Network Adapter Configuration

The VMkernel networking layer provides connectivity to hosts and handles the system traffic for vSphere vMotion, IP storage, vSphere HA, vSAN, and others.


You can also create VMkernel network adapters on the source and target vSphere Replication hosts to isolate the replication data traffic.

Table 2-58. VMkernel Adapters for the Management Domain

vSphere Distributed Switch

Availability Zones

Network Function

Connected Port Group Enabled Services MTU Size (Bytes)

sfo-m01-cl01-vds01 Availability Zone 1

Management sfo01-m01-cl01-vds01-pg-mgmt

Management Traffic

1500 (Default)

sfo-m01-cl01-vds01 vMotion sfo01-m01-cl01-vds01-pg-vmotion

vMotion Traffic 9000

sfo-m01-cl01-vds01 vSAN sfo01-m01-cl01-vds01-pg-vsan

vSAN 9000

sfo-m01-cl01-vds01 Availability Zone 2

Management az2_sfo01-m01-cl01-vds01-pg-mgmt

Management Traffic

1500 (Default)

sfo-m01-cl01-vds01 vMotion az2_sfo01-m01-cl01-vds01-pg-vmotion

vMotion Traffic 9000

sfo-m01-cl01-vds01 vSAN az2_sfo01-m01-cl01-vds01-pg-vsan

vSAN 9000

vMotion TCP/IP Stack Design for the Management Domain

Use the vMotion TCP/IP stack to isolate traffic for vSphere vMotion and to assign a dedicated default gateway for vSphere vMotion traffic.

By using a separate TCP/IP stack, you can manage vSphere vMotion and cold migration traffic according to the topology of the network, and as required by your organization.

n Route the traffic for the migration of virtual machines that are powered on or powered off by using a default gateway that is different from the gateway assigned to the default stack on the ESXi host.

n Assign a separate set of buffers and sockets.

n Avoid routing table conflicts that might otherwise appear when many features are using a common TCP/IP stack.

n Isolate traffic to improve security.


Table 2-59. Design Decisions on the vMotion TCP/IP Stack

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-NET-010 Use the vMotion TCP/IP stack for vSphere vMotion traffic.

By using the vMotion TCP/IP stack, vSphere vMotion traffic can be assigned a default gateway on its own subnet and can go over Layer 3 networks.

In the vSphere Client, the vMotion TCP/IP stack is not available in the wizard for creating a VMkernel network adapter at the distributed port group level. You must create the VMkernel adapter directly on the ESXi host.
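Because the wizard does not offer the vMotion TCP/IP stack at the distributed port group level, the adapter is created against the ESXi host directly. The following is a minimal pyVmomi sketch, assuming an established connection and that host, dvs_uuid, and vmotion_portgroup_key have already been looked up; the IP values are illustrative.

    from pyVmomi import vim

    # VMkernel adapter specification on the dedicated vMotion TCP/IP stack.
    vnic_spec = vim.host.VirtualNic.Specification(
        ip=vim.host.IpConfig(dhcp=False,
                             ipAddress="172.16.12.101",       # illustrative vMotion IP
                             subnetMask="255.255.255.0"),
        mtu=9000,
        netStackInstanceKey="vmotion",                        # vMotion TCP/IP stack
        distributedVirtualPort=vim.dvs.PortConnection(
            switchUuid=dvs_uuid,                              # looked up earlier (assumption)
            portgroupKey=vmotion_portgroup_key))              # looked up earlier (assumption)

    # Create the adapter through the host network system.
    host.configManager.networkSystem.AddVirtualNic(portgroup="", nic=vnic_spec)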

vSphere Network I/O Control Design for the Management Domain

Use vSphere Network I/O Control to allocate network bandwidth to management applications and to resolve situations where several types of traffic compete for common resources.

When Network I/O Control is enabled, the distributed switch allocates bandwidth for the traffic that is related to the main vSphere features.

n Fault tolerance traffic

n iSCSI traffic

n vSphere vMotion traffic

n Management traffic

n VMware vSphere Replication traffic

n NFS traffic

n vSAN traffic

n Backup traffic

n Virtual machine traffic

Network I/O Control Heuristics

The following heuristics can help with design decisions for Network I/O Control.

Shares and Limits

When you use bandwidth allocation, consider using shares instead of limits. Limits impose a hard cap on the amount of bandwidth used by a traffic flow even when network bandwidth is available.

Limits on Network Resource Pools


Consider imposing limits on a given network resource pool. For example, if you put a limit on vSphere vMotion traffic, you can benefit in situations where multiple vSphere vMotion data transfers, initiated on different ESXi hosts at the same time, result in oversubscription at the physical network level. By limiting the available bandwidth for vSphere vMotion at the ESXi host level, you can prevent performance degradation for other traffic.

Teaming Policy

When you use Network I/O Control, use Route based on physical NIC load teaming as the distributed switch teaming policy to maximize networking capacity utilization. With load-based teaming, traffic might move among uplinks, which can occasionally result in reordering of packets at the receiver.

Traffic Shaping

Use distributed port groups to apply configuration policies to different traffic types. Traffic shaping can help in situations where multiple vSphere vMotion migrations initiated on different ESXi hosts converge on the same destination ESXi host. The actual limit and reservation also depend on the traffic shaping policy for the distributed port group to which the adapter is connected.

How Network I/O Control Works

Network I/O Control enforces the share value specified for the different traffic types when network contention occurs. Network I/O Control applies the share values set for each traffic type. As a result, less important traffic, as defined by the share percentage, is throttled, granting more network resources to the more important traffic types.

Network I/O Control also supports reservation of bandwidth for system traffic based on the capacity of the physical adapters on an ESXi host, and enables fine-grained resource control at the virtual machine network adapter level. Resource control is similar to the model for CPU and memory reservations in vSphere DRS.
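The following is a minimal Python sketch of how share values translate into bandwidth during contention, assuming a single saturated 25-GbE uplink, that only the listed traffic types are active, and the default numeric share values of Low (25), Normal (50), and High (100); the numbers are illustrative.

    def bandwidth_by_shares(link_gbps, shares):
        """Split saturated uplink bandwidth across active traffic types
        in proportion to their Network I/O Control share values."""
        total_shares = sum(shares.values())
        return {traffic: round(link_gbps * value / total_shares, 2)
                for traffic, value in shares.items()}

    # Share values matching the design decisions in the following table:
    # management Normal (50), vSphere vMotion Low (25), virtual machine High (100),
    # and vSAN High (100).
    print(bandwidth_by_shares(25.0, {"management": 50, "vmotion": 25,
                                     "virtual_machine": 100, "vsan": 100}))
    # {'management': 4.55, 'vmotion': 2.27, 'virtual_machine': 9.09, 'vsan': 9.09}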


Table 2-60. Design Decisions on vSphere Network I/O Control

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-NET-011 Set the share value for management traffic to Normal.

By keeping the default setting of Normal, management traffic is prioritized higher than vSphere vMotion and vSphere Replication but lower than vSAN traffic. Management traffic is important because it ensures that the hosts can still be managed during times of network contention.

None.

SDDC-MGMT-VI-NET-012 Set the share value for vSphere vMotion traffic to Low.

During times of network contention, vSphere vMotion traffic is not as important as virtual machine or storage traffic.

During times of network contention, vMotion takes longer than usual to complete.

SDDC-MGMT-VI-NET-013 Set the share value for virtual machines to High.

Virtual machines are the most important asset in the SDDC. Leaving the default setting of High ensures that they always have access to the network resources they need.

None.

SDDC-MGMT-VI-NET-014 Set the share value for vSAN traffic to High.

During times of network contention, vSAN traffic needs a guaranteed bandwidth to support virtual machine performance.

None.

SDDC-MGMT-VI-NET-015 Set the share value for vSphere Fault Tolerance to Low.

This design does not use vSphere Fault Tolerance. Fault tolerance traffic can be given the lowest priority.

None.
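To make the effect of these share values concrete, the following minimal sketch (an illustration, not part of the validated design) computes the bandwidth each traffic type is guaranteed on a saturated 25 GbE uplink, using the standard vSphere share-level values of Low = 25, Normal = 50, and High = 100. System traffic types that keep their defaults are omitted for brevity.

```python
# Approximate minimum bandwidth per traffic type on a fully contended 25 GbE uplink,
# given the share levels from decisions SDDC-MGMT-VI-NET-011 through 015.
# Share-level values follow the standard vSphere mapping: Low=25, Normal=50, High=100.
LINK_GBPS = 25
SHARES = {
    "Management (Normal)": 50,
    "vSphere vMotion (Low)": 25,
    "Virtual machine (High)": 100,
    "vSAN (High)": 100,
    "Fault tolerance (Low)": 25,
}

total = sum(SHARES.values())
for traffic_type, share in SHARES.items():
    guaranteed = LINK_GBPS * share / total
    print(f"{traffic_type}: {guaranteed:.1f} Gbps minimum under full contention")

# Shares only matter during contention among the traffic types that are active at
# that moment; when the uplink is idle, any traffic type can use the full bandwidth.
```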

Software-Defined Networking Design for the Management Domain

In this design, you use NSX-T Data Center for connecting the management workloads by using virtual network segments and routing. You also create constructs for region-specific and cross-region solutions. These constructs isolate the solutions from the rest of the network, providing routing to the data center and load balancing.

NSX-T Data Center

NSX-T Data Center provides network virtualization capabilities in the management domain. With network virtualization, networking components that are usually part of the physical infrastructure can be programmatically created and managed by using this software-defined network (SDN) platform. NSX-T Data Center provides both a declarative, intent-based policy model and an imperative model to define and manage the SDN.

The deployment of NSX-T Data Center includes management, control plane, and services components. For the management domain, all these components run in the first cluster in the management domain to support the SDN needs of the management domain itself.

NSX-T Manager

NSX-T Manager provides the user interface and the RESTful API for creating, configuring, and monitoring NSX-T components, such as virtual network segments, and Tier-0 and Tier-1 gateways.

NSX-T Manager implements the management and control plane for the NSX-T infrastructure. NSX-T Manager is the centralized network management component of NSX-T, providing an aggregated view on all components in the NSX-T Data Center system.

Table 2-61. Components of NSX-T Manager

Component Description

Services n Logical switching and routing

n Networking and edge services

n Security services and distributed firewall

RESTful API You can automate all configuration and monitoring operations by using any cloud automation platform, security vendor platform, or automation framework.

Management Plane Agent (MPA)

Available on each ESXi host. The MPA is responsible for persisting the desired state of the system and for communicating non-flow-controlling (NFC) messages such as configuration, statistics, status, and real-time data between transport nodes and the management plane.

NSX-T Controller NSX-T Controllers implement the central control plane (CCP). They control the virtual networks and overlay transport tunnels. The controllers are responsible for the programmatic deployment of virtual networks across the entire NSX-T architecture.

The CCP is logically separated from all data plane traffic, that is, a failure in the control plane does not affect existing data plane operations. The controller provides configuration to other NSX-T Data Center components, such as segment, gateway, and edge node configuration.

Integration with vCenter Server

NSX-T Data Center components are not assigned to a specific vCenter Server or vSphere construct. You can share them across different vSphere environments.

NSX-T Edge Nodes

An NSX-T Edge node is a special type of transport node which contains service router components.

NSX-T Edge nodes provide north-south traffic connectivity between the physical data center networks and the NSX-T SDN networks. Each NSX-T Edge node has multiple interfaces where traffic flows.

You also use the NSX-T Edge nodes in east-west traffic flow between virtualized workloads. They provide stateful services such as load balancers and DHCP. In a multi-region deployment, east-west traffic between the regions flows through the NSX-T Edge nodes too.

Logical Design for NSX-T Data Center for the Management Domain

NSX-T Data Center provides networking services to SDDC management workloads such as load balancing, routing, and virtual networking. NSX-T Data Center is connected to the region-specific Workspace ONE Access for central user management.

Figure 2-11. NSX-T Logical Design for the Management Domain

The figure shows the management domain vCenter Server, the region-specific Workspace ONE Access instance for identity management, and the supporting infrastructure (DNS and NTP). User interface and API access goes to the three-node NSX-T Local Manager cluster behind an internal VIP load balancer. The NSX-T transport nodes consist of the four ESXi hosts in the management cluster and the two nodes of the NSX-T Edge cluster.

An NSX-T Data Center deployment consists of these components:

n Unified appliances that have both the NSX-T Local Manager and NSX-T Controller roles. They provide management and control plane capabilities.

n NSX-T Edge nodes that provide advanced services such as load balancing, and north-south connectivity.

n The ESXi hosts within the management domain are registered as NSX-T transport nodes to provide distributed routing and firewall services to management workloads.

Component Single Availability Zone Multiple Availability Zones

NSX-T Manager Cluster n Three medium-size NSX-T Local Manager nodes with an internal virtual IP (VIP) address for high availability

n vSphere HA protects the NSX-T Manager cluster nodes applying high restart priority

n vSphere DRS rule keeps the NSX-T Manager nodes running on different hosts

n Three medium-size NSX-T Local Manager nodes in Availability Zone 1 with an internal virtual IP (VIP) address for high availability

n vSphere should-run DRS rule keeps the NSX-T Manager nodes running in Availability Zone 1. Failover to Availability Zone 2 occurs only if a failure in Availability Zone 1 occurs.

n In the availability zone, vSphere HA protects the cluster nodes applying high restart priority

n In the availability zone, vSphere DRS rule keeps the nodes running on different hosts

NSX-T Edge Cluster n Two medium-size NSX-T Edge nodes

n vSphere HA protects the NSX-T Edge nodes applying high restart priority

n vSphere DRS rule keeps the NSX-T Edge nodes running on different hosts

n Two medium-size NSX-T Edge nodes in Availability Zone 1

n vSphere should-run DRS rule keeps the NSX-T Edge nodes running in Availability Zone 1. Failover to Availability Zone 2 occurs only if a failure in Availability Zone 1 occurs.

n In the availability zone, vSphere HA protects the NSX-T Edge nodes applying high restart priority

n In the availability zone, vSphere DRS rule keeps the NSX-T Edge nodes running on different hosts

Transport Nodes n Four ESXi host transport nodes

n Two Edge transport nodes

n In each availability zone, four ESXi host transport nodes

n Two Edge transport nodes in Availability Zone 1

Transport Zones n One VLAN transport zone for NSX-T Edge uplink traffic

n One overlay transport zone for SDDC management components and NSX-T Edge nodes

n One VLAN transport zone for NSX-T Edge uplink traffic

n One overlay transport zone for SDDC management components and NSX-T Edge nodes

VLANs and IP Subnets n Management

n vSphere vMotion

n vSAN

n Host Overlay

n NFS

n Uplink01

n Uplink02

n Edge Overlay

Networks for Availability Zone 1:

n Management (stretched)

n vSAN

n vSphere vMotion

n Host Overlay

n NFS

n Uplink01 (stretched)

n Uplink02 (stretched)

n Edge Overlay (stretched)

Networks for Availability Zone 2:

n Management (stretched)

n vSAN

n vSphere vMotion

n Host Overlay

Routing Configuration BGP BGP with ingress and egress traffic to Availability Zone 1 with limited exceptions.

Physical Network Infrastructure Design for NSX-T Data Center for the Management Domain

Design of the physical data center network includes defining the network topology for connecting the physical switches and the ESXi hosts, determining switch port settings for VLANs and link aggregation, and designing routing.

A software-defined network (SDN) both integrates with and uses components of the physical data center. SDN integrates with your physical network to support east-west transit in the data center and north-south transit to and from the SDDC networks.

Several typical data center network deployment topologies exist:

n Core-Aggregation-Access

n Leaf-Spine

n Hardware SDN

VMware Validated Design uses the leaf-spine networking topology because, in a single data center deployment, it provides predictable performance, scalability, and applicability across multiple vendors.

In an environment with multiple availability zones, Layer 2 networks must be stretched between the availability zones by the physical infrastructure. You also must provide a Layer 3 gateway that is highly available between availability zones. The method for stretching these Layer 2 networks and providing a highly available Layer 3 gateway is vendor-specific.

In an environment with multiple availability zones or regions, dynamic routing is needed so that networks can fail over ingress and egress traffic from availability zone to availability zone, or from region to region. This design uses BGP as the dynamic routing protocol. As such, BGP must be present in the customer environment to facilitate the failover of networks from site to site. Because of the complexity of a local-ingress, local-egress configuration, it is not generally used. In this design, network traffic flows in and out of a primary site.

Switch Types and Network Connectivity

Follow the best practices for physical switches, switch connectivity, VLANs and subnets, and access port settings.

Figure 2-12. Host-to-ToR Connectivity

The figure shows an ESXi host connected to two top of rack switches, with one 25 GbE uplink to each switch.

Table 2-62. Design Components for Physical Switches in the SDDC

Design Component Configuration Best Practices

Top of rack (ToR) physical switches

n Configure redundant physical switches to enhance availability.

n Configure switch ports that connect to ESXi hosts manually as trunk ports.

n Modify the Spanning Tree Protocol (STP) on any port that is connected to an ESXi NIC to reduce the time to transition ports over to the forwarding state, for example using the Trunk PortFast feature found in a Cisco physical switch.

n Provide DHCP or DHCP Helper capabilities on all VLANs used by TEP VMkernel ports. This setup simplifies the configuration by using DHCP to assign IP address based on the IP subnet in use.

n Configure jumbo frames on all switch ports, inter-switch link (ISL), and switched virtual interfaces (SVIs).

Top of rack connectivity and network settings

Each ESXi host is redundantly connected to the SDDC network fabric by two 25 GbE ports, one to each top of rack switch. Configure the top of rack switches to provide all necessary VLANs using an 802.1Q trunk. These redundant connections use features in vSphere Distributed Switch and NSX-T to guarantee that no physical interface is overrun and that available redundant paths are used.

VLANs and Subnets for Single Region and Single Availability Zone

Each ESXi host uses VLANs and corresponding subnets.

Follow these guidelines:

n Use only /24 subnets to reduce confusion and mistakes when handling IPv4 subnetting.

n Use the IP address .254 as the floating gateway interface, with .252 and .253 for Virtual Router Redundancy Protocol (VRRP) or Hot Standby Router Protocol (HSRP).

n Use the RFC1918 IPv4 address space for these subnets and allocate one octet by region and another octet by function.

Note Implement VLAN and IP subnet configuration according to the requirements of your organization.

Table 2-63. VLANs and IP Ranges in This Design

Function VLAN ID IP Range

Management 1611 172.16.11.0/24

vSphere vMotion 1612 172.16.12.0/24

vSAN 1613 172.16.13.0/24

Host Overlay 1614 172.16.14.0/24

NFS 1615 172.16.15.0/24

Uplink01 2711 172.27.11.0/24

Uplink02 2712 172.27.12.0/24

Edge Overlay 2713 172.27.13.0/24
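As an illustration of the octet-per-region, octet-per-function convention behind Table 2-63, the short sketch below derives the subnets, gateway addresses, and VLAN IDs from two octets. The VLAN-ID-from-octets pattern is an observation about this particular table, not a general requirement, and the octet values are placeholders to adapt to your own IP plan.

```python
# Illustrative sketch of the "one octet by region, one octet by function" scheme.
# In this table, the second octet distinguishes the management networks (16) from
# the edge and uplink networks (27) of the region; the third octet encodes the
# function. The VLAN ID happens to be the two octets concatenated (16, 11 -> 1611).
import ipaddress

FUNCTIONS = {
    "Management": (16, 11),
    "vSphere vMotion": (16, 12),
    "vSAN": (16, 13),
    "Host Overlay": (16, 14),
    "NFS": (16, 15),
    "Uplink01": (27, 11),
    "Uplink02": (27, 12),
    "Edge Overlay": (27, 13),
}

for name, (second_octet, function_octet) in FUNCTIONS.items():
    subnet = ipaddress.ip_network(f"172.{second_octet}.{function_octet}.0/24")
    vlan_id = second_octet * 100 + function_octet
    gateway = subnet.network_address + 254               # floating gateway at .254
    peers = (subnet.network_address + 252, subnet.network_address + 253)
    print(f"{name}: VLAN {vlan_id}, subnet {subnet}, gateway {gateway}, "
          f"VRRP/HSRP peers {peers[0]} and {peers[1]}")
```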

VLANs and Subnets for Multiple Availability Zones

In an environment with multiple availability zones, you can apply the following VLAN and subnet configuration. The management, Uplink01, Uplink02, and Edge Overlay networks in each availability zone must be stretched to facilitate failover of the NSX-T Edge appliances between availability zones. The Layer 3 gateway for the management and Edge Overlay networks must be highly available across the availability zones.

Function | Availability Zone 1 | Availability Zone 2 | VLAN ID | IP Range | HA Layer 3 Gateway

Management | ✓ | ✓ | 1611 (Stretched) | 172.16.11.0/24 | ✓

vSphere vMotion | ✓ | X | 1612 | 172.16.12.0/24 | ✓

vSAN | ✓ | X | 1613 | 172.16.13.0/24 | ✓

Host Overlay | ✓ | X | 1614 | 172.16.14.0/24 | ✓

Uplink01 | ✓ | ✓ | 2711 (Stretched) | 172.27.11.0/24 | X

Uplink02 | ✓ | ✓ | 2712 (Stretched) | 172.27.12.0/24 | X

Edge Overlay | ✓ | ✓ | 2713 (Stretched) | 172.27.13.0/24 | ✓

Management | X | ✓ | 1621 | 172.16.21.0/24 | ✓

vSphere vMotion | X | ✓ | 1622 | 172.16.22.0/24 | ✓

vSAN | X | ✓ | 1623 | 172.16.23.0/24 | ✓

Host Overlay | X | ✓ | 1624 | 172.16.24.0/24 | ✓

Physical Network Requirements

Physical requirements determine the MTU size for networks that carry overlay traffic, dynamic routing support, time synchronization through an NTP server, and forward and reverse DNS resolution.

Requirement Comment

Use 25 GbE (10 GbE minimum) port on each ToR switch for ESXi host uplinks. Connect each host to two ToR switches.

25 GbE provides required bandwidth for hyperconverged networking traffic. Connection to two ToR switches provides redundant physical network paths to each host.

Provide an MTU size of 1,700 bytes or greater on any network that carries Geneve overlay traffic.

Geneve packets cannot be fragmented. The MTU size must be large enough to support extra encapsulation overhead.

Geneve is an extensible protocol, therefore the MTU size might increase with future capabilities. While 1,600 is sufficient, an MTU size of 1,700 bytes provides more room for increasing the Geneve MTU without the need to change physical infrastructure MTU.

This design uses an MTU size of 9,000 bytes for Geneve traffic.

Enable BGP dynamic routing support on the upstream Layer 3 devices.

You use BGP on the upstream Layer 3 devices to establish routing adjacency with the Tier-0 service routers (SRs). NSX-T supports only the BGP routing protocol.

Dynamic routing enables ECMP failover for upstream connectivity.

BGP Autonomous System Number (ASN) allocation A BGP ASN must be allocated for the NSX-T SDN. Use a private ASN according to RFC1930.
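If you automate the ASN allocation, a small check like the following sketch can guard against accidentally choosing a public ASN. The ranges reflect RFC 1930 as refined by RFC 6996 for 16-bit and 32-bit private ASNs.

```python
# Sanity check that a chosen BGP ASN falls in a private range.
# RFC 1930 reserved 64512-65535; RFC 6996 designates 64512-65534 (16-bit) and
# 4200000000-4294967294 (32-bit) for private use. 65535 and 4294967295 are
# reserved values and excluded here.
PRIVATE_ASN_RANGES = [(64512, 65534), (4200000000, 4294967294)]

def is_private_asn(asn: int) -> bool:
    return any(low <= asn <= high for low, high in PRIVATE_ASN_RANGES)

assert is_private_asn(65000)       # a typical lab choice for the NSX-T SDN
assert not is_private_asn(65536)   # start of the public 32-bit ASN space
```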

Physical Network Design Decisions

The physical network design decisions determine the physical layout and use of VLANs. They also include decisions on jumbo frames, and on network-related requirements such as DNS and NTP.

Table 2-64. Design Decisions on the Physical Network Infrastructure for NSX-T Data Center

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-001 Use two ToR switches for each rack.

Supports the use of two 10 GbE (25 GbE or greater recommended) links to each server and provides redundancy and reduces the overall design complexity.

Requires two ToR switches per rack which can increase costs.

SDDC-MGMT-VI-SDN-002 Implement the following physical network architecture:

n One 25 GbE (10 GbE minimum) port on each ToR switch for ESXi host uplinks.

n Layer 3 device that supports BGP.

n Provides availability during a switch failure.

n Provides support for BGP as the only dynamic routing protocol that is supported by NSX-T Data Center.

n Might limit the hardware choices.

n Requires dynamic routing protocol configuration in the physical network

SDDC-MGMT-VI-SDN-003 Do not use EtherChannel (LAG, LACP, or vPC) configuration for ESXi host uplinks

n Simplifies configuration of top of rack switches.

n Teaming options available with vSphere Distributed Switch and N-VDS provide load balancing and failover.

n EtherChannel implementations might have vendor-specific limitations.

None.

SDDC-MGMT-VI-SDN-004 Use a physical network that is configured for BGP routing adjacency

n Supports flexibility in network design for routing multi-site and multi-tenancy workloads.

n BGP is the only dynamic routing protocol that is supported by NSX-T.

n Supports failover between ECMP Edge uplinks.

Requires BGP configuration in the physical network.

Access Port Network Settings

Configure additional network settings on the access ports that connect the ToR switches to the corresponding servers.

Table 2-65. Access Port Network Configuration

Setting Value

Spanning Tree Protocol (STP) Although this design does not use the Spanning Tree Protocol, switches usually include STP configured by default. Designate the access ports as trunk PortFast.

Trunking Configure the VLANs as members of a 802.1Q trunk. Optionally, the management VLAN can act as the native VLAN.

MTU n Set the MTU for management VLANs and SVIs to 1,500 bytes.

n Set the MTU for vSphere vMotion, vSAN, NFS, uplinks, host overlay, and edge overlay VLANs and SVIs to 9,000 bytes.

DHCP Helper Configure a DHCP helper (sometimes called a DHCP relay) on all TEP VLANs. Set the DHCP helper (relay) to point to a DHCP server by IPv4 address.

Table 2-66. Design Decisions on Access Ports for NSX-T Data Center

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-005 Assign persistent IP configurations to each management component in the SDDC with the exception for NSX-T tunnel endpoints (TEPs) that use dynamic IP allocation.

Ensures that endpoints have a persistent management IP address.

In VMware Cloud Foundation, you assign storage (vSAN and NFS) and vSphere vMotion IP configurations by using user-defined network pools.

Requires precise IP address management.

SDDC-MGMT-VI-SDN-006 Set the lease duration for the TEP DHCP scope to at least 7 days.

NSX-T TEPs are assigned by using a DHCP server.

n NSX-T TEPs do not have an administrative endpoint. As a result, they can use DHCP for automatic IP address assignment. If you must change or expand the subnet, changing the DHCP scope is simpler than creating an IP pool and assigning it to the ESXi hosts.

n DHCP simplifies the configuration of default gateway for TEP if hosts within same cluster are on separate Layer 2 domains.

Requires configuration and management of a DHCP server.

SDDC-MGMT-VI-SDN-007 Use VLANs to separate physical network functions.

n Supports physical network connectivity without requiring many NICs.

n Isolates the different network functions of the SDDC so that you can have differentiated services and prioritized traffic as needed.

Requires uniform configuration and presentation on all the trunks that are made available to the ESXi hosts.

Jumbo Frames

IP storage throughput can benefit from the configuration of jumbo frames. Increasing the per-frame payload from 1500 bytes to the jumbo frame setting improves the efficiency of data transfer. You must configure jumbo frames end-to-end. Select an MTU that matches the MTU of the physical switch ports.

n According to the purpose of the workload, determine whether to configure jumbo frames on a virtual machine. If the workload consistently transfers large amounts of network data, configure jumbo frames, if possible. In that case, confirm that both the virtual machine operating system and the virtual machine NICs support jumbo frames.

n Using jumbo frames also improves the performance of vSphere vMotion.

n The Geneve overlay requires an MTU value of 1600 bytes or greater.
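The arithmetic behind the 1,600-byte minimum is straightforward; the sketch below sums the standard encapsulation headers around a 1,500-byte workload frame. The 48-byte budget for Geneve options is an assumption, which is why the design recommends the additional headroom of 1,700 or 9,000 bytes.

```python
# Back-of-the-envelope check of why networks that carry Geneve traffic need an MTU
# of at least 1,600 bytes. Header sizes are the standard lengths; Geneve options
# vary per packet, so the option budget below is an assumption.
inner_payload = 1500      # default MTU of the encapsulated workload frame
inner_ethernet = 14       # encapsulated Ethernet header
outer_ipv4 = 20           # outer IPv4 header (no options)
outer_udp = 8             # UDP header (Geneve uses UDP port 6081)
geneve_base = 8           # fixed Geneve header
geneve_options = 48       # assumed budget for Geneve TLV options

required = (inner_payload + inner_ethernet + outer_ipv4 + outer_udp
            + geneve_base + geneve_options)
print(required)  # 1598 -> hence the 1,600-byte minimum and the 1,700-byte recommendation
```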

Table 2-67. Design Decisions on the Jumbo Frames for NSX-T Data Center

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-008 Set the MTU size to at least 1,700 bytes (recommended 9,000 bytes for jumbo frames) on the physical switch ports, vSphere Distributed Switches, vSphere Distributed Switch port groups, and N-VDS switches that support the following traffic types:

n Overlay (Geneve)

n vSAN

n vSphere vMotion

n NFS

n Improves traffic throughput.

n Supports Geneve by increasing the MTU size to a minimum of 1,600 bytes.

n Geneve is an extensible protocol. The MTU size might increase with future capabilities. While 1,600 is sufficient, an MTU size of 1,700 bytes provides more room for increasing the Geneve MTU size without the need to change the MTU size of the physical infrastructure.

When adjusting the MTU packet size, you must also configure the entire network path (VMkernel network adapters, virtual switches, physical switches, and routers) to support the same MTU packet size.

Networking for Multiple Availability Zones

Specific requirements for the physical data center network exist for a topology with multiple availability zones. These requirements extend those for an environment with a single availability zone.

Table 2-68. Physical Network Requirements for Multiple Availability Zones

Component Requirement

MTU n VLANs that are stretched between availability zones must meet the same requirements as the VLANs for intra-zone connection including the MTU size.

n The MTU value must be consistent end-to-end, including components on the inter-zone networking path.

n Set the MTU for management VLANs and SVIs to 1,500 bytes.

n Set the MTU for vSphere vMotion, vSAN, NFS, uplinks, host overlay, and edge overlay VLANs and SVIs to 9,000 bytes.

Layer 3 gateway availability For VLANs that are stretched between availability zones, configure a data center-provided method, for example, VRRP or HSRP, to fail over the Layer 3 gateway between availability zones.

DHCP availability For VLANs that are stretched between availability zones, provide high availability for the DHCP server so that a failover operation of a single availability zone will not impact DHCP availability.

BGP routing Each availability zone data center must have its own Autonomous System Number (ASN).

Ingress and egress traffic n For VLANs that are stretched between availability zones, traffic flows in and out of a single zone. Local egress is not supported.

n For VLANs that are not stretched between availability zones, traffic flows in and out of the zone where the VLAN is located.

n For NSX-T virtual network segments that are stretched between regions, traffic flows in and out of a single availability zone. Local egress is not supported.

Latency n Maximum network latency between NSX-T Managers is 10 ms.

n Maximum network latency between the NSX-T Manager cluster and transport nodes is 150 ms.

Table 2-69. Design Decisions on the Physical Network for Multiple Availability Zones for NSX-T Data Center

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-009 Set the MTU size to at least 1,700 bytes (recommended 9,000 bytes for jumbo frames) on physical inter-availability zone networking components which are part of the networking path between availability zones for the following traffic types.

n Overlay (Geneve)

n vSAN

n vSphere vMotion

n NFS

n Improves traffic throughput.

n Geneve packets are tagged as do not fragment.

n For optimal performance, provides a consistent MTU size across the environment.

n Geneve is an extensible protocol. The MTU size might increase with future capabilities. While 1,600 is sufficient, an MTU size of 1,700 bytes provides more room for increasing the Geneve MTU size without the need to change the MTU size of the physical infrastructure.

When adjusting the MTU packet size, you must also configure the entire network path (VMkernel ports, virtual switches, physical switches, and routers) to support the same MTU packet size.

In multi-AZ deployments, the MTU must be configured on the entire network path between AZs.

SDDC-MGMT-VI-SDN-010 Configure VRRP, HSRP, or another Layer 3 gateway availability method for these networks.

n Management

n Edge Overlay

Ensures that the VLANs that are stretched between availability zones are connected to a highly available gateway if a failure of an availability zone occurs. Otherwise, a failure in the Layer 3 gateway causes disruption in the traffic in the SDN setup.

Requires configuration of a high availability technology for the Layer 3 gateways in the data center.
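To verify that the inter-availability zone path really honors the configured MTU, you can send do-not-fragment pings sized to the full frame, for example with a helper like the following sketch. It assumes a Linux iputils ping ("-M do" sets the DF bit) and a reachable VMkernel address in the other availability zone, which is a placeholder here; on ESXi itself you would use vmkping with its equivalent options instead.

```python
# Hedged helper for spot-checking that a network path carries 9,000-byte frames:
# an IPv4 packet of 9,000 bytes leaves 9,000 - 20 (IP) - 8 (ICMP) = 8,972 bytes
# of ICMP payload.
import subprocess

def path_supports_mtu(target_ip: str, mtu: int = 9000) -> bool:
    payload = mtu - 20 - 8  # subtract IPv4 and ICMP headers
    result = subprocess.run(
        ["ping", "-M", "do", "-c", "3", "-s", str(payload), target_ip],
        capture_output=True,
        text=True,
    )
    return result.returncode == 0

# Example: a vSAN VMkernel address in Availability Zone 2 (placeholder address).
print(path_supports_mtu("172.16.23.11"))
```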

NSX-T Manager Deployment Specification and Network Design for the Management Domain

You determine the size of the compute resources, high availability implementation, and patching and upgrade support for the NSX-T Manager instance for the management domain according to the design objectives and aggregated requirements of the management components of the SDDC.

Deployment Model for NSX-T Manager for the Management Domain

As a best practice, you must deploy a highly available NSX-T Manager instance so that the NSX-T central control plane can continue propagating configuration to the transport nodes. You also select an NSX-T Manager appliance size according to the number of ESXi hosts required to run the SDDC management components.

Deployment Type

You can deploy NSX-T Manager in a one-node configuration or as a cluster for high availability.

Table 2-70. Design Decisions on the NSX-T Manager Deployment Type

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-011 Deploy three NSX-T Manager nodes for the management domain in the first cluster in the domain for configuring and managing the network services for SDDC management components.

SDDC management components can be placed on isolated virtual networks, using load balancing, logical switching, dynamic routing, and logical firewalls services.

n You must turn on vSphere HA in the first cluster in the management domain.

n The first cluster in the management domain requires four physical ESXi hosts for vSphere HA and for high availability of the NSX-T Manager cluster.

Sizing Compute and Storage Resources for NSX-T Manager

When you size the resources for NSX-T management components, consider the compute and storage requirements for each component, and the number of nodes per component type.

Table 2-71. NSX-T Manager Resource Specification

Appliance Size vCPU Memory (GB) Storage (GB) Scale

Extra-Small 2 8 300 Cloud Service Manager only

Small 4 16 300 Proof of concept

Medium 6 24 300 Up to 64 ESXi hosts

Large 12 48 300 More than 64 ESXi hosts

Table 2-72. Design Decisions on Sizing Resources for NSX-T Manager

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-012

Deploy each node in the NSX-T Manager cluster for the management domain as a medium-size appliance or larger.

A medium-size appliance is sufficient for providing network services to the SDDC management components.

If you extend the management domain, increasing the size of the NSX-T Manager appliances might be required.
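If you script domain expansion, the sizing rule in Table 2-71 reduces to a simple lookup, as in the following sketch. The Extra-Small and Small sizes are excluded because they are not intended for production use.

```python
# Resource figures from Table 2-71, keyed by appliance size.
NSX_MANAGER_SIZES = {
    "Medium": {"vcpu": 6, "memory_gb": 24, "storage_gb": 300, "max_hosts": 64},
    "Large": {"vcpu": 12, "memory_gb": 48, "storage_gb": 300, "max_hosts": None},
}

def nsx_manager_size(esxi_host_count: int) -> str:
    """Pick the smallest production appliance size that covers the host count."""
    limit = NSX_MANAGER_SIZES["Medium"]["max_hosts"]
    return "Medium" if esxi_host_count <= limit else "Large"

print(nsx_manager_size(4))    # Medium: the four-host first cluster in this design
print(nsx_manager_size(120))  # Large
```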

High Availability of NSX-T Manager in a Single Region

The NSX-T Manager cluster runs on the first cluster in the management domain. vSphere HA protects the NSX-T Manager appliances by restarting an NSX-T Manager appliance on a different ESXi host if a primary ESXi host failure occurs.

Table 2-73. Design Decisions on the High Availability Configuration for NSX-T Manager

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-013 Create a virtual IP (VIP) address for the NSX-T Manager cluster for the management domain.

Provides high availability of the user interface and API of NSX-T Manager.

n The VIP address feature provides high availability only. It does not load balance requests across the cluster.

n When using the VIP address feature, all NSX-T Manager nodes must be deployed on the same Layer 2 network.

SDDC-MGMT-VI-SDN-014 Apply VM-VM anti-affinity rules in vSphere Distributed Resource Scheduler (vSphere DRS) to the NSX-T Manager appliances.

Keeps the NSX-T Manager appliances running on different ESXi hosts for high availability.

n You must allocate at least four physical hosts so that the three NSX-T Manager appliances continue running if an ESXi host failure occurs.

n You must perform additional configuration for the anti-affinity rules.

SDDC-MGMT-VI-SDN-015 In vSphere HA, set the restart priority policy for each NSX-T Manager appliance to high.

n NSX-T Manager implements the control plane for virtual network segments too. vSphere HA restarts the NSX-T Manager appliances first so that other virtual machines that are being powered on or migrated by using vSphere vMotion while the control plane is offline lose connectivity only until the control plane quorum is re-established.

n Setting the restart priority to high reserves the highest priority for services that must be started before NSX-T Manager.

If the restart priority for another management appliance is set to highest, the connectivity delays for management appliances will be longer.
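After the cluster and its VIP are formed, you can verify this configuration by polling the cluster status through the VIP, for example with a sketch like the following. The VIP FQDN and credentials are placeholders, and the response field names should be checked against the NSX-T API reference for your release.

```python
# Hedged sketch: poll the NSX-T Manager cluster through its VIP and report whether
# the management cluster is stable. The /api/v1/cluster/status endpoint is part of
# the NSX-T Manager API; field names can differ slightly between releases.
import requests

NSX_VIP = "https://sfo-m01-nsx01.sfo.rainpole.io"  # placeholder cluster VIP FQDN

def cluster_overall_status(session: requests.Session) -> str:
    response = session.get(f"{NSX_VIP}/api/v1/cluster/status", timeout=30)
    response.raise_for_status()
    body = response.json()
    # Inspect the payload for your release before relying on these exact keys.
    return body.get("mgmt_cluster_status", {}).get("status", "UNKNOWN")

session = requests.Session()
session.auth = ("admin", "password")   # placeholder credentials
session.verify = False                 # lab only; validate certificates in production

print(cluster_overall_status(session))  # expect "STABLE" when all three nodes are healthy
```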

High Availability of NSX-T Manager in Multiple Availability Zones

In an environment with multiple availability zones, the NSX-T Manager cluster runs in Availability Zone 1. If a failure in Availability Zone 1 occurs, the NSX-T Manager cluster is failed over to Availability Zone 2.

Table 2-74. Design Decisions on the High Availability Configuration for NSX-T Manager for Multiple Availability Zones

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-016 When using two availability zones, create a virtual machine group for the NSX-T Manager appliances.

Ensures that the NSX-T Manager appliances can be managed as a group.

You must add virtual machines to the allocated group manually.

SDDC-MGMT-VI-SDN-017 When using two availability zones, create a should-run VM-Host affinity rule to run the group of NSX-T Manager appliances on the group of hosts in Availability Zone 1.

n Ensures that the NSX-T Manager appliances are located only in Availability Zone 1.

n Failover of the NSX-T Manager appliances to Availability Zone 2 occurs only if a failure in Availability Zone 1 occurs.

n Splitting NSX-T Manager appliances across availability zones does not improve availability.

Creating the rules is a manual task and adds administrative overhead.
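A minimal pyVmomi sketch of decisions SDDC-MGMT-VI-SDN-016 and SDDC-MGMT-VI-SDN-017 is shown below. It assumes that a host group for the Availability Zone 1 hosts already exists, and the vCenter address, credentials, cluster, virtual machine, and group names are placeholders.

```python
# Hedged pyVmomi sketch: create a VM group for the NSX-T Manager appliances and a
# should-run VM-Host rule that keeps them in Availability Zone 1.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_by_name(content, vimtype, name):
    """Return the first inventory object of the given type with the given name."""
    view = content.viewManager.CreateContainerView(content.rootFolder, [vimtype], True)
    try:
        return next(obj for obj in view.view if obj.name == name)
    finally:
        view.Destroy()

ctx = ssl._create_unverified_context()  # lab only
si = SmartConnect(host="vcenter.example.local", user="user", pwd="password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    cluster = find_by_name(content, vim.ClusterComputeResource, "sfo-m01-cl01")
    vms = [find_by_name(content, vim.VirtualMachine, n)
           for n in ("sfo-m01-nsx01a", "sfo-m01-nsx01b", "sfo-m01-nsx01c")]

    vm_group = vim.cluster.GroupSpec(
        operation="add",
        info=vim.cluster.VmGroup(name="nsx-manager-appliances", vm=vms))

    rule = vim.cluster.RuleSpec(
        operation="add",
        info=vim.cluster.VmHostRuleInfo(
            name="nsx-managers-should-run-az1",
            enabled=True,
            mandatory=False,                      # "should run", not "must run"
            vmGroupName="nsx-manager-appliances",
            affineHostGroupName="az1-hosts"))     # assumed pre-existing host group

    spec = vim.cluster.ConfigSpecEx(groupSpec=[vm_group], rulesSpec=[rule])
    task = cluster.ReconfigureComputeResource_Task(spec, modify=True)
    # Monitor the task in real automation before relying on the rule.
finally:
    Disconnect(si)
```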

Network Design for NSX-T Manager for the Management Domain

For traffic segmentation, you place NSX-T Manager for the management domain on the management VLAN, and decide on the IP addressing scheme and name resolution for optimal support for the SDDC management components and host management.

Network Segment

For secure access to the ESXi hosts and vCenter Server, in each region, NSX-T Manager for the management domain is connected to the management VLAN segment.

Table 2-75. Design Decisions on the Network Segment for NSX-T Manager

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-018

Place the appliances of the NSX-T Manager cluster on the management VLAN network, that is, sfo01-m01-cl01-vds01-pg-mgmt.

n Provides direct secure connection to the ESXi hosts and vCenter Server for edge node management and distributed network services.

n Reduces the number of required VLANs because a single VLAN can be allocated to both vCenter Server and NSX-T.

None.

IP Addressing Scheme

You can assign the IP addresses of the NSX-T Manager appliances by using DHCP or statically according to the network configuration in your environment.

Table 2-76. Design Decisions on IP Addressing for NSX-T Manager

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-019

Allocate a statically assigned IP address and host name to the nodes of the NSX-T Manager cluster.

Ensures stability across the SDDC, makes it simpler to maintain and track, and to implement a DNS configuration.

Requires precise IP address management.

Name Resolution

NSX-T Manager node name resolution, including the internal virtual IP addresses (VIPs), uses a region-specific suffix, that is, sfo.rainpole.io for Region A.

Table 2-77. Design Decisions on Name Resolution for NSX-T Manager

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-020

Configure forward and reverse DNS records for the nodes of the NSX-T Manager cluster for the management domain, assigning the record to the child domain in the region.

The NSX-T Manager nodes and VIP address are accessible by using fully qualified domain names instead of by using IP addresses only.

You must provide DNS records for the NSX-T Manager nodes for the management domain in each region.
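A quick way to confirm the records exist before deployment is a dnspython check such as the following sketch. The node names and addresses are placeholders that follow the region-specific child domain convention described above.

```python
# Hedged sketch using dnspython to confirm the forward (A) and reverse (PTR)
# records required by decisions SDDC-MGMT-VI-SDN-019 and SDDC-MGMT-VI-SDN-020.
import dns.resolver
import dns.reversename

# Placeholder node names and addresses on the management subnet.
NSX_NODES = {
    "sfo-m01-nsx01a.sfo.rainpole.io": "172.16.11.72",
    "sfo-m01-nsx01b.sfo.rainpole.io": "172.16.11.73",
    "sfo-m01-nsx01c.sfo.rainpole.io": "172.16.11.74",
}

for fqdn, expected_ip in NSX_NODES.items():
    forward = {rdata.address for rdata in dns.resolver.resolve(fqdn, "A")}
    reverse_name = dns.reversename.from_address(expected_ip)
    reverse = {str(rdata.target).rstrip(".")
               for rdata in dns.resolver.resolve(reverse_name, "PTR")}
    print(fqdn, expected_ip in forward, fqdn in reverse)
```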

Time Synchronization

Time synchronization provided by the Network Time Protocol (NTP) is important to ensure that all components within the SDDC are synchronized to the same time source.

Table 2-78. Design Decisions on Time Synchronization for NSX-T Manager

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-021 Configure NTP on each NSX-T Manager appliance.

NSX-T Manager depends on time synchronization.

None.

NSX-T Edge Deployment Specification and Network Design for the Management Domain

Following the principles of this design and of each product, you deploy, configure, and connect the NSX-T Edge nodes to support networks within the software-defined network.

Deployment Specification of the NSX-T Edge Nodes for the Management Domain

You determine the size of the compute resources, high availability implementation, and patching and upgrade support for the NSX-T Edge appliances for the management domain according to the design objectives and aggregated requirements of the management components of the SDDC.

Deployment Model for the NSX-T Edge Nodes for the Management Domain

For NSX-T Edge nodes, you determine the form factor and the number and placement of appliances according to the requirements for network services in the management domain.

An NSX-T Edge node is an appliance that provides centralized networking services which cannot be distributed to hypervisors, such as load balancing, NAT, VPN, and physical network uplinks. Some services, such as Tier-0 gateways, are limited to a single instance per NSX-T Edge node. However, most services can coexist in these nodes.

NSX-T Edge nodes are grouped in one or more edge clusters, representing pools of capacity for NSX-T Data Center services.

Form Factors of NSX-T Edge Nodes

An NSX-T Edge node can be deployed as a virtual appliance, or installed on bare metal hardware. The edge node on bare-metal hardware can have better performance capabilities at the expense of more difficult deployment and limited deployment topology use cases.

Design Component Edge Virtual Appliance Edge Bare Metal Appliance Considerations

Ease of deployment and ease of expansion

↑↑ ↓ n You can deploy NSX-T Edge virtual appliances from NSX-T Manager, SDDC Manager, or directly from OVA

n Bare metal appliances have certain hardware compatibility requirements, and must be manually deployed and connected to the environment.

Ease of upgrade and life cycle management

o ↓ n NSX-T Manager manages the life cycle of the associated NSX-T Edge appliances.

n NSX-T Edge nodes on bare metal hardware require individual hardware life cycle management of firmware, drivers, and so on.

Manageability ↑↑ o NSX-T Edge nodes on bare metal hardware require individual monitoring and management of the hardware, such as failures, firmware, and so on.

Availability and recoverability

↑↑ ↓ n If a failure of the hardware hosting the NSX-T Edge virtual appliance occurs, vSphere HA automatically recovers the edge appliance on another host.

n If a failure of an NSX-T Edge node on bare metal hardware occurs, you must redeploy it on new hardware.

Capability o o NSX-T Edge virtual appliances and NSX-T Edge nodes on bare metal hardware provide the same network services.

Design agility ↑ ↓ You can use NSX-T Edge nodes on bare metal hardware only in an SDDC with a single availability zone in the management domain.

Performance ↑ ↑↑ n In certain use cases, an NSX-T Edge node on bare metal hardware can support more raw performance and lower latency.

n The performance of an NSX-T Edge virtual appliance depends on the underlying ESXi host. To improve performance, migrate the appliance to a host that has higher performance.

Capacity o ↑ n In rare use cases, an NSX-T Edge node on bare metal hardware can support more raw capacity.

n NSX-T Edge virtual appliance can easily be scaled up and down as needed.

Sizing Compute and Storage Resources for NSX-T Edge Nodes

When you size the resources for NSX-T components, consider the compute and storage requirements for each component, and the number of nodes per component type.

Table 2-79. Resource Specifications of NSX-T Edge Nodes per Form Factor

Form Factor Appliance Size CPU or vCPU Memory (GB) Storage (GB)

NSX-T Edge on bare metal hardware

Minimum requirements

- 8 CPU 8 200

Recommended requirements

- 24 CPU 256 500

NSX-T Edge virtual appliance Small

Use for proof of concept only

2 vCPU 4 200

Medium 4 vCPU 8 200

Large

Use in large environments that require load balancers

8 vCPU 32 200

Extra Large 16 vCPU 64 200

Table 2-80. Design Decisions on the Form Factor and Sizing for the NSX-T Edge Nodes

Design ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-022 Use medium-size NSX-T Edge virtual appliances.

n The medium-size appliance provides the performance characteristics for supporting the SDDC management components in the SDDC.

n The deployment and life cycle management of virtual edge nodes is simpler.

n Edge nodes in virtual machine form factor can be migrated between physical locations to support advanced deployment topologies such as multiple availability zones.

None.

High Availability Design for the NSX-T Edge Nodes for the Management Domain

The NSX-T Edge cluster runs on the first cluster in the management domain. vSphere HA and vSphere DRS protect the NSX-T Edge appliances. In an environment with multiple availability zones, you use vSphere DRS to configure the first availability zone as the main location for the NSX-T Edge nodes.

NSX-T Edge Cluster Design

The NSX-T Edge cluster is a logical grouping of NSX-T Edge transport nodes. These NSX-T Edge appliances run on a vSphere cluster, and provide north-south routing and network services for the management workloads. You can dedicate this vSphere cluster to edge appliances only, or share it with the other management appliances.

First vSphere Cluster in the Management Domain

The first cluster in the management domain contains all components for managing the SDDC. See the vSphere Cluster Design for the Management Domain.

Dedicated Edge vSphere Cluster

A dedicated vSphere Edge cluster contains only NSX-T Edge appliances.

Table 2-81. Design Decisions on the NSX-T Edge Cluster Configuration

Decision ID Design Decision Design Justification Design Implications

SDDC-MGMT-VI-SDN-023 n Deploy the NSX-T Edge virtual appliances.

n Do not configure a dedicated vSphere cluster for edge nodes.

n Because of the prescriptive nature of the management domain, resource contention from unknown workloads is minimized.

n Simplifies configuration and minimizes the number of ESXi hosts required for initial deployment.

n Keeps the management components located in the same domain and cluster, isolated from tenant workloads.

None.

SDDC-MGMT-VI-SDN-024 Deploy two NSX-T Edge appliances in an edge cluster in the first cluster in the management domain.

Creates the NSX-T Edge cluster for satisfying the requirements for availability and scale.

If you add more NSX-T Edge appliances, you must adjust the memory reservation on the edge resource pool.

SDDC-MGMT-VI-SDN-025 Apply VM-VM anti-affinity rules for vSphere DRS to the virtual machines of the NSX-T Edge cluster.

Keeps the NSX-T Edge nodes running on different ESXi hosts for high availability.

You must perform additional tasks to set up the anti-affinity rules.

SDDC-MGMT-VI-SDN-026 In vSphere HA, set the restart priority policy for each NSX-T Edge appliance to high.

n The NSX-T Edge nodes are part of the north-south data path for overlay segments for SDDC management components. vSphere HA restarts the NSX-T Edge appliances first so that other virtual machines that are being powered on or migrated by using vSphere vMotion while the edge nodes are offline lose connectivity only for a short time.

n Setting the restart priority to high reserves the highest priority for services that must be started before the NSX-T Edge nodes.

If the restart priority for another management appliance is set to highest, the connectivity delays for management appliances will be longer.

SDDC-MGMT-VI-SDN-027 Configure all edge nodes as transport nodes.

Enables the participation of edge nodes in the overlay network for delivery of services to the SDDC management components such as routing and load balancing.

None.

SDDC-MGMT-VI-SDN-028 Create an NSX-T Edge cluster with the default Bidirectional Forwarding Detection (BFD) configuration between the NSX-T Edge nodes in the cluster.

n Satisfies the availability requirements by default.

n Edge nodes must remain available to create services such as NAT, routing to physical networks, and load balancing.

None.

High Availability for Multiple Availability Zones

NSX-T Edge nodes connect to top of rack switches in each data center to support northbound uplinks and route peering for SDN network advertisement. This connectivity is specific to the top of rack switches that the edge node is connected to.

If an outage of an availability zone occurs, vSphere HA fails over the edge appliances to the other availability zone. Availability Zone 2 must provide an analog of the network infrastructure that the edge node is connected to in Availability Zone 1.

To support failover of the NSX-T Edge appliances, the following networks are stretched from Availability Zone 1 to Availability Zone 2. For information about the networks in a management domain with multiple availability zones, see Physical Network Infrastructure Design for NSX-T Data Center for the Management Domain.

Table 2-82. Networks That Are Stretched Across Availability Zones

Function VLAN IP Range HA Layer 3 Gateway

Management for Availability Zone 1 1611 172.16.11.0/24 ✓

Uplink01 2711 172.27.11.0/24 x

Uplink02 2712 172.27.12.0/24 x

Edge overlay 2713 172.27.13.0/24 ✓

Management for Availability Zone 2 1621 172.16.21.0/24 ✓

Table 2-83. Design Decisions on the High Availability Configuration of the NSX-T Edge Nodes for Multiple Availability Zones

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-029 When using two availability zones, create a virtual machine group for the NSX-T Edge appliances.

Ensures that NSX-T Edge appliances can be managed as a group.

You must add virtual machines to the allocated group manually to ensure that they are not powered-on in or migrated to the wrong availability zone.

SDDC-MGMT-VI-SDN-030 When using two availability zones, create a should-run VM-Host affinity rule to run the group of NSX-T Edge appliances on the group of hosts in Availability Zone 1.

n Ensures that the NSX-T Edge appliances are located only in Availability Zone 1.

n Failover of the NSX-T Edge appliances to Availability Zone 2 occurs only if a failure in Availability Zone 1 occurs.

n Intra-tier routing maintains traffic flow within a single availability zone.

Creating the rules is a manual task and adds administrative overhead.

Network Design for the NSX-T Edge Nodes for the Management Domain

You implement an NSX-T Edge configuration with a single N-VDS. You connect the uplink network interfaces of the edge appliance to VLAN trunk port groups that are connected to particular physical NICs on the host.

The NSX-T Edge node contains an NSX-T managed virtual switch called an N-VDS. This internal N-VDS is used to define traffic flow through the interfaces of the edge node. An N-VDS can be connected to one or more interfaces. Interfaces cannot be shared between N-VDS instances.

Figure 2-13. NSX-T Edge Network Configuration

The figure shows the NSX-T Edge node running on an ESXi host that connects to the top of rack switches through vmnic0 and vmnic1. The eth0 management interface of the edge node connects to the sfo01-m01-cl01-vds01-pg-mgmt port group, while fp-eth0 and fp-eth1 connect to the sfo01-m01-cl01-vds01-pg-uplink01 and sfo01-m01-cl01-vds01-pg-uplink02 VLAN trunk port groups on the sfo-m01-cl01-vds01 distributed switch. The fp-eth2 interface is unused. Inside the edge node, the sfo-m01-cl01-nvds01 N-VDS carries the two VLAN uplinks, the overlay segments, and two TEP interfaces.

Table 2-84. Design Decisions on the Network Configuration of the NSX-T Edge Appliances

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-031 Connect the management interface eth0 of each NSX-T Edge node to the management VLAN.

Provides connection to the NSX-T Manager cluster.

None.

SDDC-MGMT-VI-SDN-032 n Connect the fp-eth0 interface of each NSX-T Edge appliance to a VLAN trunk port group pinned to physical NIC 0 of the host.

n Connect the fp-eth1 interface of each NSX-T Edge appliance to a VLAN trunk port group pinned to physical NIC 1 of the host.

n Leave the fp-eth2 interface of each NSX-T Edge appliance unused.

n Because VLAN trunk port groups pass traffic for all VLANs, VLAN tagging can occur in the NSX-T Edge node itself for easy post-deployment configuration.

n By using two separate VLAN trunk port groups, you can direct traffic from the edge node to a particular host network interface and top of rack switch as needed.

None.

SDDC-MGMT-VI-SDN-033 Use a single N-VDS in the NSX-T Edge nodes.

n Simplifies deployment of the edge nodes.

n The same N-VDS switch design can be used regardless of edge form factor.

n Supports multiple TEP interfaces in the edge node.

n vSphere Distributed Switch is not supported in the edge node.

None.

Uplink Policy Design for the NSX-T Edge Nodes for the Management Domain

By using uplink profiles, you can apply consistent policy on the uplinks of the N-VDS instance on each NSX-T Edge appliance. The uplink profile for the NSX-T Edge appliances supports the VLANs for connection to physical Layer 3 devices.

A transport node can participate in an overlay and VLAN network. Uplink profiles define policies for the links from the NSX-T Edge transport nodes to top of rack switches. Uplink profiles are containers for the properties or capabilities for the network adapters. Uplink profiles are applied to the N-VDS of the edge node.

Uplink profiles can use either load balance source or failover order teaming. If using load balance source, multiple uplinks can be active. If using failover order, only a single uplink can be active.

Teaming can be configured by using the default teaming policy or a user-defined named teaming policy. You can use named teaming policies to pin traffic segments to designated edge uplinks.

Architecture and Design for the Management Domain

VMware, Inc. 83

Page 84: Modified on 23 JUN 2020 VMware Validated Design 6.0 VMware Cloud … · 2020-06-25 · The management domain detailed design considers components for physical infrastructure, virtual

Table 2-85. Design Decisions on the NSX-T Edge Uplink Policy

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-034 Create one uplink profile for the edge node with three teaming policies.

n Default teaming policy of load balance source with both uplinks, uplink-1 and uplink-2, active.

n Named teaming policy of failover order with a single active uplink uplink-1 without standby uplinks.

n Named teaming policy of failover order with a single active uplink uplink-2 without standby uplinks.

n An NSX-T Edge node that uses a single N-VDS can have only one uplink profile.

n For increased resiliency and performance, supports the concurrent use of both edge uplinks through both physical NICs on the ESXi hosts.

n The default teaming policy increases overlay performance and availability by using multiple TEPs, and balancing of overlay traffic.

n By using named teaming policies, you can connect an edge uplink to a specific host uplink and from there to a specific top of rack switch in the data center.

n Enables ECMP in each availability zone because the NSX-T Edge nodes can uplink to the physical network over two different VLANs.

You can use this policy only with ESXi hosts. Edge virtual machines must use the failover order teaming policy.

SDDC-MGMT-VI-SDN-035 Use a dedicated VLAN for edge overlay that is different from the host overlay VLAN.

Edge overlay network must be isolated from the host overlay network to protect the host overlay from edge-generated overlay traffic.

n You must have routing between the VLANs for edge overlay and host overlay.

n You must allocate another VLAN in the data center infrastructure for edge overlay.
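The following sketch shows how the uplink profile from SDDC-MGMT-VI-SDN-034 and the edge overlay transport VLAN from SDDC-MGMT-VI-SDN-035 might be expressed against the NSX-T Manager API. The field names follow the UplinkHostSwitchProfile schema, but the VIP, credentials, display name, and uplink names are placeholders; verify the payload against the API reference for your NSX-T release.

```python
# Hedged sketch: create the edge uplink profile through the NSX-T Manager API
# (POST /api/v1/host-switch-profiles).
import requests

NSX_VIP = "https://sfo-m01-nsx01.sfo.rainpole.io"  # placeholder cluster VIP FQDN

def uplink(name):
    return {"uplink_name": name, "uplink_type": "PNIC"}

profile = {
    "resource_type": "UplinkHostSwitchProfile",
    "display_name": "sfo-m01-edge-uplink-profile",
    "transport_vlan": 2713,  # edge overlay VLAN in this design
    "teaming": {  # default teaming: load balance source over both uplinks
        "policy": "LOADBALANCE_SRCID",
        "active_list": [uplink("uplink-1"), uplink("uplink-2")],
    },
    "named_teamings": [  # pin a segment to one uplink and top of rack switch
        {"name": "uplink-1-only", "policy": "FAILOVER_ORDER",
         "active_list": [uplink("uplink-1")]},
        {"name": "uplink-2-only", "policy": "FAILOVER_ORDER",
         "active_list": [uplink("uplink-2")]},
    ],
}

response = requests.post(
    f"{NSX_VIP}/api/v1/host-switch-profiles",
    json=profile,
    auth=("admin", "password"),  # placeholder credentials
    verify=False,                # lab only; validate certificates in production
    timeout=30,
)
response.raise_for_status()
print(response.json().get("id"))
```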

Life Cycle Management Design of NSX-T Data Center for the Management Domain

You decide on the life cycle management of the NSX-T Data Center components according to the amount of time and effort to perform a deployment, upgrade, or patch operation. You also consider the impact such an operation has on the management solutions that are connected to NSX-T Data Center for the management domain.

Life cycle management of NSX-T Data Center involves the process of applying patches, updates, or upgrades to the NSX-T Data Center appliances and hypervisor components. In a typical environment, you perform life cycle management by using the Upgrade Coordinator, which is a service in NSX-T Manager. When you implement a solution by using VMware Cloud Foundation, you use SDDC Manager for life cycle management, which adds capabilities such as automatic patching, upgrade, and product compatibility verification to the life cycle management process.

Table 2-86. Design Decisions on Life Cycle Management of NSX-T Data Center

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-036 Use SDDC Manager to perform the life cycle management of NSX-T Manager and related components in the management domain.

Because the deployment scope of SDDC Manager covers the full SDDC stack, SDDC Manager performs patching, update, or upgrade of the management domain as a single process.

The operations team must understand and be aware of the impact of a patch, update, or upgrade operation by using SDDC Manager.

NSX-T Services Design for the Management Domain

NSX-T Edge clusters are pools of capacity for NSX-T service router and load balancing functions.

North-South Routing

The routing design considers different levels of routing in the environment, such as number and type of NSX-T gateways, dynamic routing protocol, and so on. At each level, you apply a set of principles for designing a scalable routing solution.

Routing can be defined in the following directions:

n North-south traffic is traffic leaving or entering the NSX-T domain, for example, a virtual machine on an overlay network communicating with an end-user device on the corporate network.

n East-west traffic is traffic that remains in the NSX-T domain, for example, two virtual machines on the same or different segments communicating with each other.

As traffic flows north-south, edge nodes can be configured to pass traffic in an active-standby or an active-active model, where active-active can scale up to 8 active nodes. NSX-T service routers (SRs) for north-south routing are configured in an active-active equal-cost multi-path (ECMP) mode that supports route failover of Tier-0 gateways in seconds.


Table 2-87. Features of Active-Active and Active-Standby SRs

Design Component Active-Active Active-Standby Comment

Bandwidth per node 0 0 Bandwidth per node is the same because it is independent of the Tier-0 gateway failover model.

Total aggregate bandwidth ↑↑↑↑ 0 n The active-active mode can support up to 8 NSX-T Edge nodes per northbound SR.

n The active-standby mode is limited to a single node.

Availability ↑ 0 With up to 8 active-active NSX-T Edge nodes, availability can be as high as N+7, while for the active-standby mode it is N+1.

Failover Time 0 0 Both are capable of sub-second failover with use of BFD.

Routing Protocol Support ↓ 0 The active-active mode requires BGP for ECMP failover.

Figure 2-14. Dynamic Routing in a Single Availability Zone

[Figure: In Region A, the Tier-0 and Tier-1 gateways run their service router (SR) components on the two NSX-T Edge nodes and their distributed router (DR) components on the edge nodes and the four ESXi transport nodes. The Tier-0 gateway peers with the top of rack switches over BGP and uses ECMP.]


Table 2-88. Design Decisions on the High Availability Mode of Tier-0 Gateways

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-037 Deploy an active-active Tier-0 gateway.

Supports ECMP north-south routing on all Edge nodes in the NSX-T Edge cluster.

Active-active Tier-0 gateways cannot provide stateful services such as NAT.

Table 2-89. Design Decisions on Edge Uplink Configuration for North-South Routing

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-038 To enable ECMP between the Tier-0 gateway and the Layer 3 devices (ToR switches or upstream devices), create two VLANs.

The ToR switches or upstream Layer 3 devices have an SVI on one of the two VLANs and each Edge node in the cluster has an interface on each VLAN.

Supports multiple equal-cost routes on the Tier-0 gateway and provides more resiliency and better bandwidth use in the network.

Additional VLANs are required.

SDDC-MGMT-VI-SDN-039 Assign a named teaming policy to the VLAN segments to the Layer 3 device pair.

Pins the VLAN traffic on each segment to its target Edge node interface. From there the traffic is directed to the host physical NIC that is connected to the target top of rack switch.

None.

SDDC-MGMT-VI-SDN-040 Create a VLAN transport zone for Edge uplink traffic.

Enables the configuration of VLAN segments on the N-VDS in the Edge nodes.

Additional VLAN transport zones are required if the edge nodes are not connected to the same top of rack switch pair.


Table 2-90. Design Decisions on Dynamic Routing

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-041 Use BGP as the dynamic routing protocol.

n Enables the dynamic routing by using NSX-T. NSX-T supports only BGP.

n SDDC architectures with multiple availability zones or multiple regions architectures require BGP.

In environments where BGP cannot be used, you must configure and manage static routes.

SDDC-MGMT-VI-SDN-042 Configure the BGP Keep Alive Timer to 4 seconds and the Hold Down Timer to 12 seconds between the top of rack switches and the Tier-0 gateway.

Balances timely failure detection between the top of rack switches and the Tier-0 gateway against overburdening the top of rack switches with keepalive traffic.

By using longer timers to detect if a router is not responding, the data about such a router remains in the routing table longer. As a result, the active router continues to send traffic to a router that is down.

SDDC-MGMT-VI-SDN-043 Do not enable Graceful Restart between BGP neighbors.

Avoids loss of traffic.

On the Tier-0 gateway, BGP peers from all the gateways are always active. On a failover, the Graceful Restart capability increases the time a remote neighbor takes to select an alternate Tier-0 gateway. As a result, BFD-based convergence is delayed.

None.

SDDC-MGMT-VI-SDN-044 Enable helper mode for Graceful Restart mode between BGP neighbors.

Avoids loss of traffic.

During a restart of an upstream router that supports graceful restart, helper mode maintains the forwarding table so that packets continue to be forwarded to the restarting neighbor instead of being dropped when the BGP timers expire.

None.

SDDC-MGMT-VI-SDN-045 Enable Inter-SR iBGP routing.

If all northbound eBGP sessions on an NSX-T Edge node are down, north-south traffic continues to flow by routing the traffic through a different edge node over the inter-SR iBGP session.

None.
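The BGP behavior described in the preceding decisions can also be applied programmatically. The following Python sketch is a hedged illustration against the NSX-T Policy API; the manager address, credentials, object identifiers, and field names are assumptions that must be validated against the NSX-T Data Center API reference for your version.

    import requests

    NSX_MANAGER = "https://sfo-m01-nsx01.rainpole.io"   # assumption: NSX-T Manager VIP
    AUTH = ("admin", "nsx_admin_password")              # assumption: admin credentials
    TIER0 = "sfo-m01-ec01-t0-gw01"                      # assumption: Tier-0 gateway ID

    # Hedged sketch: Tier-0 BGP configuration with graceful restart in helper-only mode
    # and inter-SR iBGP enabled, as called for by SDDC-MGMT-VI-SDN-043 to 045.
    bgp_config = {
        "enabled": True,
        "inter_sr_ibgp": True,
        "graceful_restart_config": {"mode": "HELPER_ONLY"},
    }
    requests.patch(
        f"{NSX_MANAGER}/policy/api/v1/infra/tier-0s/{TIER0}/locale-services/default/bgp",
        json=bgp_config, auth=AUTH, verify=False,
    )

    # Per-neighbor timers from SDDC-MGMT-VI-SDN-042 (one call per top of rack switch neighbor).
    neighbor = {
        "neighbor_address": "172.16.17.1",   # assumption: ToR switch peer IP
        "remote_as_num": "65001",            # assumption: ToR AS number
        "keep_alive_time": 4,
        "hold_down_time": 12,
    }
    requests.patch(
        f"{NSX_MANAGER}/policy/api/v1/infra/tier-0s/{TIER0}/locale-services/default/bgp/neighbors/tor-a",
        json=neighbor, auth=AUTH, verify=False,
    )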


Intra-SDN Routing

Gateways are needed to provide routing between logical segments created in the NSX-T based SDN. Logical segments can be connected directly to a Tier-0 or Tier-1 gateway.

Table 2-91. Design Decisions on Tier-1 Gateway Configuration

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-046 Deploy a Tier-1 gateway and connect it to the Tier-0 gateway.

Creates a two-tier routing architecture.

A Tier-1 gateway can only be connected to a single Tier-0 gateway.

In cases where multiple Tier-0 gateways are required, you must create multiple Tier-1 gateways.

SDDC-MGMT-VI-SDN-047 Deploy a Tier-1 gateway to the NSX-T Edge cluster.

Enables stateful services, such as load balancers and NAT, for SDDC management components.

Because a Tier-1 gateway always works in active-standby mode, the gateway supports stateful services.

None.

SDDC-MGMT-VI-SDN-048 Deploy a Tier-1 gateway in non-preemptive failover mode.

Ensures that when a failed NSX-T Edge transport node comes back online, it does not preempt the active node and take over the gateway services, which would cause a short service outage.

None.

Dynamic Routing in Multiple Availability Zones

In an environment with multiple availability zones, plan for failover of the NSX-T Edge nodes and configure BGP so that traffic from the top of rack switches is directed to Availability Zone 1 unless a failure occurs in Availability Zone 1.


Figure 2-15. Dynamic Routing in Multiple Availability Zones

[Figure: The Tier-0 and Tier-1 gateways run their SR components on the two NSX-T Edge nodes and their DR components on the edge nodes and the ESXi transport nodes in both availability zones. The Tier-0 gateway uses ECMP and peers over BGP with the top of rack switches in Availability Zone 1, and over eBGP with local preference and AS-path prepend applied toward the top of rack switches in Availability Zone 2.]


Table 2-92. Design Decisions on North-South Routing for Multiple Availability Zones

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-049 When you have two availability zones, extend the uplink VLANs to the top of rack switches so that the VLANs are stretched between both availability zones.

Because the NSX-T Edge nodes can fail over between the availability zones, this configuration ensures uplink connectivity to the top of rack switches in both availability zones regardless of the zone in which the NSX-T Edge nodes are currently running.

You must configure a stretched Layer 2 network between the availability zones by using physical network infrastructure.

SDDC-MGMT-VI-SDN-050 When you have two availability zones, provide this SVI configuration on the top of rack switches or upstream Layer 3 devices.

n In Availability Zone 2, configure the top of rack switches or upstream Layer 3 devices with an SVI on each of the two uplink VLANs.

n Make the top of rack switch SVI in both availability zones part of a common stretched Layer 2 network between the availability zones.

Enables the communication of the NSX-T Edge nodes to both the top of rack switches in both availability zones over the same uplink VLANs.

You must configure a stretched Layer 2 network between the availability zones by using the physical network infrastructure.

SDDC-MGMT-VI-SDN-051 When you have two availability zones, provide this VLAN configuration.

n Use two VLANs to enable ECMP between the Tier-0 gateway and the Layer 3 devices (top of rack switches or upstream devices).

n The ToR switches or upstream Layer 3 devices have an SVI on one of the two VLANs and each NSX-T Edge node has an interface on each VLAN.

Supports multiple equal-cost routes on the Tier-0 gateway, and provides more resiliency and better bandwidth use in the network.

Extra VLANs are required.

Requires stretching the uplink VLANs between the availability zones.


SDDC-MGMT-VI-SDN-052 Create an IP prefix list that permits access to route advertisement by any network instead of using the default IP prefix list.

Used in a route map to prepend one or more autonomous system numbers to the AS path (AS-path prepend) for BGP neighbors in Availability Zone 2.

You must manually create an IP prefix list that is identical to the default one.

SDDC-MGMT-VI-SDN-053 Create a route map-out that contains the custom IP prefix list and an AS-path prepend value set to the Tier-0 local AS added twice.

n Used for configuring neighbor relationships with the Layer 3 devices in Availability Zone 2.

n Ensures that all ingress traffic passes through Availability Zone 1.

You must manually create the route map.

The two NSX-T Edge nodes will route north-south traffic through Availability Zone 2 only if the connection to their BGP neighbors in Availability Zone 1 is lost, for example, if a failure of the top of the rack switch pair or in the availability zone occurs.

SDDC-MGMT-VI-SDN-054 Create an IP prefix list that permits access to route advertisement by network 0.0.0.0/0 instead of using the default IP prefix list.

Used in a route map to configure local preference on the learned default route for BGP neighbors in Availability Zone 2.

You must manually create an IP prefix list that is identical to the default one.

SDDC-MGMT-VI-SDN-055 Apply a route map-in that contains the IP prefix list for the default route 0.0.0.0/0 and assigns a lower local preference, for example, 80, to the learned default route and a lower local preference, for example, 90, to any other learned routes.

n Used for configuring neighbor relationships with the Layer 3 devices in Availability Zone 2.

n Ensures that all egress traffic passes through Availability Zone 1.

You must manually create the route map.

The two NSX-T Edge nodes will route north-south traffic through Availability Zone 2 only if the connection to their BGP neighbors in Availability Zone 1 is lost, for example, if a failure of the top of the rack switch pair or in the availability zone occurs.

SDDC-MGMT-VI-SDN-056 Configure Availability Zone 2 neighbors to use the route maps as In and Out filters respectively.

Makes the path in and out of Availability Zone 2 less preferred because the AS path is longer. As a result, all traffic passes through Availability Zone 1.

The two NSX-T Edge nodes will route north-south traffic through Availability Zone 2 only if the connection to their BGP neighbors in Availability Zone 1 is lost, for example, if a failure of the top of the rack switch pair or in the availability zone occurs.
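The prefix lists and route maps in decisions SDDC-MGMT-VI-SDN-052 through SDDC-MGMT-VI-SDN-055 might be created through the NSX-T Policy API in a way similar to the following Python sketch. The object names, paths, field names, and AS number are assumptions for illustration only and must be checked against the NSX-T API reference.

    import requests

    NSX = "https://sfo-m01-nsx01.rainpole.io"   # assumption: NSX-T Manager VIP
    AUTH = ("admin", "nsx_admin_password")      # assumption: admin credentials
    T0 = "sfo-m01-ec01-t0-gw01"                 # assumption: Tier-0 gateway ID

    def patch(path, body):
        # Minimal helper around the Policy API (sketch only).
        requests.patch(f"{NSX}{path}", json=body, auth=AUTH, verify=False)

    # IP prefix list that permits any network (instead of using the default list).
    patch(f"/policy/api/v1/infra/tier-0s/{T0}/prefix-lists/any-prefix",
          {"prefixes": [{"network": "ANY", "action": "PERMIT"}]})   # assumption: "ANY" keyword

    # IP prefix list that matches only the default route 0.0.0.0/0.
    patch(f"/policy/api/v1/infra/tier-0s/{T0}/prefix-lists/default-route",
          {"prefixes": [{"network": "0.0.0.0/0", "action": "PERMIT"}]})

    # Route map-out: prepend the Tier-0 local AS twice so that AZ2 ingress is less preferred.
    patch(f"/policy/api/v1/infra/tier-0s/{T0}/route-maps/az2-out",
          {"entries": [{"prefix_list_matches": [f"/infra/tier-0s/{T0}/prefix-lists/any-prefix"],
                        "action": "PERMIT",
                        "set": {"as_path_prepend": "65000 65000"}}]})   # assumption: local AS 65000

    # Route map-in: lower local preference in AZ2 (80 for the default route, 90 for other
    # routes), so that AZ1 remains the preferred egress path.
    patch(f"/policy/api/v1/infra/tier-0s/{T0}/route-maps/az2-in",
          {"entries": [
              {"prefix_list_matches": [f"/infra/tier-0s/{T0}/prefix-lists/default-route"],
               "action": "PERMIT", "set": {"local_preference": 80}},
              {"prefix_list_matches": [f"/infra/tier-0s/{T0}/prefix-lists/any-prefix"],
               "action": "PERMIT", "set": {"local_preference": 90}},
          ]})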


Load Balancers

The logical load balancer in NSX-T Data Center offers high-availability service for applications and distributes the network traffic load among multiple servers.

Because it is a stateful service, the load balancer is instantiated in a Tier-1 gateway.

Table 2-93. Design Decisions on Load Balancer Configuration

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-057 Deploy a standalone Tier-1 gateway to support advanced stateful services such as load balancing for other management components.

Provides independence between north-south Tier-1 gateways to support advanced deployment scenarios.

You must add a separate Tier-1 gateway.

SDDC-MGMT-VI-SDN-058 Connect the standalone Tier-1 gateway to the cross-region virtual network.

You connect the Tier-1 gateway manually to the networks it provides services on.

For information on the virtual network segment configuration, see Virtual Network Segment Design for NSX-T for the Management Domain .

You must connect the gateway to each network that requires load balancing.

SDDC-MGMT-VI-SDN-059 Configure the standalone Tier-1 gateway with static routes to the gateways of the networks it is connected to.

Because the Tier-1 gateway is standalone, it does not autoconfigure its routes, and you must configure it manually with static routes.

You must configure the gateway for each network that requires load balancing.
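As a hedged illustration of decision SDDC-MGMT-VI-SDN-059, the following Python sketch adds a static default route to the standalone Tier-1 gateway. The gateway identifier, next-hop address, and API path are assumptions, not values prescribed by this design.

    import requests

    NSX = "https://sfo-m01-nsx01.rainpole.io"   # assumption: NSX-T Manager VIP
    AUTH = ("admin", "nsx_admin_password")      # assumption: admin credentials
    T1 = "sfo-m01-ec01-t1-lb01"                 # assumption: standalone Tier-1 gateway ID

    # Hedged sketch: default static route on the standalone Tier-1 gateway that points to
    # the gateway address of the virtual network segment it is connected to.
    static_route = {
        "network": "0.0.0.0/0",
        "next_hops": [{"ip_address": "192.168.11.1", "admin_distance": 1}],   # assumption: segment gateway
    }
    requests.patch(f"{NSX}/policy/api/v1/infra/tier-1s/{T1}/static-routes/default-route",
                   json=static_route, auth=AUTH, verify=False)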

Overlay Design for NSX-T Data Center for the Management Domain

As part of the overlay design, you determine the NSX-T Data Center configuration for handling traffic between management workloads. You determine the configuration of vSphere Distributed Switch and virtual segments on it, and of the transport zones.

This conceptual design for NSX-T provides the network virtualization design of the logical components that handle the data to and from tenant workloads in the environment.

ESXi Host Transport Nodes

An NSX-T transport node is a node that is capable of participating in an NSX-T overlay network. The management domain contains multiple ESXi hosts in a vSphere cluster to support management workloads. You register these ESXi hosts as NSX-T transport nodes so that networks and workloads on that host can use the capabilities of NSX-T Data Center. During the preparation process, the native vSphere Distributed Switch for the management domain is extended with NSX-T capabilities.


Table 2-94. Design Decisions on ESXi Host Transport Nodes

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-060 Enable all ESXi hosts in the management domain as NSX-T transport nodes.

Enables distributed routing, logical segments, and distributed firewall.

None.

SDDC-MGMT-VI-SDN-061 Configure each ESXi host as a transport node without using transport node profiles.

Enables the participation of ESXi hosts and the virtual machines on them in NSX-T overlay and VLAN networks.

Transport node profiles can only be applied at the cluster level. Because in an environment with multiple availability zones each availability zone is connected to a different set of VLANs, you cannot use a transport node profile.

You must configure each transport node with an uplink profile individually.

Virtual Switches

NSX-T segments are logically abstracted segments to which you can connect tenant workloads. A single segment is mapped to a unique Geneve segment that is distributed across the ESXi hosts in a transport zone. The segment supports line-rate switching in the ESXi host without the constraints of VLAN sprawl or spanning tree issues.

Consider the following limitations of distributed switches:

n Distributed switches are manageable only when the vCenter Server instance is available. You can consider vCenter Server a Tier-1 application.

n Distributed switches with NSX-T capabilities are manageable only when the vCenter Server instance and the NSX-T Manager cluster are available. You can consider vCenter Server and NSX-T Manager as Tier-1 applications.

n N-VDS instances are manageable only when the NSX-T Manager cluster is available. You can consider the NSX-T Manager cluster as a Tier-1 application.


Table 2-95. Design Decision on Virtual Switches for NSX-T Data Center

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-062 Use a vSphere Distributed Switch for the first cluster in the management domain that is enabled for NSX-T Data Center.

n Use the existing vSphere Distributed Switch.

n Provides NSX-T logical segment capabilities to support advanced use cases.

To use features such as distributed routing, tenant workloads must be connected to NSX-T segments.

Management occurs jointly from the vSphere Client to NSX-T Manager. However, you must perform all network monitoring in the NSX-T Manager user interface or another solution.

Configuration of the vSphere Distributed Switch with NSX-T

The first cluster in the management domain uses a single vSphere Distributed Switch with a configuration for system traffic types, NIC teaming, and MTU size. See vSphere Networking Design for the Management Domain.

To support uplink and overlay traffic for the NSX-T Edge nodes for the management domain, you must create several port groups on the vSphere Distributed Switch for the management domain. The VMkernel adapter for the Host TEP is connected to the host overlay VLAN, but does not require a dedicated port group on the distributed switch. The VMkernel network adapter for Host TEP is automatically created when you configure the ESXi host as a transport node.

NSX-T Edge appliances and the VMkernel adapter for the Host TEP must be connected to different VLANs and subnets. The VLAN IDs for the NSX-T Edge nodes are mapped to the VLAN trunk port groups sfo01-m01-cl01-vds01-pg-uplink01 and sfo01-m01-cl01-vds01-pg-uplink02 on the host.

Table 2-96. vSphere Distributed Switch Configuration for the Management Domain

Switch Name Type Function

Number of Physical NIC Ports Teaming Policy MTU

sfo-m01-cl01-vds01 vSphere Distributed Switch 7.0

n ESXi Management

n vSphere vMotion

n vSAN

n NFS

n Host Overlay

n Edge Uplinks and Overlay - VLAN Trunking

2 n Load balance source for the ESXi traffic

n Failover order for the edge uplinks

9000


Table 2-97. sfo-m01-cl01-vds01 Switch Configuration Per Physical NIC

vmnic Function Connected to

0 Uplink Top of rack switch 1

1 Uplink Top of rack switch 2

Table 2-98. Segments on sfo-m01-cl01-vds01 in a Single Availability Zone

Segment Name Type Purpose

sfo01-m01-cl01-vds01-pg-mgmt VLAN Management traffic

sfo01-m01-cl01-vds01-pg-vmotion VLAN vSphere vMotion traffic

sfo01-m01-cl01-vds01-pg-vsan VLAN vSAN traffic

sfo-m01-cl01-vds01-pg-uplink01 VLAN Trunk Edge node overlay and uplink traffic to the first top of rack switch

sfo-m01-cl01-vds01-pg-uplink02 VLAN Trunk Edge node overlay and uplink traffic to the second top of rack switch

sfo-m01-cl01-vds01-pg-nfs VLAN NFS traffic

auto created (Host TEP) - Host overlay

auto created (Host TEP) - Host overlay

auto created (Hyperbus) - -

When you add a second availability zone, you must provide new networks or extend the existing ones.

Table 2-99. Segments on sfo-m01-cl01-vds01 for a Second Availability Zone

Segment Name Type Availability Zone Purpose

az2_sfo01-m01-cl01-vds01-pg-mgmt

VLAN Availability Zone 2 Management traffic in Availability Zone 2

az2_sfo01-m01-cl01-vds01-pg-vmotion

VLAN Availability Zone 2 vSphere vMotion traffic in Availability Zone 2

az2_sfo01-m01-cl01-vds01-pg-vsan

VLAN Availability Zone 2 vSAN traffic in Availability Zone 2

sfo-m01-cl01-vds01-pg-uplink01

VLAN Trunk Stretched between Availability Zone 1 and Availability Zone 2

Edge node overlay and uplink traffic to the first top of rack switch in Availability Zone 2

sfo-m01-cl01-vds01-pg-uplink02

VLAN

Trunk

Stretched between Availability Zone 1 and Availability Zone 2

Edge node overlay and uplink to the second top of rack switch in Availability Zone 2

az2_sfo01-m01-vds01-pg-nfs VLAN Availability Zone 2 NFS traffic in Availability Zone 2

auto created (Host TEP) - - Host overlay

auto created (Host TEP) - - Host overlay

auto created (Hyperbus) - - -


Virtual Segments

Geneve provides the overlay capability in NSX-T to create isolated, multi-tenant broadcast domains across data center fabrics, and enables customers to create elastic, logical networks that span physical network boundaries.

The first step in creating these logical networks is to isolate and pool the networking resources. By using the Geneve overlay, NSX-T isolates the network into a pool of capacity and separates the consumption of these services from the underlying physical infrastructure. This model is similar to the model vSphere uses to abstract compute capacity from the server hardware to create virtual pools of resources that can be consumed as a service. You can then organize the pool of network capacity in logical networks that are directly attached to specific applications.

Geneve is a tunneling mechanism which provides extensibility while still using the offload capabilities of NICs for performance improvement.

Geneve works by creating Layer 2 logical networks that are encapsulated in UDP packets. A Segment ID in every frame identifies the Geneve logical networks without the need for VLAN tags. As a result, many isolated Layer 2 networks can coexist on a common Layer 3 infrastructure using the same VLAN ID.

In the vSphere architecture, the encapsulation is performed between the NIC of the virtual machine and the logical port on the virtual switch, making the Geneve overlay transparent to both the guest virtual machines and the underlying Layer 3 network. The Tier-0 Gateway performs gateway services between overlay and non-overlay hosts, for example, a physical server or the Internet router. The NSX-T Edge node translates overlay segment IDs to VLAN IDs, so that non-overlay hosts can communicate with virtual machines on an overlay network.

The edge cluster hosts all NSX-T Edge node instances that connect to the corporate network for secure and centralized network administration.

Table 2-100. Design Decisions on Geneve Overlay

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-063 To provide virtualized network capabilities to management workloads, use overlay networks with NSX-T Edge nodes and distributed routing.

n Creates isolated, multi-tenant broadcast domains across data center fabrics to deploy elastic, logical networks that span physical network boundaries.

n Enables advanced deployment topologies by introducing Layer 2 abstraction from the data center networks.

Requires configuring transport networks with an MTU size of at least 1,600 bytes.
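The 1,600-byte minimum MTU follows from the overhead that Geneve encapsulation adds to every frame. The following minimal Python sketch shows the arithmetic; the per-header byte counts are illustrative assumptions (no Geneve options and no outer 802.1Q tag).

    # Approximate Geneve overlay overhead per frame (values are illustrative assumptions).
    OUTER_ETHERNET = 14     # outer Ethernet header, without an 802.1Q tag
    OUTER_IPV4 = 20         # outer IPv4 header
    OUTER_UDP = 8           # outer UDP header
    GENEVE_BASE = 8         # Geneve base header, excluding variable-length options

    def minimum_transport_mtu(vm_mtu: int = 1500, options: int = 0) -> int:
        """Smallest physical MTU that carries a vm_mtu-sized inner frame without fragmentation."""
        return vm_mtu + OUTER_ETHERNET + OUTER_IPV4 + OUTER_UDP + GENEVE_BASE + options

    # A standard 1,500-byte guest MTU needs roughly 1,550 bytes on the transport network,
    # so the 1,600-byte minimum (and the 9,000-byte MTU used in this design) leaves headroom
    # for Geneve options and future growth.
    print(minimum_transport_mtu())   # 1550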


Transport Zones

Transport zones determine which hosts can participate in the use of a particular network. A transport zone identifies the type of traffic, VLAN or overlay, and the vSphere Distributed Switch name. You can configure one or more VLAN transport zones and a single overlay transport zone per virtual switch. A transport zone does not represent a security boundary.

Figure 2-16. Transport Zone Design

[Figure: The ESXi hosts, by using the vSphere Distributed Switch with NSX-T, and the NSX-T Edge nodes, by using an N-VDS, participate in the overlay transport zone sfo-m01-tz-overlay01. The edge nodes additionally participate in the VLAN transport zone sfo-m01-tz-vlane01 for edge uplinks to the top of rack switches, and an optional VLAN transport zone sfo-m01-tz-vlan01 is available for workload VLANs.]

Table 2-101. Design Decision on the Transport Zone Configuration for NSX-T Data Center

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-064 Create a single overlay transport zone for all overlay traffic across the management domain and NSX-T Edge nodes.

n Ensures that overlay segments are connected to an NSX-T Edge node for services and north-south routing.

n Ensures that all segments are available to all ESXi hosts and NSX-T Edge nodes configured as transport nodes.

None.

SDDC-MGMT-VI-SDN-065 Create a single VLAN transport zone for uplink VLAN traffic that is applied only to NSX-T Edge nodes.

Ensures that uplink VLAN segments are configured on the NSX-T Edge transport nodes.

If VLAN segments are needed on hosts, you must create another VLAN transport zone for the host transport nodes only.


Uplink Policy for ESXi Host Transport Nodes

Uplink profiles define policies for the links from ESXi hosts to NSX-T segments or from NSX-T Edge appliances to top of rack switches. By using uplink profiles, you can apply consistent configuration of capabilities for network adapters across multiple ESXi hosts or NSX-T Edge nodes.

Uplink profiles can use either load balance source or failover order teaming. If using load balance source, multiple uplinks can be active. If using failover order, only a single uplink can be active.

Table 2-102. Design Decisions on the Uplink Profile for ESXi Transport Nodes

Decision ID Design Decision Decision Justification Decision Implication

SDDC-MGMT-VI-SDN-066 Create an uplink profile with the load balance source teaming policy with two active uplinks for ESXi hosts.

For increased resiliency and performance, supports the concurrent use of both physical NICs on the ESXi hosts that are configured as transport nodes.

None.
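The uplink profile in SDDC-MGMT-VI-SDN-066 might be created through the NSX-T manager API as in the following Python sketch; the profile name, transport VLAN ID, teaming enumeration value, and credentials are assumptions to verify against the API reference for your NSX-T version.

    import requests

    NSX = "https://sfo-m01-nsx01.rainpole.io"   # assumption: NSX-T Manager VIP
    AUTH = ("admin", "nsx_admin_password")      # assumption: admin credentials

    # Hedged sketch: uplink profile for ESXi transport nodes with load balance source
    # teaming and two active uplinks.
    uplink_profile = {
        "resource_type": "UplinkHostSwitchProfile",
        "display_name": "sfo-m01-host-uplink-profile",   # assumption: profile name
        "teaming": {
            "policy": "LOADBALANCE_SRCID",               # assumption: enum for load balance source
            "active_list": [
                {"uplink_name": "uplink-1", "uplink_type": "PNIC"},
                {"uplink_name": "uplink-2", "uplink_type": "PNIC"},
            ],
        },
        "transport_vlan": 1614,                          # assumption: host overlay VLAN ID
    }
    requests.post(f"{NSX}/api/v1/host-switch-profiles",
                  json=uplink_profile, auth=AUTH, verify=False)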

Replication Mode of Segments

The control plane decouples NSX-T Data Center from the physical network. The control plane handles the broadcast, unknown unicast, and multicast (BUM) traffic in the virtual segments.

The following options are available for BUM replication on segments.

Table 2-103. BUM Replication Modes of NSX-T Segments

BUM Replication Mode Description

Hierarchical Two-Tier The ESXi host transport nodes are grouped according to their TEP IP subnet. One ESXi host in each subnet is responsible for replication to an ESXi host in another subnet. The receiving ESXi host replicates the traffic to the ESXi hosts in its local subnet.

The source ESXi host transport node knows about the groups based on information it has received from the NSX-T Controller. The system can select an arbitrary ESXi host transport node as the mediator for the source subnet if the remote mediator ESXi host node is available.

Head-End In this mode, the ESXi host transport node at the origin of the frame to be flooded on a segment sends a copy to every other ESXi host transport node that is connected to this segment.


Table 2-104. Design Decisions on Segment Replication Mode

Decision ID Design Decision Design Justification Design Implications

SDDC-MGMT-VI-SDN-067 Use hierarchical two-tier replication on all segments.

Hierarchical two-tier replication is more efficient by reducing the number of ESXi hosts the source ESXi host must replicate traffic to.

None.

Virtual Network Segment Design for NSX-T for the Management Domain

Management applications that are deployed on top of the management domain can use a pre-defined configuration of NSX-T virtual network segments.

NSX-T segments provide flexibility for workload placement by removing the dependence on traditional physical data center networks. This approach also improves security and mobility of the management applications, and reduces the integration effort with existing customer networks.


Figure 2-17. Virtual Network Segments in the SDDC

[Figure: Shows the cross-region segment xreg-m01-seg01 (192.168.11/24) with vRealize Suite Lifecycle Manager, the cross-region Workspace ONE Access, vRealize Operations Manager, and vRealize Automation, and the region-specific segment sfo-m01-seg01 (192.168.31/24) with the region-specific Workspace ONE Access, vRealize Operations Manager remote collectors, and vRealize Log Insight. Both segments attach to a Tier-1 gateway behind the active-active Tier-0 gateway on the NSX-T Edge cluster, which connects over ECMP through the top of rack switches to the Internet or enterprise network. vCenter Server, SDDC Manager, and the workload domain are also shown.]


Table 2-105. Design Decisions on Virtual Network Segments in NSX-T Data Center

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-068 Create one or more cross-region NSX-T virtual network segments for management application components which require mobility between regions.

Enables management workload mobility without complex physical network configuration.

Each NSX-T virtual network segment requires a unique IP address space.

SDDC-MGMT-VI-SDN-069 Create one or more region-specific NSX-T virtual network segments for management application components that are assigned to a specific region.

Enables workload mobility within the data center without complex physical network configuration.

Each NSX-T virtual network segment requires a unique IP address space.
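A cross-region or region-specific segment of this kind might be created through the NSX-T Policy API as in the following Python sketch. The segment name, Tier-1 gateway identifier, transport zone path, and gateway address are assumptions for illustration and must be adapted to your environment.

    import requests

    NSX = "https://sfo-m01-nsx01.rainpole.io"   # assumption: NSX-T Manager VIP
    AUTH = ("admin", "nsx_admin_password")      # assumption: admin credentials

    # Hedged sketch: create a cross-region overlay segment attached to the Tier-1 gateway.
    segment = {
        "display_name": "xreg-m01-seg01",
        "connectivity_path": "/infra/tier-1s/sfo-m01-ec01-t1-gw01",          # assumption: Tier-1 ID
        "transport_zone_path": "/infra/sites/default/enforcement-points/default"
                               "/transport-zones/sfo-m01-tz-overlay01",      # assumption: path format
        "subnets": [{"gateway_address": "192.168.11.1/24"}],                 # assumption: gateway IP
    }
    requests.patch(f"{NSX}/policy/api/v1/infra/segments/xreg-m01-seg01",
                   json=segment, auth=AUTH, verify=False)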

Information Security and Access Control Design for NSX-T Data Center for the Management Domain

You design authentication access, controls, and certificate management for the NSX-T Data Center instance in the management domain according to industry standards and the requirements of your organization.

Identity Management

Users can authenticate to NSX-T Manager from several sources. Role-based access control is not available with local user accounts.

n Local user accounts

n Active Directory by using LDAP

n Active Directory by using Workspace ONE Access

n Principal identity


Table 2-106. Design Decisions on Identity Management in NSX-T Data Center

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-070 Limit the use of local accounts.

n Local accounts are not user specific and do not offer complete auditing from solutions back to users.

n Local accounts do not provide full role-based access control capabilities.

You must use Active Directory for user accounts.

SDDC-MGMT-VI-SDN-071 Enable NSX-T Manager integration with your corporate identity source by using the region-specific Workspace ONE Access instance.

n Provides integration with Active Directory for role-based access control. You can introduce authorization policies by assignment of organization and cloud services roles to enterprise users and groups defined in your corporate identity source.

n Simplifies deployment by consolidating the Active Directory integration for the SDDC in single component, that is, Workspace ONE Access.

You must have the region-specific Workspace ONE Access deployed before configuring role-based access in NSX-T Manager.

SDDC-MGMT-VI-SDN-072 Use Active Directory groups to grant privileges to roles in NSX-T Data Center.

n Centralizes role-based access control by mapping roles in NSX-T Data Center to Active Directory groups.

n Simplifies user management.

n You must create the role configuration outside of the SDDC stack.

n You must set the appropriate directory synchronization interval in Workspace ONE Access to ensure that changes are available within a reasonable period.

SDDC-MGMT-VI-SDN-073 Create an NSX-T Enterprise Admin group rainpole.io\ug-nsx-enterprise-admins in Active Directory and map it to the Enterprise Administrator role in NSX-T Data Center.

Provides administrator access to the NSX-T Manager user interface.

You must maintain the life cycle and availability of the Active Directory group outside of the SDDC stack.


SDDC-MGMT-VI-SDN-074 Create an NSX-T Auditor group rainpole.io\ug-nsx-auditors in Active Directory and map it to the Auditor role in NSX-T Data Center.

Provides read-only access account to NSX-T Data Center.

You must maintain the life cycle and availability of the Active Directory group outside of the SDDC stack.

SDDC-MGMT-VI-SDN-075 Create more Active Directory groups and map them to roles in NSX-T Data Center according to the business and security requirements of your organization.

Each organization has its own internal business processes. You evaluate the needs for role separation in your business and implement mapping from individual user accounts to Active Directory groups and roles in NSX-T Data Center.

You must maintain the life cycle and availability of the Active Directory group outside of the SDDC stack.

SDDC-MGMT-VI-SDN-076 Restrict end-user access to both NSX-T Manager user interface and its RESTful API endpoint.

The workloads in the management domain are not end-user workloads.

End users have access only to endpoint components.

Password Management and Account Lockout Behavior for NSX-T Manager and NSX-T Edge Nodes

By default, passwords must contain at least eight characters and expire after 90 days. You configure access to the NSX-T command line interface (CLI) and lockout behavior for the NSX-T Manager user interface and RESTful API separately.


Table 2-107. Design Decisions on Password Management and Account Lockout for NSX-T Data Center

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-077 Configure the passwords for CLI access to NSX-T Manager for the root, admin, and audit users, and account lockout behavior for CLI according to the industry standard for security and compliance of your organization.

Aligns with the industry standard across your organization.

You must run console commands on the NSX-T Manager appliances.

For example, you run commands such as set auth-policy cli lockout-period <lockout-period> and set auth-policy cli max-auth-failures <auth-failures> to configure the CLI lockout behavior.

SDDC-MGMT-VI-SDN-078 Configure the passwords for access to the NSX-T Edge nodes for the root, admin, and audit users, and account lockout behavior for CLI according to the industry standard for security and compliance of your organization.

Aligns with the industry standard across your organization.

You must run console commands on the NSX-T Edge appliances.

SDDC-MGMT-VI-SDN-079 Configure the passwords for access to the NSX-T Manager user interface and RESTful API for the root, admin, and audit users, and account lockout behavior according to the industry standard for security and compliance of your organization.

Aligns with the industry standard across your organization.

You must run console commands on the NSX-T Manager appliances.

Certificate Management

Access to all NSX-T Manager interfaces must use a Secure Sockets Layer (SSL) connection. By default, NSX-T Manager uses a self-signed SSL certificate. This certificate is not trusted by end-user devices or Web browsers.

As a best practice, replace self-signed certificates with certificates that are signed by a third-party or enterprise Certificate Authority (CA).


Table 2-108. Design Decisions on Certificate Management in NSX-T Manager

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDN-080 Replace the default self-signed certificate of the NSX-T Manager instance for the management domain with a certificate that is signed by a third-party certificate authority.

Ensures that the communication between NSX-T administrators and the NSX-T Manager instance is encrypted by using a trusted certificate.

Replacing the default certificates with trusted CA-signed certificates from a certificate authority might increase the deployment preparation time because you must generate and submit certificate requests.

SDDC-MGMT-VI-SDN-081 Use a SHA-2 algorithm or stronger when signing certificates.

The SHA-1 algorithm is considered less secure and has been deprecated.

Not all certificate authorities support SHA-2.

Shared Storage Design for the Management Domain

The shared storage design includes the design for vSAN and NFS storage for the SDDC management components.

Well-designed shared storage provides the basis for an SDDC and has the following benefits.

n Provides performant access to business data.

n Prevents unauthorized access to business data.

n Protects data from hardware and software failures.

n Protects data from malicious or accidental corruption.

Follow these guidelines when designing shared storage for your environment.

n Optimize the storage design to meet the diverse needs of applications, services, administrators, and users.

n Strategically align business applications and the storage infrastructure to reduce costs, boost performance, improve availability, provide security, and enhance functionality.

n Provide multiple tiers of storage to match application data access to application requirements.

n Design each tier of storage with different performance, capacity, and availability characteristics. Because not every application requires storage that is expensive, high-performance and highly available, designing different storage tiers reduces cost.

n Logical Design for Shared Storage for the Management Domain

The shared storage design selects the storage technology for each type of cluster. The clusters in the management domain use vSAN for primary storage and NFS for secondary storage.


n Deployment Specification for Shared Storage for the Management Domain

The shared storage design includes determining the physical storage infrastructure required for using VMware vSAN and NFS, and the policy configuration for delivering reliable storage service to the SDDC management components.

n Network Design for Shared Storage Design for the Management Domain

In the network design for shared storage in the management domain, you determine the network configuration for vSAN and NFS traffic.

Logical Design for Shared Storage for the Management Domain

The shared storage design selects the storage technology for each type of cluster. The clusters in the management domain use vSAN for primary storage and NFS for secondary storage.


Figure 2-18. Logical Storage Design

[Figure: Management virtual appliances in the management cluster store their VMDKs, swap files, and logs on datastores presented by the ESXi hosts, with additional datastores for backups, templates, and logs. The datastores are delivered through software-defined storage layers (policy-based storage management, virtualized data services, and hypervisor storage abstraction) on top of SAN, NAS, or DAS storage (third-party or VMware vSAN) that uses physical disks such as SSD, FC 15K, FC 10K, and SATA devices.]


Table 2-109. vSAN Logical Design

Single Availability Zone

n vSAN clusters in the management domain

n Four ESXi hosts per vSAN cluster

n All-flash vSAN configuration

n NFS secondary storage

Multiple Availability Zones

n First cluster in the management domain is a vSAN stretched cluster

n Eight ESXi hosts in the stretched cluster, with four ESXi hosts in each availability zone

n vSAN witness appliance in a third location, outside both availability zones

n All-flash vSAN configuration

n NFS secondary storage

Deployment Specification for Shared Storage for the Management Domain

The shared storage design includes determining the physical storage infrastructure required for using VMware vSAN and NFS, and the policy configuration for delivering reliable storage service to the SDDC management components.

Shared Storage Platform Design for the Management Domain

You can choose between a traditional storage, VMware vSphere Virtual Volumes, and VMware vSAN storage.

Traditional Storage

Fibre Channel, NFS, and iSCSI are applicable options for virtual machines.

VMware vSAN Storage

vSAN is a software-based distributed storage platform that combines the compute and storage resources of VMware ESXi hosts. When you design and size a vSAN cluster, host hardware choices are more limited than for traditional storage.

VMware vSphere Virtual Volumes

This design does not use VMware vSphere Virtual Volumes because not all storage arrays have the same vSphere Virtual Volume feature sets enabled.

Traditional Storage and vSAN Storage

Fibre Channel, NFS, and iSCSI are mature and applicable options to support workload needs.

Your decision to implement one technology or another can be based on performance and functionality, and on considerations like the following:

n The current in-house expertise and installation base in your organization

n The cost, including both capital and long-term operational expenses

n The current relationship of your organization with a storage vendor


vSAN is a software-based distributed storage platform that combines the compute and storage resources of ESXi hosts. It provides a simple storage management experience for the user. However, you must carefully consider supported hardware options when sizing and designing a vSAN cluster.

Storage Type Comparison

ESXi hosts support a variety of storage types. Each storage type supports different vSphere features.

Table 2-110. Network Shared Storage Supported by ESXi Hosts

Technology Protocols Transfers Interface

Fibre Channel FC/SCSI Block access of data/LUN Fibre Channel HBA

iSCSI IP/SCSI Block access of data/LUN iSCSI HBA or iSCSI enabled NIC (hardware iSCSI)

Network Adapter (software iSCSI)

NAS IP/NFS File (no direct LUN access) Network adapter

vSAN IP Block access of data Network adapter

Table 2-111. Supported vSphere Features by Storage Type

TypevSphere vMotion Datastore

Raw Device Mapping (RDM)

Application or Block-Level Clustering

vSphere HA and vSphere DRS

Storage APIs Data Protection

Local Storage ✓ VMFS x ✓ x ✓

iSCSI ✓ VMFS ✓ ✓ ✓ ✓

NAS over NFS ✓ NFS x x ✓ ✓

vSAN ✓ vSAN x ✓ (using iSCSI Initiator)

✓ ✓

Design Decisions on the Shared Storage Type

This design uses vSAN storage. The vSAN design is limited to the first cluster in the management domain. The design uses the default storage policy to achieve redundancy and performance within the cluster.


Table 2-112. Design Decisions on Storage Type

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-STO-001

When using a single availability zone in the first cluster of the management domain, use vSAN and NFS shared storage:

n Use vSAN as the primary shared storage platform.

n Use NFS as the secondary shared storage platform for the management cluster.

By using vSAN as the primary shared storage solution, you can take advantage of more cost-effective local storage.

NFS is for archiving and to maintain historical data. Using NFS provides large, low-cost volumes that you can flexibly expand regularly according to capacity needs.

n The use of two different storage technologies increases the complexity and operational overhead.

n You cannot configure multiple availability zones to use an NFS array in the event an availability zone fails.

SDDC-MGMT-VI-STO-002

In all clusters, ensure that at least 20% of free space is always available on all non-vSAN datastores.

If a datastore runs out of free space, applications and services in the SDDC, including but not limited to the NSX-T Edge core network services, the provisioning portal, and backup, fail.

Monitoring and capacity management must be proactive operations.
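Decision SDDC-MGMT-VI-STO-002 implies proactive monitoring of datastore free space. As one possible, hedged approach, the following Python sketch uses the pyVmomi library to report datastores that fall below the 20% free-space threshold; the vCenter Server address and credentials are placeholder assumptions.

    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Hedged sketch: warn when any datastore drops below 20% free space.
    context = ssl._create_unverified_context()
    si = SmartConnect(host="sfo-m01-vc01.rainpole.io",    # assumption: management vCenter Server
                      user="administrator@vsphere.local",  # assumption: administrator account
                      pwd="vcenter_password",              # assumption: password
                      sslContext=context)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(content.rootFolder, [vim.Datastore], True)
        for ds in view.view:
            free_pct = 100.0 * ds.summary.freeSpace / ds.summary.capacity
            if free_pct < 20.0:
                print(f"WARNING: {ds.name} has only {free_pct:.1f}% free space")
        view.Destroy()
    finally:
        Disconnect(si)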

vSAN Physical Design for the Management Domain

This design uses VMware vSAN to implement software-defined storage as the primary storage type for the management cluster. By using vSAN, you have a high level of control over the storage subsystem.

All functional testing and validation of the design is on vSAN. Although VMware Validated Design uses vSAN, in particular for the clusters running management components, you can use any supported storage solution. If you select a storage solution other than vSAN, take into account that all the design, deployment, and Day-2 guidance in VMware Validated Design applies under the context of vSAN and adjust appropriately. Your storage design must match or exceed the capacity and performance capabilities of the vSAN configuration in the design. For multiple availability zones, the vSAN configuration includes vSAN stretched clusters.


vSAN is a hyper-converged storage software that is fully integrated with the hypervisor. vSAN creates a cluster of local ESXi host hard disk drives and solid-state drives, and presents a flash-optimized, highly resilient, shared storage datastore to ESXi hosts and virtual machines. By using vSAN storage policies, you can control capacity, performance, and availability on a per virtual machine basis.

vSAN Physical Requirements and Dependencies

The software-defined storage module has the following requirements and options.

Requirement Category Requirements

Number of hosts Minimum of three ESXi hosts providing storage resources to the vSAN cluster.

vSAN configuration vSAN is configured as hybrid storage or all-flash storage.

n A vSAN hybrid storage configuration requires both magnetic devices and flash caching devices.

n An all-flash vSAN configuration requires flash devices for both the caching and capacity tiers.

Requirements for individual hosts that provide storage resources

n Minimum of one flash device. The flash-based cache tier must be at least 10% of the size of the HDD capacity tier.

n Minimum of two additional devices for the capacity tier.

n RAID controller that is compatible with vSAN.

n Minimum 10 Gbps network for vSAN traffic.

n vSphere High Availability host isolation response set to power off virtual machines. With this setting, you prevent split-brain conditions if isolation or network partition occurs. In a split-brain condition, the virtual machine might be powered on by two ESXi hosts by accident.

See Table 2-38. Design Decisions on the Admission Control Policy for the First Cluster in the Management Domain.

vSAN Hardware Considerations

While VMware supports building your own vSAN cluster from compatible components, vSAN ReadyNodes are selected for this VMware Validated Design. See Table 2-7. Design Decisions on Server Hardware for ESXi.


vSAN Hardware Options Description

Build Your Own Use hardware from the VMware Compatibility Guide for the following vSAN components:

n Flash-based drives

n Magnetic hard drives

n I/O controllers, including vSAN certified driver and firmware combinations

Use VMware vSAN ReadyNodes A vSAN ReadyNode is a server configuration that is validated in a tested, certified hardware form factor for vSAN deployment, jointly recommended by the server OEM and VMware. See the vSAN ReadyNode documentation. The vSAN Compatibility Guide for vSAN ReadyNodes documentation provides examples of standardized configurations, including supported numbers of VMs and estimated number of 4K IOPS delivered.

I/O Controllers for vSAN

The I/O controllers are as important to a vSAN configuration as the selection of disk drives. vSAN supports SAS, SATA, and SCSI adapters in either pass-through or RAID 0 mode. vSAN supports multiple controllers per ESXi host.

n You select between a single-controller and a multi-controller configuration in the following way: multiple controllers can improve performance and can limit the impact of a controller or SSD failure to a smaller number of drives or vSAN disk groups.

n With a single controller, all disks are controlled by one device. A controller failure impacts all storage, including the boot media (if configured).

Controller queue depth is possibly the most important aspect for performance. All I/O controllers in the VMware vSAN Hardware Compatibility Guide have a minimum queue depth of 256. Consider regular day-to-day operations and increase of I/O because of virtual machine deployment operations, or re-sync I/O activity as a result of automatic or manual fault remediation.

Table 2-113. Design Decisions on the vSAN I/O Controller Configuration

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDS-001

Ensure that the I/O controller that is running the vSAN disk group(s) is capable of, and is configured with, a minimum queue depth of 256.

Controllers with lower queue depths can cause performance and stability problems when running vSAN.

vSAN Ready Nodes are configured with appropriate queue depths.

Limits the number of compatible I/O controllers that can be used for storage.

SDDC-MGMT-VI-SDS-002

I/O controllers that are running vSAN disk group(s) should not be used for any other purpose.

Running non-vSAN disks, for example VMFS datastores, on an I/O controller that is running a vSAN disk group can impact vSAN performance.

If non-vSAN disks are required in ESXi hosts, an additional I/O controller is needed in the host.


vSAN Flash Options

vSAN has two configuration options: all-flash and hybrid.

Hybrid Mode

In a hybrid storage architecture, vSAN pools server-attached capacity devices (in this case magnetic devices) and flash-based caching devices, typically SSDs, or PCI-e devices, to create a distributed shared datastore.

All-Flash Mode

All-flash storage uses flash-based devices (SSD or PCI-e) as a write cache while other flash-based devices provide high endurance for capacity and data persistence.

Table 2-114. Design Decisions on vSAN Mode

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDS-003

Configure vSAN in all-flash mode in the first cluster of the management domain.

n Meets the performance needs of the first cluster in the management domain.

Using high speed magnetic disks in a hybrid vSAN configuration can provide satisfactory performance and is supported.

More disks might be required per host because flash disks are not as dense as magnetic disks.

Sizing Storage

You usually base sizing on the requirements of the IT organization. However, this design provides calculations that are based on a single-region implementation, and are then applied on a per-region basis. In this way, you can handle storage in a dual-region deployment that has failover capabilities enabled.

This sizing is calculated according to a certain node configuration per region. Although VMware Validated Design allocates enough memory capacity to handle N-1 host failures, and uses thin-provisioned swap for the vSAN configuration, the potential thin-provisioned swap capacity is factored into the calculation.

Table 2-115. Management Layers and Hardware Sizes

Category Quantity Resource Type Consumption (GB)

Physical Infrastructure (ESXi) 4 Memory 1,024

Virtual Infrastructure 12 Disk 4,286

Swap 301

Identity and Access Management

4 Disk 240

Swap 54

Cloud Operations 9 Disk 6,180


Swap 158

Cloud Automation 3 Disk 666

Swap 120

Total n 32 management virtual machines

n 4 ESXi hosts

Disk 11,372

Swap 633

Memory 1,024

The storage space that is required for the vSAN capacity tier is calculated as follows. For vSAN memory consumption by management ESXi hosts, see VMware Knowledge Base article 2113954.

Derive the consumption of storage space by the management virtual machines according to the following calculations. See vSAN ReadyNode™ Sizer.

The Disk Space Usage Distribution is made up of the following components:

n Effective Raw Capacity - Space available for the vSAN datastore.

n Slack Space - Space reserved for vSAN-specific operations such as resyncs and rebuilds.

n Dedupe Overhead - Space reserved for deduplication and compression metadata such as hash, translation, and allocation maps.

n Disk Formatting Overhead - Reservation for file system metadata.

n Checksum Overhead - Space used to store checksum information.

n Physical Reservation - The amount of physical space or raw capacity consumed by these overheads.

11,372 GB Disk + 633 GB Swap = 12,005 GB Virtual Machine Raw Capacity Requirements

12,005 GB * 2 = 24,010 GB Total Virtual Machine Raw Capacity, using FTT=1 (RAID 1)

24,010 GB + 30% = 31,213 GB Total Raw Virtual Machine Capacity with Overheads

31,213 GB + 20% = 37,455.6 GB Total Raw Virtual Machine Capacity with Overheads and 20% Estimated Growth

37,455.6 GB / 4 hosts = 9,363.9 GB Total Raw Capacity per Host

9,363.9 GB / 2 disk groups = 4,681.95 GB per Disk Group
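The same calculation can be scripted if you want to rerun the sizing with different inputs. The following minimal Python sketch reproduces the numbers above; the input values come from Table 2-115, and the 30% overhead, 20% growth, FTT=1 (two copies), four hosts, and two disk groups per host are the assumptions stated in this section.

# Sizing sketch that reproduces the vSAN capacity calculation above.
DISK_GB = 11_372            # total management VM disk consumption (Table 2-115)
SWAP_GB = 633               # total thin-provisioned swap consumption (Table 2-115)
FTT_COPIES = 2              # FTT=1 with RAID 1 keeps two full copies of each object
OVERHEAD = 0.30             # slack space and other vSAN overheads
GROWTH = 0.20               # estimated growth
HOSTS = 4                   # ESXi hosts in the first cluster
DISK_GROUPS_PER_HOST = 2

raw_vm_capacity = DISK_GB + SWAP_GB                  # 12,005 GB
raw_with_ftt = raw_vm_capacity * FTT_COPIES          # 24,010 GB
raw_with_overhead = raw_with_ftt * (1 + OVERHEAD)    # 31,213 GB
raw_with_growth = raw_with_overhead * (1 + GROWTH)   # 37,455.6 GB
per_host = raw_with_growth / HOSTS                   # 9,363.9 GB
per_disk_group = per_host / DISK_GROUPS_PER_HOST     # 4,681.95 GB

print(f"Total raw capacity: {raw_with_growth:,.1f} GB")
print(f"Per host: {per_host:,.1f} GB, per disk group: {per_disk_group:,.2f} GB")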


Table 2-116. Design Decisions on vSAN Disk Configuration

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDS-004

Use a 600 GB or greater flash-based drive for the cache tier in each disk group.

Provides enough cache for both hybrid or all-flash vSAN configurations to buffer I/O and ensure disk group performance.

Additional space in the cache tier does not increase performance.

Larger flash disks can increase the initial host cost.

SDDC-MGMT-VI-SDS-005

Have at least 5 TB of flash-based drives for the capacity tier in each disk group.

Provides enough capacity for the management virtual machines with a minimum of 30% of overhead, and 20% growth when the number of primary failures to tolerate is 1.

None.

vSAN Hardware Considerations

While VMware supports building your own vSAN cluster from compatible components, vSAN ReadyNodes are selected for this VMware Validated Design. See Sizing Compute Resources for ESXi for the Management Domain.

vSAN Deployment Specification for the Management Domain

When determining the vSAN deployment specification, you decide on the datastore size, number of ESXi hosts per cluster, number of disk groups per ESXi host, and the vSAN policy.

vSAN Datastore Size

The size of the vSAN datastore depends on the requirements for the datastore. Consider cost against availability to provide the appropriate sizing.

As per the calculations in Sizing Storage, a minimum size is required to run the management workloads and infrastructure. If you plan to add more solutions or additions to this environment, you must increase this size.


Table 2-117. Design Decisions on the vSAN Datastore

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDS-006 Provide the first cluster in the management domain with a minimum of 37 TB of raw capacity for vSAN.

Management virtual machines require at least 12 TB of raw storage (prior to FTT=1) and 24 TB when using the default vSAN storage policy.

By allocating at least 37 TB, initially there is 30% space available as slack for vSAN internal operations and 20% free space that you can use for additional management virtual machine growth.

NFS is used as secondary shared storage for some management components, for example, for backups and log archives.

If you scale the environment out with more workloads, additional storage is required in the management domain.

SDDC-MGMT-VI-SDS-007 On all vSAN datastores, ensure that at least 30% of free space is always available.

When vSAN reaches 80% usage, a rebalance task starts, which can be resource-intensive.

Increases the amount of available storage needed.

Number of vSAN-Enabled Hosts Per Cluster

The number of ESXi hosts in the cluster depends on these factors:

n Amount of available space on the vSAN datastore

n Number of failures you can tolerate in the cluster

For example, if the vSAN cluster has only 3 ESXi hosts, only a single failure is supported. If a higher level of availability is required, you must add more hosts.


Table 2-118. Design Decision on the Cluster Size for vSAN

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDS-008

When using a single availability zone, the first cluster in the management domain requires a minimum of 4 ESXi hosts to support vSAN.

n Having 4 ESXi hosts addresses the availability and sizing requirements.

n You can take an ESXi host offline for maintenance or upgrades without impacting the overall vSAN cluster health.

The availability requirements for the management cluster might cause under utilization of the cluster's ESXi hosts.

SDDC-MGMT-VI-SDS-009

When using two availability zones, the first cluster in the management domain requires a minimum of 8 ESXi hosts (4 in each availability zone) to support a stretched vSAN configuration.

n Having 8 ESXi hosts addresses the availability and sizing requirements.

n You can take an availability zone offline for maintenance or upgrades without impacting the overall vSAN cluster health.

The capacity of the additional 4 hosts is not added to the capacity of the cluster. They are only used to provide additional availability.

Number of Disk Groups Per ESXi Host

Disk group sizing is important during volume design. The number of disk groups can affect availability and performance. If more ESXi hosts are available in the cluster, more failures are tolerated in the cluster. This capability adds cost because additional hardware for the disk groups is required. More available disk groups can decrease the recoverability time of vSAN during a failure. Consider these data points when deciding on the number of disk groups per ESXi host:

n Amount of available space on the vSAN datastore.

n Number of failures you can tolerate in the cluster.

n Performance required when recovering vSAN objects.

Each vSAN host can have five disk groups, each containing seven capacity devices, resulting in a maximum of 35 capacity devices. The optimal number of disk groups is a balance between hardware and space requirements for the vSAN datastore. More disk groups increase space and provide higher availability. However, adding disk groups can be restricted by cost.

Table 2-119. Design Decisions on the Disk Groups per ESXi Host

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDS-010

Configure vSAN with a minimum of two disk groups per ESXi host.

Reduces the size of the fault domain and spreads the I/O load over more disks for better performance.

Multiple disks groups require more disks in each ESXi host.

vSAN Policy Design

After you enable and configure VMware vSAN, you can create storage policies that define the virtual machine storage characteristics. Storage characteristics specify different levels of service for different virtual machines.


The default storage policy tolerates a single failure and has a single disk stripe. Use the default policy. If you configure a custom policy, vSAN should guarantee its application. However, if vSAN cannot guarantee a policy, you cannot provision a virtual machine that uses the policy unless you enable force provisioning.

Policy design starts with assessment of business needs and application requirements. Assess the use cases for VMware vSAN to determine the necessary policies. Start by assessing the following application requirements:

n I/O performance and profile of your workloads on a per-virtual-disk basis

n Working sets of your workloads

n Hot-add of additional cache (requires repopulation of cache)

n Specific application best practice (such as block size)

After assessment, configure the software-defined storage module policies for availability and performance in a conservative manner so that consumed space and recoverability are balanced. Usually the default system policy covers most common cases. You create custom policies if specific requirements for performance or availability exist.

A storage policy includes several attributes. You can use them alone or combine them to provide different service levels. By using policies, you can customize any configuration according to the business requirements of the consuming application.

Before making design decisions, understand the policies and the objects to which they can be applied.


Table 2-120. VMware vSAN Policy Options

Capability Use Case Default Value Maximum Value Comments

Number of failures to tolerate

Redundancy 1 3 A standard RAID 1 mirrored configuration that provides redundancy for a virtual machine disk. The higher the value, the more failures can be tolerated. For N failures tolerated, N+1 copies of the disk are created, and 2N+1 ESXi hosts contributing storage are required.

A higher N value indicates that more replicas of virtual machines are made, which can consume more disk space than expected.

Number of disk stripes per object

Performance 1 12 A standard RAID 0 stripe configuration used to increase performance for a virtual machine disk.

This setting defines the number of HDDs on which each replica of a storage object is striped.

If the value is higher than 1, you can increase performance. However, an increase in system resource usage might also result.


Flash read cache reservation (%)

Performance 0% 100% Flash capacity reserved as read cache for the storage is a percentage of the logical object size that is reserved for that object.

Use this setting for workloads only if you must address read performance issues. The downside of this setting is that other objects cannot use a reserved cache.

Avoid using these reservations unless it is necessary because unreserved flash is shared fairly among all objects.


Object space reservation (%)

Thick provisioning 0% 100% The percentage of the storage object that is thick-provisioned when creating a virtual machine. The remaining storage capacity is thin-provisioned.

This setting is useful if an object will always use a predictable amount of storage, cutting back on repeatable disk growth operations for all but new or non-predictable storage use.

Force provisioning Override policy No - Forces provisioning to occur even if the currently available cluster resources cannot satisfy the current policy.

Force provisioning is useful during a planned expansion of the vSAN cluster, during which provisioning of virtual machines must continue.

vSAN automatically tries to bring the object into compliance as resources become available.

If you do not specify a user-configured policy, vSAN uses a default system policy of 1 failure to tolerate and 1 disk stripe for virtual disks and virtual disk snapshots. To ensure protection for these critical virtual machine components, policy defaults for the VM namespace and swap are set statically and are not configurable. Configure policies according to the business requirements of the application. By using policies, vSAN can adjust the performance of a disk on the fly.
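To illustrate the relationship in Table 2-120 between the number of failures to tolerate, the number of replicas, and the required cluster size for RAID 1 mirroring, the following Python sketch encodes the N+1 copies and 2N+1 hosts rules. It is a conceptual aid only and not part of the validated design.

# Conceptual sketch: RAID 1 mirroring requirements for a given FTT value.
def raid1_requirements(failures_to_tolerate: int) -> dict:
    # Tolerating N failures requires N+1 full copies of each object and
    # 2N+1 ESXi hosts contributing storage (the extra hosts hold witness
    # components).
    return {
        "copies": failures_to_tolerate + 1,
        "min_hosts": 2 * failures_to_tolerate + 1,
        "capacity_multiplier": failures_to_tolerate + 1,
    }

for ftt in (1, 2, 3):
    req = raid1_requirements(ftt)
    print(f"FTT={ftt}: {req['copies']} copies, "
          f"at least {req['min_hosts']} hosts, "
          f"{req['capacity_multiplier']}x raw capacity per object")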

Object Policy Comments

Virtual machine namespace

Failures-to-Tolerate: 1 Configurable. Changes are not recommended.

Swap Failures-to-Tolerate: 1 Configurable. Changes are not recommended.


Virtual disks User-Configured Storage Policy Can be any storage policy configured on the system.

Virtual disk snapshots Uses virtual disk policy Same as virtual disk policy by default. Changes are not recommended.

Table 2-121. Design Decisions on the vSAN Storage Policy

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDS-011

When using a single availability zone, use the default VMware vSAN storage policy.

Provides the level of redundancy that is needed in the management cluster.

Provides the level of performance that is enough for the individual management components.

You might need additional policies for third-party VMs hosted in these clusters because their performance or availability requirements might differ from what the default VMware vSAN policy supports.

SDDC-MGMT-VI-SDS-012

When using two availability zones, add the following setting to the default vSAN storage policy:

Secondary Failures to Tolerate = 1

Provides the necessary protection for virtual machines in each availability zone, with the ability to recover from an availability zone outage.

You might need additional policies if third-party VMs are to be hosted in these clusters because their performance or availability requirements might differ from what the default VMware vSAN policy supports.

SDDC-MGMT-VI-SDS-013

When using two availability zones, configure two fault domains, one for each availability zone. Assign each host to their respective availability zone fault domain.

Fault Domains are mapped to Availability Zones to provide logical host separation and ensure a copy of vSAN data is always available even when an availability zone goes offline.

Additional raw storage is required when Secondary Failures to Tolerate and Fault Domains are enabled.

SDDC-MGMT-VI-SDS-014

Leave the default virtual machine swap file as a sparse object on VMware vSAN.

Creates virtual swap files as a sparse object on the vSAN datastore. Sparse virtual swap files only consume capacity on vSAN as they are accessed. As a result, you can reduce the consumption on the vSAN datastore if virtual machines do not experience memory over-commitment which requires the use of the virtual swap file.

None.

NFS Physical Design for the Management Domain

You can use Network File System (NFS) as secondary storage in the SDDC. When you design the physical NFS configuration, consider disk type and size, networking between the storage and the ESXi hosts, and volumes according to the data you are going to store. In this validated design, NFS stores VM templates, backups, and log archives.

This NFS design is not related to a specific vendor or array guidance. Consult your storage vendor for the configuration settings appropriate for your storage array.

NFS Load Balancing

No load balancing is available for NFS/NAS on vSphere because it is based on single session connections. You can configure aggregate bandwidth by creating multiple paths to the NAS array, and by accessing some datastores via one path, and other datastores via another path. You can configure NIC Teaming so that if one interface fails, another can take its place. However, these load balancing techniques work only if a network failure occurs. They might not be able to handle error conditions on the NFS array or on the NFS server. The storage vendor is often the source for correct configuration and configuration maximums.

NFS Versions

vSphere supports both version 3 and version 4.1 of NFS. However, not all vSphere features are available when connecting to storage arrays that use NFS version 4.1.

Table 2-122. Design Decisions on the NFS Version

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-NFS-001

Use NFS version 3 for all NFS datastores.

You cannot use Storage I/O Control with NFS version 4.1 datastores.

NFS version 3 does not support Kerberos authentication.

NFS Physical Requirements

To use NFS storage in the VMware Validated Design, your environment must meet certain requirements for networking and bus technology.

n All connections must be on 10 Gbps Ethernet minimum, with 25 Gbps Ethernet recommended.

n Jumbo frames are enabled.

n 10K SAS (or faster) drives are used in the storage array.

n You can combine different disk speeds and disk types in an array to create different performance and capacity tiers. The management cluster uses 10K SAS drives in the RAID configuration recommended by the array vendor to achieve the required capacity and performance.


Table 2-123. Design Decisions for NFS Hardware

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-NFS-002

n Consider 10K SAS drives a baseline performance requirement. Greater performance might be needed according to the scale and growth profile of the environment.

n Consider the number and performance of disks backing secondary storage NFS volumes.

10K SAS drives provide a balance between performance and capacity. You can use faster drives.

n vStorage API for Data Protection-based backups require high-performance datastores to meet backup SLAs.

10K SAS drives are more expensive than other alternatives.

vSphere Storage APIs - Array Integration

The VMware vSphere Storage APIs for Array Integration (VAAI) supports a set of ESXCLI commands for enabling communication between ESXi hosts and storage devices. Using this API/CLI has several advantages.

The APIs define a set of storage primitives that enable the ESXi host to offload certain storage operations to the array for hardware acceleration. Offloading the operations reduces resource overhead on the ESXi hosts and can significantly improve performance for storage-intensive operations such as storage cloning, zeroing, and so on. The goal of hardware acceleration is to help storage vendors provide hardware assistance to speed up VMware I/O operations that are more efficiently accomplished in the storage hardware.

Without the use of VAAI, cloning or migration of virtual machines by the VMkernel data mover involves software data movement. The data mover issues I/O to read and write blocks to and from the source and destination datastores. With VAAI, the data mover can use the API primitives to offload operations to the array when possible. For example, when you copy a virtual machine disk file (VMDK file) from one datastore to another inside the same array, the data mover directs the array to make the copy completely inside the array. If you invoke a data movement operation and the corresponding hardware offload operation is enabled, the data mover first attempts to use hardware offload. If the hardware offload operation fails, the data mover reverts to the traditional software method of data movement.

Hardware data movement performs better than software data movement. It consumes fewer CPU cycles and less bandwidth on the storage fabric. Timing operations that use the VAAI primitives and use esxtop to track values such as CMDS/s, READS/s, WRITES/s, MBREAD/s, and MBWRTN/s of storage adapters during the operation show performance improvements.
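If you want to confirm which VAAI primitives a host reports for its block devices, you can query the host with esxcli. The following hedged Python sketch runs the command over SSH; it assumes the third-party paramiko library is installed, SSH is enabled on the ESXi host, and the host name and credentials shown are placeholders for your environment. For NAS (NFS) datastores, hardware acceleration is provided by the array vendor plug-in and is reported against the datastore instead of a block device.

# Hedged sketch: query per-device VAAI primitive status on an ESXi host.
import paramiko

HOST = "sfo01-m01-esx01.sfo.rainpole.local"   # hypothetical ESXi host name
USER = "root"
PASSWORD = "replace-with-real-credentials"    # placeholder credentials

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(HOST, username=USER, password=PASSWORD)

# Lists the VAAI primitive status (ATS, Clone, Zero, Delete) for each block device.
stdin, stdout, stderr = client.exec_command(
    "esxcli storage core device vaai status get")
print(stdout.read().decode())
client.close()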


Table 2-124. Design Decision on the Integration of vStorage APIs for Array

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-NFS-003

Select an array that supports vStorage APIs for Array Integration (VAAI) over NAS (NFS).

n VAAI offloads tasks to the array itself, enabling the ESXi hypervisor to use its resources for application workloads and not become a bottleneck in the storage subsystem.

n VAAI is required to support the target number of virtual machine life cycle operations in this design.

Not all arrays support VAAI over NFS. For the arrays that support VAAI, to enable VAAI over NFS, you must install a plug-in from the array vendor.

NFS Volumes

Select a volume configuration for NFS storage in the SDDC according to the requirements of the management applications that use the storage.

n Multiple datastores can be created on a single volume for applications that do not have a high I/O footprint.

n For high I/O applications, such as backup applications, use a dedicated volume to avoid performance issues.

n For other applications, set up Storage I/O Control (SIOC) to limit the storage use by high I/O applications so that other applications get the I/O capacity they are requesting.

Table 2-125. Design Decisions for NFS Volume Assignment

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-NFS-004

Use a dedicated NFS volume to support backup requirements.

The backup and restore process is I/O intensive. Using a dedicated NFS volume ensures that the process does not impact the performance of other management components.

Dedicated volumes add management overhead to storage administrators. Dedicated volumes might use more disks, according to the array and type of RAID.

SDDC-MGMT-VI-NFS-005

Use a shared volume for other management component datastores.

Non-backup related management applications can share a common volume because of the lower I/O profile of these applications.

Enough storage space for shared volumes and their associated application data must be available.

NFS Exports

All NFS exports are shared directories that sit on top of a storage volume. These exports control the access between the ESXi host endpoints and the underlying storage system. Multiple exports can exist on a single volume, with different access controls on each.
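The mechanism for restricting access varies by array vendor, but conceptually each export carries an access list of the ESXi hosts that may mount it. The following minimal Python sketch generates /etc/exports-style entries as an illustration; the export paths and IP addresses are hypothetical, and most enterprise NFS arrays expose an equivalent per-export access list through their own management interface.

# Illustrative sketch: build /etc/exports-style entries that limit each
# export to the ESXi hosts that need it. Paths and IPs are hypothetical.
EXPORTS = {
    "/vol/sfo01-m01-backup": ["172.16.25.101", "172.16.25.102",
                              "172.16.25.103", "172.16.25.104"],
    "/vol/sfo01-m01-shared": ["172.16.25.101", "172.16.25.102",
                              "172.16.25.103", "172.16.25.104"],
}

for path, esxi_hosts in EXPORTS.items():
    clients = " ".join(f"{ip}(rw,no_root_squash,sync)" for ip in esxi_hosts)
    print(f"{path} {clients}")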


Table 2-126. Design Decisions on the NFS Exports

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-NFS-006

For each export, limit access to the application virtual machines or hosts requiring the ability to mount the storage only.

Limiting access helps ensure the security of the underlying data.

Securing exports individually can introduce operational overhead.

Storage Tiering for the Management Domain

Not all application workloads have the same storage requirements. Storage tiering allows for these differences by creating multiple levels of storage with varying degrees of performance, reliability and cost, depending on the application workload needs.

Enterprise-class storage arrays contain multiple drive types and protection mechanisms. Storage, server, and application administrators determine storage configuration individually for each application in the environment. Virtualization can make this problem more challenging by consolidating many different application workloads onto a small number of large devices. The most mission-critical data typically represents the smallest amount of data and offline data represents the largest amount. Details differ for different organizations. To determine the storage tier for application data, determine the storage characteristics of the application or service.

n I/O operations per second (IOPS) requirements

n Megabytes per second (MBps) requirements

n Capacity requirements

n Availability requirements

n Latency requirements

After you determine the information for each application, you can move the application to the storage tier with matching characteristics.

n Consider any existing service-level agreements (SLAs).

n Move data between storage tiers during the application lifecycle as needed.

Storage Policies and Controls for the Management Domain

You create a storage policy for virtual machines to specify which storage capabilities and characteristics are the best match for the virtual machine requirements.

You can identify the storage subsystem capabilities by using the VMware vSphere API for Storage Awareness (VASA) or by using a user-defined storage policy.


Storage Policy Definition Description

VMware vSphere API for Storage Awareness With vSphere API for Storage Awareness, storage vendors can publish the capabilities of their storage to vCenter Server so that users can see these capabilities in its user interface.

User-defined storage policy You define the storage policy by using the VMware Storage Policy SDK, VMware vSphere PowerCLI, or vSphere Client.

You can assign a storage policy to a virtual machine. Then, you can periodically check the storage for compliance. In this way, the virtual machine continues to run on storage that has the required performance and availability characteristics.

VMware vSphere Storage I/O Control allows cluster-wide storage I/O prioritization, which results in better workload consolidation and helps reduce extra costs associated with over-provisioning.

vSphere Storage I/O Control extends the constructs of shares and limits to storage I/O resources. Shares are set on a per-virtual machine basis. You can control the amount of storage I/O that is allocated to virtual machines during periods of I/O contention, so that more important virtual machines have greater access to the storage array for I/O resource allocation.

When vSphere Storage I/O Control is enabled on a datastore, the ESXi host monitors the device latency when communicating with that datastore. If the device latency exceeds a threshold, the datastore is considered to be congested and each virtual machine that accesses that datastore is allocated I/O resources according to their shares. vSphere Storage I/O Control has several requirements, limitations, and constraints.

n Datastores that are enabled with vSphere Storage I/O Control must be managed by a single vCenter Server system.

n Storage I/O Control is supported on Fibre Channel-connected, iSCSI-connected, and NFS-connected storage. RDM is not supported.

n Storage I/O Control does not support datastores with multiple extents.

n Before using vSphere Storage I/O Control on datastores that are backed by arrays with automated storage tiering capabilities, verify that the storage array has been certified as compatible with vSphere Storage I/O Control. See VMware Compatibility Guide.
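Conceptually, share-based allocation divides the I/O capacity of a congested datastore among its virtual machines in proportion to their shares, as described above the requirements list. The following Python sketch illustrates that proportional split; it is a conceptual aid only and not a model of the actual ESXi I/O scheduler. The share values correspond to the standard vSphere disk share presets (Low 500, Normal 1000, High 2000).

# Conceptual sketch: proportional allocation of datastore IOPS by shares.
def allocate_iops(datastore_iops: int, vm_shares: dict) -> dict:
    total_shares = sum(vm_shares.values())
    return {vm: datastore_iops * shares / total_shares
            for vm, shares in vm_shares.items()}

shares = {"backup-proxy": 2000, "mgmt-vm-a": 1000, "mgmt-vm-b": 1000}
print(allocate_iops(10_000, shares))
# {'backup-proxy': 5000.0, 'mgmt-vm-a': 2500.0, 'mgmt-vm-b': 2500.0}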

Table 2-127. Design Decisions on Storage Policies and Controls

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-STO-003

Enable Storage I/O Control with the default values on all non-vSAN datastores.

Ensures that all virtual machines on a datastore receive equal amount of I/O capacity.

Virtual machines that use more I/O are throttled so that other virtual machines can access the datastore. This throttling occurs only when I/O contention occurs on the datastore.

Network Design for Shared Storage Design for the Management Domain

In the network design for shared storage in the management domain, you determine the network configuration for vSAN and NFS traffic.


When determining the network configuration, you have to consider the overall traffic bandwidth and decide how to isolate storage traffic.

n Consider how much replication and communication traffic is running between ESXi hosts and storage arrays.

n The amount of storage traffic depends on the number of VMs that are running in the cluster, and on how write- intensive the I/O is for the applications running in the VMs.

For information on the physical network setup for vSAN and NFS traffic, and other system traffic, see Physical Network Infrastructure Design for NSX-T Data Center for the Management Domain.

For information on the virtual network setup for vSAN and NFS traffic, and other system traffic, see Distributed Port Group and VMkernel Adapter Design for the Management Domain.

vSAN Network Design

The vSAN network design includes these components.

Table 2-128. Components of the vSAN Network Design

Design Component Description

Physical NIC speed For best and most predictable performance (IOPS) for the environment, this design uses a minimum of a 10-GbE connection, with 25-GbE recommended, for use with vSAN all-flash configurations.

VMkernel network adapters for vSAN

The vSAN VMkernel network adapter on each ESXi host is created when you enable vSAN on the cluster. Connect the vSAN VMkernel network adapters on all ESXi hosts in a cluster, including ESXi hosts that are not contributing storage resources to the cluster, to a dedicated distributed port group.

VLAN All storage traffic should be isolated on its own VLAN. When a design uses multiple vSAN clusters, each cluster should use a dedicated VLAN or segment for its traffic. This approach increases security, prevents interference between clusters and helps with troubleshooting cluster configuration.

Jumbo frames vSAN traffic can be handled by using jumbo frames. Use jumbo frames for vSAN traffic only if the physical environment is already configured to support them, they are part of the existing design, or if the underlying configuration does not add significant complexity to the design.

Virtual switch type vSAN supports vSphere Standard Switch or vSphere Distributed Switch. The benefit of using vSphere Distributed Switch is that it supports Network I/O Control for prioritization of bandwidth if contention occurs.


Table 2-129. Design Decisions on Virtual Switch Configuration for vSAN

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-VI-SDS-015

Use the existing vSphere Distributed Switch instances in the first cluster in the management domain.

Provides guaranteed performance for vSAN traffic, if there is network contention, by using existing networking components.

All traffic paths are shared over common uplinks.

SDDC-MGMT-VI-SDS-016

Configure jumbo frames on the VLAN for vSAN traffic.

n Simplifies configuration because jumbo frames are also used to improve the performance of vSphere vMotion and NFS storage traffic.

n Reduces the CPU overhead resulting from high network usage.

Every device in the network must support jumbo frames.

NFS Network Design

NFS version 3 traffic is transmitted in an unencrypted format across the LAN. As a best practice, use NFS storage on trusted networks only and isolate the traffic on dedicated VLANs.

Many NFS arrays have built-in security, which enables them to control the IP addresses that can mount NFS exports. Use this feature to configure the ESXi hosts that can mount the volumes that are being exported and have read/write access to those volumes.

Cloud Operations Design

The cloud operations design for the SDDC management domain includes virtual infrastructure provisioning and life cycle management capabilities of the SDDC management components.

The VMware component in this layer is SDDC Manager that is part of VMware Cloud Foundation.


Figure 2-19. Operations Management in the SDDC

The figure highlights the Cloud Operations layer (monitoring, logging, and life cycle management) in the context of the other SDDC layers: cloud automation, virtual infrastructure, physical infrastructure, business continuity, and security and compliance.

n SDDC Manager Detailed Design

Operational day-to-day efficiencies are delivered through SDDC Manager, the core component of VMware Cloud Foundation. These include full life cycle management tasks such as deployment, configuration, patching, and upgrades.

SDDC Manager Detailed Design

Operational day-to-day efficiencies are delivered through SDDC Manager, the core component of VMware Cloud Foundation. These include full life cycle management tasks such as deployment, configuration, patching, and upgrades.

Logical Design for SDDC Manager

You deploy an SDDC Manager appliance in the management domain for creating workload domains, provisioning additional virtual infrastructure, and life cycle management of the SDDC management components.

SDDC Manager is deployed as a preconfigured virtual appliance that is running the VMware Photon™ operating system. SDDC Manager provides the management interface to VMware Cloud Foundation.


Figure 2-20. Logical Design of SDDC Manager

The figure shows the SDDC Manager virtual appliance in Region A, accessed through its user interface and API. SDDC Manager performs infrastructure provisioning and configuration against vCenter Server, ESXi, NSX-T Data Center, and vRealize Suite Lifecycle Manager, performs life cycle management through vCenter Server, uses Active Directory as an identity source through the vCenter Single Sign-On domain, connects to external services such as My VMware and depot.vmware.com, and relies on supporting infrastructure such as shared storage, DNS, NTP, and a certificate authority.

You use SDDC Manager in VMware Cloud Foundation to perform the following operations:

n Commissioning or decommissioning ESXi hosts

n Deployment of workload domains

n Extension of clusters in the management and workload domains with ESXi hosts

n Adding clusters to the management domain and workload domains

n Support for network pools for host configuration in a workload domain

n Storage of product licenses

n Deployment of vRealize Suite Lifecycle Manager


n Life cycle management of the virtual infrastructure components in all workload domains, and of vRealize Suite Lifecycle Manager.

n Certificate management

n Password management and rotation

n NSX-T Edge cluster deployment in a workload domain

n Backup configuration

Table 2-130. SDDC Manager Logical Components

Single Availability Zone Multiple Availability Zones

n A single SDDC Manager appliance deployed on the management network segment.

n vSphere HA protects the SDDC Manager appliance.

n A single SDDC Manager appliance deployed on the region-specific network segment.

n vSphere HA protects the SDDC Manager appliance.

n A DRS rule specifies that the SDDC Manager appliance runs on an ESXi host in Availability Zone 1.

Table 2-131. Design Decisions on the Logical Design of SDDC Manager

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-LCM-VCF-001 Deploy an SDDC Manager system in the first cluster of the management domain.

SDDC Manager is required to perform VMware Cloud Foundation capabilities, such as provisioning of workload domains, deployment of solutions, patching and upgrade, and others.

None.

Deployment Specification of SDDC Manager

You determine the size of the compute resources, high availability implementation, and life cycle management support for the SDDC Manager according to the design objectives of this design. You also plan access to the My VMware repository for downloading install and upgrade software bundles.

n Deployment Model for SDDC Manager

You determine the amount of compute and storage resources in the management domain for SDDC Manager according to the scale of the environment and plans for deployment of virtual infrastructure workload domains.

n Repository Access Design for SDDC Manager

The repository design consists of characteristics and decisions that support configuring SDDC Manager with the VMware Depot in the Management Domain.

n Certificate Authority Integration Design for SDDC Manager

For an automated generation and replacement of signed certificates for the SDDC management components, you integrate a certificate authority with SDDC Manager.


n Multi-Instance Management Design for SDDC Manager

You can manage multiple VMware Cloud Foundation instances together by grouping them in a federation. Consider using the multi-instance management feature if you plan to extend your environment with another VMware Cloud Foundation instance.

n Life Cycle Management Design for SDDC Manager

The life cycle management module of SDDC Manager is responsible for applying patches, updates, and upgrades to the SDDC Manager appliance.

Deployment Model for SDDC Manager

You determine the amount of compute and storage resources in the management domain for SDDC Manager according to the scale of the environment and plans for deployment of virtual infrastructure workload domains.

You cannot customize the SDDC Manager appliance during deployment. You use the default configuration.

Table 2-132. Resource Specification of the SDDC Manager Appliance

Setting Value

Virtual CPUs 4 vCPUs

Memory 16 GB

Disk Capacity 816 GB

Network 1 x VMXNET3

Table 2-133. Design Decisions on Sizing Resources for SDDC Manager

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-LCM-VCF-002 Deploy SDDC Manager with its default configuration.

You cannot customize the configuration of the SDDC Manager appliance, and you should not change it from its defaults.

None.

Repository Access Design for SDDC Manager

The repository design consists of characteristics and decisions that support configuring SDDC Manager with the VMware Depot in the Management Domain.

To download software bundles from the VMware Depot at depot.vmware.com, and to send Customer Experience Improvement Program (CEIP) data to VMware, SDDC Manager must be connected to the Internet.


Table 2-134. Design Decisions on Internet Connectivity of SDDC Manager

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-LCM-VCF-003 Connect SDDC Manager to the Internet for downloading software bundles.

SDDC Manager must be able to download install and upgrade software bundles for deployment of workload domains and solutions, and for upgrade from a repository.

The rules of your organization might not permit direct access to the Internet. In this case, you must download software bundles for SDDC Manager.

If a direct connection to the Internet is not available in your environment, you can use a proxy server to download software bundles. SDDC Manager supports only proxy servers that do not require authentication. By default, SDDC Manager uses direct access to the Internet.
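Before configuring a proxy in SDDC Manager, you can verify that the proxy allows unauthenticated access to the depot. The following hedged Python sketch performs that check; it assumes the requests library is installed, and the proxy address is a placeholder for your environment.

# Hedged sketch: confirm the depot is reachable through an unauthenticated proxy.
import requests

PROXY = "http://proxy.rainpole.local:3128"   # hypothetical proxy endpoint
DEPOT = "https://depot.vmware.com"

try:
    response = requests.get(DEPOT, proxies={"http": PROXY, "https": PROXY},
                            timeout=10)
    print(f"Depot reachable through proxy, HTTP status {response.status_code}")
except requests.RequestException as err:
    print(f"Depot not reachable through proxy: {err}")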

Table 2-135. Design Decisions on Network Proxy for SDDC Manager

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-LCM-VCF-004 Skip configuring a network proxy.

Most environments provide direct access to the Internet.

For deployments where Internet connectivity is not directly available, you might need to configure a network proxy.

Access to the VMware Depot is granted by using My VMware credentials. Configure SDDC Manager with an account that has the VMware Cloud Foundation entitlement.

Table 2-136. Design Decisions on Repository Access of SDDC Manager

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-LCM-VCF-005 Configure SDDC Manager with a My VMware account with VMware Cloud Foundation entitlement to check for and download software bundles.

Software bundles for VMware Cloud Foundation are stored in a repository that is secured with access controls.

Requires the use of a My VMware user account with access to VMware Cloud Foundation licensing.

Certificate Authority Integration Design for SDDC Manager

For an automated generation and replacement of signed certificates for the SDDC management components, you integrate a certificate authority with SDDC Manager.

You can configure SDDC Manager only with the Microsoft Certificate Authority. To use certificates that are signed by another certificate authority (CA), you generate certificate signing requests (CSRs) for the selected management components. After the CA sends you the signed certificates, you upload them to SDDC Manager and initiate certificate replacement on the target components.
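As an illustration of the manual CSR workflow for a CA other than Microsoft CA, the following Python sketch generates a private key and a certificate signing request by using the third-party cryptography library. The common name matches the SDDC Manager FQDN used in this design; the organization name and output file names are placeholders, and your CA might require additional subject attributes.

# Hedged sketch: generate a key pair and CSR for a management component.
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

FQDN = "sfo-vcf01.sfo.rainpole.local"

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COMMON_NAME, FQDN),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Rainpole"),  # placeholder
    ]))
    .add_extension(x509.SubjectAlternativeName([x509.DNSName(FQDN)]),
                   critical=False)
    .sign(key, hashes.SHA256())
)

with open("sfo-vcf01.key", "wb") as key_file:
    key_file.write(key.private_bytes(
        serialization.Encoding.PEM,
        serialization.PrivateFormat.TraditionalOpenSSL,
        serialization.NoEncryption(),
    ))
with open("sfo-vcf01.csr", "wb") as csr_file:
    csr_file.write(csr.public_bytes(serialization.Encoding.PEM))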


Table 2-137. Design Decisions on Certificate Authority Integration for SDDC Manager

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-LCM-VCF-006 Configure SDDC Manager with an external certificate authority that is responsible for providing signed certificates.

Provides increased security by implementing signed certificate generation and replacement across the management components.

An external certificate authority, such as Microsoft CA, must be locally available.

Multi-Instance Management Design for SDDC Manager

You can manage multiple VMware Cloud Foundation instances together by grouping them in a federation. Consider using the multi-instance management feature if you plan to extend your environment with another VMware Cloud Foundation instance.

You can manage multiple VMware Cloud Foundation instances together by grouping them in a federation. Each VMware Cloud Foundation instance that is a member of the federation has read access to information about the entire federation and the other instances in it. Federation members can view inventory across the VMware Cloud Foundation instances in the federation, and the available and used capacity (CPU, memory, and storage). As a result, you can maintain control over multiple sites from a single interface.

Table 2-138. Design Decisions on Multi-Instance Management for SDDC Manager

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-LCM-VCF-007 Enable multi-instance management.

Provides a single view across multiple instances of VMware Cloud Foundation.

You must provide connectivity between the SDDC Manager instances in each VMware Cloud Foundation deployment.

Life Cycle Management Design for SDDC Manager

The life cycle management module of SDDC Manager is responsible for applying patches, updates, and upgrades to the SDDC Manager appliance.

Table 2-139. Design Decisions on Life Cycle Management of SDDC Manager

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-LCM-VCF-008 Use SDDC Manager to manage its own life cycle.

SDDC Manager supports its own life cycle management.

None.

Network Design for SDDC Manager

You place SDDC Manager on a VLAN for traffic segmentation, and decide on the IP addressing scheme and name resolution for optimal support for the SDDC management components, and host provisioning and life cycle management.


Figure 2-21. SDDC Manager Network Design

The figure shows the SDDC Manager appliance (sfo-vcf01.sfo.rainpole.local) and the management domain vCenter Server (sfo-m01-vc01.sfo.rainpole.local) in Region A (SFO, San Francisco) connected to the management VLAN sfo-m01-cl01-vds01-pg-mgmt (172.16.11.0/24). Data center users, Active Directory, and the Internet or enterprise network reach the appliance through the physical upstream router, alongside the NSX-T Tier-0 and Tier-1 gateways that serve the NSX-T network segments.

Network Segments

The SDDC Manager appliance is connected to the management VLAN sfo01-m01-cl01-vds01-pg-mgmt for secure access to the application user interface and API.

Table 2-140. Design Decisions on Network Segments for SDDC Manager

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-LCM-VCF-009 Place the SDDC Manager appliance on the management VLAN sfo01-m01-cl01-vds01-pg-mgmt.

Reduces the number of VLANs. You allocate a single VLAN to vCenter Server, NSX-T Data Center, SDDC Manager, and other SDDC management components.

None.

IP Addressing

The IP address for the SDDC Manager appliance can be assigned by using DHCP or statically.


Table 2-141. Design Decisions on IP Addressing Scheme for SDDC Manager

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-LCM-VCF-010 Allocate a statically assigned IP address and host name to the SDDC Manager appliance in the management domain.

Ensures stability across the SDDC, and makes the environment simpler to maintain and track and the DNS configuration simpler to implement.

Requires precise IP address management.

Name Resolution

Name resolution provides the translation between an IP address and a fully qualified domain name (FQDN), which makes it easier to remember and connect to components across the SDDC. Each IP address must have a valid internal DNS registration, including forward and reverse name resolution. The SDDC Manager appliance must maintain network connections to the following components:

n vCenter Server

n ESXi hosts

n NSX-T Manager cluster.

The host name of the SDDC Manager appliance is allocated to a specific domain for name resolution according to the region in which it resides:

n The IP address of the SDDC Manager appliance is associated with a fully qualified domain name.
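You can confirm that both records resolve correctly before deployment. The following minimal Python sketch uses only the standard library; the FQDN shown is the SDDC Manager name used as an example in this design.

# Minimal sketch: validate forward (A) and reverse (PTR) DNS registration.
import socket

FQDN = "sfo-vcf01.sfo.rainpole.local"

forward_ip = socket.gethostbyname(FQDN)                 # forward lookup
reverse_name, _, _ = socket.gethostbyaddr(forward_ip)   # reverse lookup

print(f"{FQDN} -> {forward_ip}")
print(f"{forward_ip} -> {reverse_name}")
assert reverse_name.lower() == FQDN.lower(), "Reverse record does not match FQDN"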

Table 2-142. Design Decisions on Name Resolution for SDDC Manager

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-LCM-VCF-011 Configure forward and reverse DNS records for the SDDC Manager appliance, assigning the records to the child domain for the region.

SDDC Manager is accessible by using a fully qualified domain name instead of by using IP addresses only.

You must provide DNS records for the SDDC Manager appliance.

Time Synchronization

Time synchronization provided by the Network Time Protocol (NTP) is important to ensure that all components within the SDDC are synchronized to the same time source. Configure the SDDC Manager virtual appliance with time synchronization using an internal NTP time source.
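To confirm that the internal time source is reachable before deployment, you can send a minimal SNTP query. The following hedged Python sketch uses only the standard library; the NTP server name is a placeholder for your environment.

# Hedged sketch: query an internal NTP server with a minimal SNTP request.
import socket
import struct
import time

NTP_SERVER = "ntp.sfo.rainpole.local"   # hypothetical internal NTP server
NTP_EPOCH_OFFSET = 2208988800           # seconds between the 1900 and 1970 epochs

packet = b"\x1b" + 47 * b"\0"           # leap 0, version 3, mode 3 (client)

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(5)
    sock.sendto(packet, (NTP_SERVER, 123))
    response, _ = sock.recvfrom(48)

# The transmit timestamp (seconds) is the unsigned 32-bit field at offset 40.
transmit_seconds = struct.unpack("!I", response[40:44])[0]
print("NTP server time:", time.ctime(transmit_seconds - NTP_EPOCH_OFFSET))
print("Local time     :", time.ctime())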


Table 2-143. Design Decisions on Time Synchronization for SDDC Manager

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-LCM-VCF-012 Configure time synchronization by using an internal NTP time source for the SDDC Manager appliance in the management domain.

Prevents failures in the deployment of the SDDC Manager appliance.

n An operational NTP service must be available to the environment.

n All firewalls located between the SDDC Manager appliance and the NTP servers must allow NTP traffic on the required network ports.

Information Security and Access Design for SDDC Manager

Information Security and Access Design details the design decisions covering authentication, access controls, and certificate management for SDDC Manager.

Identity Management

Users can log in to SDDC Manager only if they are granted access by using vCenter Single Sign-On.

Table 2-144. Design Decisions on Identity Management for SDDC Manager

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-LCM-VCF-013 Grant global permissions to an Active Directory group, such as ug-vcf-admins, by assigning it the Admin role in SDDC Manager.

By assigning the Admin role to an Active Directory group, you can easily create user accounts that have administrative rights in SDDC Manager.

The Active Directory group must be created before you assign it the Admin role.

SDDC-MGMT-LCM-VCF-014 Grant permissions to an Active Directory group, such as ug-vcf-operators, by assigning it the Operator role in SDDC Manager.

By assigning the Operator role to an Active Directory group, you can easily create user accounts that have operative rights in SDDC Manager.

The Active Directory group must be created before you assign it the Operator role.

Certificate Management

You access all SDDC Manager interfaces over an SSL connection. By default, SDDC Manager uses a certificate that is signed by VMCA. To provide secure access to the SDDC Manager appliance, replace the default certificate with a certificate that is signed by a third-party CA.
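After replacement, you can check the certificate that SDDC Manager presents, including its issuer and signature hash algorithm, to confirm the SHA-2 requirement in the decisions below. The following hedged Python sketch assumes the third-party cryptography library is installed and uses the SDDC Manager FQDN from this design as an example.

# Hedged sketch: report the issuer and signature algorithm of the certificate
# presented on the SDDC Manager HTTPS endpoint.
import ssl
from cryptography import x509

FQDN = "sfo-vcf01.sfo.rainpole.local"

pem = ssl.get_server_certificate((FQDN, 443))
cert = x509.load_pem_x509_certificate(pem.encode())

print("Issuer             :", cert.issuer.rfc4514_string())
print("Signature algorithm:", cert.signature_hash_algorithm.name)
assert cert.signature_hash_algorithm.name in ("sha256", "sha384", "sha512"), \
    "Certificate is not signed with a SHA-2 algorithm"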


Table 2-145. Design Decisions on Certificate Management for SDDC Manager

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-LCM-VCF-015 Replace the default VMCA-signed certificate of the SDDC Manager appliance with a CA-signed certificate.

Ensures that the communication to the externally facing Web user interface and API of SDDC Manager is encrypted.

Replacing the default certificate with a trusted CA-signed certificate from a certificate authority might increase the deployment preparation time because certificate requests are generated and delivered.

SDDC-MGMT-LCM-VCF-016 Use a SHA-2 algorithm or stronger for signed certificates.

The SHA-1 algorithm is considered less secure and has been deprecated.

Not all certificate authorities support SHA-2.

Security and Compliance Design

The Security and Compliance layer contains the Identity and Access Management component, which is required to control access to the SDDC solutions. In this design, the identity and access management function is provided by VMware Workspace ONE Access. To control access to regional SDDC solutions, you deploy a region-specific Workspace ONE Access instance.

Figure 2-22. Security and Compliance in the SDDC

The figure highlights the Security and Compliance layer (security policies, industry regulations, and identity and access management) in the context of the other SDDC layers: cloud automation, virtual infrastructure, physical infrastructure, cloud operations, and business continuity.

n Region-Specific Workspace ONE Access Design

VMware Workspace ONE Access provides identity and access management services to SDDC solutions. In this design, you use a region-specific deployment of Workspace ONE Access to provide identity and access management services to region-specific SDDC solutions, such as NSX-T Data Center.


Region-Specific Workspace ONE Access Design

VMware Workspace ONE Access provides identity and access management services to SDDC solutions. In this design, you use a region-specific deployment of Workspace ONE Access to provide identity and access management services to region-specific SDDC solutions, such as NSX-T Data Center.

n Logical Design for Region-Specific Workspace ONE Access

To provide identity and access management services to regional SDDC solutions, such as NSX-T Data Center, this design uses a standalone Workspace ONE Access node that is deployed in the region-specific management virtual network in each region.

n Deployment Specification for Region-Specific Workspace ONE Access

The configuration design consists of characteristics and decisions that support the logical design. You deploy a standalone Workspace ONE Access node in each region.

n Network Design for Region-Specific Workspace ONE Access

This design uses NSX-T Data Center segments to abstract Workspace ONE Access and its supporting services from the underlying physical infrastructure.

n Identity and Access Management Design for Region-Specific Workspace ONE Access

You integrate the regional SDDC solutions with the region-specific Workspace ONE Access deployments to enable authentication through the identity and access management services.

n Information Security and Access Control Design for Region-Specific Workspace ONE Access

Information security and access design details the design decisions covering authentication, access controls, and certificate management for the region-specific Workspace ONE Access deployment.

Logical Design for Region-Specific Workspace ONE Access

To provide identity and access management services to regional SDDC solutions, such as NSX-T Data Center, this design uses a standalone Workspace ONE Access node that is deployed in the region-specific management virtual network in each region.

Workspace ONE Access provides:

n Directory integration to authenticate users against existing directories such as Active Directory or LDAP.

n Addition of two-factor authentication through integration with third-party software such as RSA SecurID, Entrust, and others.


Figure 2-23. Logical Design of the Region-Specific Workspace ONE Access Deployment

[The figure shows the region-specific Workspace ONE Access virtual appliance in Region A. The appliance exposes a user interface and a REST API, and its identity provider integrates with directory services, such as Active Directory or LDAP, for access. Supporting components include an embedded Postgres database. Supporting infrastructure includes shared storage, DNS, NTP, and SMTP. The instance serves region-specific solutions such as NSX-T Data Center.]


Table 2-146. Region-Specific Workspace ONE Access Logical Components

Single Availability Zone

n A single Workspace ONE Access appliance deployed on the region-specific network segment.

n vSphere HA protects the Workspace ONE Access appliance.

Multiple Availability Zones

n A single Workspace ONE Access appliance deployed on the region-specific network segment.

n vSphere HA protects the Workspace ONE Access appliance.

n A DRS rule specifies that the Workspace ONE Access appliance runs on an ESXi host in Availability Zone 1.

Each region-specific Workspace ONE Access node is integrated with the corresponding regional SDDC components, such as NSX-T Managers. Workspace ONE Access enables you to configure role-based access control (RBAC) using the users and groups synchronized from an enterprise directory.

Supporting Infrastructure

Workspace ONE Access in this design integrates with the following supporting infrastructure:

n NTP for time synchronization

n DNS for name resolution

n Active Directory (or LDAP) directories

Important Workspace ONE Access does not replace Active Directory or LDAP. Workspace ONE Access integrates with Active Directory or LDAP for authentication and solution authorization.

Deployment Specification for Region-Specific Workspace ONE Access

The configuration design consists of characteristics and decisions that support the logical design. You deploy a standalone Workspace ONE Access node in each region.

n Deployment Model for Region-Specific Workspace ONE Access

Workspace ONE Access is distributed as a virtual appliance in OVA format. The Workspace ONE Access appliance includes identity and access management services.

n Directories and Identity Provider Design for Region-Specific Workspace ONE Access

You integrate your enterprise directory with Workspace ONE Access to synchronize users and groups to the Workspace ONE Access identity and access management services. You configure the Workspace ONE Access identity provider connector to perform the synchronization of users and groups from your enterprise directory.

n Branding Design for Region-Specific Workspace ONE Access

You can change the appearance of the Workspace ONE Access browser-based user interface to meet minimal branding guidelines of an organization.

Deployment Model for Region-Specific Workspace ONE Access

Workspace ONE Access is distributed as a virtual appliance in OVA format. The Workspace ONE Access appliance includes identity and access management services.


Deployment Type

You select the deployment type, standard or cluster, according to the design objectives for the availability and the number of users that the system and integrated SDDC solutions must support. Workspace ONE Access is deployed on the first cluster in the management domain.

In this design, you deploy a standard topology of Workspace ONE Access for region-specific SDDC solutions.

Table 2-147. Topology Attributes of the Region-Specific Workspace ONE Deployment

Deployment Type: Standard

Number of Nodes: 1

User Scale: 1,000 users

Description: You deploy a standalone Workspace ONE Access instance on the first cluster in the management domain in each region.

Table 2-148. Design Decisions on the Deployment of Region-Specific Workspace ONE Access

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-SEC-IAM-001 Deploy a standalone Workspace ONE Access instance on the first cluster in the management domain in the region.

Each instance provides an identity and access management service to the regional SDDC solutions, such as NSX-T Data Center.

None

SDDC-MGMT-SEC-IAM-002 Use the OVA file to deploy the standalone Workspace ONE Access instance in each region, using the standard deployment type to provide identity and access management services to regional SDDC solutions.

Deploying the standard configuration, which uses a single-node appliance architecture, satisfies the design objectives in scope for this design while allowing Workspace ONE Access to scale to a higher number of users consuming NSX-T Data Center.

The region-specific Workspace ONE Access instance is not managed by vRealize Suite Lifecycle Manager.

Availability is managed by vSphere High Availability only.

SDDC-MGMT-SEC-IAM-003 Protect each Workspace ONE Access node by using vSphere High Availability.

Supports the availability objectives for Workspace ONE Access without a required manual intervention during a failure event.

The standalone Workspace ONE Access instance for region-specific SDDC solutions becomes unavailable during a vSphere HA failover.


SDDC-MGMT-SEC-IAM-004 When using two availability zones, add the Workspace ONE Access appliance to the primary availability zone VM group, sfo-m01-cl01_primary-az-vmgroup.

Ensures that, by default, the Workspace ONE Access appliance is powered on within the primary availability zone hosts group.

If Workspace ONE Access is deployed after the creation of the stretched clusters for management domain availability zones, the VM group for the primary availability zone virtual machines must be updated to include the Workspace ONE Access appliance.

SDDC-MGMT-SEC-IAM-005 Place each region-specific Workspace ONE Access node in a dedicated VM folder for its region, that is, sfo-m01-fd-wsa for Region A.

Organizes the region-specific Workspace ONE Access nodes in the management domain inventory.

None
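
If Workspace ONE Access is deployed after the stretched cluster is created, the VM group update described in SDDC-MGMT-SEC-IAM-004 can be scripted. The following Python sketch uses the pyVmomi library and the example object names from this design (cluster sfo-m01-cl01, VM group sfo-m01-cl01_primary-az-vmgroup, appliance sfo-wsa01); treat it as an outline and validate it against your vCenter Server environment before use.

# Illustrative sketch: add the Workspace ONE Access VM to the primary availability
# zone DRS VM group. Requires pyVmomi; host names and credentials are example values.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def find_by_name(content, vim_type, name):
    # Container views are released when the session is disconnected.
    view = content.viewManager.CreateContainerView(content.rootFolder, [vim_type], True)
    return next(obj for obj in view.view if obj.name == name)

ctx = ssl._create_unverified_context()   # use proper certificate validation in production
si = SmartConnect(host="sfo-m01-vc01.sfo.rainpole.io",
                  user="administrator@vsphere.local",
                  pwd="example-password", sslContext=ctx)
try:
    content = si.RetrieveContent()
    cluster = find_by_name(content, vim.ClusterComputeResource, "sfo-m01-cl01")
    wsa_vm = find_by_name(content, vim.VirtualMachine, "sfo-wsa01")

    # Locate the existing primary availability zone VM group on the cluster.
    group = next(g for g in cluster.configurationEx.group
                 if isinstance(g, vim.cluster.VmGroup)
                 and g.name == "sfo-m01-cl01_primary-az-vmgroup")

    vms = list(group.vm) if group.vm else []
    if wsa_vm not in vms:
        group.vm = vms + [wsa_vm]
        spec = vim.cluster.ConfigSpecEx(groupSpec=[
            vim.cluster.GroupSpec(info=group, operation="edit")])
        cluster.ReconfigureComputeResource_Task(spec=spec, modify=True)
        # Monitor the returned task with your preferred task helper.
finally:
    Disconnect(si)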

Sizing Compute and Storage Resources

A Workspace ONE Access standard deployment requires certain CPU, memory, and storage resources.

Table 2-149. CPU, Memory, and Storage Resources for the Region-Specific Workspace ONE Access Standard Deployment

Attribute Value

Number of appliances 1

CPU 2 vCPUs

Memory 6 GB

Storage 4.8 GB (thin provisioned)

60.2 GB (thick provisioned)

Directories and Identity Provider Design for Region-Specific Workspace ONE Access

You integrate your enterprise directory with Workspace ONE Access to synchronize users and groups to the Workspace ONE Access identity and access management services. You configure the Workspace ONE Access identity provider connector to perform the synchronization of users and groups from your enterprise directory.

Directories

Workspace ONE Access has its own concept of a directory, corresponding to Active Directory or LDAP directories in your environment. This internal Workspace ONE Access directory uses attributes to define users and groups. You first create one or more directories in the identity and access management service, then you synchronize each directory with your corresponding Active Directory or LDAP directory. Workspace ONE Access integrates with the following types of directories:


Table 2-150. Supported External Directories in Workspace ONE Access

Directory Type Description

Active Directory over LDAP You create this directory type if you plan to connect to a single Active Directory domain environment. The connector binds to Active Directory by using simple bind authentication. If you have more than one domain in a forest, you create a directory for each domain.

Active Directory over Integrated Windows Authentication You create this directory type if you plan to connect to a multi-domain or multi-forest Active Directory environment. The connector binds to Active Directory by using Integrated Windows Authentication. The type and number of directories that you create vary for the Active Directory environment, such as single-domain or multi-domain, and for the type of trust used between domains. In most environments, you create a single directory.

LDAP directory Create the LDAP directory to integrate your enterprise LDAP directory with Workspace ONE Access. You can integrate only a single-domain LDAP directory. Workspace ONE Access supports only those OpenLDAP implementations that support paged search queries.

To integrate your enterprise directory with Workspace ONE Access, you must:

n Specify the attributes for users required in the Workspace ONE Access service.

n Add a directory in Workspace ONE Access for the directory type for your organization.

n Map user attributes between your enterprise directory and Workspace ONE Access.

n Specify and synchronize directory users and groups.

n Establish a synchronization schedule or synchronize on-demand.


Table 2-151. Design Decisions on Directories for Region-Specific Workspace ONE Access

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-SEC-IAM-006 Configure a directory service connection, rainpole.io, for the Workspace ONE Access instance in each region.

You can integrate your corporate directory with Workspace ONE Access to synchronize users and groups to the Workspace ONE Access identity and access management services.

None

SDDC-MGMT-SEC-IAM-007 Use Active Directory with Integrated Windows Authentication as the Directory Service connection option.

Integrated Windows Authentication supports establishing trust relationships in a multi-domain or multi-forest Active Directory environment.

The Workspace ONE Access nodes must be joined to the Active Directory domain.

For information about integrating with Active Directory environments with different forests and domain models, see Directory Integration with Workspace ONE Access.

SDDC-MGMT-SEC-IAM-008 Configure the directory to synchronize only groups required for the integrated SDDC solutions.

Limits the number of replicated groups required for each product.

Reduces the replication interval for group information.

See Information Security and Access Control Design for Region-Specific Workspace ONE Access and Information Security and Access Control Design for NSX-T Data Center for the Management Domain.

You must manage the groups from your enterprise directory selected for synchronization to the Workspace ONE Access directory.

SDDC-MGMT-SEC-IAM-009 Enable the synchronization of group members to the directory when a group is added to the Workspace ONE Access directory.

When enabled, members of the groups are synchronized to Workspace ONE Access when groups are added from the corporate directory. When disabled, group names are synchronized to the directory, but the members of the group are not synchronized until the group is entitled to an application or the group name is added to an access policy.

None


SDDC-MGMT-SEC-IAM-010 Enable Workspace ONE Access to synchronize nested group members by default.

Allows Workspace ONE Access to update and cache the membership of groups without querying your corporate directory.

Changes to group membership are not reflected until the next synchronization event.

SDDC-MGMT-SEC-IAM-011 Add a filter to the directory settings to exclude users from the directory replication.

Limits the number of replicated users for each Workspace ONE Access instance within the maximum scale. For each region-specific Workspace ONE Access instance, the maximum scale is 1,000 user accounts.

To ensure that replicated user accounts are managed within the maximums, you must define a filtering schema that works for your organization based on your directory attributes.


SDDC-MGMT-SEC-IAM-012 Configure the mapped attributes included when a user is added to the Workspace ONE Access directory.

You can configure the minimum required and extended user attributes that are synchronized for directory user accounts so that Workspace ONE Access can be used as an authentication source for SDDC solutions.

User accounts in your organization must have the following required attributes mapped:

n firstname, for example, givenname for Active Directory

n lastName, for example, sn for Active Directory

n email, for example, mail for Active Directory

n userName, for example, sAMAccountName for Active Directory

n If you require users to sign in with an alternate unique identifier, for example, userPrincipalName, you must map the attribute and update the identity and access management preferences.

SDDC-MGMT-SEC-IAM-013 Configure the directory synchronization frequency to a recurring schedule, for example, every 15 minutes.

Ensures that any changes to group memberships in the corporate directory are available for integrated SDDC solutions in a timely manner.

Schedule the synchronization interval to be longer than the time to synchronize from the corporate directory. If users and groups are being synchronized to Workspace ONE Access when the next synchronization is scheduled, the new synchronization starts immediately after the end of the previous iteration. With this schedule, the process is continuous.
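
Before you apply the user exclusion filter and attribute mappings described in SDDC-MGMT-SEC-IAM-011 and SDDC-MGMT-SEC-IAM-012, you can preview their effect directly against Active Directory. The following Python sketch uses the third-party ldap3 package; the bind account, base DN, domain controller, and filter are placeholder values that you must adapt to your directory and to the filtering schema you define.

# Illustrative sketch: preview which user objects an LDAP filter would return and
# confirm that the attributes Workspace ONE Access maps (givenName, sn, mail,
# sAMAccountName) are populated. Requires the third-party "ldap3" package.
from ldap3 import Server, Connection, SUBTREE

BASE_DN = "dc=rainpole,dc=io"                          # example base DN
# Example filter: user objects, excluding disabled accounts.
LDAP_FILTER = ("(&(objectClass=user)(objectCategory=person)"
               "(!(userAccountControl:1.2.840.113556.1.4.803:=2)))")
REQUIRED_ATTRS = ["givenName", "sn", "mail", "sAMAccountName"]

server = Server("dc01.rainpole.io", use_ssl=True)      # example domain controller
conn = Connection(server, user="rainpole\\svc-wsa-ad",
                  password="example-password", auto_bind=True)

conn.search(BASE_DN, LDAP_FILTER, search_scope=SUBTREE, attributes=REQUIRED_ATTRS)
print(f"{len(conn.entries)} user objects match the filter")
for entry in conn.entries:
    attrs = entry.entry_attributes_as_dict
    missing = [a for a in REQUIRED_ATTRS if not attrs.get(a)]
    if missing:
        print(f"{entry.entry_dn} is missing required attributes: {missing}")
conn.unbind()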

Identity Providers and Connectors

Workspace ONE Access synchronizes with corporate directories, such as the Active Directory domain rainpole.io, by using the connector component. Any users and groups that require access to the SDDC solutions are synchronized into Workspace ONE Access. In addition, the connector is the default identity provider and authenticates users to the identity and access management service. Authentication uses your corporate directory, but searches are made against the local Workspace ONE Access directory mirror.


Branding Design for Region-Specific Workspace ONE Access

You can change the appearance of the Workspace ONE Access browser-based user interface to meet minimal branding guidelines of an organization.

You can change the logo, the background and text colors, or the information in the header and footer.

Table 2-152. Design Decisions on Branding for Region-Specific Workspace ONE Access

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-SEC-IAM-014 Apply branding customizations for the Workspace ONE Access user interface that is presented to users when they log in to the integrated SDDC solutions.

Provides minimal corporate branding to the user interface consumed by end users.

n Company name

n Product

n Favorite icon

n Logo image

n Background image

n Colors

You must provide an icon, a logo, and a background image that meet the minimum size and resolution requirements.

Network Design for Region-Specific Workspace ONE Access

This design uses NSX-T Data Center segments to abstract Workspace ONE Access and its supporting services from the underlying physical infrastructure.

Virtual Network Segment

The network segment design consists of characteristics and decisions that support the placement of the Workspace ONE Access appliance in the management domain.

This network design has the following features:

n A Workspace ONE Access node for the region-specific SDDC solutions is deployed on a region-specific virtual network segment in each region.

n All Workspace ONE Access components have routed access to the VLAN-backed management network through the NSX-T Data Center Tier-0 Gateway.

n Routing to the VLAN-backed management network and other external networks is dynamic and is based on the Border Gateway Protocol (BGP).


Figure 2-24. Network Design of the Region-Specific Workspace ONE Access Deployment

[The figure shows the region-specific Workspace ONE Access instance in Region A (SFO, San Francisco). The appliance sfo-wsa01 is connected to the region-specific segment sfo-m01-seg01 (192.168.31.0/24), which connects through the NSX-T Tier-1 and Tier-0 gateways and the physical upstream router to the VLAN-backed management network sfo01-m01-cl01-vds01-pg-mgmt (172.16.11.0/24). Data center users and Active Directory reach the instance over the Internet or enterprise network. The management domain vCenter Server sfo-m01-vc01.sfo.rainpole.io and the workload domain vCenter Server sfo-w01-vc01.sfo.rainpole.io reside on the management VLAN.]


Table 2-153. Design Decisions on the Virtual Network Segments for the Region-Specific Workspace ONE Access Nodes

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-SEC-IAM-015 Place the Workspace ONE Access nodes for regional SDDC solutions on the existing region-specific virtual network segments, that is, sfo-m01-seg01 for Region A.

Authentication and authorization can sustain operations if there is a service interruption in the cross-region network.

Ensures a consistent deployment model for management applications.

An NSX-T Data Center implementation is required to support this network configuration.

IP Addressing Scheme

Allocate a statically assigned IP address and host name from the region-specific network segment to the region-specific Workspace ONE Access appliance.

Table 2-154. Design Decisions on the IP Addressing for the Region-Specific Workspace ONE Access Nodes

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-SEC-IAM-016 Allocate a statically assigned IP address and host name to the regional Workspace ONE Access appliance in the management domain.

Using statically assigned IP addresses ensures stability across the SDDC and makes the environment simpler to maintain and easier to track.

Requires precise IP address management.
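
As a quick sanity check for the static allocation, the following Python sketch verifies that a planned appliance address belongs to the region-specific segment used in this design (192.168.31.0/24 for sfo-m01-seg01 in Region A). The address shown is an example value only.

# Minimal sketch: confirm the statically assigned address is inside the
# region-specific segment before recording it in your IP address management tool.
import ipaddress

segment = ipaddress.ip_network("192.168.31.0/24")    # sfo-m01-seg01 (Region A)
wsa_address = ipaddress.ip_address("192.168.31.60")  # example allocation

if wsa_address not in segment:
    raise ValueError(f"{wsa_address} is not part of {segment}")
print(f"{wsa_address} is valid for {segment}")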

Name Resolution

The name resolution design consists of characteristics and decisions that support name resolution of the Workspace ONE Access appliance in the management domain.

The IP addresses of the region-specific Workspace ONE Access nodes are associated with a fully qualified name whose suffix is set to the child domain, that is, sfo.rainpole.io for Region A.

Note The design uses an Active Directory forest with two regional Active Directory child domains, so the examples use a hierarchical DNS name space. However, the design supports the use of a flat DNS name space.

Table 2-155. Design Decisions on Name Resolution for Region-Specific Workspace ONE Access

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-SEC-IAM-017 Configure forward and reverse DNS records for each Workspace ONE Access appliance IP address for each regional instance.

Workspace ONE Access is accessible by using a fully qualified domain name instead of by using IP addresses only.

You must provide DNS records for each Workspace ONE Access appliance IP address.
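
You can verify the forward and reverse records required by SDDC-MGMT-SEC-IAM-017 with a short script. The following Python sketch uses only the standard library; the host name and address are example values for Region A.

# Minimal sketch: verify forward (A) and reverse (PTR) resolution for the
# region-specific Workspace ONE Access appliance. Names are example values.
import socket

FQDN = "sfo-wsa01.sfo.rainpole.io"     # example appliance FQDN
EXPECTED_IP = "192.168.31.60"          # example statically assigned address

forward = socket.gethostbyname(FQDN)
assert forward == EXPECTED_IP, f"Forward lookup returned {forward}"

reverse_name, _, _ = socket.gethostbyaddr(EXPECTED_IP)
assert reverse_name.lower() == FQDN, f"Reverse lookup returned {reverse_name}"

print(f"{FQDN} <-> {EXPECTED_IP} forward and reverse records are consistent")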


Time Synchronization

The time synchronization design consists of characteristics and decisions that support network time protocol configuration of the Workspace ONE Access appliance in the management domain.

Table 2-156. Design Decisions on Time Synchronization for Region-Specific Workspace ONE Access

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-SEC-IAM-018 Configure NTP for each Workspace ONE Access appliance.

Workspace ONE Access depends on time synchronization.

All firewalls located between the Workspace ONE Access nodes and the NTP servers must allow NTP traffic.
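
To confirm that NTP traffic is permitted between the appliance network and your time sources, you can issue a simple client query. The following Python sketch builds a minimal SNTP request using only the standard library; the NTP server name is an example value.

# Minimal sketch: send an SNTP (mode 3) query and print the server time to
# confirm that UDP/123 is reachable from the management network.
import socket
import struct
import time

NTP_SERVER = "ntp.sfo.rainpole.io"     # example regional NTP server
NTP_EPOCH_OFFSET = 2208988800          # seconds between 1900-01-01 and 1970-01-01

# First byte 0x1B = leap indicator 0, version 3, mode 3 (client); rest zeroed.
packet = b"\x1b" + 47 * b"\0"

with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    sock.settimeout(5)
    sock.sendto(packet, (NTP_SERVER, 123))
    response, _ = sock.recvfrom(48)

# The transmit timestamp seconds field starts at byte offset 40 of the response.
transmit_seconds = struct.unpack("!I", response[40:44])[0] - NTP_EPOCH_OFFSET
print("NTP server time:", time.ctime(transmit_seconds))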

Identity and Access Management Design for Region-Specific Workspace ONE Access

You integrate the regional SDDC solutions with the region-specific Workspace ONE Access deployments to enable authentication through the identity and access management services.

After the integration, you can configure information security and access control for the integrated SDDC products.

Table 2-157. Design Decisions on Integrations for Region-Specific Workspace ONE Access

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-SEC-IAM-019 Configure the region-specific Workspace ONE Access instance as the authentication provider for the NSX-T Managers in the region.

Enables authentication through Workspace ONE Access identity and access management services for regional NSX-T Managers.

Allows users to authenticate to NSX-T Data Center in the event of connectivity loss between regions.

To log in to NSX-T Data Center with a local account, such as admin, after integration with Workspace ONE Access, you must use the URL https://FQDN/login.jsp?local=true, that is, https://sfo-m01-nsx01.sfo.rainpole.io/login.jsp?local=true.
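
Enabling Workspace ONE Access (VMware Identity Manager) as the authentication provider for NSX-T Data Center can also be performed through the NSX Manager API. The following Python sketch targets the node API endpoint /api/v1/node/aaa/providers/vidm; the payload field names follow the NSX-T VMware Identity Manager integration API, but the host names, thumbprint, OAuth client ID, and secret are example values, and you should verify the endpoint and schema against the NSX-T API reference for your version after registering NSX-T as an OAuth client in Workspace ONE Access.

# Illustrative sketch: point an NSX Manager at the region-specific Workspace ONE
# Access instance. Verify the field names against your NSX-T version; all values
# shown are examples.
import requests

NSX_MANAGER = "sfo-m01-nsx01.sfo.rainpole.io"
WSA_HOST = "sfo-wsa01.sfo.rainpole.io"

payload = {
    "vidm_enable": True,
    "lb_enable": False,                          # no load balancer for a standard node
    "host_name": WSA_HOST,                       # Workspace ONE Access FQDN
    "thumbprint": "example-sha256-thumbprint",   # Workspace ONE Access certificate thumbprint
    "client_id": "example-oauth-client-id",      # OAuth client created in Workspace ONE Access
    "client_secret": "example-oauth-client-secret",
    "node_host_name": NSX_MANAGER,               # FQDN that users enter to reach NSX-T
}

response = requests.put(
    f"https://{NSX_MANAGER}/api/v1/node/aaa/providers/vidm",
    json=payload,
    auth=("admin", "example-password"),
    verify=False,                                # use your CA bundle in production
)
response.raise_for_status()
print(response.json())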

Information Security and Access Control Design for Region-Specific Workspace ONE Access

Information security and access design details the design decisions covering authentication, access controls, and certificate management for the region-specific Workspace ONE Access deployment.

Identity Management Design

You manage access to your Workspace ONE Access deployments by assigning users and groups to Workspace ONE Access roles.

In Workspace ONE Access, you can assign users three types of role-based access.


Table 2-158. Workspace ONE Access Roles and Example Enterprise Groups

Role Description Enterprise Group

Super Admins A role with the privileges to administer all Workspace ONE Access services and settings.

rainpole.io\ug-wsa-admins

Directory Admins A role with the privileges to administer Workspace ONE Access users, groups, and directory management.

rainpole.io\ug-wsa-directory-admins

ReadOnly Admins A role with read-only privileges to Workspace ONE Access.

rainpole.io\ug-wsa-read-only

For more information about the Workspace ONE Access roles and their permissions, see the Workspace ONE Access documentation.

As the cloud administrator for Workspace ONE Access, you establish an integration with your corporate directories, which allows you to use your corporate identity source for authentication. You can also set up multi-factor authentication as part of the access policy settings.

The region-specific Workspace ONE Access deployment allows you to control authorization to your regional SDDC solutions, such as NSX-T Data Center, by assigning roles to your organization directory groups, such as Active Directory security groups.

Assigning roles to groups is more efficient than assigning roles to individual users. As a cloud administrator, you determine the members that make up your groups and the roles they are assigned. Groups in the connected directories are available for use in Workspace ONE Access. In this design, enterprise groups are used to assign roles in Workspace ONE Access.

Table 2-159. Design Decisions on Identity Management for Region-Specific Workspace ONE Access

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-SEC-IAM-020 Assign roles to groups, synchronized from your corporate identity source for Workspace ONE Access.

Provides access management and administration of Workspace ONE Access by using corporate security group membership.

You must define and manage security groups, group membership, and security controls in your corporate identity source for Workspace ONE Access administrative consumption.

SDDC-MGMT-SEC-IAM-021 Create a security group in your organization directory services for the Super Admin role, rainpole.io\ug-wsa-admins, and synchronize the group in the Workspace ONE Access configuration.

Streamlines the management of Workspace ONE Access roles to users.

You must create the security group outside of the SDDC stack.


SDDC-MGMT-SEC-IAM-022 Assign the enterprise group for super administrators, rainpole.io\ug-wsa-admins, the Super Admins Workspace ONE Access role.

Provides the following access control features:

n Access to Workspace ONE Access services is granted to a managed set of individuals that are members of the security group.

n Improved accountability and tracking of access to Workspace ONE Access.

You must maintain the life cycle and availability of the security group outside of the SDDC stack.

SDDC-MGMT-SEC-IAM-023 Create a security group in your organization directory services for the Directory Admin role, rainpole.io\ug-wsa-directory-admins, and synchronize the group in the Workspace ONE Access configuration.

Streamlines the management of Workspace ONE Access roles to users.

You must create the security group outside of the SDDC stack.

SDDC-MGMT-SEC-IAM-024 Assign the enterprise group for directory administrator users, rainpole.io\ug-wsa-directory-admins, the Directory Admins Workspace ONE Access role.

Provides the following access control features:

n Access to Workspace ONE Access services is granted to a managed set of individuals that are members of the security group.

n Improved accountability and tracking of access to Workspace ONE Access.

You must maintain the life cycle and availability of the security group outside of the SDDC stack.


SDDC-MGMT-SEC-IAM-025 Create a security group in your organization directory services for the ReadOnly Admin role, rainpole.io\ug-wsa-read-only, and synchronize the group in the Workspace ONE Access configuration.

Streamlines the management of Workspace ONE Access roles to users.

You must create the security group outside of the SDDC stack.

SDDC-MGMT-SEC-IAM-026 Assign the enterprise group for read-only users, rainpole.io\ug-wsa-read-only, the ReadOnly Admin Workspace ONE Access role.

Provides the following access control features:

n Access to Workspace ONE Access services is granted to a managed set of individuals that are members of the security group.

n Improved accountability and tracking of access to Workspace ONE Access.

You must maintain the life cycle and availability of the security group outside of the SDDC stack.

Note In an Active Directory forest, consider using a security group with a universal scope. Add security groups with a global scope that include service accounts and users from the domains in the Active Directory forest.

Password Management Design

The password management design consists of characteristics and decisions that support configuring user security policies of the region-specific Workspace ONE Access nodes in the management domain.


Table 2-160. Design Decisions on Password Management for Region-Specific Workspace ONE Access

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-SEC-IAM-027 Rotate the appliance root user password on a schedule after deployment.

The password for the root user account expires 60 days after the initial deployment.

You must manage the password rotation schedule for the root user account in accordance with your corporate policies and regulatory standards, as applicable.

You must manage the password rotation schedule on each region-specific Workspace ONE Access instance.

SDDC-MGMT-SEC-IAM-028 Rotate the appliance sshuser user password on a schedule after deployment.

The password for the appliance sshuser user account expires 60 days after the initial deployment.

You must manage the password rotation schedule for the appliance sshuser user account in accordance with your corporate policies and regulatory standards, as applicable.

You must manage the password rotation schedule on each region-specific Workspace ONE Access instance.


SDDC-MGMT-SEC-IAM-029 Rotate the admin application user password on a schedule after deployment.

The password for the default administrator application user account does not expire after the initial deployment.

You must manage the password rotation schedule for the admin application user account in accordance with your corporate policies and regulatory standards, as applicable.

You must manage the password rotation schedule on each region-specific Workspace ONE Access instance.

You must use the API to manage the Workspace ONE Access local directory user password changes.

SDDC-MGMT-SEC-IAM-030 Configure a password policy for the Workspace ONE Access local directory admin user.

You can set a policy for the Workspace ONE Access local directory users that addresses your corporate policies and regulatory standards.

The password policy is applicable only to the local directory users and does not impact your corporate directory.

You must set the policy in accordance with your corporate policies and regulatory standards, as applicable.

You must apply the password policy on each region-specific Workspace ONE Access instance.
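
Because the root and sshuser passwords expire 60 days after the initial deployment, you can track the remaining password lifetime as part of your rotation schedule. The following Python sketch uses the third-party paramiko package and assumes SSH access with the sshuser account, that sshuser can run chage through sudo, and that the appliance provides the standard Linux chage utility; the host name and credentials are example values.

# Illustrative sketch: report password aging for the appliance local accounts so
# that rotation can be scheduled before the 60-day expiry. Requires "paramiko".
import paramiko

WSA_HOST = "sfo-wsa01.sfo.rainpole.io"   # example appliance FQDN
SSH_USER = "sshuser"
SSH_PASSWORD = "example-password"

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())  # pin host keys in production
client.connect(WSA_HOST, username=SSH_USER, password=SSH_PASSWORD)
try:
    for account in ("root", "sshuser"):
        # chage -l prints the password aging information, including the expiry date.
        _, stdout, _ = client.exec_command(f"sudo chage -l {account}")
        print(f"--- {account}@{WSA_HOST} ---")
        print(stdout.read().decode())
finally:
    client.close()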

Certificate Design

The certificate design consists of characteristics and decisions that support configuring signed certificates of the Workspace ONE Access appliance in the management domain.

The Workspace ONE Access user interface and API endpoint use an HTTPS connection. By default, Workspace ONE Access uses a self-signed certificate. To provide secure access to the Workspace ONE Access user interface and API, replace the default self-signed certificates with a CA-signed certificate.


Table 2-161. Design Decisions on Certificates for Region-Specific Workspace ONE Access

Decision ID Design Decision Design Justification Design Implication

SDDC-MGMT-SEC-IAM-031 Replace the default self-signed certificates with a Certificate Authority-signed certificate during the deployment.

Ensures that all communications to the externally facing Workspace ONE Access browser-based UI, API, and between the components are encrypted.

Replacing the default certificates with trusted CA-signed certificates from a certificate authority increases the deployment preparation time as certificate requests are generated and delivered.

You must manage the life cycle of the certificate replacement.

You must use a multi-SAN certificate for the cross-region Workspace ONE Access cluster instance.

SDDC-MGMT-SEC-IAM-032 Import the certificate for the Root Certificate Authority to each Workspace ONE Access instance.

Ensures that the certificate authority is trusted by each Workspace ONE Access instance.

None

SDDC-MGMT-SEC-IAM-033 Use a SHA-2 or higher algorithm when signing certificates.

The SHA-1 algorithm is considered less secure and has been deprecated.

Not all certificate authorities support SHA-2.
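
After you replace the certificate, you can confirm that the appliance presents the CA-signed certificate and that it uses a SHA-2 family signature. The following Python sketch uses the standard library together with the third-party cryptography package; the host name is an example value.

# Minimal sketch: retrieve the certificate presented by Workspace ONE Access and
# report its issuer and signature hash algorithm (expect a SHA-2 variant such as
# sha256). Requires the third-party "cryptography" package.
import ssl
from cryptography import x509

WSA_HOST = "sfo-wsa01.sfo.rainpole.io"   # example appliance FQDN

pem = ssl.get_server_certificate((WSA_HOST, 443))
cert = x509.load_pem_x509_certificate(pem.encode())

print("Subject:  ", cert.subject.rfc4514_string())
print("Issuer:   ", cert.issuer.rfc4514_string())
print("Signature:", cert.signature_hash_algorithm.name)  # for example, "sha256"
print("Expires:  ", cert.not_valid_after)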
