
VMware vCloud Implementation Example
Private Enterprise vCloud

TECHNICAL WHITE PAPER


Table of Contents

1. Purpose and Overview
   1.1 Executive Summary
   1.2 Business Requirements
   1.3 Use Cases
   1.4 Document Purpose and Assumptions
2. VMware vCloud Architecture Design Overview
   2.1 vCloud Definition
   2.2 vCloud Component Design Overview
3. vSphere Architecture Design Overview
   3.1 High Level Architecture
   3.2 Site Considerations
   3.3 Design Specifications
4. vSphere Architecture Design – Management Cluster
   4.1 Compute Logical Design
      4.1.1. Datacenters
      4.1.2. vSphere Clusters
      4.1.3. Host Logical Design
   4.2 Network Logical Design
   4.3 Shared Storage Logical Design
   4.4 Management Components
   4.5 Management Component Resiliency Considerations
5. vSphere Architecture Design – Resource Groups
   5.1 Compute Logical Design
      5.1.1. Datacenters
      5.1.2. vSphere Clusters
      5.1.3. Host Logical Design
   5.2 Network Logical Design
   5.3 Shared Storage Logical Design
   5.4 Resource Group Datastore Considerations
      5.4.1. Datastore Sizing Estimation
6. vCloud Provider Design
   6.1 Abstractions and VMware vCloud Director Constructs
   6.2 Provider vDCs
   6.3 Organizations
   6.4 Networks


      6.4.1. External Networks
      6.4.2. Network Pools
      6.4.3. Networking Use Cases
   6.5 Catalogs
7. vCloud Security
   7.1 vSphere Security
      7.1.1. Host Security
      7.1.2. Network Security
      7.1.3. vCenter Security
   7.2 VMware vCloud Director Security
8. vCloud Management
   8.1 vSphere Host Setup Standardization
   8.2 VMware vCloud Director Logging
   8.3 vSphere Host Logging
   8.4 VMware vCloud Director Monitoring
Appendix A – Bill of Materials


1. Purpose and Overview

1.1 Executive Summary

ACME Enterprise will be implementing an “internal next generation datacenter” private cloud built on VMware technologies.

This document defines the vCloud architecture and provides detailed descriptions and specifications of the architectural components and relationships for the initial implementation. This design is based on a combination of VMware best practices and specific business requirements and goals.

1.2 Business Requirements

The vCloud for ACME Enterprise provides the following:

• Compute capacity to support 300 virtual machines running predefined workloads.

• Secure multi-tenancy, permitting more than one organization to share compute resources. In a private cloud, organizations typically represent different departments, and each department may have several environments such as development or production.

• A self-service portal where Infrastructure as a Service (IaaS) can be consumed from a catalog of predefined applications (vApp Templates).

• A chargeback mechanism, so resource consumption can be metered and the associated cost provided back to the appropriate organization or business unit.

Refer to the corresponding Service Definition for further details.

1.3 Use Cases

The target use cases for the vCloud include the following workloads:

• Development and test

• Pre-production

• Demos

• Training

• Tier 2 and Tier 3 applications

1.4 Document Purpose and Assumptions

This vCloud Architecture Design document is intended to serve as a reference for ACME Enterprise architects, and assumes they have familiarity with VMware products, including VMware vSphere, vCenter, and VMware vCloud Director.

The vCloud architecture detailed in this document is organized into these sections:

SECTION: DESCRIPTION

vCloud Definition: Inventory of components that comprise the cloud solution

vSphere – Management: vSphere and vCenter components that support running workloads


vSphere – Resources: Resources for cloud consumption; design organized by compute, networking, and shared storage; detailed through logical and physical design specifications and considerations

Management and Security: Considerations as they apply to vSphere and VMware vCloud Director management components

vCloud Logical Design: VMware vCloud Director objects and configuration; relationship of VMware vCloud Director to vSphere objects

This document is not intended as a substitute for detailed product documentation. Refer to the installation and administration guides for the appropriate product as necessary for further information.

2. VMware vCloud Architecture Design Overview

2.1 vCloud Definition

The VMware vCloud comprises the following components:

vCLOUD COMPONENT: DESCRIPTION

VMware vCloud Director: Abstracts and coordinates underlying resources. Includes:
• VMware vCloud Director Server (1 or more instances, each installed on a Linux VM and referred to as a “cell”)
• VMware vCloud Director Database (1 instance per clustered set of VMware vCloud Director cells)
• vSphere compute, network, and storage resources

VMware vSphere: Foundation of underlying cloud resources. Includes:
• VMware ESXi hosts (3 or more instances for the Management cluster and 3 or more instances for the Resource cluster, also referred to as the Compute cluster)
• vCenter Server (1 instance managing a management cluster of hosts, and 1 or more instances managing one or more resource groups of hosts reserved for vCloud consumption; in a Proof of Concept installation, 1 instance of vCenter Server managing both the management cluster and a single resource group is allowable)
• vCenter Server Database (1 instance per vCenter Server)


VMware vShield: Provides network security services, including NAT and firewall. Includes:
• vShield Edge (deployed automatically as virtual appliances on hosts by VMware vCloud Director)
• vShield Manager (1 instance per vCenter Server in the cloud resource groups)

VMware vCenter Chargeback: Provides resource metering and chargeback models. Includes:
• vCenter Chargeback Server (1 instance)
• Chargeback Data Collector (1 instance)
• vCloud Data Collector (1 instance)
• VSM Data Collector (1 instance)

2.2 vCloud Component Design Overview

The components comprising the vCloud are detailed in this document in the following sections:

DESIGN SECTION: vCLOUD COMPONENT(S)

vSphere Architecture – Management Cluster:
• vCenter Server and vCenter Database
• vCenter cluster and ESXi hosts
• vCenter Chargeback Server and Database
• vCenter Chargeback Collectors
• vShield Manager and vShield Edge(s)
• VMware vCloud Director Cell(s) and Database (Oracle)

vSphere Architecture – Resource Group:
• vCenter Server(s) and vCenter Database(s)
• vCenter cluster(s) and ESXi hosts

3. vSphere Architecture Design Overview

3.1 High Level Architecture

vSphere resources are organized and separated into:

• A management cluster containing all core components and services needed to run the cloud.

• One or more resource groups or “compute clusters” that represent dedicated resources for cloud consumption. Each resource group is a cluster of ESXi hosts managed by a vCenter Server, and is under the control of VMware vCloud Director. Multiple resource groups can be managed by the same VMware vCloud Director.

Reasons for organizing and separating vSphere resources along these lines are:

• Facilitating quicker troubleshooting and problem resolution. Management components are strictly contained in a relatively small and manageable management cluster; if they instead ran across a large set of host clusters, tracking down and managing these workloads would be time-consuming.


• Management components are separate from the resources they are managing.

• Resources allocated for cloud use have little overhead reserved. For example, cloud resource groups would not host vCenter VMs.

• Resource groups can be consistently and transparently managed, carved up, and scaled horizontally.

The high level logical architecture is depicted as follows.

Figure 1 – vCloud Logical Architecture Overview (diagram: a management cluster running the VCD, vShield Manager, vCenter (management cluster and resource group), MSSQL, Oracle 11g, AD/DNS, Chargeback, vCenter database, and optional logging/monitoring VMs, alongside vSphere 4.1 resource group compute clusters backing Org vDC #1 and a future Org vDC #2, all on SAN-attached shared storage)

The following diagram depicts the physical design corresponding to the logical architecture previously described.

Figure 2 – vCloud Physical Design Overview (diagram: redundant 10 Gbps switch and fabric infrastructure connecting the three-host management and DB cluster and the six-host Provider vDC clusters (vCenter01 – Cluster01/Cluster02) to an FC SAN, each cluster with N+1 HA, resource pools, datastores, and port groups)


3.2 Site Considerations

The management cluster and the resource group (compute cluster) reside within a single physical datacenter. Servers in both clusters are striped across the server chassis, which preserves cluster availability (HA) should a single chassis go down.

Neither secondary nor DR sites are in scope for this project.

3.3 Design Specifications

The architecture is described by a logical design that is independent of hardware-specific details. The focus is on components, their relationships, and quantities.

Additional details are found in Appendix A.

4. vSphere Architecture Design – Management Cluster

4.1 Compute Logical Design

The compute design encompasses the ESXi hosts contained in the management cluster. In this section, the scope is limited to the infrastructure supporting the management component workloads.

4.1.1. Datacenters

The management cluster is contained within a single vCenter datacenter.

4.1.2. vSphere Clusters

The management cluster will consist of a single vSphere cluster with the following configuration:

ATTRIBUTE: SPECIFICATION

Number of ESXi Hosts: 3

VMware DRS Configuration: Fully automated

VMware DRS Migration Threshold: 3 stars

VMware HA Enable Host Monitoring: Yes

VMware HA Admission Control Policy: Cluster tolerates 1 host failure (percentage based)

VMware HA Percentage: 67%

VMware HA Admission Control Response: Prevent VMs from being powered on if they violate availability constraints

VMware HA Default VM Restart Priority: N/A

VMware HA Host Isolation Response: Leave VM Powered On

VMware HA Enable VM Monitoring: Yes

VMware HA VM Monitoring Sensitivity: Medium

Table 1 – vSphere Clusters – Management Cluster
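The HA percentage above, and the 83% used later for the resource group cluster (Table 7), both follow from reserving the capacity of one host as failover headroom. A minimal sketch of that arithmetic, assuming equally sized hosts (the host counts come from the respective cluster tables):

```python
def usable_capacity_percentage(num_hosts: int, host_failures_tolerated: int = 1) -> int:
    """Share of cluster capacity left for workloads after reserving enough
    headroom to tolerate the given number of host failures.
    Assumes all hosts contribute equal capacity."""
    reserved_fraction = host_failures_tolerated / num_hosts
    return round((1 - reserved_fraction) * 100)

print(usable_capacity_percentage(3))  # 67 -> management cluster (3 hosts)
print(usable_capacity_percentage(6))  # 83 -> resource group cluster (6 hosts)
```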


4.1.3. Host Logical Design

Each ESXi host in the management cluster will have the following specifications:

ATTRIBUTE: SPECIFICATION

Host Type and Version: VMware ESXi Installable

Processors: x86 compatible

Storage: Local for ESXi binaries; SAN LUN for virtual machines

Networking: Connectivity to all needed VLANs

Memory: Sized to support estimated workloads

Table 2 – Host Logical Design Specifications – Management Cluster

4.2 Network Logical Design

The network design section defines how the vSphere virtual networking will be configured.

Following best practices, the network architecture will meet these requirements:

• Separate networks for vSphere management, VM connectivity, and vMotion traffic

• Redundant vSwitches with at least 2 active physical (or vNIC) adapter ports each

• Redundancy across different physical adapters to protect against NIC or PCI slot failure

• Redundancy at the physical switch level

SWITCH NAME – SWITCH TYPE – FUNCTION

vSwitch0 – Standard – Management Console, vMotion, “Production” VMs

Table 3 – Virtual Switch Configuration – Management Cluster

The physical NIC ports will be connected to redundant physical switches.

The following diagram depicts the virtual network infrastructure design:

Figure 3 – vSphere Logical Network Design – Management Cluster (diagram: vSwitch0 with uplinks vmnic0 and vmnic1 to redundant physical switches, carrying the Management, vMotion, and Production virtual machine host networks on VLANs 443 (native), 442, and 440)


PARAMETER: SETTING

Load Balancing: Route based on NIC load

Failover Detection: Link status

Notify Switches: Enabled

Failover Order: All active, except for the Management network (Management Console: Active, Standby; vMotion: Standby, Active)

Table 4 – Virtual Switch Configuration Settings – Management Cluster

4.3 Shared Storage Logical Design

The shared storage design section defines how the vSphere datastores will be configured. The same storage will be used for both the Management cluster and the VMware vCloud Director resource groups.

Following best practices, the shared storage architecture will meet these requirements:

• Storage paths will be redundant at the host (connector), switch, and storage array levels.

• All hosts in a cluster will have access to the same datastores.

ATTRIBUTE: SPECIFICATION

Number of Initial LUNs: 1 dedicated, 1 interchange (shared with Compute cluster)

LUN Size: 539 GB

Zoning: Single initiator, single target

VMFS Datastores per LUN: 1

VMs per LUN: 10 (distribute redundant VMs)

Table 5 – Shared Storage Logical Design Specifications – Management Cluster

4.4 Management Components

The following components will run as VMs on the management cluster hosts:

• vCenter Servers

• vCenter Database

• vCenter Update Manager Database

• vCloud Director Cells

• vCloud Director Database

• vCenter Chargeback Server

• vCenter Chargeback Database

• vShield Manager

VMware vCloud Director cells are stateless in operation, with all information stored in the database. Some caching happens at the VMware vCloud Director cell level, such as SSL session data, but all refreshes and updates are made to information stored in the database. As such, the database is critical to the operation of VMware vCloud Director. In a production environment, VMware recommends that the database be housed in a managed cluster configuration or, at the very least, that a hot standby be available.


Figure 4 – vCenter Chargeback Logical Diagram (diagram: the vCenter Chargeback server and UI behind a load balancer, with the Chargeback, vCloud, and VSM data collectors connecting to the vCenter, vCD, and Chargeback databases over JDBC, and to vCenter Server, vShield Manager, and the ESXi hosts over HTTPS and the VIM API)

4.5 Management Component Resiliency Considerations

The following management components will rely on HA and FT for redundancy.

MANAGEMENT COMPONENT: HA ENABLED?

vCenter Server: Yes

VMware vCloud Director: Yes

vCenter Chargeback Server: Yes

vShield Manager: Yes

Table 6 – Management Component Resiliency


5. vSphere Architecture Design – Resource Groups

5.1 Compute Logical Design

The compute design encompasses the ESXi host clusters. In this section, the scope is further limited to the infrastructure dedicated to the cloud workloads.

5.1.1. Datacenters

Resource groups can map to different datacenters and are managed by a single vCenter server.

5.1.2. vSphere Clusters

All vSphere clusters will be configured similarly with the following specifications.

ATTRIBUTE: SPECIFICATION

VMware DRS Configuration: Fully automated

VMware DRS Migration Threshold: 3 stars

VMware HA Enable Host Monitoring: Yes

VMware HA Admission Control Policy: Cluster tolerates 1 host failure (percentage based)

VMware HA Percentage: 83%

VMware HA Admission Control Response: Prevent VMs from being powered on if they violate availability constraints

VMware HA Default VM Restart Priority: N/A

VMware HA Host Isolation Response: Leave VM Powered On

Table 7 – vSphere Cluster Configuration – Resource Group

The resource groups will have the following vSphere cluster.

CLUSTER NAME: VCDCompute01

VCENTER SERVER NAME: ACMEmgmtVC01.vcd.acme.com

# OF HOSTS: 6

HA PERCENTAGE: 83%

Table 8 – vSphere Clusters – Resource Groups


5.1.3. Host Logical Design

Each ESXi host in the resource groups will have the following specifications.

ATTRIBUTE: SPECIFICATION

Host Type and Version: VMware ESXi Installable

Processors: x86 compatible

Storage: Local for ESXi binaries; shared for virtual machines

Networking: Shared for virtual machines; connectivity to all needed VLANs

Memory: Enough to run estimated workloads

Table 9 – Host Logical Design Specifications – Resource Groups

5.2 Network Logical Design

The network design section defines how the vSphere virtual networking will be configured.

Following best practices, the network architecture will meet these requirements:

• Separate networks for vSphere management, VM connectivity, and vMotion traffic

• Redundant vSwitches with at least 2 active physical adapter ports

• Redundancy across different physical adapters to protect against NIC or PCI slot failure

• Redundancy at the physical switch level

SWITCH NAME – SWITCH TYPE – FUNCTION – # OF NIC PORTS

vSwitch0 – Standard – Management Console, vMotion – 2 x 10 GigE vNIC

vDSwitch – Distributed – External Networks, Network Pools – 2 x 10 GigE vNIC

Table 10 – Virtual Switch Configuration – Resource Groups

When using the distributed virtual switch, the number of dvUplink ports matches the number of physical NIC ports on each host.

The physical NIC ports will be connected to redundant physical switches.


The following diagram depicts the virtual network infrastructure design.

Figure 5 – vSphere Logical Network Design – Resource Groups (diagram: vSwitch0 with uplinks vmnic0/vmnic1 carrying the management cluster networking on VLANs 443 (native) and 442, plus a vNetwork Distributed Switch (vDS) with uplinks vmnic2/vmnic3 carrying the External Networks (Production, VLAN 440) and Network Pools, all connected to redundant physical switches)

PARAMETER: SETTING

Load Balancing: Route based on NIC load (for vDS)

Failover Detection: Link status

Notify Switches: Enabled

Failover Order: All active, except for the Management network (Management Console: Active, Standby; vMotion: Standby, Active)

Table 11 – Virtual Switch Configuration Settings – Resource Groups

5.3 Shared Storage Logical Design

The shared storage design section defines how the vSphere datastores will be configured.

Following best practices, the shared storage architecture will meet these requirements:

• Storage paths will be redundant at the host (HBA), switch, and storage array levels.

• All hosts in a cluster will have access to the same datastores.


ATTRIBUTE: SPECIFICATION

Number of Initial LUNs: 6 dedicated, 1 interchange (shared with Management cluster)

LUN Size: 539 GB

Zoning: Single initiator, single target

VMFS Datastores per LUN: 1

VMs per LUN: 12

Table 12 – Shared Storage Logical Design Specifications – Resource Groups

5.4 Resource Group Datastore Considerations

The most common aspect of LUN/datastore sizing is deciding the limit on the number of VMs per datastore. This number is limited to minimize the potential for SCSI locking and to spread I/O across as many storage processors as possible. Most mainstream storage vendors provide VMware-specific guidelines for this limit, and VMware recommends an upper limit of 15 VMs per VMFS datastore, regardless of storage platform. It is often forgotten that the number of VMs per LUN is also influenced by the size and I/O requirements of the VMs and, perhaps more importantly, by the selected storage solution and disk types.

When VMware vCloud Director provisions VMs, it automatically places them on datastores based on the free disk space of each of the datastores associated with the Org vDC. Because of this mechanism, the LUN sizes and the number of VMs per LUN need to be kept relatively low to avoid possible I/O contention.
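To illustrate the placement behavior just described (this is not VMware vCloud Director’s actual code, and the free-space figures are invented), a sketch of choosing among an Org vDC’s datastores purely by free disk space:

```python
def pick_datastore(datastores):
    """Illustrative only: return the associated datastore with the most free
    space, mirroring the free-space-driven placement described above."""
    return max(datastores, key=lambda ds: ds["free_gb"])

org_vdc_datastores = [
    {"name": "vcd_compute-01", "free_gb": 120},
    {"name": "vcd_compute-02", "free_gb": 310},
    {"name": "vcd_compute-03", "free_gb": 95},
]
print(pick_datastore(org_vdc_datastores)["name"])  # vcd_compute-02
```

Because the choice considers only free space, a few large or I/O-heavy VMs can end up concentrated on one LUN, which is why the per-LUN limits above are kept conservative.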

When considering the number of VMs to place on a single datastore, the following factors should be weighed in conjunction with any recommended VMs-per-LUN ratio:

• Average VM workload/profile (in particular, the amount of I/O)

• Typical VM size (including configuration files, logs, swap files, and snapshot files)

• VMFS metadata

• Maximum IOPS and throughput requirements per LUN, which depend on the storage array and its design

• Maximum RTO if a LUN is lost (that is, the backup and restore design)

If we approach this from an average I/O profile, it would be tempting to create all LUNs the same, say as RAID 5, and let the law of averages take care of I/O distribution across all the LUNs and the VMs on those LUNs. Another approach would be to create LUNs with different RAID profiles based on the anticipated workloads within an Organization. This would dictate creating Provider virtual datacenters (vDCs) that take into account the allocation models as well as the storage profiles in use. As an example, we would end up with the following types of Provider vDCs:

• Allocated_High_Performance

• Allocated_Generic

As a starting point, VMware recommends RAID 5 storage profiles, and only creating storage tier-specific Provider vDCs as one-offs to address specific organization or business unit requirements.

The VMware Scalable Storage Performance Study provides additional information regarding vSphere storage design.


5.4.1. Datastore Sizing Estimation

The typical datastore size can be estimated by considering the following factors.

VARIABLE: VALUE

Maximum Number of VMs per Datastore: 12

Average Size of Virtual Disk(s) per VM: 60 GB

Average Memory Size per VM: 2 GB

Safety Margin: 10%

Table 13 – Datastore Size Estimation Factors

For example,

((12 × 60 GB) + (12 × 2 GB)) × 1.1 = (720 GB + 24 GB) × 1.1 ≈ 818 GB
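The same estimate expressed as a short calculation, using the factors from Table 13 (a different VM profile simply changes the inputs):

```python
def estimate_datastore_size_gb(max_vms: int, avg_disk_gb: float,
                               avg_mem_gb: float, safety_margin: float) -> float:
    """Datastore size = virtual disks plus per-VM swap (sized at memory),
    grown by the safety margin."""
    return (max_vms * (avg_disk_gb + avg_mem_gb)) * (1 + safety_margin)

# Factors from Table 13: 12 VMs, 60 GB disk, 2 GB memory, 10% margin
print(estimate_datastore_size_gb(12, 60, 2, 0.10))  # 818.4 (GB)
```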

6. vCloud Provider Design

6.1 Abstractions and VMware vCloud Director Constructs

A key tenet of the cloud architecture is resource pooling and abstraction. VMware vCloud Director further abstracts the virtualized resources presented by vSphere by providing logical constructs that map to vSphere logical resources:

• Organization – organizational unit to which resources (vDCs) are allocated.

• Virtual Datacenter (vDC) – Deployment environments, scoped to an organization, in which virtual machines run.

• Provider Virtual Datacenter – vSphere resource groupings that power vDCs, further segmented out into organization vDCs.

• Organization Virtual Datacenter (vDC) – An organization’s allocated portion of provider vDC.

Figure 6 – VMware vCloud Director Abstraction Layer Diagram (diagram: vCD Org networks, External networks, network pools, Provider vDCs, and Organization vDCs mapping to vSphere port groups, the vDS, VLANs, resource pools, compute clusters, and datastores, which in turn map to the physical network, physical hosts, and storage array)


6.2 Provider vDCs

The following diagram shows how the Provider vDCs map back to vSphere resources:

Figure 7 – Provider vDCs in Resource Groups (diagram: Provider vDC “GIS” backed by the VCD Compute01 cluster hosts and the 539 GB VMFS datastores vcd_compute_01 and vcd_compute_02, with additional clusters and datastores (vcd_compute_0X) reserved for future vDCs)

All ESXi hosts will belong to a vSphere cluster which will be associated with one and only one ACME Enterprise vDC.

A vSphere cluster will scale to 25 hosts, allowing for up to 14 clusters per vCenter Server (the limit is bound by the maximum number of hosts per datacenter possible) and an upper limit of 10,000 VMs (this is a vCenter limit) per resource group.

The recommendation is to start with 8 hosts in a cluster and add hosts to the cluster as dictated by customer consumption. However, for the initial implementation, the Provider vDC will start with 6 hosts. When resource utilization reaches 60%, VMware recommends that a new Provider vDC/cluster be deployed. This provides room for growth within the existing Provider vDCs for the existing organizations/business units, without necessitating their migration as utilization approaches a cluster’s resource limits.

As an example, a fully loaded resource group will contain 14 Provider vDCs and up to 350 ESXi hosts, giving an average VM consolidation ratio of 26:1, assuming a 5:1 vCPU:pCPU ratio. To increase this ratio, ACME Enterprise would need to increase the vCPU:pCPU ratio it is willing to support. The risk associated with increased CPU overcommitment is mainly degraded overall performance, which can result in higher-than-acceptable vCPU ready times. The vCPU:pCPU ratio reflects the amount of CPU overcommitment, for the available cores, that ACME is comfortable with. For VMs that are not busy, this ratio can be increased without any undesirable effect on VM performance. Monitoring vCPU ready times helps identify whether the ratio needs to be increased or decreased on a per-cluster basis. A 5:1 ratio is a good starting point for a multi-core system.


A Provider vDC can map to only one vSphere cluster, but can map to multiple datastores and networks.

Multiple Provider vDCs are used to map to different types/tiers of resources.

• Compute – this is a function of the mapped vSphere clusters and the resources that back it

• Storage – this is a function of the underlying storage types of the mapped datastores

• Networking – this is a function of the mapped vSphere networking in terms of speed and connectivity

Multiple Provider vDCs are created for the following reasons:

• The cloud requires more compute capacity than a single vSphere cluster (a vSphere resource pool cannot span vSphere clusters)

• Tiered storage is required; each Provider vDC maps to datastores on storage with different characteristics

• Requirement for workloads to run on physically separate infrastructure

ATTRIBUTE: SPECIFICATION

Number of Provider vDCs: 1

Number of Default External Networks: 1 (Production)

Table 14 – Provider vDC Specifications

PROVIDER vDC: GIS

CLUSTER: VCDCompute01

DATASTORES: vcd_compute-01, vcd_compute-02, vcd_compute-03, vcd_compute-04, vcd_compute-05

VSPHERE NETWORKS: Production

Table 15 – Provider vDC to vSphere Mapping

VMware recommends assessing workloads to assist in sizing. Following is a standard sizing table that can be used as a reference for future design activities.

VM SIZE: DISTRIBUTION (% OF VMS)

1 vCPU / 1 GB RAM: 65%

2 vCPU / 2 GB RAM: 29%

4 vCPU / 4 GB RAM: 5%

8 vCPU / 8 GB RAM: 1%

Total: 100%

Table 16 – Virtual Machine Sizing and Distribution
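A small sketch of how this distribution can feed capacity planning: it computes the weighted average vCPU and memory per VM and, for a chosen vCPU:pCPU overcommitment ratio, an indicative CPU-only VMs-per-host figure. The 12 cores per host are taken from the bill of materials in Appendix A (2 sockets × 6 cores); the result ignores memory, HA headroom, and hypervisor overhead, which is why the planning ratio cited in Section 6.2 is deliberately more conservative.

```python
# (vCPUs, RAM in GB, share of the VM population) from Table 16
vm_mix = [(1, 1, 0.65), (2, 2, 0.29), (4, 4, 0.05), (8, 8, 0.01)]

avg_vcpu = sum(vcpu * share for vcpu, _, share in vm_mix)
avg_ram_gb = sum(ram * share for _, ram, share in vm_mix)

cores_per_host = 12        # 2 sockets x 6 cores (Appendix A)
vcpu_to_pcpu_ratio = 5     # 5:1 starting ratio from Section 6.2

vcpus_per_host = cores_per_host * vcpu_to_pcpu_ratio
vms_per_host_cpu_only = vcpus_per_host / avg_vcpu

print(f"Average vCPU per VM: {avg_vcpu:.2f}")       # 1.51
print(f"Average RAM per VM:  {avg_ram_gb:.2f} GB")  # 1.51 GB
print(f"CPU-only VMs per host at 5:1: {vms_per_host_cpu_only:.0f}")
```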


6.3 Organizations

ORGANIZATION NAME: DESCRIPTION

AIS: ACME Information Systems

Table 17 – Organizations

6.4 Networks

ATTRIBUTE: SPECIFICATION

Number of Default External Networks: 1

Number of Default vApp Networks: End-user controlled

Number of Default Organization Networks: 2

Default Network Pool Types Used: vCloud Director Network Isolation (vCD-NI)

Is a Pool of Publicly Routable IP Addresses Available?: Yes, for access to Production, but only a certain range is given to each Organization.

Table 18 – Network Specifications

6.4.1. External Networks

ACME Enterprise will provide the following External Network for the initial implementation:

• Production (VLAN 440)

Part of provisioning an organization can involve creating external networks for each Organization, such as Internet access and, if desired, a VPN network, and associating them with the required Org networks.

6.4.2. Network Pools

ACME will provide the following sets of Network Pools based on need:

• VMware vCloud Director Network Isolation (vCD-NI)-backed

• VLAN-backed (optional)

For the vCD-NI-backed pool, VMware recommends that the transport VLAN (VLAN ID 1254) be a VLAN that is not otherwise in use within the ACME infrastructure, for increased security and isolation. For this initial implementation that option is not available, so Production VLAN 440 will be used.

6.4.3. Networking Use Cases

ACME will implement the following two use cases for the initial implementation, both to demonstrate VMware vCloud Director capabilities and to support deployment of its production vApps:

1. Users should be able to completely isolate vApps for their Development and/or Test Users

2. Users should be able to connect vApps to the Organization networks either directly or via fencing, and the Organization networks will not have access to the public Internet.


Figure 8 – vApp Isolated Network (diagram: vApp01, containing DB (x.10), Web (x.11), and App (x.12) VMs, attached to vAppNetwork1, which is backed by a network pool (vCD-NI-backed, VLAN-backed, or portgroup-backed))

Figure 9 – vApp Network Direct Attached to Org Network (diagram: vApp01 and vApp02, with distinct IP addresses (x.10–x.12 and x.13–x.15), attached to an isolated Org network backed by a network pool; one vApp network connects directly and the other is fenced)

This is an example for a Dev/Test environment where developers use different IP addresses in their vApps, so the VMs in one vApp can communicate with the VMs in another vApp without any conflicts.


Figure 10 – vApp Network Fenced to Org Network (diagram: vApp01 and vApp02, each using the same IP addresses (x.10–x.12), both fenced to an isolated Org network backed by a network pool)

This is an example for Dev/Test where developers will have duplicate IPs in their vApps.

Figure 11 – vApp Network Bridged or Fenced to an Org Network that is Direct Attached to an External Network (diagram: vApp01 (x.10–x.12) and vApp02 (x.13–x.15) attached directly or fenced to an Org network that is direct-attached to the External network on the physical backbone, with a network pool backing the vApp networks)


Figure 12 – vApp Network Fenced to Fenced Org Network (diagram: vApp01 (1.10–1.12) and vApp02 (1.13–1.15) attached directly or fenced to an Org network that is itself fenced to the External network on the physical backbone, with a network pool backing the vApp networks)

This is one way to connect to the External network while preserving VLANs, by sharing the same Internet-facing VLAN among multiple Organizations. The vShield Edge is needed to provide NAT and firewall services for the different Organizations.

Once the External networks have been created, a VMware vCloud Director administrator can create the Organization networks as shown above. The vShield Edge (VSE) device is needed to perform address translation between the different networks. The VSE can be configured to provide port address translation to jump hosts located inside the networks, or to give direct access to individual hosts.

VMware recommends separating External and Organization networks by using two separate vDS switches. For ACME’s initial implementation this option is not available, as only one network (Production VLAN 440) exists to route vCD-NI traffic between ESXi hosts.

6.5 Catalogs

The catalog contains ACME-specific templates that are made available to all organizations/business units. ACME will make a set of catalog entries available to cover the classes of virtual machines, templates, and media as specified in the corresponding Service Definition.

For the initial implementation, a single cost model will be created using the following fixed cost pricing and chargeback model:


VM CONFIGURATION: PRICE

1 vCPU and 512 MB RAM: $248.00

1 vCPU and 1 GB RAM: $272.00

1 vCPU and 2 GB RAM: $289.00

2 vCPUs and 2 GB RAM: $308.00

1 vCPU and 3 GB RAM: $315.00

2 vCPUs and 3 GB RAM: $331.00

1 vCPU and 4 GB RAM: $341.00

2 vCPUs and 4 GB RAM: $354.00

4 vCPUs and 4 GB RAM: $386.00

1 vCPU and 8 GB RAM: $461.00

2 vCPUs and 8 GB RAM: $477.00

4 vCPUs and 8 GB RAM: $509.00

Table 19 – ACME Fixed-cost Cost Model
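A fixed-cost model of this kind is, in effect, a lookup from VM configuration to a flat price. A minimal sketch (prices copied from Table 19; the dictionary layout and function are illustrative, not the vCenter Chargeback API):

```python
# Fixed price per (vCPU count, RAM in GB), taken from Table 19
FIXED_COST_MODEL = {
    (1, 0.5): 248.00, (1, 1): 272.00, (1, 2): 289.00, (2, 2): 308.00,
    (1, 3): 315.00,   (2, 3): 331.00, (1, 4): 341.00, (2, 4): 354.00,
    (4, 4): 386.00,   (1, 8): 461.00, (2, 8): 477.00, (4, 8): 509.00,
}

def vm_price(vcpus: int, ram_gb: float) -> float:
    """Return the flat charge for a VM configuration defined in the cost model."""
    try:
        return FIXED_COST_MODEL[(vcpus, ram_gb)]
    except KeyError:
        raise ValueError(f"No fixed price defined for {vcpus} vCPU / {ram_gb} GB RAM")

print(vm_price(2, 4))  # 354.0
```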

7. vCloud Security

7.1 vSphere Security

7.1.1. Host Security

Chosen in part for its limited management console functionality, ESXi will be configured by ACME with a strong root password stored following corporate password procedures. ESXi lockdown mode will also be enabled to prevent root access to the hosts over the network, and appropriate security policies and procedures will be created and enforced to govern the systems. Because ESXi cannot be accessed over the network, sophisticated host-based firewall configurations are not required.

7.1.2. Network Security

Virtual switch security settings will be set as follows:

FUNCTION: SETTING

Promiscuous Mode: Management cluster – Reject; Resource Group – Reject

MAC Address Changes: Management cluster – Reject; Resource Group – Reject

Forged Transmits: Management cluster – Reject; Resource Group – Reject

Table 20 – Virtual Switch Security Settings


7.1.3. vCenter Security

vCenter Server is installed using a local administrator account. When vCenter Server is joined to a domain, any domain administrator gains administrative privileges to vCenter. VMware recommends that ACME remove this potential security risk by creating a new vCenter Administrators group in Active Directory and assigning it to the vCenter Server Administrator role, making it possible to remove the local Administrators group from that role.

7.2 VMware vCloud Director Security

Standard Linux hardening guidelines need to be applied to the VMware vCloud Director VM. There is no need for local users, and the root password will only be needed during installs of and upgrades to the VMware vCloud Director binaries. Additionally, certain network ports must be open for vCloud Director use. Refer to the VMware vCloud Director Administrator’s Guide for further information.

8. vCloud Management

8.1 vSphere Host Setup Standardization

Host Profiles can be used to automatically configure network, storage, security, and other features. This feature, along with automated installation of ESXi hosts, is used to standardize all host configurations.

VM Monitoring is enabled at the cluster level within HA and uses the VMware Tools heartbeat to verify that a virtual machine is alive. When a virtual machine fails and the VMware Tools heartbeat is no longer updated, VM Monitoring checks whether any storage or network I/O has occurred over the last 120 seconds; if not, the virtual machine is restarted.

As such, VMware recommends enabling both VMware HA and VM Monitoring on the Management cluster and the Resource Group clusters.

8.2 VMware vCloud Director Logging

Each VMware vCloud Director cell logs audit messages to the database, where they are retained for 90 days by default. If logs must be retained for longer than 90 days, or if centralized logging is required, an external syslog server can be configured and used as a duplicate destination for the logged events.

8.3 vSphere Host Logging

Remote logging to a central host provides a way to greatly increase administration capabilities. Gathering log files on a central server facilitates monitoring of all hosts with a single tool, enables aggregate analysis, and makes it possible to search for evidence of coordinated attacks on multiple hosts. This applies to the following logs:

• messages (host log)

• hostd (host agent log)

• vpxa (vCenter agent log)

Within each ESXi host, Syslog behavior is controlled by the Syslog advanced settings. These settings determine the central logging host that will receive the Syslog messages. The hostname must be resolvable using DNS.

For this initial implementation, none of the ESXi hosts at ACME will be configured to send log files to a central Syslog server residing in the management cluster.
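Should ACME later enable remote logging, the Syslog advanced setting can be pushed to every host centrally. The sketch below uses pyVmomi against vCenter and is illustrative only: the vCenter hostname, credentials, and syslog target are placeholders, and the advanced-option key shown (Syslog.Remote.Hostname) is the ESX/ESXi 4.x form; later ESXi releases use Syslog.global.logHost instead.

```python
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- substitute real values before use.
ctx = ssl._create_unverified_context()
si = SmartConnect(host="ACMEmgmtVC01.vcd.acme.com", user="administrator",
                  pwd="********", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    for host in view.view:
        # Point each host's syslog output at the central logging host
        # (ESX/ESXi 4.x advanced-option key).
        host.configManager.advancedOption.UpdateOptions(changedValue=[
            vim.option.OptionValue(key="Syslog.Remote.Hostname",
                                   value="syslog.vcd.acme.com")])
    view.Destroy()
finally:
    Disconnect(si)
```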


8.4 VMware vCloud Director Monitoring

The following items should be monitored through VMware vCloud Director. As of VMware vCloud Director 1.0, this must be done with custom queries against the VMware vCloud Director Admin API to obtain consumption data for the different components. Some VMware vCloud Director components can also be monitored by aggregating the syslog-generated logs from the different VMware vCloud Director cells on the centralized log server.

SCOPE: ITEM

System: Leases, quotas, limits

vSphere Resources: CPU, memory, network IP address pool, storage free space

Virtual Machines/vApps: Not in scope

Table 21 – VMware vCloud Director Monitoring Items

Appendix A – Bill of Materials

The inventory and specifications of components comprising the vCloud are provided.

ITEM – QUANTITY – NAME/DESCRIPTION

ESXi Host – 3 – Vendor X Compute Resource; Chassis: 3; Blades per Chassis: 1; Processors: 2-socket Intel® Xeon® X5670 (6-core, 2.9 GHz, Westmere); Memory: 96 GB; Version: vSphere 4.1 (ESXi)

vCenter Server – 1 – Type: VM; Guest OS: Windows 2008 x86_64; 2 x vCPU; 4 GB memory; 1 vNIC; Min. free disk space: 10 GB; Version: 4.1

vCenter and Update Manager Database – 0 – N/A

VMware vCloud Director Cell – 1 – Minimum number of VMware vCloud Director cells: 1; Type: VM; Guest OS: RHEL 5 x64; 4 vCPU; 4 GB memory; 2 vNIC; Version: 1.0


VMware vCloud Director Database – 1 – Type: VM (unless using an existing, managed database cluster); Guest OS: RHEL; Oracle 11g; 4 x vCPU; 4 GB memory; 1 vNIC

vShield Manager – 1 – Type: VM appliance; Version: 4.1; 1 x vCPU; 4 GB memory; 1 vNIC

vCenter Chargeback Server – 1 – Type: VM; Guest OS: Windows 2008 x64; 2 x vCPU; 2 GB memory; 1 vNIC; Version: 1.5

vCenter Chargeback Database – 1 – Type: VM (unless using an existing, managed database cluster); Guest OS: Windows 2008 x86_64; MS SQL Server 2008; 2 x vCPU; 4 GB memory; 1 vNIC

NFS Appliance – 0 – N/A

vShield Edge Appliances – Multiple – Type: VM; 1 vCPU; 256 MB RAM; 1 vNIC

Domain Controllers (AD) – 1 – Isolated AD VM built specifically for the PoC infrastructure, no access to other DCs; Type: VM; MS Windows 2008 Datacenter; 2 x vCPU; 4 GB memory; 1 x vNIC

API Servers – N/A

Monitoring Server – N/A

Logging Server – N/A

Storage – 1 – FC SAN Array; VMFS; LUN Sizing: 539 GB; RAID Level: 5

Table 22 – Management Cluster Inventory


ITEM – QUANTITY – NAME/DESCRIPTION

ESXi Host – 6 – Vendor X Compute Resource; Chassis: 6; Blades per Chassis: 1; Blade Type: N20-B6625-1; Processors: 2-socket Intel® Xeon® X5670 (6-core, 2.9 GHz, Westmere); Memory: 96 GB; Version: vSphere 4.1 (ESXi)

vCenter Server – 1 – Same as Management Cluster

Storage – 1 – FC SAN Array; VMFS; LUN Sizing: 539 GB; RAID Level: 5

Table 23 – Resource Groups Inventory

VMware, Inc. 3401 Hillview Avenue, Palo Alto, CA 94304 USA. Tel 877-486-9273, Fax 650-427-5001, www.vmware.com
Copyright © 2010 VMware, Inc. All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. VMware products are covered by one or more patents listed at http://www.vmware.com/go/patents. VMware is a registered trademark or trademark of VMware, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies. Item No: VMW_10Q3_WP_Private_p27_A_R2