Confidential

Additional Info (some are still draft)
Tech notes that you may find useful as input to the design. A lot more material can be found at the Design Workshop.


Internal Cloud: Gartner model and VMware model
Gartner take:
• Virtual infrastructure
• On-demand, elastic, automated/dynamic
• Improves agility and business continuity

(Diagram: the Gartner internal-cloud model. Layers: physical infrastructure and virtual infrastructure, topped by virtual infrastructure management. Components: self-service provisioning portal, service catalog, performance management, life cycle management, chargeback system, configuration and change management, capacity management, orchestrator, external cloud connector, enterprise service management, identity and access management, and a service governor/infrastructure authority spanning the stack.)


Master / Slave concept


Cluster: Settings
For the 3 sample sizes, here is my personal recommendation:
• DRS fully automated. Sensitivity: Moderate.
• Use anti-affinity or affinity rules only when needed.
• More things for you to remember.
• Gives DRS less room to maneuver.
• DPM enabled. Choose hosts that support DPM. Do not use WOL; use DPM or IPMI.
• VM Monitoring enabled. VM monitoring sensitivity: Medium. HA will restart the VM if the heartbeat between the host and the VM has not been received within a 60-second interval.
• EVC enabled. Enables you to upgrade in future.
• Prevent VMs from being powered on if they violate availability constraints → better availability.
• Host isolation response: Shut down VM.
• See http://www.yellow-bricks.com/vmware-high-availability-deepdiv/
• Compared with “Leave VM Powered on”, this prevents data/transaction integrity risk. The risk is rather low, as the VM itself holds a lock.
• Compared with “Power off VM”, this allows graceful shutdown. Some applications need to run a consistency check after a sudden power off.
A sketch of these settings in PowerCLI follows.
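A minimal PowerCLI sketch of the DRS/HA portion of these settings ("Cluster01" is a placeholder; EVC, DPM, and VM-monitoring knobs are left to the vSphere Client or API, since older PowerCLI has no simple switches for them):

    # Assumes an open Connect-VIServer session and a cluster named "Cluster01"
    $cluster = Get-Cluster -Name "Cluster01"
    # DRS fully automated (the moderate migration threshold is the default),
    # HA with admission control enforcing availability constraints
    Set-Cluster -Cluster $cluster -DrsEnabled:$true -DrsAutomationLevel FullyAutomated `
                -HAEnabled:$true -HAAdmissionControlEnabled:$true -Confirm:$false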


DRS, DPM, EVC
In our 3 sizes, here are the settings:
• DRS: Fully Automated. DRS sensitivity: leave it at default (middle, 3-star migration).
• EVC: turn on.
• It does not reduce performance.
• It is a simple mask.
• DPM: turn on, unless the HW vendor shows otherwise.
• VM affinity: use sparingly. It adds complexity, as we are using group affinity.
• Group affinity: use (as per diagram in design).
Why turn on DPM:
• Power cost is a real concern.
Singapore example: S$0.24 per kWh x (600 W + 600 W) x 24 hours x 365 days x 3 years / 1000 ≈ S$7,570. That is close to the cost of buying another server. For every 1 W of power consumed, we need a minimum of 1 W of power for air-conditioning + UPS + lighting.
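The arithmetic, worked through in PowerShell (all figures are the slide's assumptions):

    $tariff = 0.24         # S$ per kWh
    $watts  = 600 + 600    # 600 W server draw + 600 W for aircon/UPS/lighting
    $hours  = 24 * 365 * 3 # 3 years of continuous operation
    $cost   = $tariff * ($watts / 1000) * $hours
    $cost                  # 7568.64, i.e. roughly S$7,570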


VMware VMmark
Use VMmark as the basis for CPU selection only, not entire box selection.
• It is the official benchmark for VMware, and it uses multiple workloads.
• Other benchmarks are not run on vSphere, and typically test 1 workload.
• VMmark does not include TCO. Consider the entire cost when choosing the HW platform.
Use it as a guide only:
• Your environment is not the same.
• You need headroom and HA.
How it’s done:
• VMmark 2.0 uses 1 – 4 vCPU.
• MS Exchange, MySQL, Apache, J2EE, File Server, Idle VM.
Result page:
• VMmark 2.0 is not compatible with 1.x results.
• www.vmware.com/products/vmmark/results.html
This slide needs update.


VMware VMmark


VMmark: sample benchmark result (HP only)
I’m only showing results from 1 vendor, as vendor comparison is more than just VMmark results. IBM, Dell, HP, Fujitsu, Cisco, Oracle, and NEC have VMmark results.
(Chart callouts:)
• Look at the tile number: 20 tiles = 100 active VM.
• This tells us that Xeon 5500 can run 17 tiles at 100% utilisation.
• Each tile has 6 VM, but 1 is idle. 17 x 5 VM = 85 active VM in 1 box.
• At 80% peak utilisation, that’s ~65 VM.
• CPUs shown: Opteron 8439 (24 cores), Xeon 5570 (8 cores), Opteron 2435 (12 cores), Xeon 5470 (8 cores).
• Scores are comparable only at the same number of tiles. ±10% is OK for real-life sizing; this is a benchmark.


Fault Tolerance
Workload Type – Application Specifics:
• Databases: the most popular workloads on FT. Small to medium instances, mostly SQL Server.
• MS Exchange and Messaging: BES, Exchange. A gaming company has 750 mailboxes on 1 FT VM. See the FT load test at blogs.vmware.com.
• Web and File servers: a file server might be stateless, but an application using it may be sensitive to denial of service and may be very costly to lose. A simulation relying on a file server might have to be restarted if the file server fails.
• Manufacturing and Custom Applications: these workloads keep production lines moving. Breaks result in loss of productivity and material. Examples: propeller factory, meat factory, pharma line.
• SAP: SAP ECC 6.0 system based on the SAP NetWeaver 7.0 platform. ASCS, a message and transaction locking service, is a SPOF.
• BlackBerry: BlackBerry Enterprise Server 4.1.6 (BES). A 1 vCPU BES can support 200 users at 100–200 emails/day.


MS Clustering
ESX Port Group properties:
• Notify Switches = No
• Forged Transmits = Accept
Win08 does not support NFS.
Storage Design:
• Virtual SCSI adapter: LSI Logic Parallel for Windows Server 2003, LSI Logic SAS for Windows Server 2008.
ESXi changes:
• ESXi 5.0 uses a different technique to determine if RDM LUNs are used as MSCS cluster devices: a configuration flag marks each device participating in an MSCS cluster as “perennially reserved”.
Unicast NLB: unicast mode reassigns the station (MAC) address of the network adapter for which it is enabled, and all cluster hosts are assigned the same MAC address. You therefore cannot have ESX send ARP or RARP to update the physical switch port with the actual MAC address of the NICs, as this breaks unicast NLB communication.
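A hedged PowerCLI sketch of setting the “perennially reserved” flag (the host name and naa ID are placeholders; the V2 Get-EsxCli interface needs a reasonably recent PowerCLI — on older builds, run the equivalent esxcli command in the ESXi shell):

    # Mark an MSCS RDM LUN as perennially reserved on one host
    $esxcli = Get-EsxCli -VMHost (Get-VMHost "esx01.lab.local") -V2
    $a = $esxcli.storage.core.device.setconfig.CreateArgs()
    $a.device = "naa.60a98000572d54724a34642d71325763"   # hypothetical device ID
    $a.perenniallyreserved = $true
    $esxcli.storage.core.device.setconfig.Invoke($a)
    # Repeat for every MSCS RDM device on every host in the cluster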


Symantec ApplicationHA
• Can install the agent to multiple VMs simultaneously.
• Additional roles for security.
• It does not cover Oracle yet.
• Presales contact for ASEAN: Vic


VMware HA and DRS
Read Duncan’s yellow-bricks post first.
• Done? Read it again. This time, try to internalise it. See speaker notes below for an example.
vSphere 4.1
• Primary Nodes
• Primary nodes hold cluster settings and all “node states”, which are synchronized between primaries. Node states hold, for instance, resource usage information. If vCenter is not available, the primary nodes have a rough estimate of resource occupation and can take this into account when a fail-over needs to occur.
• Primary nodes send heartbeats to primary nodes and secondary nodes.
• HA needs at least 1 primary because the “fail-over coordinator” role is assigned to a primary; this role is also described as the “active primary”.
• If all primary hosts fail simultaneously, no HA-initiated restart of the VMs will take place. HA needs at least one primary host to restart VMs. This is why you can only take four host failures into account when configuring the “host failures” HA admission control policy. (Remember: 5 primaries.)
• The first 5 hosts that join the VMware HA cluster are automatically selected as primary nodes. All the others are automatically selected as secondary nodes. A cluster of 5 will be all primaries.
• When you do a reconfigure for HA, the primary and secondary nodes are selected again, at random. The vCenter client does not show which host is a primary and which is not.
• Secondary Nodes
• Secondary nodes send their state info & heartbeats to the primary nodes only.
• HA does not know if a host is isolated or completely unavailable (down).
• The VM lock file is the safety net. In VMFS, the file is not visible. In NFS, it is the .lck file.
Nodes send a heartbeat every 1 second. This is the mechanism to detect possible outages.


vSphere 4.1: HA and DRS
Best Practices
• Avoid using advanced settings to decrease slot size, as it might lead to longer downtime. Admission control does not take fragmentation of slots into account when slot sizes are manually defined with advanced settings.
What can go wrong in HA: VM network lost, HA network lost, storage network lost.
Failed – Not Failed – What happens as a result:
• VM Network failed (HA network, storage network fine): users can’t access the VM. If there are active users, they will complain. HA does nothing, as this is not within the scope of HA in vSphere 4.1.
• HA Network failed (VM network, storage network fine): it depends — split brain or partitioned? If the host is isolated, it executes the isolation response (shut down VM). The lock is released; another host gains the lock and then starts the VM.
• Storage Network failed (the others do not matter): the VM probably crashes, as it can’t access its disk. The lock expires and the host loses its connection to the array. Another host (the first one to get the lock?) will boot the VM.


VMware HA and DRS
Split Brain vs Partitioned Cluster
• A large cluster that spans racks might experience partitioning. Each partition thinks it is the full cluster. So long as there is no loss of the storage network, each partition will happily run its own VMs.
• Split brain is when 2 hosts want to run the same VM.
• Partitioning can happen when the cluster is separated by multiple switches. The diagram below shows a cluster of 4 ESX.


HA: Admission Control Policy (% of Cluster)
Specify a percentage of capacity that needs to be reserved for failover.
• You need to manually set it so it is at least equal to 1 host failure.
• E.g. you have an 8-node cluster and want to handle 2 node failures: set the percentage to 25%.
Complexity arises when nodes are not equal:
• Different RAM or CPU.
• But this also impacts the other admission control options. So always keep node sizes equal, especially in Tier 1.
The admission check: the total reserved resources of powered-on VMs must stay below (available resources − the configured failover reservation).
If no reservation is set, a default of 256 MHz is used for CPU, and 0 MB + overhead for memory.
Monitor the thresholds with vCenter on the cluster’s “Summary” tab.
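A quick worked check of the percentage for an evenly sized cluster (numbers from the example above):

    # Reserve enough capacity for N host failures in a cluster of equal hosts
    $hostCount = 8
    $failuresToTolerate = 2
    $failoverPct = [math]::Ceiling(100 * $failuresToTolerate / $hostCount)   # 25 (%)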


Snapshot
Only keep for a maximum of 1–3 days.
• Delete or commit as soon as you are done.
• A large snapshot may cause issues when committing/deleting.
For high-transaction VMs, delete/commit as soon as you are done verifying.
• E.g. databases, emails.
3rd-party tools:
• Snapshots taken by third-party software (called via API) may not show up in the vCenter Snapshot Manager. Routinely check for snapshots via the command line.
Increasing the size of a disk with snapshots present can lead to corruption of the snapshots and potential data loss.
• Check for snapshots via the CLI before you increase the disk size.
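A PowerCLI sketch for the routine snapshot check (the SizeGB property assumes a reasonably recent PowerCLI; older builds expose SizeMB instead):

    # List snapshots older than 3 days, largest first, so stale ones stand out
    Get-VM | Get-Snapshot |
        Where-Object { $_.Created -lt (Get-Date).AddDays(-3) } |
        Select-Object VM, Name, Created, SizeGB |
        Sort-Object SizeGB -Descending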


vMotion
Can be encrypted — at a cost, certainly. If the vMotion network is isolated, then there is no need.
May lose 1 ping.
Inter-cluster vMotion is not the same as intra-cluster:
• Involves additional calls into vCenter, so there is a hard limit.
• Loses VM cluster properties (HA restart priority, DRS settings, etc.).


ESXi: Network configuration with UCS
If you are using Cisco UCS blades:
• 2x 10G or 4x 10G depending on blade model and mezzanine card.
All mezzanine card models support FCoE:
• Unified I/O
• Low latency
The Cisco Virtualized Adapter (VIC) supports:
• Multiple virtual adapters per physical adapter
• Ethernet & FC on the same adapter
• Up to 128 virtual adapters (vNICs)
• High performance, 500K IOPS
• Ideal for FC, iSCSI and NFS
Once you decide it’s Cisco, discuss the detail with Cisco.


What Is Auto Deploy
Without Auto Deploy:
• Host image tied to the physical server: each host needs a full install and config; not easy to recover a host; redundant boot disks/dedicated LUN.
• A lot of time/effort building hosts: deploying hosts is repetitive and tedious; heavy reliance on scripting; need to update for each new release.
• Configuration drift between hosts: config drift is always a concern; compromises HA/DR; managing drift consumes admin resources.
With Auto Deploy:
• Host image decoupled from the server: run on any server with matching hardware; config stored in a Host Profile; no boot disk.
• Agile deployment model: deploy many hosts quickly and efficiently; no pre/post-install scripts; no need to update with each release.
• Host state guaranteed: a single boot image shared across hosts; every reboot provides a consistent image; eliminates the need to detect/correct drift.


Auto Deploy Components
Component – Sub-Components – Notes:
• PXE Boot Infrastructure: DHCP server, TFTP server. Set up independently; the gPXE file comes from vCenter; can use the Auto Deploy appliance.
• Auto Deploy Server: rules engine, PowerCLI snap-in, web server. Build/manage rules; match a server to an Image and Host Profile; deploy the server.
• Image Builder: image profiles, PowerCLI snap-in. Combine the ESXi image with 3rd-party VIBs to create custom Image Profiles.
• vCenter Server: stores rules, Host Profiles, Answer Files. Provides the store for rules; host configs are saved in Host Profiles; custom host settings are saved in Answer Files.
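A sketch of the rules-engine workflow with the Auto Deploy/Image Builder snap-ins (the depot path, image profile, cluster name, and IP range are all placeholders):

    # Image Builder: load a depot, then tie an image profile and cluster to a host pattern
    Add-EsxSoftwareDepot "C:\depot\ESXi-depot.zip"
    New-DeployRule -Name "rule-prod" -Item "ESXi-5.0.0-standard", "Cluster01" `
                   -Pattern "ipv4=192.168.1.10-192.168.1.50"
    Add-DeployRule -DeployRule "rule-prod"   # activate the rule for newly booting hosts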


Storage DRS and DRS
Interactions:
• Storage DRS placement may impact VM–host compatibility for DRS.
• DRS placement may impact VM–datastore compatibility for Storage DRS.
Solution: datastore and host co-placement.
• Done at provisioning time by Storage DRS.
• Based on an integrated metric for space, I/O, CPU and memory resources.
• Overcommitted resources get more weight in the integrated metric.
• DRS placement proceeds as usual.
Example integrated metric across 3 datastores (Space – I/O – Connected CPU – Connected Memory – Integrated Metric):
• Datastore 1: High – High – Low – Low – Low
• Datastore 2: Low – Medium – Medium – Medium – Medium
• Datastore 3: High – Medium – High – High – High
But it is easier to architect it properly: map the ESX cluster to the datastore cluster manually.


Unified Fabric with Fabric Extender
• End-of-row deployment: multiple points of management (FC, Ethernet, blade switches) and a high cable count.
• Fabric Extender: unified fabric with a single point of management and reduced cables.
• Fiber between racks, copper in racks.


Storage IO Control
Suggested Congestion Threshold values. Avoid different settings for datastores sharing underlying resources:
• Use the same congestion threshold on A and B.
• Use comparable share values (e.g. use Low/Normal/High everywhere).
Storage Media – Congestion Threshold:
• Solid State Disks: 10–15 milliseconds
• Fibre Channel: 20–30 milliseconds
• SAS: 20–30 milliseconds
• SATA: 30–50 milliseconds
• Auto-tiered storage, full-LUN auto-tiering: vendor-recommended value; if none is provided, the recommended threshold above for the slowest storage.
• Auto-tiered storage, block-level/sub-LUN auto-tiering: vendor-recommended value; if none is provided, a combination of the thresholds above for the fastest and the slowest media types.
(Diagram: Datastores A and B, each with SIOC, sharing the same physical drives.)
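Setting this in PowerCLI (the datastore name is a placeholder; 30 ms matches the SAS row above):

    # Enable Storage IO Control and set the congestion threshold on one datastore
    Get-Datastore "DS-SAS-01" |
        Set-Datastore -StorageIOControlEnabled $true -CongestionThresholdMillisecond 30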


NAS & NFS
Two key NAS protocols:
• NFS (the “Network File System”). This is what we support.
• SMB (Windows networking, also known as “CIFS”).
Things to know about NFS:
• “Simpler” for people who are not familiar with SAN complexity.
• Removing a VM lock is simpler, as the lock is visible.
• When ESX accesses a VM disk file on an NFS-based datastore, a special .lck-XXX lock file is generated in the same directory as the disk file, to prevent other ESX hosts from accessing this virtual disk file.
• Don’t remove the .lck-XXX lock file; otherwise the running VM will not be able to access its virtual disk file.
• No SCSI reservations. This is a minor issue.
• 1 datastore will only use 1 path.
• Does Load Based Teaming work with it?
• For 1 GE, throughput will peak at ~100 MB/s. At a 16 KB block size, that’s roughly 7500 IOPS.
• The VMkernel in vSphere 5 only supports NFS v3, not v4 — over TCP only, no support for UDP.
• MSCS (Microsoft Clustering) is not supported with NAS.
• NFS traffic is sent in clear text by default, since ESX does not encrypt it. Use NAS storage over trusted networks only. Layer 2 VLANs are another good choice here.
• 10 Gb NFS is supported. So are jumbo frames — configure them end to end.
• Deduplication can save a sizeable amount. See speaker notes.


iSCSI
Use a virtual-port storage system instead of a plain active/active one.
• I’m not sure if they cost much more.
This adds 1 array type over traditional FC: the virtual-port storage system.
• Allows access to all available LUNs through a single virtual port.
• These are active-active arrays, but they hide their multiple connections behind a single port. ESXi multipathing cannot detect the multiple connections to the storage; ESXi does not see multiple ports on the storage and cannot choose which storage port it connects to. These arrays handle port failover and connection balancing transparently. This is often referred to as transparent failover.
• The storage system uses this technique to spread the load across the available ports.


iSCSI
Limitations:
• ESX/ESXi does not support iSCSI-connected tape devices.
• You cannot use virtual-machine multipathing software to perform I/O load balancing to a single physical LUN.
• A host cannot access the same LUN when it uses dependent and independent hardware iSCSI adapters simultaneously.
• Broadcom iSCSI adapters do not support IPv6 and jumbo frames. [e1: still true in vSphere 5??]
• Some storage systems do not support multiple sessions from the same initiator name or endpoint. Multiple sessions to such targets can result in unpredictable behavior.
Dependent and independent:
• A dependent hardware iSCSI adapter is a third-party adapter that depends on VMware networking and on the iSCSI configuration and management interfaces provided by VMware. This type of adapter can be a card, such as a Broadcom 5709 NIC, that presents a standard network adapter and iSCSI offload functionality on the same port. The iSCSI offload functionality appears in the list of storage adapters as an iSCSI adapter.
Error correction:
• To protect the integrity of iSCSI headers and data, the iSCSI protocol defines error-correction methods known as header digests and data digests. These digests pertain to the header and SCSI data being transferred between iSCSI initiators and targets, in both directions.
• Both parameters are disabled by default, but you can enable them. They impact CPU; Nehalem processors offload the iSCSI digest calculations, reducing the impact on performance.
Hardware iSCSI:
• When you use a dependent hardware iSCSI adapter, performance reporting for a NIC associated with the adapter might show little or no activity, even when iSCSI traffic is heavy. This happens because the iSCSI traffic bypasses the regular networking stack.
Best practice:
• Configure jumbo frames end to end.
• Use NICs with TCP segmentation offload (TSO).
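A PowerCLI sketch of the jumbo-frames piece (host, vSwitch, and vmk names are placeholders; the physical switch ports must be set to the same MTU for “end to end” to hold):

    # MTU 9000 on the vSwitch and on the iSCSI VMkernel port
    Get-VirtualSwitch -VMHost "esx01.lab.local" -Name "vSwitch1" |
        Set-VirtualSwitch -Mtu 9000 -Confirm:$false
    Get-VMHostNetworkAdapter -VMHost "esx01.lab.local" -VMKernel -Name "vmk1" |
        Set-VMHostNetworkAdapter -Mtu 9000 -Confirm:$false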


iSCSI & NFS: caveats when used together
Avoid using them together.
iSCSI and NFS have different HA models:
• iSCSI uses vmknics with no Ethernet failover, using MPIO instead.
• The NFS client relies on vmknics using link aggregation/Ethernet failover.
• NFS relies on the host routing table.
• NFS traffic will use the iSCSI vmknic and result in links without redundancy.
• Use of multiple-session iSCSI with NFS is not supported by NetApp.
• EMC supports it, but best practice is to have separate subnets and virtual interfaces.


NPIV
What it is:
• Allows a single Fibre Channel HBA port to register with the Fibre Channel fabric using several worldwide port names (WWPNs). This makes the HBA port appear as multiple virtual ports, each having its own ID and virtual port name. Virtual machines can then claim each of these virtual ports and use them for all RDM traffic.
• Note that this is WWPN, not WWNN.
• WWPN – World Wide Port Name.
• WWNN – World Wide Node Name.
• A single-port HBA typically has a single WWNN and a single WWPN (which may be the same).
• Dual-port HBAs may have a single WWNN to identify the HBA, but each port will typically have its own WWPN. However, they could also have an independent WWNN per port.
Design considerations:
• Only applicable to RDM.
• The VM does not get its own HBA, and no FC driver is required in the guest. It just gets an N-port, so it is visible from the fabric.
• The HBA and SAN switch must support NPIV.
• You cannot perform Storage vMotion or vMotion between datastores when NPIV is enabled. All RDM files must be in the same datastore.
• Still in place in v5.
(Screenshot callout: the first value is the WW Node Name; the second is the WW Port Name.)


2 TB VMDK barrier
You need to have a > 2 TB disk within a VM.
• There are some solutions, each with pros and cons.
• Say you need a 5 TB disk in 1 Windows VM.
• RDM (even with physical compatibility) and DirectPath I/O do not increase the virtual disk limit.
Solution 1: VMFS or NFS
• Create a datastore of 5 TB.
• Create 3 VMDKs and present them to Windows.
• Windows then combines the 3 disks into 1 disk.
• Limitation: certain low-level storage software may not work, as it needs 1 disk (not one combined by the OS).
Solution 3: iSCSI within the Guest
• Configure the iSCSI initiator in Windows.
• Configure a 5 TB LUN. Present the LUN directly to Windows, bypassing the ESX layer. You can’t monitor it.
• By default, it will only have 1 GE. NIC teaming requires a driver from Intel. Not sure if this is supported.
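A PowerCLI sketch of Solution 1 (the VM name and sizes are placeholders; three ~1.67 TB disks each stay under the 2 TB limit and add up to ~5 TB):

    # Add three VMDKs that Windows will combine into one spanned volume
    $vm = Get-VM "WinFileServer01"
    1..3 | ForEach-Object { New-HardDisk -VM $vm -CapacityGB 1707 }
    # Inside the guest, use Disk Management to span the three disks into one ~5 TB volume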


Storage: Queue Depth
When should you adjust the queue depth?
• If a VM generates more commands to a LUN than the LUN queue depth, adjust the device/LUN queue.
• Generally, with fewer very-high-IO VMs on a host, larger queues at the device driver will improve performance.
• If the VM’s queue depth is lower than the HBA’s, adjust the VMkernel.
Be cautious when setting queue depths:
• With too-large device queues, the storage array can easily be overwhelmed, and its performance may suffer with high latencies.
• The device-driver queue depth is global and is set per LUN.
• Change the device queue depth for all ESX hosts in the cluster.
Calculating the queue depth — to verify that you are not exceeding the queue depth of an HBA, use the following formula:
• Max queue depth of the HBA = device queue setting x number of LUNs on the HBA.
Queues exist at multiple levels:
• LUN queue for each LUN at the ESXi host. If this queue is full, the kernel queue fills up.
• LUN queue at the array level for each LUN. If this queue does not exist, the array writes straight to disk.
• Disk queue: the queue at the disk level, if there is no LUN queue.
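The formula, worked through (both inputs are assumed values):

    # HBA queue-depth check: device queue setting x number of LUNs on the HBA
    $deviceQueueDepth = 32    # per-LUN device queue depth (assumed)
    $lunsOnHba        = 20    # LUNs presented through this HBA (assumed)
    $deviceQueueDepth * $lunsOnHba   # 640 - must not exceed the HBA's queue depth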


Sizing the Storage Array
• For RAID 1 (it has an IO penalty of 2):
• 60 drives = ((7000 x 2 x 30%) + (7000 x 70%)) / 150 IOPS
• Why does RAID 5 have an IO penalty of 4?
RAID Level – IO Penalty:
• RAID 1: 2
• RAID 5: 4
• RAID 6: 6
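The same sizing as a reusable helper (150 IOPS per disk and the 30% write ratio are the slide's assumptions):

    # Spindle count from front-end IOPS, write ratio, and RAID write penalty
    function Get-SpindleCount($frontEndIops, $writeRatio, $raidPenalty, $iopsPerDisk = 150) {
        $backEnd = ($frontEndIops * $raidPenalty * $writeRatio) +
                   ($frontEndIops * (1 - $writeRatio))
        [math]::Ceiling($backEnd / $iopsPerDisk)
    }
    Get-SpindleCount 7000 0.3 2   # RAID 1 example above -> 61 (the slide rounds to 60)
    Get-SpindleCount 7000 0.3 4   # the same workload on RAID 5 -> 89 drives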


Storage: Performance Monitoring
Get a baseline of your environment during a “normal” IO time frame.
• Capture as many data points as possible for analysis.
• Capture data from the SAN fabric, the storage array, and the hosts.
Which statistics should be captured:
• Max and average read/write IOPS
• Max and average read/write latency (ms)
• Max and average throughput (MB/sec)
• Read and write percentages
• Random vs. sequential
• Capacity – total and used
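A sketch of pulling part of that baseline from vCenter stats with PowerCLI (the host name and counter names are assumptions; which counters are recorded depends on the statistics level):

    # Max and average datastore latency for one host over the past week
    Get-VMHost "esx01.lab.local" |
        Get-Stat -Stat "datastore.totalReadLatency.average",
                       "datastore.totalWriteLatency.average" `
                 -Start (Get-Date).AddDays(-7) |
        Measure-Object -Property Value -Average -Maximum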


SCSI Architecture Model (SAM)


Fibre Channel Multi-Switch Fabric
(Diagram: two fabric switches joined by E_Ports. Nodes A–D attach to Fabric Switch 1 and nodes E–H to Fabric Switch 2, each node’s N_Port connecting to an F_Port on its switch; every port has a transmitter (TR) and receiver (RC) pair.)


Backup: VADP vs Agent-based
ESX has 23 VMs, each around 40 GB.
• All VMs are idle, so this CPU/disk load is purely from backup.
• CPU peak is >10 GHz (just above 4 cores).
• Disk peak is >1.4 Gbps of IO, almost 50% of a 4 Gb HBA.
After VADP, both CPU and disk drop to negligible levels.


VADP: Adoption Status
This is as at June 2010. Always check with the vendor for the most accurate data.
Partner – Product – Version – Integration status:
• CA – ArcServe – 12.5 w/patch – Released
• Commvault – Simpana – 8.0 SP5 – Released
• EMC – Avamar – 5.0 – Released
• EMC – Networker – 7.6.x – Not yet
• HP – Data Protector – 6.1.1 with patch – Not yet
• IBM – Tivoli Storage Manager – 6.2.0 – Released
• Symantec – Backup Exec – 2010 – Released
• Symantec – Backup Exec System Recovery – 2010 – Released
• Symantec – NetBackup – 7.0 – Released
• Vizioncore – vRanger Pro – 4.2 – Released
• Veeam – Backup & Replication – 4.0 – Released


Partition alignment
Affects every protocol and every storage array:
• VMFS on iSCSI, FC, & FCoE LUNs
• NFS
• VMDKs & RDMs with NTFS, EXT3, etc.
VMware VMFS partitions that align to 64 KB track boundaries give reduced latency and increased throughput.
• Check with the storage vendor whether there are any recommendations to follow.
• If no recommendations are made, use a starting block that is a multiple of 8 KB.
This is the responsibility of the Storage team, not the vSphere team.
On NetApp:
• VMFS partitions are automatically aligned, with a starting block in multiples of 4 KB.
• MBRscan and MBRalign tools are available to detect and correct misalignment.
(Diagram: three layers that must align — guest file-system clusters (4 KB–1 MB) on VMFS blocks (1 MB–8 MB) on array chunks (4 KB–64 KB).)


Tools: Array-specific integration
The example below is from NetApp. Other storage partners have integration capability too. Always check with the respective product vendor for the latest information.


Tools: Array-specific integration
Management of the array can be done from the vSphere client. Below is from NetApp.
Ensure storage access is not accidentally given to the vSphere admin; use RBAC.


Data Recovery
No integration with tape.
• Manual tape backup can be done.
If a third-party solution is being used to back up the deduplication store, those backups must not run while the Data Recovery service is running. Do not back up the deduplication store without first powering off the Data Recovery backup appliance or stopping the datarecovery service with the command: service datarecovery stop.
Some limits:
• 8 concurrent jobs on the appliance at any time (backup & restore).
• An appliance can have at most 2 dedupe-store destinations, due to the overhead involved in deduping.
• VMDK- or RDM-based deduplication stores of up to 1 TB, or CIFS-based deduplication stores of up to 500 GB.
• No IPv6 addresses.
• No multiple backup appliances on a single host.
VDR cannot back up VMs:
• that are protected by VMware Fault Tolerance.
• with 3rd-party multi-pathing enabled where shared SCSI buses are in use.
• with raw device mapped (RDM) disks in physical compatibility mode.
• Data Recovery can back up VMware View linked clones, but they are restored as unlinked clones.
Using Data Recovery to back up Data Recovery backup appliances is not supported.
• This should not be an issue. The backup appliance is a stateless device, so there is not the same need to back it up as with other types of VMs.


VMware Data Recovery
We assume the following requirements:
• Back up to an external array, not the same array.
• The external array can be used for other purposes too, so the 2 arrays back each other up. How do we ensure write performance, as the array is shared?
• 1x a day backup. No need for multiple backups per day of the same VM.
Considerations:
• Bandwidth: need a dedicated NIC to the Data Recovery VM.
• Performance: need to reserve CPU/RAM for the VM?
• Group like VMs together; it maximises dedupe.
• Destination: RDM LUN presented via iSCSI to the appliance. See the picture below (hard disk 2).
• Not using VMDK format, to enable LUN-level operations.
• Not using CIFS/SMB, as the deduplication store is 0.5 TB vs 1 TB on RDM/VMDK.
• Space calculation: need to find a tool to help estimate the disk requirements.


Mapping: Datastore – VM
Criteria to use when placing a VM into a tier:
• How critical is the VM? Importance to business.
• What are its performance and availability requirements?
• What are its point-in-time restoration requirements?
• What are its backup requirements?
• What are its replication requirements?
Have a document that lists which VM resides on which datastore group.
• The content can be generated using PowerCLI or Orchestrator, which shows datastores and their VMs — see the sketch below.
• Example tool: Quest PowerGUI.
• While it rarely happens, you can’t rule out datastore metadata getting corrupted. When that happens, you want to know which VMs are affected.
A VM normally changes tiers throughout its life cycle:
• Criticality is relative and might change for a variety of reasons, including changes in the organization, operational processes, regulatory requirements, disaster planning, and so on.
• Be prepared to do Storage vMotion. Always test it first, so you know how long it takes in your specific environment. VAAI is critical, else the traffic will impact your other VMs.
(Sample table: Datastore Group – VM Name – Size (GB) – IOPS; total: 12 VMs, 1 TB, 1400 IOPS.)
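A PowerCLI sketch for generating that mapping (the output file name is a placeholder; the ProvisionedSpaceGB property assumes a reasonably recent PowerCLI):

    # Dump datastore -> VM -> size into a CSV for the design document
    Get-Datastore | ForEach-Object {
        $ds = $_
        Get-VM -Datastore $ds | Select-Object @{N='Datastore';E={$ds.Name}}, Name,
            @{N='SizeGB';E={[math]::Round($_.ProvisionedSpaceGB, 0)}}
    } | Export-Csv datastore-vm-map.csv -NoTypeInformation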


RDM
Use sparingly.
• VMDK is more portable, easier to manage, and easier to resize.
• VMDK and RDM have similar performance.
Physical RDM:
• Can’t take snapshots.
• No Storage vMotion, but vMotion works.
• Physical mode specifies minimal SCSI virtualization of the mapped device, allowing the greatest flexibility for SAN management software.
• The VMkernel passes all SCSI commands to the device, with one exception: the REPORT LUNS command is virtualized so that the VMkernel can isolate the LUN to the owning virtual machine.
Virtual RDM:
• Specifies full virtualization of the mapped device. Features like snapshots work.
• The VMkernel sends only READ and WRITE to the mapped device. The mapped device appears to the guest operating system exactly the same as a virtual disk file in a VMFS volume. The real hardware characteristics are hidden.


Human Experts vs Storage DRS
2 VMware performance engineers competed against Storage DRS to balance the following:
• 13 VMs: 3 DVD Store, 2 Swingbench, 4 mail servers, 2 OLTP, 2 web servers.
• 2 ESX hosts and 3 storage devices (different FC LUNs in shades of blue).
Storage DRS provides the lowest average latency while maintaining similar throughput. Why did the human experts lose?
• Too many numbers to crunch, too many dimensions to the analysis. The humans took a couple of hours to think it through. Why bother anyway?
(Charts: latency (ms) and IOPS for space-balanced placement, BASIL, Expert 1, Expert 2, and Storage DRS; green = average latency in ms.)


Alternative Backup Method
The VMware ecosystem may provide new ways of doing backup. The example below is from NetApp.
NetApp SnapManager for Virtual Infrastructure (SMVI):
• In a large cloud, the SMVI server should sit on a separate VM from vCenter.
• While it has no performance requirement, this is best from a segregation-of-duty point of view.
• Best practice is to keep vCenter clean & simple. vCenter plays a much more critical role in larger environments, where plug-ins rely on vCenter uptime.
• Allows for consistent array snapshots & replication.
• Combine with other SnapManager products (SM for Exchange, SM for Oracle, etc.) for application consistency.
• Exchange and SQL work with VMDK.
• Oracle, SharePoint, and SAP require RDM.
• Can be combined with SnapVault for vaulting to disk.
• 3 levels of data protection:
• On-disk array snapshots for fast backup (seconds) & recovery (up to 255 snapshot copies of any datastore can be kept with no performance impact).
• Vaulting to a separate array for better protection, with slightly slower recovery.
• SnapMirror to offsite for DR purposes.
• Serves to minimize the backup window (and the frozen VMDK while changes are applied).
• Option to not create a VM snapshot, to take crash-consistent array snapshots.


(Decision flowchart: does the switch support multi-switch link aggregation?
• Yes → use one VMkernel port & IP subnet; use multiple links with IP-hash load balancing on the NFS client (ESX) and on the NFS server (array); the storage needs multiple sequential IP addresses.
• No → use multiple VMkernel ports & IP subnets; use the ESX routing table; the storage needs multiple sequential IP addresses.)


vMotion Performance on 1 GbE vs 10 GbE
Scenario – CPU %USED – Web Traffic:
• Idle VM: 0 – 0 Gbps
• Moderately loaded VM: 140% – 2.5 Gbps
• Heavily loaded VM: 325% – 6 Gbps
Duration of vMotion (lower is better):
Idle/moderately loaded VM scenarios:
• Reductions in duration when using 10 GbE vs 1 GbE, on both vSphere 4.1 and vSphere 5.
• Consider switching from a 1 GbE to a 10 GbE vMotion network.
Heavily loaded VM scenario:
• Reductions in duration when using 10 GbE vs 1 GbE.
• 1 GbE on vSphere 5: SDPS kicked in, resulting in zero connection drops. vMotion in vSphere 5 never fails due to memory-copy convergence issues.
• 1 GbE on vSphere 4.1: memory-copy convergence issues led to network connection drops.


Impact on Database Server Performance During vMotion
Performance impact is minimal during the memory-trace phase in vSphere 5.
Throughput was never zero in vSphere 5 (switch-over time < half a second).
Time to resume the normal level of performance is about 2 seconds better in vSphere 5.
(Charts: orders-per-second over time. vSphere 4.1: vMotion duration 23 sec. vSphere 5: vMotion duration 15 sec. Both show impact during the guest-trace period and the switch-over period.)


vMotion Network Bandwidth Usage During Evacuation


Network Settings
Load-Based Teaming:
• We will not use it, as we are using 1 GE in this design.
• If you use 10 GE, the default settings are a good starting point. They give VMs 2x the share versus the hypervisor.
NIC Teaming:
• If the physical switch can support it, use IP-Hash. This needs a stacked switch — basically, switches that can be managed as if they were 1 bigger switch; Multi-chassis EtherChannel switch is another name.
• IP-Hash does not help if the source and destination addresses are constant. For example, vMotion always uses 1 path only, as the source–destination pair is constant. The connection from the VMkernel to the NFS server is constant too.
• If the physical switch can’t support it, use Source Port. You need to balance this manually, so that not all VMs go via the same port.
VLAN:
• We are using VST. The physical switch must support VLAN trunking.
PVLAN:
• Not used in this design. Most physical switches are PVLAN-aware already.
• Packets will be dropped, or security can be compromised, if the physical switch is not PVLAN-aware.
Beacon Probing:
• Not enabled, as my design only has 2 NICs per vSwitch. ESXi will flood both NICs if it has only 2.
Review default settings (see the sketch below):
• Change Forged Transmits to Reject.
• Change MAC address changes to Reject.
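A hedged PowerCLI sketch of tightening those defaults (cmdlet availability varies by PowerCLI version; remember the MS NLB port group earlier needs Forged Transmits = Accept, so exclude it):

    # Reject forged transmits and MAC-address changes on all standard vSwitches
    Get-VMHost | Get-VirtualSwitch -Standard | Get-SecurityPolicy |
        Set-SecurityPolicy -ForgedTransmits $false -MacChanges $false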


VLAN
Native VLAN:
• Native VLAN means the switch can receive and transmit untagged packets.
• VLAN hopping occurs when an attacker with authorized access to one VLAN creates packets that trick physical switches into transmitting the packets to another VLAN that the attacker is not authorized to access. The attacker forms an ISL or 802.1Q trunk port to the switch by spoofing DTP messages, getting access to all VLANs; or the attacker sends double-tagged 802.1Q packets to hop from one VLAN to another, sending traffic to a station it would otherwise not be able to reach.
• This vulnerability usually results from a switch being misconfigured for native VLAN, as it can receive untagged packets.
Local vSwitches do not support native VLAN; the Distributed vSwitch does.
• All data passed on these switches is appropriately tagged. However, because physical switches in the network might be configured for native VLAN, VLANs configured with standard switches can still be vulnerable to VLAN hopping.
• If you plan to use VLANs to enforce network security, disable the native VLAN feature for all switches unless you have a compelling reason to operate some of your VLANs in native mode. If you must use native VLAN, see your switch vendor’s configuration guidelines for this feature.
VLAN 0: the port group can see only untagged (non-VLAN) traffic.
VLAN 4095: the port group can see traffic on any VLAN while leaving the VLAN tags intact.


Distributed Switch
Design considerations:
• Version upgrade
• ?? Upgrade procedure


Feature Comparison Among Switches (partial)
Feature – vSS – vDS – Cisco N1K:
• VLAN: yes – yes – yes
• Port Security: yes – yes – yes
• Multicast Support: yes – yes – yes
• Link Aggregation: static – static – LACP
• Traffic Management: limited – yes – yes
• Private VLAN: no – yes – yes
• SNMP, etc.: no – no – yes
• Management Interface: vSphere Client – vSphere Client – Cisco CLI
• NetFlow: no – yes – yes


vNetwork Standard Switch: A Closer Look
• Port Groups are policy definitions for a set or group of ports, e.g. VLAN membership, port security policy, teaming policy, etc.
• Uplinks (physical NICs) are attached to the vSwitch.
• The vSS is defined on a per-host basis, from Home > Inventory > Hosts and Clusters.


vNetwork Distributed Switch: A Closer Look
• DV Port Groups span all hosts covered by the vDS and are groups of ports defined with the same policy, e.g. VLAN, etc.
• The DV Uplink Port Group defines uplink policies.
• DV Uplinks abstract the actual physical NICs (vmnics) on hosts; vmnics on each host are mapped to dvUplinks.
The vDS operates off the local cache — no operational dependency on the vCenter server.
• The host local cache is under /etc/vmware/dvsdata.db and /vmfs/volumes/<datastore>/.dvsdata.
• The local cache is a binary file. Do not hand-edit it.


Nexus 1000V: VSM
VM properties:
• Each requires 1 vCPU and 2 GB RAM. This must be reserved, so it will impact the cluster slot size.
• Use “Other Linux 64-bit” as the Guest OS.
• Each needs 3 vNICs.
• Requires the Intel e1000 network driver. Because no VMware Tools are installed?
Availability:
• 2 VSMs are deployed in an active–standby configuration, with the first VSM functioning in the primary role and the other in a secondary role.
• If the primary VSM fails, the secondary VSM takes over.
• They do not use the VMware HA mechanism.
Unlike crossbar-based modular switching platforms, the VSM is not in the data path.
• General data packets are not forwarded to the VSM to be processed, but are switched by the VEM directly.


Nexus 1000V: VSM has 3 interfaces for “mgmt”
Control interface:
• VSM–VEM communication, and VSM–VSM communication.
• Handles low-level control packets such as heartbeats, as well as any configuration data that needs to be exchanged between the VSM and VEM. Because of the nature of the traffic it carries, the control interface is the most important interface in the Nexus 1000V.
• Requires very little bandwidth (<10 KBps) but demands absolute priority.
• Always the first interface on the VSM; usually labeled “Network Adapter 1” in the VM network properties.
Management interface:
• VSM–vCenter communication.
• Appears as the mgmt0 port on a Cisco switch. As with the management interfaces of other Cisco switches, an IP address is assigned to mgmt0.
• Does not necessarily require its own VLAN. In fact, you could use the same VLAN as vCenter.
Packet interface:
• Carries network packets that need to be coordinated across the entire Nexus 1000V. Only two types of control traffic: Cisco Discovery Protocol and Internet Group Management Protocol (IGMP) control packets.
• Always the third interface on the VSM; usually labeled “Network Adapter 3” in the VM network properties.
• The bandwidth required for the packet interface is extremely low, and its use is very intermittent. If Cisco Discovery Protocol and IGMP are turned off, there is no packet traffic at all. The importance of this interface is directly related to the use of IGMP. If IGMP is not deployed, this interface is used only for Cisco Discovery Protocol, which is not considered a critical switch function.


vNetwork Distributed Portgroup Binding
Port binding: the association of a virtual adapter with a dvPort.
• Static binding: the default configuration. The port is bound when the vNIC connects to the portgroup. Use static binding for best performance and scale.
• Dynamic binding: use when #VM adapters > #dvPorts in a portgroup and not all VMs are active.
• Ephemeral binding: use when #VMs > #dvPorts and port history is not relevant. Max ports is not enforced.
(Diagram: the dvPort is created on the host’s proxy switch and bound to the vNIC.)


Network Stack Comparison
Good attributes of FCoE:
• Has less overhead than FCIP or iSCSI. See the diagram below.
• FCoE is managed like FC at the initiator, target, and switch level.
• Maps FC frames over an Ethernet transport.
• Enables Fibre Channel to run over a lossless Ethernet medium.
• Single adapter, less device proliferation, lower power consumption.
• No gateways required.
• NAS certification: FCoE CNAs can be used to certify NAS storage. Existing NAS devices listed on the VMware SAN Compatibility Guide do not require recertification with FCoE CNAs.
Mixing of technologies always increases complexity.
(Diagram: protocol stacks over the physical wire — SCSI carried via iSCSI (TCP/IP/Ethernet), FCIP (FCP over TCP/IP/Ethernet), FCoE (FCP over Ethernet), and native FC (FCP).)


Physical Switch Setup
Spanning Tree Protocol:
• A vSwitch won’t create loops.
• vSwitches can’t be linked.
• A vSwitch does not take an incoming packet from one pNIC and forward it as an outgoing packet to another pNIC.
Recommendations:
1. Leave STP on in the physical network.
2. Use “portfast” on ESX-facing ports.
3. Use “bpduguard” to enforce the STP boundary.
(Diagram: VMs VM0 and VM1 with MACs a–c behind vSwitches, attached to the physical switches.)


1 GE switch
Sample from Dell.com (US site, not Singapore): around US$5K. Need a pair.
48 ports:
• Each ESXi needs around 7–13 ports (inclusive of the iLO port).


10 GE switch
Sample from Dell.com (US site, not Singapore): around US$10–11K. Need a pair.
24 ports:
• Each ESXi only needs 2 ports.
• The iLO port can connect to an existing GE/FE switch.
Compared with the 1 GE switch, the price is very close. It might be even cheaper in TCO.


vSphere vNetwork
Multiple security zones (with vShield Edge protecting the vApp network).
(Diagram: the vCD “logical” view vs the vSphere “operational” view. An Organization contains a vApp with Web1 and DB; the vApp network connects through an Org network to the External network, with a port group at each layer. Reminder: this is self-service (UI/API); vCD will deploy it.)


vSphere vNetwork
Two-tier application (with vShield App protecting the back end).
(Diagram: the vCD “logical” view vs the vSphere “operational” view. Web1 sits in a front-end enclave and DB in a back-end enclave, connected via port groups on the Org network to the External network. Reminder: this is NOT self-service (today); it is vShield Admin config (today).)


vShield Edge in short
(Diagram: a vShield Edge virtual appliance with one vNIC in each of two port groups, bridging L2 Network-A (Security Zone 1) and L2 Network-B (Security Zone 2). Services: firewall, routing, VPN, NAT, DHCP, and load balancing. Shown in both the vCD “logical” view and the vSphere “operational” view.)


vShield App in short
(Diagram: vShield App is a kernel module that firewalls traffic between vNICs on a port group within a single L2 network, separating Security Zone 1 from Security Zone 2. Shown in both the vCD “logical” view and the vSphere “operational” view.)


Security Compliance: PCI DSS
PCI applies to all systems “in scope”:
• Segmentation defines scope.
• What is within scope? All systems that store, process, or transmit cardholder data, and all system components that are in or connected to the cardholder data environment (CDE).
The DSS is vendor-agnostic:
• It does not seem to cover virtualisation.
Relevant statements from the PCI DSS:
• “If network segmentation is in place and will be used to reduce the scope of the PCI DSS assessment, the assessor must verify that the segmentation is adequate to reduce the scope of the assessment.” (p. 6)
• “Network segmentation can be achieved through internal network firewalls, routers with strong access control lists or other technology that restricts access to a particular segment of a network.” (p. 6)
• “At a high level, adequate network segmentation isolates systems that store, process, or transmit cardholder data from those that do not. However, the adequacy of a specific implementation of network segmentation is highly variable and dependent upon such things as a given network's configuration, the technologies deployed, and other controls that may be implemented.” (p. 6)
• “Documenting cardholder data flows via a dataflow diagram helps fully understand all cardholder data flows and ensures that any network segmentation is effective at isolating the cardholder data environment.” (p. 6)


Security Compliance: PCI DSS
Added complexity from virtualisation:
• System boundaries are not as clear as their non-virtual counterparts.
• Even the simplest network is rather complicated.
• More components, more complexity, more areas for risk.
• Digital forensic risks are more complicated.
• More systems are required for logging and monitoring.
• More access control systems.
• Memory can be written to disk.
• VM escape?
• Mixed-mode environments.
(Diagram: sample virtualized CDE.)


PCI: Virtualization Risks by Requirement

Requirement 3: Protect stored cardholder data.
• Unique risks to virtual environments: memory that was previously only volatile may now be written to disk (e.g., when taking snapshots of systems). How are memory resources and other shared resources protected from access? How do you know that there are no remnants of stored data?
• How you can address them: apply the data retention and disposal policy to CDE VMs, snapshots, and any other components that could store CHD, encryption keys, passwords, etc. Document the storage configuration and SAN implementation. Document any encryption process, encryption keys, and encryption key management used to protect stored CHD. Fully isolate the vMotion network to ensure that as VMs are moved from one physical server to another, memory and other sensitive running data cannot be sniffed or logged.

Requirement 7: Restrict access to cardholder data by business need-to-know.
• Unique risks: access controls are more complicated. In addition to hosts, there are now additional applications, virtual components, and storage of these components (i.e., what protects their access while they are waiting to be provisioned?).
• How you can address them: carefully document all the access controls in place, and ensure that there are separate access controls for different “security zones.” Document all the types of Role Based Access Control (RBAC) used for access to physical hosts, virtual hosts, physical infrastructure, virtual infrastructure, logging systems, IDS/IPS, multi-factor authentication, and console access. Ensure that physical hosts do not rely on virtual RBAC systems that they themselves host.

Requirement 9: Restrict physical access to cardholder data.
• Unique risks: risks are greater, since physical access to the hypervisor could lead to logical access to every component.
• How you can address them: consider physical protection at your D/R site as well. Address the risk that physical access to a single server or SAN can result in logical access to hundreds of servers.

Requirement 10: Track and monitor all access to network resources and cardholder data.
• Unique risks: some virtual components do not have the robust logging capabilities of their physical counterparts. Many systems are designed for troubleshooting and do not create detailed event and system logs that provide sufficient detail to meet PCI logging requirements and assist with a digital forensic investigation.
• How you can address them: PCI requires logs to be stored in a central location that is independent of the systems being logged. Establish a unified, centralized log management solution that cannot be altered or disabled by access to the hypervisor. ESX logs should not be stored on a virtual host on the same ESX server, as compromising the ESX server could compromise the logs. Be prepared to demonstrate that the logs are forensically sound. (See the sketch below.)
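For Requirement 10, a common first step is pointing every ESXi host at an independent syslog target. A minimal PowerCLI sketch, assuming hypothetical vCenter and collector names (cmdlet availability varies by PowerCLI version, so verify against your toolkit):

```powershell
# Point every ESXi host at the independent central syslog collector
Connect-VIServer -Server 'vc01.corp.local'        # hypothetical vCenter name
Get-VMHost | Set-VMHostSysLogServer -SysLogServer 'syslog01.corp.local:514'

# Verify the setting on each host
Get-VMHost | Select-Object Name, @{N='Syslog'; E={ $_ | Get-VMHostSysLogServer }}
```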


71 Confidential

vNetwork Appliances

Advantages
• Flexible deployment
• Scales naturally as more ESX hosts are deployed

Architecture
• A fastpath agent filters packets in the datapath, transparent to the vSwitch
• It can optionally forward packets to a VM (the slowpath agent)

Solutions
• VMware vShield, Reflex, Altor, Checkpoint, etc.

Lightweight filtering in “Fast Path” agent

Heavyweight filtering in “Slow Path” agent


72 Confidential

vShield

Setup perimeter services
• Install vShield Edge (External – Internal)

Provision services
• Firewall
• NAT, DHCP
• VPN
• Load Balancer

Setup internal trust zones
• Install vShield App (vDS / dvfilter setup)
• Secure access to shared services

Create interior zones
• Segment the internal network
• Wire up VMs

[Diagram: an Internet-facing vShield Edge in front of an Org vDC with DMZ, APP, DB, and Shared Services zones; a vShield App instance runs on each vSphere host behind the Virtual Distributed Switch.]


73 Confidential

vShield and Fail-Safe
http://www.virtualizationpractice.com/blog/?p=9436


74 Confidential

Security
Steps to delete “Administrator” from vCenter
• Move it to the “No Access” role. Protect it with an alarm in case this is modified.
• All other plug-ins or management products that use Administrator will break.

Steps to delete “root” from ESX
• Replace it with another ID (can it be tied to AD?).
• The manual warns against removing this user.
• Create another ID with root group membership.
• vSphere 4.1 now supports MS AD integration.


75 Confidential

VCM’s Free vSphere Compliance Checker (Download)

• Checks up to 5 ESX hosts
• ESX-related hardening rules
• VM shell-related hardening rules

http://www.vmware.com/products/datacenter-virtualization/vsphere-compliance-checker/overview.html


76 Confidential

Security: P2V issue – loss of physical control

Physical security (static) vs. cloud security in the virtual data center (dynamic):

• Perimeter security was achieved using physical firewalls, IPS, and VPN → with the mobility of VMs, this is no longer sufficient.
• Interior security was achieved using VLAN- or subnet-based policies → this leads to VLAN sprawl and complex policies.
• Endpoints are protected with AV agents → this results in more AV agents in each VM, impacting the host and other VMs.
• Physical organizational boundaries or security zones can be achieved easily with physical appliances → virtually, they can be achieved only with different subnets, resulting in VLAN sprawl; sharing of the same physical hosts by multiple VMs results in complex multi-tenancy policies to enable logical boundaries.
• Opaque with poor visibility → greater transparency and visibility, given the tools are virtualization aware.


77 Confidential

Windows VM monitoring
Use the new Perfmon counters provided (installed with VMware Tools).

The built-in Windows counters are misleading in a virtual environment.
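A quick way to read those Tools-provided counters from inside the guest, as a sketch; the exact counter names vary by VMware Tools version, so treat the paths below as examples:

```powershell
# Run inside the Windows guest (VMware Tools installed).
# VMware Tools adds 'VM Memory' and 'VM Processor' Perfmon objects that
# report what the hypervisor actually grants the VM.
Get-Counter -Counter '\VM Memory\Memory Active in MB',
                     '\VM Memory\Memory Ballooned in MB' |
    Select-Object -ExpandProperty CounterSamples |
    Format-Table Path, CookedValue -AutoSize
```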


78 Confidential

Time Keeping and Time Drift
Critical to have the same time on all ESX hosts and VMs.

All VMs & ESX hosts should get time from the same single internal NTP server.
• Synchronize that NTP server with an external stratum 1 time source.

The internal NTP server should get time from a reliable external server or a real atomic clock.
• There should be 2 sources.

Do not virtualise the NTP server.
• As a VM, it may experience time drift if the ESXi host is under resource constraint.

Physical candidates for the NTP server:
• Backup server (with vStorage API for Data Protection)
• Cisco switch

See the MS AD slide for MS AD-specific impact. (A PowerCLI sketch for pointing hosts at the NTP server follows.)
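A minimal PowerCLI sketch for the host side; the vCenter and NTP server names are hypothetical:

```powershell
# Point every host at the internal NTP server, then enable and start ntpd
Connect-VIServer -Server 'vc01.corp.local'              # hypothetical vCenter
Get-VMHost | Add-VMHostNtpServer -NtpServer 'ntp01.corp.local'
Get-VMHost | Get-VMHostService |
    Where-Object { $_.Key -eq 'ntpd' } |
    Set-VMHostService -Policy On                        # start at boot
Get-VMHost | Get-VMHostService |
    Where-Object { $_.Key -eq 'ntpd' } |
    Start-VMHostService
```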


79 Confidential

Linux
New features in the ext4 filesystem:
• Extents reduce fragmentation
• Persistent preallocation
• Delayed allocation
• Journal checksumming
• fsck is much faster

RHEL 6 & ext4 properly align filesystems.

Tips: use the latest OS
• Constant improvements
• Built-in paravirtual drivers
• Better timekeeping
• Tickless kernel: on-demand timer interrupts, so idle systems stay totally idle
• Hot-add capabilities
  • Reduces the need to oversize “just in case”
  • Might need to tweak udev. See VMware KB 1015501. (A sketch for enabling hot-add follows this list.)
• Watch for jobs that happen at the same time across VMs
  • Monitoring (every 5 minutes)
  • Log rotation (4 AM)
• No need to keep sysstat & sar running. Use vCenter metrics instead.
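Hot-add must be enabled per VM while it is powered off. A sketch using the vSphere API's VirtualMachineConfigSpec from PowerCLI; the VM name is hypothetical:

```powershell
# VM must be powered off before the hot-add flags can be changed
$vm = Get-VM -Name 'rhel6-app01'            # hypothetical VM name
$spec = New-Object VMware.Vim.VirtualMachineConfigSpec
$spec.CpuHotAddEnabled    = $true           # allow vCPU hot-add
$spec.MemoryHotAddEnabled = $true           # allow memory hot-add
$vm.ExtensionData.ReconfigVM($spec)         # apply via the vSphere API
```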


80 Confidential

Guest Optimization: Swap File Location
The swap file for Windows guests should be on a separate dedicated drive.

• Cons:
  • This requires another vmdk file, plus management overhead, as it has to be resized when RAM changes too.
• Pros:
  • No need to back it up.
  • Keeps application traffic and OS disk traffic separate from the page file traffic, thereby increasing performance.
• Swap partition equal to 1.5x RAM
  • 1.5x is the default recommendation for best performance (knowing nothing about the application).
  • Monitor the page file usage to see how much of it is actually being used. In the old days, whatever memory was installed was what you were committed to, and making a change was an act of congress; look to leverage the virtual flexibility and modify for best usage. (See the sketch below.)
  • http://support.microsoft.com/kb/889654 – Microsoft limits on page files.

Microsoft’s memory recommendations and definition of physical address extension explained: http://support.microsoft.com/?kbid=555223
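To check how much of the page file is actually used before resizing, a sketch using CIM from inside the guest:

```powershell
# Run inside the Windows guest: allocated vs. actually used page file (values in MB)
Get-CimInstance -ClassName Win32_PageFileUsage |
    Select-Object Name, AllocatedBaseSize, CurrentUsage, PeakUsage
```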



81 Confidential

Infrastructure VM

Purpose | CPU | RAM | Remarks

Admin Client (Win 7 32-bit), for higher security | 1 | 2 GB | Dedicated for vSphere management/administration purposes. vSphere Client has plug-ins, so it’s more convenient to have a ready-made client. Higher security than the typical administrator’s personal notebook/desktop, which serves many other purposes (email, internet browsing, MS Office, iTunes, etc). Can be placed in the Management LAN; from your laptop, do an RDP jump to this VM. Suitable for SSLF. Useful when covering during leave, etc., but do not use shared IDs. Software installed: Microsoft PowerShell (no need to install the CLI as it’s in vMA), VMware Orchestrator.

vCenter (Win08 R2 64-bit Ent Edition) | 2 | 4 GB | 1 vCPU is not sufficient. 2 vCPU, 4 GB RAM, and a 5 GB data drive are enough for 50 ESX hosts and 500 VMs. No need to over-allocate, especially on vCPU and RAM. Ensure MS IIS is removed prior to vCenter installation. ~1.5 MB of RAM per VM and ~3 MB of RAM per managed host. Avoid installing vCenter on a Domain Controller, but deploy it on a system that is part of the AD domain; this facilitates security and flexibility in setting up VC roles, permissions, and DB authentication.

IT Database Server (Win08 R2 64-bit Ent Edition) | 2 | 4 GB | SQL Server 2005 64-bit. See next slide. Need to plan carefully.

IT Database Server (Win08 R2 64-bit Ent Edition) | 2 | 4 GB | SQL Server 2008 64-bit. See next slide. Need to plan carefully.

Update Manager (Win08 R2 64-bit?) | 1 | 4 GB | 50 GB of D:\ drive for the Patch Store is sufficient. Use thin provisioning. See the “VMware Update Manager Performance Best Practices” VMworld session.

vShield | 1 | 1 GB | Tier 1, as traffic goes through it. 1 per ESXi-host vSwitch (serving VMs, not VMkernel).

vShield Manager | 1 | 1 GB | Management console only.

Patch Management Server | 1 | 4 GB | I’m assuming the client has the tool in place and wants to continue.


82 Confidential

Infrastructure VM

Purpose | CPU | RAM | Resource Pool Tier | Remarks

vMA | 1 | 1 GB | 3 | Management console only.

SRM 5 | 2 | 2 GB | 2 | Recommend separating it from VC.

Converter | 1 | 2 GB | 1 | If possible, do not run it in the Production Cluster, so it does not impact ESX utilisation. Not set to tier 3, as you want the conversion process to be completed as soon as possible.

vShield Security VM from partner | 2 | 2 GB | 1 | Tier 1, as it’s in the data path. 1 per ESXi host.

Cisco Nexus VSM (if you use Nexus) | 1 | 2 GB | 3 | Management console only, not data path. Requires 100% reservation, so this impacts the cluster slot size.

Cisco Nexus VSM (HA) | 1 | 2 GB | 3 | The HA is managed by Cisco Nexus itself, not by VMware.

Database (Bit) | Upd Mgr | SRM | vCenter | Orchestrator | View 5
SQL Server 2008 Std Ed (not SP1), 64-bit | No | Yes | Yes | Yes | Needs SP1
SQL Server 2008 Ent Ed (not SP1), 64-bit | Yes | Yes | Yes | Yes | Needs SP1
Oracle 10g Enterprise Edition R2, 64-bit | Yes | Yes | Yes | Yes | Yes
Oracle 11g Standard Edition R1 (not R2), 32-bit | Yes | Yes | Yes | Yes | Yes


83 Confidential

Capacity Planner
Version 2.8 does not yet have the full feature set for Desktop Capacity Planning. Wait for the next upgrade.
• But you can use it on a case-by-case basis to collect data on demanding desktops.

The default paging threshold does not take server RAM into account.
• Best practice for the paging threshold is 200 pages/sec/GB. So with 48 GB RAM you have 48 x 200 = 9,600 pages/sec.
• The reason is that this paging value provides the lowest-latency access to memory pages.
• You might see high paging when backup jobs run.

Create a project if you need to separate results (e.g. per data center).

Win08 has the firewall on. It needs to be turned off using the command line.

To be verified in 2.8: you can't change prime time. It's based on the local time zone.


84 Confidential

P2V
Avoid if possible. Best practice is to install from a template (which was optimised for virtual machines).
• Remove unneeded devices after P2V. (See the sketch below.)

MS does not support P2V of an AD Domain Controller.

Static servers are good candidates for P2V:
• Web servers, print servers

Servers with a retail licence/key will require Windows reactivation: too many hardware changes.

Resize
• Use relative CPU comparison.
• MS Domain Controller: 1 vCPU, 2 GB is enough.
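A sketch for stripping leftover virtual hardware after conversion, assuming PowerCLI and a hypothetical VM name:

```powershell
# Remove devices that P2V conversions typically leave behind
$vm = Get-VM -Name 'p2v-web01'                              # hypothetical VM name
$vm | Get-FloppyDrive | Remove-FloppyDrive -Confirm:$false  # drop the floppy drive
$vm | Get-CDDrive | Set-CDDrive -NoMedia -Confirm:$false    # detach any mounted ISO
# Serial/parallel ports and similar leftovers still need Edit Settings in the client
```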


85 Confidential

Many Solutions Depend on vCenter Server

All of the following depend on vCenter Server: Operations, Site Recovery Manager, vCloud Director, View Server and Composer, CapacityIQ, Chargeback, and Configuration Manager.



86 Confidential

Orchestrator: Integrated Workflow Environment
Automation: a way to perform a frequently repeated process without manual intervention.
• Basic building block: a shell script, a Perl script, a PowerShell script.
• Example: given a list of hostnames, add the ESX hosts to VC. (See the sketch below.)

Orchestration: a way to manage multiple automated processes across and among heterogeneous systems.
• Example: add ESX hosts from a list to VC, update the CMDB with the successfully added hosts, then send an email notification.

Example
• If a datastore on a host is more than 95% utilized, open a change control ticket, then perform a Storage vMotion and send an email notification.
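A sketch of the "add hosts from a list" building block in PowerCLI; the hostnames file, datacenter name, and credentials are hypothetical:

```powershell
# Automation building block: add every ESX host in a list to vCenter
Connect-VIServer -Server 'vc01.corp.local'       # hypothetical vCenter
$dc = Get-Datacenter -Name 'DC-A'                # hypothetical datacenter
Get-Content 'hosts.txt' | ForEach-Object {
    Add-VMHost -Name $_ -Location $dc -User 'root' -Password 'VMware1!' -Force
}
```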


87 Confidential

vCenter Chargeback Manager deployment options (cont.)
For the vCD and VSM data collectors:
• Deploy at least 2 data collectors each for vCD and VSM, for high availability.

A CBM instance can be installed/upgraded at the time of vCD install/upgrade, or later.

[Diagram: a Chargeback load balancer in front of clustered vCenter Chargeback servers, with the vCenter Chargeback web interface and the Chargeback database; Chargeback data collectors connect to multiple vCenter Servers and their vCenter databases.]


88 Confidential

VR = vSphere Replication Server
VRMS = vSphere Replication Management System

[Diagram: the Primary Site and Secondary Site each run SRM, VRMS, and vCenter (VC), connected by Site Pairing. At the primary site, a VR filter on the ESX hosts captures VM writes and the VR framework ships them to the VR server at the secondary site, which writes them to the target ESX hosts via the NFC service. The SRM UI sits on top of vCenter.]


89 Confidential

SRM Architecture with vSphere Replication

[Diagram: the Protected Site and Recovery Site each have a vCenter Server, SRM Server, vRMS, VMFS storage, and a vSphere Client with the SRM plug-in. vRA agents on the protected-site ESX hosts replicate VM data to a vRS (vSphere Replication Server) at the recovery site, which writes to the recovery-site storage.]


90 Confidential

Service Provider

[Diagram: a DRaaS Provider runs SRM Server, vRMS, vRS, vCenter, and ESX hosts, with NFS storage. Customer A (VMFS storage) and Customer B (NFS storage) each run their own vCenter, SRM Server, vRMS, and ESX hosts with vRA agents, and replicate to the provider's site.]


91 Confidential

Branch Office

[Diagram: Remote Site A and Remote Site B each run ESX hosts with vRA agents and replicate to the Central Office, which runs vCenter, vRMS, vRS, SRM Servers, and VMFS storage; Remote Site A also has its own vCenter and vRMS.]

(Draft note: why is it talking to this VRMS?)


92 Confidential

Decision Trees
Develop decision trees that are tailored to the organisation. Below are 2 examples.


93 Confidential

vSphere Replication Performance
1 vSphere Replication “replication server” appliance can process up to 1 Gbps of sustained throughput, using approximately 95% of 1 vCPU.
• 1 Gbps is much larger than most WAN bandwidth.

For a VM protected by VR, the impact on application performance is a 2-6% throughput loss.


94 Confidential

MS SQL Server 2008: Licensing
Always refer to the official statement on the vendor web site.
• Emails, spoken words, or SMS from a staff member (e.g. Sales Manager, SE) are not legally binding.

Licensing a portion of the physical processors: if you choose not to license all of the physical processors, you will need to know the number of virtual processors supporting each virtual OSE (data point A) and the number of cores per physical processor/socket (data point B). Typically, each virtual processor is the equivalent of one core.

vSphere 4.1 introduces multi-core vCPUs. Will you save more $? Need to check with an MS reseller plus official MS documents.


95 Confidential

SQL Server 2008 R2
Get the Express edition from http://www.microsoft.com/express/Database/

In most cases, the Standard edition will be sufficient.

vCenter 4.1 and Update Manager 4.1 do not support the Express edition.
• Hopefully Update 1 will?


96 Confidential

Windows Support

Interesting: it is the other way around. vSphere 4.1 passed the certification for Win08 R2, so Microsoft supports Win03 too. It is version specific; check for vSphere 5.

http://www.windowsservercatalog.com/default.aspx


97 Confidential

SQL Server: General Best Practices
Follow Microsoft best practices for SQL Server deployments.

Defrag SQL database(s): http://support.microsoft.com/kb/943345

Preferably 4 vCPU, 8+ GB RAM for medium/larger deployments.

Design the back-end to support the required workload (IOPS).

Monitor database & log disks: disk reads/writes, disk queues.

Separate Data, Log, TempDB, etc. I/O.

Use dual Fibre Channel paths to storage.
• Not possible in a vmdk.

Use RAID 5 for the database & RAID 1 for logs in read-intensive deployments.

Use RAID 10 for the database & RAID 1 for logs for larger deployments.

SQL 2005 TempDB (need to update to 2008):
• Move TempDB files to a dedicated LUN.
• Use RAID 10.
• # of TempDB files = # of CPU cores (consolidation).
• All TempDB files should be equal in size.
• Pre-allocate TempDB space to accommodate the expected workload.
• Set the file growth increment large enough to minimize TempDB expansions. Microsoft recommends setting the TempDB files' FILEGROWTH increment to 10%. (A sketch follows this list.)
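A sketch of the TempDB sizing step, run through the SqlServer PowerShell module; the instance name, file paths, and sizes are hypothetical and should match your core count and workload:

```powershell
# Pre-size tempdb: one file per core, equal sizes, 10% growth (4 cores assumed)
$tsql = @"
ALTER DATABASE tempdb MODIFY FILE (NAME = 'tempdev',  SIZE = 4096MB, FILEGROWTH = 10%);
ALTER DATABASE tempdb ADD FILE (NAME = 'tempdev2', FILENAME = 'T:\tempdb2.ndf', SIZE = 4096MB, FILEGROWTH = 10%);
ALTER DATABASE tempdb ADD FILE (NAME = 'tempdev3', FILENAME = 'T:\tempdb3.ndf', SIZE = 4096MB, FILEGROWTH = 10%);
ALTER DATABASE tempdb ADD FILE (NAME = 'tempdev4', FILENAME = 'T:\tempdb4.ndf', SIZE = 4096MB, FILEGROWTH = 10%);
"@
Invoke-Sqlcmd -ServerInstance 'sql01\VC' -Query $tsql   # hypothetical instance
```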


98 Confidential

What is SQL Database Mirroring?
Database-level replication over IP, with no shared storage requirement.

Same advantages as failover clustering (service availability, patching, etc.).

At least two copies of the data, hence protection from data corruption (unlike failover clustering).

Automatic failover for supported applications (a DNS alias is required for legacy applications).

Works with SRM too: VMs recover according to the SRM recovery plan. (A pairing sketch follows.)
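A minimal sketch of the mirroring pairing step, assuming the mirroring endpoints already exist and the database has been restored WITH NORECOVERY on the mirror; server names, database name, and port are hypothetical:

```powershell
# Step 1: on the MIRROR server, point at the principal
Invoke-Sqlcmd -ServerInstance 'sqlmirror01' -Query @"
ALTER DATABASE [AppDB] SET PARTNER = 'TCP://sqlprin01.corp.local:5022';
"@

# Step 2: on the PRINCIPAL server, point at the mirror, then force High Safety
Invoke-Sqlcmd -ServerInstance 'sqlprin01' -Query @"
ALTER DATABASE [AppDB] SET PARTNER = 'TCP://sqlmirror01.corp.local:5022';
ALTER DATABASE [AppDB] SET PARTNER SAFETY FULL;  -- High Safety, required for automatic failover
"@
```

Note that automatic failover additionally requires a witness server, configured with SET PARTNER WITNESS.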


99 Confidential

VMware HA with Database Mirroring for Faster Recovery

Highlights:
• Can use Standard Windows and SQL Server editions
• Does not require Microsoft clustering
• Protection against HW/SW failures and DB corruption
• Storage flexibility (FC, iSCSI, NFS)
• RTO in a few seconds (High Safety)
• vMotion, DRS, and HA are fully supported!

Note:
• Must use High Safety mode for automatic failover
• Client applications must be Mirror-aware or use a DNS alias


100 Confidential

MS SharePoint 2010
Go for 1 VM = 1 role.


101 Confidential

Java Application
RAM best practice
• Size the virtual machine’s memory to leave adequate space:
  • For the Java heap
  • For the other memory demands of the Java Virtual Machine code
  • For any other concurrently executing process that needs memory from the same guest operating system
  • To prevent swapping in the guest OS
• Do not reserve RAM 100% unless the HA cluster is not based on host failures.
  • This will impact the HA slot size.
• Consider VMware vFabric, as it takes advantage of vSphere.
(A sizing sketch follows this list.)

Others
• Use the Java features for lower-resolution timing as supplied by your JVM (Windows/Sun JVM example: -XX:+ForceTimeHighResolution)
• Use as few virtual CPUs as is practical for your application
• Avoid using /pmtimer in boot.ini for Windows with an SMP HAL
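A back-of-the-envelope sizing sketch for the rule above; all numbers are illustrative assumptions, not guidance from this deck:

```powershell
# Illustrative VM memory sizing for a Java workload (all values assumed)
$javaHeapGB  = 4.0    # -Xmx
$jvmOtherGB  = 0.75   # code cache, threads, GC structures, native allocations
$guestOsGB   = 1.0    # OS working set plus other resident processes
$headroomGB  = 0.5    # buffer to avoid swapping in the guest OS
$vmMemoryGB  = $javaHeapGB + $jvmOtherGB + $guestOsGB + $headroomGB
"Size the VM with roughly {0} GB of RAM" -f $vmMemoryGB   # ~6.25 GB here
```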


102 Confidential

VM Size for Benchmark | 4 vCPU | 8 vCPU
SD Users | 1,144 | 2,056
Response Time (s) | 0.97 | 0.98
SAPS | 6,250 | 11,230
VM CPU Utilization | 98% | 97%
ESX Server CPU Utilization | <30% | <80%

SAP
No new benchmark data on Xeon 5600.
• Need to check the latest Intel data.

Regarding the vSphere benchmark:
• It’s a standard SAP SD 2-tier benchmark. In real life, we would split the DB and CI instances, hence catering for more users.
• vSphere 4.0, not 4.1.
• SLES 10 with MaxDB.
• Xeon 5570, not 5680 or the Xeon 7500 series.
• SAP ERP 6.0 (Unicode) with Enhancement Package 4.

Around 1,500 SAPS per core.
• Virtual runs at 93% to 95% of native performance. For sizing, we can take 90% of the physical result.
• Older UNIX servers (2006 – 2007) are good candidates for migration to x64 due to low SAPS per core.

The Central Instance can be considered for FT.
• 1 vCPU is enough for most cases.


103 Confidential

SAP 3-Tier SD Benchmark


104 Confidential

MS AD
Good candidate.
• 1 vCPU and 2 GB RAM are sufficient. Use the UP HAL.
• 100,000 users require up to 2.75 GB of memory to cache the directory (x86).
• 3 million users require up to 32 GB of memory to cache the entire directory (x64).
• Disk is rather small:
  • Disk 2 (D:) for the database: around ~16 GB, or greater for larger directories.
  • Disk 3 (L:) for log files: around 25% of the database LUN size.

Changes in MS AD design once all AD is virtualised:
• A VM is not a reliable source of time. Time drift may happen inside a VM.
• Instead of synchronising with the forest PDC emulator or the “parent” AD, synchronise with the internal NTP server.

Best practices
• Set the VMs to auto boot. Boot order:
  • vShield VM
  • AD
  • vCenter DB
  • vCenter App
• Regularly monitor Active Directory replication. (See the sketch below.)
• Perform regular system state backups, as these are still very important to your recovery plan.
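One way to keep an eye on replication, as a sketch; 'dc01' is a hypothetical DC name, and the cmdlet requires the ActiveDirectory module from newer Windows Server versions (on older DCs, repadmin /replsummary gives a similar view):

```powershell
# Summarize inbound replication status for one domain controller
Import-Module ActiveDirectory
Get-ADReplicationPartnerMetadata -Target 'dc01' -Scope Server |
    Select-Object Partner, LastReplicationSuccess, LastReplicationResult
```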


105 Confidential

Exchange 2003: 32-bit Windows, 900 MB database cache, 4 KB block size, high read/write ratio.
Exchange 2007: 64-bit Windows, 32+ GB database cache, 8 KB block size, 1:1 read/write ratio, 70% reduction in disk I/O.
Exchange 2010: 64-bit Windows, 32 KB block size, I/O pattern optimization, a further 50% I/O reduction.

MS Exchange
Exchange has become leaner and more scalable.

Building-block CPU and RAM sizing for a 150 sent/received profile:
• http://technet.microsoft.com/en-us/library/ee712771.aspx

Database Availability Group (DAG)
• The DAG feature in Exchange 2010 necessitates a different approach to sizing the Mailbox Server role, forcing the administrator to account for both active and passive mailboxes.
• Mailbox Servers that are members of a DAG can host one or more passive databases in addition to any active databases for which they may be responsible.
• Not supported by MS when combined with VMware HA (see the next slide).

Building Block | 1,000 mailboxes
Profile | 150 sent/received daily
Megacycle Requirement | 3,000
vCPU | 2 (1.3 actual)
Cache Requirement | 9 GB
Total Memory Size | 16 GB
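The arithmetic behind the vCPU row above, as a sketch; the per-mailbox figure is implied by the table itself, while the per-core megacycle figure is an assumption for the underlying host, not a vendor number:

```powershell
# 150 sent/received per day ~= 3 megacycles per mailbox (implied by the table)
$mailboxes         = 1000
$megacyclesPerBox  = 3
$required          = $mailboxes * $megacyclesPerBox    # 3,000 megacycles
$megacyclesPerCore = 2300                              # assumed host CPU capability
$vcpuActual        = $required / $megacyclesPerCore    # ~1.3 vCPU
"{0} megacycles -> {1:N1} vCPU; round up to 2" -f $required, $vcpuActual
```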


106 Confidential

VMware HA + DAGs (no MS support)

Protects from hardware and application failure.
• Immediate failover (~3 to 5 secs)
• HA decreases the time the database is in an ‘unprotected state’

No passive servers.

Windows Enterprise edition.

Exchange Standard or Enterprise editions.

Complex configuration and capacity planning.

2x or more storage needed.

Not officially supported by Microsoft.


107 Confidential

Realtime Applications
Overall: extremely latency sensitive.
• All apps are somewhat latency sensitive.
• RT apps break with extra latency.

“Hard realtime systems”
• Financial trading systems
• Pacemakers

“Soft realtime systems”
• Telecom: Voice over IP
  • Technically challenging, but possible. Mitel and Cisco both provide official support. Needs 100% reservation.
  • Not life-or-death risky.

Financial desktop apps (need hardware PCoIP)
• Market news
• Live video
• Stock quotes
• Portfolio updates


108 Confidential

File Server
Why virtualise?
• Cheaper
• Simpler

Why not virtualise?
• You already have an NFS server.
• You don’t want an additional layer.


109 Confidential

Upgrade to vSphere 5


110 Confidential

Upgrade Best Practices
Turn Upgrade into Migrate
• Much lower risk: ability to roll back, and a much simpler project.
• Fewer stages: 3 stages become 1.
  • Upgrade + new features + re-architecture in 1 clean stage.
• Faster overall project.
• Need to do a server tech refresh for older ESXi.

Think of both data centers.
• vCenter 5 can’t be linked-mode to vCenter 4.

Involve the App team.
• A successful upgrade should result in faster performance.

Involve the Network and Storage teams.
• Their cooperation is required to take advantage of vSphere 5.

Compare before and after…
• …and document your success!


111 Confidential

Migrate: Overall Approach
Document the business drivers and technical goals.
• An upgrade is not simple, and you’re not doing it for fun.
• If you are going to support larger VMs, you might need to change servers.

Check compatibility
• Array to ESXi 5:
  • Is it supported?
  • You may need a firmware upgrade to take advantage of the new vStorage APIs.
• Backup software to vCenter 5.
• Products that integrate with vCenter 5:
  • VMware “integration” products: SRM, View, vCloud Director, vShield, vCenter Heartbeat.
  • Partner integration products: TrendMicro DS, Cisco Nexus.
  • VMware management products, partner management products.
  • All these products should be upgraded first.

Assuming all of the above is compatible, proceed to the next step.

Read the Upgrade Guide.

Plan and design the new architecture.
• Based on vSphere 5 + SRM 5 + vShield 5 + others.
• Decide which architectural changes you are going to implement. Examples:
  • vSwitch to vDS?
  • Datastore Cluster?
  • Auto Deploy?
  • vCenter appliance? Take note of the limitations (View, VCM, Linked Mode, etc).
• What improvements are you implementing? Examples:
  • Datastore clean-up or consolidation.
  • SAN: fabric zoning, multi-pathing, 8 Gb, FCoE.
  • Chargeback? This will impact your design.


112 Confidential

Migrate: Overall Approach
Upgrade vCenter.

Create the first ESXi cluster.
• Start with the IT cluster.

Migrate the first 4.x cluster into vCenter 5.
• 1 cluster at a time.
• Follow the VM downtime schedule.
• Capture “before” performance, for comparison or proof. (See the sketch below.)
• Back up the VMs, then migrate.
• Once the last VM is migrated, the hosts are free for reuse or decommissioning.

Repeat until the last cluster is migrated.

Upgrade VMs to the latest hardware version and upgrade VMware Tools.
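A sketch for the “before” performance capture; the cluster name and output path are hypothetical:

```powershell
# Capture a 7-day CPU/memory baseline for every VM in a cluster before migrating
Connect-VIServer -Server 'vc01.corp.local'     # hypothetical vCenter
Get-Cluster 'Prod-01' | Get-VM |
    Get-Stat -Stat 'cpu.usage.average','mem.usage.average' -Start (Get-Date).AddDays(-7) |
    Export-Csv 'C:\baseline\prod-01-before.csv' -NoTypeInformation
```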


113 Confidential

New Features That Impact Design
New features with major design impact:
• Datastore Cluster
• Auto Deploy
  • You need infrastructure to support it.
• vCenter appliance
• VMFS-5
  • Larger datastores, so your datastore strategy might change to a “fewer but larger” one.

Other new features can wait until after the upgrade.
• For example, Network I/O Control can be turned on after the upgrade.


114 Confidential

Over Time the DMZ Evolved (SEC 1880)

Increased technological and operational complexity.

[Timeline: from 1995 to 2005, the number of systems and the complexity of the DMZ increased, evolving through UTM appliances and multiple security zones.]


115 Confidential

Design Consideration for DMZ Zone (SEC 1880)
A 5-dimensional decision model.


116 Confidential

vDMZ Operations Is Different

Virtual DMZ operations (skills)
• Needs VMware know-how
• Needs Windows know-how
• Needs hardware know-how
• Needs application know-how
• Needs security automation
• Needs organizational integration

Virtual DMZ operations (characteristics)
• Highly dynamic & agile
• Additional systems (vSphere, Windows)
• Additional hardware (blades, converged networking)
• Server sprawl inside the DMZ

Physical DMZ operations
• Network, network-security & Unix only
• Disparate silos
• Manual operations
• No integration into “internal ops”

DMZ operations:
• Maintenance: upgrading, updating and troubleshooting
• Service changes: changing existing services
• Innovation: introduction of new services
• Monitoring: keeping things “in the green” & “secure”


117 Confidential

Chargeback

[Diagram: the Chargeback Server (with the vCenter Chargeback web interface and the Chargeback database) works with three data collectors: the Chargeback data collector reads the vCenter database via JDBC, the vCloud data collector reads the vCloud Director database via JDBC, and the VSM data collector talks to vShield Manager via REST; the server itself uses JDBC to its own database and REST to vCloud Director.]


118 Confidential

Automation Impacts 8 Areas of IT Excellence

• Organization and Skill Development
• Transformation Planning
• Security Management
• Configuration Management
• Financial Management
• Capacity Management
• Systems Management
• Life Cycle Management

Service Design / Continual Process Improvement
• Supplier Management
• Service Level Management
• Service Catalog Management
• Availability Management


119 Confidential

Internal Cloud Maturity Model

[Diagram: maturity progresses from “Technologically proficient” through “Operationally ready” and “Service-oriented” to “Cloud-enabled” and “Application-centric”, with a governance track alongside. Representative activities per track:
• Governance: plan IA policy requirements; define HIaaS standard models; update audit/accounting practices; update procurement and change management; include virtualization in software procurement.
• HIaaS infrastructure: consolidate physical to virtual; deploy HA services (tier-3 apps); deploy load balancing (tier-2 apps); optimize for tier-1 apps and multi-tenants; optimize for cloud portability.
• Cloud infrastructure management: deploy essential management services; define standard templates; enforce QoS; deploy virtual infrastructure appliances; deploy virtual datacenters.
• Service management: define IA service management requirements; implement or update the service catalog; implement show-back and update data protection; implement service pools; define service tiers.
• Service automation: assess and deploy lab automation; automate VM provisioning; automate application provisioning; automate service provisioning; automate cloud bursting.]