Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Page 1: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Page 2: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Agenda

1) Introductions
2) Support and Advantages Summary
3) Reference Architectures & Performance Test Results
4) Best Practices
5) Sample Case Study: Deploying Exchange 2007 on VI
6) Availability and Recovery Strategies
7) Customer Success Stories

Page 3: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Support and Advantages Summary

Page 4: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Scenario 1: Support through Microsoft Server Virtualization Validation Program

ESX 3.5 U2, Windows Server 2008, Exchange 2007

Scenario 2: Support through server OEM
http://www.vmware.com/support/policies/ms_support_statement.html

Scenario 3: Support through MS Premier contract

http://support.microsoft.com/kb/897615/en-us

Scenario 4: Support through VMware GSS

Best-effort support with MS escalation path (TSANet)

Changes in Support Options

Page 5: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Summarizing Key Benefits

5 key benefits of a VI3 platform:

Trim the fat from Exchange

Improve sizing and provisioning

Flexibility with Exchange building blocks

Improve availability

Simplify disaster recovery

Additional information:
http://www.vmware.com/files/pdf/Advantages_Virtualizing_Exchange_2007_final_April_2008.pdf

Page 6: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Reference Architectures & Performance Test Results

Page 7: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Exchange 2007 Performance Analysis

Jetstress
Storage performance assessment for Exchange provided by Microsoft

Uses Exchange libraries to simulate multi-threaded Exchange-like workload across storage configuration

LoadGen
Exchange deployment performance assessment provided by Microsoft

Runs end-to-end tests from client to measure typical Exchange activities

SendMail, Logon, CreateTask, RequestMeeting, etc.

Page 8: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

VMware/EMC/Dell Reference Architecture

Page 9: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

VMware/EMC/Dell Performance Results

1,000 “heavy” users

CLARiiON CX3

Dell PowerEdge 2950

VMware ESX 3.0.2

Mailbox virtual machine

2 vCPU

7GB RAM

Comparable performance between native and virtual

Page 10: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

VMware/NetApp Reference Architecture

[Diagram: NetApp FAS3050C storage controllers at the primary and disaster recovery sites]

ESX server with production mailbox virtual machines

DB

LOG

DB(SNAP)

LOG(SNAP)

1. SnapManager VSS Backup

2. Consistency Check

3. SnapMirror Replication

DB(DR)

LOG(DR)

4. Mount with Database Portability

Key

Mailbox Server Role

AD Domain Controller

Hub Transport Server Role

Client Access Server Role

SnapManager Mount Host

PRIMARY SITE DISASTER RECOVERY SITE

ESX server at DR site with all Exchange roles deployed in VMs

ESX server with Exchange 2007 roles and SnapManager Mount Host

Page 11: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

VMware/NetApp Results

6,000 users
3 x 2,000-user VMs

IBM LS41 blade
8 cores, 32GB RAM

NetApp FAS iSCSI storage

ESX 3.5.0

Exchange 2007 SP1

Jetstress and LoadGen comparable across native and virtual

[Chart: 6,000 User Jetstress Achieved IOPS for Native, VM1, VM2, and VM3]

Page 12: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

VMware/EMC 16,000 Users on Single Server

Page 13: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

VMware/EMC 16,000 User Results

16,000 users
4 x 4,000-user VMs

Dell R900
16 cores, 128GB RAM

EMC CLARiiON CX3

ESX 3.5.0

Exchange 2007 SP1

1.3 million messages/day

40% CPU average

Page 14: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

VMware/HP Lab Configuration

Mailbox Server - DL580 G4:
Four 3.2GHz dual-core processors (eight cores)
32GB memory (PC5300) installed in four memory controllers
Dual-port Emulex A8803A PCI-E Host Bus Adapter (HBA)
Two 72GB 10k small form factor Serial Attached SCSI (SAS) drives for the host operating system (OS)
Two 72GB SAS drives for guest VM OS
RAID 1 disk arrays for host OS disk and guest VM OS disk
Two integrated NC371i 1 Gb network interfaces
VT enabled in BIOS
Hyperthreading enabled

Page 15: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

VMware/HP JetStress Results

Page 16: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

VMware/HP LoadGen: Mailbox Counter Results

Counter | Phys. 500 user | Virt. 500 user | Phys. 1000 user | Virt. 1000 user | Phys. 2000 user | Virt. 2000 user | Criteria
IOPS / user | 0.48 | 0.42 | 0.43 | 0.36 | 0.46 | 0.34 | Less than 1.0
Avg. Disk sec/Read | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | 0.01 | Less than 50 ms at all times
Avg. Disk sec/Write | 0 | 0 | 0 | 0 | 0 | 0 | Less than 50 ms at all times
MSExchangeIS Mailbox\messages queued for submission | 0 | 0 | 1 | 0 | 1 | 1 | Average less than 250 and max of 1000
MSExchangeIS\RPC Average Latency | 10ms | 9ms | 14ms | 14ms | 12ms | 15ms | Average less than 50 and max of 100
MSExchangeIS\RPC Requests | 1 | 0 | 1 | 1 | 1 | 1 | Average less than 50 and max of 100

Page 17: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

VMware/HP Building Block CPU Performance

Page 18: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Summarizing Performance

Performance has been validated by VMware and partners:

Minimal CPU overhead observed (5-10%)

No impact on disk I/O latency

RPC latency comparable

No virtualization performance degradation observed

New Exchange 2007 workload performs extremely well on VI3

Can exceed native scalability using building blocks

Page 19: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Best Practices

Page 20: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Virtual CPUs

Considerations
Unavailable pCPUs can result in VM “ready time.”

Idle vCPUs will compete for system resources.

Best Practices for vCPUs
Do not over-commit pCPUs when running Exchange VMs.

Do not over-allocate vCPUs; try to match the exact workload.

If the exact workload is unknown, start with fewer vCPUs initially and increase later if necessary.

The total number of vCPUs assigned to all VMs should be less than or equal to the total number of cores on the ESX Server (in production).
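To make the last guideline concrete, here is a minimal Python sketch (hypothetical, not a VMware tool) that checks a planned set of Exchange VMs against the physical core count of an ESX host; the VM names and counts are illustrative.

```python
# Check that the vCPUs planned for Exchange VMs on one ESX host do not
# exceed the host's physical cores, per the production guideline above.

planned_vms = {
    "mailbox-vm-1": 4,   # vCPUs per VM (illustrative values)
    "mailbox-vm-2": 4,
    "cas-vm-1": 2,
    "hub-vm-1": 2,
}

host_cores = 16  # physical cores on the ESX host

total_vcpus = sum(planned_vms.values())
if total_vcpus > host_cores:
    print(f"Over-committed: {total_vcpus} vCPUs > {host_cores} cores")
else:
    print(f"OK: {total_vcpus} vCPUs fit on {host_cores} cores")
```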

Page 21: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Virtual Memory

ESX Memory Management Features
Memory pages can be shared across VMs that have similar data (e.g. same guest OS)

Memory can be over-committed (i.e. more memory can be allocated to VMs than is physically available on the ESX Server)

A memory balloon technique reclaims memory from virtual machines that do not need all they have been allocated and makes it available to virtual machines that are using all of their allocated memory.

Page 22: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Virtual Memory: Memory Overhead

A fixed system-wide overhead for the service console (about 272 MB for ESX 3.x; 0 MB for ESXi).

A fixed system-wide overhead for the VMkernel, depending on the number and size of device drivers.

Additional overhead for each VM. The virtual machine monitor for each VM requires some memory for its code and data.

A memory overhead table can be found in the VMware Resource Management Guide for ESX 3.5.

Page 23: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Virtual Memory: VM Memory Settings

Configured = memory size of VM assigned at creation.

Reservation = guaranteed lower bound of memory that the host reserves for the VM and cannot be reclaimed for other VMs.

Touched memory = memory actually used by the VM. Guest memory is only allocated on demand by ESX Server.

Swappable = VM memory that can be reclaimed by the balloon driver or worst case by ESX Server swapping.

Page 24: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Virtual Memory: Best Practices

Available physical memory for Exchange VMs = total physical memory minus system-wide overhead, VM overhead, and a user-defined “memory buffer”.

Do not over-commit memory until VC reports that steady state usage is below the amount of physical memory on the server.

Set the memory reservation to the configured size of the VM, resulting in a per-VM vmkernel swap file of zero bytes. The guest OS within the VM will still have its own separate page file.

Do not disable the balloon driver (installed with VMware Tools).

To minimize guest OS swapping, the configured size of the VM should be greater than the average memory usage of Exchange running in the guest. Follow Microsoft guidelines for memory and swap/page file configuration of Exchange VMs.
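As a rough illustration of the available-memory arithmetic above, the following Python sketch subtracts the system-wide overheads, per-VM overhead, and a user-defined memory buffer from the host's physical memory. All overhead figures here are assumed placeholders; the Resource Management Guide should be consulted for real per-VM overhead values.

```python
# Minimal sketch of the available-memory calculation for Exchange VMs.
# Overhead numbers below are illustrative assumptions, not measured values.

total_physical_gb  = 48.0
service_console_gb = 0.272      # ~272 MB on ESX 3.x (0 on ESXi)
vmkernel_gb        = 0.1        # assumed; depends on device drivers
per_vm_overhead_gb = 0.35       # assumed per-VM monitor overhead
memory_buffer_gb   = 2.0        # user-defined safety buffer

vm_configured_gb = [16, 16, 4, 2]  # e.g. two mailbox VMs, one CAS, one Hub

available_for_vms = (total_physical_gb - service_console_gb - vmkernel_gb
                     - per_vm_overhead_gb * len(vm_configured_gb)
                     - memory_buffer_gb)

print(f"Available for Exchange VMs: {available_for_vms:.1f} GB")
print(f"Configured (reserved) total: {sum(vm_configured_gb)} GB")
assert sum(vm_configured_gb) <= available_for_vms, "memory over-committed"
```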

Page 25: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Storage: Storage Virtualization Concepts

Storage array – consists of physical disks that are presented as logical disks (storage array volumes or LUNs) to the ESX Server.

Storage array LUNs – formatted as VMFS volumes.

Virtual disks – presented to the guest OS; can be partitioned and used in guest file systems.

Page 26: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Storage: Best Practices

Deploy Exchange VMs on shared storage – allows VMotion, HA, and DRS. Aligns well with mission-critical Exchange deployments, often installed on shared storage management solutions.

Ensure heavily used VMs are not all accessing the same LUN concurrently.

Storage multipathing – set up a minimum of four paths from an ESX Server to a storage array (requires at least two HBA ports).

Create VMFS file systems from VirtualCenter to get best partition alignment

Page 27: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

VMFS and RDM Trade-offs

VMFS
Volume can host many virtual machines (or can be dedicated to one virtual machine).
Increases storage utilization, provides better flexibility, easier administration and management.
Large 3rd-party ecosystem with V2P products to aid in certain support situations.
Does not support quorum disks required for third-party clustering software.
Full support for Site Recovery Manager.

RDM
Maps a single LUN to one virtual machine, so only one virtual machine is possible per LUN.
More LUNs are required, so it is easier to hit the limit of 256 LUNs that can be presented to an ESX Server.
Although not required, RDM volumes can help facilitate swinging Exchange to standby physical boxes in certain support scenarios.
Leverage array-level backup and replication tools that integrate with Exchange databases.
Required for third-party clustering software (e.g. MSCS); cluster data and quorum disks should be configured with RDM.
Experimental support for Site Recovery Manager.

Page 28: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Storage: Multiple VMs per LUN

The number of VMs allocated to a VMFS LUN influences the final architecture.

Page 29: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Networking: Virtual Networking Concepts

Virtual Switches – work like Ethernet switches; support VLAN segmentation at the port level. VLANs in ESX Server allow logical groupings of switch ports to communicate as if all ports were on the same physical LAN segment.

Virtual Switch Tagging (VST mode): virtual switch port group adds and removes tags.

Virtual Machine Guest Tagging (VGT mode): an 802.1Q VLAN trunking driver is installed in the virtual machine.

External Switch Tagging (EST mode): external switches perform VLAN tagging so Ethernet frames moving in and out of the ESX Server host are not tagged with VLAN IDs.

Page 30: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Networking: Virtual Networking Concepts (cont.)

Port groups – templates for creating virtual ports with a particular set of specifications. In ESX Server, there are three types of port group / virtual switch connections:

Service console port group: ESX Server management interface

VMkernel port group: VMotion, iSCSI and/or NFS/NAS networks

Virtual machine port group: virtual machine networks

NIC Teaming – A single virtual switch can be connected to multiple physical Ethernet adapters using the VMware Infrastructure feature called NIC teaming. This provides redundancy and/or aggregation.

Page 31: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Networking: Best Practices

Ensure host NICs run with the intended speed and duplex settings.

Use the same virtual switch to connect VMs on the same host, helping to eliminate physical network chatter (e.g. mailbox and GC).

Keep Production network traffic separate from VMotion and Admin traffic. (e.g. use VLAN technology to logically separate the traffic).

Team all the NICs on the ESX server. VMotion and Admin networks are not typically used heavily, while Production traffic is nearly constant with Exchange, so one practice is to:

Connect to trunk ports on the switch

Use VLAN tagging to direct the traffic at the switch level to allow better utilization of bandwidth.

This practice frees up the majority of capacity for Production traffic when the VMotion and Admin VLANs are not being heavily used.

Page 32: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Networking

Page 33: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Resource Management & DRS

Best Practices
VMotion and automated DRS are not currently supported for MSCS cluster nodes. Cold migration is the best option for these roles.

Affinity rules:

“Keep Virtual Machines Together": if the VMs are known to communicate a lot with each other (e.g. mailbox server and GC).

"Separate Virtual Machines": If the VMs stress/saturate the same system resource (CPU, memory, network or storage)

"Separate Virtual Machines": If the VMs rely on each other for availability and recovery (e.g. mailbox server separate from transport dumpster, CCR nodes separate from File Share Witness).

When configuring an ESX cluster:
Consider VMotion compatibility between systems

Consider mix of VM configurations and workloads


Page 34: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Sample Case Study: Deploying Exchange 2007 on VI

Page 35: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Step 1 – Collect Current Messaging Stats

Use the Microsoft Exchange Server Profile Analyzer to collect information from your current environment.

Example:
1 physical location
16,000 users
Mailbox profiles:
Average – 10 messages sent / 40 received per day
Average message size of 50KB
500MB mailbox quota

Page 36: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Step 2 – Define User Profile

User type (usage profile) | Send/receive per day (approximately 50-kilobyte (KB) message size)
Light | 5 sent / 20 received
Average | 10 sent / 40 received
Heavy | 20 sent / 80 received
Very heavy | 30 sent / 120 received

Understanding Exchange 2007 Workload Requirements

Knowledge worker profiles for Outlook users (http://technet.microsoft.com/en-us/library/aa998874(EXCHG.80).aspx)

Page 37: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Step 3 – Design the Mailbox Server VM
http://technet.microsoft.com/en-us/library/bb738142(EXCHG.80).aspx

CPU Requirements
1,000 Average profile users per processor core

500 Heavy profile users per processor core

Up to 8 processor cores maximum

Memory Requirements

User type | Mailbox server memory recommendation
Light | 2 GB plus 2 MB per mailbox
Average | 2 GB plus 3.5 MB per mailbox
Heavy | 2 GB plus 5 MB per mailbox

Storage Requirements
Planning Storage Configurations (Microsoft TechNet)

Exchange 2007 Mailbox Server Role Storage Requirements Calculator
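The CPU and memory guidance above can be turned into a simple sizing helper. The Python sketch below is illustrative only; it encodes the 1,000 average / 500 heavy users per core and the 2 GB base plus per-mailbox memory rules quoted above, and is not a substitute for the Microsoft storage calculator.

```python
# Rough mailbox-server sizing helper based on the Microsoft guidance above.
import math

USERS_PER_CORE = {"average": 1000, "heavy": 500}          # users per processor core
MB_PER_MAILBOX = {"light": 2.0, "average": 3.5, "heavy": 5.0}  # memory per mailbox

def size_mailbox_vm(mailboxes, profile="average"):
    """Return (vCPU count, memory in GB) for a mailbox building block."""
    cores = min(8, math.ceil(mailboxes / USERS_PER_CORE[profile]))  # 8-core maximum
    memory_gb = 2 + (mailboxes * MB_PER_MAILBOX[profile]) / 1024    # 2 GB base
    return cores, memory_gb

for block in (500, 1000, 2000, 4000):
    cores, mem = size_mailbox_vm(block)
    print(f"{block:>5} average mailboxes: {cores} vCPU, {mem:.1f} GB RAM")
```

Run against the 500/1,000/2,000/4,000-mailbox blocks, this reproduces (after rounding up) the 4/6/9/16 GB totals shown in the building-block table on the next slide.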

Page 38: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Mailbox Server “Building Blocks”

Building Block | 500 | 1000 | 2000 | 4000
Profile | Average | Average | Average | Average
vCPU | 1 | 1 | 2 | 4
Base RAM (GB) | 2 | 2 | 2 | 2
Variable (MB / Mailbox) | 3.5 | 3.5 | 3.5 | 3.5
Total (GB) | 4 | 6 | 9 | 16

Building block CPU and RAM sizing for mailboxes with “average” profile
http://www.microsoft.com/technet/prodtechnol/exchange/2007/plan/hardware.mspx

The Building Block Approach
VMware-recommended best practice

Pre-sized VMs with predictable performance patterns

Improved performance when scaling up (memory page sharing)

Flexibility and simplicity when scaling out (deployment advantages)

Page 39: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Sample 4,000-User Building Block Configuration

CPU: 4 vCPU
Memory: 16 GB
Storage: SCSI Controller 0
Network: NIC 1

Page 40: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Step 4 – Design Peripheral Server Roles

Server Role Ratios (Processor Cores)

Server role ratio | Recommended processor core ratio
Mailbox:Hub | 7:1 (no antivirus scanning on Hub), 5:1 (with antivirus scanning on Hub)
Mailbox:Client Access | 4:1

Memory Requirements

Exchange 2007 server role | Minimum per server | Recommended | Maximum per server
Hub Transport | 2 GB | 1 GB per core (2 GB minimum) | 16 GB
Client Access | 2 GB | 2 GB per core (2 GB minimum) | 16 GB
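The processor-core ratios above lend themselves to a quick estimate. The following Python sketch is a hypothetical helper that derives Hub Transport and Client Access core counts from the mailbox core count using the 7:1/5:1 and 4:1 ratios.

```python
# Estimate peripheral-role cores from mailbox-server cores using the
# Mailbox:Hub and Mailbox:Client Access ratios on this slide.
import math

def peripheral_cores(mailbox_cores, hub_antivirus=False):
    mailbox_to_hub = 5 if hub_antivirus else 7   # 5:1 with AV on Hub, 7:1 without
    mailbox_to_cas = 4                           # 4:1 Mailbox:Client Access
    hub = math.ceil(mailbox_cores / mailbox_to_hub)
    cas = math.ceil(mailbox_cores / mailbox_to_cas)
    return hub, cas

# 16,000 average users at 1,000 users per core -> 16 mailbox cores
hub, cas = peripheral_cores(16, hub_antivirus=True)
print(f"Hub Transport cores: {hub}, Client Access cores: {cas}")
```

For the 16,000-user example this yields 4 Hub Transport and 4 Client Access cores, matching the two 2-vCPU Hub VMs and two 2-vCPU CAS VMs in the resource summary that follows.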

Page 41: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Sample Resource Summary for 16,000 average users

Resources required to support 16,000 average profile mailboxes

Resource Requirements by Server Role

Server Role | # VMs | vCPU (per VM) | vMemory (per VM) | OS / App Storage (per VM) | Network
Mailbox Server | 4 | 4 | 16 GB | 16 GB | NIC1
Client Access Server | 2 | 2 | 4 GB | 16 GB | NIC1
Hub Transport Server | 2 | 2 | 2 GB | 16 GB (+ 32 GB for protocol, tracking logs, temp files) | NIC1

Page 42: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Sample Hardware Layout for 16,000 average users

Exchange VM Distribution
ESX Host 1: Exchange Mailbox VM 1 (4 vCPU/16 GB RAM), Exchange Mailbox VM 2 (4 vCPU/16 GB RAM), Exchange Client Access VM 1 (2 vCPU/4 GB RAM), Exchange Hub Transport VM 1 (2 vCPU/2 GB RAM)
ESX Host 2: Exchange Mailbox VM 3 (4 vCPU/16 GB RAM), Exchange Mailbox VM 4 (4 vCPU/16 GB RAM), Exchange Client Access VM 2 (2 vCPU/4 GB RAM), Exchange Hub Transport VM 2 (2 vCPU/2 GB RAM)

ESX Host Specifications
All ESX Hosts: 16 cores (4x4), 48 GB RAM, 2 Fibre Channel HBAs, 4 Gigabit network adapters
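A quick sanity check of this layout (illustrative Python only, ignoring virtualization overhead) confirms that each host's planned vCPUs and memory fit within the 16-core / 48 GB host specification.

```python
# Verify that each ESX host's planned VMs fit its core and memory capacity.

hosts = {
    "ESX Host 1": [("Mailbox VM 1", 4, 16), ("Mailbox VM 2", 4, 16),
                   ("Client Access VM 1", 2, 4), ("Hub Transport VM 1", 2, 2)],
    "ESX Host 2": [("Mailbox VM 3", 4, 16), ("Mailbox VM 4", 4, 16),
                   ("Client Access VM 2", 2, 4), ("Hub Transport VM 2", 2, 2)],
}
HOST_CORES, HOST_RAM_GB = 16, 48

for host, vms in hosts.items():
    vcpus = sum(vcpu for _, vcpu, _ in vms)
    ram = sum(mem for _, _, mem in vms)
    ok = vcpus <= HOST_CORES and ram <= HOST_RAM_GB
    print(f"{host}: {vcpus} vCPU / {ram} GB RAM -> {'fits' if ok else 'over-committed'}")
```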

Page 43: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

ESX Host Architecture

Characteristics (each host)
Sized for app requirements plus overhead
Supports 8K mailboxes
Can be used as a “building block” to scale out even further

Page 44: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Step 5 – Prepare the VMware Infrastructure

http://www.vmware.com/support/pubs/vi_pages/vi_pubs_35.html

VMware Infrastructure Administration

Advanced VMware Infrastructure Features (VMotion, HA, DRS, etc.)

ESX Host Installation and Configuration

Virtual Networking

Storage

Page 45: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Step 6 – Create Templates and Deploy
http://www.vmware.com/pdf/vc_2_templates_usage_best_practices_wp.pdf

Steps
Create Templates

Install Guest Operating System

Patch and Install Extras (e.g. PowerShell)

Customize and Deploy

Page 46: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Step 7 – Install and Configure Exchange

Deployment Steps (Microsoft Exchange Deployment Guide):
Prepare the Topology
Prepare Schema
Install Client Access Server(s)
Install Hub Transport(s)
Install Mailbox Server(s)

Page 47: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Step 8 – Performance Monitoring

Ongoing Performance Monitoring and Tuning

Subsystem | esxtop Counters | VirtualCenter Counters
CPU | %RDY, %USED | Ready, Usage
Memory | %ACTV, SWW/s, SWR/s | Active, Swapin, Swapout
Storage | ACTV, DAVG/cmd, KAVG/cmd | Commands, deviceWriteLatency & deviceReadLatency, kernelWriteLatency & kernelReadLatency
Network | MbRX/s, MbTX/s | packetsRx, packetsTx

Performance counters of particular interest to Exchange administrators.
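As one possible way to act on these counters, the sketch below scans a CSV export of samples (for example from esxtop batch mode or VirtualCenter) and flags values above rough thresholds. The column names, file name, and the 5% CPU-ready threshold are assumptions for illustration; the 50 ms disk-latency criterion mirrors the LoadGen table earlier.

```python
# Illustrative threshold check over an exported counter CSV.
# Column names and thresholds are assumptions -- adjust to your export.
import csv

THRESHOLDS = {                       # counter -> (max allowed, unit)
    "cpu_ready_pct": (5.0, "%"),           # assumed rule of thumb for %RDY
    "disk_read_latency_ms": (50.0, "ms"),  # per the Exchange criteria above
    "disk_write_latency_ms": (50.0, "ms"),
}

def flag_samples(path):
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            for counter, (limit, unit) in THRESHOLDS.items():
                value = float(row.get(counter, 0))
                if value > limit:
                    print(f"{row.get('timestamp', '?')} {counter}={value}{unit} "
                          f"exceeds {limit}{unit}")

# flag_samples("exchange_host_stats.csv")  # hypothetical export file
```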

Page 48: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Step 9 – Move Mailboxes

Page 49: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Sample Availability & Recovery Options

Page 50: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Characteristics
MSCS required? – No
MS License Requirement – Windows/Exchange Standard Edition
Recovery time – Reboot
Transport Dumpster enabled? – No
Protects from – hardware failure only

Simple Standalone Server Model with HA/DRS

Page 51: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Characteristics
MSCS required? – No
MS License Requirement – Windows/Exchange Standard Edition
Recovery time – Reboot (or manual failover in case of database corruption)
Transport Dumpster enabled? – Yes
Protects from – hardware failure and database corruption

Hub Transport (Transport Dumpster)

HA + LCR for DB Protection/Transport Dumpster

Page 52: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Characteristics
MSCS required? – Yes
MS License Requirement – Windows/Exchange Enterprise Edition
Recovery time – Immediate
Transport Dumpster enabled? – Yes
Protects from – hardware failure, database corruption, and application failure

Hub Transport (Transport Dumpster)

HA + CCR for Faster Failover

Page 53: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Remote Site Recovery

Page 54: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

VMware Site Recovery Manager orchestrates the failover of entire datacenters, including Exchange deployments, and enables DR tests to be conducted frequently.

Site Recovery Manager

Page 55: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Customer Success Stories

Page 56: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

VMware: Business and User Characteristics

Business Type: Software Company
3 physical locations

Primary datacenter in Palo Alto

DR Site Santa Clara

Palo Alto (FSW)

1 gig connection between facilities

Users (6,500 and growing)
1,000 extra-large mailboxes (5 GB)

5,500 regular mailboxes (2 GB)

Mailbox profiles
2 GB mailbox size for regular mailboxes

5 GB mailbox size for extra-large mailboxes

Average 1,000 mailboxes per VM

Page 57: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

VMware: Technical Characteristics

System-Wide

Active Directory 2003

Exchange 2007

CCR over stretch LAN used for Mailbox replication

1 Gb connection between sites

Palo Alto Datacenter
13 Exchange Servers

4 Hub/CAS

9 Mailbox Servers

FibreChannel SAN

3 DC/GC VMs

Santa Clara Site
12 Exchange Server VMs

3 Hub/CAS (redirect and routing only)

9 Mailbox VMs

3 DC/GC VMs

Page 58: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

VMware: Architecture Diagram

Page 59: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Plymouth-U: Business and User Characteristics

Business Type: University
1 physical location
Users (46,000)

Students

Faculty

Mailbox profiles
4 Mailbox VMs

1 CAS VM

Hub role loaded on Mailbox server(s)

Average 11,500 mailboxes per VM

Page 60: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

Plymouth-U: Customer Quote

“The university has virtualized 50,000 Exchange 2007 mailboxes on VMware Infrastructure. We not only have a more manageable and flexible Exchange environment, but we have replaced Microsoft clustering with VMware’s built-in high availability solutions such as HA and VMotion. We couldn’t be happier with the uptime and performance of our Exchange implementation on VMware. VMware technology works for small companies all the way up to massive financial institutions. And clearly, it has worked for us.”

Adrian Jane
Head of Faculty Support and Strategy
University of Plymouth

Page 61: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

UTB: Business and User Characteristics

Business Type: University
2 physical locations

Primary datacenter

DR Site 5 hours away

20 megabit connection between facilities

Users (25,000 and growing)
23,000 students

2,000 faculty/staff

Mailbox profiles
30 MB size for students (light users) – standalone servers

Unlimited size for faculty/staff (heavy users) – Microsoft Clustered servers

Average 4000 mailboxes per server

Page 62: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

UTB: Technical Characteristics

System-Wide

Active Directory 2003

Exchange 2003

CA XOSoft used for Mailbox replication

20 megabit connection between sites

Primary Datacenter
11 Exchange Servers

4 Front-End Servers

7 Mailbox Servers

1 two-node cluster (Active/Passive)

FibreChannel SAN

Production AD and DNS Servers

DR Site
7 Exchange Server VMs

1 Front-End VM

6 Mailbox VMs

1 AD VM

1 DNS VM

No shared storage

Page 63: Design, Deploy, and Optimize Exchange 2007 on VMware Infrastructure

UTB: Architecture Diagram