VIRTUALIZING MICROSOFT EXCHANGE SERVER WITH HYPER-V
SESSION CODE: VIR-EXL308
Glen Gooda, Senior Consultant, Microsoft
What We Will Cover
► Why Microsoft Virtualization for Microsoft Server Applications?
► Base Hypervisor Performance
► ESG Performance Results
► Exchange Server Virtualization Best Practices
► Links to Reference Material
Trends – Changing Market Landscape
► Virtualization is exploding, resulting in VM proliferation and impacting OS share
[Charts: OS share – Licensed Windows 61%, Unpaid Windows 11%, Linux 21%, Unix 6%, Other 1%; physical vs. logical server unit shipments, 2005–2012]
The number of physical server shipments used for virtualization will grow to 1.7M+ in 2012, a CAGR of 15%
19% of physical server shipments will be used for virtualization, up from 11.7% in 2007
IDC Server Virtualization Forecast
Why Microsoft Virtualization for Microsoft Server Applications
Microsoft Virtualization for Server Applications

[Diagram: Microsoft Virtualization – Windows Server 2008 R2 Hyper-V & System Center – as the virtualization and management platform beneath Microsoft server applications (databases, communication, collaboration), supporting business critical applications, enterprise applications, and line of business (LOB)/custom applications]
Microsoft Virtualization: The Best Choice for Microsoft Server Applications
Built for Windows
*Built-in virtualization with one-stop support
*Large partner ecosystem
*Increased deployment options

Complete Management Solution
*Deep application knowledge
*Physical & virtual management
*Cross-platform & hypervisor support

Low Cost, Complete Solution
*A comparable solution can cost almost 4 times more†
*Lower ongoing costs

Virtualization-Friendly Licensing
*Only available with Microsoft Virtualization
†Based on a comparison of Microsoft® System Center Server Management Suite Datacenter with VMware® vSphere Enterprise Plus with VMware vCenter Server. Assumes a five-host configuration, 2 processors on each host, 2 years of support costs for both products, and no operating system costs included. The Microsoft solution can use either the free Microsoft Hyper-V Server 2008 R2 hypervisor or an existing Windows Server 2008 R2 hypervisor. Based on Microsoft estimated retail prices and published VMware prices available at https://www.vmware.com/vmwarestore as of 08/04/2009 for purchases in the United States. Actual reseller prices may vary.
Windows Server 2008 R2 SP1
► Many of the scenarios discussed leverage features new to R2 and R2 SP1
► Live Migration & HA
► New processor support – improved performance & lower costs
► Enhanced scalability – greater VM density & lower TCO
► Networking enhancements – improved performance & 10 GbE ready
► Dynamic VM capabilities
► Enhancements to Server Core – easier management & lower TCO
► Dynamic Memory & RemoteFX in SP1
Hyper-V Configuration Guidelines
► Hyper-V Root Configuration
– Plan for a 1 GB+ memory reserve for the management OS in the root partition
– Plan for one dedicated NIC for management purposes
– Plan (ideally) for one dedicated NIC for live migration
– Separate LUNs/arrays for the management OS, guest OS VHDs, and VM storage
– Management OS and VHD LUNs should employ RAID to provide data protection and performance
– This is a challenge for blades with only two physical disks
(c) 2011 Microsoft. All rights reserved.
Hyper-V Configuration Guidelines
► Hyper-V Guest Configuration
– Fixed-size VHDs for the virtual OS
• Account for page file consumption in addition to OS requirements: OS VHD size (minimum 15 GB) + VM memory size = minimum VHD size
– Account for space needed by additional files per VM
• Example for SQL: OS VHD size + VM memory size + data files + log files
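The VHD sizing rule above can be expressed as a quick calculation. This is a minimal sketch, not a Microsoft tool; the helper name and parameters are hypothetical, while the 15 GB OS floor and the memory/extra-file additions come from the slide:

```python
def min_vhd_size_gb(vm_memory_gb, os_vhd_gb=15, extra_files_gb=0):
    """Minimum fixed VHD size per the slide's rule of thumb:
    OS VHD size (at least 15 GB) + VM memory size (page file headroom)
    + any extra per-VM files (e.g. SQL data and log files)."""
    return max(os_vhd_gb, 15) + vm_memory_gb + extra_files_gb

# A guest with 16 GB of RAM needs at least a 31 GB fixed VHD
print(min_vhd_size_gb(vm_memory_gb=16))         # 31
# The SQL-style example: add (illustrative) data/log file space on top
print(min_vhd_size_gb(16, extra_files_gb=100))  # 131
```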
Dynamic Memory in SP1
► Overview
– A memory management enhancement for Hyper-V
– Enables customers to dynamically grow and shrink a VM's memory
– Available as a feature in Windows Server 2008 R2 SP1
► Benefits
– Better consolidation ratios with predictable performance
– Enables linear scalability for deployment environments
– Designed for production use
– Recommendations for use with server applications are still being evaluated
► With the SP1 release, Hyper-V meets all the Gartner/Burton Group requirements for an enterprise-ready hypervisor
How does it work?
► VM memory configuration parameters:
– Initial (what the VM boots with)
– Maximum (what the VM can grow to)
► Memory is pooled and dynamically distributed across VMs
► Memory is dynamically allocated/removed based on VM usage with no service interruption
► Guest enlightened: guests & Hyper-V work TOGETHER
► Memory is added and removed via synthetic memory driver (memory VSC) support
How is it different?
► There is a difference between memory overcommit as a feature and actually overcommitting memory
– We have warned of the dangers of overcommitting memory
– We pushed back against overcommit from a marketing perspective
► Dynamic Memory allows for predictable, consistent performance even after memory is overcommitted
► The key is avoiding the performance cliff that is possible with the VMware solution
► The features VMware has do NOT provide significantly more scale than Hyper-V with Dynamic Memory does (despite what VMware will tell our customers)
How does VMware do it?
► Memory ballooning and un-ballooning
– Set VM memory, reservation, and limit
– Memory is added and removed as necessary, between the reservation and the limit
► Memory compression
► Page sharing
– Deduplicates common memory pages
– Affected by ASLR and SLAT
► Hypervisor memory paging
– Pages at the hypervisor level when physical memory is completely committed
► Compression, page sharing, and hypervisor paging are only used after all physical memory is committed, which neither VMware nor Microsoft recommends!
Base Hypervisor Performance
Project Virtual Reality Check
► Available at www.virtualrealitycheck.net
► Performed by Ruben Spruijt and Jeroen van de Kamp
► Not sponsored by any one company, although VMware and Citrix have assisted the site
► Results are focused on running and replacing Terminal Server workloads only, on vSphere, Hyper-V, and XenServer
► Results are not for redistribution or validation, although they are public
► Phase II results from February 2010, with a significant increase in vSphere performance
Project VRC Results
► Enabling EPT/RVI results in a significant increase in the capacity of VMs running TS
– vSphere – 90% increase
– XenServer – 95% increase
– Hyper-V – 154% increase
► When scaling x86 TS VMs without Hyper-Threading, vSphere is 5% better than both Xen and Hyper-V
Project VRC Results
► When scaling x86 TS VMs with Hyper-Threading, Xen and Hyper-V are 15% better than vSphere
– When scaling up to 100 TS sessions, response times for all three hypervisors are fairly equal
– Beyond 100 sessions, vSphere response times increase with each new session
► When scaling x64 TS VMs, Xen and Hyper-V are within 13.6% of bare metal, and are 27% better than vSphere
2010 ESG Paper
► Third-party performance validation white paper, sponsored by Microsoft
► Key Points
– Hyper-V is easy to install and get running for administrators familiar with Windows
– Hyper-V provides high availability with transparent and automatic failover of virtual machines (VMs)
– Hyper-V Live Migration provides zero-downtime migration of VMs between Hyper-V servers
– Most importantly, the report shows that Hyper-V performance versus physical is excellent: 95% to 99% of the performance of physical disks, and 89% to 98% of the performance of the tested workloads versus physical
2010 ESG Paper
► All this strongly supports the position that Hyper-V performance is not a bottleneck or blocker to implementation, and that customers can run almost all their workloads on Hyper-V
► Publicly available; you can distribute it to your customers/clients
► http://www.enterprisestrategygroup.com/2010/07/microsoft-hyper-v-r2-scalable-native-server-virtualization-for-the-enterprise/
► http://www.infostor.com/index/articles/display/5976242552/articles/infostor/esg-lab-review/2010/july-2010/microsoft-hyper-v.html
Hyper-V Fixed Virtual Hard Disk Performance Overhead Analysis
[Chart: Native vs. fixed VHD IO performance analysis (Windows 2008 R2 physical disk (native) vs. fixed VHD (VM), ESG Lab Iometer workloads) – disk IOs per second (IOPS) for SQL Log, OLTP Database, File Server, and Exchange DB workloads; series: Physical Disk (Native) vs. Fixed VHD (VM)]
Hyper-V R2 Application Overhead Analysis
[Chart: Virtual vs. physical performance analysis (relative difference in elapsed time per operation; Windows 2008 R2 Datacenter) – SQL query, subsequent copies, directory-level copy, and application install, on a 0–100% scale; series: Native vs. Hyper-V R2 VM]
Microsoft/Intel iSCSI Test
► Used Windows Server 2008 R2, Intel Xeon 5500 processors, and Intel 10 Gbps Ethernet adapters
► Reached over one million IOPS over a single 10 Gbps Ethernet link using a software iSCSI initiator on native hardware
► Reached over 700,000 IOPS over a single 10 Gbps Ethernet link using a software iSCSI initiator on Hyper-V to the guest OS
Microsoft/Intel iSCSI Test
► Native Performance
► In-Guest VM Performance
Microsoft/Intel iSCSI Test
► iSCSI performance can be as good as or better than either Fibre Channel or FCoE. iSCSI can be a compelling option for those who don't want to invest in FC.
► Windows Server and Hyper-V are not a limiting factor in IO performance. There shouldn't be any significant concern around IO when virtualizing with Hyper-V.
► More details (along with a great Photoshop picture) at http://gestaltit.com/all/tech/storage/stephen/microsoft-and-intel-push-one-million-iscsi-iops/
ESG Performance Results
ESG Lab Summary 2011
► Available online: http://www.microsoft.com/virtualization/en/us/solution-business-apps.aspx
► Useful links, resources, case studies, white papers, and webcasts
2011 ESG Test Lab - Physical
[Diagram: EMC CX4-960 with 155 15K RPM FC disk drives; RAID-10 pools – Data (88), Logs (16), OS (24), Apps (16); SAN connectivity of 2x4 Gbps FC per server; HP BL680c servers with up to 24 cores and 128 GB RAM per blade; workloads: SharePoint, Utilities, Exchange, SQL Server]
http://download.microsoft.com/download/F/0/9/F09FEDCA-1FEA-4DB4-B131-563145BF0B6F/ESG%20Preso%20Microsoft%20Hyper-V%20Performance%20Exchange%20Mar%2011_Wide.pdf
2011 ESG Test Lab - Virtual
[Diagram: a Hyper-V R2 host running four Exchange VMs, all connected to the SAN]
Application: Exchange 2010
VM configuration: 4 vCPU, 4 GB RAM, Microsoft Windows Server 2008 R2 SP1
Hypervisor: Microsoft Hyper-V R2
Physical OS: Windows Server 2008 R2 SP1
Virtual machine images: fixed VHD
Exchange data and logs: fixed VHD

Microsoft Exchange Server Jetstress 2010
► Scale from 5,000 to 20,000 simulated Exchange mailboxes
► 5,000 mailboxes per VM
► 250 MB mailboxes
► 0.15 IOPS per mailbox
► 2 databases/2 copies per VM
http://download.microsoft.com/download/1/7/F/17FB551C-0905-4A04-AB46-2EBA616CFDF3/ESG%20Preso%20Microsoft%20Hyper-V%20Performance%20SharePoint%20Mar%2011_Wide.pdf
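The Jetstress profile above implies a simple back-of-envelope IOPS target. A minimal sketch (the helper name is hypothetical; the 0.15 IOPS/mailbox and 5,000 mailboxes/VM figures come straight from the slide):

```python
def jetstress_iops(mailboxes, iops_per_mailbox=0.15):
    """Target database IOPS for a simulated mailbox population."""
    return mailboxes * iops_per_mailbox

# Per-VM load: 5,000 mailboxes at 0.15 IOPS each
print(jetstress_iops(5_000))   # 750.0
# Full test: 20,000 mailboxes across four VMs
print(jetstress_iops(20_000))  # 3000.0
```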
2011 Exchange Workload Results
[Chart: Hyper-V R2 virtual application workload scalability (up to 20,000 Exchange 2010 mailboxes, Windows 2008 R2 SP1) – Exchange 2010 mailboxes (0–25,000) and Exchange DB read response time in ms (0–25) versus the number of Hyper-V R2 virtual machines (1–4)]
2011 Exchange Workload Results
[Chart: Hyper-V R2 virtual application workload scalability (up to 20,000 Exchange 2010 mailboxes, Windows 2008 R2 SP1) – Exchange IO operations per second (0–8,000) versus the number of Hyper-V R2 virtual machines (1–4)]
Exchange Tested Solutions
► Microsoft and participating server and storage vendors
► Key design decision points for common scenarios
► Cost-effective resource planning (server, storage, network)
► Database Availability Group and multi-site designs for HA
► Documented results of exhaustive performance tests

► 500 mailboxes in a single site running Hyper-V on Dell servers
► 9,000 mailboxes in two sites running Hyper-V on Dell M610 servers, Dell EqualLogic storage, and F5 load balancing solutions
► 15,000 mailboxes in two sites running Hyper-V on Unisys ES7000 servers and Hitachi Adaptable Modular Storage 2000 family
► 16,000 mailboxes in a single site deployed on IBM and Brocade hardware
► 20,000 mailboxes in two sites running Hyper-V on Dell R910 servers, EMC CLARiiON storage, and Brocade network solutions
► 32,400 mailboxes in three sites running Hyper-V on Cisco Unified Computing System blade servers and EMC CLARiiON storage
Exchange Tested Solutions with Hyper-V
[Chart: Exchange 2010 mailboxes (0–35,000) for each of the four Exchange Tested Solutions]
Exchange Tested Solutions on Hyper-V

Solution | Mailboxes | Sites | Cores | RAM (GB) | Drives | Hyper-V R2 VMs (Mbox / CAS+Hub / Total)
1        | 9,000     | 2     | 36    | 432      | 144    | 9 / 9 / 18
2        | 15,000    | 2     | 96    | 512      | 136    | 12 / 8 / 20
3        | 20,000    | 2     | 128   | 768      | 160    | 16 / 16 / 32
4        | 32,000    | 3     | 96    | 1,152    | 432    | 24 / 12 / 36

1. http://technet.microsoft.com/en-us/library/gg513522.aspx
2. http://www.microsoft.com/downloads/en/details.aspx?FamilyID=4D83E5AC-7D9A-47C6-A843-A5520D118FC4
3. http://www.microsoft.com/downloads/en/details.aspx?FamilyID=6FD7E23F-30C8-4FB7-BCA3-A5DE9AA9A8E4
4. http://www.emc.com/collateral/hardware/white-papers/h7337-exchange-unified-cisco-hyper-v-wp.pdf
2011 Exchange Results Summary
► 20,000 simulated Exchange 2010 users on a single server in four Hyper-V R2 VMs with Jetstress
► Exchange tested solutions from 9,000 through 32,000 mailboxes
► The 32,000-mailbox solution was deployed in 36 Hyper-V R2 VMs across three sites
► The performance, scalability, and low overhead of Hyper-V R2 can be used to reduce costs and improve the manageability, flexibility, and availability of consolidated Exchange deployments
Exchange Server Virtualization Best Practices
Why Virtualize Exchange
Take advantage of virtualization capabilities to optimize server utilization:
► Consolidate under-utilised servers onto a single virtualized host
► Lower costs by reducing space needs and power consumption
► Rapid provisioning of a mobile infrastructure

[Diagram: datacenter host running VMs – Exchange 2010 CAS & Hub (x2, behind NLB), Exchange 2010 MBX (x2, in a DAG), Exchange 2010 UM, file & print server, management server, database server, and domain controllers DC 1 and DC 2]
Updated Support Guidance
– Support for virtualized Exchange servers since Exchange Server 2007 SP1
– The Exchange 2010 release continued support for virtualization
– Expanding support scenarios
– Release of the Exchange 2010 Virtualization Guidance whitepaper

[Support matrix across Exchange 2007, Exchange 2010 RTM, and Exchange 2010 SP1 (now):]
– Any hypervisor validated under the Windows SVVP
– All storage used by an Exchange guest must be block-level storage
– Virtual storage must be fixed size, SCSI pass-through, or iSCSI
– Taking virtual snapshots of an Exchange guest is not supported
– Virtual processor-to-logical processor ratio no greater than 2:1
– Exchange HA in combination with hypervisor clustering or migration
– Unified Messaging role supported
Support Guidance (continued)
► Storage
– Fixed-size VHD, SCSI pass-through, or iSCSI
– Dynamic or thin disks are not supported
– Differencing or delta disks are not supported
– Disk space should be 15 GB + VM memory size
Support Guidance (continued)
► Other considerations
– Only the management software should run in the parent partition; no other roles
– VM snapshots are not supported
– A 2:1 vCPU-to-physical-CPU ratio is supported, but 1:1 is best
– Current guidance is not to use any memory management or oversubscription features
Deployment Recommendations
► The Exchange application is not 'virtualization aware'
► Core Exchange design principles still apply
– Design for performance, reliability, and capacity
– Design for usage profiles (CAS/MBX)
– Design for message profiles (Hub/Edge)
Deployment Recommendations
► Virtualization design principles now apply
– Design for performance, reliability, and capacity
– Virtual machines should be sized specific to the Exchange role (Edge, Hub, CAS, MBX, multi-role)
– Hosts should be sized to accommodate the guests they will support
Best Practices: Capacity, Sizing and Performance
Sizing Process Overview
► Start with the physical server sizing process
– Calculator & TechNet guidance
► Account for virtualization overhead
► Determine VM placement
– Account for VM migration if planned
► Size root servers, storage, and network infrastructure
Guest Sizing Rules of Thumb
► Size the Mailbox role first
– CPU ratios for other roles are based on Mailbox role sizing
– Mailbox role performance is key to user experience
– High availability design significantly impacts sizing
► Don't oversubscribe resources
– Size based on anticipated peak workload; don't under-provision physical resources
► Don't forget network needs
Guest Sizing for Unified Messaging
► Newly supported for virtualization
– Requires Exchange 2010 SP1 (or greater)
► The role is susceptible to poor voice quality and/or latency if undersized
► Requires a minimum of 4 virtual processors
► UM must be able to utilize physical processors on demand
► Consider network requirements (low latency, sufficient bandwidth) to meet UM needs
► Tests show that a 4 vCPU/16 GB VM can handle 40 concurrent calls with Voice Mail Preview and 65 calls without
Root Server Sizing
► Root server storage sizing includes space for the OS & required hypervisor components, plus connectivity to storage for guest VMs
– Don't forget about high availability of storage if required (multi-path HBAs or iSCSI NICs, redundant paths, etc.)
► Network sizing is critical: number of interfaces and bandwidth
– Consider app connectivity, storage networking, heartbeats, CSV, VM migration
Root Server Sizing
► CPU sizing should include root needs plus per-guest overhead
– Follow hypervisor vendor recommendations
► Memory sizing should not assume oversubscription
– Follow hypervisor vendor recommendations
– Provide memory for the root plus the sum of running VM requirements
– Memory for the Hyper-V root = the larger of 512 MB or the per-VM value (summed for running VMs) of 32 MB for the first 1 GB of virtual RAM + 8 MB for each additional GB of virtual RAM
• Example: 8 VMs running, each with 32 GB RAM. Root requires 8 * (32 MB + 8 MB * 31) = 2,240 MB
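The root memory reserve rule above can be checked with a small calculation. A minimal sketch (the helper name is hypothetical; the 32 MB/8 MB increments and 512 MB floor are as quoted in the slide):

```python
def hyperv_root_memory_mb(vm_ram_gb):
    """Hyper-V root memory reserve per the slide's rule: for each
    running VM, 32 MB for its first GB of virtual RAM plus 8 MB for
    every additional GB, summed over VMs, with a 512 MB floor."""
    per_vm = [32 + 8 * (gb - 1) for gb in vm_ram_gb]
    return max(512, sum(per_vm))

# Slide example: 8 running VMs, each with 32 GB of RAM
print(hyperv_root_memory_mb([32] * 8))  # 2240
# A single small VM never drops the reserve below the 512 MB floor
print(hyperv_root_memory_mb([2]))       # 512
```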
Virtual Processors
► Scale up CPU on VMs as much as possible
– Don't deploy 4 x 1 vCPU machines instead of 1 x 4 vCPU machine: take advantage of Exchange scalability
► Don't oversubscribe CPUs unless consolidating with P2V, or a similar scenario
► Generally assume 1 logical CPU == 1 virtual CPU; don't count a hyperthreaded (SMT) CPU
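The 2:1 supported ceiling and 1:1 recommendation above amount to a quick planning check. A minimal sketch (the helper and the example counts are illustrative, not from the deck):

```python
def vcpu_ratio(total_vcpus, logical_cpus):
    """Virtual-to-logical processor ratio for a host. Per the support
    guidance, 2:1 is the supported maximum and 1:1 is best; do not
    count hyperthreaded (SMT) logical CPUs toward logical_cpus."""
    return total_vcpus / logical_cpus

# 16 vCPUs on 16 physical cores: the recommended 1:1
print(vcpu_ratio(16, 16))  # 1.0
# 32 vCPUs on 16 cores sits exactly at the 2:1 supported limit
print(vcpu_ratio(32, 16))  # 2.0
```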
Best Practices: Server Deployment
Locating Virtual Machines
► VM placement is important for high availability
► Don't co-locate DAG database copies on the same physical host
► Exchange is unaware of VM location relative to other VMs
– No path correction in transport to avoid data loss
► Ensure peak workload can run in standard VM locations
– OK to move temporarily for maintenance, assuming high availability requirements are met and the current workload can be serviced
Storage Decisions
► Exchange performance and health are highly dependent on the availability and performance of storage
► Many options for presenting storage to VMs
– VHD
– FC
– iSCSI, FCoE
– DAS
► Optimize for performance and general design goals
– We recommend looking for options that provide large mailboxes at low cost
Storage Decisions
► Exchange storage should be on spindles separate from the guest OS VHD physical storage
► Exchange storage must be fixed VHD, SCSI pass-through, or iSCSI
– The preference is to use SCSI pass-through to host queues, DBs, and log file streams
– Hyper-V Live Migration suggests Cluster Shared Volumes with fixed VHD (faster "black-out" period)
► FC/SCSI HBAs must be configured in the root OS with LUNs presented to VMs as pass-through or VHD
Storage Decisions
► Internet SCSI (iSCSI)
– Standard best practices for iSCSI-connected storage apply (dedicated NIC, jumbo frames, offload, etc.)
– An iSCSI initiator in the guest is supported, but account for reduced performance
► Exchange storage must be block-level
– Network attached storage (NAS) volumes are not supported
Exchange VM Deployment
► Exchange setup must be run when the VM is provisioned
– Not "sysprep friendly"
► It is possible to script Exchange setup to fully automate Exchange VM provisioning
► Build a "starter image" with the desired OS, patches, prerequisites, and Exchange install binaries
Best Practices: High Availability & VM Migration
High Availability and Disaster Recovery
► Exchange High Availability definition
– Automatic switchover of application services that doesn't compromise the integrity of application data
– Selection of the "active" data set occurs within the application automatically
► Exchange Disaster Recovery definition
– Manual failover of application services with high retention of data integrity
– Selection of the "active" data set occurs manually, outside the application; the Exchange application provides support to minimize data loss through replication
Exchange 2010 High Availability
► Database Availability Group (DAG)
– A group of up to 16 Exchange Server 2010 Mailbox servers that provides automatic database-level recovery
– Uses continuous log replication and a subset of Windows Failover Clustering technologies
– Can extend across multiple datacenters/AD sites
Exchange 2010 High Availability
► Benefits of Exchange Native Data Protection
– Protection from database, server, or network failure
– Automatic failover protection and manual switchover control are provided at the mailbox database level instead of at the server level
– Support for up to 16 copies; support for lagged copies
Host-Based Failover Clustering
► Host-based failover clustering HA
– Uses host-based failover clustering to automatically fail VMs over to an alternate cluster node in the event of a critical hardware issue (virtualization platform independent)
► What you need to be aware of:
– Not an Exchange-aware solution
– Only protects against server hardware/network failure
– No HA in the event of storage failure / data corruption
– The trend is larger mailboxes = larger database sizes = longer time to recover from data loss = DAG
– Requires a shared storage deployment
VM Migration and Exchange 2010
► Physical computer maintenance
– Operating system/application updates
– Hardware maintenance
► Rebalancing workloads
– Dynamic redistribution of VMs to optimize workload on physical hardware
► Green IT
– 'Off-peak' virtual machine consolidation
VM Cluster & Migration Considerations
► Minimize the "outage" during migration operations
– Consider CSV rather than pass-through LUNs for all Mailbox VM storage
► Disable migration technologies that save state and migrate: always migrate live or shut down completely
► Consider relaxing cluster heartbeat timeouts
– Cluster nodes are considered down after 5 seconds by default
► Be aware of additional network interface requirements for VM migration technologies – size the network appropriately
Best Practices: Coexistence With Other Workloads
Private Cloud Considerations
► Given fixed resource requirements, isolate Exchange within the private cloud as much as possible
► Be prepared to apply different resource management policies to Exchange VMs vs. other, less mission-critical workloads
► Use the private cloud as pre-built infrastructure, not necessarily dynamic
– Based on deployment sizing, understand overall resource requirements and allocate accordingly from the pool of cloud resources
Resource Allocation & Balancing
► Disable hypervisor-based auto-tuning features
– Dynamic Memory
– Storage tuning/rebalancing
► Exchange Mailbox role IOPS are heavily dependent on the ESE cache; Dynamic Memory can negatively impact this
► Size for calculated resource requirements – no reliance on dynamic tuning should be needed
Enhancing Exchange on Hyper-V
System Center Integration
• Protect Exchange up to every 15 minutes
• 'Lossless restore' – within 1 committed transaction
• Understands & protects scenarios such as DAG
• Proactively monitor Exchange infrastructure
• Drive down the cost of ensuring service levels
• The new Correlation Engine helps to monitor more efficiently
• Rapidly provision Exchange infrastructure
• Automate migrations for load balancing of VMs
• Seamlessly migrate workloads
Citrix NetScaler VPX
► Hyper-V virtual appliance
– Can be deployed through SCVMM
► Speed up deployment with application templates & configuration wizards
► Once deployed, NetScaler will load balance Exchange 2010 Client Access servers
► Ships with a PRO-enabled OpsMgr management pack
– Load balance, optimize, secure & report
– Dynamically scale infrastructure based on monitoring
Partner Evidence: Exchange Virtualization
► HP recommended configurations for Exchange 2010 virtualization using Hyper-V R2
– Sizing for 20,000 users, 512 MB mailboxes
• All roles virtualized and Live Migration for MBX servers
• Hardware: HP ProLiant BL460c G6 server blades and HP StorageWorks 8400 Enterprise Virtual Array (EVA8400) storage systems
• http://h20195.www2.hp.com/V2/GetPDF.aspx/4AA1-2127ENW.pdf
– Sizing for 5,000 users, 256 MB mailboxes
• All roles virtualized and Live Migration for MBX servers
• Hardware: HP ProLiant BL460c G6 server blades and HP LeftHand P4500 storage systems
• http://h20195.www2.hp.com/V2/GetPDF.aspx/4AA1-9895ENW.pdf
Summary
► Why Microsoft Virtualization for Microsoft Server Applications?
► Why Hyper-V?
► ESG Performance Results
► Exchange Server Virtualization Best Practices
► Surrounding Ecosystem
► Resources
Related Content
Breakout Sessions:
► EXL303: Exchange Server 2010: High Availability Concepts
► EXL304: Load Balancing with Microsoft Server 2010
► EXL308: Exchange Virtualisation: Is it right for you?
► EXL306: Best Practices for Virtualization of Microsoft Exchange 2010
► SVR-VIR308: iSCSI: Getting Best Performance, High Availability and Virtualisation
► VIR307: Understand How Microsoft Virtualisation Compares to VMware
► VIR/COS301: Best Practises for Private Cloud Implementation
Track Resources
You can also find the latest information about our products at the following links:
Windows Azure - http://www.microsoft.com/windowsazure/
Microsoft System Center - http://www.microsoft.com/systemcenter/
Windows Server - http://www.microsoft.com/windowsserver/
Cloud Power - http://www.microsoft.com/cloud/
Private Cloud - http://www.microsoft.com/privatecloud/
Enrol in Microsoft Virtual Academy Today
Why enrol, other than it being free? The MVA helps improve your IT skill set and advance your career with a free, easy-to-access training portal that lets you learn at your own pace, focusing on Microsoft technologies.
What do I get for enrolling?
► Free training to help you become the cloud hero in your organization
► Help mastering your training path and getting recognition
► Connect with other IT pros and discuss the cloud
Where do I enrol? www.microsoftvirtualacademy.com
Then tell us what you think: [email protected]
© 2010 Microsoft Corporation. All rights reserved. Microsoft, Windows, Windows Vista and other product names are or may be registered trademarks and/or trademarks in the U.S. and/or other countries.
The information herein is for informational purposes only and represents the current view of Microsoft Corporation as of the date of this presentation. Because Microsoft must respond to changing market conditions, it should not be interpreted to be a commitment on the part of Microsoft, and Microsoft cannot guarantee the accuracy of any information provided after the date of this
presentation. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS PRESENTATION.
Resources
► Sessions On-Demand & Community – www.msteched.com/Australia
► Resources for IT Professionals – http://technet.microsoft.com/en-au
► Resources for Developers – http://msdn.microsoft.com/en-au
► Microsoft Certification & Training Resources – www.microsoft.com/australia/learning