Microsoft Exchange Server 2010 storage
overview and HP storage options
Technical white paper
Table of contents
Exchange Server 2010 storage is all about options… .............................................................................. 2
Exchange technology changes .............................................................................................................. 2
Microsoft requirements and recommendations ......................................................................................... 8
Exchange high availability .................................................................................................................. 11
Drive technologies ............................................................................................................................. 17
Storage controller technologies ............................................................................................................ 20
HP storage platforms .......................................................................................................................... 23
Other considerations .......................................................................................................................... 28
Summary .......................................................................................................................................... 30
For more information .......................................................................................................................... 31
Exchange Server 2010 storage is all about options…
With Microsoft® Exchange Server 2010, Microsoft has made possible the cost-effective
implementation of large mailboxes for messaging environments. Whether you want to better manage
Microsoft Office Outlook personal storage (PST) files in your company, add voicemail capability to
your Exchange Server implementation or simply give your users enough capacity to keep and access
years of historical e-mail information, it is now easier to implement and less expensive.
In the past, Exchange had rigid storage requirements that called for high-performance storage devices. For Exchange Server 2010, Microsoft rewrote the manner in which Exchange accesses data on disk. These changes increase per-mailbox storage capacity, giving the average user a large multi-gigabyte mailbox that performs well while lowering the overall cost of storage. With the increased amount of data stored in Exchange, additional high availability features have been added to maintain each user's access to Exchange data.
This document gives an overview of the storage changes Microsoft has made in Exchange Server
2010 and presents an overview of the various HP storage devices and their use in an Exchange
2010 implementation.
This document is not a recommendation of a particular type of storage; it describes the range of options HP provides that should be considered when designing a storage solution for Exchange 2010 (and yes, Exchange 2010 is still supported on a Storage Area Network (SAN)). All messaging environments are unique to a particular company and are created to fulfill business requirements and meet service level agreements (SLAs).
Exchange Solution Reviewed Program
Microsoft makes available the Exchange Solution Reviewed Program 3.0 (ESRP 3.0) for vendors to
test their Exchange Server 2010 storage configurations and publish test results. The ESRP program
shows sample configurations developed by vendors and can offer a variety of design ideas to provide
input into the planning phase of Exchange Server 2010 implementation. HP publishes ESRP test results
for solutions on each of our storage products and the configuration information for a particular type of
storage is published in the ESRP with the test results.
The information provided in the ESRP configurations should not be used in place of testing a
configuration prior to deploying it.
Further information is available from: http://technet.microsoft.com/en-us/exchange/ff182054.aspx.
Exchange technology changes
In order to store large amounts of Exchange data in a cost-effective manner, Microsoft has enabled
the use of larger capacity, lower cost hard disks which do not have the performance of enterprise
class storage. The use of larger hard drives with a slower rotational speed required a change to the
manner in which Exchange reads and writes data to the database. While it is possible to effectively
use slower midline serial-attached SCSI (SAS) or serial ATA (SATA) drives, they may not be
appropriate for all situations.
With Exchange Server 2007, Microsoft focused on making more efficient use of storage by reducing
the performance demands on disks using features such as an increased amount of database cache
and larger database page size. Exchange Server 2010 continues this optimization process resulting
in a further reduction of storage I/O of up to 70%. For Exchange Server 2010, the average per mailbox
performance requirement is now 0.12 IOPS.
Note
The heavy workload profile for Exchange Server 2010 is 100 messages
sent and received per user per day. This workload is the same as the 20
sent and 80 received workload in Exchange 2003 and Exchange 2007.
The average message size in Exchange 2010 is 75KB.
Figure 1. Exchange IOPS per mailbox by version (using heavy workload profile)
The 0.12 IOPS figure is for Exchange 2010 user workload only and does not include other
performance requirements like Desktop Search and BlackBerry Enterprise Server (BES) or other add-ins
or applications.
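As a back-of-the-envelope illustration of how the 0.12 IOPS figure is applied, the host IOPS requirement scales linearly with mailbox count. The sketch below is a simplified assumption-laden example: the 20% overhead allowance for add-ins such as BES and desktop search is an illustrative figure, not a published Microsoft or HP number.

```python
# Rough transactional IOPS estimate from the per-mailbox figure above.
# The 20% overhead allowance for add-ins (BES, desktop search, etc.)
# is an illustrative assumption, not a Microsoft or HP number.
def required_iops(mailboxes, iops_per_mailbox=0.12, overhead_factor=1.20):
    """Return the approximate IOPS a mailbox server's storage must sustain."""
    return mailboxes * iops_per_mailbox * overhead_factor

print(required_iops(5000))  # 5000 heavy-profile users
```

Real designs should be based on measured workloads and the sizing tools discussed later in this document.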
Exchange store I/O optimization
The Exchange store schema, which determines how data is organized within the Extensible
Storage Engine (ESE) database, has been rewritten to perform fewer, larger sequential I/Os in
place of many small random I/Os. A predominantly sequential I/O pattern matches the best
performance profile of midline drives: when I/O is sequential, the disk head moves less, allowing
higher throughput than random I/O, which requires frequent disk head movement to access data.
Throughout a 24-hour cycle, disk I/O in Exchange varies and is driven by different sources
ranging from user load to maintenance and backups. In the past, the highest load was usually due to
online maintenance that resulted from housekeeping tasks Exchange performed to clean and update
the database. By default, database maintenance now occurs on a 24 x 7 basis as data is written to
disk to ensure that the database is constantly maintained in an optimal state. Database maintenance
can be set to occur during specific time periods like it has been scheduled in previous versions of
Exchange. For more information on the Exchange database performance factors, see:
http://technet.microsoft.com/en-us/library/ee832791.aspx.
Exchange Server 2010 high availability
With the ability to hold an increased amount of data in Exchange comes the need to protect it from
loss or failure and to make it quickly accessible in the event of a problem. To meet this need, high
availability features have been added to Exchange Server 2010 that build on the technology released
in Exchange Server 2007. With Exchange Server 2010, database resiliency is implemented at the
individual database level, and failovers (unplanned) or switchovers (planned) are expected to take
approximately 30 seconds at the mailbox database level. Since all client connections in Exchange
Server 2010 are made to the Client Access Server (CAS), a mailbox database failover or
switchover (referred to as *over) is transparent to the user.
The use of multiple copies of the database permits additional features such as page patching – the
ability to replace a corrupt page in the database from another database copy – when encountering a
logical corruption (-1018) error or on a lost flush where data is not properly written to disk. The use of
a lag copy of a database enables recovery to a specific point in time by creating a delay in
committing transaction logs to the database lag copy for up to 14 days.
Dumpster 2.0
In Exchange Server 2010, a new version of the Dumpster (2.0) has been implemented which allows
for better control and recovery of deleted items. The need to recover deleted items results from many
causes ranging from end user requests to compliance or legal hold requirements. With Dumpster 2.0,
messages and calendar item versions are maintained for easy recovery.
A feature of Dumpster 2.0 ensures that the contents of the dumpster move with the user in the event
that their mailbox is moved within the Exchange organization. This ensures that data needed for
compliance searches is available. Part of the Dumpster feature is the retention of items under legal
hold. When a mailbox is placed in legal hold status, items are retained and cannot be deleted by the
user and the dumpster continues to grow until legal hold is removed from the mailbox. When using
legal hold, the dumpster has a hard limit of 30 GB of data per mailbox. This can cause a significant
increase in the amount of storage required to hold mailbox data for the mailboxes where legal hold is
turned on. With the ability to move mailboxes and include their dumpster data, the mailboxes under
legal hold can be placed in databases where storage has been allocated to allow the mailboxes to
grow to these large sizes.
Storage Groups
With Exchange Server 2010, the Storage Group object has been removed and each individual
database is associated with its own dedicated transaction log stream. This one-to-one ratio of
database to log stream was used for Exchange 2007 Continuous Replication implementations and is
standard for all Exchange 2010 deployments.
Another difference for Exchange 2010 is that all database objects are stored at the organization
(ORG) level and database names must be unique throughout the ORG. Since a copy of the database
can be activated on any server participating in a Database Availability Group (DAG), it is
recommended that the database name not be tied to a server name. Each database can be moved to
another site or server as needs require, providing deployment flexibility and allowing for
fine tuning. The portability of databases and mailboxes permits the administrator to balance storage
capacity and performance needs throughout the lifecycle of Exchange.
LUN design
Exchange Server 2010 makes available the use of a variety of LUN design options. From the
operating system and application perspective, regardless of the underlying physical storage, a LUN is
presented to hold the various types of Exchange data. When determining the LUN layout, there are
three commonly used models:
One LUN per Database
Two LUNs per Database
Two LUNs per Backup Set
The use of one of the three models is recommended for simplicity and each of the three models has
particular benefits and design constraints.
The single LUN per database model is commonly used with Just a Bunch of Disks (JBOD) designs
when using Exchange 2010 Database Availability Groups (DAGs) for high availability. JBOD
configurations are recommended to have three or more copies of each database to prevent data loss
or interruption of client access to mailbox data. The single LUN per database model places the
Exchange database and its associated transaction logs on the same LUN, isolating each database
and its log set. In the event that the LUN is compromised, only a single database and log set is
affected.
The two LUN per database model is typically used with RAID and SAN designs and separates the
database LUN from the transaction log LUN. One driver for this method is the use of hardware-based
Volume Shadow Copy Service (VSS) to back up and restore individual Exchange databases. The two
LUN per database model separates the transaction log I/O and database I/O as well as only
affecting a single database if either volume is lost.
The two LUN per backup set model is intended for use when grouping all of the databases that are
included in a daily or weekly Exchange full backup. All databases are grouped together on one LUN
and the transaction logs for those databases on the second LUN. This model generally uses fewer
LUNs and is easier to manage but the loss of either the database LUN or the transaction log LUN will
take all of the databases associated with that LUN offline.
With the possibility of up to 100 databases per Exchange 2010 mailbox server, mount points can be
used to deploy the volumes that hold Exchange data. A critical piece to identify when using mount
points is the mount point root (MPR). The mount point root serves as the base for all mount points and
if the MPR goes offline, all mount points under it will fail. For this reason it is recommended to place
the mount point root on the RAID protected C: drive which holds the operating system files. When
using DAGs, if the MPR is on the C: drive and the C: drive fails then all of the databases on that
server will be activated elsewhere.
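As a sketch of such a layout, the following generates mount point paths under a single MPR folder on the C: drive. The root folder name (C:\ExchVols) and the DBnnn naming scheme are hypothetical examples for illustration, not a prescribed convention.

```python
# Illustrative mount-point layout generator. The root folder name
# (C:\ExchVols) and the DB001..DBnnn naming scheme are hypothetical
# examples, not a prescribed Exchange convention.
def mount_point_paths(count, root=r"C:\ExchVols"):
    """Return mount point paths under a single mount point root (MPR)."""
    return [rf"{root}\DB{n:03d}" for n in range(1, count + 1)]

paths = mount_point_paths(3)
print(paths[0])  # C:\ExchVols\DB001
```

With up to 100 databases per server, generating the paths programmatically helps keep the structure consistent across all DAG members.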
Personal Archive
Exchange Server 2010 makes available a new feature called Personal Archives. The Personal
Archives feature makes a secondary online mailbox available to the user to store messages that are
not needed as frequently and can be managed by retention policies. The Personal Archive appears as
another mailbox in the Outlook 2010 Client or in Outlook Web Access (OWA) similar to how a
personal folder (PST) looks. The user must be online with Outlook 2010 Client or OWA to access the
Personal Archive which is stored on the Exchange server.
The use of Personal Archives allows for the implementation of smaller primary mailboxes since
historical messages can be moved into the archive. This will benefit mobile users that use offline mode
in Outlook who will not need to synchronize a large amount of data in their mailbox with the offline
copy of their mailbox (OST). The information will still be available to them when they are connected
to Exchange. The use of a smaller primary mailbox in combination with a Personal Archive permits a
smaller OST file stored on the user's laptop, which can reduce the need to provide users with
new laptops with larger hard drives.
When using Exchange Server 2010 RTM (released to manufacturing), the Personal Archive mailbox is
located in the same database as the user's mailbox. Exchange Server 2010 Service Pack 1 will
change some of the behavior relating to Personal Archives. With SP1, the Personal Archive can be
stored in a different database from the one where the user's mailbox resides and can be placed on a
different server entirely. This option is helpful since a server could be dedicated to the storage of
Personal Archives with a different type of storage attached, since the Personal Archive would not
be accessed as frequently as the user's production mailbox. Another change with Exchange 2010
SP1 is the addition of the Outlook 2007 client to the list of clients that can access the Personal Archive,
which will expand the number of users for whom Personal Archives can be deployed. Source:
http://msexchangeteam.com/archive/2010/04/05/454533.aspx. The features contained in
Exchange Server 2010 Service Pack 1 have not been released and are subject to change.
RAID-less storage
With Exchange Server 2010 a new storage model, RAID-less storage, also known as Just a Bunch of
Disks (JBOD), is available for use when Exchange is deployed using the Database Availability Group
(DAG) high availability model. A JBOD configuration consists of a single physical disk spindle hosting
a data volume without RAID. When using the JBOD model a different storage design can be used that
places the Exchange transaction logs on the same spindle with the Exchange database and content
index files. For most JBOD implementations it is acceptable to put a single database with its
associated transaction log stream and content index files on a single spindle. In the event that a
spindle is lost, that database fails over to another server in the DAG that holds an up-to-date copy of
the database, logs and content index until recovery is performed for the failed disk. Below is a figure
that illustrates the data types hosted on a single spindle in this configuration.
Figure 2. Exchange data content per spindle (example): Database 63%, Logs 7%, Content Index 10%, Free Space 20%
It is important to ensure that sufficient free space is maintained on each spindle in a JBOD
configuration to prevent shut down due to unexpected volume growth.
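A minimal free-space guard along these lines can be scripted. In this sketch, the 20% reserve mirrors the free-space proportion shown in the Figure 2 example; it is an illustrative threshold, not a Microsoft requirement.

```python
import shutil

# Warn when a JBOD spindle's volume drops below a reserve fraction.
# The 20% default mirrors the Figure 2 example; it is illustrative only.
def has_free_reserve(volume_path, reserve=0.20):
    """True if the volume still has at least `reserve` of its capacity free."""
    usage = shutil.disk_usage(volume_path)
    return usage.free / usage.total >= reserve

# Example: check a volume before seeding another database copy onto it.
print(has_free_reserve("."))
```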
When using JBOD, the HP Smart Array controller is used to configure each disk as a single member
RAID0 set also called a logical volume or logical drive. When using this configuration a single
physical disk is configured as a logical volume and presented to Microsoft Windows® and
Exchange. When implementing a JBOD solution, no hot spare can be used and recovery is a manual
process that requires an administrator to replace the failed disk and to recreate the volume before
making it available for Exchange to use and initiate the seeding process to recreate all of the data on
the new volume.
Figure 3. RAID-less storage configuration
When using a JBOD configuration, it is important to review the HP QuickSpecs for each array
controller to determine the number of logical drives that are supported by the array controller. A
logical drive is a unique volume that is presented to the host from the array controller. Most array
controllers support a higher quantity of physical drives than logical drives. For example, the HP Smart
Array P812 Controller supports 64 logical drives according to
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01608507/c01608507.pdf.
Typically, array controllers are used to provide the ability to build RAID sets of multiple drives but
Exchange Server 2010 mailbox resiliency makes available the use of a single spindle in a JBOD
configuration. It is important to ensure that the correct number of array controllers is implemented to
host the required number of logical drives. With an Exchange JBOD configuration, data in the
Exchange databases is copied to other DAG members resulting in database redundancy being
implemented at the Exchange application layer in place of the storage layer with RAID.
These considerations have a direct impact on the capacity and performance required to design a
storage platform to support Exchange Server 2010. Exchange storage is typically discussed in terms
of Storage Area Network (SAN) or Direct Attached Storage (DAS) both of which use RAID. With
Exchange Server 2010, a new option is available that uses DAS in a RAID-less (JBOD)
configuration; a DAS configuration can be deployed either with RAID or RAID-less.
[Figure 3 diagram: twelve RAID-less JBOD LUNs (LUN01 through LUN12), each holding a database and its transaction logs]
Note
Smart Array controllers use RAID when creating volumes. To configure a
single spindle as a volume (JBOD) it must be configured as a one-disk array
with a RAID0 logical drive.
Microsoft requirements and recommendations
Designing storage for Exchange has always involved striking a balance between the capacity and
performance needed to support the expected load on the storage subsystems. This continues to be
true in Exchange Server 2010. In implementations with large mailboxes, capacity is the most likely
constraint and the use of larger but slower midline hard drives will drive the design.
The first step in determining storage environment requirements is the measurement of existing
messaging systems to determine a baseline for accurate planning. The analysis of actual production
data from an existing system is the best method to ensure a predictable result. Tools are available
from Microsoft and third-party vendors that make the process of collecting and analyzing data easier.
Microsoft Exchange Server Profile Analyzer –
http://technet.microsoft.com/en-us/exchange/bb330849.aspx
Quest Software MessageStats – http://www.quest.com/messagestats/
When looking at the performance required to support an Exchange storage configuration, several
factors are important to identify:
User workload (average message size, and messages sent and received per user per day)
Number of users
User concurrency (users active at a given time)
Third party applications: BlackBerry Enterprise Server and desktop search for online clients
Disaster Recovery (DR) protection strategy and number of database copies
Edge and Hub Transport queue storage and load to be processed daily
Message tracking and other types of transport logging
Use of anti-virus on Edge and Hub Transport or mailbox server
Performance of hard drives used in design including their type (SAS or SATA), speed and RAID
configuration
When looking at the capacity required to support an Exchange 2010 storage configuration, several
factors are important to identify:
Mailbox size
Average message size
Number of database copies
Backup method and space required
Number of days to tolerate backup or replication failure – logs continue to be stored until truncated
Use of continuous replication circular logging (CRCL)
Single item recovery and/or calendar version storage
Size of content index files
Edge and Hub Transport Queue and logging
Restore LUN size
Deleted items retention window
Personal archive size
Legal hold expectations
Microsoft recommends a set of best practices for the implementation of Exchange storage, and these
guide the design of the best solution for a particular configuration.
RAID levels and recoverability
With Exchange Server 2010, RAID is optional depending on the high availability model used. If the
DAG feature of Exchange Server 2010 is used and there are three or more copies of each database
available, the use of RAID-less JBOD is possible. RAID storage is still a viable option for Exchange
Server 2010 storage and does have advantages, such as the ability to use a hot spare which allows
the rebuilding of data in the event of a disk spindle failure, and the ability to spread I/O over more
disks. Without RAID, it is up to the Exchange application to fail over to another database copy in the
event of a disk problem.
Microsoft has updated guidance for using RAID with slower speed drives (5.4K or 7.2K RPM) that
recommends using JBOD, RAID1 or RAID10 only, to ensure that sufficient performance is provided to
Exchange during normal operations as well as during a drive failure or rebuild event. As with any
recommendation, the use of the proper RAID level is intended to protect storage and user availability
from impact due to unexpected failures. If choosing to use RAID5 on these slower drives it is important
to consider performance during normal operations, performance with a failed drive and during the
rebuild process, and the impact it could have on user perception of Exchange responsiveness.
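The reasoning behind this guidance can be illustrated with the conventional RAID write-penalty factors: two back-end writes per host write for RAID1/RAID10 mirroring, and four for RAID5's read-modify-write of data and parity. These are general rules of thumb, not figures from this document.

```python
# Conventional RAID write-penalty factors (general rules of thumb):
# mirroring writes twice; RAID5 reads and rewrites both data and parity.
WRITE_PENALTY = {"JBOD": 1, "RAID1": 2, "RAID10": 2, "RAID5": 4}

def backend_write_iops(host_write_iops, raid_level):
    """Disk-level write IOPS generated by a given host write load."""
    return host_write_iops * WRITE_PENALTY[raid_level]

print(backend_write_iops(100, "RAID5"))   # 400
print(backend_write_iops(100, "RAID10"))  # 200
```

The higher back-end load of RAID5 is why slower midline drives can struggle under it, particularly while a rebuild is also in progress.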
Another recommendation from Microsoft concerns the choice of stripe size for RAID sets. The new
recommendation is to set the stripe size to 256 KB to maximize performance for Exchange. Testing of
Exchange 2010, shown in Figure 4, demonstrates the increase in Average Disk Bytes per Read up to
256 KB during the time period when database maintenance is running. When database maintenance
finishes its run, the Average Disk Bytes per Read decreases. In Exchange 2010, database
maintenance is configured to run continually (24x7) by default, and while it runs a large amount of
sequential I/O is generated as the database is read.
Figure 4. Impact of database maintenance on database read I/O size
During normal Exchange user operations the Average Disk Bytes per Read stayed in the 32K range
as shown in the above figure.
Storage configuration
With the availability of larger drives, the proper configuration of storage is important to get the best
performance from Exchange. The following table outlines storage recommendations from Microsoft for
Exchange.
Table 1. Microsoft Storage Recommendations
RAID: Recommended for stand-alone servers that do not use a DAG and for some high availability configurations with fewer than three database copies; otherwise optional, depending on the high availability configuration used
Logical Disk Type: Basic recommended; Dynamic supported
Item Count: Up to 100,000 items per mailbox folder
Disk Alignment: 1 MB (Windows Server 2008 default)
NTFS Allocation Unit Size: 64 KB
BitLocker Encryption: Supported (database and log)
Partition Type: GPT recommended; MBR supported
RAID Stripe Size: 256 KB
Storage configuration is also driven by backup design. Microsoft makes available a new type of
backup called Exchange Native Data Protection that relies on a combination of Dumpster, database
copies and lagged copies for recoverability of data in the event of loss. Typical backup procedures
make use of the Volume Shadow Copy Service (VSS) framework and can be implemented with either
hardware or software providers. Working in conjunction with backup is the recovery volume which is
used to hold Exchange data during recovery operations.
Additional information on storage recommendations from Microsoft is available from:
http://technet.microsoft.com/en-us/library/ee832792.aspx.
Exchange 2010 storage calculators
Both HP and Microsoft provide calculators to assist with Exchange storage design. The sizing tools
provide a design resource allowing an engineer to input Exchange-specific information and calculate
a best fit solution using parameters such as client load, server and storage type, high availability and
backup solutions. Different types of designs can be generated to perform “What if” comparisons. It is
recommended to use both the HP and Microsoft tools and compare the results to generate the best
configuration.
The HP sizer is available from: http://www.hp.com/solutions/activeanswers. From the HP
ActiveAnswers page, click on the “Tools” link in the left pane and scroll down to the link “HP Sizer for
Microsoft Exchange Server 2010”. The Microsoft calculator is available from:
http://msexchangeteam.com/archive/2009/11/09/453117.aspx which includes instructions on
the use of the calculator and links to additional information pertaining to sizing.
Exchange high availability
With Exchange Server 2010, Microsoft built on the continuous replication feature introduced in
Exchange 2007 and further refined its capabilities. The continuous replication model is used for
Exchange 2010 and provides more granularity than was possible in Exchange 2007 while being
easier to implement. It is now possible to perform a failover or switchover (referred to as *over) on a
single database. The *over of a database to activate a passive copy takes approximately 30 seconds
and is transparent to the user. A Database Availability Group (DAG) in Exchange Server 2010 is a
collection of up to sixteen Exchange mailbox servers, each of which can host a mix of active,
passive and lagged database copies. In the event
that you need to deploy more databases and copies than can be held in a single DAG, additional
DAGs can be deployed. Each Exchange 2010 mailbox server can host up to 100 combined (active
plus passive) database copies.
Licensing and database copies
When deploying Exchange Database Availability Groups, the Enterprise version of Windows Server
2008 (SP2 or later) or Windows Server 2008 R2 is required since the DAG makes use of the failover
clustering feature. The choice between the Standard and Enterprise version of Exchange Server 2010
for the mailbox server that will participate in the DAG is driven by the number of database copies that
the server will host. If using five or fewer (active plus passive) databases per server, the Standard
version can be used. Active and passive copies are counted together when totaling the number of
databases that reside on a server. For deployments with more than five database copies, the
Enterprise version of Exchange 2010 must be used.
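The edition rule above reduces to a simple threshold; a sketch:

```python
# Edition rule described above: Standard supports up to five databases
# per server (active plus passive combined); more requires Enterprise.
def exchange_edition(database_copies_on_server):
    """Return the minimum Exchange 2010 edition for a mailbox server."""
    return "Standard" if database_copies_on_server <= 5 else "Enterprise"

print(exchange_edition(5))   # Standard
print(exchange_edition(12))  # Enterprise
```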
DAG implementation
The implementation of the DAG is a straight-forward process with Exchange managing the cluster
configuration for the servers participating in the DAG. The configuration of the quorum resources is
also managed by Exchange. The selection by Exchange of the proper quorum configuration is
dependent on the number of mailbox servers participating in the DAG. When an even number of
mailbox servers are members of the DAG, Exchange configures a File Share Witness (FSW) on an
available Hub Transport server, unless you specify a server for this use when creating the DAG. As
nodes are added or removed from the DAG, Exchange automatically switches between using the
FSW when there is an even number of mailbox servers and using the node-majority quorum model
when there are an odd number of servers in the DAG.
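The quorum selection rule can be expressed directly. Exchange applies this automatically as DAG membership changes, so the function below only mirrors the rule for illustration:

```python
# Quorum rule described above: an even number of DAG members uses a
# File Share Witness (node and file share majority); an odd number
# uses node majority. Exchange manages this automatically.
def dag_quorum_model(mailbox_server_count):
    """Return the quorum model Exchange selects for a DAG of this size."""
    if mailbox_server_count % 2 == 0:
        return "Node and File Share Majority (FSW)"
    return "Node Majority"

print(dag_quorum_model(4))  # Node and File Share Majority (FSW)
print(dag_quorum_model(5))  # Node Majority
```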
Database copies and sites
The implementation of the Exchange DAG high availability feature enables creation of additional
copies of each protected database. The number of copies required depends on specific requirements
and is typically based on service level agreements (SLAs). When designing storage for an environment
where Exchange DAGs are to be used, understanding the SLA and business requirements is essential
to ensure that an effective design is put into place.
When using three or more copies of a database, the use of a JBOD storage configuration is possible.
The extension of a DAG to include multiple sites also affects storage design. The best way to show this
is to describe a possible production datacenter design. A company has two physical datacenter
locations and each location is a different AD site. The company requests that the Exchange design is
capable of sustaining two server failures in the primary site and that in the event the primary location
is unavailable, the secondary site is able to mount and run the databases so that users are able to
access their e-mail. From this starting point the mailbox and storage design can be generated that
provides three copies of each database in the primary datacenter with one additional copy in the
secondary datacenter. Since the primary datacenter has three copies of the database, JBOD will be
used to provide the lowest cost storage. In the secondary datacenter, RAID will be used to ensure that
the data is available even if a single spindle is lost.
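The per-site storage decision in this example follows the three-copy rule described above; a sketch:

```python
# Per-site storage choice from the example above: JBOD requires three
# or more copies of a database within the site; otherwise use RAID.
def site_storage_choice(copies_in_site):
    """Return the storage model appropriate for a site's copy count."""
    return "JBOD" if copies_in_site >= 3 else "RAID"

print(site_storage_choice(3))  # JBOD (primary datacenter in the example)
print(site_storage_choice(1))  # RAID (secondary datacenter)
```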
Figure 5. Exchange Server 2010 DAG design in two sites
Once the above design is presented to the customer, they request that a lagged copy of the database
be added to the primary datacenter. The lagged database copy is added to the design in the primary
datacenter and is placed on RAID disks.
Figure 6. Exchange Server 2010 DAG design in two sites with Lag database copy
[Figures 5 and 6 diagrams: DAG01 spans AD Site 1 (MailboxServer 1-1 hosting the active database; MailboxServers 1-2 and 1-3 hosting passive copies) and AD Site 2 (MailboxServer 2-1 hosting a passive copy on a RAID1 set); Figure 6 adds MailboxServer 1-4 in AD Site 1 hosting the lag database copy on a RAID1 set]
Note
The use of RAID is recommended for any database copy that could
represent the only remaining or surviving copy including lagged and single
offsite copies.
Exchange 2010 RAID and JBOD
Exchange Server 2010 high availability features do require investigation into the impact that
maintaining additional database copies has on the overall design. Microsoft recommends keeping
Exchange on RAID-based storage when fewer than three database copies are used. The
recommendation to use RAID1 (or RAID10) with larger capacity, lower performance hard drives
protects against the performance issues RAID5 can exhibit, especially during a rebuild to recover
from a spindle failure. The RAID1 design uses four disks to hold the database volumes, as shown
in figure 7.
Figure 7. Database copies when using RAID1
When three or more database copies are used, JBOD storage becomes an option. By adding a
third copy and using JBOD, it is possible to eliminate one of the four disks used for the RAID1 mirror
copies of the database. This configuration reduces the number of disks used for storage and adds a
mailbox server, as shown in figure 8.
Figure 8. Database copies when using JBOD
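The spindle arithmetic behind figures 7 and 8 can be sketched in a few lines. This is an illustrative model only, not an HP sizing tool: it assumes each RAID1 copy occupies a two-disk mirror and each JBOD copy occupies one dedicated spindle.

```python
def disks_required(copies: int, layout: str) -> int:
    """Spindles needed to hold the given number of copies of one database.

    RAID1 mirrors each copy across two disks; JBOD places each copy on a
    single dedicated spindle (simplified, illustrative model).
    """
    if layout == "raid1":
        return copies * 2
    if layout == "jbod":
        return copies
    raise ValueError("layout must be 'raid1' or 'jbod'")

# Two copies on RAID1 (figure 7) vs. three copies on JBOD (figure 8)
print(disks_required(2, "raid1"))  # 4 spindles
print(disks_required(3, "jbod"))   # 3 spindles
```

The comparison shows why the third copy pays for itself: one more database copy, one fewer spindle overall.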
Exchange high availability and database paths
When using the Exchange DAG feature, it is necessary to ensure that the filesystem path to the
database and logs on each server is consistent across all DAG members. This requires planning to
ensure that the directory path structure and naming convention provides a flexible structure and
provides for future growth. Remember that databases have unique names, are stored at the ORG level
and can be moved to any server in the ORG. The inclusion of a site or server name in the naming
convention would tie the database to a particular server or site and if the database was moved to
another server or site the naming convention would give inaccurate information. Databases can be
moved into or out of DAGs which means that the inclusion of DAG information in the naming
convention would have similar results as the use of server or site. Since each of the copies of a
particular database could become active at any time, it is best to leave the 'active' or 'passive'
designation out of the naming convention. A simple naming convention which denotes the database
name only (DB001, DB002 and so on) works well.
The ability to use up to 100 databases per server requires the use of mount points. When using mount
points, use something similar to ExchMPR for the empty mount point root on the C: drive. Then use a
standardized naming scheme which denotes the database identity (DB001). The use of a naming
scheme makes the paths to each instance of a database consistent on all servers that have copies of
that particular database. The next database would be C:\ExchMPR\DB002 and so on. Since the
naming scheme is consistent on all servers, the creation of the initial C:\ExchMPR empty mount point
root folder can be standardized across all members of a DAG. A sample of two mount point layouts
is shown in figure 9 and includes the two LUNs per database and one LUN per database models.
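The convention described above can be generated mechanically so every DAG member uses identical paths. The helper below is hypothetical (not an Exchange tool) and simply builds the standardized mount point strings for the one-LUN and two-LUN models:

```python
def database_paths(db_number: int, luns_per_db: int = 1,
                   root: str = "C:\\ExchMPR") -> dict:
    """Build the standardized mount point path(s) for one database.

    Follows the DB001, DB002, ... convention; the two-LUN model adds a
    separate -Log mount point under the same empty mount point root.
    """
    name = f"DB{db_number:03d}"
    paths = {"database": f"{root}\\{name}"}
    if luns_per_db == 2:
        paths["logs"] = f"{root}\\{name}-Log"
    return paths

print(database_paths(1))                 # one-LUN model: database path only
print(database_paths(2, luns_per_db=2))  # two-LUN model: adds DB002-Log
```

Because the function is deterministic, running it on every server that hosts a copy yields the consistent paths the DAG requires.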
Figure 9. Exchange 2010 mount points
Consistent naming across all storage objects – including Exchange database, filesystem folder and
volume – helps to make sure that each object is in its intended location and makes troubleshooting
easier.
Before implementing a copy on a secondary server, ensure that the appropriate matching mount point
has been assigned for the database being replicated. If using the single LUN per database model,
each disk spindle that is configured as a mount point will contain the transaction logs as well.
When using the two LUNs per database or backup set model, ensure that the relative paths are the
same before trying to add the database copy; with a two LUN model, both paths need to be created
first.
Exchange with JBOD and BitLocker
The use of Exchange high availability in combination with a JBOD storage configuration presents a
unique situation: the database and transaction logs may be stored on a single disk, and since RAID
is not used to stripe the data across multiple spindles, a complete set of data is present on one
disk. In this situation, drive encryption such as BitLocker ensures that, in the event a drive is
removed from the server, only encrypted data is present on that disk. Microsoft supports the use of
BitLocker for Exchange 2010 data, and HP ProLiant G6 (and later) servers can be ordered with a
Trusted Platform Module (TPM), which BitLocker in Windows Server 2008 can use to store
keys for encrypted volumes. Details on the configuration of the TPM module and the implementation of
BitLocker are available from:
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01681891/c01681891.pdf.
Note
When using BitLocker, it is recommended to back up the encryption keys to
a USB drive or file and keep them in a safe place. BitLocker encryption
keys can also be stored in Active Directory.
Exchange Native Data Protection
The implementation of DAGs makes available the use of the Exchange 2010 Native Data Protection
feature. Exchange Native Data Protection relies on having multiple copies of each database available
in the event that recovery is required. With Exchange 2010 database sizes of up to 2 TB, the ability
to use another copy of the database that is up to date decreases the amount of time needed to
recover from a failure.
A common recovery task that administrators perform is the recovery of items that a user has
accidentally deleted. In Exchange Server 2010, features such as calendar versioning and single item
recovery make this a straightforward process that can be accomplished without a typical database
recovery using a backup application.
Another situation where recovery work was needed in the past is fixing a database that has suffered
page-level corruption. One sign that this corruption has occurred is a -1018 error in the
event logs, which signals that Exchange has identified a corrupt page. Exchange 2010
automatically places a request into the log stream asking that a copy of the page from another replica
of the database be sent to patch the bad page that caused the error. Similarly, Exchange 2010
can recover from a lost flush, where a page is missed on the write to disk; the page
is requested from a replica and placed into the active database copy.
A different type of recovery requires the restoration of an Exchange database to a specific point in
time. When using the DAG mechanism, database copies are kept as up to date as possible through
the use of continuous replication. Exchange 2010 makes it possible to create a lagged copy of any
active database in the DAG and allows for a configurable delay of up to 14 days before transaction
logs are played into the lagged database. This feature gives the administrator the ability to use the
lagged copy and play the transaction logs up to a specified point and then use the database without
the remainder of the logs being applied to it.
The use of the DAG high availability feature impacts storage requirements for Exchange Server 2010
since each copy of the database and its associated logs need to be maintained. When using lagged
database copies, additional space is required to store the transaction logs until they are committed to
the database. With lagged copies, Microsoft recommends that RAID-based storage be used so that
the database and logs are always available in the event that a single spindle is lost.
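The extra space a lagged copy needs can be estimated from the daily log generation rate. The sketch below assumes Exchange 2010's 1 MB transaction log file size; the per-database logs-per-day figure is a hypothetical input you would take from sizing or monitoring data:

```python
def lagged_log_space_gb(logs_per_day: int, lag_days: int,
                        log_file_mb: int = 1) -> float:
    """Rough extra space (GB) a lagged copy needs to queue uncommitted logs.

    Exchange 2010 transaction log files are 1 MB each; logs_per_day is an
    estimate from sizing or monitoring data.
    """
    return logs_per_day * lag_days * log_file_mb / 1024.0

# Example: 5,000 logs/day held for the maximum 14-day lag
print(round(lagged_log_space_gb(5000, 14), 1))  # 68.4 (GB)
```

This is a lower bound for the log LUN on the lagged copy; real designs add headroom for growth and maintenance.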
Drive technologies
When Microsoft wanted to increase the amount of data stored per mailbox while keeping the cost as
low as possible, they chose to re-write Exchange to take advantage of slower SATA drive technology.
SATA performs quite well when a sequential I/O workload is used. Does this mean that you should
always choose slower and larger midline SATA drives? No, the choice of the appropriate disk drive
technology is based on several factors such as:
Is your storage performance bound?
Is your storage capacity bound?
Are you adding to an existing solution (SAN or DAS) or adding new storage technology?
What mailbox size is appropriate for your Exchange deployment?
Are you using third-party applications that have a significant impact on performance?
Are you frequently using the legal hold feature of Exchange?
Changes in hard drive technology have raised many questions, especially when considering
storage options for Exchange Server 2010, which makes effective use of the latest drive
technology. The proper storage design depends on choosing the right tool for the job
that you are asking Exchange to perform. Below are answers to common questions about deploying
Exchange Server 2010 on new drive technology.
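The first two questions above can be expressed as a simple decision helper. This is purely illustrative: a real design weighs all of the listed factors, including existing SAN/DAS investment, mailbox size and third-party load, and the tier names are shorthand for the drive classes discussed below.

```python
def suggest_drive_tier(performance_bound: bool, capacity_bound: bool) -> str:
    """Map the performance/capacity questions to a drive class (sketch only)."""
    if performance_bound and capacity_bound:
        return "more spindles, or mixed tiers"
    if performance_bound:
        return "enterprise SAS (ENT)"
    if capacity_bound:
        return "midline SAS/SATA (MDL)"
    return "either tier; decide on cost"

# A capacity-bound, large-mailbox design points at midline drives
print(suggest_drive_tier(performance_bound=False, capacity_bound=True))
```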
What is the difference between types of midline drives?
The main difference between midline SAS and SATA drives is the electronic interface package used to
connect the drives with the drive shelf and Smart Array controller. There is a minimal cost difference
between the two types of drives and the midline SAS drives support dual port/dual domain
configuration and a faster transfer rate.
Table 2. SAS and SATA 2 TB Midline drives
2 TB SAS Midline 2 TB SATA Midline
Capacity 2,000,398 MB 2,000,398 MB
Size 3.5 inch 3.5 inch
Rotational Speed 7,200 RPM 7,200 RPM
Interface SAS Serial ATA
Transfer Rate 6 Gb per second 3 Gb per second
Seek Time: Average 8.5 ms 8.2 ms
Seek Time: Full Stroke 16.2 ms 16.9 ms
Bytes per Sector 512 512
Dual Port Drive Yes No
Warranty 1 Year 1 Year
Figure 10. 2 TB SAS and SATA Midline drives
When considering the use of drives for Exchange storage, the use of higher capacity but slower
drives is appropriate because the application has been designed to perform well with this drive
technology.
While Microsoft is enabling organizations to deploy very large mailboxes at a reasonable cost
through the use of large capacity, slower drives, not every organization will require or benefit from
using drives in the same manner.
What is the difference between MDL, ENT and ETY or SFF and LFF drives?
HP classifies drives as Enterprise (ENT), Midline (MDL) or Entry (ETY) and the designation does not
refer to SAS and SATA. The designation of ENT, MDL or ETY refers to the expected use of the drives
within a storage infrastructure. Enterprise drives are designed to be used to provide the highest
performance available while midline and entry drives offer higher capacity with lower performance.
The warranty on entry and midline drives is one year while enterprise drives have a three year
warranty.
When choosing a drive to fit into a server or external drive shelf, drives are classified as Small Form
Factor (SFF), which is a 2.5-inch drive, or Large Form Factor (LFF), which is a 3.5-inch drive. Both
SFF and LFF drive sizes are available in enterprise (ENT), midline (MDL) and entry (ETY) classes.
Additional information on HP SAS hard drives is available from:
http://h18004.www1.hp.com/products/quickspecs/12244_na/12244_na.pdf and for HP SATA
drives: http://h18004.www1.hp.com/products/quickspecs/13021_na/13021_na.pdf.
Are midline SAS and SATA drives suitable for use in the enterprise?
For many years, SATA drives have been available for use in workstation and datacenter environments
but a concern with using them in the datacenter has been their perceived failure rate. Drive failure
rates are expressed as an Annual Failure Rate (AFR). The AFR is the “probable percent of failures based
on a manufacturer's total number of installed units of a similar type”1. It is important to note that the
AFR is not stated for individual drives.
1 Seagate: “Hard disk drive reliability and MTBF / AFR [174791]”
http://seagate.custkb.com/seagate/crm/selfservice/search.jsp?DocId=174791&NewLang=en
The usage of midline drives is changing with the increase in reliability of newer drives and with the
input of studies performed on drive populations containing a large number of disks. One of the largest
studies on disk drive failure rates in a production environment has been performed by Google™ and
is available from:
http://static.googleusercontent.com/external_content/untrusted_dlcp/labs.google.com/en/us/paper
s/disk_failures.pdf. The Google study concluded that SATA drives are suitable for use in enterprise
environments and have an AFR ranging from 1.7% in the first year to 8.6% in the third year.
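Because the AFR describes a population rather than an individual drive, it translates directly into an expected failure count for a fleet. A small sketch using the study's year-3 figure and a hypothetical 420-drive deployment:

```python
def expected_failures(drive_count: int, afr_percent: float) -> float:
    """Expected drive failures per year for a fleet with the given AFR.

    AFR applies to a population of similar drives, not to any single
    unit, as the Seagate definition cited above notes.
    """
    return drive_count * afr_percent / 100.0

# A 420-drive deployment at the study's year-3 AFR of 8.6%
print(round(expected_failures(420, 8.6), 1))  # 36.1 drives/year
```

Numbers like this feed directly into spares planning: with multiple database copies, replacing a few drives a month does not interrupt user access.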
When designing storage for Exchange 2010, midline drives that provide high capacity can
be a cost-effective tool for implementing large mailboxes. Exchange Server 2010
has been designed to take advantage of larger drives, and when implemented with multiple database
copies, user availability is maintained even while a drive is being replaced.
Can I use SSD drives for Exchange storage?
Solid State Drives (SSD) are available and supported with Exchange 2010. SSDs
provide extremely high performance but lower storage capacity. SSD cost per GB is high and
typically does not provide a cost-effective solution for Exchange 2010 storage.
How do Fibre Channel and FATA drives fit into the drive picture?
Fibre Channel drives (Enterprise) and Fibre Attached Technology Adapted (FATA) drives (Midline) use
a Fibre Channel based interface to connect the drive to the controller. Fibre Channel based drives
typically have similar capacities and performance as SAS or SATA drives but are placed into storage
arrays and work together with advanced features like redundant controllers, multi-path I/O and
storage management offered by the array controller to provide shared storage to servers using Fibre
Channel host bus adapters (HBAs).
What is the difference between dual port and Dual-Domain?
Dual port refers to a hard drive's capability to support a secondary path to the drive for data. Dual-
Domain requires dual port drives and adds the ability to use a secondary channel for
data communication through a separate physical path to the drive. Not all array controllers, drive
shelves and drives support the use of dual domain for multiple data paths and it is important to ensure
that all components in the data path from the server to the drive support this feature if it will be used.
Smart Array controllers like the P700m and the HP StorageWorks 600 Modular Disk System
(MDS600) offer support for the Dual-Domain feature. Before placing a dual domain configuration into
production, test it to ensure that it has been properly
implemented and documented. Additional information on configuring redundancy in dual domain
SAS configurations is available from:
http://bizsupport2.austin.hp.com/bc/docs/support/SupportManual/c01451157/c01451157.pdf
Where can I find more information on HP disk drives?
The “Drive technology overview” discusses the drive technology used in HP hard drive products and is
available from:
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01071496/c01071496.pdf
Storage controller technologies
When choosing storage systems, storage controllers are used to connect the hard drives to the server
and to provide additional features such as cache, RAID and online spare disks. Controller features
vary depending on the storage technology used for a given solution. The common types of storage
controllers for use in ProLiant servers are:
Smart Array controllers – to interface using SAS protocols with drives internal to the server or with
external drive shelves containing drives.
Fibre Channel host bus adapters (HBA) – to interface with Storage Area Networks (SAN) using
Fibre Channel.
HP iSCSI storage technology – to interface with iSCSI-based storage.
Smart Array controllers
The Smart Array controller family provides storage interface options for ProLiant servers. Both PCI-
Express (PCIe) and BladeSystem mezzanine cards are available with a full range of features and
cache sizes. One of the new features implemented in recent Smart Array controllers is flash-backed
write cache (FBWC) with a super-capacitor power supply that allows indefinite data retention
compared to two days for battery-backed write cache. The capacitors also charge faster (minutes versus
hours) than batteries, which is an advantage because the controller disables write caching while charging
is underway.
Figure 11. Smart Array Controllers (left to right: P411 with FBWC and SuperCap, P212 and P700m)
In comparing the features provided by the Smart Array controllers, it is important to select a controller
that will provide the required capabilities. Each Smart Array controller supports a stated number of
physical drives as well as a stated number of logical drives. When implementing a JBOD
configuration where every individual spindle will become a logical drive, choose a Smart Array
controller that has sufficient logical drive capacity. For example, the P812 Smart Array controller
supports 108 physical drives and 64 logical drives. For additional information on the use and
specifications of Smart Array controllers see:
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c01608507/c01608507.pdf.
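In a JBOD layout every spindle becomes its own logical drive, so the controller's logical drive limit, not just its physical drive limit, caps the design. The check below uses the P812 figures quoted above (108 physical, 64 logical) as example inputs:

```python
def jbod_fits_controller(spindles: int, max_physical: int,
                         max_logical: int) -> bool:
    """Check a JBOD layout against a Smart Array controller's limits.

    Each JBOD spindle consumes one physical AND one logical drive slot.
    """
    return spindles <= max_physical and spindles <= max_logical

# P812 limits quoted above: 108 physical drives, 64 logical drives
print(jbod_fits_controller(64, 108, 64))  # True
print(jbod_fits_controller(70, 108, 64))  # False: logical drive limit exceeded
```

The second case shows a design that fits physically but not logically, which is exactly the trap the paragraph above warns about.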
When using HP BladeSystem ProLiant servers with the P700m array controller, the HP StorageWorks 3
Gb SAS BL Switch is required. The SAS switch is installed in the BladeSystem enclosure and provides
a zoned or shared SAS solution which connects with external storage shelves like the MDS600 and
D2000.
Fibre Channel host bus adapters
The Fibre Channel host bus adapter (HBA) family provides Fibre Channel storage interface options for
ProLiant servers. Both PCI-Express (PCIe) and BladeSystem mezzanine cards are available. HP
provides HBA cards from multiple vendors allowing customers the choice of features to meet their
needs. The cards are available in single and dual channel models for use with HP's Fibre Channel
based array solutions.
Figure 12. HP StorageWorks PCIe 8Gb Host Bus Adapters group
SAN-based Exchange storage solutions provide the option to utilize Multi-path I/O (MPIO) which
enables multiple data paths to storage arrays preventing a single point of failure. Device specific
modules (DSM) are available to download for the various StorageWorks arrays. MPIO works in
combination with either dual port HBAs, or multiple single port HBAs in a server.
When using BladeSystem ProLiant servers with Fibre Channel HBA mezzanine cards, a SAN switch
like the Brocade 8 Gb SAN Switch for HP BladeSystem is required. The SAN switch or switches are
installed in the BladeSystem enclosure and provide a SAN solution which connects with external
storage arrays like the HP StorageWorks Enterprise Virtual Array (EVA) or XP products.
HP iSCSI storage technology
HP iSCSI storage technology can be used with ProLiant servers and multi-function network interface
controllers (NICs) to enable access to iSCSI-based StorageWorks products with Microsoft Windows
2003 and 2008 operating systems. HP multi-function NICs like the NC382i, NC382T, NC380T,
NC373i, NC373F, NC373T, NC371i, NC370F and the NC373T support advanced iSCSI
technology in ProLiant DL and ML class servers. For ProLiant blade servers the QLogic QMH4062
iSCSI adapter provides advanced iSCSI initiator technology in a mezzanine card form factor. Other
mezzanine adapters that support advanced iSCSI for ProLiant blade servers include: NC382m,
NC374m, NC373m and NC532m. When choosing a multi-function adapter it is important to review
the iSCSI-related features required for implementation like iSCSI boot and choose an adapter that
supports these features.
HP provides options to optimize iSCSI solutions including the Accelerated iSCSI Pack for Embedded
Multifunction Server Adapters which enables the ability to offload processing to the NIC, reducing
CPU utilization. The Accelerated iSCSI Pack supports iSCSI storage targets using Microsoft iSCSI
Software Initiator version 2.0.
Additional information on HP ProLiant Accelerated iSCSI for Windows is available from:
http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c00577553/c00577553.pdf
Note
Always use the latest production firmware and drivers for Smart Array
controllers, host bus adapters and network multi-function adapters to ensure
that features are available and any patches to fix issues are in place.
HP storage platforms
Maintaining the proper measures of capacity and performance are critical to a well-designed storage
infrastructure for Exchange Server. Most Exchange environments started with direct-attached storage
(DAS) and over time moved to storage area networks (SAN); now the shift is back toward DAS,
with some customers using JBOD and lower cost, large capacity disks. One of the
reasons that SANs were used was to allow the pooling of disks into a shared infrastructure that can
be presented to any server needing a particular resource. The previous use of DAS with Exchange
servers meant that expensive enterprise class disks were locked into the server they were attached
to, and as a result, DAS solutions created islands of storage. At that point in Exchange Server's
lifetime, Exchange was designed to effectively use the available disks and the performance of the
disks was the same whether they were deployed on SAN or DAS. SAN solutions were used to deploy
highly available Exchange Server implementations, including Windows clusters supporting
Exchange Server. The use of Database Availability Groups (DAGs) with Exchange 2010
allows the implementation of Exchange high availability while using DAS.
We now step forward to Exchange Server 2010, which is designed specifically to make the most of
larger, slower disks to allow large user mailboxes. Due to the shift in design, it is important to
understand what new options are offered by the changes Microsoft made and not design for previous
versions of Exchange.
Exchange Server 2010 storage
When using ProLiant servers, many options exist for extending storage capacities. These storage
options work for stand-alone servers (DL or ML) or for blade servers (BL) providing an integrated
solution. It is important to identify the features required in a particular storage situation and select the
combination of server, controller card and storage that work together to make the required features
available.
The following disk-based Exchange storage options are available:
StorageWorks XP Disk Array
StorageWorks EVA Disk Array
StorageWorks P4000 G2 SAN Storage
StorageWorks D2000 Disk Enclosure
StorageWorks MDS600 Modular Disk System
StorageWorks P2000 G3 MSA Array System
ProLiant Internal Server Storage
StorageWorks SB40c Storage Blade
StorageWorks XP Disk Array
The StorageWorks XP disk array is a large enterprise class storage system providing a fully redundant
platform with data replication capabilities. The XP array supports up to 1,152 drives, including SSD
and Fibre Channel drive types in sizes up to 2 TB. Constant uptime is possible for
situations that require the highest availability.
Figure 13. StorageWorks XP Disk Array
StorageWorks EVA Disk Array
The StorageWorks EVA disk array line consists of the EVA8400, EVA6400 and EVA4400 models.
Each model in the range fills a different position in regard to the capabilities offered, allowing the
choice of the array to fit specific performance and capacity requirements.
The EVA8400 allows the use of up to 324 drives with either 14 GB or 22 GB of controller cache.
The EVA6400 allows the use of up to 216 drives with 8 GB of cache.
The EVA4400 allows the use of up to 96 drives with 4 GB of cache.
Drive sizes up to 1 TB are available for use in the EVA disk arrays and additional features can be
added to the EVA arrays by the purchase of licenses.
The StorageWorks EVA disk array line supports both Fibre Channel and iSCSI connectivity options.
Figure 14. StorageWorks EVA8400 Disk Array, front and back
StorageWorks P4000 G2 SAN storage
The StorageWorks P4000 G2 SAN Storage system is an iSCSI-based SAN built on LeftHand
technology providing solutions based on either performance or capacity with a selection of
redundancy options. When choosing a P4000 G2 solution, note that all features are
included with the initial purchase. The P4000 G2 storage system is implemented in increments called
nodes each of which is a twelve (12) drive shelf (P4500) or an eight (8) drive shelf (P4300) with
associated controllers. The nodes are combined together to form the storage system and the number
of nodes required is based on the performance and capacity required for a particular
implementation. If using a 10 Gb Ethernet infrastructure, a module is available that provides CX4
connections to 10 Gb Ethernet switches.
Figure 15. StorageWorks P4000 G2 SAN Storage node (left to right: P4500 G2 and P4300 G2)
StorageWorks D2000 disk enclosure
The StorageWorks D2000 disk enclosure is a direct attached solution that uses 6G technology to
provide a 6 Gb data path from the server to the drives. Two models of D2000 enclosures are
available: the D2600 holds up to twelve (12) large form factor (LFF) drives while the D2700 holds up
to 25 small form factor (SFF) drives. Both enclosures provide 6 Gb SAS connectivity and are used
with Smart Array controllers. Drive sizes up to 2 TB LFF are available for the D2600, and 500 GB
SFF for the D2700. Both SAS and SATA disks are available for use with these enclosures.
The StorageWorks D2000 disk enclosures support direct-attach connectivity using HP Smart Array
controllers.
Figure 16. StorageWorks D2600 Drive Shelf – LFF Drives
StorageWorks 600 Modular Disk System (MDS600)
The StorageWorks 600 Modular Disk System (MDS600) provides high density storage allowing up to
70 large form factor (LFF) hot-pluggable SAS or SATA drives to be implemented within 5U of rack
space. The MDS600 works with both stand-alone and blade ProLiant servers but has additional features
when used with blade servers. The MDS600 contains two drawers with 35 drives each and 3 Gb
SAS connectivity. Up to six MDS600 systems can be connected to a BladeSystem enclosure which
provides 420 drives in 30 U of rack space allowing for 840 TB of data storage when using 2 TB
drives. Various cabling options are available to support features like Single-Domain or Dual-Domain
and wide-port configuration which provides an x8 SAS link to each drawer. Additional deployment
and cabling details are documented in the HP Direct-Connect External SAS Storage for HP
BladeSystem Solutions Deployment Guide available from:
http://bizsupport1.austin.hp.com/bc/docs/support/SupportManual/c01956983/c01956983.pdf
The StorageWorks 600 Modular Disk System (MDS600) supports direct-attach connectivity using HP
Smart Array controllers; when used in a BladeSystem ProLiant server the StorageWorks 3Gb SAS BL
Switch is also required.
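The capacity figures above (420 drives, 840 TB) follow directly from the per-enclosure drive count. A quick sketch of the arithmetic:

```python
def mds600_capacity_tb(enclosures: int, drive_tb: int = 2,
                       drives_per_enclosure: int = 70) -> int:
    """Raw capacity of a stack of MDS600 enclosures (70 LFF drives each)."""
    return enclosures * drives_per_enclosure * drive_tb

# Six MDS600 systems behind one BladeSystem enclosure with 2 TB drives
print(mds600_capacity_tb(6))  # 840 (TB raw, across 420 drives)
```

This is raw capacity; usable capacity drops once RAID overhead, spares and formatting are applied.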
Figure 17. StorageWorks MDS600 Modular Disk System
StorageWorks P2000 G3 MSA Array System
The StorageWorks P2000 G3 MSA array system is an iSCSI or Fibre Channel attached solution. The
P2000 contains either a single or dual controller. Two models of P2000 G3 array systems are
available: one holds up to twelve (12) large form factor (LFF) drives while the other holds up to 24
small form factor (SFF) drives. Drive sizes up to 2 TB are available for the LFF array and 500 GB for
the SFF array. Both SAS and SATA disks are available for use with these arrays. Expansion of the LFF
array is accomplished through the addition of up to seven P2000 drive enclosures for a total of 96
LFF drive bays. Expansion of the SFF array is accomplished by the addition of up to five D2700 drive
enclosures for a total of 149 SFF drive bays.
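The expansion totals above can be verified with the same head-plus-shelves arithmetic:

```python
def total_drive_bays(base_bays: int, expansion_shelves: int,
                     bays_per_shelf: int) -> int:
    """Drive bays in a head array plus its expansion enclosures."""
    return base_bays + expansion_shelves * bays_per_shelf

# LFF P2000 G3: 12 bays plus seven 12-bay P2000 drive enclosures
print(total_drive_bays(12, 7, 12))   # 96
# SFF P2000 G3: 24 bays plus five 25-bay D2700 enclosures
print(total_drive_bays(24, 5, 25))   # 149
```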
Figure 18. StorageWorks P2000 G3 MSA Array System – SFF drives
ProLiant internal server storage
ProLiant servers like the DL370 G6 can support up to twenty-four 500 GB SFF or fourteen 2 TB
LFF SAS or SATA drives. Smart Array controllers can be used to support the internal drives and offer
additional features.
ProLiant server internal storage configurations support direct-attach connectivity using HP Smart Array
controllers.
Figure 19. ProLiant DL370 G6 server
StorageWorks SB40c storage blade
BladeSystem ProLiant servers also have the capability to add direct attached storage drives by using a
StorageWorks SB40c storage blade. This blade can support up to six 500 GB SFF
SAS or SATA drives. An onboard P400 Smart Array controller with 256 MB battery-backed write
cache connects to the adjacent server using a PCI Express connection in the enclosure backplane.
Figure 20. StorageWorks SB40c storage blade
Other considerations
When designing any technology solution, it is important to consider the needs and requirements that
have already been defined in this paper. With Exchange Server 2010, most needs can be met in a
straightforward manner using features included within Exchange. The area that will provide the
largest portion of the framework for your design is that of service level agreements (SLA). SLAs
describe the agreement between the technology provider and the business group relating to what the
services contain and how they will be recovered in the event of a problem. Recoverability for
Exchange has always existed but the options have been expanded with Exchange 2010.
Tiered recovery
Exchange Server 2010 provides the ability to have an effective tiered recovery strategy through the
use of features like single item recovery, calendar versioning, Database Availability Groups, multiple
database copies, lagged copies and legal hold. Each of these features does have an effect on
storage. Both the quantity and quality of storage are affected by these design decisions. When using
the Exchange Database Availability Group feature, the database copies that will be used during a
failover or switchover (*over) should be placed on storage designed for production use. This is
important because during a *over the same level of service needs to be maintained for end users as
they access their Exchange mailboxes.
Other performance and capacity demands
Exchange provides a single point of access to many forms of electronic communication, and
through BlackBerry, Windows Mobile® and other mobile messaging devices, the Exchange
infrastructure makes it possible for users to access their electronic lives from anywhere at any
time. Each of these capabilities can increase the performance requirements for
storage. Exchange is not limited to storing only e-mail messages; voicemail, added through the
Unified Messaging features, further increases the need for a reliable and stable storage design.
Some desktop search engines place a higher load on Exchange storage performance as they access
and index information for fast retrieval when the user searches for items.
Skill level and training
The skill level of the administrators who support Exchange should be taken into consideration when
designing storage systems. Ensuring that the engineers have the skill set necessary to support the
chosen technology is a critical factor in the ability to effectively manage and troubleshoot storage on
a daily basis and when problems happen. If the support team is comfortable working with the
selected storage technology then the implementation and ongoing support will be a much more
positive experience.
Depending on organizational structure, Exchange storage may be supported by the Exchange
support team, the SAN support team, the server support team or another group. When designing
Exchange storage it is important to account for this, to ensure that the proper amount and type of
storage can be provided and supported. If a customer has a large, stable SAN implementation in
place and wishes to continue using it for Exchange, SAN storage can provide a good solution with
support from experienced staff members.
Validation and monitoring of storage
When implementing a storage solution for Exchange, an easily overlooked step is evaluating the
storage after it has been put in place to establish a baseline. Microsoft provides tools for this
testing: Jetstress and LoadGen, both available for Exchange 2010, can be used to test the storage or
Exchange overall and establish a baseline for future comparison.
Jetstress is available from:
http://www.microsoft.com/downloads/details.aspx?displaylang=en&FamilyID=13267027-8120-48ed-931b-29eb0aa52aa6
LoadGen is available from:
http://www.microsoft.com/downloads/details.aspx?familyid=CF464BE7-7E52-48CD-B852-CCFC915B29EF&displaylang=en
The Jetstress tool stresses the storage layer of Exchange and reports the "maximum" performance
the storage is capable of delivering. This is useful when troubleshooting issues later: because the
storage maximum is a known value, current performance can be compared against it to determine
whether that maximum is being approached.
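This comparison against a Jetstress-derived ceiling can be sketched as a simple headroom check. The function name, the 80% warning threshold and all IOPS figures below are illustrative assumptions, not values taken from any Jetstress run:

```python
# Hypothetical comparison of current load against a Jetstress-established
# IOPS maximum. All numbers here are made up for illustration.

def headroom_report(baseline_iops, current_iops, warn_pct=80.0):
    """Return the percentage of the tested IOPS ceiling currently in use,
    and whether that utilization crosses the warning threshold."""
    utilization = 100.0 * current_iops / baseline_iops
    return utilization, utilization >= warn_pct

# Example: storage tested to 5,000 IOPS, currently serving 4,200 IOPS.
used, warn = headroom_report(baseline_iops=5000, current_iops=4200)
print(f"Storage at {used:.0f}% of tested maximum; warning: {warn}")
```

In practice the current figure would come from performance counters collected on the Mailbox server; the point is that a baseline turns a raw counter value into an answerable question about remaining headroom.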
The LoadGen tool places a "user-like" load on the Exchange infrastructure, exercising the Client
Access servers, Hub Transport servers and Mailbox servers. LoadGen is installed and used with a
non-production Active Directory implementation and can test the servers with antivirus or other
third-party applications installed.
Once storage is in place and ready for production use, it is important to monitor and maintain it to
ensure stability. Many tools are available for this work, and information from the testing phases of
the implementation helps in setting the thresholds that signal when a problem is occurring.
Performance monitoring of storage spans a range of levels; a good starting point is to determine
whether the Exchange application's view of storage is healthy from a logical perspective. Below that
level, Windows can report logical and physical performance for storage devices, and additional
monitoring can be performed at the storage controller or even the physical spindle layer. When
using a SAN array, additional levels of information may be available to add more detail to the
overall health of storage.
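A minimal sketch of such a threshold check follows. The 20 ms average read-latency limit is an assumption drawn from commonly cited Exchange 2010 guidance, and the sampled counter values are invented; substitute figures from your own baseline testing:

```python
# Illustrative latency threshold check for sampled storage counters.
# Values are in milliseconds; the limit and samples are hypothetical.

def check_latency(samples_ms, limit_ms):
    """Average the sampled latencies and flag a breach of the limit."""
    avg = sum(samples_ms) / len(samples_ms)
    return avg, avg > limit_ms

# Hypothetical "Avg. Disk sec/Read" samples converted to milliseconds.
db_reads = [12.0, 18.0, 25.0, 15.0]
avg, breached = check_latency(db_reads, limit_ms=20.0)
print(f"Average read latency {avg:.1f} ms, threshold breached: {breached}")
```

The same pattern applies at each monitoring level the text describes: the logical view, the Windows disk counters and the controller or spindle statistics differ only in where the samples come from, not in how the comparison is made.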
Summary
With Exchange Server 2010, storage is critical to the successful implementation of a messaging
infrastructure. Microsoft has made significant changes to Exchange's use of storage in Exchange
2010, which affects the design process. The range of HP storage options and features makes it
possible to design a solution that meets a wide variety of needs. HP ProLiant and StorageWorks
solutions provide a platform for implementing a SAN, DAS or JBOD solution supported by a single
vendor.
The ability to offer larger user mailboxes, gain control of PST files and effectively manage storage is
enhanced both by the innovations in Exchange Server 2010 and by the technology provided by HP.
With the significant changes implemented in Exchange Server 2010, a large number of storage
options are available. Identifying implementation-specific design drivers helps narrow the options
for each Exchange 2010 design. Storage designs can be analyzed with tools available from
Microsoft and HP to compare features and costs.
This document presents design drivers and possible storage platforms for use during the design
process. The Exchange Server 2010 storage information outlined in this document covers a range of
options available for the engineer to use and additional information is available in links throughout
the document as well as in the “For more information” section that follows.
For more information
HP ActiveAnswers for Microsoft Exchange Server, www.hp.com/solutions/activeanswers/exchange
HP ActiveAnswers sizers, www.hp.com/go/activeanswers/sizers
HP ProLiant servers, www.hp.com/go/proliant
HP BladeSystem, www.hp.com/go/bladesystem
HP storage, http://www.hp.com/go/storage
HP BladeSystem storage, www.hp.com/go/storageblades
HP StorageWorks Enterprise Virtual Array, www.hp.com/go/eva
HP StorageWorks Modular Smart Array (MSA) storage systems, www.hp.com/go/msa
Technical specifications (QuickSpecs) and product data sheets, refer to www.hp.com/go/quickspecs
To help us improve our documents, please provide feedback at
http://h20219.www2.hp.com/ActiveAnswers/us/en/solutions/technical_tools_feedback.html
© Copyright 2010 Hewlett-Packard Development Company, L.P. The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Microsoft, Windows and Windows Mobile are U.S. registered trademarks of Microsoft Corporation. Google is a trademark of Google Inc.
4AA2-1309ENW, Created June 2010