
Lenovo Reference Architecture for Microsoft Exchange Server 2013



Reference Architecture: Microsoft Exchange Server 2013

Designed for organizations implementing Microsoft Exchange Server 2013 in a virtual environment

Includes sizing recommendations for servers, storage, and networks for medium to large organizations

Describes a highly available, site-resilient, deployment for Lenovo servers, storage, and network components

Contains detailed Bill of Materials for servers, storage and networking

Roland Mueller

Last update: 08 October 2015


Table of Contents

1 Introduction
2 Business problem and business value
  2.1 Business problem
  2.2 Business value
3 Requirements
  3.1 Functional requirements
  3.2 Non-functional requirements
4 Architectural overview
5 Component model
  5.1.1 Key concepts and terminology
6 Operational model
  6.1 Hardware components
    6.1.1 Lenovo System x3550 M5
    6.1.2 Lenovo Flex System Enterprise Chassis
    6.1.3 Lenovo Flex System x240 M5 Compute Node
    6.1.4 IBM Storwize V3700
    6.1.5 Lenovo Flex System Fabric CN4093 10 Gb Converged Scalable Switch
    6.1.6 Lenovo Flex System FC5022 16Gb SAN Scalable Switch Module
    6.1.7 Lenovo RackSwitch G8124E
    6.1.8 Brocade 6505 Switch
  6.2 Logical mapping of Exchange components
  6.3 High Availability design for Exchange 2013
    6.3.1 Key Exchange high availability concepts and terminology
    6.3.2 Environment sizing
    6.3.3 Database availability groups
    6.3.4 Exchange database distribution
    6.3.5 CAS availability
  6.4 Compute servers
    6.4.1 Windows Server Failover Clustering clusters
    6.4.2 Compute server sizing
    6.4.3 Highly available VMs for Exchange Server 2013
  6.5 Shared storage
    6.5.1 Storage configuration for Exchange databases
    6.5.2 Storage for WSFC cluster
    6.5.3 Storage configuration for IBM Storwize V3700
  6.6 Networking
    6.6.1 Key networking concepts and terminology
    6.6.2 VLANs
    6.6.3 NIC teaming and virtual network adapter configuration
    6.6.4 LACP and vLAG configuration
    6.6.5 Network load balancer
  6.7 Networking for shared storage
    6.7.1 Key storage concepts and terminology
    6.7.2 SAN multi-pathing
    6.7.3 Storage zoning
  6.8 Deployment example with Lenovo Flex System
  6.9 Deployment example with Lenovo System x
7 Deployment considerations
  7.1 Considerations for virtualizing Exchange
8 Appendix: Lenovo Bill of Materials
  8.1 BOM for compute servers
  8.2 BOM for Flex System Enterprise Chassis
  8.3 BOM for networking
  8.4 BOM for shared storage
Trademarks and special notices


1 Introduction

This document describes the reference architecture for a virtualized implementation of Microsoft® Exchange Server 2013 that is based on Lenovo® servers and IBM® Storwize® V3700 storage. This document is intended for IT professionals, technical architects, sales engineers, and consultants, to assist them in planning, designing, and implementing Microsoft Exchange Server 2013.

Microsoft Exchange Server 2013 is the market leader in enterprise messaging and collaboration. Exchange Server 2013 builds upon the Exchange Server 2010 architecture and is redesigned for simplicity of scale, improved hardware utilization, and increased failure isolation. Exchange Server 2013 brings a rich set of technologies, features, and services to the Exchange Server product line. Its goal is to support people and organizations as their work habits evolve from a communication focus to a collaboration focus. At the same time, Exchange Server 2013 helps lower the total cost of ownership.

This reference architecture targets organizations that are implementing Microsoft Exchange Server 2013 in a virtual environment that uses Microsoft Hyper-V®. The solution that is described in this document provides a site resilient, highly available, clustered infrastructure that is running in two data centers with each data center having a 3-node Windows Server Failover Clustering (WSFC) cluster. Each cluster uses the Microsoft Hyper-V hypervisor and is based on Lenovo servers and networking combined with IBM Storwize shared storage.

This document provides the planning, design considerations, and best practices for implementing the described architecture to support medium to large sized organizations with less than 10,000 employees. However, the principles and techniques that are described throughout this document can be expanded upon to support much larger user populations with the addition of storage and compute resources.

For more information about setup and configuration for the highly available, clustered Exchange Server 2013 environment that is described in this document, see the Installation and Configuration Guide, which is available at this website: http://lenovopress.com/tips1324


2 Business problem and business value

This section describes the business challenges organizations face with email and how the reference architecture that is described in this document can help meet those challenges.

2.1 Business problem

Today’s IT managers face many obstacles when maintaining an enterprise-class email system. The vitality of the business depends on responding proactively to those obstacles: inefficiencies in communication that hinder productivity, changing regulations, the threat of litigation, and the constant need to secure and protect valuable enterprise resources.

IT managers are also looking for efficient ways to manage and grow their IT infrastructure with confidence. Good IT practices recognize the need for high availability and maximum resource utilization. Responding quickly to changing business needs with simple, fast deployment and configuration while maintaining healthy systems and services is critical to meeting the dynamic needs of the organization. Natural disasters, malicious attacks, and even simple software upgrade patches can cripple services and applications until administrators resolve the problems and restore any backed up data. The challenge of maintaining uptime only becomes more critical as businesses consolidate physical servers into a virtual server infrastructure to reduce data center costs, maximize utilization, and increase workload performance.

2.2 Business value

Exchange Server 2013 actively protects your communications with built-in defenses against email threats. Multi-layered anti-spam filtering comes with continuous updates to help guard against increasingly sophisticated spam and phishing threats, while multiple anti-malware engines work to protect your email data from viruses.

Microsoft Hyper-V technology serves as a key cloud component in many customer virtualization environments. Hyper-V is included as a standard component in Windows Server® 2012 R2 Standard and Datacenter editions.

This Lenovo reference architecture for Microsoft Exchange Server 2013 provides businesses with an affordable, interoperable, and reliable industry-leading virtualization solution for their Exchange infrastructure. Built around the latest Lenovo servers and networking, and IBM® Storwize® V3700 storage, this offering takes the complexity out of the solution by providing a step-by-step implementation guide. This reference architecture combines Microsoft software, consolidated guidance, and validated configurations for compute, network, and storage. The design provides a high level of redundancy and fault tolerance across the compute servers, storage, networking, and application layer to ensure high availability of resources and Exchange Server databases.


3 Requirements

This section describes the functional and non-functional requirements for an enterprise email system.

3.1 Functional requirements

An enterprise email system should fulfill the following user requirements:

• Receive incoming messages with or without attachments
• Send outgoing messages with or without attachments
• Interpret message content
• Distinguish between mailboxes
• Store incoming messages distinctly according to mailbox
• Store sent messages
• Delete messages
• Forward messages
• Allow sending messages to multiple recipients (cc)
• Allow sending messages to multiple recipients by using a private send list (bcc)
• Prevent incoming messages from being read by someone other than the intended recipient
• Prevent outgoing messages from being read by someone other than the intended recipient
• Prevent mailboxes from being accessed by someone other than the mailbox’s assigned owner
• Prevent messages from being sent by someone other than the mailbox’s assigned owner
• Allow users to categorize their messages
• Allow users to sort their messages based on criteria (such as date, sender, subject, or size)
• Allow users to search their inbox
• Provide users with a 2 GB mailbox
• Provide the performance to send/receive 100 messages per day
• Provide the storage and throughput for an average message size of 75 KB

An enterprise email system should fulfill the following administrator requirements:

• Create a mailbox
• Configure the default email address for new user accounts
• Delete a mailbox
• Move a mailbox
• Move a mailbox database
• Recover accidentally deleted messages
• Monitor performance
• Prevent spam messages from reaching recipients
• Create public folders
• Configure and change a mailbox quota
• View the current mailbox size, message count, and last logon for a user
• Prevent a mailbox from exceeding its quota size
• Configure journaling
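Most of these administrator requirements map to standard Exchange Management Shell cmdlets. The following sketch shows a few of them; the user name, database name, and quota values are hypothetical and would be adapted to a specific environment.

```powershell
# Create a mailbox for a new user (prompts for the initial password)
New-Mailbox -Name "Jane Doe" -UserPrincipalName jdoe@contoso.com -Password (Read-Host -AsSecureString "Password")

# Configure and change a mailbox quota (2 GB mailbox requirement from section 3.1)
Set-Mailbox -Identity jdoe -IssueWarningQuota 1.9GB -ProhibitSendQuota 2GB `
    -ProhibitSendReceiveQuota 2.3GB -UseDatabaseQuotaDefaults $false

# View the current mailbox size, message count, and last logon for a user
Get-MailboxStatistics -Identity jdoe | Format-List DisplayName,TotalItemSize,ItemCount,LastLogonTime

# Move a mailbox to another mailbox database
New-MoveRequest -Identity jdoe -TargetDatabase "DB02"
```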


3.2 Non-functional requirements

Table 1 lists the non-functional requirements for an enterprise email system.

Table 1. Non-functional requirements

Requirement | Description | Supported by
------------ | ------------ | ------------
Scalability | Solution components can scale for growth | Compute and storage can be scaled independently within a rack or across racks without service downtime.
Load balancing | Workload is distributed evenly across compute servers | Network interfaces are teamed and load balanced.
High availability | Single component failure does not lead to whole system unavailability | Hardware architecture ensures that computing, storage, and networking are automatically switched to remaining components; redundancy in hardware.
Physical footprint | Compact solution | Lenovo compute servers, network devices, and software are integrated into one rack with validated performance and reliability.
Support | Available vendor support | Hardware warranty and software support are included with component products. Separately available commercial support from Microsoft.
Flexibility | Solution supports variable deployment methodologies | Hardware and software components can be modified or customized to meet various unique customer requirements. Provides local and shared storage for workload.
Robustness | Solution continuously works without routine supervision | Integration tests on hardware and software components.
Security | Solution provides means to secure customer infrastructure | Security is integrated in the Lenovo Flex System™ hardware with System x Trusted Platform Assurance, which is an exclusive set of industry-leading security features and practices. Networks are isolated by virtual LAN (VLAN).
High performance | Solution components are high-performance | Reference architecture provides information for capacity and performance planning of typical deployments.


4 Architectural overview

Figure 1 shows the architectural overview of a multi-site Exchange Server 2013 deployment with two data centers. Each data center has three compute servers, two 10 Gigabit Ethernet (GbE) network switches, one layer 4 network load balancer, two 8 Gb Fibre Channel (FC) switches, and one storage area network (SAN) storage system. A WSFC cluster is installed in each of the data centers to host the Exchange infrastructure. Multiple paths connect the clustered compute servers to the networking and storage infrastructure to maintain access to critical resources if there is a planned or unplanned outage.

Each clustered compute server has the Hyper-V role installed and hosts virtual machines (VMs). The VMs run Exchange Server 2013 and support up to 10,000 users.

Figure 1. Architectural overview


5 Component model

This section describes the logical component view of the Exchange Server 2013 environment. Figure 2 shows a high-level component model.

Figure 2. Exchange Server 2013 logical component view

5.1.1 Key concepts and terminology

The following basic concepts and terminology are used throughout this section:

Exchange Admin Center (EAC) – The EAC is the web-based management console in Microsoft Exchange Server 2013 that is optimized for on-premises, online, and hybrid Exchange deployments. The EAC replaces the Exchange Management Console (EMC) and the Exchange Control Panel (ECP), which were the two interfaces used to manage Exchange Server 2010.

Exchange Control Panel (ECP) – The ECP is a web application that runs on a Client Access Server and provides services for the Exchange organization.

Exchange Web Services (EWS) – EWS provides the functionality to enable client applications to communicate with the Exchange server.

Internet Information Services (IIS) – IIS is an extensible web server that was created by Microsoft for use with Windows NT family.

Internet Message Access Protocol (IMAP) – IMAP is a communications protocol for email retrieval and storage developed as an alternative to POP.


Microsoft Exchange ActiveSync (EAS) – EAS is a communications protocol that is designed for the synchronization of email, contacts, calendar, tasks, and notes from a messaging server to a smartphone or other mobile device.

Microsoft Outlook® Web App (OWA) – OWA (formerly Outlook Web Access) is a browser-based email client with which users can access their Microsoft Exchange Server mailbox from almost any web browser.

Offline Address Book (OAB) – The OAB is a copy of an address list collection that was downloaded so a Microsoft Outlook user can access the address book while disconnected from the server. Microsoft Exchange generates the new OAB files and then compresses the files and places them on a local share.

Outlook Anywhere – Outlook Anywhere is a service that provides RPC/MAPI connectivity for Outlook clients over HTTP or HTTPS by using the Windows RPC over HTTP component. In previous versions of Exchange Server, this function was used for remote or external access only. However, in Exchange Server 2013, all Outlook connectivity is via HTTP/HTTPS (even for internal clients).

Post Office Protocol (POP) – The POP is an application-layer Internet standard protocol that is used by local email clients to retrieve email from a remote server over a TCP/IP connection

Real-time Transport Protocol (RTP) – RTP is a network protocol for delivering audio and video over IP networks.

Remote PowerShell (RPS) – RPS allows you to use Windows PowerShell on your local computer to create a remote Shell session to an Exchange server if you do not have the Exchange management tools installed.
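As an illustration of how such a remote session is established, the following sketch connects to a hypothetical Exchange 2013 server named ex01.contoso.com from a workstation that has only Windows PowerShell installed.

```powershell
# Create and import a remote session to the Exchange server (no Exchange tools required locally)
$session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri http://ex01.contoso.com/PowerShell/ -Authentication Kerberos
Import-PSSession $session

# Exchange cmdlets are now proxied through the remote session
Get-Mailbox -ResultSize 10

# Clean up when finished
Remove-PSSession $session
```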

RPC Client Access (RPC) – In Microsoft Exchange Server 2007, the Client Access server role was introduced to handle incoming client connections to Exchange mailboxes. Although most types of client connections were made to the Client Access server, Microsoft Office Outlook still connected directly to the Mailbox server when it was running internally with the MAPI protocol.

A new service was introduced with Exchange Server 2010 to allow these MAPI connections to be handled by the Client Access server. The RPC Client Access service provides data access through a single, common path of the Client Access server, with the exception of public folder requests (which are still made directly to the Mailbox server). This change applies business logic to clients more consistently and provides a better client experience when failover occurs.

Remote Procedure Call over HTTP – The RPC over HTTP component wraps RPCs in an HTTP layer that allows traffic to traverse network firewalls without requiring RPC ports to be opened. In Exchange 2013, this feature is enabled by default because Exchange 2013 does not allow direct RPC connectivity.

Session Initiation Protocol (SIP) – SIP is a protocol that is used for starting, modifying, and ending an interactive user session that involves multimedia elements, such as video, voice, and instant messaging.

Simple Mail Transfer Protocol (SMTP) – SMTP is an Internet standard for email transmission.

Unified Messaging (UM) – UM allows an Exchange Server mailbox account that was enabled for UM to receive email, voice, and fax messages in the Inbox.


6 Operational model

This section describes an operational model for deploying Microsoft Exchange Server 2013 that uses Lenovo servers that are clustered with the Microsoft Windows Server 2012 R2 operating system. The servers are attached via FC to an IBM Storwize V3700 storage system with multiple expansion enclosures.

6.1 Hardware components

This section describes the components in an Exchange Server 2013 deployment.

6.1.1 Lenovo System x3550 M5

The Lenovo System x3550 M5 server delivers the performance and reliability that is required for business-critical applications, such as Exchange Server 2013. Lenovo System x3550 M5 servers can be equipped with up to two 18-core E5-2600 v3 series processors and up to 1.5 TB of TruDDR4 memory. Up to three PCIe 3.0 expansion slots, four integrated 1 GbE network ports, and an optional embedded dual-port 10/40 GbE network adapter provide ports for your data and storage connections.

The Lenovo System x3550 M5 includes an on-board RAID controller and the choice of spinning hot swap SAS or SATA disks and small form factor (SFF) hot swap solid-state drives (SSDs). The x3550 M5 supports a maximum of 24 TB of internal storage.

The x3550 M5 supports the following components:

• Up to 10 front and two rear SFF hard disk drives (HDDs) or SSDs
• Up to four 3.5-inch HDDs

The x3550 M5 also supports remote management via the Lenovo Integrated Management Module (IMM), which enables continuous management capabilities. All of these key features and others help solidify the dependability Lenovo customers are accustomed to with System x servers.

The Lenovo System x3550 M5 server is shown in Figure 3.

Figure 3. Lenovo System x3550 M5

For more information, see this website: lenovopress.com/tips1194.html

6.1.2 Lenovo Flex System Enterprise Chassis

To meet today’s complex and ever-changing business demands, the Lenovo Flex System Enterprise Chassis provides a high-performance, integrated infrastructure platform that supports a mix of compute, storage, and networking capabilities. The Flex System Enterprise Chassis was designed and built specifically to provide the efficiency you need now, along with the growth path necessary to protect your investment into the future.


The Flex System Enterprise Chassis is the foundation of the Flex System offering, which features 14 standard (half-width) Flex System form factor compute node bays in a 10U chassis that delivers high-performance connectivity for your integrated compute, networking, and management resources. Adding compute, storage, or networking capability is done by adding more nodes, modules, or chassis. This flexible chassis was designed to deploy easily now and can scale to meet your needs in the future.

The Lenovo Flex System Enterprise Chassis is shown in Figure 4.

Figure 4. Lenovo Flex System Enterprise Chassis with compute nodes

For more information about the Lenovo Flex System Enterprise Chassis, see this document: lenovo.com/images/products/system-x/pdfs/datasheets/flex_system_enterprise_chassis_ds.pdf

6.1.3 Lenovo Flex System x240 M5 Compute Node

At the core of this reference architecture, six Lenovo Flex System x240 M5 Compute Nodes deliver the performance and reliability that is required for business-critical applications (such as Microsoft Exchange).

Lenovo Flex System x240 M5 Compute Nodes can be equipped with up to two 18-core E5-2600 v3 series processors and a maximum of 1.5 TB of TruDDR4 memory. Two PCIe 3.0 expansion slots for mezzanine cards provide ports for your data and storage connections.

The Lenovo Flex System x240 M5 Compute Node includes an on-board RAID controller with integrated RAID-0 or RAID-1 and the choice of spinning hot swap SAS or SATA disks and SFF hot swap SSDs.

The x240 M5 Compute Node also supports remote management via the Lenovo IMM that enables continuous management capabilities. All of these key features and others help solidify the dependability of Lenovo System x servers. Figure 5 shows the Lenovo Flex System x240 M5 Compute Node.


Figure 5. Lenovo Flex System x240 M5 Compute Node

For more information about the Lenovo Flex System x240 M5 Compute Node, see this document: lenovo.com/images/products/system-x/pdfs/datasheets/x240_m5_ds.pdf

6.1.4 IBM Storwize V3700

The IBM Storwize V3700 combines best-of-breed storage development with leading iSCSI, FC over Ethernet (FCoE), or FC host interfaces and SAS/NL-SAS/SSD drive technology. With its simple, efficient, and flexible approach to storage, the Storwize V3700 is a cost-effective complement to the Lenovo solution for Microsoft Exchange 2013. By offering substantial features at a price that fits most budgets, the Storwize V3700 delivers superior price-to-performance ratios, functionality, scalability, and ease of use for the mid-range storage user.

The Storwize V3700 offers the following benefits:

• Simplified management with an integrated, intuitive user interface for faster system accessibility
• Reduced network complexity with FCoE and iSCSI connectivity
• Optimized costs for mixed workloads, with up to 200% better performance with SSDs by using IBM System Storage® Easy Tier®
• Thin provisioning, which supports business applications that must grow dynamically, while using only the space that is actually used
• Improved application availability and resource utilization for organizations of all sizes

IBM Storwize V3700 (shown in Figure 6) is well-suited for Microsoft virtualized environments. The Storwize V3700 complements the Lenovo servers by delivering proven disk storage in flexible, scalable configurations. By connecting optional expansion units, a Storwize V3700 can scale up to 240 SAS drives and SSDs. The Storwize V3700 comes standard with 4 GB cache per controller (upgradable to 8 GB cache per controller) for a maximum of 16 GB cache for the entire system.


Figure 6. IBM Storwize V3700

For more information, see this website: ibm.com/systems/storage/disk/storwize_v3700

6.1.5 Lenovo Flex System Fabric CN4093 10 Gb Converged Scalable Switch

The Flex System Fabric CN4093 10 Gb Converged Scalable Switch provides unmatched scalability, port flexibility, performance, convergence, and network virtualization. The switch also delivers innovations to help address several networking concerns today and provides capabilities that help you prepare for the future. The switch offers full Layer 2/3 switching and FCoE Full Fabric and FC NPV Gateway operations to deliver a truly converged integrated solution. It is designed to install within the I/O module bays of the Flex System Enterprise Chassis. The switch can help clients migrate to a 10 GbE or 40 GbE converged Ethernet infrastructure and offers virtualization features, such as Virtual Fabric and VMready®.

The CN4093 offers up to 12 external Omni Ports, which provide extreme flexibility with the choice of SFP+ based 10 Gb Ethernet connectivity or 4/8 Gb FC connectivity, depending on the SFP+ module that is used.

The base switch configuration comes standard with 22x 10 GbE port licenses that can be assigned to internal connections or external SFP+, Omni or QSFP+ ports with flexible port mapping. For example, this feature allows you to trade off four 10 GbE ports for one 40 GbE port (or vice versa) or trade off one external 10 GbE SFP+ or Omni port for one internal 10 GbE port (or vice versa). You then have the flexibility of turning on more ports when you need them by using Lenovo Features on Demand (FoD) upgrade licensing capabilities that provide “pay as you grow” scalability without the need to buy more hardware.

The Flex Fabric CN4093 10 Gb Converged Scalable Switch module is shown in Figure 7.

Figure 7. Flex Fabric CN4093 10 Gb Converged Scalable Switch module


For more information about the Flex Fabric CN4093 10 Gb Converged Scalable Switch, see the Lenovo Press product guide: http://lenovopress.com/tips0910.

6.1.6 Lenovo Flex System FC5022 16Gb SAN Scalable Switch Module

The Flex System FC5022 16Gb SAN Scalable Switch models leverage Gen 5 FC and support 12 - 48 ports operating at 2 Gbps to 16 Gbps FC speeds via both 16 Gbps and 8 Gbps optics. The switch provides up to 28 internal ports to compute nodes by way of the Flex System chassis mid-plane and 20 external SFP+ FC ports. Some clients start with a 12-port module for a lower entry cost in a partial chassis deployment, while clients deploying a full chassis select one of the 24-port models. Clients primarily deploy the FC5022 in one of two modes: Access Gateway or Full Fabric mode. Clients who want tighter integration with their external Brocade fabric might select the 24-port ESB model, which includes support for advanced diagnostics and monitoring by using Fabric Vision. All FC5022 models support simplified monitoring and management by using end-to-end Brocade tools, such as Web Tools and Brocade Network Advisor (BNA).

The Flex System FC5022 16Gb SAN Switch module is shown in Figure 8.

Figure 8. Lenovo Flex System FC5022 16Gb SAN Switch module

For more information about the Flex System FC5022 16Gb SAN Switch, see the product guide: lenovopress.com/tips0870

6.1.7 Lenovo RackSwitch G8124E

The Lenovo RackSwitch™ G8124E is a 10 Gigabit Ethernet switch that is specifically designed for the data center and provides a virtualized, cooler, and easier network solution. The G8124E offers 24 10 GbE ports in a high-density, 1U footprint.

Designed with ultra-low latency and top performance in mind, the RackSwitch G8124E provides line-rate, high-bandwidth switching, filtering, and traffic queuing without delaying data. Large data center grade buffers keep traffic moving. The G8124E also supports Converged Enhanced Ethernet (CEE) and Data Center Bridging for support of FCoE and can be used for NAS or iSCSI.

The G8124E is virtualized and supports VMready technology, which is an innovative, standards-based solution to manage VMs in small to large-scale data center and cloud environments. VMready works with all leading VM providers. The G8124E also supports Virtual Fabric, which allows for the carving up of a physical NIC into 2 - 8 vNICs for improved performance, availability, and security while reducing cost and complexity.

The G8124E is cooler and implements a choice of directional cooling to maximize data center layout and provisioning. Its superior airflow design complements the hot-aisle and cold-aisle data center cooling model.


The G8124E is easier to configure with server-oriented provisioning via point-and-click management interfaces. Its industry-standard CLI and easy interoperability simplifies configuration for those familiar with Cisco environments.

The Lenovo RackSwitch G8124E is shown in Figure 9.

Figure 9. Lenovo RackSwitch G8124E

The RackSwitch G8124E includes the following benefits:

• A total of 24 SFP+ ports that operate at 10 Gb or 1 Gb Ethernet speeds
• Optimal for high-performance computing and applications that require high bandwidth and low latency
• All ports are nonblocking 10 Gb Ethernet with deterministic latency of 570 nanoseconds
• VMready helps reduce configuration complexity and improves security levels in virtualized environments
• Variable-speed fans automatically adjust as needed, which helps to reduce energy consumption
• Easy, standards-based integration into Cisco and other networks helps reduce downtime and learning curve

For more information, see this website: lenovopress.com/tips0787.html

6.1.8 Brocade 6505 Switch

As the value and volume of business data continue to rise, organizations need technology solutions that are easy to implement and manage and that can grow and change with minimal disruption. The Brocade 6505 Switch provides small to medium-sized enterprises with SAN connectivity that simplifies their IT management infrastructures, improves system performance, maximizes the value of virtual server deployments, and reduces overall storage costs.

The 8 Gbps or 16 Gbps FC Brocade 6505 Switch provides a simple, affordable solution for both new and existing SANs. The EZSwitchSetup wizard and other usability and configuration enhancements can be used to simplify deployment. The switch also provides state-of-the-art performance and Ports on Demand (PoD) scalability to support SAN expansion and enable long-term investment protection.

The Brocade 6505 Switch is shown in Figure 10.


Figure 10. Brocade 6505 Switch

The Brocade 6505 Switch includes the following benefits:

• Provides an affordable, flexible foundation for entry-level SANs, and an edge switch for core-to-edge SAN environments
• Delivers up to 24 ports of 8 Gbps or 16 Gbps performance in an energy-efficient, optimized 1U form factor to support the most demanding server and virtual server deployments
• Simplifies configuration and management with easy-to-use tools (such as the Brocade EZSwitchSetup wizard) and is Microsoft Simple SAN-compatible
• Enables “pay-as-you-grow” expansion with PoD scalability from 12 to 24 ports in 12-port increments
• Offers dual functionality as a full-fabric SAN switch or as an NPIV-enabled Brocade Access Gateway that simplifies server connectivity in heterogeneous enterprise fabrics
• Protects device investments with auto-sensing 1, 2, 4, 8, and 16 Gbps capabilities and native operation with Brocade and Brocade M-Series fabrics
• Delivers higher availability and reliability with support for redundant power supplies
• Ensures investments work in the future by enabling organizations to use 8 Gbps SFPs today and upgrade to 16 Gbps SFP+ when required

For more information about the Brocade 6505 Switch, see this website: shop.lenovo.com/us/en/systems/storage/san/fibre-channel-switches/brocade-6505/


6.2 Logical mapping of Exchange components

In this section, we describe how the components of an Exchange Server 2013 deployment are mapped to the logical design of the solution.

Figure 11 shows the components that are shown in Figure 2 as they map to the logical design of the solution.

Figure 11. Component mapping to the logical design

Lenovo recommends deploying the Client Access Server (CAS) role on the same compute server or VM as the mailbox role (multi-role deployment). Therefore, the CAS role is deployed in a 1:1 ratio with the mailbox role.


6.3 High Availability design for Exchange 2013

This section describes the high availability functionality for an Exchange Server 2013 environment.

6.3.1 Key Exchange high availability concepts and terminology

The following Exchange high availability concepts and terminology are used in this section:

Mailbox database – A mailbox database is a unit of granularity in which mailboxes are created and stored. A mailbox database is stored as an Exchange database (.edb) file. In Microsoft Exchange Server 2013, each mailbox database has its own properties that can be configured.

Highly available database copies – Highly available database copies are configured with a replay lag time of zero. As their name implies, highly available database copies are kept up-to-date by the system, can be automatically activated by the system, and are used to provide high availability for mailbox service and data.

Lagged database copy – Lagged database copies are configured to delay transaction log replay for some time. Lagged database copies provide point-in-time protection, which can be used to recover from store logical corruptions, administrative errors (for example, deleting or purging a disconnected mailbox), and automation errors (for example, bulk purging of disconnected mailboxes).

6.3.2 Environment sizing

When a new email system is implemented, it is important to correctly profile the user population to determine the average number of emails that are sent and received per day and the average message size.

Microsoft provides the Exchange 2013 Server Role Requirements Calculator tool to help accurately design an organization’s Exchange infrastructure. The calculator was used for the environment sizing that is used in this document. The calculator can be downloaded from this website: blogs.technet.com/b/exchange/archive/2013/05/14/released-exchange-2013-server-role-requirements-calculator.aspx

The sizing can be determined by using the defined requirements and the profile information with the calculator that is provided by Microsoft.

The configuration that is described in this document supports up to 10,000 users and provides a solid foundation for a virtualized Microsoft Exchange Server 2013 environment. It can be expanded to multiple servers and storage for more compute capacity or storage.
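As a rough illustration of how the requirements from section 3.1 translate into capacity, the following back-of-the-envelope calculation uses the per-site figures that appear later in sections 6.3.4 and 6.5.1. It is illustrative only and is not a substitute for the Microsoft calculator.

```powershell
# Illustrative capacity arithmetic only; the role requirements calculator remains authoritative
$usersPerSite   = 5000      # users served by each DAG (section 6.3.4)
$mailboxQuotaGB = 2         # 2 GB mailbox requirement (section 3.1)
$databases      = 6         # mailbox databases per DAG (section 6.3.4)

$mailboxesPerDb = $usersPerSite / $databases            # ~833 mailboxes per database
$dataPerDbGB    = $mailboxesPerDb * $mailboxQuotaGB     # ~1,667 GB of mailbox data per database

# The 2,762 GB database-and-log volumes in section 6.5.1 leave headroom for logs,
# deleted item retention, content indexes, and free space on top of this figure.
"{0:N0} mailboxes per database, ~{1:N0} GB of mailbox data per database" -f $mailboxesPerDb, $dataPerDbGB
```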


6.3.3 Database availability groups

A database availability group (DAG) is the base component of the high availability and site resilience framework that is built into Microsoft Exchange Server 2013. A DAG is a group of up to 16 mailbox servers that hosts a set of mailbox databases and provides automatic database-level recovery from failures that affect individual servers or databases.

A DAG is a boundary for mailbox database replication, database and server switchovers, failovers, and an internal component called Active Manager. Active Manager, which runs on every server in a DAG, manages switchovers and failovers.

Any server in a DAG can host a copy of a mailbox database from any other server in the DAG. When a server is added to a DAG, it works with the other servers in the DAG to provide automatic recovery from failures that affect mailbox databases (such as a disk failure or server failure).

Figure 12 shows an example of the design for a DAG. An active/active, multi-site implementation (with a user population of 5,000 at each geographically dispersed site) requires two DAGs that span both data centers. Four Exchange Server VMs (with the Mailbox and CAS roles installed) are required in each DAG to host the active copy, the two passive copies, and the lagged copy of each mailbox database.

Each VM is assigned seven volumes; six for housing database and log files and a volume to be used to restore databases.

Figure 12. DAG design component diagram
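A minimal sketch of building such a DAG from the Exchange Management Shell follows. The DAG, witness, server, and database names are hypothetical, only one of the six databases is shown, and the 7-day replay lag is an assumed value; the copy layout mirrors the design described above.

```powershell
# Create the DAG with a file share witness in the primary data center
New-DatabaseAvailabilityGroup -Name "DAG1" -WitnessServer "FS01" -WitnessDirectory "C:\DAG1"

# Add the four mailbox servers (two per data center)
"MBX1","MBX2","MBX3","MBX4" | ForEach-Object {
    Add-DatabaseAvailabilityGroupServer -Identity "DAG1" -MailboxServer $_
}

# Create one mailbox database with its active copy on MBX1
New-MailboxDatabase -Name "DB01" -Server "MBX1" -EdbFilePath "E:\DB01\DB01.edb" -LogFolderPath "E:\DB01\Logs"

# Passive copy in the same data center, passive and lagged copies in the secondary data center
Add-MailboxDatabaseCopy -Identity "DB01" -MailboxServer "MBX2" -ActivationPreference 2
Add-MailboxDatabaseCopy -Identity "DB01" -MailboxServer "MBX3" -ActivationPreference 3
Add-MailboxDatabaseCopy -Identity "DB01" -MailboxServer "MBX4" -ReplayLagTime 7.00:00:00 -ActivationPreference 4
```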


The active database copies are hosted by the two VMs in the data center closest to the users whose mailboxes comprise those databases. Hosting the active databases close to the users prevents users from losing access to their email if there is a wide area network (WAN) outage.

Each DAG is assigned a witness server (a file server) that is used as another vote to maintain quorum if there is a WAN outage. Each DAG’s witness must be in the same data center that hosts the active database copies (during normal runtime) for that DAG. For example, DAG1’s witness server should be in the primary data center because DAG1’s user population and active databases are also there.

If there is a WAN failure, DAG1 has three quorum votes in the primary data center (the two mailbox servers and the witness server), but only two votes in the secondary data center (the two mailbox servers). Therefore, the databases that are hosted by DAG1 in the primary datacenter remain active and the databases in the secondary data center deactivate if there is a WAN outage. DAG1’s user population is near the primary data center so the users do not lose access to their mailboxes because they do not have to traverse the WAN.

The same holds true for the secondary data center in a WAN outage. Users who are near the secondary data center maintain access to their mailboxes because the databases were hosted locally.

Figure 13 shows the environment during a WAN outage.

Figure 13. The Exchange environment during a WAN outage


6.3.4 Exchange database distribution

Figure 14 shows a detailed view of the database layout for DAG1 in the Exchange environment at normal runtime (all VMs are operational).

Figure 14. Exchange environment when all VMs are operational

Note: DAG2 is the mirror image of DAG1 (as shown in Figure 14) with the passive and lagged database copies in the primary data center and the active database copies in the secondary data center.

To support the 5,000 Exchange users who are local to the primary data center, DAG1 consists of four Exchange server VMs (two in each data center). Six mailbox databases are required to meet the needs of the local user population of 5,000. The two VMs in the primary data center each host three active mailbox databases and three passive mailbox database copies of the databases that are hosted by the other VM (for example, if MBX1 hosts the active copy of a database, MBX2 hosts the passive copy of that database). At the secondary data center, two Exchange server VMs each host three passive mailbox database copies and three lagged mailbox database copies of the databases that are active in the primary data center.


If there is a server failure or a VM is taken offline for maintenance, the active copies of the databases that are hosted on the affected server go offline and the passive copies that are hosted by the other VM in the same data center become active, as shown in Figure 15.

Figure 15. Exchange environment with a single VM failure

If there is a second server failure in the primary data center, the passive copies that are hosted by the VMs in the secondary data center become active. At this point, Lenovo recommends playing the logs forward on the lagged database copies to convert them to a highly-available passive copy of the database rather than a lagged copy. Doing so prevents a disruption of mail service if the environment sustains a third server failure, as shown in Figure 16.


Figure 16. Exchange environment with two VM failures
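One way to play the logs forward on a lagged copy, as recommended above, is to remove its replay lag so that it catches up and becomes a normal highly available copy. The database and server names below are hypothetical and match the earlier DAG sketch.

```powershell
# Remove the replay lag so the copy replays its queued logs and becomes a highly available copy
Set-MailboxDatabaseCopy -Identity "DB01\MBX4" -ReplayLagTime 0.00:00:00

# Watch the replay queue drain and confirm the copy is healthy
Get-MailboxDatabaseCopyStatus -Identity "DB01\MBX4" | Format-List Status,CopyQueueLength,ReplayQueueLength
```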

Finally, if there is a third server failure, the passive database copies on the remaining VM become active to support the user population of 5,000 for DAG1, as shown in Figure 17.


Figure 17. Exchange environment that is running with a single VM per DAG

6.3.5 CAS availability

Where the DAG provides high availability for the mailbox databases, the CAS role requires separate consideration. In Exchange Server 2010, CAS server high availability was achieved by using an array of CAS servers that were load balanced by using a network load balancer. In Exchange Server 2013, the CAS array was replaced with the concept of a single namespace for Outlook connectivity.

In a default installation, each CAS server registers its fully qualified domain name (FQDN) as its internal host name in Outlook Anywhere. When an Outlook client makes a connection to a CAS server, it connects to the server’s registered internal host name. If the server fails, the connection times out and Outlook Anywhere automatically discovers an available CAS server and creates a connection. However, this process is slow and can leave an Outlook client disconnected for some time. To reduce the time that is required to create a connection, each CAS server can be configured to use a single namespace as its internal host name in Outlook Anywhere. This configuration requires registering the single namespace as the internal host name for each CAS server and creating a Domain Name System (DNS) record on the DNS server that points to the single namespace. This technique ensures Outlook clients take less time to re-establish connectivity to one of the other IP addresses to which the shared namespace resolves.

Note: DNS round-robin can be used for load distribution, but a network load balancer is a better option because it provides faster switching capabilities. When a network load balancer is used, the single namespace resolves to the virtual IP (VIP) that is defined by the network load balancer rather than the IP address of a CAS server. When the network load balancer detects a server failure, it redirects incoming connections to CAS servers that remain online.
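A sketch of the single-namespace configuration described above follows. The namespace mail.contoso.com, the server names, and the VIP are hypothetical, and the DNS cmdlet assumes the DnsServer module is available (for example, on the DNS server or an administrative workstation with the RSAT DNS tools).

```powershell
# Point Outlook Anywhere on each CAS at the shared namespace instead of the server FQDN
"MBX1","MBX2","MBX3","MBX4" | ForEach-Object {
    Get-OutlookAnywhere -Server $_ | Set-OutlookAnywhere -InternalHostname "mail.contoso.com" -InternalClientsRequireSsl $true
}

# Create the DNS record for the namespace; with a load balancer this points at the VIP
Add-DnsServerResourceRecordA -Name "mail" -ZoneName "contoso.com" -IPv4Address "192.168.30.50"
```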

6.4 Compute servers

In this section, we describe how the component model for Exchange Server 2013 is mapped onto physical compute servers and VMs.

Windows Server 2012 R2 Standard and Datacenter editions with Hyper-V provide the enterprise with a scalable and highly elastic platform for virtualization environments. It can support today’s largest servers with up to 4 TB of RAM, 320 logical processors, and 64 nodes per cluster. It also includes key features, such as high-availability clustering, simultaneous live migration, in-box network teaming, and improved Quality of Service (QoS) features. With these capabilities, IT organizations can simplify resource pools that are used to support their cloud environments. Under Windows Server 2012 R2, Hyper-V also uses the operating system’s ability to better use resources that are presented to VMs by offering up to 64 virtual CPUs, 1 TB of RAM, and virtual Host Bus Adapter (vHBA) support.

6.4.1 Windows Server Failover Clustering clusters

Three Lenovo servers are installed in each of the data centers. The three servers in each of the data centers have the Hyper-V role installed and are configured as a WSFC cluster that helps eliminate single points of failure so users have near-continuous access to important server-based, business-productivity applications (such as Microsoft Exchange). These servers host the VMs that comprise the Exchange environment.

Each data center includes four VMs. Each VM has its own operating system instance and is isolated from the host operating system and other VMs. VM isolation helps promote higher business-critical application availability while the Failover Clustering feature, found in the Windows Server 2012 R2 Standard and Datacenter Editions, can dramatically improve production system uptimes.
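A minimal sketch of forming one of these host clusters with the Failover Clustering PowerShell module is shown below; the host names, cluster name, and IP address are hypothetical.

```powershell
# Install the feature on each Hyper-V host (repeat per node or use -ComputerName)
Install-WindowsFeature Failover-Clustering -IncludeManagementTools

# Validate the configuration, then create the three-node cluster for the data center
Test-Cluster -Node "HV1","HV2","HV3"
New-Cluster -Name "HVCLUS1" -Node "HV1","HV2","HV3" -StaticAddress "192.168.40.20"
```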


Figure 18. WSFC clusters that are running Microsoft Hyper-V and hosting Exchange server VMs

6.4.2 Compute server sizing

The recommended configuration for the compute servers includes the following components:

• Minimum of 384 GB RAM (320 GB is required to support two VMs on each compute server)
• Two Intel Xeon E5-2697 v3 (Haswell) 2.6 GHz 14-core processors
• One dual-port Emulex 10 GbE adapter
• One dual-port Emulex 8 Gb FC HBA

A minimum of three compute servers per data center are recommended for the virtualization host cluster. Two compute servers can support the four VMs in a data center. The third compute server allows a compute server to be taken down for maintenance (or an unplanned outage) without bringing down Exchange server VMs.

6.4.3 Highly available VMs for Exchange Server 2013

Exchange VMs provide the virtual infrastructure for the Exchange environment. For this reference architecture, four VMs are provisioned on each cluster with the following configuration:

• 160 GB memory
• 13 virtual processors (only 8 if hosting passive and lagged databases in a remote data center)
• 1 SCSI controller with one VHDX file: 127 GB VHDX for the system files and operating system
• 3 network adapters:
  o Network adapter 1 = Corporate network 192.168.40.x
  o Network adapter 2 = MAPI network 192.168.30.x
  o Network adapter 3 = Database replication network 192.168.20.x
• Windows Server 2012 R2 Standard Edition
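The configuration above could be scripted with the Hyper-V and Failover Clustering modules roughly as follows. The VM name, paths, and virtual switch names are hypothetical, and static (not dynamic) memory is assumed because Exchange does not support dynamic memory.

```powershell
# Create the VM on a cluster shared volume with a 127 GB system disk and static memory
New-VM -Name "DC1-MBX1" -MemoryStartupBytes 160GB -NewVHDPath "C:\ClusterStorage\Volume1\DC1-MBX1\System.vhdx" `
    -NewVHDSizeBytes 127GB -SwitchName "Corp-vSwitch"

# 13 virtual processors (8 for VMs that host only passive and lagged copies)
Set-VMProcessor -VMName "DC1-MBX1" -Count 13

# Additional adapters for the MAPI and database replication networks
Add-VMNetworkAdapter -VMName "DC1-MBX1" -SwitchName "MAPI-vSwitch"
Add-VMNetworkAdapter -VMName "DC1-MBX1" -SwitchName "Repl-vSwitch"

# Make the VM highly available in the WSFC cluster
Add-ClusterVirtualMachineRole -VirtualMachine "DC1-MBX1"
```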

For more information about creating highly available VMs for Exchange Server 2013, see the Installation and Configuration Guide.

6.5 Shared storage

Shared storage is used for the Exchange mailbox databases and logs and for the quorum and cluster shared volumes (CSVs) that are used by the WSFC cluster.

6.5.1 Storage configuration for Exchange databases

Each Exchange server VM requires six volumes for the mailbox databases and logs and one volume (the Restore volume) to use when mailbox databases are restored.

Isolate the database and log workload from the restore volume by creating separate storage pools or RAID arrays; one RAID-10 storage pool (or array) for the database and log volumes and one, single-disk RAID-0 storage pool (or array) for the restore volume.

Figure 19 shows the storage design for a single VM that uses a storage pool-based disk storage system. Lenovo performed cost-based analysis and determined that 3.5-inch 3 TB NL-SAS drives are the least expensive option regarding the number of disks and storage subsystems that are required. Therefore, this design is based on 3 TB 7.2k NL-SAS drives.

Figure 19. An example storage design for each VM in the Exchange environment

Each Exchange server VM requires six 2,762 GB volumes that are allocated for database and log files. Also, a single 2,790 GB volume should be allocated and assigned to each Exchange server VM for temporary storage space when a failed database is reseeded.
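Inside each mailbox VM, the volumes would typically be brought online and formatted per Microsoft's Exchange storage guidance (GPT partitions, 64 KB allocation unit). A sketch for one database volume follows; the disk number, drive letter, and label are hypothetical.

```powershell
# Create a GPT partition on the presented volume and format it with a 64 KB allocation unit
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -DriveLetter E -UseMaximumSize
Format-Volume -DriveLetter E -FileSystem NTFS -AllocationUnitSize 65536 -NewFileSystemLabel "DB01"
```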

For more information about creating the storage pools and volumes by using the Storwize V3700, see the accompanying Installation and Configuration Guide.

There is no single storage configuration that is appropriate for every organization. Lenovo recommends gaining a thorough understanding of the capacity needs of your organization before implementing the storage design and then monitoring the solution for bottlenecks.

Microsoft provides comprehensive guidance on sizing and capacity planning for Exchange. For more information about Exchange sizing and capacity planning, see the TechNet article that is available at this website: technet.microsoft.com/en-us/library/dn879075(v=exchg.150).aspx


6.5.2 Storage for WSFC cluster

To increase the high availability of a WSFC cluster and the roles that are hosted on that cluster, it is important to set the cluster quorum configuration appropriately. The quorum for a cluster is determined by the number of voting elements that must be part of active cluster membership for that cluster to start properly or continue running. By default, every node in the cluster has a single quorum vote. Also, a quorum witness (when configured) has another single quorum vote. A quorum witness can be a designated disk resource or a file share resource. Each element can cast one “vote” to determine whether the cluster can run. Whether a cluster has quorum to function properly is determined by the majority of the voting elements in the active cluster membership.

As a general rule, the voting elements in the cluster should be an odd number when you configure a quorum. Therefore, if the cluster contains an even number of voting nodes, you should configure a disk witness or a file share witness. By adding a witness vote, the cluster continues to run if one of the cluster nodes fails or is disconnected. A disk witness often is recommended if all nodes can see the disk.

For this reference architecture, the virtualization host cluster uses the disk witness method for maintaining quorum. The cluster is assigned a small, logical drive that can be simultaneously accessed from all three members of the cluster. If one compute server fails, the remaining compute servers and the disk witness maintain majority and the remaining servers stay online.
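Configuring the disk witness could look like the following sketch, assuming the 5 GB quorum LUN surfaces as the cluster resource named "Cluster Disk 1" and the hypothetical cluster name from the earlier example.

```powershell
# Assign the small shared LUN as the disk witness and confirm the quorum configuration
Set-ClusterQuorum -Cluster "HVCLUS1" -DiskWitness "Cluster Disk 1"
Get-ClusterQuorum -Cluster "HVCLUS1"
```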

In addition to the quorum disk, the virtualization host cluster requires a larger logical drive for housing the VM system files and the VM virtual hard disk (VHDX) files.

Therefore, a storage pool (or array) with two logical volumes should be created initially: one small 5 GB logical volume for the cluster quorum disk and a larger logical volume for housing the VM VHDX files. These volumes should be made accessible to all three compute servers in the WSFC cluster. Figure 20 shows the logical volumes that are required for the virtualization host cluster.

Figure 20. Logical volumes required for the virtualization host cluster
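As a sketch of how the larger volume might be brought into the cluster as a Cluster Shared Volume (CSV), the following PowerShell is one possible approach; the cluster name and disk resource name are hypothetical.

# Hypothetical example: add the large VHDX volume to the cluster and convert it to a CSV.
Import-Module FailoverClusters
# "Cluster Disk 2" is assumed to be the larger logical volume for VM system files and VHDX files.
Add-ClusterSharedVolume -Cluster "HVCLUSTER01" -Name "Cluster Disk 2"
# The volume becomes available to every node under C:\ClusterStorage.
Get-ClusterSharedVolume -Cluster "HVCLUSTER01"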


6.5.3 Storage configuration for IBM Storwize V3700

Table 2 lists the recommended storage requirements in this reference architecture.

Important: The number of HDDs that is required by the storage pools for the Exchange mailbox database volumes is listed for a single VM. Each VM requires the volumes that are listed; therefore, the number of HDDs that is required for the entire solution is eight times the number shown.

Table 2. Storage requirements

Volumes for Exchange Mailbox Databases (each volume is required on each VM)

Volume Description          Volume Number and Size   Storage Pool HDDs   RAID Level   RAID space (3 TB HDDs)
Mailbox database and logs   6x 2,762 GB              16                  RAID 10      22,251 GB
Restore volume              1x 1,993 GB              1                   RAID 0       2,793 GB

Shared volumes for WSFC clusters (volumes are accessible by the compute servers at a site)

Volume Description                Volume Sizes                    Storage Pool Drives   RAID Level   RAID space (3 TB HDDs)
CSV and quorum for WSFC cluster   5,500 GB (CSV); 5 GB (quorum)   4                     RAID 10      5,587 GB

If 3 TB HDDs are used, this reference architecture requires 17 HDDs per Exchange server VM for the mailbox databases, logs, and restore volume. The WSFC cluster at each data center requires four HDDs for the cluster shared volume and quorum volume. Therefore, a total of 72 drives are required at each data center (144 drives for the entire solution).

Note: If the Storwize V3700 is used as the storage platform, the disk requirements increase slightly because of the way storage pools are handled. With the V3700, 80 HDDs are required per data center: 68 HDDs for the mailbox database and logs storage pool, 6 for the restore volume storage pool, and 6 for the CSV and quorum storage pool.

6.6 Networking

This section describes the networking topology and includes design guidance to correctly configure the network environment for redundancy and failover when Lenovo Flex System or Lenovo System x components are used.

This reference architecture uses two ultra-low-latency, high-performance Flex System CN4093 10 Gb Converged Scalable Switches (or Lenovo RackSwitch G8124E 10 GbE network switches for System x servers) to provide primary data communication services. Lenovo recommends the use of the Flex System CN4022 2-port 10 Gb Converged Adapter for network connectivity on the compute servers (or the Emulex Virtual Fabric Adapter 5 (VFA5) network adapter for System x servers).

Note: The Flex System CN4022 2-port 10 Gb Converged Adapters are also compatible with the Flex System EN4023 10 Gb Scalable Switch.

For more information about configuring the network, see the accompanying Installation and Configuration Guide.

6.6.1 Key networking concepts and terminology

This section describes the following basic networking concepts and terminology that are used throughout the next sections:

Inter-Switch Link (ISL) – An ISL is a physical network connection from a physical network port on one switch to a physical network port on another switch that enables communication between the two switches. This reference architecture uses two physical connections between the two networking switches that are aggregated by using a trunk group.

Trunk group – A trunk group creates a virtual link between two switches that operates with aggregated throughput of the physical ports that are used. Most networking switches support two trunk group types: static and dynamic Link Aggregation Control Protocol (LACP). Lenovo’s recommendation (and the method that is used in this reference architecture) is to use dynamic trunk groups when available. Figure 21 shows a dynamic trunk group that aggregates two ports from each switch to form an ISL.

Figure 21. A dynamic trunk group aggregating two ISL connections between two switches

Link Aggregation Control Protocol (LACP) – LACP is an IEEE 802.3ad standard for grouping several physical ports into one logical port (known as a dynamic trunk group) with any device that supports the standard. The 802.3ad standard allows standard Ethernet links to form a single Layer 2 link that uses LACP. Link aggregation is a method of grouping physical link segments of the same media type and speed in full duplex and treating them as if they were part of a single, logical link segment. If a link in a LACP trunk group fails, traffic is reassigned dynamically to the remaining links of the dynamic trunk group.

Virtual Link Aggregation Group (vLAG) – A switch or server in the access layer can be connected to more than one switch in the aggregation layer to provide for network redundancy, as shown in Figure 22. Typically, Spanning Tree Protocol (STP) is used to prevent broadcast loops, which blocks redundant uplink paths. Therefore, there is the unwanted consequence of reducing the available bandwidth between the layers by as much as 50%. In addition, STP can be slow to resolve topology changes that occur during a link failure and can result in considerable MAC address flooding.


By using vLAGs, the redundant uplinks remain active and use all available bandwidth. To maintain maximum bandwidth over the multiple connections, vLAG is enabled on the LACP teams in this reference architecture.

Figure 22. STP blocking implicit loops

Virtual LAN (VLAN) – VLANs are a way to logically segment networks to increase network flexibility without changing the physical network topology. With network segmentation, each switch port connects to a segment that is a single broadcast domain. When a switch port is configured to be a member of a VLAN, it is added to a group of ports that belongs to one broadcast domain. Each VLAN is identified by a VLAN identifier (VID). A VID is a 12-bit portion of the VLAN tag in the frame header that identifies an explicit VLAN.

Tagged Port – A tagged port is a port that is configured as a member of a specific VLAN. When an untagged frame exits the switch through a tagged member port, the frame header is modified to include the 32-bit tag that is associated with the port VLAN ID (PVID). When a tagged frame exits the switch through a tagged member port, the frame header remains unchanged (original VID remains).

Untagged Port – An untagged port is a port that is not configured as a member of a specific VLAN. When an untagged frame exits the switch through an untagged member port, the frame header remains unchanged. When a tagged frame exits the switch through an untagged member port, the tag is stripped and the tagged frame is changed to an untagged frame.

6.6.2 VLANs

A combination of physical and virtual isolated networks is configured on the compute servers and the switches to satisfy isolation best practices.

VLANs are used to provide logical isolation between the various types of data traffic. Table 3 lists the VLANs that are required to support the WSFC cluster and Exchange Server 2013 workload as described in this reference architecture.


Table 3. VLAN definitions

VLAN 70 – Management Network: A network that is used for host management, storage management, and out-of-band communication to IMM devices.

VLAN 60 – Cluster Private Network (virtualization host servers): A network reserved for cluster private (heartbeat and cluster shared volume) communication between clustered virtualization host servers. There should be no IP routing or default gateways for cluster private networks.

VLAN 50 – Live Migration Network: A network to support VM Live Migration for the Hyper-V cluster. There should be no routing on the Live Migration network.

VLAN 40 – Cluster Public (Corporate Network) / MAPI Network: A network reserved for connecting to the domain controller and the corporate network. MAPI traffic uses this network.

VLAN 30 – Exchange (Replication Network): A network for Exchange database replication.

Note: The VLANs that are described in this section share the bandwidth of the single NIC team. Therefore, QoS is applied from Windows to ensure that each VLAN has adequate bandwidth available.

The required VLANs are listed in Table 3.
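If manual DAG network configuration is wanted, the following Exchange Management Shell sketch shows one way the replication network (VLAN 30) might be registered with the DAG; the DAG name, network names, and subnet are hypothetical examples only and are not taken from this reference architecture.

# Hypothetical example: define a dedicated replication network for the DAG.
# DAG name, network names, and subnets are placeholders.
Set-DatabaseAvailabilityGroup -Identity "DAG1" -ManualDagNetworkConfiguration $true
New-DatabaseAvailabilityGroupNetwork -DatabaseAvailabilityGroup "DAG1" -Name "ReplicationNetwork" -Subnets "192.168.30.0/24" -ReplicationEnabled:$true
# Keep the MAPI (VLAN 40) network for client traffic, but exclude it from replication.
Set-DatabaseAvailabilityGroupNetwork -Identity "DAG1\MapiNetwork" -ReplicationEnabled:$false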

6.6.3 NIC teaming and virtual network adapter configuration

All data traffic flows over a single NIC team that uses the 10 Gb network adapter in each of the compute servers. The NIC team is created by using the Windows Server 2012 R2 in-box NIC teaming feature, which provides fault tolerance and load balancing for the networks.

After NIC teaming is configured, several virtual network adapters are created and assigned to appropriate VLANs.
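A minimal PowerShell sketch of the teaming step is shown below, assuming two 10 Gb physical adapters named "NIC1" and "NIC2"; the adapter, team, and switch names are hypothetical. The LACP teaming mode matches the LACP/vLAG design described later in this section.

# Hypothetical example: create the NIC team and the Hyper-V virtual switch on a compute server.
# Adapter, team, and switch names are placeholders.
New-NetLbfoTeam -Name "Team10Gb" -TeamMembers "NIC1","NIC2" -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic -Confirm:$false
# Bind a converged virtual switch to the team and enable weight-based minimum bandwidth (used for QoS).
New-VMSwitch -Name "ConvergedSwitch" -NetAdapterName "Team10Gb" -MinimumBandwidthMode Weight -AllowManagementOS $false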

Virtualization host cluster network adapter configuration

Four virtual network adapters are created from within Windows on each compute server and assigned VLANs by using Windows PowerShell. These adapters are used for management (VLAN 70), cluster private traffic (VLAN 60), VM Live Migration (VLAN 50), and accessing the corporate intranet (VLAN 40). In addition to the virtual network adapters that are used by the virtualization host cluster compute servers, each VM that is hosted by the compute servers is assigned two virtual network adapters from within Hyper-V. The first network adapter is assigned VLAN 40 and is used to connect to the organization’s intranet and domain controllers; Exchange MAPI traffic also uses this network. The second network adapter is assigned VLAN 30 and is used for Exchange database replication.

Figure 23 shows the network adapters and their assigned VLANs.

Figure 23. Virtual network adapter configuration for the Hyper-V cluster servers
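As an illustration of the approach (not the exact script that was used for this reference architecture), the following PowerShell sketch creates one of the host virtual network adapters, tags it with its VLAN, and applies a minimum bandwidth weight for QoS; the adapter name, VLAN ID, and weight are examples.

# Hypothetical example: create the management virtual adapter on the converged switch,
# tag it with VLAN 70, and assign a minimum bandwidth weight for QoS.
Add-VMNetworkAdapter -ManagementOS -Name "Management" -SwitchName "ConvergedSwitch"
Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName "Management" -Access -VlanId 70
Set-VMNetworkAdapter -ManagementOS -Name "Management" -MinimumBandwidthWeight 10
# Repeat for the cluster private (VLAN 60), Live Migration (VLAN 50), and corporate (VLAN 40) adapters.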

6.6.4 LACP and vLAG configuration

To maintain VLAN information when multiple network switches are interconnected, an ISL is required. However, because a single ISL is limited to the 10 Gbps bandwidth of a single connection and is not redundant, two ISL connections are recommended. Create two ISL connections by physically cabling switch 1 to switch 2 with two 10 GbE networking cables; for example, two external ports of switch 1 are cabled to two external ports of switch 2.


LACP is used to combine the two physical ISL connections into a single virtual link, called a trunk group. LACP teams provide for higher bandwidth connections and redundancy between LACP team members.

Important: Lenovo recommends enabling network health checking on networking switches that are configured as vLAG peers. Although the operational status of the vLAG peer is typically determined via the ISL connection, enabling network health checking provides an alternative means to check peer status if the ISL link fails. As a best practice, use an independent link between the two switches (for example, use the 1 Gb management port).

LACP teams are formed on the ISLs between the switches and on the host connections to the switches, which provides for host connection redundancy. To maintain maximum bandwidth over the multiple connections, vLAGs are also configured on the LACP teams.

Note: Disabling Spanning Tree on the LACP teams helps avoid the wasted bandwidth that is associated with links blocked by Spanning Tree.

The vLAG/LACP configuration that is used for this reference architecture is shown in Figure 24.

Figure 24. LACP/vLAG design

6.6.5 Network load balancer

For performance and resilience reasons, load balancing user connections to the CAS role on the Exchange server VMs is required. Microsoft recommends the use of a layer 4 network load balancer, such as the Network Load Balancing feature in Windows Server 2012.

In Exchange Server 2013, the network load balancer no longer needs to be configured for session persistence (affinity) across CAS servers. For a specific protocol session, CAS now maintains a 1:1 relationship with the Mailbox server that hosts the user’s data. If the active database copy is moved to a different Mailbox server, CAS closes the sessions to the previous server and establishes sessions to the new server.
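If the Windows Network Load Balancing feature is chosen, the following PowerShell sketch illustrates the general steps; the interface names, cluster name, node name, and virtual IP address are hypothetical, and production deployments should follow Microsoft’s load-balancing guidance for Exchange 2013.

# Hypothetical example: create a Windows NLB cluster for client access traffic.
# Interface names, cluster name, node names, and IP addresses are placeholders.
Import-Module NetworkLoadBalancingClusters
New-NlbCluster -InterfaceName "CorpNet" -ClusterName "ex-cas" -ClusterPrimaryIP 10.10.40.100 -OperationMode Multicast
Add-NlbClusterNode -NewNodeName "EXCH02" -NewNodeInterface "CorpNet"
# Load balance HTTPS (Outlook Web App, Outlook Anywhere, EWS, ActiveSync) without client affinity.
Add-NlbClusterPortRule -StartPort 443 -EndPort 443 -Protocol Tcp -Affinity None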


For more information about network load balancing and configuration, see this website: blogs.technet.com/b/exchange/archive/2014/03/05/load-balancing-in-exchange-2013.aspx

6.7 Networking for shared storage

This section describes the storage topology and limited sizing guidance to help configure the storage environment when Lenovo Flex System or Lenovo System x components are used.

For more information about configuring the SAN switches, zoning, enabling multi-pathing, and configuring the Storwize V3700, see the accompanying Installation and Configuration Guide.

Each compute server has one Flex System FC3052 2-port 8Gb FC HBA mezzanine card (or an Emulex 8Gb FC Dual-port HBA for System x servers) that is used for connecting to the SAN. Each compute server maintains one 8 Gb connection to each of the two Flex System FC5022 16Gb SAN Switches (or Brocade 300 or Brocade 6505 FC SAN Switches for System x servers) that are installed in the rear of the Flex System Enterprise Chassis.

Figure 25 shows the FC connections between the compute servers, FC switches, and the storage controllers.

Figure 25. SAN cabling

6.7.1 Key storage concepts and terminology

This section describes the following basic concepts and terminology that are used throughout the next sections:

World Wide Name (WWN) – A WWN is a 64-bit identifier for devices or ports. All devices with multiple ports include a WWN for each port, which provides more granular management. Because of their length, WWNs are expressed as hexadecimal numbers, similar to MAC addresses on network adapters.


Zoning – Zoning allows a single server to be isolated to a group of storage devices or to a single storage device, or associates a group of servers with one or more storage devices, as might be needed in a server cluster deployment. Zoning is implemented at the hardware level (by using the capabilities of FC switches) and can be done on a port basis (hardware zoning) or on a WWN basis (software zoning). Zoning is configured on a per-target and per-initiator basis.

Cluster Shared Volume (CSV) – CSV is a feature supported by WSFC. A CSV is a logical drive concurrently visible to all cluster nodes and allows for simultaneous access from each cluster node.

6.7.2 SAN multi-pathing

The IBM Subsystem Device Driver Device Specific Module (SDDDSM) multi-path driver with the Windows Server 2012 R2 Multipath I/O (MPIO) feature is used to provide fault tolerance and dynamic path management between compute servers and storage. If one or more hardware components fail and cause a path to fail, the multi-pathing logic chooses an alternative path for I/O so that applications can still access their data. SDDDSM uses a load-balancing policy to equalize the load across all preferred paths. No user intervention is required other than the typical new device discovery on a Windows operating system.

For more information about the SDDDSM, see this website: pic.dhe.ibm.com/infocenter/svc/ic/index.jsp?topic=%2Fcom.ibm.storage.svc.console.doc%2Fsvc_w2kmpio_21oxvp.html
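As a minimal sketch, the Windows MPIO feature that SDDDSM relies on might be enabled as shown below before the SDDDSM package is installed; this is an illustration, not the vendor's installation procedure.

# Hypothetical example: enable the Windows Multipath I/O feature on each compute server
# before installing the IBM SDDDSM package. A restart is typically required.
Install-WindowsFeature -Name Multipath-IO -Restart
# After installation, list the MPIO-claimed disks and their load-balancing policy.
mpclaim.exe -s -d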

6.7.3 Storage zoning

When a WSFC cluster that is designed to host VMs is created, Lenovo recommends the use of WWN-based zoning (software zoning) rather than port-based zoning (hardware zoning) on the FC switches. In port-based zoning, a port is placed into a zone and anything that connects to that port is included in the zone (or zones). In WWN-based zoning, zones are defined by using the WWNs of the connected interfaces. WWN-based zoning also allows a virtual SAN to be defined at the virtualization layer, which uses the same physical HBA ports as the host while maintaining isolation between the host and the VMs’ data traffic. For this reference architecture, software zoning is used for all SAN connections.

Lenovo recommends the use of single-initiator zoning for path isolation. In single-initiator zoning, zones are created based on a single initiator; therefore, each zone contains a single HBA port WWN (the initiator). Multiple storage array WWNs can be added to the zone without violating the single-initiator rule (the storage arrays are the targets).
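A small PowerShell sketch for collecting the HBA port WWNs that are needed to define the single-initiator zones is shown below; it is an illustration only, and the zones themselves are created on the FC switches.

# Hypothetical example: list the FC HBA port WWNs (initiators) on a compute server.
# These WWPNs are then used when the single-initiator zones are defined on the FC switches.
Get-InitiatorPort | Where-Object ConnectionType -eq "Fibre Channel" |
    Select-Object NodeAddress, PortAddress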


6.8 Deployment example with Lenovo Flex System

Figure 26 shows the example Exchange Server 2013 environment (as described in this reference architecture if Lenovo Flex System is used) that is deployed in a 25U rack in both data centers.

The rack contains the Lenovo Flex System Enterprise Chassis, two Flex System FC5022 16Gb SAN Switches (installed in the rear of the chassis), two Flex System CN4093 10 Gb Converged Scalable Switches (installed in the rear of the chassis), three x240 M5 Compute Nodes for the virtualization host cluster, one IBM Storwize V3700 dual controller, and six IBM Storwize V3700 expansion enclosures.

Figure 26. Deployment example: Hardware for each data center (Lenovo Flex System)


6.9 Deployment example with Lenovo System x

Figure 27 shows the example Exchange Server 2013 environment (as described in this reference architecture if Lenovo System x is used) that is deployed in a 25U rack in both data centers.

The rack contains a network load balancer, two Brocade 6505 SAN Switches, two Lenovo RackSwitch G8124E Network Switches, three System x3550 M5 compute servers for the virtualization host cluster, one IBM Storwize V3700 dual controller, and six IBM Storwize V3700 expansion enclosures.

Figure 27. Deployment example: Hardware for each data center (Lenovo System x)


7 Deployment considerations

Successful deployment and operation of an Exchange Server 2013 enterprise solution depends largely on a set of test-proven planning and deployment techniques. Proper planning includes sizing the required compute resources (CPU and memory), storage (capacity and IOPS), and networking (bandwidth and VLAN assignment) that are needed to support the infrastructure. This information can then be implemented by using industry-standard best practices to achieve optimal performance and the growth headroom that is necessary for the life of the solution.

For more information about configuration best practices and implementation guidelines that aid in planning and configuring the solution, see the accompanying Installation and Configuration Guide.

7.1 Considerations for virtualizing Exchange

Consider the following points regarding virtualizing Exchange:

• All Exchange 2013 server roles are supported in a VM.

• Exchange server VMs (including Exchange server VMs that are part of a database availability group, or DAG) can be combined with host-based failover clustering and migration technology, provided that the VMs are configured such that they do not save and restore state on disk when they are moved or taken offline. All failover activity that occurs at the hypervisor level must result in a cold boot when the VM is activated on the target node. All planned migrations must result in either a shutdown and cold boot, or an online migration that uses a technology such as Hyper-V Live Migration. Hypervisor migration of VMs is supported by the hypervisor vendor; therefore, you must ensure that your hypervisor vendor tested and supports migration of Exchange server VMs. Microsoft supports Hyper-V Live Migration of these VMs.

Therefore, host-based failover cluster migrations (such as Hyper-V Quick Migration) are supported only if the virtual Exchange DAG server is restarted immediately after the quick migration completes.

• Only management software (for example, antivirus software, backup software, or VM management software) can be deployed on the physical host machine. No other server-based applications (for example, Exchange, SQL Server, Active Directory, or SAP) should be installed on the host machine. The host machine should be dedicated to running guest VMs.

• Some hypervisors include features for taking snapshots of VMs. VM snapshots capture the state of a VM while it is running. This feature enables you to take multiple snapshots of a VM and then revert the VM to any of the previous states by applying a snapshot to the VM. However, VM snapshots are not application aware, and the use of snapshots can have unintended and unexpected consequences for a server application that maintains state data, such as Exchange. Therefore, making VM snapshots of an Exchange guest VM is not supported.

• Disable Hyper-threading.

• Many hardware virtualization products allow you to specify the number of virtual processors that should be allocated to each guest VM. The virtual processors in the guest VM share a fixed number of logical processors in the physical system. Exchange supports a virtual processor-to-logical processor ratio no greater than 2:1, although a ratio of 1:1 is recommended. For example, a dual-processor system that uses quad-core processors contains a total of eight logical processors in the host system. On a system with this configuration, do not allocate more than a total of 16 virtual processors to all guest VMs combined.

When you are calculating the total number of virtual processors that are required by the host machine, you must also account for I/O and operating system requirements. In most cases, the equivalent number of virtual processors that is required in the host operating system for a system hosting Exchange server VMs is two. This value should be used as a baseline for the host operating system virtual processors when you are calculating the overall ratio of physical cores to virtual processors. If performance monitoring of the host operating system indicates that more processor utilization is consumed than the equivalent of two processors, the number of virtual processors that is assigned to guest VMs should be reduced accordingly. Also, verify that the overall virtual processor-to-physical core ratio is no greater than 2:1 (a quick way to check this ratio is shown in the sketch after this list).

• The operating system for an Exchange guest machine must use a disk that has a size equal to at least 15 GB plus the size of the virtual memory that is allocated to the guest machine. This requirement is necessary to account for the operating system and paging file disk requirements. For example, if the guest machine is allocated 16 GB of memory, the minimum disk space that is needed for the guest operating system disk is 31 GB.

• Guest VMs might be prevented from communicating directly with FC or SCSI HBAs that are installed in the host machine. In this event, configure the adapters in the host machine's operating system and present the logical unit numbers (LUNs) to guest VMs as virtual disks or pass-through disks. Testing found that virtual hard disks perform better than pass-through disks in host-based clustering environments.
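The following PowerShell sketch is one way to verify the virtual processor-to-logical processor ratio that is discussed in the list above on a Hyper-V host; it is an illustrative check, not part of the reference configuration.

# Hypothetical example: compare the total vCPUs assigned to guest VMs with the
# host's logical processors and report the resulting ratio (should be no greater than 2:1).
$logicalProcessors = (Get-VMHost).LogicalProcessorCount
$assignedVcpus = (Get-VM | Measure-Object -Property ProcessorCount -Sum).Sum
$ratio = $assignedVcpus / $logicalProcessors
"Assigned vCPUs: $assignedVcpus, Logical processors: $logicalProcessors, Ratio: $ratio"
if ($ratio -gt 2) { Write-Warning "Virtual processor-to-logical processor ratio exceeds 2:1." }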


8 Appendix: Lenovo Bill of Materials

This appendix contains the Bills of Materials (BOMs) for different configurations of hardware for Exchange Server 2013 virtualized deployments. There are sections for compute servers, storage, networking switches, and chassis that are orderable from Lenovo.

The BOM lists in this appendix are not meant to be exhaustive and must always be verified with the configuration tools. Any description of pricing, support, and maintenance options is outside the scope of this document.

The connector cables for connections between Top of Rack (TOR) switches and devices (servers, storage, and chassis) are configured with the device. The TOR switch configuration includes only the transceivers or other cabling that is needed for failover or redundancy.

8.1 BOM for compute servers

Table 4 and Table 5 list the Bill of Materials for enterprise compute servers, as described in 6.4.2 “Compute server sizing” on page 24.

Table 4. Flex System x240

Code Description Quantity

9532AC1   Flex System node x240 M5 Base Model                                  1
A5TH      Intel Xeon Processor E5-2697 v3 14C 2.6GHz 35MB 2133MHz 145W         1
A5T0      Intel Xeon Processor E5-2699 v3 18C 2.3GHz 45MB 2133MHz 145W         1
A5RM      Flex System x240 M5 Compute Node                                     1
A5SG      Flex System x240 M5 2.5" HDD Backplane                               1
A5RP      Flex System CN4052 2-port 10Gb Virtual Fabric Adapter                1
A2N5      Flex System FC3052 2-port 8Gb FC Adapter                             1
A5RV      Flex System CN4052 Virtual Fabric Adapter SW Upgrade (FoD)           1
ASD9      16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM     24
A4TR      300GB 15K 6Gbps SAS 2.5" G3HS HDD                                    2


Table 5. System x3550 M5

Code Description Quantity

5463AC1   Lenovo System x3550 M5                                               1
ASCQ      Intel Xeon Processor E5-2697 v3 14C 2.6GHz 35MB 2133MHz 145W         1
ASD1      Addl Intel Xeon Processor E5-2697 v3 14C 2.6GHz 35MB 2133MHz 145W    1
A58Y      System x3550 M5 10x 2.5" Base Chassis                                1
A59V      System x3550 M5 Planar                                               1
A5AG      System x3550 M5 PCIe Riser 1 (1x LP x16 CPU0)                        1
A5B0      System x 900W High Efficiency Platinum AC Power Supply               2
6400      2.8m, 13A/125-10A/250V, C13 to IEC 320-C14 Rack Power Cable          2
A1ML      Integrated Management Module Advanced Upgrade                        1
A5AF      System x3550 M5 PCIe Riser 2, 1-2 CPU (LP x16 CPU1 + LP x16 CPU0)    1
A47F      Super Cap Cable 925mm for ServeRAID M5200 Series Flash               1
A5A0      System x3550 M5 10x 2.5" HS HDD Kit                                  1
A3YZ      ServeRAID M5210 SAS/SATA Controller                                  1
A3Z2      ServeRAID M5200 Series 2GB Flash/RAID 5 Upgrade                      1
A4TR      300GB 15K 6Gbps SAS 2.5" G3HS HDD                                    2
A5B7      16GB TruDDR4 Memory (2Rx4, 1.2V) PC4-17000 CL15 2133MHz LP RDIMM     24
A5UT      Emulex VFA5 2x10 GbE SFP+ PCIe Adapter                               1
3581      Emulex 8Gb FC Dual-port HBA                                          1
9297      2U Bracket for Emulex 10GbE Virtual Fabric Adapter                   1
4048      2U Bracket for Emulex 8Gb FC Dual-port HBA for System x              1
A1PH      1m Passive DAC SFP+ Cable                                            2
3704      5m LC-LC Fiber Cable                                                 2


8.2 BOM for Flex System Enterprise Chassis

Table 6 lists the BOM for the Lenovo Flex System Enterprise Chassis.

Table 6. Flex System Enterprise Chassis

Code Description Quantity

8721HC1   Lenovo Flex System Enterprise Chassis Base Model                     1
A0TA      Flex System Enterprise Chassis                                       1
A0UC      Flex System Enterprise Chassis 2500W Power Module Standard           2
6252      2m, 16A/100-240V, C19 to IEC 320-C20 Rack Power Cable                6
A0UD      Flex System Enterprise Chassis 2500W Power Module                    4
A0UA      Flex System Enterprise Chassis 80mm Fan Module                       4
A0UE      Flex System Chassis Management Module                                1
3761      3m Black Cat5e Cable                                                 1
A1NF      Flex System Console Breakout Cable                                   1
2300      BladeCenter Chassis Configuration                                    1

Select network connectivity for 10 GbE and 8 Gb FC:

A3DP      Lenovo Flex System FC5022 16Gb SAN Switch                            2
5084      Brocade 8Gb SFP+ SW Optical Transceiver                              4
ASR8      5m LC-LC Fiber Cable                                                 4
A3HL      Flex System Fabric CN4093 Converged Scalable Switch (Upgrade 1)      2
A3HH      Lenovo Flex System Fabric CN4093 10Gb Scalable Switch                2
A1PH      1m Passive DAC SFP+ Cable                                            5
A1PJ      3m IBM Passive DAC SFP+ Cable                                        4


8.3 BOM for networking

Table 7 and Table 8 list the BOMs for the network switches, as described in 6.6 “Networking” on page 27 and 6.7 “Networking for shared storage” on page 33.

Table 7. Lenovo RackSwitch G8124E

Code Description Quantity

7159BR6   Lenovo System Networking RackSwitch G8124E (Rear to Front)          1
90Y9430   3m IBM Passive DAC SFP+ Cable                                        2
00D6185   Adjustable 19" 4 Post Rail Kit                                       1

Table 8. Brocade 6505 SAN Switch

Code Description Quantity

3873AR2   Brocade 6505 FC SAN Switch                                           1
00MY807   Brocade 6505 Redundant Power Supply                                  1
88Y6416   Brocade 8Gb SFP+ Optical Transceiver                                 12

8.4 BOM for shared storage

Table 9 and Table 10 list the BOMs for the shared storage, as described in 6.5 “Shared storage” on page 24.

Table 9. Storwize V3700 control enclosure

Code Description Quantity

609912C   IBM Storwize V3700 Disk 3.5-inch Storage Controller Unit             1
ACKB      3 TB 7,200 rpm 6 Gb SAS NL 3.5 Inch HDD                              12
ACHB      4GB to 8GB Cache Upgrade                                             2
ACHK      8Gb FC 4 Port Host Interface Card                                    2
ACHS      8Gb FC SW SFP Transceivers (Pair)                                    2
ACSK      5m Fiber Cable (LC)                                                  4

Table 10. Storwize V3700 expansion enclosure

Code Description Quantity

609912E   Storwize V3700 Disk Expansion Enclosure                              1
ACKB      3 TB 7,200 rpm 6 Gb SAS NL 3.5 Inch HDD                              12
ACTB      1.5m SAS Cable (mini-SAS HD to mini-SAS HD)                          2


Trademarks and special notices

© Copyright Lenovo 2015.

References in this document to Lenovo products or services do not imply that Lenovo intends to make them available in every country.

Lenovo, the Lenovo logo, ThinkCentre, ThinkVision, ThinkVantage, ThinkPlus and Rescue and Recovery are trademarks of Lenovo. IBM, the IBM logo, and ibm.com are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both. Intel, Intel Inside (logos), MMX, and Pentium are trademarks of Intel Corporation in the United States, other countries, or both. Other company, product, or service names may be trademarks or service marks of others.

Information is provided "AS IS" without warranty of any kind. All customer examples that are described are presented as illustrations of how those customers have used Lenovo products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer.

Information concerning non-Lenovo products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by Lenovo. Sources for non-Lenovo list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide homepages. Lenovo has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-Lenovo products. Questions on the capability of non-Lenovo products should be addressed to the supplier of those products.

All statements regarding Lenovo future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. Contact your local Lenovo office or Lenovo authorized reseller for the full text of the specific Statement of Direction.

Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of performance, function or delivery schedules with respect to any future products. Such commitments are only made in Lenovo product announcements. The information is presented here to communicate Lenovo’s current investment and development activities as a good faith effort to help with our customers' future planning.

Performance is based on measurements and projections using standard Lenovo benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon considerations, such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance improvements equivalent to the ratios stated here.

Photographs shown are of engineering prototypes. Changes may be incorporated in production models.

Any references in this information to non-Lenovo websites are provided for convenience only and do not in any manner serve as an endorsement of those websites. The materials at those websites are not part of the materials for this Lenovo product and use of those websites is at your own risk.