
White Paper

Cisco UCS-Mini Solution with StorMagic SvSAN with VMware vSphere 5.5


Introduction

Executive Summary

Business Objectives

Virtualization in today’s computing infrastructure has led to a distinct and prominent need for storage that is accessible to multiple servers simultaneously. Shared storage enables capabilities such as vMotion®, High Availability, and replication, which provide stability and application availability. In large data centers this need is typically satisfied with large-scale SAN devices. These solutions frequently leave smaller deployment models, such as Remote Office/Branch Office (ROBO), unaddressed, because the SAN network is costly to extend and suffers from the added latency.

This paper describes an integrated infrastructure solution that provides shared storage at the remote data center, using capacity attached directly to the compute layer, to create a self-contained application delivery platform. StorMagic SvSAN combined with Cisco UCS Mini delivers compute, networking, and storage in even the most remote location. In this proven and validated architecture, compute and storage for hundreds of virtual desktops, as well as the necessary infrastructure VMs, can be reliably deployed.

Target Audience

This white paper is written for solution architects and IT designers who are responsible for application support at remote office facilities, and for any technical staff member who wants a cost-effective, easy-to-maintain infrastructure that does not require a dedicated SAN for shared storage.

Solution Overview

Solution Architecture

Solution Primary Components

Cisco UCS Mini—an Edge Scale Solution

Cisco UCS, originally designed for the data center, is now optimized for branch and remote offices, point-of-sale,

and smaller IT environments with Cisco UCS Mini. UCS Mini is for customers who need fewer servers but still want

the robust management capabilities provided by UCS Manager. This solution delivers servers, storage, and 10

Gigabit networking in an easy-to-deploy, compact form factor. Cisco UCS Mini provides a total computing solution

with the proven management simplicity of the award-winning Cisco UCS Manager.

Cisco UCS 6324 Fabric Interconnect

The Cisco UCS 6324 Fabric Interconnect extends the Cisco UCS architecture into environments that require

smaller domains. Providing the same unified server and networking capabilities as the top-of-rack Cisco UCS 6200

Series Fabric Interconnects, the Cisco UCS 6324 Fabric Interconnect embeds the connectivity within the Cisco

UCS 5108 Blade Server Chassis to provide a smaller domain of up to 15 servers: 8 blade servers and up to 7

direct-connect rack servers. Cisco UCS C-Series servers connect to UCS Mini through the SFP+ and QSFP+ ports built into the UCS 6324 Fabric Interconnect, allowing the entire system to scale to up to 15 servers.


Cisco UCS B200 M4 Blade Server

Delivering enterprise-class performance in a compact form factor, the Cisco UCS B200 M4 Blade Server can

quickly deploy stateless physical and virtual workloads. The server addresses a broad set of workloads, including

IT and web infrastructure and distributed database, enterprise resource planning (ERP), and customer relationship

management (CRM) applications. The Cisco UCS B200 M4 is built on the Intel® Xeon® processor E5-2600 v3 product family, with up to 768 GB of memory, up to two hot-pluggable drives, and up to 80-Gbps total bandwidth.

It offers exceptional levels of performance, flexibility, and I/O throughput to run the most demanding applications.

Cisco UCS C240 M4 Rack Server

The Cisco UCS C240 M4 Rack Server is an enterprise-class server designed to deliver exceptional performance,

expandability, and efficiency for storage and I/O-intensive infrastructure workloads. Such workloads include big

data analytics, virtualization, and graphics-rich and bare-metal applications. For larger workloads, the 2RU Cisco

UCS C240 M4 Rack Server supports up to two Intel Xeon processor E5-2600 v3 CPUs, 768 GB of RAM, 12 LFF or

24 SFF drives, and the Cisco UCS VIC 1227 mLOM 10-Gbps adapter for integration with Cisco UCS Mini.

StorMagic SvSAN 5.2

SvSAN is a software solution that enables enterprises to eliminate downtime of business-critical applications at the edge, where such disruption directly equates to a loss of service and revenue. SvSAN ensures high availability through a virtual storage appliance (VSA), so that business-critical edge applications remain operational. StorMagic’s typical customer has anywhere from 10 to 10,000 edge sites where local IT support is not available but application uptime is a must.

SvSAN provides an intuitive, standardized management interface that allows multiple SvSAN VSAs, spread across

remote sites, to be managed and provisioned quickly and simply, either locally or remotely from a central location.

SvSAN’s efficient and flexible architecture and its modular approach enable it to meet the ever-changing and

increasingly demanding storage requirements within any organization.

SvSAN’s unique benefits include:

● Abstraction of storage services away from traditional storage arrays, making it a key component of a

software-defined storage strategy

● Elimination of the need for a physical Storage Area Network (SAN)


● Virtualization of internal disk drives and external, direct-attached storage arrays (DAS) to enable storage

sharing among multiple servers

● High availability in a simple two-server solution

● Reduced remote-site IT costs by up to 40%, through the elimination of physical SANs and lower spending on servers and software

● Maximized application uptime

● Centralized storage management of entire multi-site infrastructure

● Rapid, scripted deployments and updates of multiple sites simultaneously with automated provisioning

● Optimal flexibility, as SvSAN is hardware and hypervisor agnostic and can scale as storage requirements

grow

● Fast resynchronization through restore capability, enabling users to replace a failed server with a new one

and automatically rebuild their environment

VMware vSphere 5.5 U2

VMware vSphere is a virtualization platform for holistically managing large collections of infrastructure resources —

CPUs, storage, networking — as a seamless, versatile, and dynamic operating environment. Unlike traditional

operating systems that manage an individual machine, VMware vSphere aggregates the infrastructure of an entire

data center to create a single powerhouse with resources that can be allocated quickly and dynamically to any

application in need. The VMware vSphere environment delivers a robust application environment. For example,

with VMware vSphere, all applications can be protected from downtime with VMware High Availability (HA) without

the complexity of conventional clustering. In addition, applications can be scaled dynamically to meet changing

loads with capabilities such as Hot Add and VMware Distributed Resource Scheduler (DRS). For more information,

refer to: http://www.vmware.com/products/datacenter-virtualization/vsphere/overview.html

Solution Detail

Basic Architecture

This architecture is self-contained with respect to storage. Rather than using remote storage over networks that may have high latency and low reliability, this UCS Mini based design connects the storage and compute nodes directly to the same fabric switch: the Cisco UCS 6324 Fabric Interconnect. This allows 10-Gbps connectivity between virtual machines and their virtual disks, increasing performance and responsiveness.

The Cisco UCS C240 M4 servers in this design are used as storage nodes only. This prevents application-delivery virtual machines from consuming resources that could affect data access times and, in turn, the performance of other virtual machines. Because the C240 M4 systems have slightly different processors than the B200 M4 compute nodes, confining the application-delivery workloads to one specific processor type also removes the need for Enhanced vMotion Compatibility. Virtual machines can make full use of the available processing power without being restricted to the lowest common CPU feature set.


The figure below shows a connectivity diagram for the system.

Component Configuration

Table 1. Physical Connectivity

Physical networking configuration:

Device         Port     Device        Port   Bandwidth
C240 M4 - A    MLOM-A   UCS-Mini-A    3      10 Gbps
C240 M4 - B    MLOM-A   UCS-Mini-A    4      10 Gbps
C240 M4 - A    MLOM-B   UCS-Mini-B    3      10 Gbps
C240 M4 - B    MLOM-B   UCS-Mini-B    4      10 Gbps
UCS-Mini-A     1        Top of Rack   Any    1 Gbps or 10 Gbps
UCS-Mini-B     1        Top of Rack   Any    1 Gbps or 10 Gbps
UCS-Mini-A *   2        Top of Rack   *      1 Gbps or 10 Gbps
UCS-Mini-B *   2        Top of Rack   *      1 Gbps or 10 Gbps

* Connect these ports when using a Top of Rack Virtual Port Channel.

Table 2. Common Configuration

Service profiles for the storage and compute nodes should include four vNICs: two for management/vMotion, and one each for the iSCSI fabric paths. Storage nodes also require the Mirroring VLAN, which carries the traffic SvSAN uses to create redundant LUNs. At least five different VLANs should be created; see the table below.

Service profile networking configuration:

vNIC      VLANs
vNIC-A    Management, vMotion, Mirroring
vNIC-B    Management, vMotion, Mirroring
iSCSI-A   iSCSI-A VLAN
iSCSI-B   iSCSI-B VLAN
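The VLANs above can be created in UCS Manager by hand or programmatically. Below is a minimal sketch using the Cisco UCS Python SDK (ucsmsdk); the UCS Manager hostname, credentials, and VLAN IDs are illustrative placeholders, not values mandated by this design.

```python
# Minimal sketch: create the five VLANs in UCS Manager with the Cisco UCS Python SDK.
# Hostname, credentials, and VLAN IDs are illustrative placeholders.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.fabric.FabricVlan import FabricVlan

handle = UcsHandle("ucs-mini.example.com", "admin", "password")
handle.login()

# VLAN name -> VLAN ID (example values only)
vlans = {
    "Management": "100",
    "vMotion":    "101",
    "Mirroring":  "102",
    "iSCSI-A":    "201",
    "iSCSI-B":    "202",
}

for name, vlan_id in vlans.items():
    # Global VLANs are defined under the LAN cloud ("fabric/lan")
    handle.add_mo(FabricVlan(parent_mo_or_dn="fabric/lan", name=name, id=vlan_id),
                  modify_present=True)

handle.commit()
handle.logout()
```

The same handle can then be used to create the vNICs in each service profile and assign these VLANs to them.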


Storage Nodes

Disks

The disks on each C240 M4 should be configured in two different sets.

The first set of two disks should be configured in RAID 1. The resulting virtual disk will contain the VMware ESXi

operating system and the StorMagic SvSAN VSA. RAID 1 will provide protection in case of disk failure.

The second disk set should comprise the remaining disks, configured as RAID 6 to create one large virtual disk. The UCS RAID controller in RAID 6 mode provides caching with battery backup, for speed and reliability.

Table 3. RAID Configurations

Virtual Disk     RAID level (disks)        Contents
Virtual Disk 0   RAID 1 (disks 1 and 2)    ESXi and SvSAN VM
Virtual Disk 1   RAID 6 (disks 3 - 24)     StorMagic RDM
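As a quick sanity check of the layout in Table 3, the usable capacity of each virtual disk can be estimated from its RAID level. The sketch below assumes 24 small-form-factor drives of 1.2 TB each, which is an illustrative assumption rather than a requirement of this design.

```python
# Quick usable-capacity estimate for the two virtual disks in Table 3.
# The 1.2 TB drive size and the 24-disk SFF layout are example assumptions only.
DISK_TB = 1.2

def raid1_usable(size_tb):
    """RAID 1 mirrors two disks, so usable capacity equals one disk."""
    return size_tb

def raid6_usable(disk_count, size_tb):
    """RAID 6 consumes two disks' worth of capacity for parity."""
    return (disk_count - 2) * size_tb

print("Virtual Disk 0 (RAID 1, disks 1-2): %.1f TB" % raid1_usable(DISK_TB))
print("Virtual Disk 1 (RAID 6, disks 3-24): %.1f TB" % raid6_usable(22, DISK_TB))
```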

StorMagic Configuration

Each StorMagic SvSAN VSA will reside on the RAID1 disk of the storage node.

Neutral Storage Host Service

The Neutral Storage Host (NSH) service is used in this architecture and is assumed to reside in a centralized data center, not on the UCS Mini or the storage servers. NSH provides a "third party" for managing failover when networking errors cause both storage hosts to be active without mirroring available, which can lead to data inconsistencies once the storage systems are reconnected. Each storage node is connected to both the A and B fabrics of the UCS Mini system. This removes any single point of failure in the network connectivity, removing the need for third-party arbitration.

Caching

The use of SSD for caching is allowed but not required for this configuration, and SSD caching is not documented

or used in the configuration herein described. Refer to the StorMagic documentation for more information on using

SSD for caching. http://www.stormagic.com/manual/SvSAN_5-2/en/index.htm

Networking

iSCSI is a TCP/IP implementation of the SCSI storage protocol. Because SCSI has no native method for multi-pathing (in case a path from initiator to target is lost) or load balancing, these highly desirable features are typically implemented in the host OS, and VMware vSphere 5.5 is no exception. vSphere provides a software-based iSCSI initiator to perform multi-pathing and load balancing between the initiator and the targets. On routed (Layer 3) TCP/IP networks, routers can change the path between two TCP/IP endpoints based on routing algorithms and network traffic.

The iSCSI initiator has no awareness of TCP/IP routing protocols. As it attempts to load balance iSCSI traffic, routers and their protocols change the latency and traffic patterns on the network, causing the host OS's load-balancing efforts to misjudge the load and fail to balance it. As a result, it is a best practice to restrict iSCSI traffic to Layer 2 networks, and this practice is implemented in this design.

Aside from routers, the host OS can also attempt to load balance through the use of multiple host NICs connected to LACP ports on upstream switches. VMware vSphere 5.5 can and does load balance across multiple NICs by default if two or more such NICs are provided to a virtual switch. This can potentially thwart the iSCSI initiator's load-balancing efforts. As a result, this design provides only one NIC for each iSCSI path, to eliminate possible network load-balancing interference.

It is also a best practice to use two iSCSI network controllers to provide paths to any given target. This provides

reliability through redundancy for the path from initiator to target. In the event of a path failure, host OS multi-

pathing software will restrict iSCSI traffic to the working path to avoid possible data loss. Two paths are provided

for iSCSI traffic, with each path restricted to a VLAN available only on one fabric (A or B) of the UCS-Mini Fabric

Interconnect.

Standard virtual switches are used in this configuration to minimize the configuration detail of the architecture. Standard virtual switches provide adequate networking without complication. Separating the iSCSI NICs onto different standard virtual switches also ensures that each iSCSI packet is pinned to a single MAC address, further streamlining any load-balancing algorithms on upstream systems.

SvSAN also uses a Mirroring network to replicate between LUNs and their mirrors. This network can share the management network; however, this design separates the two networks with VLANs. The Mirroring network needs redundancy and failover to ensure reliability.

Storage Nodes do not require specific iSCSI NICs to be created in the service profile. Simple connectivity to the

iSCSI network is all that is required.

Service Profile networking for the storage node is shown below.


HA/DRS

The Storage nodes provide redundancy for the rest of the system. As such, they should NOT be placed in clusters

with HA or DRS enabled. They do not share storage, so neither HA nor DRS is possible in this configuration.


Compute Node Configuration

Networking

As described above, the Compute nodes have networking requirements similar to those of the Storage nodes. Exceptions are noted below.

Compute nodes boot from iSCSI. As such, they require iSCSI NICs to be created in the Service Profile for this purpose. More information about the specific network requirements for iSCSI boot is provided below in the Boot Configuration section.

Compute nodes are also different in that access to the Mirroring network VLAN is neither required nor desired. The

Compute node should have access to the Management, vMotion, iSCSI-A, and iSCSI-B VLANs.

iSCSI networking in vSphere 5.5 requires the use of the VMware Software iSCSI Adapter. This adapter provides

load balancing and multi-pathing functionality. This adapter will be automatically installed by vSphere upon

detection of an iSCSI boot NIC. Further configuration management of this adapter is performed by the SvSAN software when common targets are created. As such, StorMagic SvSAN relieves the server administrator of the burden of setting and adjusting paths for iSCSI targets.

Boot Configuration

Compute nodes are booted from iSCSI LUNs created by SvSAN, though Cisco FlexFlash or local disk can be

used. The iSCSI boot solution is more complex to configure, but preferable to local boot as it provides

statelessness for the blades. With statelessness, blades can be shut down, replaced, and powered on without the need

for reconfiguration or restoration from backup, significantly reducing downtime in the event of a failure.

To boot a service profile from iSCSI, a number of steps must be completed in order.

Step 1. Create the boot target

The StorMagic SvSAN plugin for vCenter creates LUNs for ESXi servers that are registered in vCenter Server. This proves problematic when booting from iSCSI: the server is not yet installed, and is therefore not yet registered in vCenter. Creating the boot target therefore requires the Appliance Management page, which can be found at https://<<SvSAN_ip_address>>


Selecting Targets in the top menu allows for the creation of a target for each ESXi server. Since these targets are

specific to each service profile, the associated ACL for each target requires the Initiator name of the service profile

that will boot from it. In a UCS service profile, iSCSI initiator names can be applied at the NIC level or the service

profile level. This solution uses the service profile level initiator, which provides simplicity and multi-pathing via

VMware ESXi.

Each initiator name is taken from a common pool created in advance. Because the pool has a well-known and common naming schema, editing the target ACL is simplified.

The target must be created from the management page of one of the VSAs.
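Because the initiator names follow the pool's naming schema, the ACL entries can be generated rather than typed by hand. The sketch below is purely illustrative; the IQN prefix and suffix pattern are hypothetical and should be replaced with the IQN pool actually defined in UCS Manager.

```python
# Illustrative only: derive initiator names from a common IQN pool schema so the
# target ACL entries can be generated predictably. The prefix and suffix pattern
# below are hypothetical; use the IQN pool actually defined in UCS Manager.
IQN_PREFIX = "iqn.2015-06.com.example"   # assumed pool prefix
POOL_SUFFIX = "ucs-mini:compute"         # assumed pool suffix pattern

def initiator_name(index):
    """Return the initiator IQN for the Nth service profile drawn from the pool."""
    return "{}:{}-{}".format(IQN_PREFIX, POOL_SUFFIX, index)

# ACL entries for the first four compute-node boot targets
for i in range(1, 5):
    print(initiator_name(i))
```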


Step 2. Add the boot target to the service profile

Each service profile must have its boot target added individually, with both paths available, on each iSCSI NIC.

This ensures a boot path even in the event of the loss of one fabric. The target initiator must be added to the

boot target of each iSCSI NIC.


Step 3. Install ESXi

Now that the target has been created, boot the server to the ESXi installation disk. The Cisco custom ESXi

installation disk is recommended. This can be downloaded from the VMware website. This solution leverages the

vMedia profiles to attach an ISO from an NFS server as the service profile DVD.


Once ESXi is installed, it will henceforth boot from the iSCSI target.

More information about booting Cisco UCS service profiles from iSCSI can be found here:

http://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-6100-series-fabric-interconnects/whitepaper_c11-702584.html

More information about stateless configuration and its benefits with Cisco UCS service profiles can be found here:

http://www.cisco.com/c/en/us/products/collateral/servers-unified-computing/ucs-manager/white_paper_c11-590518.html

To ensure two paths to the iSCSI LUNs, adjustments must be made to the ESXi networking. The second iSCSI NIC must be added to a new standard virtual switch, and a new VMkernel NIC must be created using the IP address of the fabric B iSCSI NIC. This address can be found in the iSCSI boot parameters of the service profile.

Finally, a VMkernel NIC must be created on the vMotion VLAN and set to perform vMotion.
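The following is a hedged sketch of those adjustments using pyVmomi (the VMware vSphere Python SDK): it adds a standard vSwitch carrying only the fabric B iSCSI uplink, a port group for it, and a VMkernel NIC with the fabric B iSCSI address. The host name, credentials, vmnic name, port group name, and IP addressing are placeholders; the vMotion VMkernel NIC can be created the same way on its own port group.

```python
# Hedged sketch (pyVmomi): add a standard vSwitch with only the fabric B iSCSI uplink,
# a port group for it, and a VMkernel NIC using the fabric B iSCSI address taken from
# the service profile's iSCSI boot parameters. All names and addresses are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi-compute-1.example.com", user="root",
                  pwd="password", sslContext=ctx)
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
netsys = host.configManager.networkSystem

# New standard vSwitch with a single uplink (no NIC teaming on the iSCSI path)
vss_spec = vim.host.VirtualSwitch.Specification(
    numPorts=128,
    bridge=vim.host.VirtualSwitch.BondBridge(nicDevice=["vmnic3"]))
netsys.AddVirtualSwitch(vswitchName="vSwitch-iSCSI-B", spec=vss_spec)

# Port group for the second iSCSI path; VLAN tagging is handled by the UCS vNIC,
# so the port group is left untagged (vlanId=0)
pg_spec = vim.host.PortGroup.Specification(
    name="iSCSI-B", vlanId=0, vswitchName="vSwitch-iSCSI-B",
    policy=vim.host.NetworkPolicy())
netsys.AddPortGroup(portgrp=pg_spec)

# VMkernel NIC with the fabric B iSCSI IP address from the boot parameters
vnic_spec = vim.host.VirtualNic.Specification(
    ip=vim.host.IpConfig(dhcp=False, ipAddress="192.168.202.11",
                         subnetMask="255.255.255.0"))
netsys.AddVirtualNic(portgroup="iSCSI-B", nic=vnic_spec)

Disconnect(si)
```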


Datastores can be created to fit the unique environment. In this design, one 2-TB LUN is created and then mirrored for the storage of virtual machine files. Refer to the VMware vSphere documentation for limits:

https://www.vmware.com/pdf/vsphere5/r55/vsphere-55-configuration-maximums.pdf

Test and Validation

Post-Install Checklist

● Confirm that the VSAs are visible and manageable within the vCenter plugin.

● Check the network speed between the VSAs (see the StorMagic documentation for instructions).

● Confirm the mirror target synchronization status.

● Confirm the session status to the iSCSI target(s).
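The last two checklist items can also be verified programmatically. The pyVmomi sketch below rescans the host's HBAs and prints each LUN with the state of its paths, so that one working path per fabric can be confirmed for every mirrored SvSAN target. Connection details are placeholders.

```python
# Hedged sketch (pyVmomi): rescan HBAs and print each LUN with the state of its paths,
# to confirm a working path through each fabric for every mirrored SvSAN target.
# Connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect

ctx = ssl._create_unverified_context()
si = SmartConnect(host="esxi-compute-1.example.com", user="root",
                  pwd="password", sslContext=ctx)
host = si.content.rootFolder.childEntity[0].hostFolder.childEntity[0].host[0]
storage = host.configManager.storageSystem

storage.RescanAllHba()
names = {lun.key: lun.canonicalName for lun in storage.storageDeviceInfo.scsiLun}
for lun in storage.storageDeviceInfo.multipathInfo.lun:
    print(names.get(lun.lun, lun.lun))
    for path in lun.path:
        # Expect a path through each fabric (A and B) for every SvSAN target
        print("   {} {}".format(path.name, path.pathState))

Disconnect(si)
```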


Failure Testing

Test Case 7.1.1: Power Cycle SvSAN VSA (approximately 3 minutes)

Procedure: From the vSphere Client/Hyper-V Manager or the VSA Web GUI, restart the SvSAN VSA.

Expected result:

● iSCSI path failover to the surviving VSA node IP with no loss of service to iSCSI LUNs or guest virtual machines; the surviving VSA marks storage unsynchronized

● Events triggered and forwarded (if configured)

Test Case 7.1.2: Power Cycle SvSAN VSA (approximately 3 minutes)

Procedure: From the VSA Web GUI or plugin, confirm VSA restart and mirror target resynchronization.

Expected result:

● VSA restarted and mirror targets complete a quick synchronization automatically

● Events triggered and forwarded (if configured)

Test Case 7.2.1: Force Power Cycle a VSA Hypervisor Host (approximately 10 minutes)

Procedure: Hard reset a hypervisor host.

Expected result:

● Guest VMs running on surviving hosts continue to run uninterrupted

● Guest VMs on the failed host are restarted by HA on a surviving host

● The surviving VSA marks storage unsynchronized

● Events triggered and forwarded (if configured)

Test Case 7.2.1 (continued): Force Power Cycle a VSA Hypervisor Host (approximately 10 minutes)

Procedure:

● From vCenter or Hyper-V Manager, confirm host restart

● From the VSA Web GUI or plugin, confirm VSA restart and mirror target resynchronization

Expected result:

● VSA restarted and mirror targets complete a quick synchronization automatically

● Events triggered and forwarded (if configured)

● Guest VMs manually or automatically migrated onto the restarted hypervisor host to spread workloads

Test Case 7.3: Environment power failure (approximately 20 minutes)

Procedure:

● Fail both hosts simultaneously to simulate an environment power outage

● Restart both hosts

Expected result:

● After host boot, the VSAs start automatically due to HA on the hosts

● The VSAs negotiate mirror synchronization status and synchronize the target

● The VSAs trigger HBA rescans on both hosts

● Guest VMs then become available and start automatically

Test Case 7.4: Host 1 failure, Host 2 failure, Host 1 start (approximately 20 minutes)

Procedure:

● Power off host 1

● Power off host 2

● Power on host 1

Expected result:

● After host boot, the VSA starts automatically due to HA on the hosts

● The VSA keeps storage offline due to an ‘out of date’ dataset

Test Case 7.5: Power on host 2 (approximately 10 minutes)

Procedure:

● Power on host 2

Expected result:

● After host boot, the VSA starts automatically due to HA on the hosts

● The VSAs negotiate the mirror leader

● Mirror targets resynchronize automatically

● HBA rescans are triggered and the datastores come online

● Guest VMs then become available and start automatically
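Tests such as 7.2.1 can be driven from UCS Manager rather than by pulling power. Below is a hedged sketch using the Cisco UCS Python SDK (ucsmsdk) that hard-resets the blade associated with a service profile; the hostname, credentials, and service profile DN are placeholders.

```python
# Hedged sketch (ucsmsdk): hard-reset the blade behind a service profile to drive
# test 7.2.1 from UCS Manager. Hostname, credentials, and the service profile DN
# are placeholders.
from ucsmsdk.ucshandle import UcsHandle
from ucsmsdk.mometa.ls.LsPower import LsPower

handle = UcsHandle("ucs-mini.example.com", "admin", "password")
handle.login()

# Setting the power state on the service profile resets the associated blade
sp_dn = "org-root/ls-compute-1"
handle.add_mo(LsPower(parent_mo_or_dn=sp_dn, state="hard-reset-immediate"),
              modify_present=True)
handle.commit()

handle.logout()
```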

References

For information on Cisco UCS Mini, refer to:

http://www.cisco.com/c/en/us/products/servers-unified-computing/ucs-mini/index.html

For information on Cisco UCS Manager, refer to:

http://www.cisco.com/c/en/us/products/servers-unified-computing/index.html

For documentation pertaining to VMware vSphere 5.5 U2, refer to:

https://www.vmware.com/support/pubs/vsphere-esxi-vcenter-server-pubs.html

For documentation pertaining to StorMagic SvSAN, refer to:

http://www.stormagic.com/manual/SvSAN_5-2/en/index.htm

Printed in USA C11-734391-01 06/15