vStart 50 VMware® vSphere™ Solution

Design Guide

Release 1.5 for 12th Generation Servers

Dell Virtualization Solutions Engineering

Revision: A00

October 2012

THIS WHITE PAPER IS FOR INFORMATIONAL PURPOSES ONLY, AND MAY CONTAIN TYPOGRAPHICAL ERRORS AND TECHNICAL INACCURACIES. THE CONTENT IS PROVIDED AS IS, WITHOUT EXPRESS OR IMPLIED WARRANTIES OF ANY KIND.

© 2012 Dell Inc. All rights reserved. Reproduction of this material in any manner whatsoever without the express written permission of Dell Inc. is strictly forbidden. For more information, contact Dell.

Dell, the DELL logo, PowerConnect, PowerEdge, EqualLogic, and OpenManage are trademarks of Dell Inc. Microsoft and Windows Server are either trademarks or registered trademarks of Microsoft Corporation in the United States and/or other countries. Intel and Xeon are registered trademarks of Intel Corporation. VMware, ESXi, vSphere, and vCenter are registered trademarks or trademarks (the "Marks") of VMware, Inc. in the United States and/or other jurisdictions. Other trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Dell Inc. disclaims any proprietary interest in trademarks and trade names other than its own.

October 2012

Table of Contents

1 Introduction
2 Audience
3 Overview
   3.1 Component Roles
   3.2 Prerequisites and Datacenter Planning
4 Architecture
   4.1 Network Architecture Overview
   4.2 LAN Architecture
   4.3 SAN Architecture
   4.4 Management Architecture
5 Power and Cooling Configuration
6 Scalability
7 References
8 Appendix A – IP & VLAN Planning Worksheet
9 Appendix B – IP & VLAN Sample Worksheet

1 Introduction

The vStart solution is a virtualization infrastructure solution that has been designed and validated by Dell Engineering. It is delivered racked, cabled, and ready to be integrated into your datacenter. The vStart 50 VMware vSphere™ configuration includes Dell™ PowerEdge™ R620 servers running VMware® ESXi™, Dell EqualLogic™ PS6100X iSCSI storage, Dell PowerConnect™ switches, and a Dell PowerEdge R420 server that manages the solution by hosting VMware vCenter™ Server and Dell management tools. The configuration also includes the Dell Management Plug-in and the Dell EqualLogic Host Integration Tools for VMware Edition (HIT/VE) Plug-in for VMware vCenter Server. VMware ESXi versions 4.1 Update 2 and 5.0 are both supported. The solution can be configured with either PowerConnect 7024 or PowerConnect 6224 switches.

The following documents are provided to describe the various aspects of the solution:

• vStart 50 VMware vSphere Solution Overview – Provides a solution overview, including the various components and how the solution is delivered.
• vStart 50 VMware vSphere Solution Specification – Provides a detailed specification of the various components included in the solution.
• vStart 50 VMware vSphere Solution Design Guide – (this document) Provides a detailed architectural solution design.

For Dell Sales and Services teams, the latest versions of these documents are available at the internal vStart Web Portal. Customers can contact their Dell Sales teams to get the latest versions.

2 Audience

IT administrators and managers who have purchased or plan to purchase a vStart can use this document to understand the solution architecture. For those who have purchased the solution, the detailed cabling diagrams and networking details can be utilized during troubleshooting and maintenance. It is assumed that the reader has a basic understanding of VMware vSphere, EqualLogic, and network architecture.

3 Overview

The solution discussed in this white paper is powered by Dell PowerEdge servers, Dell EqualLogic iSCSI storage, Dell PowerConnect networking, and VMware vSphere. The solution implements Dell and VMware best practices and utilizes the latest Dell-developed vSphere integration offerings that provide management enhancements. The Dell Management Plug-in and the EqualLogic HIT/VE Plug-in for vCenter are included in the solution, providing integration of Dell hardware management and storage configuration from within vCenter.

EqualLogic SAN HeadQuarters (SAN HQ) and Group Manager are included in the solution for storage array monitoring and management. The solution also includes the rack, power distribution units (PDUs), an optional KMM (Keyboard, Mouse, and Monitor), a management server, and an optional uninterruptible power supply (UPS).

vStart 50 includes two PowerEdge R620 servers, one EqualLogic PS6100 array, and four PowerConnect switches. Storage expansion options are offered in this release; more details are provided in the sections below. Figure 1 provides a high-level overview of the components utilized in the configuration.

Figure 1. vStart 50 Component Overview
(24U rack elevation: SAN PC7024 switches, LAN PC7024 switches, optional KMM, PE R420 management server, optional Dell 3750W UPS, PowerEdge R620 hypervisor cluster, EqualLogic PS6100 array, optional second EqualLogic PS6100, equipment shelf, and RPS/MPS units.)

3.1 Component Roles

ESXi Cluster – PowerEdge R620: Each PowerEdge R620 is configured with two 8-core Intel® Xeon® 2.2 GHz processors and 64GB of memory. The PowerEdge R620 has a quad-port 1Gb rack Network Daughter Card (rNDC), and an additional quad-port 1Gb card has been added to each server, providing a total of eight 1Gb ports. Four of these ports are utilized for LAN traffic and the remaining four for SAN traffic. In addition, each PowerEdge R620 is configured with an internal redundant SD card which contains ESXi. The servers are also configured with the EqualLogic Multipathing Extension Module (MEM), which enhances the existing VMware multipathing functionality by providing automatic iSCSI connection management and load balancing across multiple active paths.

Management Server – PowerEdge R420: The PowerEdge R420 is configured with one 4-core Intel Xeon 2.2GHz processor and 8GB of memory. The PowerEdge R420 has two onboard 1Gb ports and an add-in dual-port 1Gb NIC, which together provide two ports for LAN traffic and two ports for SAN traffic. The PowerEdge R420 runs Microsoft Windows Server 2008 R2 SP1 Standard Edition to host the management applications for the devices in the solution. The primary applications include VMware vCenter Server and EqualLogic SAN HQ. In addition, management and configuration of the EqualLogic array and PowerConnect switches can be performed through a web-based interface or serial connection from the R420. The PowerEdge R620 and PowerEdge R420 servers are all configured with an iDRAC out-of-band management device that supports direct management of the systems through a web-based interface. SAN HQ provides consolidated performance and event monitoring for the iSCSI environment along with historical trend analysis for capacity planning and troubleshooting.

In the vStart 50, the Management Server is optional. This choice provides customers with the flexibility to manage the vStart 50 from a VM running on the vStart 50 cluster, from an existing physical server that runs vCenter Server and meets the vStart Solution requirements, or from a VM that meets the requirements and runs in a separate virtualized environment that can communicate with the vStart 50 Solution.

Dell Management Plug-in for VMware vCenter: This management plug-in for vCenter is delivered as a virtual appliance. The virtual appliance is hosted on the ESXi cluster and provides advanced integration with the Dell servers, allowing virtual and physical management directly within the vCenter console. By providing inventory, monitoring, alerting, BIOS and firmware updates, and access to online warranty information, it eliminates the need to use multiple consoles for day-to-day management of the environment.

EqualLogic Host Integration Tools for VMware Edition (HIT/VE): HIT/VE is a plug-in to VMware vSphere that provides a suite of storage tools delivered as a virtual appliance. These tools provide additional functions and tasks that can be applied to VMware objects, and expose EqualLogic's storage efficiency, performance, and data protection capabilities. The EqualLogic HIT/VE provides the following:

• VMware Object Management in the vSphere GUI
• Auto-Snapshot Manager/VMware Edition (ASM/VE)
• Virtual Desktop Infrastructure Management using the Virtual Desktop Deployment Utility (requires additional software)
• Datastore Manager

iSCSI Storage Array – EqualLogic PS6100X: The EqualLogic PS6100X in the solution has 24 300GB 10K SAS drives. It is configured with two storage controllers for redundancy. Each storage controller has four 1Gb network interfaces and a 100Mb interface dedicated to management. In addition, offloading of storage operations from the ESXi hosts to the EqualLogic array(s) is enabled through the VMware vStorage APIs for Array Integration (VAAI). This hypervisor and storage integration enables offloading a subset of operations to the array(s), which in turn reduces storage traffic and enhances performance.

Local Area Network (LAN) and Storage Area Network (SAN) Switches – PowerConnect 7024 or 6224: Four PowerConnect 7024 or 6224 switches are utilized in the vStart 50 Solution. Each switch supports 24 x 1Gb connections and has two expansion bays that support either 10Gb Ethernet modules or stacking modules. Stacking modules provide the ability to aggregate multiple switches into a single logical switch, which is then managed as a single entity. The vStart 50 Solution dedicates two switches to LAN traffic and two to SAN traffic.

LAN and SAN traffic are segregated to ensure minimal latency for iSCSI traffic. In addition, this design decision was made to allow for integration into environments that may have already implemented separate fabrics for LAN and SAN traffic. If the existing environment has a unified fabric (LAN and SAN on a single fabric), then the LAN and SAN switches provided can be uplinked into this unified environment.

Uninterruptible Power Supply (UPS): The UPS provides backup power in the event of a power failure. The Dell UPS model will vary based on the local power requirements of the datacenter, and the UPS is optional for the vStart 50 solution.

Power Distribution Unit (PDU): As the name suggests, PDUs distribute power from the main power source to the individual components within the 24U rack. Dell PDUs utilize a combination of worldwide-standard IEC power outlet connections with regionalized input options, allowing flexibility across a variety of global power infrastructures. The appropriate PDU model will vary based on the local power requirements of the datacenter.

Rack – PowerEdge 2420: A single 24U rack is required to support the configuration. Blanking panels are included to ensure optimal airflow.

Keyboard, Monitor, Mouse (KMM): An optional 1U KMM console (touchpad, keyboard, and 18.5-inch LED display) is cabled to the management server, providing the ability to walk up to the rack and manage the entire solution. The KMM is optional for the vStart 50 solution.

3.2 Prerequisites and Datacenter Planning

To support the configuration, the following components are required to be present in the customer environment:

• Database to support VMware vCenter Server
   o For a list of supported databases, refer to the VMware Product Interoperability Matrix.
   o Beginning with vSphere 5.0, vCenter Server is available as a virtual machine appliance via the VMware vCenter Server Appliance, in which the contents are contained in a pre-configured virtual machine. Refer to the VMware Product Interoperability Matrix for a list of supported databases, and refer to the KB article for minimum requirements.
   o If vCenter Server is installed on the Management Server, then a route must exist between the Management Server (PowerEdge R420) and the Database Server.
   o If vCenter Server is installed in a virtual machine or as a VMware vCenter Server Appliance, then a route must exist between the virtual machine/appliance and the Database Server.
   o The database is assumed to have maintenance and backup configured as per the business needs of the customer.
   o For small deployments, the vCenter Microsoft SQL Server® Express database can be used. This option is not available for the VMware vCenter Server Appliance.
• Domain Name Server (DNS) – DNS must be available on the management network.
• Network Time Protocol (NTP) Server – NTP must be available on the management network.
• Sufficient power to support a vStart 50 must be present. Detailed power, weight, and cooling requirements for the datacenter are defined in the vStart 50 Solution Specifications document.
• Switch Connectivity – The network architecture supports uplinks into the existing switches in the datacenter. The uplink recommendations are discussed in Section 4, Network Architecture.

• Support for the following VMware vCenter Plug-ins:
   o Dell Management Plug-in for VMware vCenter – Routing: a route must exist between the out-of-band management and management networks. Refer to Section 4.2 for more networking details and requirements.
   o EqualLogic HIT/VE – The appliance must have access to the management network to be able to communicate with the storage arrays, the ESXi hosts, and the vCenter Server.

The addition of servers, switches, and iSCSI storage arrays to an existing or new datacenter requires planning for IP addresses and VLANs. Appendices A and B provide examples to help support this planning effort. Before planning can begin, it is important to understand the vStart solution architecture, power and cooling attributes, and scalability characteristics. The remainder of this document covers those subjects. Please contact your Dell Sales and Services representatives for more information about planning and prerequisites.

4 Architecture

The network, SAN, and management architectures are discussed in this section.

4.1 Network Architecture Overview

vSphere network traffic in this solution is comprised of four distinct types: Virtual Machine (VM), Management, vMotion, and iSCSI. In addition, support for out-of-band management is included. Two separate networks are created to support the different traffic types:

• LAN – This network supports management, VM, vMotion, and out-of-band management traffic. In addition, uplinks to the core infrastructure provide connectivity to the solution support services (DNS, NTP, and the database for vCenter Server).
• SAN – This network supports iSCSI data and iSCSI management traffic. Uplinks are supported to connect into an existing iSCSI network; however, these uplinks are not required for full solution functionality.

The figure below depicts the logical LAN and SAN network architecture. Detailed diagrams are available throughout the remainder of the document.

Figure 2. Network Topology (Logical View)
(Logical diagram: the ESXi cluster (PowerEdge R620) and the management server (PowerEdge R420, running vCenter and SAN HQ) connect to both the stacked LAN PowerConnect 7024 switches and the stacked SAN PowerConnect 7024 switches. The LAN uplinks to the core network and the solution support services (AD/DNS, NTP, and an optional database for the management server), while the SAN connects the EqualLogic PS6100 iSCSI storage and can uplink to an existing iSCSI network. Legend: stacking link, 1Gb LAN, 1Gb SAN, out-of-band management, 10Gb ISL.)

The table below summarizes the use of each traffic type.

Table 1. Traffic Type Summary

Traffic Type – Use
Management – Supports virtualization management traffic and communication between the ESXi servers in the cluster, and configuration and monitoring of the storage arrays and LAN switches.
vMotion – Supports migration of VMs between ESXi servers in the cluster.
VM – Supports communication between the VMs hosted on the ESXi cluster and external systems.
Out-of-Band Management (OOB) – Supports configuration and monitoring of the servers through the iDRAC management interface.
iSCSI Data – Supports iSCSI traffic between the servers and storage array(s). In addition, the SAN switches can be managed on this network.

Throughout the remainder of the networking sections, it is important to understand the correlation between each component's location in the rack and its label. Figure 3 below displays the labels for the vStart 50. For clarity, the full device descriptions and option details are not noted in the figure. Please refer to Figure 1 for more information about device descriptions and options.

Figure 3. Component Labels
(The Figure 1 rack elevation with each component labeled: SAN PC7024, LAN PC7024, KMM, PE R420 Management, Dell 3750W UPS (optional), PowerEdge R620 hypervisor cluster, EqualLogic PS6100, EqualLogic PS6100 (optional), equipment shelf, RPS/MPS.)

4.2 LAN Architecture

The LAN includes two PowerConnect 7024 or 6224 switches which support the VM, Management, vMotion, and OOB traffic. These traffic types are logically separated through the use of VLANs. The two switches are stacked together, which forms a single logical switch and provides 48Gb of stack bandwidth between two PC6224 switches, or 64Gb of stack bandwidth between two PC7024 switches. The solution provides four 1Gb uplinks from each switch to link into an existing core network infrastructure. If the core network infrastructure supports 10Gb Ethernet, then 10Gb uplink modules may be added to each switch; however, this option is beyond the scope of this document.

4.2.1 Traffic Isolation using VLANs

The traffic on the LAN is separated into four VLANs: one VLAN each for Management, vMotion, VM, and Out-of-Band Management (OOB) traffic. VLAN tagging for the OOB traffic is performed by the PowerConnect switches; for the other traffic types, the tagging is performed in the virtual switch by the ESXi hypervisor. Table 2 below provides VLAN and subnet examples. Routing between VLANs is dependent on the specific customer requirements and is not included in this document. Consult with your Dell Sales and Services representatives if you have questions about routing or require assistance implementing it in your environment. If desired, the PowerConnect switches can be configured to provide the routing function.

Table 2. VLAN / Subnet Examples

Traffic Type – Sample VLAN – Sample Subnet
OOB – 10 – 192.168.10.X
Management – 20 – 192.168.20.X
vMotion – 30 – 192.168.30.X
VM – 100 – 192.168.100.X

Additional VLANs can be implemented for VM traffic, if required.

4.2.2 PowerConnect LAN Switch Configuration

Table 3 provides a port configuration overview for the two PowerConnect LAN switches. Trunk mode is used to allow multiple VLANs on a port, and access mode is used when the switch performs the VLAN tagging function. Figure 4 below shows the function that each switch port provides. Individual ports or blocks of contiguous ports are color coded and labeled per their respective functions. Ports are available to support future expansion as well.

Table 3. Sample VLAN Switch Assignment

Ports – Configuration Notes
ESXi Hosts – Configured in Trunk mode; allow VLANs 20, 30, and 100.
Management Server – Configured in Trunk mode; allow VLANs 10 and 20.
OOB – Configured in Access mode for VLAN 10.
Storage Array Management – Configured in Access mode for VLAN 20.
Uplink – Configured in Trunk mode; all uplink ports configured in a single Link Aggregation Group (LAG). Allow VLAN 100 for VM traffic; also allow VLANs 10 and 20 for core routing between VLANs 10 and 20; also allow VLANs 20 and 30 if the cluster will be expanded outside the rack.
Future Expansion – These ports are disabled to prevent any unauthorized access or misconfiguration.
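As an illustration of the port roles in Table 3, the following PowerConnect-style CLI fragment shows one possible way to configure an ESXi host-facing trunk port and an iDRAC-facing access port. This is a sketch, not the validated vStart configuration: interface naming and exact syntax differ between PowerConnect 7024 and 6224 firmware revisions, and the VLAN IDs are the samples from Table 2.

    vlan database
    vlan 10,20,30,100
    exit
    ! ESXi host LAN port: trunk carrying the Management, vMotion, and VM VLANs
    interface gigabitethernet 1/0/9
    switchport mode trunk
    switchport trunk allowed vlan add 20,30,100
    exit
    ! iDRAC port: access mode, so the switch applies the OOB VLAN tag
    interface gigabitethernet 1/0/5
    switchport mode access
    switchport access vlan 10
    exit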

Each PowerConnect switch is configured with a single stacking module that supports two stacking links. The two PowerConnect LAN switches are connected using both of the stacking links to provide redundancy. The stacked switches form a single logical switch where one of the switch modules acts as the master, and both switches are managed as a single entity by connecting to the master switch. Stacking also provides a high-speed data path between the switch modules. In addition, stacking the switches helps prevent any potential loops when the switches are uplinked to the customer's existing network infrastructure.

Figure 4. LAN Switch Port Usage
(Faceplate diagram of LAN Switch 1 and LAN Switch 2: port blocks are assigned to the management server, array management, iDRAC, ESXi hosts, and core uplinks, with the stacking modules connecting the two switches.)

4.2.3 PowerEdge R620 LAN Connectivity

Each PowerEdge R620 has eight 1Gb ports, of which four are dedicated to LAN traffic. In addition, the iDRAC7 OOB interface is connected to the LAN switches. Figure 5 shows the connectivity of Server 1 to the LAN switches.

Figure 5. Server 1 LAN Connectivity
(Diagram: the four LAN-dedicated 1Gb ports on the R620 are split between stacked LAN Switch 1 and LAN Switch 2, and the iDRAC port connects to LAN Switch 1. Legend: Mgmt, Out-of-Band Mgmt.)

The other PowerEdge R620 server follows the same connectivity pattern to the LAN switches, with the exception that each server uses a unique set of physical ports on the switches. Table 4 details the PowerEdge R620 server connectivity to the LAN switches.

Table 4. PowerEdge R620 LAN Connectivity

Server – rNDC Port 1 – rNDC Port 3 – NIC Port 1 – NIC Port 3 – iDRAC7
Server 1 – Switch 1 Port 9 – Switch 2 Port 9 – Switch 1 Port 10 – Switch 2 Port 10 – Switch 1 Port 5
Server 2 – Switch 1 Port 11 – Switch 2 Port 11 – Switch 1 Port 12 – Switch 2 Port 12 – Switch 2 Port 6

4.2.4 PowerEdge R620 Adapter Enumeration

ESXi enumerates the network adapters based upon PCI bus information. Each network adapter is assigned a label in ESXi of "vmnic" followed by a number; for example, the first adapter is labeled vmnic0. In the case of the R620 servers included in this solution, there are eight 1Gb network adapters, which ESXi labels vmnic0 through vmnic7. Figure 6 provides a mapping of the physical network ports to the vmnic labels. This information is necessary when configuring the virtual switches and for troubleshooting purposes.
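To confirm the enumeration on a given host, the vmnic-to-PCI mapping can be listed from the ESXi shell or the vSphere CLI. A minimal check on ESXi 5.0 follows (on ESXi 4.1, esxcfg-nics -l returns similar output; the PCI addresses shown in the comment are illustrative):

    esxcli network nic list
    # Name    PCI Device     Link  Speed  Description
    # vmnic0  0000:01:00.00  Up    1000   quad-port 1Gb rNDC, port 1
    # ...
    # vmnic7  0000:42:00.03  Up    1000   add-in quad-port 1Gb NIC, port 4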

Figure 6. PowerEdge R620 Adapter Enumeration
(Rear view of the R620: the rNDC ports map to vmnic0 through vmnic3, the add-in NIC ports map to vmnic4 through vmnic7, and the iDRAC port is separate.)

4.2.5 PowerEdge R620 LAN Virtual Switch Configuration

For each PowerEdge R620, a single ESXi virtual switch is created to support all LAN traffic. Unique port groups are then created for management and VM traffic. In addition, VMkernel interfaces are defined for management and vMotion traffic. Each port group and VMkernel interface is configured to tag with the appropriate VLAN. The Management port group is utilized by the Dell Management Plug-in and the EqualLogic HIT/VE Plug-in for VMware vCenter.

Figure 7 is a screenshot, taken in vCenter Server, of the LAN virtual switch on Server 1. Note the vmnics and how they correlate to the physical adapters shown in Figure 6. "Mgmt Network" is a Virtual Machine Port Group, which allows VMs to communicate on the management network, and "Mgmt" is a VMkernel port, which allows the ESXi kernel to communicate with the management network.

Figure 7. LAN vSwitch Configuration (vCenter Server screenshot)
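The same configuration can also be scripted. The following ESXi 5.0 esxcli sketch creates the LAN port groups described above; the vSwitch name, uplink assignments, and VLAN IDs are the samples used in this document and should be adjusted to the actual environment (vSwitch0 already exists on a default ESXi install):

    # Attach the four LAN uplinks (vmnic0, 2, 4, 6 per Figure 6 and Table 5)
    esxcli network vswitch standard uplink add --uplink-name=vmnic0 --vswitch-name=vSwitch0
    esxcli network vswitch standard uplink add --uplink-name=vmnic2 --vswitch-name=vSwitch0
    esxcli network vswitch standard uplink add --uplink-name=vmnic4 --vswitch-name=vSwitch0
    esxcli network vswitch standard uplink add --uplink-name=vmnic6 --vswitch-name=vSwitch0

    # Port groups tagged with the sample VLANs from Table 2
    esxcli network vswitch standard portgroup add --portgroup-name="Mgmt Network" --vswitch-name=vSwitch0
    esxcli network vswitch standard portgroup set --portgroup-name="Mgmt Network" --vlan-id=20
    esxcli network vswitch standard portgroup add --portgroup-name="VM Network" --vswitch-name=vSwitch0
    esxcli network vswitch standard portgroup set --portgroup-name="VM Network" --vlan-id=100
    esxcli network vswitch standard portgroup add --portgroup-name="vMotion" --vswitch-name=vSwitch0
    esxcli network vswitch standard portgroup set --portgroup-name="vMotion" --vlan-id=30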

4.2.6 Load Balancing and Failover

Load balancing enables sharing network traffic between the physical network ports in a team, thereby generating higher throughput. The VMware virtual switch has three options for configuring load balancing:

• Route based on the originating virtual switch port ID
• Route based on source MAC hash
• Route based on IP hash

The default load balancing configuration, route based on the originating virtual switch port ID, is recommended. This setting enables multiple VMs to use different physical adapter ports to transmit and receive traffic without requiring additional physical switch configuration.

The Management and vMotion networks each have only one port ID or MAC address. Hence, these networks will each use only one physical adapter port for communication unless there is a failover to another physical adapter port.

Table 5 notes the required configuration changes that should be explicitly made to the port group configuration.

Table 5. Port Group Configuration

VMkernel Port – Configuration Notes
Mgmt – vmnic0: active; vmnic2, vmnic4, vmnic6: standby; management traffic: enabled; failback: no
vMotion – vmnic4: active; vmnic0, vmnic2, vmnic6: standby; vMotion: enabled; failback: no
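The explicit active/standby ordering and failback setting from Table 5 can be applied per port group from the command line. A hedged ESXi 5.0 example for the Mgmt VMkernel port group (the port group name is the sample used above):

    # Active/standby uplink order and failback=no for the Mgmt port group
    esxcli network vswitch standard portgroup policy failover set \
        --portgroup-name="Mgmt" \
        --active-uplinks=vmnic0 \
        --standby-uplinks=vmnic2,vmnic4,vmnic6 \
        --failback=false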

While the configurations explored in the section above, along with Table 5, depict a robust and fault-tolerant ESXi network design, customers can implement an additional level of resiliency by implementing VMware HA. More information on setting up VMware HA can be found in VMware High Availability (HA): Deployment Best Practices for vSphere 4.1 and vSphere High Availability Deployment Best Practices for vSphere 5.0.

4.2.7 PowerEdge R420 LAN Connectivity

The PowerEdge R420 has four 1Gb ports, of which two are dedicated to LAN traffic. In addition, the iDRAC7 OOB interface is connected to the LAN switches. Figure 8 shows the management server connectivity to the LAN switches.

Figure 8. Management Server LAN Connectivity
(Diagram: the two LAN-dedicated 1Gb ports on the PowerEdge R420 are split between LAN Switch 1 and LAN Switch 2, and the iDRAC port connects to LAN Switch 1. Legend: Mgmt, Out-of-Band Mgmt.)

Table 6 details management server connectivity to the LAN switches.

Table 6. PowerEdge R420 LAN Connectivity

LOM Port 1 – NIC Port 2 – iDRAC7
Switch 1 Port 1 – Switch 2 Port 1 – Switch 1 Port 7

4.2.8 PowerEdge R420 Network Adapter Teaming Configuration

The two adapters dedicated to the LAN are configured into a Smart Load Balancing (SLB) and Failover team using the Broadcom Advanced Control Suite (BACS). This provides redundancy in the event that one of the paths or network adapters fails. There are several teaming types supported by Broadcom that provide this capability; the SLB and Failover teaming type was chosen based on its ease of configuration, and no switch configuration is required to support it.

Two virtual VLAN adapters are created using BACS that correspond to the Management and OOB networks. The Broadcom team performs the VLAN tagging on each of these virtual adapters. If the management server is running as a VM, then it does not need the BACS VLAN adapters, but rather virtual NICs that are associated with the appropriate vSwitches and VLANs.

Figure 9 displays the teamed configuration and the two virtual VLAN adapters. Note the two adapters classified as "Unassigned Adapters", which are used for iSCSI traffic.

Also note that while the LAN adapters are teamed for fault tolerance, the SAN adapters are not teamed because they use MPIO provided by the EqualLogic HIT for Windows. The SAN adapters connect into the separate SAN switches.

Figure 9. PowerEdge R420 Server Broadcom Team Configuration (BACS screenshot)

4.2.9 EqualLogic PS6100 LAN Connectivity

The EqualLogic PS6100 has a management port on each controller. Each storage array is connected to the management network, and a shared management IP address is assigned to the storage group. When managing the array, the Group Manager web interface is accessed at this shared management IP. Additionally, SAN HQ utilizes this interface for monitoring the storage group. Figure 10 shows the PS6100 connectivity to the LAN switches.

Figure 10. PS6100X Array 1 LAN Connectivity
(Diagram: the dedicated management port on each PS6100 control module is cabled to LAN Switch 1 and LAN Switch 2, respectively. Legend: 1Gb LAN.)

Table 7 details EqualLogic array connectivity to the LAN switches, while the array controller connectivity to the SAN switches is covered in Section 4.3.6. Table 7 and Figure 10 show the network connections that are used for array monitoring and management; Group Manager and SAN HQ utilize these interfaces.

Table 7. EqualLogic PS6100 LAN Connectivity

Array # – Controller # – Mgmt Port
Array 1 – Slot 0 – Switch 1 Port 3
Array 1 – Slot 1 – Switch 2 Port 3
Array 2 – Slot 0 – Switch 1 Port 4
Array 2 – Slot 1 – Switch 2 Port 4

4.3 SAN Architecture

The SAN includes two PowerConnect 7024 or 6224 switches which support iSCSI data and iSCSI management traffic. The two switches are connected together with modules installed in each switch. In addition, the solution supports up to eight 1Gb uplinks from each switch to link into an existing core iSCSI network infrastructure; these uplinks are optional. If required, 10Gb uplink modules may be added to each switch; however, these options are beyond the scope of this document.

4.3.1 PowerConnect Switch Configuration for SAN

Figure 11 shows how the ports are distributed for the ESXi hosts, management server, storage arrays, and uplinks. Additional ports are available for future expansion as well.

Figure 11. SAN Switch Port Configuration
(Faceplate diagram of SAN Switch 1 and SAN Switch 2: port blocks are assigned to the management server, storage, service, ESXi hosts, and core uplinks, with a 2x 10Gb LAG connecting the two switches.)

The two switches are connected using two 10Gb inter-switch links (ISLs), configured as a Link Aggregation Group (LAG) that provides a path for communication across the switches. In addition, the LAG supports traffic between EqualLogic arrays if more than one array is present in the configuration.

Spanning Tree Protocol (STP) PortFast is enabled on all the server and storage ports. This helps to reduce link downtime in the event of a path or switch failure. Ports left for future expansion are disabled to prevent any unauthorized access or misconfiguration. The uplink ports on each switch are configured in a LAG. Switch-level configuration details are defined in Table 8. A single VLAN, not VLAN 1, is used for all iSCSI traffic.

Port 16 on each switch is configured with VLAN 10 and without PortFast, for OOB management of the switch.

Table 8. PowerConnect SAN Switch Settings

Item – Setting
iSCSI – Enabled
Rapid STP – Enabled
Jumbo Frames – Enabled
Flow Control – On
Unicast Storm Control – Disabled

It is important to understand the spanning tree configuration on the SAN if both uplinks will be utilized. Spanning tree costs should be set appropriately to avoid STP blocking the LAG between these two switches, which would result in a longer Ethernet switch path for SAN traffic and potentially increased SAN latency.
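As an illustration only, the Table 8 settings map to PowerConnect-style CLI along the following lines. Exact command names and defaults differ between 6224 and 7024 firmware revisions (for example, the iSCSI optimization command is not present on all releases), so validate this sketch against the switch documentation:

    ! Global settings per Table 8
    spanning-tree mode rstp
    flowcontrol
    iscsi enable
    ! A server- or storage-facing iSCSI port: jumbo frames, PortFast,
    ! and a single non-default iSCSI VLAN (50 is a sample value)
    interface gigabitethernet 1/0/3
    mtu 9216
    spanning-tree portfast
    switchport access vlan 50
    exit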

4.3.2 PowerEdge R620 SAN Connectivity

Each PowerEdge R620 has eight 1Gb ports, of which four are dedicated to SAN traffic. Figure 12 shows Server 1 connectivity to the SAN switches.

Figure 12. Server 1 SAN Connectivity
(Diagram: the four iSCSI-dedicated 1Gb ports on the PowerEdge R620 are split between SAN Switch 1 and SAN Switch 2. Legend: iSCSI.)

The other PowerEdge R620 server follows the same connectivity pattern to the SAN switches, with the exception that each server uses a unique set of physical ports on the switches. Table 9 details PowerEdge R620 server connectivity to the SAN switches for a vStart 50 configuration.

Table 9. PowerEdge R620 SAN Connectivity

Server – rNDC Port 2 – rNDC Port 4 – NIC Port 2 – NIC Port 4
Server 1 – Switch 1 Port 11 – Switch 2 Port 11 – Switch 1 Port 12 – Switch 2 Port 12
Server 2 – Switch 1 Port 13 – Switch 2 Port 13 – Switch 1 Port 14 – Switch 2 Port 14

4.3.3 PowerEdge R620 SAN Virtual Switch Configuration

For each PowerEdge R620, a single virtual switch is created to support all SAN traffic. The four 1Gb physical network ports that are dedicated to iSCSI traffic are connected to this virtual switch (as shown in Figure 13). Four VMkernel (vmk) interfaces are created, and each vmk interface is associated with only one physical network port.

Figure 13. SAN vSwitch Configuration (vCenter Server screenshot)

An additional VMkernel port is created on the iSCSI subnet to serve as the default VMkernel port for storage heartbeat traffic with the EqualLogic storage. This heartbeat port sits outside of the iSCSI software initiator and does not consume any additional iSCSI storage connections; it is simply used as the lowest VMkernel port for vmkping and other iSCSI network functions. The heartbeat port must be the lowest VMkernel port on the vSwitch and is not bound to the software initiator.

The creation and configuration of the vSwitches, iSCSI VMkernel interfaces, and storage heartbeat interface are accomplished through the EqualLogic MEM installation process, as discussed below. For detailed instructions on configuring EqualLogic storage with vSphere, refer to the Technical Reports Configuring VMware vSphere 4.x Software iSCSI with Dell EqualLogic PS Series Storage and Configuring iSCSI Connectivity with VMware vSphere 5 and Dell EqualLogic PS Series Storage.
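Although the MEM setup script performs these steps automatically, the underlying ESXi 5.0 configuration can be expressed in esxcli for reference. The sketch below is illustrative and assumes sample names and addresses (vSwitchISCSI, a 192.168.50.0/24 iSCSI subnet, and software iSCSI adapter vmhba37); it creates one of the four VMkernel interfaces and binds it to the software initiator:

    # iSCSI vSwitch with jumbo frames enabled end to end
    esxcli network vswitch standard add --vswitch-name=vSwitchISCSI
    esxcli network vswitch standard set --vswitch-name=vSwitchISCSI --mtu=9000
    esxcli network vswitch standard uplink add --uplink-name=vmnic1 --vswitch-name=vSwitchISCSI

    # One VMkernel interface per physical port (repeat for iSCSI2 through iSCSI4)
    esxcli network vswitch standard portgroup add --portgroup-name=iSCSI1 --vswitch-name=vSwitchISCSI
    esxcli network ip interface add --interface-name=vmk1 --portgroup-name=iSCSI1 --mtu=9000
    esxcli network ip interface ipv4 set --interface-name=vmk1 --ipv4=192.168.50.11 --netmask=255.255.255.0 --type=static

    # Bind the VMkernel port to the software iSCSI initiator
    esxcli iscsi networkportal add --adapter=vmhba37 --nic=vmk1

    # Validate jumbo frames to an array port (8972-byte payload; -d = don't fragment)
    vmkping -d -s 8972 192.168.50.5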

4.3.4 EqualLogic Multipathing Extension Module

VMware introduced the Pluggable Storage Architecture (PSA) in vSphere 4, which enables storage vendors to develop multipathing extension modules that plug into vSphere. The MEM provides advanced multipathing capabilities with path redundancy and improved performance by intelligently routing data. The EqualLogic MEM consists of two components: a Path Selection Plug-in (PSP) driver that is used by the VMware Native Multipathing Plug-in (NMP) to select the best path for each I/O to the EqualLogic storage devices, and an EqualLogic Host Connection Manager (EHCM) that manages iSCSI sessions to the EqualLogic storage devices.

By utilizing the PSP in the EqualLogic MEM, storage I/O is improved with increased bandwidth and reduced network latency, because the PSP can use its knowledge about the exact distribution of volumes in the storage arrays. The EHCM provides automatic connection management by distributing iSCSI sessions across all configured adapters and PS Series group Ethernet ports; it also automates load balancing across multiple active paths to the storage array.

The EqualLogic MEM consists of a setup.pl script for configuring the module and an offline bundle to be installed on the ESXi server. The bundle can be installed through vCenter Update Manager, from a vSphere Management Assistant (vMA) server, or via the vSphere CLI using the setup script. As described above, vSphere multipathing requires configuring four VMkernel interfaces, each associated with only one active network port. During MEM configuration, the option to utilize the iSCSI software adapter should be enabled by including the enableswiscsi flag when executing the setup.pl script.
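A hedged example of that configuration step, run from a vMA or vSphere CLI host, follows. The exact option list varies by MEM release, so any flag other than the enableswiscsi option named above should be confirmed against the MEM user guide for the installed version; the server address is a sample value:

    # Interactive MEM network configuration against one ESXi host (illustrative)
    ./setup.pl --configure --server=192.168.20.11 --enableswiscsi

    # setup.pl then prompts for the vSwitch, the vmnics to dedicate to iSCSI,
    # the VMkernel IP addresses, and the PS Series group IP.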

4.3.5 PowerEdge R420 SAN Connectivity

The PowerEdge R420 has four 1Gb ports, of which two are dedicated to SAN traffic. These ports allow the PowerEdge R420 to manage the SAN switches and to mount iSCSI-based volumes for access to VMDKs or ISOs on the SAN. Figure 14 shows the management server connectivity to the SAN switches.

Figure 14. Management Server SAN Connectivity
(Diagram: the two SAN-dedicated 1Gb ports on the PowerEdge R420 are split between SAN Switch 1 and SAN Switch 2. Legend: iSCSI.)

Table 10 details the management server connectivity to the SAN switches.

Table 10. PowerEdge R420 SAN Connectivity

LOM Port 2 – NIC Port 2
Switch 1 Port 1 – Switch 2 Port 1

As noted above, a benefit of connecting the management server to the SAN switches is that iSCSI volumes may be presented to the management server to provide additional storage capacity for storing items such as ISO images and scripts. Windows Server 2008 R2 supports Network File System (NFS) shares, which may be mounted by the ESXi cluster to access these files. To account for this potential use case, the EqualLogic Host Integration Tools (HIT) kit is installed. HIT provides the multipath I/O (MPIO) plug-in for the Microsoft iSCSI framework; however, HIT is not required to use Group Manager, which is the user interface for managing the PS6100X storage.

4.3.6 EqualLogic PS6100X SAN Connectivity

The EqualLogic PS6100X contains redundant storage controllers. Each storage controller has four 1Gb connections supporting iSCSI data and a 100Mb port dedicated to management traffic. The four iSCSI data connections on each controller are split between the two SAN switches for redundancy. Figure 15 shows how the two controllers on Array 1 are connected.

Figure 15. PS6100X Array 1 SAN Connectivity
(Diagram: Ethernet 0 through Ethernet 3 on each PS6100 control module are split between SAN Switch 1 and SAN Switch 2, as detailed in Table 11. Legend: iSCSI.)

The other, optional EqualLogic array follows the same connectivity pattern to the SAN switches, with the exception that it uses a unique set of physical ports on the switches. Table 11 details the connectivity for each of the EqualLogic iSCSI arrays to the SAN switches.

Table 11. PS6100X SAN Connectivity

Array # – Controller # – Ethernet 0 – Ethernet 1 – Ethernet 2 – Ethernet 3
Array 1 – Slot 0 – Switch 1 Port 3 – Switch 2 Port 3 – Switch 1 Port 5 – Switch 2 Port 5
Array 1 – Slot 1 – Switch 2 Port 4 – Switch 1 Port 4 – Switch 2 Port 6 – Switch 1 Port 6
Array 2 – Slot 0 – Switch 1 Port 7 – Switch 2 Port 7 – Switch 1 Port 9 – Switch 2 Port 9
Array 2 – Slot 1 – Switch 2 Port 8 – Switch 1 Port 8 – Switch 2 Port 10 – Switch 1 Port 10

4.3.7 EqualLogic Group and Pool Configuration

Each EqualLogic array (or member) is assigned to a particular group. Groups help simplify management by enabling management of all members in a group from a single interface. Each group contains one or more storage pools. Each pool must contain one or more members, and each member is associated with only one storage pool. Figure 16 shows an example of a group with three members distributed across two pools.

Figure 16. EqualLogic Organizational Concepts
(Diagram: a single group containing two pools, with three members distributed between the pools.)

The iSCSI volumes are created at the pool level. In the case where multiple members are placed in a single pool, the data is distributed amongst the members of the pool. With data being distributed over a larger number of disks, the potential performance of iSCSI volumes within the pool increases with each member added.

For a vStart 50 with two arrays, one pool can be created with two members, or two pools can be created with one member each. Depending upon the storage options selected, the vStart 50 EqualLogic organization options can be expanded.

Table 12. Storage Configuration Options

vStart Model – vStart 50: Base Storage Array Configuration – vStart 50+: With Storage Expansion Configuration
vStart 50 – 1 x PS6100X – 2 x PS6100 Series Arrays

Using the information from Table 12 above as inputs, for a vStart 50 with Storage Expansion, one pool with two members, or two pools with one member each, can be created. Other EqualLogic array types can be added to the existing storage pool or group. Understanding the expected storage workload profile will help to determine the appropriate array selection and pool configuration. For more information, please consult with your Dell Sales and Services representatives for planning and designing an EqualLogic storage solution. Also refer to the white paper, Using Tiered Storage in a PS Series SAN, available at http://www.equallogic.com/WorkArea/DownloadAsset.aspx?id=5239.

4.3.8 Storage Array RAID Configuration

The storage array RAID configuration is highly dependent on the workload in your virtual environment. The EqualLogic PS Series storage arrays support four RAID types: RAID 5, RAID 6, RAID 10, and RAID 50. The RAID configuration will depend on workloads and customer requirements. In general, RAID 10 provides the best performance, at the expense of storage capacity.

RAID 10 generally provides better performance in random I/O situations, and requires additional overhead in the case of a drive failure scenario. RAID 50 generally provides more usable storage, but has less performance than RAID 10. RAID 6 provides better data protection than RAID 50.

For more information on configuring RAID in EqualLogic, refer to the white paper, How to Select the Correct RAID for an EqualLogic SAN, available at http://www.EqualLogic.com/resourcecenter/assetview.aspx?id=8071.

4.3.9 Volume Size Considerations

Volumes are created in the storage pools. Volume sizes depend on the customer environment and the type of workloads. Volumes must be sized to accommodate not only the VM virtual hard drive, but also the size of the virtual memory of the VM and additional capacity for any snapshots of the VM. Depending on the environment, one may decide to create multiple ~500 GB volumes, each holding multiple VMs. It is important to include space for the guest operating system memory cache, snapshots, and VM configuration files when sizing these volumes. Additionally, one can configure thin-provisioned volumes that grow on demand, only consuming space when additional storage is needed. Thin provisioning can increase the efficiency of storage utilization.
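As a worked example of this sizing guidance (the VM count and sizes are illustrative, not a vStart recommendation): a volume intended to hold ten VMs, each with a 40 GB virtual disk and 4 GB of configured memory, with roughly 10% added for snapshots and VM configuration files, would need approximately 10 x (40 GB + 4 GB) = 440 GB, plus 10%, or about 484 GB, which is consistent with the ~500 GB volume size mentioned above.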

With each volume created and presented to the servers, additional iSCSI sessions are initiated. When planning the solution, it is important to understand that group and pool limits exist for the number of simultaneous iSCSI sessions. For more information, refer to the current EqualLogic Firmware (FW) Release Notes available at the EqualLogic Support site.

4.3.10 Storage Access Control

Access to the created volumes can be restricted to a subset of the servers that have physical connectivity to the EqualLogic arrays. For each volume, access can be restricted based on IP address, iSCSI qualified name (IQN), and/or Challenge Handshake Authentication Protocol (CHAP). When creating a volume for the servers in the ESXi cluster, access must be granted to all servers in the cluster.

4.4 Management Architecture

This section assumes that initial configuration of the devices has been performed, and pertains to ongoing management of the vStart configurations. For additional information on managing the vStart configuration, refer to the vStart 50 Solution Overview document.

4.4.1 Management on the LAN

The management of the devices on the LAN includes the following items:

• Out-of-band server management
• Server management through Dell OpenManage Server Administrator
• Server management through the Dell Management Plug-in for VMware vCenter
• ESXi cluster management through VMware vCenter
• LAN switch management through the CLI or a web browser
• EqualLogic array management and monitoring

Server Out-of-Band Management: The PowerEdge R620 servers and the PowerEdge R420 can be managed directly by connecting to the iDRAC Web interface. In addition, the iDRAC supports remote KVM through a virtual console.

Dell OpenManage™ Server Administrator (OMSA): OMSA should be installed on the PowerEdge R620 servers and the PowerEdge R420. For the PowerEdge R620 servers, OMSA is available for download as an OpenManage Offline Bundle and VIB for ESXi from support.dell.com. It can be installed on each ESXi server using either the vSphere Command Line Interface (vSphere CLI) or the VMware vSphere Management Assistant (vMA). For more information, refer to the white paper, Dell Systems Management for VMware ESX and ESXi, available at http://support.dell.com/support/edocs/software/eslvmwre/sysman/sysman.htm.

For the PowerEdge R420, the Dell OpenManage Installation DVD is available for download from support.dell.com. The OpenManage Web Service option allows the PowerEdge R420 to serve out the OpenManage Web pages for the ESXi servers, and the OMSA option provides hardware management for the local system.

Dell Management Plug-in for VMware vCenter: The plug-in provides server details for all PowerEdge R620 servers in the solution at the datacenter, cluster, and host level within vCenter Server. Inventory, monitoring, alerting, BIOS and firmware updates, and access to online warranty information are available within vCenter Server. In addition, the plug-in provides the ability to directly launch the iDRAC7 or OMSA Web interfaces for each server.

ESXi Cluster Management: Management of the ESXi cluster will typically be performed directly with vCenter Server, but can also be performed by connecting to each server through the VMware Virtual Infrastructure Client (VI Client). In addition, for troubleshooting purposes, KVM or iDRAC virtual console connectivity to an ESXi server may be required in order to use the Direct Console User Interface (DCUI).
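
Routine health checks of the cluster can also be scripted against vCenter Server. As a minimal sketch, assuming pyVmomi and placeholder credentials, the following lists each cluster, its overall status, and the connection state of its member hosts.

    # Sketch (assumptions: pyVmomi installed; placeholder vCenter credentials).
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.local", user="administrator",
                      pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.ClusterComputeResource], True)
        for cluster in view.view:
            print(cluster.name, cluster.overallStatus)
            for host in cluster.host:
                print("  ", host.name, host.runtime.connectionState)
        view.Destroy()
    finally:
        Disconnect(si)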

LAN Switch Management: Management of the LAN switches can be performed through a Web browser, serial cable, or telnet.

EqualLogic Array Management: The EqualLogic arrays are managed through the EqualLogic Group Manager Web interface, which can be accessed from the management server. The administrator's primary tasks within Group Manager include configuration and troubleshooting of the arrays. The HIT/VE Plug-in for VMware vCenter can also be used to manage the storage arrays from within vCenter Server.


EqualLogic Array Monitoring: SAN HQ is installed on the management server to provide current performance monitoring and historical statistics. Group Manager can also be used for array monitoring.

A logical overview of the LAN management architecture is shown in Figure 17. Notice that the PS6100 arrays are managed via the dedicated management ports that are connected to the LAN switches.

Figure 17. Management Overview (LAN)

[Figure: logical LAN management view showing the PowerEdge R620 ESXi cluster, the PowerEdge R420 management server (vCenter, SAN HQ), and the EqualLogic PS6100 iSCSI storage connected through the stacked PowerConnect 7024 LAN switches to the core network and to the solution support services: an optional database for the management server, NTP, and a domain controller/Active Directory. Legend: 1Gb LAN, out-of-band management.]

4.4.2 SAN Management

SAN management includes SAN switch management. Management of the SAN switches can be performed through a Web browser, serial cable, or telnet from the management server; this path applies only when the management server is a physical server.


A logical overview of the SAN management architecture is shown in Figure 18.

Figure 18. Management Overview (SAN)

[Figure: logical SAN management view showing the PowerEdge R420 management server (vCenter, SAN HQ) connected to the SAN (PowerConnect 6224) switches. Legend: 1Gb SAN.]

Note that while Figure 18 shows the high-level physical connectivity between the PowerEdge R420 management server and the SAN switches, another path for storage management exists through the Storage Management Port Group, as discussed below in Section 4.4.4 and shown above in Figure 17.

4.4.3 Dell Management Plug-In for VMware vCenter

Dell Management Plug-In for VMware vCenter Server is provided as a virtual appliance and is hosted on the ESXi cluster. The virtual networking for the appliance must be connected to the management port group discussed in Section 4.2. After configuration, hardware monitoring and management of the ESXi hosts can be done from within vCenter Server. OMSA must be installed on each ESXi host for the Dell Management Plug-In for VMware vCenter to function properly.


4.4.4 Dell EqualLogic HIT/VE Plug-In for VMware vCenter

Like the Management Plug-In, the HIT/VE is provided as a virtual appliance and is hosted on the ESXi cluster. Similarly, the virtual networking for the appliance must be connected to the management port group discussed in Section 4.2. Once configured, storage volumes and Auto-Snapshot operations can be configured and monitored from within the vCenter Server user interface.

5 Power and Cooling Configuration

The vStart 50 configuration supports datacenters that have two separate sources of power. For the servers and iSCSI storage arrays, redundant power supplies are provided, and each power supply is connected to a unique PDU to avoid a single point of failure. The four PowerConnect switches are each configured with an external Redundant Power Unit; the primary power input on each switch is connected to a different PDU than its Redundant Power Unit.

The UPS and PDU model may vary based on the deployment needs of the datacenter. As such, detailed power cabling information will be provided by Dell Services as part of the deployment process. Figure 19 below depicts the power configuration for the vStart 50+ configuration.


Figure 19. vStart 50+ Power Cabling

[Figure: power cabling for the vStart 50+ rack. The four PowerConnect 7024 or 6248 switches (LAN 1, LAN 2, SAN 1, and SAN 2), each paired with a PowerConnect RPS-720 or RPS-600 redundant power unit, the two PowerEdge R620 servers and the PowerEdge R420 server (each with dual 750W supplies), the EqualLogic PS6100X array (dual control modules), and the keyboard/monitor/mouse are split between PDU A and PDU B. PDU A and PDU B are fed from datacenter power source A and power source B respectively, optionally through the UPS; PDU connectivity will vary based on UPS presence and datacenter power infrastructure. The legend distinguishes PDU 1 and PDU 2 power cables.]

PDU A should be cabled to one power source, and PDU B should be cabled to a separate power source. With the optional UPS in the solution, the recommended configuration is to cable PDU A into the UPS and PDU B into another power source in the datacenter. If this option is implemented, then the UPS should be cabled to one power source and PDU B to a separate power source, if two power sources are available.

Detailed information on the power, cooling, and related datacenter rack requirements of the vStart 50 is available in the vStart 50 Solution Specifications document.


6 Scalability

When adding additional servers or storage to the rack, power, weight, and cooling requirements must be taken into account. The power limits of the PDUs and UPS must be understood prior to installing a new system.

In addition, the impact to the total iSCSI session count for the EqualLogic group and pools must be understood when adding either new servers or EqualLogic arrays. For additional information, refer to the whitepapers Configuring VMware vSphere Software iSCSI with Dell EqualLogic PS Series Storage and Configuring iSCSI Connectivity with VMware vSphere 5 and Dell EqualLogic PS Series Storage.
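
As a back-of-the-envelope illustration, and not a substitute for the sizing guidance in those whitepapers, the sketch below estimates session growth under the simplifying assumption of one iSCSI session per host NIC per volume; the actual count depends on the multipathing module in use and on the group and pool session limits.

    # Rough estimator for total iSCSI sessions in an EqualLogic pool.
    # Assumption: one session per host iSCSI NIC per volume, which only
    # approximates real multipathing behavior (the MEM may open fewer or more).
    def estimated_sessions(hosts: int, iscsi_nics_per_host: int, volumes: int) -> int:
        return hosts * iscsi_nics_per_host * volumes

    # Example: growing from two to four hosts, each with four iSCSI NICs,
    # against ten shared volumes.
    for host_count in (2, 4):
        print(host_count, "hosts ->",
              estimated_sessions(host_count, 4, 10), "sessions")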

Switch ports on both the LAN and SAN switches are available for expansion. Those ports must be enabled and configured to support the new servers and/or storage arrays.

Adding new servers to the ESXi cluster: If additional VMs to be deployed will exceed the current resource capacity, or the ESXi cluster has reached its acceptable maximum (CPU and memory) resource utilization, then additional servers can be added to the cluster.
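
As a minimal pyVmomi sketch of this expansion step, assuming placeholder names and credentials, a freshly installed host can be joined to the cluster programmatically; note that vCenter may additionally require the host's SSL thumbprint in the ConnectSpec.

    # Sketch (assumptions: pyVmomi installed; hypothetical cluster and host names).
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVim.task import WaitForTask
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.local", user="administrator",
                      pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.ClusterComputeResource], True)
        cluster = next(c for c in view.view if c.name == "vStart50-Cluster")
        view.Destroy()
        spec = vim.host.ConnectSpec(hostName="esxi-node3.example.local",
                                    userName="root", password="password",
                                    force=True)
        # Joins the host to the cluster; fails with SSLVerifyFault (which
        # reports the thumbprint) if vCenter cannot verify the host certificate.
        WaitForTask(cluster.AddHost_Task(spec=spec, asConnected=True))
    finally:
        Disconnect(si)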

Previously created iSCSI volumes on the EqualLogic array may require modifications to their access controls to grant access to the newly added servers.

When adding servers to an ESXi cluster, it is recommended that the configuration be identical to the other systems in the cluster. If this is not achievable, then there may be restrictions on certain actions, such as vMotion between the differing systems. To understand vMotion compatibility requirements, refer to the Dell whitepaper, VMware vMotion and 64-Bit Virtual Machine Support for Dell PowerEdge Systems Compatibility Matrix, available at http://support.dell.com/support/edocs/software/eslvmwre/VS/index.htm.

For quick addition of a new server, and for the ESXi cluster to remain uniform, host profiles can be used. A host profile is created from an existing server in the same ESXi cluster; it captures the networking, storage, and security settings from the validated configuration. A new server with a basic vSphere installation can be added to the cluster, and the host profile can then be applied to set up the networking, storage, and security configuration. Note that host profiles require the vSphere Enterprise Plus license edition.
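
As a hedged sketch of that workflow, again assuming pyVmomi and hypothetical names, a profile can be captured from the validated reference host and attached to the cluster; host-specific input such as IP addresses is typically supplied when the profile is applied, for example from the vSphere Client.

    # Sketch (assumptions: pyVmomi installed; hypothetical host name; the
    # reference host's parent entity is the target cluster).
    import ssl
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    ctx = ssl._create_unverified_context()
    si = SmartConnect(host="vcenter.example.local", user="administrator",
                      pwd="password", sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        ref = next(h for h in view.view if h.name == "esxi-node1.example.local")
        view.Destroy()
        spec = vim.profile.host.HostProfile.HostBasedConfigSpec(
            name="vstart-esxi-profile", enabled=True, host=ref,
            annotation="Captured from a validated vStart 50 host")
        profile = content.hostProfileManager.CreateProfile(createSpec=spec)
        profile.AssociateProfile([ref.parent])  # attach to the host's cluster
    finally:
        Disconnect(si)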

Adding new storage to the EqualLogic group: New EqualLogic arrays can be added to the existing EqualLogic group. As each new array is added to the storage group, the storage capacity and performance, in terms of both bandwidth and IOPS, are increased. This increased capacity can be utilized without downtime. When a new array is added to an existing pool, the existing data will automatically be distributed to the newly added array.

If EqualLogic thin provisioning was utilized and the allocated virtual capacity is nearing the limit of physical capacity, then adding an additional storage array to the constrained pool will address this issue.


7 References

VMware references:

• VMware vSphere Edition Comparisons
  https://www.vmware.com/products/vsphere/buy/editions_comparison.html
• VMware vSphere Compatibility Matrixes
  http://www.vmware.com/pdf/vsphere4/r40/vsp_compatibility_matrix.pdf
• VMware High Availability (HA): Deployment Best Practices
  http://www.vmware.com/resources/techresources/10166

EqualLogic references:

• Dell EqualLogic PS Series Architecture Whitepaper
  http://www.dell.com/downloads/global/products/pvaul/en/dell_EqualLogic_architecture.pdf
• Configuring and Installing the EqualLogic Multipathing Extension Module for VMware vSphere 4.1 and PS Series SANs
  http://www.EqualLogic.com/resourcecenter/assetview.aspx?id=9823
• Host Integration Tools for VMware
  http://www.dell.com/downloads/global/products/pvaul/en/EqualLogic-host-software.pdf
  http://www.delltechcenter.com/page/EqualLogic+HIT-VE+new+features#fbid=xzSL8xnCju5
• How to Select the Correct RAID for an EqualLogic SAN
  http://www.EqualLogic.com/resourcecenter/assetview.aspx?id=8071
• Using Tiered Storage in a PS Series SAN
  http://www.EqualLogic.com/resourcecenter/assetview.aspx?id=5239
• Monitoring your PS Series SAN with SAN HQ
  http://www.EqualLogic.com/resourcecenter/assetview.aspx?id=8749

Dell Management references:

• Dell Management Plug-In for VMware vCenter – Solution Brief
  http://content.dell.com/us/en/enterprise/d/business~solutions~virtualization~en/Documents~dell-management-plug-in-vmware-vcenter-solution-brief.pdf.aspx
• Dell Systems Management for VMware ESX and ESXi
  http://support.dell.com/support/edocs/software/eslvmwre/sysman/sysman.htm


8 Appendix A – IP & VLAN Planning Worksheet

VLAN Configuration

Traffic Type                VLAN    Subnet          Subnet Mask     Gateway
Management                  ____    ____________    ____________    ____________
vMotion                     ____    ____________    ____________    ____________
Out-of-Band Management      ____    ____________    ____________    ____________
VM                          ____    ____________    ____________    ____________
iSCSI / iSCSI Management    ____    ____________    ____________    ____________

Existing Infrastructure

DNS             NTP             NTP for SAN     Database for vCenter Server
____________    ____________    ____________    ___________________________

Management Appliances

Appliance - Network                                                             IP Address
Dell Management Plugin for vCenter – MGMT Network                               ____________
EqualLogic Host Integration Tools for VMware Edition (HIT/VE) – MGMT Network    ____________


PowerConnect 7024 Switches

The IP address for the PowerConnect LAN switches should be on the out-of-band management network, and the SAN switches should be on the iSCSI network. Only one IP address is required for the LAN switches due to the stacked configuration.

Switch       IP Address
LAN 1 & 2    ____________
SAN 1        ____________
SAN 2        ____________

PowerEdge R420 Server or Mgmt VM: ___IP address, hostname, FQDN___

iDRAC           Management      Out-of-Band Mgmt    iSCSI 1         iSCSI 2
____________    ____________    ____________        ____________    ____________

PowerEdge R620 Servers

Server 1: ___IP address, hostname, FQDN___

iDRAC           Management      vMotion
____________    ____________    ____________

iSCSI 1         iSCSI 2         iSCSI 3         iSCSI 4         Storage Heartbeat
____________    ____________    ____________    ____________    ____________


Server 2: ___IP address, hostname, FQDN___

iDRAC           Management      vMotion
____________    ____________    ____________

iSCSI 1         iSCSI 2         iSCSI 3         iSCSI 4         Storage Heartbeat
____________    ____________    ____________    ____________    ____________

EqualLogic PS6100X Array(s)

EqualLogic Group Name: ____________
EqualLogic Group IP: ____________
EqualLogic Management IP: ____________

           iSCSI 1         iSCSI 2         iSCSI 3         iSCSI 4         Management
Array 1    ____________    ____________    ____________    ____________    ____________
Array 2    ____________    ____________    ____________    ____________    ____________


9 Appendix B – IP & VLAN Sample Worksheet

VLAN Configuration (the VLANs, TCP/IP subnets, and gateways should be changed to match the existing infrastructure; this table and the information in the other tables in Appendix B are provided as examples)

Traffic Type                VLAN    Subnet           Subnet Mask      Gateway
Management                  20      192.168.20.X     255.255.255.0    192.168.20.1
vMotion                     30      192.168.30.X     255.255.255.0    192.168.30.1
Out-of-Band Management      10      192.168.10.X     255.255.255.0    192.168.10.1
VM                          100     192.168.100.X    255.255.255.0    192.168.100.1
iSCSI / iSCSI Management    50      192.168.50.X     255.255.255.0    192.168.50.1

Existing Infrastructure (IP addresses should be changed to match the existing infrastructure)

DNS             NTP             NTP for SAN     Database for vCenter Server
192.168.20.2    192.168.20.2    192.168.50.2    192.168.20.3

Management Appliances

Appliance - Network                                                             IP Address
Dell Management Plugin for vCenter – MGMT Network                               192.168.20.19
EqualLogic Host Integration Tools for VMware Edition (HIT/VE) – MGMT Network    192.168.20.18


PowerConnect 7024 Switches

The IP address for the PowerConnect LAN switches should be on the out-of-band management network, and the SAN switches should be on the iSCSI network. Only one IP address is required for the LAN switches due to the stacked configuration.

Switch       IP Address
LAN 1 & 2    192.168.10.201
SAN 1        192.168.50.201
SAN 2        192.168.50.202

PowerEdge R420 Server or Mgmt VM: management

iDRAC            Management       Out-of-Band Mgmt    iSCSI 1          iSCSI 2
192.168.10.10    192.168.20.10    192.168.10.60       192.168.50.10    192.168.50.60

PowerEdge R620 Servers

Server 1: esxi-node1

iDRAC            Management       vMotion
192.168.10.11    192.168.20.11    192.168.30.11

iSCSI 1          iSCSI 2          iSCSI 3          iSCSI 4          Storage Heartbeat
192.168.50.11    192.168.50.21    192.168.50.31    192.168.50.41    192.168.50.81


Server 2: esxi-node2

iDRAC            Management       vMotion
192.168.10.12    192.168.20.12    192.168.30.12

iSCSI 1          iSCSI 2          iSCSI 3          iSCSI 4          Storage Heartbeat
192.168.50.12    192.168.50.22    192.168.50.32    192.168.50.42    192.168.50.82

EqualLogic PS6100X Array(s)

EqualLogic Group Name: ____________
EqualLogic Group IP: 192.168.50.100
EqualLogic Management IP: 192.168.20.100

           iSCSI 1           iSCSI 2           iSCSI 3           iSCSI 4           Management
Array 1    192.168.50.101    192.168.50.102    192.168.50.103    192.168.50.104    192.168.20.102
Array 2    192.168.50.105    192.168.50.106    192.168.50.107    192.168.50.108    192.168.20.103