Hyper-V Medium Fast Track for IBM Flex System With V7000



IBM Flex System Solution for Microsoft Hyper-V

Configuration and Implementation Guide using IBM Flex System x240 Compute Nodes and Flex System V7000 Storage Node with Converged Network Infrastructure running Windows Server 2012 and System Center 2012 SP1

Scott Smith
David Ye

Copyright IBM Corporation, 2013


Contents

IBM Flex System Solution for Microsoft Hyper-V
Business Problem
Business Value
Intended Audience
IBM Flex System Solution for Microsoft Hyper-V
Components
    IBM Flex System Enterprise Chassis
    IBM Flex System Chassis Management Module
    IBM Flex System x240 Compute Node
    IBM Flex System V7000 Storage Node
    IBM Flex System CN4093 Switches
    IBM Flex System Manager
    Microsoft Windows Server 2012
    Microsoft System Center 2012 SP1
Best Practice and Implementation Guidelines
Racking and Power Distribution
Networking and VLANs
    Flex System Switch Positions and Network Connections
    VLAN Description
        Management Cluster Private and CSV Networks (VLAN 30)
        Management Cluster Live Migration Network (VLAN 31)
        SQL Guest Cluster Private Network (VLAN 33)
        SCVMM Guest Cluster Private Network (VLAN 35)
        Production Cluster Private and CSV Networks (VLAN 37)
        Production Cluster Live Migration Network (VLAN 38)
        Cluster Public Network (VLAN 40)
        Production VM Communication Network (VLAN 50)
        Out of Band Management Network (VLAN 70)
        FCoE Storage Network (VLAN 1002)
        Inter Switch Link Network (VLAN 4094)
    x240 Compute Node Network Ports
        Physical Host Data Access
        Physical Host FCoE Storage Access
        Storage Controller FCoE Access
    IBM Flex System CN4093 Converged Ethernet Configuration
        Using ISCLI to configure CN4093 switches
Active Directory
IBM Flex System V7000 Storage Node
    Overview
    Internal Flex Chassis Connections
    Management
    IBM Flex System V7000 Storage Node and Cluster Storage Considerations
    Storage Pool and Volume Configuration
    Host Server Definition and Volume Mapping
IBM Flex System x240 Management Fabric Setup
    Pre-OS Installation
    OS Installation and Configuration
    Network Configuration
    Host Storage Connections
    Management Host Cluster Creation
    Virtual Machine Fibre Channel Storage Connections
    Virtual Machine Setup and Configuration
System Center 2012 SP1 Setup and Configuration
    SQL Server 2012 Setup and Configuration
        SQL Clustered Instances
        SQL Cluster Storage
        SQL Server Guest Clustering
    System Center Virtual Machine Manager 2012 SP1 Setup and Configuration
        SCVMM Guest Clustering for Virtual Machine Manager
        IBM Pro Pack for Microsoft System Center Virtual Machine Manager
        Flex System V7000 Storage Automation with SMI-S
        Bare Metal Provisioning
    System Center Operations Manager 2012 SP1 Setup and Configuration
        IBM Upward Integration Modules for Microsoft System Center Operations Manager
        IBM Pro Pack for Microsoft System Center Virtual Machine Manager
        IBM Flex System V7000 Storage Management Pack for Microsoft System Center Operations Manager
    System Center Orchestrator 2012 SP1 Setup and Configuration
    System Center Service Manager 2012 SP1 Setup and Configuration
    WSUS Server Setup and Configuration
    Cluster Aware Updating Setup and Configuration
IBM Flex System x240 Compute Node Setup
    Pre-OS Installation
    OS Installation and Configuration
    Network Configuration
    Host Storage Connections
    Compute Host Cluster Creation
Summary
Appendix
    Related Links
    Bill of Materials
    Networking Worksheets
    Switch Configuration
        Switch-1
        Switch-2
    PowerShell Scripts
        Management Node Network Configuration
        Compute Node Network Configuration
    Network Address Tables
The team who wrote this paper
Trademarks and special notices


Business Problem

Today's IT managers are looking for efficient ways to manage and grow their IT infrastructure with confidence. Good IT practices recognize the need for high availability, simplified management, and cost containment through maximum resource utilization. CIOs need to be able to respond rapidly to changing business needs with simple, easily deployed configurations that can scale on demand. Natural disasters, malicious attacks, and even simple software upgrade patches can cripple services and applications until administrators resolve the problems and restore any backed-up data. The challenge of maintaining healthy systems and services only becomes more critical as businesses consolidate physical servers into a virtual server infrastructure to reduce data center costs, maximize utilization, and increase workload performance.

Business Value

The IBM Flex System™ Solution for Microsoft Hyper-V provides businesses with an affordable, interoperable, and reliable industry-leading virtualization solution. This IBM Flex System based offering, built around the latest IBM Flex System compute nodes, storage, and networking, takes the complexity out of the solution with this step-by-step implementation guide. Validated under the Microsoft Private Cloud Fast Track program, this IBM reference architecture combines Microsoft software, consolidated guidance, and validated configurations for computing, network, and storage. It provides defined minimum levels of redundancy and fault tolerance across the servers, storage, and networking to help ensure a known level of fault tolerance while managing pooled resources.

Pooling computing, networking, and storage capacity with Microsoft Hyper-V in a Windows Failover Cluster helps eliminate single points of failure, so users have near-continuous access to important server-based, business-productivity applications. An independent cluster hosting the management fabric, based on Microsoft System Center 2012 SP1 with IBM upward integration components, provides an environment to deploy, maintain, and monitor the production private cloud. IT administration can be improved by simplifying the hardware configuration to a corporate standard with automated deployment and maintenance practices. Templates of pre-configured virtual machines can be saved and deployed rapidly through self-service portals to end customers. Virtual machines can be migrated among clustered host servers to support resource balancing and scheduled maintenance, and in the event of unplanned physical or logical outages, virtual machines can be restarted automatically on the remaining cluster nodes. As a result, clients minimize downtime, making this seamless operation attractive to organizations that are trying to create new business and maintain healthy service level agreements.

Intended Audience

This reference architecture and implementation guide targets organizations implementing Hyper-V and IT engineers familiar with the hardware and software that make up the IBM Virtualization Reference Architecture with Microsoft Hyper-V. Additionally, System x sales teams and their customers evaluating or pursuing Hyper-V virtualization solutions will benefit from this previously validated configuration. Comprehensive experience with the various reference configuration technologies is recommended.


IBM Flex System Solution for Microsoft Hyper-V

Microsoft Hyper-V technology continues to gain competitive traction as a key component in many customer virtualization environments. Hyper-V is a standard component in Windows Server 2012 Standard and Datacenter editions. Windows Server 2012 Hyper-V virtual machines (VMs) support up to 64 virtual processors and up to 1 TB of memory.

Individual virtual machines have their own operating system instance and are completely isolated from the host operating system as well as from other VMs. VM isolation helps promote higher business-critical application availability, while the Microsoft failover clustering feature found in Windows Server 2012 can dramatically improve production system uptime. IT administration can be improved by simplifying the hardware configuration to a corporate standard with automated deployment and maintenance practices. Templates of pre-sized virtual machines can be saved and rapidly deployed from self-service portals to compute nodes. Virtual machines can be migrated among clustered host servers to support resource balancing and scheduled maintenance, and in the event of unplanned physical or logical outages, virtual machines can be restarted automatically on the remaining cluster nodes. As a result, clients can minimize downtime. This seamless operation is attractive for organizations trying to develop or expand new business opportunities and maintain healthy service level agreements.

This Hyper-V reference architecture and implementation guide provides ordering, setup, and configuration details for the IBM highly available virtualization compute environment that has been validated as a Microsoft Hyper-V Fast Track Medium configuration. The Microsoft Hyper-V Fast Track Medium configuration provides a validated 2-node clustered management fabric built around Microsoft System Center 2012 SP1, and an 8-node clustered compute fabric for deployment of production resources. This is ideal for large organizations that are ready to take their virtualization to the next level. The design consists of ten IBM Flex System x240 Compute Nodes attached to an IBM Flex System V7000 Storage Node. Networking leverages the Flex System CN4093 converged switches. This fault-tolerant hardware configuration is clustered using Microsoft's Windows Server 2012. The configuration can be expanded to multiple chassis for additional compute capacity or storage. A short summary of the IBM Hyper-V Reference Architecture software and hardware components is listed below, followed by best practice implementation guidelines. The IBM Hyper-V reference configuration is constructed with the following enterprise-class components:

• One IBM Flex System Enterprise Chassis
• Ten IBM Flex System x240 Compute Nodes in a Windows Failover Cluster running Hyper-V
  o Two IBM Flex System Compute Nodes will be used to build the highly available management fabric.
  o Eight IBM Flex System Compute Nodes will be used to build a highly available virtualization cluster.
• One IBM Flex System V7000 Storage Node with dual controllers (V7000 expansion options available)
• Two IBM Flex System CN4093 switches providing fully converged and redundant networking for data and storage (FCoE)

Together, these software and hardware components form a high-performance, cost-effective solution that supports Microsoft Hyper-V environments for most business-critical applications and many custom third-party solutions. Equally important, these components meet the criteria set by the Microsoft Private Cloud Fast Track program, which promotes robust virtualization environments to help satisfy even the most demanding virtualization requirements. A diagram of the overall configuration is illustrated in Figure 1.


    Figure 1. IBM Hyper-V Fast Track reference configuration

Components

This highly available IBM virtualization architecture comprises the IBM Flex System Enterprise Chassis with IBM Flex System CN4093 converged switches, IBM Flex System x240 Compute Nodes running Microsoft's Windows Server 2012 operating system, and the IBM Flex System V7000 Storage Node. Each component provides a key element of the overall solution.

IBM Flex System Enterprise Chassis

The IBM Flex System Enterprise Chassis is a simple, integrated infrastructure platform that supports a combination of compute, storage, and networking resources to meet the demands of your application workloads. Additional chassis can be added as workloads increase. The 14-node, 10U chassis delivers high-speed performance complete with integrated servers, storage, and networking. This flexible chassis is designed for simple deployment now, and for scaling to meet future needs. With the optional IBM Flex System Manager, multiple chassis can be monitored from a single screen. In addition, the optional IBM Upward Integration Modules (UIM) for Microsoft System Center integrate the management features of the Flex System into an existing Microsoft System Center environment. These IBM upward integration modules enhance Microsoft System Center server management capabilities by integrating IBM hardware management functionality, providing affordable, basic management of physical and virtual environments and reducing the time and effort required for routine system administration. The UIM provides the discovery, deployment, configuration, monitoring, event management, and power monitoring needed to reduce cost and complexity through server consolidation and simplified management.

IBM Flex System Chassis Management Module

The IBM Flex System Chassis Management Module (CMM) is a hot-swap module that configures and manages all installed chassis components. The CMM provides resource discovery, inventory, monitoring, and alerts for all compute nodes, switches, power supplies, and fans in a single chassis. The CMM provides a communication link with each component's management processor to support power control and out-of-band remote connectivity, as shown in Figure 2.


Figure 2. CMM Management Network

Note: The default IP address for the CMM is 192.168.70.100. Default UserID and Password: USERID / PASSW0RD (with a zero).

IBM Flex System x240 Compute Node

At the core of this IBM reference configuration for Hyper-V, the IBM Flex System x240 Compute Nodes deliver the performance and reliability required for virtualizing business-critical applications in Hyper-V environments. To provide the expected virtualization performance for handling any Microsoft production environment, IBM Flex System x240 Compute Nodes can be equipped with up to two eight-core E5-2600 series processors and up to 768 GB of memory. The IBM Flex System x240 includes an on-board RAID controller and the choice of either hot-swap SAS or SATA disks, as well as SFF hot-swap solid state drives. Two I/O slots provide ports for both data and storage connections through the Flex Enterprise Chassis switches. The x240 also supports remote management via the IBM Integrated Management Module (IMM), which enables continuous out-of-band management capabilities. All of these key features, including many that are not listed, help solidify the dependability that IBM customers have grown accustomed to with System x servers. By virtualizing with Microsoft Hyper-V technology on IBM Flex System x240 Compute Nodes (Figure 3), businesses reduce physical server space, power consumption, and total cost of ownership (TCO). Virtualizing the server environment can also result in lower server administration overhead, giving administrators the ability to manage more systems than in a physical server environment. Highly available critical applications residing on clustered host servers can be managed with greater flexibility and minimal downtime with Microsoft Hyper-V Live Migration capabilities.


    Figure 3. IBM Flex System x240

IBM Flex System V7000 Storage Node

The IBM Flex System V7000 Storage Node combines best-of-breed storage development with leading 1/10 GbE iSCSI, FCoE, or FC host interfaces and SAS/SSD drive technology. With its simple, efficient, and flexible approach to storage, the Flex System V7000 Storage Node is a cost-effective complement to the IBM Flex System. It delivers superior price/performance ratios, functionality, scalability, and ease of use for the mid-range storage user by offering substantial features at a price that fits most budgets.

The V7000 Storage Node offers the ability to:

• Automate and speed deployment with integrated storage for the IBM PureFlex System or IBM Flex System
• Simplify management with an integrated, intuitive user interface for faster system accessibility
• Reduce network complexity with FCoE and iSCSI connectivity
• Store up to five times more active data in the same disk space using IBM Real-time Compression
• Virtualize third-party storage for investment protection of the current storage infrastructure
• Optimize costs for mixed workloads, with up to 200 percent performance improvement with solid-state drives (SSDs) using IBM System Storage Easy Tier
• Improve application availability and resource utilization for organizations of all sizes
• Support growing business needs while controlling costs with clustered systems


The IBM Flex System V7000 Storage Node (Figure 4) is well suited for Microsoft Hyper-V environments. The Flex System V7000 Storage Node delivers proven disk storage in flexible, scalable configurations and complements the IBM Flex System Enterprise Chassis, Flex System CN4093 converged network switches, and x240 Compute Nodes in an end-to-end IBM solution for Microsoft Hyper-V. By connecting optional EXP2500 expansion enclosures, the Flex System V7000 Storage Node can scale up to 240 SAS and SSD drives, and up to 960 drives per clustered system. The Flex System V7000 Storage Node has 8 GB of cache per controller and 16 GB for the whole system. The IBM Flex System V7000 Storage Node comes with advanced features such as System Storage Easy Tier, IBM FlashCopy, internal virtualization, thin provisioning, data migration, and system clustering. Optional features include Remote Mirroring, Real-time Compression, and external virtualization.

    Figure 4. IBM Flex System V7000 Storage Node

IBM Flex System CN4093 Switches

The IBM Flex System Fabric CN4093 10Gb Converged Scalable Switch (Figure 5) provides unmatched scalability, performance, convergence, and network virtualization, while delivering innovations to address a number of today's networking concerns and providing capabilities that help prepare for the future. The switch offers full Layer 2/3 switching as well as FCoE Full Fabric and Fibre Channel NPV Gateway operations to deliver a truly converged, integrated solution, and it is designed to install within the I/O module bays of the IBM Flex System Enterprise Chassis. The switch can help clients migrate to a 10Gb or 40Gb converged Ethernet infrastructure.

    Figure 5. IBM Flex CN4093 Switch

IBM Flex System Manager

The IBM Flex System Manager (FSM) is a systems management appliance that drives efficiency and cost savings in the data center. The IBM Flex System Manager provides a pre-integrated and virtualized management environment across servers, storage, and networking that can be easily managed from a single interface. Providing a single focal point for seamless multi-chassis management, the Flex System Manager offers an instant and resource-oriented view of chassis resources for both IBM System x and IBM Power Systems compute nodes. Figure 6 shows this optional management node.

    Figure 6. The optional IBM Flex System Manager Node


Microsoft Windows Server 2012

Windows Server 2012 with Hyper-V provides the enterprise with a scalable and highly dynamic platform to support virtualization of most environments, with support for up to 4 TB of RAM, 320 logical processors, and 64 physical nodes per cluster. IT organizations can simplify virtualization resource pools using key features such as high availability clustering, simultaneous Live Migration, in-box network teaming, and improved network Quality of Service (QoS) features. Virtual machines running on Windows Server 2012 Hyper-V have also increased their resource limits, with support for up to 64 vCPUs, 1 TB of RAM, and virtual Fibre Channel HBAs (vHBAs).

Microsoft System Center 2012 SP1

Microsoft System Center 2012 SP1 with IBM Upward Integration Modules enables you to create a comprehensive management environment for the IBM Flex System Solution for Microsoft Hyper-V with the following features:

• Platform monitoring and management with System Center Operations Manager
• Virtualization deployment and management with System Center Virtual Machine Manager
• Self-service portal and incident tracking with System Center Service Manager
• Automation management with System Center Orchestrator

Best Practice and Implementation Guidelines

Successful Microsoft Hyper-V deployment and operation depends significantly on a set of test-proven planning and deployment techniques. Proper planning includes sizing the server resources (CPU and memory), storage (capacity and IOPS), and network bandwidth needed to support the infrastructure. This information can then be implemented using industry-standard best practices to achieve the optimal performance and reserve capacity necessary for the solution. The Microsoft Private Cloud Fast Track program, combined with IBM's enterprise-class hardware, prepares IT administrators to meet their virtualization performance and growth objectives by deploying highly available, elastic, and flexible virtualized resource pools efficiently and securely. A collection of Hyper-V configuration best practices and implementation guidelines, based on IBM and Microsoft collaboration, is shared in the sections that follow. They are categorized into the following topics:

• Racking location and power distribution
• Networking and VLANs
• Storage setup and configuration
• Setup of IBM Flex System x240 Compute Node
• Windows Server Failover Cluster and Hyper-V setup
• System Center 2012 SP1 setup and configuration

Racking and Power Distribution

The installation of power distribution units (PDUs) and associated cables should be performed before any system is racked. Before cabling the PDUs, consider the following:

• Make sure that there are separate electrical circuits and receptacles providing enough power to support the required PDUs.
• Redundant electrical circuits to support power to the PDUs are recommended to minimize the possibility of a single electrical circuit failure impacting this configuration.
• Plan for individual electrical cords from separate PDUs for devices that have redundant power supplies.
• Maintain appropriate shielding and surge suppression practices, and appropriate battery back-up techniques.

For questions, please refer to the IBM Flex System Enterprise Chassis & PureFlex Power Requirements Guide.

Networking and VLANs

Flex System Switch Positions and Network Connections

The IBM Flex System chassis contains up to four switches. The numbering of these switches is interleaved, as shown in Figure 7, and should be kept in mind when performing work on the switches or adding cable connections to the external ports.

    Figure 7. IBM Flex System Chassis Switch Position

Each compute node has a single four-port 10Gb CN4054 card. Each CN4054 has two ASICs, and each ASIC supports two of the four 10Gb ports, as shown in Figure 8. Each compute node will maintain two 10Gb/s connections to each switch. Storage connections and network teams should be distributed across both ASICs to maintain fault tolerance.
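Before creating teams, it helps to confirm which of the four CN4054 ports share an ASIC. The following is a minimal PowerShell sketch only; it uses the PCI location reported by Get-NetAdapterHardwareInfo as a stand-in for ASIC membership, which is an assumption that should be verified against the adapter documentation.

# Sketch: show each adapter's PCI location; ports that report the same bus/device pair
# are assumed here to sit on the same ASIC (verify for your CN4054 firmware/driver level)
Get-NetAdapterHardwareInfo |
    Sort-Object Bus, Device, Function |
    Format-Table Name, InterfaceDescription, Bus, Device, Function -AutoSize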


    Figure 8. View of CN4093 network and storage connections

    A visual representation of the connections between the CN4054 converged adapters and the CN4093 converged switches is shown in Figure 9.

Figure 9. Illustration of converged storage and server connections

Combinations of physical and virtual isolated networks are configured at the host, switch, and storage layers to satisfy isolation requirements. At the physical host layer, each Hyper-V server has a four-port 10GbE Virtual Fabric Adapter (one Flex System CN4054 four-port 10Gb VFA module). At the physical switch layer, there are two redundant Flex System CN4093 modules with up to 42 internal 10GbE ports, two external 10GbE SFP+ ports, 12 external SFP+ Omni ports, and two external 40GbE QSFP+ ports (which can also be converted to eight 10GbE ports) for storage and host connectivity. To support all four 10GbE connections from each server, the CN4093 switches require the Upgrade 1 Feature on Demand (FoD) option. The servers and storage maintain connectivity through two FCoE connections using Multipath I/O (MPIO). The two 10GbE ports used for FCoE are also shared with the host management, cluster private, and cluster public data networks; these networks are generally not bandwidth intensive, and Windows QoS settings are applied to limit their bandwidth so that storage traffic is not impeded. On the data network side, Windows Server 2012 NIC teaming is used to provide fault tolerance and load balancing for all of the communication networks: host management, cluster private, cluster public, live migration, and virtual machine. This setup allows the most efficient use of network resources with a highly optimized configuration for both network and storage connectivity.

At the physical switch layer, VLANs are used to provide logical isolation between the various storage and data traffic. A key element is to properly configure the switches to maximize the available bandwidth and reduce congestion; however, based on individual environment preferences, there is flexibility regarding how many VLANs are created and what type of role-based traffic they handle. Once a final selection is made, make sure that the switch configurations are saved and backed up. All switch ports, with the exception of the Flex System V7000 ports, should be configured as tagged, and the VLAN definitions should be specified on each port as needed. Non-FCoE networks will need to have VLAN assignments made in Windows Server or Hyper-V.

Inter-switch links are created between the two CN4093 switches. Link Aggregation Control Protocol (LACP) is used to combine two 10GbE switch ports into a single entity, which is then connected to a similar number of ports on the second switch. LACP teams provide higher bandwidth connections and error correction between LACP team members. An LACP team can also be used to support the uplink connections to a corporate network. In addition to LACP, a Virtual Link Aggregation Group (VLAG) is configured between the two switches. VLAGs allow multi-chassis link aggregation and facilitate active-active uplinks of access-layer switches. VLAG with spanning tree disabled helps avoid the wasted bandwidth associated with blocked links. An example of a VLAG configuration is illustrated in Figure 10.

    Figure 10. Typical Data Center Switching Layers with STP vs. VLAG

    A high level network VLAN overview is shown in Figure 11.


    Figure 11. IBM Flex System Solution for Microsoft Hyper-V Architecture

VLAN Description

The validated configuration uses the VLANs described in Table 1.

Network      Name                                         Description
VLAN 30      Management Cluster Private Network           Private cluster communication for the 2-node management cluster
VLAN 31      Management Cluster Live Migration Network    VM Live Migration traffic for the 2-node management cluster
VLAN 33      SQL Cluster Private                          Private cluster communication for the SQL guest cluster
VLAN 35      SCVMM Cluster Private                        Private cluster communication for the SCVMM guest cluster
VLAN 37      Production Cluster Private Network           Private cluster communication for the 8-node production cluster
VLAN 38      Production Cluster Live Migration Network    VM Live Migration traffic for the 8-node production cluster
VLAN 40      Cluster Public Network                       Used for host management and the cluster public network
VLAN 50      VM Communication Network                     VM communication
VLAN 70      Out of Band Management Network               Used for out-of-band connections to CMM and IMM devices
VLAN 1002    FCoE Storage Network                         Used for FCoE storage traffic
VLAN 4094    Inter-Switch Link (ISL) VLAN                 Dedicated to the ISL

Table 1. VLAN definitions


Management Cluster Private and CSV Networks (VLAN 30)

This network is reserved for cluster private (heartbeat and Cluster Shared Volume) communication between the clustered management servers. Switch ports should be configured to appropriately limit the scope of each of these VLANs. This requires the appropriate switch ports (see Table 2) for each management x240 Compute Node to be set as tagged, and the VLAN definitions should include these ports on each switch. The networks using these ports must specify VLAN 30 in Windows Server 2012. There should be no IP routing or default gateways for cluster private networks.
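Once the management cluster exists, the corresponding cluster network can also be restricted to internal cluster traffic. A minimal PowerShell sketch, assuming the cluster network on VLAN 30 has been renamed "Mgmt-Cluster-Private" (an illustrative name):

Import-Module FailoverClusters
# Role 1 = cluster communication only (no client access); 3 = cluster and client
$net = Get-ClusterNetwork -Name "Mgmt-Cluster-Private"
$net.Role = 1
# Optionally lower the metric so CSV/heartbeat traffic prefers this network
$net.Metric = 1000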

Management Cluster Live Migration Network (VLAN 31)

A separate VLAN should be created to support Live Migration for the management cluster. Switch ports should be configured to appropriately limit the scope of each of these VLANs. This requires the appropriate switch ports used by each management x240 Compute Node to be set as tagged, and the VLAN definitions should include these ports on each switch. The networks using these ports must specify VLAN 31 in Windows Server. There should be no routing on the Live Migration VLAN.
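Live migration can then be enabled on each management node and limited to the VLAN 31 subnet. A hedged sketch; the 192.168.31.0/24 subnet and the concurrent-migration limit are illustrative values, not part of the validated configuration:

# Enable live migration and restrict it to the Live Migration subnet
Enable-VMMigration
Set-VMHost -MaximumVirtualMachineMigrations 2
Add-VMMigrationNetwork "192.168.31.0/24"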

SQL Guest Cluster Private Network (VLAN 33)

A separate VLAN should be created to support the SQL guest cluster private network communication. Switch ports should be configured to appropriately limit the scope of each of these VLANs. This requires the appropriate switch ports used by each management x240 Compute Node to be set as tagged, and the VLAN definitions should include these ports on each switch. The networks using these ports must specify VLAN 33 in the Hyper-V settings for the SQL virtual machines. There should be no routing on the SQL Cluster Private VLAN.
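Tagging the SQL virtual machines' private adapters can be done from the host with Hyper-V PowerShell. A small sketch; the VM name and adapter name are illustrative assumptions:

# Tag the SQL guest cluster private adapter on each SQL VM with VLAN 33
Set-VMNetworkAdapterVlan -VMName "SQL-VM1" -VMNetworkAdapterName "ClusterPrivate" -Access -VlanId 33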

SCVMM Guest Cluster Private Network (VLAN 35)

A separate VLAN should be created to support the SCVMM guest cluster private network communication. Switch ports should be configured to appropriately limit the scope of each of these VLANs. This requires the appropriate switch ports used by each x240 Compute Node to be set as tagged, and the VLAN definitions should include these ports on each switch. The networks using these ports must specify VLAN 35 in Windows Server. There should be no routing on the SCVMM Cluster Private VLAN.

Production Cluster Private and CSV Networks (VLAN 37)

This network is reserved for cluster private (heartbeat and Cluster Shared Volume) communication between the clustered production servers. Switch ports should be configured to appropriately limit the scope of each of these VLANs. This requires the appropriate switch ports (see Table 2) for each production x240 Compute Node to be set as tagged, and the VLAN definitions should include these ports on each switch. The networks using these ports must specify VLAN 37 in Windows Server 2012. There should be no IP routing or default gateways for cluster private networks.

Production Cluster Live Migration Network (VLAN 38)

A separate VLAN should be created to support Live Migration for the production cluster. Switch ports should be configured to appropriately limit the scope of each of these VLANs. This requires the appropriate switch ports used by each production x240 Compute Node to be set as tagged, and the VLAN definitions should include these ports on each switch. The networks using these ports must specify VLAN 38 in Windows Server. There should be no routing on the Live Migration VLAN.

Cluster Public Network (VLAN 40)

This network supports communication for the host management servers of both the management cluster (including the System Center components) and the compute cluster. One team over two 10GbE ports, created using the Windows Server 2012 in-box NIC teaming feature, is used to provide fault tolerance and load balancing for the host management (cluster public) and cluster private networks. This NIC team shares bandwidth with the two FCoE 10Gb ports, and Quality of Service (QoS) is applied from Windows Server to limit its bandwidth usage. The management cluster will also support VLAN 40 on the VM communication network to allow the System Center components to manage the host servers. VLAN identification will have to be set in either Windows Server or Hyper-V accordingly.
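Because the FCoE-capable ports cannot participate in LACP (see the note under Figure 12), the team on these two ports is switch independent, and the outbound rate of management traffic can be capped with a Windows QoS policy. A minimal sketch under those assumptions; the adapter names, subnet, and throttle rate are illustrative, not values from the validated configuration:

# Switch-independent team over the two ports that also carry FCoE traffic
New-NetLbfoTeam -Name "HostMgmtTeam" -TeamMembers "FCoE-Port1","FCoE-Port2" -TeamingMode SwitchIndependent -LoadBalancingAlgorithm TransportPorts
# Team interface tagged for the cluster public VLAN
Add-NetLbfoTeamNic -Team "HostMgmtTeam" -VlanID 40
# Cap host-management traffic so it cannot crowd out FCoE storage traffic
New-NetQosPolicy -Name "HostMgmt" -IPDstPrefixMatchCondition "192.168.40.0/24" -ThrottleRateActionBitsPerSecond 2GB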

Production VM Communication Network (VLAN 50)

This network supports communication for the virtual machines. One LACP team over two 10GbE ports, created using the Windows Server 2012 in-box NIC teaming feature, is used to provide fault tolerance and load balancing for live migration and virtual machine communication. This requires the appropriate switch ports (see Table 2) for each production x240 Compute Node to be set as tagged, and the VLAN definitions should include these ports on each switch. Network settings for proper VLAN identification will need to be applied to each virtual machine's network interface. If additional segregation between virtual machine networks is required, the VM team network switch ports can have additional VLAN IDs assigned as needed. Each VM can then set the necessary VLAN ID as part of its network settings in Hyper-V.
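A minimal PowerShell sketch of the Windows side of this design: an LACP team over the two non-FCoE ports (which also brings up the corresponding switch-side VLAG, as noted in the ISCLI section), an external Hyper-V switch bound to that team, and per-VM VLAN tagging. The adapter, switch, and VM names are illustrative assumptions:

# LACP team over the two data-only ports (matches the switch VLAG/LACP configuration)
New-NetLbfoTeam -Name "VMTeam" -TeamMembers "Data-Port1","Data-Port2" -TeamingMode Lacp -LoadBalancingAlgorithm HyperVPort
# External Hyper-V virtual switch bound to the team interface; the host OS does not use this switch
New-VMSwitch -Name "VM-Switch" -NetAdapterName "VMTeam" -AllowManagementOS $false
# Tag an individual VM's adapter for the production VM network
Set-VMNetworkAdapterVlan -VMName "App-VM1" -Access -VlanId 50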

Out of Band Management Network (VLAN 70)

This network supports communication for the out-of-band management network. As shown in Figure 2, the CMM provides the communication entry point for the Flex System x240 Integrated Management Module (IMM), the I/O modules, and the Flex System V7000 storage. For best practice and security reasons, this network should be isolated and integrated into the customer's existing management network environment. Routing will have to be configured between VLAN 70 and VLAN 40 to support communication between the System Center components and the management environment.

FCoE Storage Network (VLAN 1002)

All FCoE storage traffic between the Flex System V7000 and the Flex System ITEs should be isolated on VLAN 1002.

Inter Switch Link Network (VLAN 4094)

A dedicated VLAN to support the ISL between the two switches should be implemented. There should be no Spanning Tree Protocol on the ISL VLAN.

x240 Compute Node Network Ports

Each host server has one CN4054 4-port 10Gb device that is used for network connections, FCoE storage connectivity, public and private cluster communication, and VM communication. The FCoE connections to storage use Multipath I/O drivers to ensure fault tolerance and load balancing. Windows Server 2012 NIC teaming is used for all but the FCoE networks to provide fault tolerance and spread the workload across the network communication interfaces. The NIC teams follow best practice by ensuring the team members come from each of the two ASICs on the CN4054 CNA card, so that no single ASIC failure can take down the team. See Figure 9 for more information.
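The Windows roles and features this host design relies on can be installed up front. A short sketch (run on each node; a restart is required for Hyper-V):

# Hyper-V, failover clustering, and MPIO are used by every x240 host in this design
Install-WindowsFeature -Name Hyper-V, Failover-Clustering, Multipath-IO -IncludeManagementTools -Restart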

Physical Host Data Access

• Each compute node will utilize four connections to the Ethernet network(s). Figure 9 shows four active data connections, two connections from each ASIC.
• By default the CN4093 switch ports are set as untagged. This will need to be changed to tagged and VLAN IDs assigned according to Table 3. The default VLAN ID will remain with a PVID equal to 1.
• Windows Server 2012 NIC teaming will be used to form high-bandwidth, fault-tolerant teams.


Physical Host FCoE Storage Access

• Each compute node will utilize two connections to the FCoE network. Figure 9 shows two active FCoE connections, one connection from each ASIC.
• Since the host servers use their switch ports for both data and storage traffic, each CN4093 switch port will be changed from the default untagged mode to tagged. The default VLAN ID will remain with a PVID equal to 1. The correct VLAN IDs for storage and data must be specified for each switch port, as shown in Table 3.

Storage Controller FCoE Access

At the physical storage layer, the V7000 Storage Node uses FCoE ports for storage connectivity. Each controller has two 10GbE converged ports for FCoE traffic. The V7000 Storage Node Device Specific Module (DSM) manages the multiple I/O paths between the host servers and storage and optimizes the storage paths for maximum performance. VLANs are used to isolate storage traffic from other data traffic on the switches. FCoE traffic is prioritized by the CN4093 switches to maximize storage traffic throughput.

• Each Flex System V7000 Storage Node controller will maintain two FCoE connections to help balance storage workloads.
• One connection is provided from each controller to each switch (see Figure 8 above).
• By default the CN4093 switch ports are set as untagged. The ports for the storage controllers will need to be assigned to the FCoE VLAN ID 1002.

IBM Flex System CN4093 Converged Ethernet Configuration

The IBM Hyper-V Virtualization Reference Architecture uses two Flex System CN4093 switches containing up to 64 10GbE ports each. The CN4093 provides primary storage access and data communication services. Redundancy across the switches is achieved by creating an inter-switch link (ISL) and VLAG between switches 1 and 2. The inter-switch link is created using two external 10GbE links from each switch to form an LACP team. Virtual Link Aggregation (VLAG) is a method to create LACP teams between two independent switches. Corporate uplink connections can be achieved with VLAGs and Virtual Router Redundancy Protocol (VRRP), as shown in Figure 12, depending on the customer configuration. Each of the CN4093 switches requires Upgrade 1 to activate the additional ports needed to fully support all the CN4054 ports on each x240 Compute Node.

Figure 12. Active-Active Configuration using VRRP and VLAGs

Note: Switch ports used for FCoE do not support VLAG/LACP. Windows Server 2012 switch-independent NIC teaming is used for those ports to support TCP/IP data traffic.


Note: Use of VLAGs between the two CN4093 switches requires all routing to be performed in upstream switches.

Management of the CN4093 switches can be performed either through a command-line interface or a web-based user interface (Figure 13). The default user name and password for the IBM CN4093 switches is admin/admin. This should be changed to a unique password that meets your security requirements.

Figure 13. CN4093 Administration Interface

Spanning Tree should be enabled on all switches according to your organizational requirements. Spanning Tree is disabled for the VLAG ISL VLAN.

By default the switches are assigned the following management IP addresses:

192.168.70.120 - Switch 1
192.168.70.121 - Switch 2

Table 2 shows the roles of each switch port for the two CN4093 switches in the configuration.


    Table 2. CN4093 Switch port roles

    Table 3 describes the VLAN configuration of the ports for each of the two CN4093 switches in the configuration.


Table 3. CN4093 switch port VLAN roles

Ports are set as untagged by default. All Flex System x240 ports will be set to tagged in this configuration. The port VLAN ID (PVID) should remain set to 1. This can be done from the switch GUI under Configuration (Figure 14), or from the ISCLI as shown in the section "Using ISCLI to configure CN4093 switches."

    Figure 14. Setting VLAN tagging and Default VLAN ID


    VLAN assignments for the CN4093 switch ports can be made in the GUI as seen in Figure 15 or with the ISCLI as shown in section Using ISCLI to configure CN4093 switches.

Figure 15. Adding ports to a VLAN interface

Note: Regarding VLAG and LACP teams on the CN4093 switch: each LACP team has its own unique port admin key (the LACP key in Table 4), and each port that is a member of that team is set to this unique value. Spanning Tree Protocol is disabled on the VLAG ISL VLAN. VLAG and LACP configuration is shown in the section "Using ISCLI to configure CN4093 switches." Figure 16 shows the concept of using VLAG to create LACP teams from the NIC interfaces.

Figure 16. VLAG/LACP configuration interfaces

A VLAG is created only between the two ports not being used for FCoE. Each server's VLAG should consist of a team with one port from each CNA ASIC and span the two CN4093 switches. The VLAG ID is assigned automatically by the switch. Table 4 below describes the VLAG/LACP configurations for both switch 1 and switch 2:

    VLAG ID (Server) Switch1 Port Switch2 Port LACP Key

    65 (Server 1) INTB1 INTA1 101


    66 (Server 2) INTB2 INTA2 102

    67 (Server 3) INTB3 INTA3 103

    68 (Server 4) INTB4 INTA4 104

    69 (Server 5) INTB5 INTA5 105

    70 (Server 6) INTB6 INTA6 106

    71 (Server 7) INTB7 INTA7 107

    72 (Server 8) INTB8 INTA8 108

    73 (Server 9) INTB9 INTA9 109

    74 (Server 10) INTB10 INTA10 110

    ISL EXT1 EXT1 100

    ISL EXT2 EXT2 100

    Table 4. Summary of VLAGs for Switch 1 and 2

Using ISCLI to configure CN4093 switches

This section provides guidance on switch configuration using the ISCLI command-line environment. It is not an exhaustive step-by-step procedure, but it does provide details for each major component of the switch configuration, such as the ISL, VLANs, port configuration, and FCoE configuration. For details on accessing the ISCLI, refer to the IBM Flex System Fabric CN4093 Industry Standard CLI Command Reference.

    Grant Privilege and Enter Configuration mode

1. Grant Privilege Mode:

CN 4093# enable

2. Enter Configuration Mode:

CN 4093# configure terminal

    Configure the ISL and VLAG Peer Relationship

1. Enable VLAG Globally.

CN 4093(config)# vlag enable

    2. Configure the ISL ports of each switch and place them into a port trunk group:

    CN 4093(config)# interface port ext1-ext2

    CN 4093(config-if)# tagging

    CN 4093(config-if)# pvid 4094

    CN 4093(config-if)# lacp mode active

    CN 4093(config-if)# lacp key 100

    CN 4093(config-if)# exit

    3. Place the ISL into a dedicated VLAN. VLAN 4094 is recommended:

    CN 4093(config)# vlan 4094

    CN 4093(config-vlan)# enable

    CN 4093(config-vlan)# member ext1-ext2

    CN 4093(config-vlan)# exit

    4. If STP is used on the switch, turn STP off for the ISL:

    CN 4093(config)# spanning-tree stp 20 vlan 4094

    CN 4093(config)# no spanning-tree stp 20 enable

    5. Configure VLAG Tier ID. This is used to identify the VLAG switch in a multi-tier environment.

    CN 4093(config)# vlag tier-id 10


6. Define VLAG peer relationship:

CN 4093(config)# vlag isl vlan 4094

    CN 4093(config)# vlag isl adminkey 100

    CN 4093(config)# exit

    7. Save the configuration changes

    CN 4093# write

8. Configure the ISL and VLAG peer relationship for the second switch. Ensure the VLAG peer (VLAG Peer 2) is configured using the same ISL trunk type (dynamic or static), VLAN, STP mode, and tier ID used on VLAG Peer 1.

    Configure the Host VLAG

1. Make each port from the ITEs an active LACP team member. This needs to be done for each of the two ports per ITE (once on each switch). Refer to Table 4 as needed.

CN 4093(config)# interface port intb1 (Switch1)

    CN 4093(config)# interface port inta1 (Switch2)

    CN 4093(config-if)# lacp mode active

    CN 4093(config-if)# lacp key 101 (For ITE1)

    CN 4093(config-if)# exit

2. Enable the VLAG trunk on each switch. This allows LACP teams to be formed across the two CN4093 switches. This should be done for each LACP key in Table 4 on each switch.

CN 4093(config)# vlag adminkey 101 enable

3. Continue by configuring all required VLAGs on VLAG Peer 1 (Switch 1), and then repeat the configuration for VLAG Peer 2. For each corresponding VLAG on the peer, the port trunk type (dynamic or static), VLAN, and STP mode and ID must be the same as on VLAG Peer 1.

    4. Verify the completed configuration:

    CN 4093(config)# show vlag

    Note: The LACP teams will not show up as active on the switches until they are also formed in Windows Server 2012 with NIC teaming.

Configure the VLANs

Each switch must be configured with the data VLANs described in Table 3.

1. From the ISCLI of Switch1:

    CN 4093#enable

    CN 4093#configure terminal

    CN 4093(config-if)#interface port inta1-inta10,intb1-intb10,ext1-ext2

    CN 4093(config-if)#tagging

    CN 4093(config-if)#exit

    CN 4093(config)# vlan 30

    CN 4093(config-vlan)# enable

    CN 4093(config-vlan)# member inta9-inta10,ext1-ext2

    CN 4093(config-vlan)# exit

    CN 4093(config)# vlan 31


    CN 4093(config-vlan)# enable

    CN 4093(config-vlan)# member intb9-intb10,ext1-ext2

    CN 4093(config-vlan)# exit

    CN 4093(config)# vlan 33

    CN 4093(config-vlan)# enable

    CN 4093(config-vlan)# member intb9-intb10,ext1-ext2

    CN 4093(config-vlan)# exit

    CN 4093(config)# vlan 35

    CN 4093(config-vlan)# enable

    CN 4093(config-vlan)# member intb9-intb10,ext1-ext2

    CN 4093(config-vlan)# exit

    CN 4093(config)# vlan 37

    CN 4093(config-vlan)# enable

    CN 4093(config-vlan)# member inta1-inta8,ext1-ext2

    CN 4093(config-vlan)# exit

    CN 4093(config)# vlan 38

    CN 4093(config-vlan)# enable

    CN 4093(config-vlan)# member intb1-intb8,ext1-ext2

    CN 4093(config-vlan)# exit

    CN 4093(config)# vlan 40

    CN 4093(config-vlan)# enable

    CN 4093(config-vlan)# member inta1-inta10,intb9-intb10,ext1-ext2

    CN 4093(config-vlan)# exit

    CN 4093(config)# vlan 50

    CN 4093(config-vlan)# enable

    CN 4093(config-vlan)# member intb1-intb10,ext1-ext2

    CN 4093(config-vlan)# exit

    CN 4093(config-vlan)# show vlan

    CN 4093(config-vlan)# show interface status

    CN 4093(config-vlan)# write

    2. From the ISCLI of Switch2:

    CN 4093# enable

    CN 4093# configure terminal

    CN 4093(config-if)# interface port inta1-inta10,intb1-intb10,ext1-ext2

    CN 4093(config-if)# tagging

    CN 4093(config-if)# exit

    CN 4093(config)# vlan 30

    CN 4093(config-vlan)# enable

    CN 4093(config-vlan)# member intb9-intb10,ext1-ext2

    CN 4093(config-vlan)# exit

    CN 4093(config)# vlan 31

    CN 4093(config-vlan)# enable

    CN 4093(config-vlan)# member inta9-inta10,ext1-ext2


    CN 4093(config-vlan)# exit

    CN 4093(config)# vlan 33

    CN 4093(config-vlan)# enable

    CN 4093(config-vlan)# member inta9-inta10,ext1-ext2

    CN 4093(config-vlan)# exit

    CN 4093(config)# vlan 35

    CN 4093(config-vlan)# enable

    CN 4093(config-vlan)# member inta9-inta10,ext1-ext2

    CN 4093(config-vlan)# exit

    CN 4093(config)# vlan 37

    CN 4093(config-vlan)# enable

    CN 4093(config-vlan)# member intb1-intb8,ext1-ext2

    CN 4093(config-vlan)# exit

    CN 4093(config)# vlan 38

    CN 4093(config-vlan)# enable

    CN 4093(config-vlan)# member inta1-inta8,ext1-ext2

    CN 4093(config-vlan)# exit

    CN 4093(config)# vlan 40

    CN 4093(config-vlan)# enable

    CN 4093(config-vlan)# member intb1-intb10,inta9-inta10,ext1-ext2

    CN 4093(config-vlan)# exit

    CN 4093(config)# vlan 50

    CN 4093(config-vlan)# enable

    CN 4093(config-vlan)# member inta1-inta10,ext1-ext2

    CN 4093(config-vlan)# exit

CN 4093(config)# show vlan

    CN 4093(config)# show interface status

CN 4093(config)# write

3. Back up the configuration to a TFTP server (xx.xx.xx.yy is the IP address of the TFTP server):

    CN 4093# copy running-config tftp filename file.cfg address xx.xx.xx.yy mgt-port

    Note: The switch ports Ext13 and Ext14 should be configured as a fault tolerant team and used as uplink connections for Routing and CorpNet access. Routes will need to be established for VLANs 40, 50, and 70.

Configure Fibre Channel over Ethernet (FCoE)

Each switch must be configured to support Fibre Channel over Ethernet.

Note: It is easiest to perform the switch FCoE configuration after the servers have been enabled and configured for FCoE, the OS has been installed, and the PWWNs have been recorded for each server.

1. Enable FCoE on each switch from the ISCLI:


    CN 4093(config)# cee enable

    CN 4093(config)# fcoe fips enable

2. Enable FCoE Fibre Channel forwarding on each switch for VLAN 1002 from the ISCLI:

    CN 4093# enable

    CN 4093# configure terminal

    CN 4093(config)# system port ext11-ext12 type fc

    CN 4093(config)# vlan 1002

CN 4093(config-vlan)# enable

    (Switch1)

    CN 4093(config-vlan)# member inta1-inta10,inta13-inta14,ext11-ext12

    (Switch2)

    CN 4093(config-vlan)# member intb1-intb10,inta13-inta14,ext11-ext12

    CN 4093(config-vlan)# fcf enable

3. An FC alias is assigned to each HBA PWWN for easier name identification. Examples of FC alias assignments are shown in Table 5 and Table 6. Note that there is one port activated at the switch from each of the ASICs on the CN4054 CNA adapter. PWWNs can be viewed from the OneCommand Manager tool as shown in Figure 8.

    FC Alias PWWN

    ITE7-PortA7 10:00:00:90:fa:07:84:21

    ITE8-PortA8 10:00:00:90:fa:0d:4d:93

    ITE9-PortA9 10:00:00:90:fa:0d:2a:27

    ITE10-PortA10 10:00:00:90:fa:0d:33:75

    V7KLeft-PortA13 50:05:07:68:05:04:01:96

    V7KRight-PortA14 50:05:07:68:05:04:01:97

    Table 5) Switch 1 Fibre Channel alias example

    FC Alias PWWN

    ITE7-PortB7 10:00:00:90:fa:07:84:2d

    ITE8-PortB8 10:00:00:90:fa:0d:4d:9f

    ITE9-PortB9 10:00:00:90:fa:0d:2a:33

    ITE10-PortB10 10:00:00:90:fa:0d:33:81

    V7KLeft-PortA13 50:05:07:68:05:08:01:96

    V7KRight-PortA14 50:05:07:68:05:08:01:97

    Table 6) Switch 2 Fibre Channel alias example

Create FC aliases for each World Wide Port Name (WWPN) on each switch. Each Flex System ITE and storage canister presents one WWPN per switch for storage connections. Additional WWPNs will be created for virtual machines that need direct storage access (VMs must be created with virtual HBAs before their PWWNs can be viewed). Examples of FC alias definitions are below.


    CN 4093(config)# show fcns database

    CN 4093(config)#fcalias ITE9_PortA9 wwn 10:00:00:90:fa:0d:2a:27

    CN 4093(config)#fcalias ITE10_PortA10 wwn 10:00:00:90:fa:0d:33:75

    CN 4093(config)#fcalias V7KLeft_PortA13 wwn 50:05:07:68:05:04:01:96

    CN 4093(config)#fcalias V7KRight_PortA14 wwn 50:05:07:68:05:04:01:97

    CN 4093(config-zone)# exit

    CN 4093(config)# show fcalias

    CN 4093(config)# write

    Note: Virtual machine PWWN acquisition is shown in the Virtual Machine Fibre Channel Storage Connections section below.

    4. Create and populate fibre channel zones on each switch. These must contain all of the FC aliases

    previously created.

    Zone1 should include the storage and compute servers

    Zone2 should include the storage and management servers

    Zone3 should include the storage and SQL virtual machines (after VM configuration)

    Zone4 should include the storage and VMM virtual machines (after VM configuration)

Note: Virtual machines using vHBAs will not register in the Fibre Channel name service until the VM is running.

CN 4093(config)# zone name SW1Zone_MgmtSvrs

    CN 4093(config-zone)# member fcalias ITE9_PortA9

    CN 4093(config-zone)# member fcalias ITE10_PortA10

    CN 4093(config-zone)# member fcalias V7KLeft_PortA13

    CN 4093(config-zone)# member fcalias V7KRight_PortA14

    CN 4093(config-zone)# exit

    CN 4093(config)# show zone

    CN 4093(config)# write

5. An FC zoneset can contain multiple FC zones. Create and activate a Fibre Channel zoneset on each switch that contains the Fibre Channel zone(s) previously created.

    CN 4093(config)# zoneset name SW1_ZoneSet

    CN 4093(config-zoneset)# member SW1Zone_MgmtSvrs

    CN 4093(config-zoneset)# member SW1Zone_ComputeSvrs

CN 4093(config-zoneset)# member SW1Zone_SQL_Cluster (After VM Configuration)

    CN 4093(config-zoneset)# member SW1Zone_VMM_Cluster (After VM Configuration)

CN 4093(config-zoneset)# exit

    CN 4093(config)# show zoneset

    CN 4093(config)# zoneset activate name SW1_ZoneSet

    CN 4093(config)# write

6. Back up the configuration to a TFTP server (xx.xx.xx.yy is the IP address of the TFTP server):

    CN 4093# copy running-config tftp mgt-port

    Enter the IP address of the TFTP Server: xx.xx.xx.yy

    Enter the filename: SW1-June-24-2013.cfg


Active Directory

The IBM Hyper-V Fast Track reference configuration must be part of an Active Directory (AD) domain. This is required to form the Microsoft Windows Server 2012 clusters. An AD domain controller must exist and be reachable from this configuration.
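If a host is not already a domain member, it can be joined from PowerShell. The following is a minimal sketch; the domain controller name, domain name, and credential are assumptions and should be replaced with your environment's values.

# Verify the domain controller is reachable from the host (hypothetical DC name).
Test-Connection -ComputerName dc01.contoso.local -Count 2

# Join the domain and restart; a domain credential prompt will be displayed.
Add-Computer -DomainName contoso.local -Credential contoso\administrator -Restart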

IBM Flex System V7000 Storage Node Overview

The reference guide for the IBM Flex V7000 can be found in the Redbook IBM Flex System V7000 Storage Node Introduction and Implementation Guide.

Internal Flex Chassis Connections

The Flex System V7000 Storage Node is connected to the CN4093 switches through the internal Flex Chassis backplane; no physical cables are required. There are two 10GbE connections per controller. Each controller has one connection to switch 1 and one to switch 2 in order to create a mesh topology for fault tolerance and performance. The Flex V7000 Storage Node initial setup and IP configuration is done through the CMM web-based management UI. After the initial setup there will be a single management IP interface that can be used to manage the storage node through the UI. Figures 17 and 18 show the initial setup of the Flex V7000 storage.

    Figure 17. Initial Flex V7000 Storage setup via CMM

    Figure 18. Initial Flex V7000 Storage wizard


Management

Management of the V7000 Storage Node is performed using the web-based user interface (Figure 19). To begin management of the V7000 Storage Node, perform the following actions:

1. From the web browser, establish a connection with the V7000 Storage Node Cluster TCP/IP address

    (Figure 18)

    V7000 Storage Node Cluster Management IP Address example: 192.168.70.200

    Default UserID and Password are superuser / passw0rd (with a zero)

    Figure 19. Establish connection to V7000 Storage Node management UI

IBM Flex System V7000 Storage Node and Cluster Storage Considerations

The V7000 Storage Node supports a concept called Managed Disks (MDisks). An MDisk is a unit of storage that the V7000 Storage Node virtualizes. This unit is typically a RAID group created from the available V7000 Storage Node disks, but it can also be a logical volume from external third-party storage. MDisks can be allocated to various storage pools in the V7000 Storage Node for different uses and workloads. A storage pool is a collection of up to 128 MDisks that are grouped to provide the capacity to create virtual volumes, or LUNs, which can be mapped to the hosts. MDisks can be added to a storage pool at any time to increase capacity and performance.

Microsoft Windows Failover Clustering supports Cluster Shared Volumes (CSVs). Cluster Shared Volumes provide shared primary storage for virtual machine configuration files and virtual hard disks consistently across the entire cluster. All CSVs are visible to all cluster nodes and allow simultaneous access from each node.

A single storage pool will be created from the twenty-four storage devices on the Flex V7000 storage node. This will allow the Flex V7000 to fully monitor and manage I/O utilization across the volumes and automatically move frequently accessed portions of files to the SSDs for better performance. Host mapping will use a host's worldwide port names to limit host access to only those volumes it is authorized to access.

Thin provisioning is a technology that allows you to create a volume of a specific size and present that volume to an application or host. The Flex V7000 will only consume the provisioned space on an as-needed basis. This allows the IT administrator to set initial volume sizes, then monitor and add storage as needed to support the volumes. Figure 20 shows a recommended initial disk configuration for the Flex V7000 Storage Node, and Table 7 lists the volumes, recommended sizes per the Microsoft deployment guide, and host mappings.


Figure 20. Flex V7000 Storage

Note: This storage configuration is sufficient for initial setup and deployment. Disk configuration and performance can be highly workload dependent. It is recommended to profile and analyze your specific environment to ensure adequate space and performance for your organizational needs.

Volume Name Size Mapped Hosts

Management CSV1 2TB Management Host1 - Host2
Management Quorum 1GB Management Host1 - Host2
Compute CSV1 4TB Compute Host1 - Host8
Compute Quorum 1GB Compute Host1 - Host8
SQL Quorum 1GB SQLVM1-SQLVM2
SQL - Service Manager Management Database 145GB SQLVM1-SQLVM2
SQL - Service Manager Management Logs 70GB SQLVM1-SQLVM2
SQL - Service Manager Data Warehouse Data 1TB SQLVM1-SQLVM2
SQL - Service Manager Data Warehouse Logs 500GB SQLVM1-SQLVM2
SQL - Service Manager Analysis Service Data 8GB SQLVM1-SQLVM2
SQL - Service Manager Analysis Service Logs SQLVM1-SQLVM2
SQL - Service Manager SharePoint, Orchestrator, App Controller Data 10GB SQLVM1-SQLVM2
SQL - Service Manager SharePoint, Orchestrator, App Controller Logs 5GB SQLVM1-SQLVM2
SQL - Virtual Machine Manager and Update Services Data 6GB SQLVM1-SQLVM2
SQL - Virtual Machine Manager and Update Services Logs 3GB SQLVM1-SQLVM2
SQL - Operations Manager Data 130GB SQLVM1-SQLVM2
SQL - Operations Manager Logs 65GB SQLVM1-SQLVM2
SQL - Operations Manager Data Warehouse Data 1TB SQLVM1-SQLVM2
SQL - Operations Manager Data Warehouse Logs 500GB SQLVM1-SQLVM2
System Center Virtual Machine Manager Quorum 1GB SCVMM_VM1-SCVMM_VM2
SCVMM Library Volume 500GB SQLVM1-SQLVM2

Table 7. Volume roles, sizes, and mappings

Note: The Flex V7000 supports Thin Provisioning as a standard feature. The volumes above can be Thin Provisioned to optimize the use of storage resources.

Storage Pool and Volume Configuration

The following step-by-step process, accompanied by sample screenshots, illustrates how to configure the V7000 Storage Node.

1. Create the MDisks (Figure 21) from Pools -> Internal Storage. Each class of storage is shown with all the devices in that class. Figure 20 defines a general configuration for most initial workloads.


Figure 21. Flex V7000 MDisk Creation

2. Create a new storage pool by using the MDisks that were created in the previous step, as shown in Figure 22.

    Figure 22. V7000 Storage Node Pool Creation

    3. Logical Disks can now be created from the pool created in the previous step (Figure 23).


    Figure 23. V7000 Storage Node Logical Drives Creation

Host Server Definition and Volume Mapping

The next step is to define the host servers to the IBM Flex V7000 storage node and map volumes to them. This process limits access to the volumes to only authorized host servers.

Note: It is easiest to perform volume mapping after the switches and hosts have been configured for FCoE, the OS has been installed, and the PWWNs have been recorded for each server.

    1. Host definitions are created in the Flex System V7000 storage console. Hosts should be created as

    Fibre Channel Hosts, and include their WWPNs in the definition as shown in Figure 24.


    Figure 24. Host definition with PWWNs associated.

2. Map the logical drives to one or more host servers (Figure 25) as shown in Table 7.

Figure 25. Flex V7000 Logical Drives Host Mapping

    Note: The IBM Flex V7000 will present a warning when mapping volumes to more than one host.

    Volumes mapped to hosts should now be visible in Windows Disk Manager. A rescan may be

required. Only one host server can have the storage online at a time until the cluster is configured. One server per cluster must be chosen to initially bring each volume online and format it.

IBM Flex System x240 Management Fabric Setup

The management fabric for this environment is built around a two-node Windows Server cluster consisting of two dual-socket IBM Flex System x240 Compute Nodes with 256GB of RAM and one CN4054 4-port converged network adapter per node. This independent cluster hosting the management fabric, based on Microsoft System Center 2012 SP1 with IBM upward integration modules, helps eliminate single points of failure and allows near-continuous access to the management applications used to deploy, maintain, and monitor the production environment. The High Availability (HA) configuration used for each management component varies depending on the needs and capabilities of the individual component. A high-level overview of the management fabric can be seen in Figure 26.


Figure 26. Microsoft System Center 2012 Management Fabric for IBM Flex System for Microsoft Hyper-V

Setup involves the installation and configuration of Windows Server 2012 Datacenter edition, networking, and storage on each node. Highly available VMs can then be created to perform the various management tasks for the management framework.

Pre-OS Installation

The following configuration steps should be performed before installing an operating system.

1. Confirm that the CN4054 4-port Ethernet devices are installed in each compute node.

    The FCoE Feature on Demand (FoD) must be imported on each node through an IMM connection as shown in Figure 26.

Note: The default IMM address for each x240 Compute Node is 192.168.70.1xx, where xx is equal to the two-digit slot number that the compute node is installed in (Slot 1 = 01).


    Figure 26. Feature on Demand Key Activation

    2. It is best to include any Features on Demand (FoD) keys needed with the initial order so they can be

    installed at the factory. If ordering IBM FoD separately please contact your IBM Sales representative for assistance.

    For questions regarding Features on Demand refer the Redpaper Using IBM Features on Demand.

3. If the Feature on Demand Unique Identifier (FUI) is needed, it can be found in UEFI under System Settings -> Network Emulex Device -> Feature on Demand. There should be two unique FUIs per CN4054 card, one for each ASIC.

    Once Windows and the Emulex OneCommand tool are installed, the HBACmd utility can be used to retrieve the FUI with the format:

    C:\Program Files\Emulex\Util\OCManager> HbaCmd getfodinfo xx-xx-xx-xx-xx-xx

    where xx-xx-xx-xx-xx-xx is the MAC address of the first NIC port on each ASIC.

4. The Flex x240 firmware for the following devices should be evaluated and, if necessary, flashed to the latest level: UEFI, IMM, DSA, SAS, and CNA.

    For out-of-band updates, IBM Bootable Media Creator will create a bootable image of the latest IBM x240 updates (downloaded previously).
    o An external DVD device will be required, or can be mounted to the server using the virtual media capabilities of the IMM.
    o Please refer to the IBM ToolsCenter Bootable Media Creator Installation and User's Guide 9.41 for assistance.

    IBM Fast Setup is an optional tool that can be downloaded and used to configure multiple IBM System x, BladeCenter, or Flex systems simultaneously.

    In-band updates can be applied using the IBM UpdateXpress tool to download and install IBM recommended updates on the IBM Flex System platform.
    o Learn more about IBM UpdateXpress.
    o Please refer to the IBM UpdateXpress System Pack Installation and User's Guide for assistance.

    5. By default the x240 Compute Node settings are set to balance power consumption and performance.

    To change these settings, boot to UEFI mode, and select System Settings-> Operating Mode (Figure 27). Change the settings to fit your organizational needs.


    Figure 27. Operating Mode settings in UEFI

6. Disable Multichannel mode on the CN4054. This can be done in UEFI under System Settings -> Network Emulex Device, as shown in Figure 28.

    Figure 28. Disabling Multichannel Mode in CN4054 in UEFI

    7. Enable FCoE on the CN4054 converged network adapter. This is done in the UEFI under System Settings->Network->Network Device List. Select the top device on each bus as shown in Figure 29.


    Figure 29. Identifying CNA devices to be configured

    8. Change the setting from NIC to FCoE as shown in Figure 30.

    Figure 30. Changing CNA property to support FCoE

    9. Repeat this action for the second device on the other bus (second ASIC).

10. The two local disks should be configured as a RAID 1 array.

11. Confirm that the CN4093 switches have been configured.

    Inter-switch links, VLAGs, and VLANs should have been created and assigned as described above.

    Switch FCoE configuration is easiest after the operating system is installed and PWWNs can be collected for each server.

    12. The V7000 Storage Node MDisks, Pools, and Volumes must be configured as defined in the storage

    section above to be ready for host mapping assignments.

Volume mapping is easiest after the operating system is installed and PWWNs can be collected for each server.

OS Installation and Configuration

Windows Server 2012 Datacenter allows unlimited Windows Server virtual machine instances on the host servers and is the preferred edition for deploying Hyper-V compute configurations. Windows Server 2012 Standard also supports clustering, but only provides licensing rights for up to two Windows Server virtual machine instances (additional licenses would be needed for additional VMs). Windows Server 2012 Standard Edition is intended for physical servers that run few or no virtual machines.

    1. Install Windows Server 2012 Datacenter Edition.

    2. Set your server name, and join the domain.

3. Install the Hyper-V role and the Failover Clustering feature (a PowerShell sketch for this step follows this list).

    4. Run Windows Update to ensure any new patches are installed.

    5. The latest CN4054 NIC and FCoE drivers should be downloaded from IBM Fix Central and installed.

    A complete package of all platform related updates by operating system is available from IBM Fix Central as an UpdateXpress System Pack.

The associated Windows installer is downloaded separately, also from IBM Fix Central.

    6. Install the Emulex OneCommand Manager utility to provide additional information for the CN4054 converged network adapter.

    7. Multipath I/O is used to provide balanced and fault tolerant paths to V7000 Storage Node. This

    requires an additional Flex V7000 MPIO DSM specific driver to be installed on the host servers before attaching the storage.

The Microsoft MPIO prerequisite will also be installed if it is not already on the system.

    8. Install the IBM Systems Director platform agent 6.3.3
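As an alternative to Server Manager, the role and features called out in steps 3 and 7 can be installed from PowerShell. This is a minimal sketch; it assumes no additional installation media is required and that a restart is acceptable at this point.

# Install Hyper-V, Failover Clustering, and Multipath I/O with their management tools.
Install-WindowsFeature -Name Hyper-V, Failover-Clustering, Multipath-IO -IncludeManagementTools -Restart

# Confirm the install state of each feature after the reboot.
Get-WindowsFeature Hyper-V, Failover-Clustering, Multipath-IO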

Network Configuration

One key new feature of Windows Server 2012 is in-box NIC teaming. In-box teaming can provide fault tolerance and link aggregation, and can be tailored to host or virtual machine (VM) connectivity. Two separate Windows Server 2012 teams will be created in the following configuration. One team is used to support host server management and cluster private traffic. A second team is used to support Live Migration and VM communication.

Note: Be careful when identifying and enumerating the network interfaces in each host to ensure teams are spread across the two network interfaces and properly routed to the correct switches. Use the Emulex OneCommand Manager utility to review each network interface and MAC address.

Note: Refer to Figures 8 and 9 to clarify the mapping of server ports to the two CN4093 switch ports. Each ASIC has two network interfaces; the top one is for switch 1 and the bottom one for switch 2. The following PowerShell commands can also be useful (additional PowerShell scripts can be found in the PowerShell Scripts section in the appendix). Each team should contain a member from each bus.

Get-NetAdapterHardwareInfo | Sort-Object -Property Bus,Function

    Get-NetAdapter -InterfaceDescription "Emulex*"

    Rename-NetAdapter -Name Ethernet -NewName PortA9_SW1

Figure 31 shows a NIC interface naming example.


Figure 31. Displaying all Emulex network adapters with PowerShell

Windows Server 2012 in-box NIC teaming can be found in the Server Manager console, as shown in Figure 32. The basic NIC teaming tools are available in the Server Manager GUI; however, it is better to use PowerShell for the added options and flexibility.

Note: Execute LBFOAdmin.exe from the command line to open the NIC Teaming graphical user interface.

Figure 32. NIC teaming in Server Manager

The team that is created to support cluster public and private communication between the host servers will share the same ports that are being used for FCoE traffic. The CN4093 switches prioritize FCoE traffic over Ethernet data traffic. To further reduce the potential for bandwidth oversubscription, Quality of Service (QoS) limits will be placed on these interfaces in Windows Server 2012. This team should be created using the two NIC ports described in Table 2.

1. The ClusterTeam should be created using the default Switch Independent mode and Address Hash mode with the following PowerShell commands.

    New-NetLbfoTeam -Name ClusterTeam -TeamMembers PortA9_SW1,PortB9_SW2 -TeamingMode SwitchIndependent

    Note: The team member names will vary by host server

2. The second team will be created from the NIC interfaces that are configured to support LACP. An LACP team is backed by the full aggregated bandwidth of the two ports.

    New-NetLbfoTeam -Name VLAG -TeamMembers "VLAG PortA9_SW2","VLAG PortB9_SW1" -TeamingMode Lacp


    3. When Windows Server 2012 NIC teaming is complete there should be two teams displayed when queried in PowerShell (Figure 33).

    Figure 33. Windows Server NIC teaming.
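The query referenced in step 3 can be run with the in-box NIC teaming cmdlets; a minimal sketch is shown below, assuming the team names created earlier in this section.

# List the teams and confirm both ClusterTeam and VLAG are present.
Get-NetLbfoTeam

# Review which physical adapters belong to each team.
Get-NetLbfoTeamMember | Sort-Object Team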

    4. Virtual switches will be created on each of these teams. Generally each Hyper-V virtual switch can

    provide one interface to the management operating system. A second virtual NIC for use by the host operating system will be created on the ClusterTeam in order to provide a segregated network path for the Cluster Private/CSV network. The following PowerShell commands are used to create the virtual switches and second virtual NIC.

New-VMSwitch -Name MgmtClusterPublic -NetAdapterName ClusterTeam -MinimumBandwidthMode Absolute -AllowManagementOS $true

    Add-VMNetworkAdapter -ManagementOS -Name MgmtClusterPrivate -SwitchName MgmtClusterPublic

    New-VMSwitch -Name VM_Communication -NetAdapterName VLAG -MinimumBandwidthMode Weight -AllowManagementOS $true

    5. Rename the management facing network interface on the VM_Communication team to reflect the role

    that it is fulfilling.

Rename-VMNetworkAdapter -ManagementOS -Name VM_Communication -NewName MgmtLiveMigration

6. Confirm the network interfaces are available to the management operating system with the following PowerShell command, as shown in Figure 34.

    Get-VMNetworkAdapter -ManagementOS

    Figure 34. Displaying all network interfaces created for host partition from vSwitches

    7. Assign VLAN IDs to each of these interfaces with the following PowerShell commands (Figure 35).

Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName MgmtClusterPublic -Access -VlanId 40

    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName MgmtClusterPrivate -Access -VlanId 30

    Set-VMNetworkAdapterVlan -ManagementOS -VMNetworkAdapterName MgmtLiveMigration -Access -VlanId 31

    8. Confirm your Management OS network adapter names and VLAN assignments (Figure 35).

Get-VMNetworkAdapterVlan -ManagementOS

    Figure 35. VLAN Assignments on Management OS NIC interfaces

9. Bandwidth limits should be placed on these network interfaces. The MgmtClusterPublic virtual switch was created with the Absolute bandwidth mode. This allows a maximum bandwidth cap to be placed on these network interfaces with the following PowerShell commands. Maximum bandwidth is defined in bits per second.

    Set-VMNetworkAdapter -ManagementOS -Name MgmtClusterPublic -MaximumBandwidth 2GB

    Set-VMNetworkAdapter -ManagementOS -Name MgmtClusterPrivate -MaximumBandwidth 1GB
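Because MaximumBandwidth is expressed in bits per second, the 2GB literal above evaluates to 2147483648, which caps the MgmtClusterPublic interface at roughly 2 Gbps. The following is a minimal verification sketch; the BandwidthSetting property is assumed to reflect the values just applied.

# PowerShell size literal: 2GB = 2 * 1024 * 1024 * 1024 = 2147483648 (interpreted by Hyper-V as bits per second).
2GB

# Review the bandwidth settings on the management OS adapters.
Get-VMNetworkAdapter -ManagementOS | Select-Object Name, BandwidthSetting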

10. The network interface used for Hyper-V Live Migration is on the virtual switch that was created with the Weight bandwidth mode. A minimum bandwidth weight of 30 will be set for the Live Migration network.

    Set-VMNetworkAdapter -ManagementOS -Name MgmtLiveMigration -MinimumBandwidthWeight 30

11. Assign TCP/IP addresses and confirm network connectivity for all of the network connections on each VLAN.

    New-NetIPAddress -InterfaceAlias "vEthernet (MgmtClusterPublic)" -IPAddress 192.168.40.21 -PrefixLength 24

    New-NetIPAddress -InterfaceAlias "vEthernet (MgmtClusterPrivate)" -IPAddress 192.168.30.21 -PrefixLength 24

    New-NetIPAddress -InterfaceAlias "vEthernet (MgmtLiveMigration)" -IPAddress 192.168.31.21 -PrefixLength 24

12. Confirm that the cluster public network (VLAN 40) is at the top of the network binding order.

13. The Cluster Private and Live Migration networks should not have any default gateway defined.
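Steps 12 and 13 can be checked from PowerShell. This is a minimal sketch using the interface names from this guide; the binding order itself is adjusted through the advanced settings of the Network Connections control panel.

# Confirm no default gateway (0.0.0.0/0 route) exists on the Cluster Private and Live Migration interfaces.
Get-NetRoute -DestinationPrefix "0.0.0.0/0" | Select-Object InterfaceAlias, NextHop

# Review the IP configuration of the virtual network interfaces in the management OS.
Get-NetIPConfiguration | Where-Object { $_.InterfaceAlias -like "vEthernet*" }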

Host Storage Connections

Two volumes are required to support the Management host cluster as follows:

    2TB Volume to be used as a Cluster Shared Volume

    1GB Volume to be used as the Management Cluster Quorum

Once the switch FCoE configuration and Flex V7000 volume mapping have been completed, storage volumes should be visible to the host servers.

1. Confirm the disks are visible in Windows Disk Manager.

    A disk rescan may be required.

2. From one server, bring each disk online and format it as a GPT disk for use by the cluster (a PowerShell sketch for this step appears after the note below). Assigning drive letters is optional since the disks will be used for specific clustering roles such as CSV and Quorum.


    Validate that each potential host server can see the disks and bring them online.

    Note: Only one server can have the disks online at a time, until they have been added to Cluster Shared Volumes.
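Step 2 can also be performed with the in-box storage cmdlets. The sketch below is illustrative only; it assumes the newly mapped V7000 volumes are the offline, uninitialized disks reported by Get-Disk, and the disk number and volume label are examples.

# List disks that are currently offline (the newly mapped V7000 volumes should appear here).
Get-Disk | Where-Object { $_.OperationalStatus -eq "Offline" }

# Bring one disk online, initialize it as GPT, and format it (disk number 1 used as an example).
Set-Disk -Number 1 -IsOffline $false
Initialize-Disk -Number 1 -PartitionStyle GPT
New-Partition -DiskNumber 1 -UseMaximumSize | Format-Volume -FileSystem NTFS -NewFileSystemLabel "MgmtCSV1" -Confirm:$false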

Management Host Cluster Creation

Microsoft Windows clustering will be used to join the two host servers together in a highly available configuration that allows both servers to run virtual machines supporting the production environment. Virtual machine workloads should be balanced across all hosts, and careful attention should be paid to ensure that the combined resources of all virtual machines do not exceed those available on N-1 cluster nodes. Staying below this threshold allows any single server to be taken out of the cluster while minimizing the impact to your management servers. A policy of monitoring resource utilization such as CPU, memory, and disk (space and I/O) will help keep the cluster running at optimal levels and allow for proper planning of additional resources as needed.

1. Temporarily disable the default IBM USB Remote NDIS Network Device on all cluster nodes. If it is left enabled, cluster validation will issue a warning during network detection because all nodes share the same IP address on that device. These devices can be re-enabled after validation.

    2. Using the Failover Cluster Manager on one of the two management nodes, run the Cluster Validation

    Wizard to assess the two physical host servers as potential cluster candidates and address any errors.

    The cluster validation wizard checks for available cluster compatible host servers, storage, and networking (Figure 36).

Make sure the intended cluster storage is online on only one of the cluster nodes.

    Address any issues that are flagged during the validation.

    Figure 36. Cluster Validation Wizard

Use the Failover Cluster Manager to create a cluster with the two physical host servers that are to be used for the management cluster. Step through the cluster creation wizard.
o You will need a cluster name and IP address
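The validation and creation steps can also be run with the Failover Clustering PowerShell cmdlets. This is a minimal sketch; the node names, cluster name, and IP address are hypothetical placeholders chosen to match the VLAN 40 addressing used in this guide.

# Validate the two management nodes before creating the cluster.
Test-Cluster -Node MgmtHost1, MgmtHost2

# Create the management cluster with a cluster name and a static IP address on the cluster public network.
New-Cluster -Name MgmtCluster -Node MgmtHost1, MgmtHost2 -StaticAddress 192.168.40.25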