
Page 1: Power Blades Implementation

STG Technical Conferences 2009
© 2009 IBM Corporation

Power Blades Implementation

Mike Schambureck (with help from Janus Hertz), [email protected], IBM Systems Lab Services and Training

Page 2: Power Blades Implementation


Agenda

– Where to start an IBM i on blade implementation

– Hardware overview:

– Power blade servers technical overview

– New expansion adapters

– BladeCenter S components and I/O connections

– BladeCenter H components and I/O connections

– Switch module portfolio

– Expansion adapter portfolio for IBM i

– Virtualization overview:

– VIOS-based virtualization

– IVM overview

– Storage options for BladeCenter H and BladeCenter S

– Multiple Virtual SCSI adapters

– Virtual tape

– Active Memory Sharing on blade

– 4Q 2009 enhancements

Page 3: Power Blades Implementation


Where Do I Start with Installing IBM i on Blade?

• Latest versions at: http://www.ibm.com/systems/power/hardware/blades/ibmi.html

Page 4: Power Blades Implementation


IBM BladeCenter JS23 Express

– 2 sockets, 4 POWER6 cores @ 4.2 GHz
– Enhanced 65-nm lithography
– 32 MB L3 cache per socket; 4 MB L2 cache per core
– 8 VLP DIMM slots, up to 64 GB memory
– FSP-1 service processor
– 2 x 1Gb embedded Ethernet ports (HEA)
– 2 PCIe connectors (CIOv and CFFh)
– 1 onboard SAS controller
– Up to 1 SSD or SAS onboard disk
– EnergyScale™ power management
– PowerVM Hypervisor virtualization

Page 5: Power Blades Implementation


IBM BladeCenter JS23 Express

Page 6: Power Blades Implementation


IBM BladeCenter JS43 Express

– 4 sockets, 8 POWER6 cores @ 4.2 GHz
– Enhanced 65-nm lithography
– 32 MB L3 cache per socket; 4 MB L2 cache per core
– 16 VLP DIMM slots, up to 128 GB memory
– FSP-1 service processor
– 4 x 1Gb embedded Ethernet ports (HEA)
– 4 PCIe connectors (CIOv and CFFh)
– 1 onboard SAS controller
– Up to 2 SSD or SAS onboard disks
– EnergyScale™ power management
– PowerVM Hypervisor virtualization

Page 7: Power Blades Implementation


IBM BladeCenter JS43 Express SMP Unit Only

Page 8: Power Blades Implementation


IBM BladeCenter JS12

Diagram callouts:

– 1 socket x 2 cores @ 3.8 GHz
– 8 DDR2 DIMMs, 64 GB max
– P5IOC2 I/O chip (2 HEA ports)
– Service processor
– SAS expansion adapter; 2 SAS disk drives
– PCI-X (CFFv) connections
– PCIe (CFFh) connection

Page 9: Power Blades Implementation


IBM BladeCenter JS22

Diagram callouts:

– 2 sockets x 2 cores @ 4 GHz
– 4 DDR2 DIMMs, 32 GB max
– P5IOC2 I/O chip (2 IVE ports)
– Service processor
– SAS controller; SAS disk drive
– PCI-X (CFFv) connections
– PCIe (CFFh) connection

Page 10: Power Blades Implementation


CFFv and CFFh I/O Expansion Adapters

The Combination Form Factor (CFF) allows two different expansion adapters on the same blade:

– CFFv (Combo Form Factor – Vertical): connects to the PCI-X bus to provide access to the switch modules in bays 3 & 4; vertical switch form factor. Supported for IBM i: SAS (#8250)

– CFFh (Combo Form Factor – Horizontal): connects to the PCIe bus to provide access to the switch modules in bays 7–10; horizontal switch form factor, unless an MSIM is used. Supported for IBM i: Fibre Channel and Ethernet (#8252)

Note: See IBM i on Power Blade Supported Environments for hardware supported by IBM i: http://www.ibm.com/systems/power/hardware/blades/ibmi.html

Page 11: Power Blades Implementation


CIOv and CFFh I/O Expansion Adapters

The Combination I/O form factor – Vertical (CIOv) is available only on the JS23 and JS43; CFFv adapters are not supported on those blades.

– CIOv: connects to the new PCIe bus to provide access to the switch modules in bays 3 & 4; vertical switch form factor. Supported for IBM i: SAS passthrough (#8246), Fibre Channel (#8240, #8241, #8242). Can provide redundant FC connections.

– CFFh: connects to the PCIe bus to provide access to the switch modules in bays 7–10; horizontal switch form factor, unless an MSIM is used. Supported for IBM i: Fibre Channel and Ethernet (#8252)

Note: See IBM i on Power Blade Supported Environments for hardware supported by IBM i: http://www.ibm.com/systems/power/hardware/blades/ibmi.html

Page 12: Power Blades Implementation


Meet the BladeCenter S – Front View

– 7U chassis; supports up to 6 blade servers
– SAS and SATA disks can be mixed; SAS disks recommended for IBM i production
– RAID 0, 1, 5, 0+1 supported with the RAID SAS Switch Module (RSSM); separate RAID arrays recommended for IBM i
– Shared USB ports and CD-RW / DVD-ROM combo drive
– Battery backup units, for use only with the RAID SAS Switch Module
– Service label card slot enables quick and easy reference to the BladeCenter S

Page 13: Power Blades Implementation


Meet the BladeCenter S – Rear View

– 7U chassis; four blower modules standard
– Top bay: AMM standard; bottom bay: Serial Pass-thru Module optional
– I/O bays: top (SW1) and bottom (SW2) left are Ethernet; top (SW3) and bottom (SW4) right are SAS. Both CIOv (#8246) and CFFv (#8250) adapters supported
– Hot-swap power supplies 1 & 2 standard, 3 & 4 optional; auto-sensing between 950W and 1450W
– Power supplies 3 and 4 required if using more than 1 blade

Page 14: Power Blades Implementation


BladeCenter S Midplane – Blade-to-I/O-Bay Mapping

Diagram summary:

– I/O bay 1 (Ethernet bay): serves the embedded Ethernet ports of blades 1–6 through the midplane
– I/O bay 2 (option bay): serves each blade's PCIe (CFFh) daughter card
– I/O bays 3 and 4 (Ethernet switch, Fibre, SAS, or SAS RAID switch bays): serve each blade's PCI-X (CFFv) or PCIe (CIOv) daughter card (Ethernet, Fibre, SAS, or SAS RAID)
– The midplane also connects the AMM bay and the two RAID battery bays

Page 15: Power Blades Implementation


BladeCenter H – Front View

Diagram callouts:

– 9U chassis
– Front system panel; front USB; CD/DVD drive
– Power modules 1 and 2 (each with fan pack) standard; power modules 3 and 4 optional (fillers shown when not installed)
– Blade bays (HS20 blade #1 and blade fillers shown)

Page 16: Power Blades Implementation


IBM BladeCenter H – Rear View

Diagram callouts:

– I/O module bays 1–6 (bays 1 and 2 shown with Ethernet switches; a SAS or Fibre Channel module in a lower bay)
– High-speed I/O module bays 7 & 8 and 9 & 10
– Advanced Management Module 1; slot for Advanced Management Module 2
– Blower modules 1 and 2
– Power connectors 1 and 2; rear LED panel and serial connector
– Left and right shuttle release levers
– Multi-Switch Interconnect Module (shown in bays 9 & 10): Ethernet switch in the left-side bay (bay 9), Fibre Channel switch in the right-side bay (bay 10)

Page 17: Power Blades Implementation


BCH: CFFv and CFFh I/O Connections

– Each POWER blade's onboard dual Gbit Ethernet ports connect through the midplane to Ethernet switches 1 and 2

– QLogic CFFh expansion card: provides 2 x 4Gb Fibre Channel connections to the SAN, externalized via switch bays 8 & 10, and 2 x 1Gb Ethernet ports for additional networking, externalized via switch bays 7 & 9

– SAS CFFv expansion card: provides 2 SAS ports for connection to a SAS tape drive, externalized via switch bays 3 & 4

Page 18: Power Blades Implementation


BCH: CIOv and CFFh I/O Connections

– Each POWER blade's onboard dual Gbit Ethernet ports connect through the midplane to Ethernet switches 1 and 2

– CIOv expansion card: 2 x 8Gb or 2 x 4Gb Fibre Channel, or 2 x 3Gb SAS passthrough; uses 4Gb or 8Gb FC vertical switches (or 3Gb SAS vertical switches) in bays 3 & 4; provides a redundant FC storage connection option for IBM i

– CFFh expansion card: 2 x 4Gb Fibre Channel and 2 x 1Gb Ethernet, using switch bays 7–10

Page 19: Power Blades Implementation


BladeCenter Ethernet I/O Modules

– Nortel L2-7 GbE Switch Module
– Cisco Systems Intelligent Gb Ethernet Switch Module
– Nortel Layer 2/3 Gb Ethernet Switch Modules
– Nortel L2/3 10GbE Uplink Switch Module
– Copper Pass-Through Module
– Intelligent Copper Pass-Through Module
– Nortel 10Gb Ethernet Switch Module

Note: See IBM i on Power Blade Supported Environments for hardware supported by IBM i: http://www.ibm.com/systems/power/hardware/blades/ibmi.html

Page 20: Power Blades Implementation


BladeCenter Fibre Channel I/O Modules

– Cisco 4Gb 10- and 20-port Fibre Channel Switch Modules
– Brocade 4Gb 10- and 20-port Fibre Channel Switch Modules
– QLogic 8Gb 20-port Fibre Channel Switch Module
– QLogic 4Gb 10- and 20-port Fibre Channel Switch Modules
– Brocade Intelligent 8Gb Pass-Thru Fibre Channel Switch Module
– Brocade Intelligent 4Gb Pass-Thru Fibre Channel Switch Module

Note: See IBM i on Power Blade Supported Environments for hardware supported by IBM i: http://www.ibm.com/systems/power/hardware/blades/ibmi.html

Page 21: Power Blades Implementation


BladeCenter SAS I/O Modules

BladeCenter S SAS RAID Controller Module:
– Supported only in BladeCenter S
– RAID support for SAS drives in the chassis
– Supports SAS tape attachment
– No support for attaching DS3200
– 2 are always required

BladeCenter SAS Controller Module:
– Supported in BladeCenter S and BladeCenter H
– No RAID support
– Supports SAS tape attachment
– Supports DS3200 attachment

Note: See IBM i on Power Blade Supported Environments for hardware supported by IBM i: http://www.ibm.com/systems/power/hardware/blades/ibmi.html

Page 22: Power Blades Implementation


SAS RAID Controller Switch Module

RAID controller support provides additional protection options for BladeCenter S storage:
– High-performance, full-duplex, 3Gbps speeds
– Support for RAID 0, 1, 5, & 10
– Supports 2 disk storage modules with up to 12 SAS drives
– Supports an external SAS tape drive
– Supports the existing #8250 CFFv SAS adapter on the blade
– 1GB of battery-backed write cache between the 2 modules
– Two SAS RAID Controller Switch Modules (#3734) required

Supports Power and x86 blades:
– Separate RAID sets recommended for each IBM i partition, and for IBM i vs. Windows storage
– Requirements: firmware update for the SAS RAID Controller Switch Modules; VIOS 2.1.1; eFW 3.4.2

Note: Does not support connection to DS3200. IBM i is not pre-installed with RSSM configurations.

Page 23: Power Blades Implementation


Multi-Switch Interconnect Module (MSIM) for BCH

• Installed in high-speed bays 7 & 8 and/or 9 & 10

• Allows a “vertical” switch to be installed and use the “horizontal” high-speed fabric (bays 7–10)

• The high-speed fabric is used by CFFh expansion adapters

• A Fibre Channel switch module must be installed in the right I/O module bay (switch bay 8 or 10)

• If additional Ethernet networking is required, an additional Ethernet switch module can be installed in the left I/O module bay (switch bay 7 or 9)

Page 24: Power Blades Implementation


I/O Expansion Adapters

– #8252 QLogic Ethernet and 4Gb Fibre Channel Expansion Card (CFFh)
– #8250 LSI 3Gb SAS Dual Port Expansion Card (CFFv)
– #8246 3Gb SAS Passthrough Expansion Card (CIOv)
– #8240 Emulex 8Gb Fibre Channel Expansion Card (CIOv)
– #8242 QLogic 8Gb Fibre Channel Expansion Card (CIOv)
– #8241 QLogic 4Gb Fibre Channel Expansion Card (CIOv)

Note: See IBM i on Power Blade Supported Environments for hardware supported by IBM i: http://www.ibm.com/systems/power/hardware/blades/ibmi.html

Page 25: Power Blades Implementation


Virtualization Overview

Page 26: Power Blades Implementation


VIOS, IVM and i on Power Blade

– VIOS = Virtual I/O Server = virtualization software in a partition; it does not run other applications
– First LPAR installed on the blade
– VIOS owns the physical hardware (Fibre Channel, Ethernet, DVD, SAS)
– VIOS virtualizes disk, DVD, networking, and tape to i partitions
– IVM = Integrated Virtualization Manager = browser interface to manage partitions and virtualization; installed with VIOS
– i uses LAN console through the Virtual Ethernet bridge in VIOS

Diagram summary: VIOS/IVM hosts Linux, AIX, and IBM i client partitions on the blade. HEA ports carry the LAN connections for the AMM / LAN console and the IVM / virtual op panel. A CFFh FC expansion card and/or CIOv FC expansion card connects through an FC switch to DS3400, DS4700, DS4800, DS8100, DS8300, or SVC storage. A CFFv SAS expansion card, CIOv SAS expansion card, or onboard SSD connects through a SAS switch to a SAS-attached LTO4 tape drive (virtual tape) or a DS3200 (* not supported with RSSM). The USB DVD is also virtualized.

Page 27: Power Blades Implementation


Integrated Virtualization Manager (IVM) Introduction

– Browser-based interface; supports Mozilla Firefox and Internet Explorer
– Part of VIOS; no extra charge or installation
– Performs LPAR and virtualization management on a POWER6 blade
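Alongside the browser UI, IVM ships with the VIOS restricted shell, so partition state can also be checked from a terminal. A minimal sketch, assuming an SSH session as padmin to the VIOS/IVM partition; the partition name is a hypothetical example:

```shell
# List all LPARs on the blade with their state and OS environment
lssyscfg -r lpar -F name,state,lpar_env

# Show the profile (memory/processor settings) of one partition;
# "IBMi61" is an illustrative partition name
lssyscfg -r prof --filter "lpar_names=IBMi61"
```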

Page 28: Power Blades Implementation


IVM Example: Create i Partition

– Fewer steps than HMC
– IVM uses several defaults
– Virtual I/O resources only for IBM i partitions

Page 29: Power Blades Implementation


Storage, Tape and DVD for i on JS12/JS22 in BCH

With BCH and JS12/JS22, IBM i can use:
– Fibre Channel storage (MSIM, FC module and CFFh adapter required)
– SAS storage (SAS module and CFFv adapter required)
– SAS tape (SAS module and CFFv adapter required)
– USB DVD in the BladeCenter media tray

Physical I/O resources are attached to VIOS and assigned to IBM i in IVM. Storage LUNs (physical volumes) are assigned directly to IBM i; storage pools in VIOS are not used.

Diagram summary: over virtual SCSI connections, the VIOS host presents hdiskX LUNs as DDxx disk units and /dev/cd0 (the media-tray DVD, via USB) as OPTxx to the i client. The CFFh adapter reaches Fibre Channel storage through an MSIM with a Fibre Channel I/O module inside; the CFFv adapter reaches SAS storage (DS3200) and/or tape (TS2240) through a SAS I/O module.

Page 30: Power Blades Implementation


Storage, Tape and DVD for i on JS23/JS43 in BCH

With BCH and JS23/JS43, IBM i can use:
– Fibre Channel storage (MSIM, FC module and CFFh adapter required; or FC module and CIOv adapter required)
– Redundant FC adapters can be configured (CFFh and CIOv)
– SAS storage (SAS module and CIOv adapter required)
– SAS tape (SAS module and CIOv adapter required)
– USB DVD in the BladeCenter media tray

Physical I/O resources are attached to VIOS and assigned to IBM i in IVM. Storage LUNs (physical volumes) are assigned directly to IBM i; storage pools in VIOS are not used.

Diagram summary: over virtual SCSI connections, the VIOS host presents hdiskX LUNs as DDxx disk units and /dev/cd0 (the media-tray DVD, via USB) as OPTxx to the i client. The CFFh adapter (through an MSIM with a Fibre Channel I/O module) or a CIOv adapter (through a Fibre Channel I/O module) reaches Fibre Channel storage; a CIOv adapter through a SAS I/O module reaches SAS storage (DS3200) and/or tape (TS2240).

Page 31: Power Blades Implementation


Storage, Tape and DVD for i on JS12/JS22 in BCS

With BCS and JS12/JS22, IBM i can use:
– SAS storage (SAS module and CFFv adapter required)
– SAS tape (SAS module and CFFv adapter required)
– USB DVD
– Drives in the BCS, TS2240, and DS3200 supported with the Non-RAID SAS Switch Module (NSSM)
– Only drives in the BCS and TS2240 supported with the RAID SAS Switch Module (RSSM)

Physical I/O resources are attached to VIOS and assigned to IBM i in IVM. Storage LUNs (physical volumes) are assigned directly to IBM i; storage pools in VIOS are not used.

Diagram summary: over virtual SCSI connections, the VIOS host presents hdiskX LUNs as DDxx disk units and /dev/cd0 (the media-tray DVD, via USB) as OPTxx to the IBM i client. The CFFv SAS adapter connects through either a non-RAID SAS module in I/O bay 3/4 (SAS drives in the BCS, DS3200, TS2240) or a RAID SAS module in I/O bays 3 & 4 (SAS drives in the BCS, TS2240).

Page 32: Power Blades Implementation


Storage, Tape and DVD for i on JS23/JS43 in BCS

With BCS and JS23/JS43, IBM i can use:
– SAS storage (SAS module and CIOv adapter required)
– SAS tape (SAS module and CIOv adapter required)
– USB DVD
– Drives in the BCS, TS2240, and DS3200 supported with the Non-RAID SAS Switch Module (NSSM)
– Only drives in the BCS and TS2240 supported with the RAID SAS Switch Module (RSSM)

Physical I/O resources are attached to VIOS and assigned to IBM i in IVM. Storage LUNs (physical volumes) are assigned directly to IBM i; storage pools in VIOS are not used.

Diagram summary: over virtual SCSI connections, the VIOS host presents hdiskX LUNs as DDxx disk units and /dev/cd0 (the media-tray DVD, via USB) as OPTxx to the IBM i client. The CIOv SAS adapter connects through either a non-RAID SAS module in I/O bay 3/4 (SAS drives in the BCS, DS3200, TS2240) or a RAID SAS module in I/O bays 3 & 4 (SAS drives in the BCS, TS2240).

Page 33: Power Blades Implementation


Storage and Tape Support

Storage support:
– BladeCenter H and JS12/JS22/JS23/JS43:
  – SAS: DS3200
  – Fibre Channel: DS3400, DS4700, DS4800, DS5020, DS5100, DS5300, DS8100, DS8300, DS8700, XIV, SVC
  – Multiple storage subsystems supported with SVC
– BladeCenter S and JS12/JS22/JS23/JS43:
  – SAS: BCS drives; DS3200 (only with NSSM)

Tape support:
– BladeCenter H and BladeCenter S:
  – TS2240 LTO-4 SAS: supported for virtual tape and for VIOS backups
  – TS2230 LTO-3 SAS: not supported for virtual tape, only for VIOS backups
– NEW: Fibre Channel tape library support announced 20/10/2009
  – Enables access to tape libraries 3584 (TS3500) and 3573 (TS3100 and TS3200)
  – Requires selected 8Gb Fibre Channel adapters

Page 34: Power Blades Implementation


Configuring Storage for IBM i on Blade

Step 1: Perform sizing
– Use Disk Magic, where applicable
– Use the PCRM, Ch. 14.5: http://www.ibm.com/systems/i/advantages/perfmgmt/resource.html
– The number of physical drives is still most important
– VIOS itself does not add significant disk I/O overhead
– For production workloads, keep each i partition on a separate RAID array

Step 2: Use the appropriate storage UI and Redbook for your environment to create LUNs for IBM i and attach them to VIOS (or use TPC or SSPC where applicable)
– Storage Configuration Manager for NSSM and RSSM
– DS Storage Manager for DS3200, DS3400, DS4700, DS4800
– DS8000 Storage Manager for DS8100 and DS8300
– SVC Console for SVC

Page 35: Power Blades Implementation


Configuring Storage for IBM i on Blade, Cont.

Step 3: Assign LUNs or physical drives in the BCS to IBM i
– ‘cfgdev’ in the VIOS CLI is necessary to detect new physical volumes if VIOS is running
– Virtualize whole LUNs/drives (“physical volumes”) to IBM i
– Do not use storage pools in VIOS
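The three points above map to a short VIOS CLI sequence. A sketch, assuming an SSH session as padmin; the hdisk, vhost, and device names are illustrative and will differ on a real system:

```shell
# Rescan for LUNs that were mapped to VIOS after it booted
cfgdev

# List physical volumes; newly detected LUNs appear as hdiskN
lspv

# Virtualize a whole LUN ("physical volume") to the IBM i
# partition's virtual SCSI server adapter -- no storage pool used
mkvdev -vdev hdisk4 -vadapter vhost0 -dev ibmi_ld1
```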

Page 36: Power Blades Implementation


Multiple Virtual SCSI Adapters for IBM i

Since VIOS 2.1 (November 2008), IBM i is no longer limited to 1 VSCSI connection to VIOS and 16 disk + 16 optical devices.

What IVM will do:
– Create 1 VSCSI server adapter in VIOS for each IBM i partition created
– Create 1 VSCSI client adapter in IBM i and correctly map it to the server adapter
– Map any disk and optical devices you assign to IBM i to the first VSCSI server adapter in VIOS
– Create a new VSCSI server-client adapter pair only when you assign a tape device to IBM i
– Create another VSCSI server-client adapter pair when you assign another tape device

What IVM will not do:
– Create a new VSCSI server-client adapter pair if you assign more than 16 disk devices to IBM i

Page 37: Power Blades Implementation


Multiple Virtual SCSI Adapters for IBM i, Cont.

Scenario I: you have <=16 disk devices and you want to add virtual tape
– Action required in VIOS: in IVM, click on the tape drive and assign it to the IBM i partition
– A separate VSCSI server-client adapter pair is created automatically

Scenario II: you have 16 disk devices and you want to add more disk and virtual tape
– Actions required in VIOS:
  – In the VIOS CLI, create a new VSCSI client adapter in IBM i; the VSCSI server adapter in VIOS is created automatically
  – In the VIOS CLI, map the new disk devices to the new VSCSI server adapter using ‘mkvdev’
  – In IVM, click on the tape drive and assign it to the IBM i partition

For details and instructions, see IBM i on Blade Read-me First: http://www.ibm.com/systems/power/hardware/blades/ibmi.html
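The mapping portion of Scenario II can be sketched as follows. The adapter and disk names are assumed for illustration; creating the client adapter itself uses a separate command documented in the Read-me First:

```shell
# Identify the vhost adapter created for the new VSCSI pair
lsmap -all

# Map the additional LUNs (beyond the first 16) to the new
# virtual SCSI server adapter
mkvdev -vdev hdisk17 -vadapter vhost1
mkvdev -vdev hdisk18 -vadapter vhost1

# Verify the new mappings
lsmap -vadapter vhost1
```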

Page 38: Power Blades Implementation


IBM i Support for Virtual Tape

Virtual tape support enables IBM i partitions to back up directly to a PowerVM VIOS-attached tape drive, saving hardware costs and management time.

Simplifies backup and restore processing with BladeCenter implementations:
– IBM i 6.1 partitions on BladeCenter JS12, JS22, JS23, JS43
– Supports IBM i save/restore commands & BRMS
– Supports BladeCenter S and H implementations

Simplifies migration to blades from tower/rack servers:
– The LTO-4 drive can read backup tapes from LTO-2, -3, and -4 drives

Supports the IBM Systems Storage SAS LTO-4 drive:
– TS2240 SAS for BladeCenter ONLY
– Fibre Channel attached tape libraries 3584 (TS3500) and 3573 (TS3100 and TS3200)

Requirements:
– VIOS 2.1.1, eFW 3.4.2, IBM i 6.1 PTFs

Page 39: Power Blades Implementation


Virtual Tape Hardware and Virtualization

– TS2240 LTO4 SAS tape drive attached to a SAS switch in the BladeCenter: NSSM or RSSM in BCS, or NSSM in BCH
– Fibre Channel attached tape libraries 3584 (TS3500) and 3573 (TS3100 and TS3200) in BCH
– VIOS virtualizes the tape drive (/dev/rmt0) directly to IBM i over a separate virtual SCSI connection
– The tape drive is assigned to IBM i in IVM
– The tape drive is available in IBM i as TAPxx, type 3580 model 004 (e.g. TAP01)

Diagram summary: the Power blade's CFFv SAS or CIOv SAS adapter connects through the midplane to a SAS I/O module or RAID SAS I/O module, which attaches the SAS LTO4 tape drive (TS2240).
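On the VIOS side, the physical drive and its virtual mapping can be inspected and, if needed, created manually; a sketch with illustrative device names (IVM normally performs the mapping when you assign the drive in the GUI):

```shell
# The physical LTO4 drive typically appears as rmt0 in VIOS
lsdev | grep rmt

# Map the physical tape drive to the IBM i partition's dedicated
# VSCSI server adapter
mkvdev -vdev rmt0 -vadapter vhost2

# Check the resulting mapping
lsmap -vadapter vhost2
```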

Page 40: Power Blades Implementation


Assigning Virtual Tape to IBM i

No action is required in IBM i to make the tape drive available, provided QAUTOCFG is on (the default).

Page 41: Power Blades Implementation


Migrating IBM i to Blade

Virtual tape makes migration to a blade similar to migration to a tower/rack server:
– On the existing system, perform a GO SAVE option 21 to tape media
– On the blade, use virtual tape to perform a D-mode IPL and complete the restore
– The existing system does not have to be at IBM i 6.1; previous-to-current migration is also possible

An IBM i partition saved on a blade can be restored on a tower/rack server; IBM i can save to tape media on the blade.

For existing servers that do not have access to a tape drive, there are two options:
– Save to different media, convert to a supported tape format as a service, and restore from tape
– Use the Migration Assistant method

Page 42: Power Blades Implementation


Networking on Power Blade

– VIOS is accessed from a local PC via the embedded Ethernet ports on the blade (IVE/HEA), for both the IVM browser and the VIOS command line
– The same PC can be used to connect to the AMM and for LAN console for IBM i
– For i connectivity, an IVE/HEA port is bridged to the virtual LAN

Diagram summary: a local PC on the LAN (example address 10.10.10.5) runs the AMM browser, IVM browser, and LAN console. The VIOS host bridges an IVE (HEA) port to the virtual Ethernet, giving the i client its LAN console and production interfaces (CMN01 and CMN02; example addresses 10.10.10.20 through 10.10.10.38). The blade's embedded Ethernet ports reach the LAN through an Ethernet I/O module.
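The Virtual Ethernet bridge mentioned above is normally enabled from IVM's virtual Ethernet screen; on the VIOS CLI the same bridge is a Shared Ethernet Adapter. A hedged sketch, with illustrative adapter names (the physical and virtual ent numbers vary per system; check with lsdev):

```shell
# ent0 = physical HEA port, ent2 = virtual Ethernet trunk adapter
# Create the Shared Ethernet Adapter that bridges the virtual LAN
# (used by the i partition's CMN01/CMN02) onto the external LAN
mkvdev -sea ent0 -vadapter ent2 -default ent2 -defaultid 1
```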

Page 43: Power Blades Implementation


LAN Console for i on Power Blade

– Required for i on a Power blade
– Uses System i Access software on a PC (the same PC can be used for the IVM connection)
– Full console functionality
– Uses the existing LAN console capability

Page 44: Power Blades Implementation


PowerVM Active Memory Sharing

PowerVM Active Memory Sharing is an advanced memory virtualization technology that intelligently flows memory from one partition to another for increased utilization and flexibility of memory usage.

Memory virtualization enhancement for Power Systems:
– Partitions share a pool of memory
– Memory is dynamically allocated based on each partition's workload demands

Extends Power Systems virtualization leadership:
– Capabilities not provided by Sun and HP virtualization offerings

Designed for partitions with variable memory requirements:
– Workloads that peak at different times across the partitions
– Active/inactive environments
– Test and development environments
– Low average memory requirements

Available with PowerVM Enterprise Edition:
– Supports AIX 6.1, i 6.1, and SUSE Linux Enterprise Server 11
– Partitions must use VIOS and shared processors
– POWER6 processor-based systems

(Charts: memory usage in GB over time for the “Around the World”, “Day and Night”, and “Infrequent Use” scenarios.)

Page 45: Power Blades Implementation


IVM Example: Working with AMS

Page 46: Power Blades Implementation


Enhancements for IBM i and Power Blades

N_Port ID Virtualization (NPIV) support for IBM i:
– Provides direct Fibre Channel connections from client partitions to SAN resources
– Simplifies the management of Fibre Channel SAN environments
– Enables access to Fibre Channel tape libraries
– Supported with PowerVM Express, Standard, and Enterprise Editions
– Power blades with an 8Gb PCIe Fibre Channel adapter

Diagram summary: the Power Hypervisor connects virtual FC adapters in the client partitions through the physical FC adapter owned by VIOS.

Page 47: Power Blades Implementation


Virtual SCSI vs. NPIV

The VSCSI model for sharing storage resources is a storage virtualizer: heterogeneous storage is pooled by the VIOS into a homogeneous pool of block storage and then allocated to client LPARs in the form of generic SCSI LUNs. The VIOS performs SCSI emulation and acts as the SCSI target.

With NPIV, the VIOS' role is fundamentally different. The VIOS facilitates adapter sharing only; there is no device-level abstraction or emulation. Rather than a storage virtualizer, the VIOS serving NPIV is a passthrough, providing an FCP connection from the client to the SAN.

Diagram summary: with VSCSI, IBM i sees generic SCSI disks while the VIOS, through its FC HBAs, connects across the SAN to EMC, DS5000, or SVC storage. With NPIV, IBM i speaks FCP through the VIOS' FC HBAs directly to the SAN.
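The contrast between the two models also shows up in the VIOS commands used for each; an illustrative sketch with assumed adapter names:

```shell
# VSCSI model: VIOS emulates SCSI targets, so storage is mapped
# device-by-device to a vhost (VSCSI server) adapter
mkvdev -vdev hdisk4 -vadapter vhost0

# NPIV model: VIOS only passes the FC fabric through, so a virtual
# FC host adapter is bound once to a physical FC port
vfcmap -vadapter vfchost0 -fcp fcs0

# List NPIV mappings and their fabric login status
lsmap -all -npiv
```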

Page 48: Power Blades Implementation


Additional 4Q Enhancements for IBM i on Blade

QLogic 1Gb Ethernet and 8Gb Fibre Channel Expansion Card (CFFh):
– Support for IBM i (through VIOS) and AIX
– Supported on JS12, JS22, JS23, JS43
– The only adapter with NPIV support for the JS12 and JS22
– FC ports supported only, not Ethernet

10GbE/8Gb FC Converged Network Adapter (CFFh), with support for 10Gb Ethernet and 8Gb FC (FC over Ethernet):
– FC support for IBM i is with VSCSI only
– NPIV not supported

Page 49: Power Blades Implementation


IBM i and BladeCenter S: System & Metode, Denmark (www.system-method.com)

• IBM Business Partner; software solutions & hosting company; focuses on very small / old existing installations
• 1 BladeCenter S chassis, 1 JS12 POWER6 blade, 2 HS21 x86 blades
• Provides hosting services to several clients/companies: 1 IBM Virtual I/O Server 2.1 (VIOS) host LPAR and 3 IBM i 6.1 client LPARs, for different customers

Pros:
• Cheap hardware compared to traditional Power servers
• Possible to win customers that would potentially have switched to the “dark side…”
• Flexible

Cons:
• Complex; requires three different skill sets (Blade, VIOS, IBM i)
• Difficult backup in the early stages (2-step process); now great with virtual tape

Page 50: Power Blades Implementation


IBM Systems Lab Services Virtualization Program

What is it?
– Free presales technical assistance from Lab Services
– Help with virtualization solutions: open storage, Power blades, IBM Systems Director VMControl, other PowerVM technologies
– Design the solution, hold a Q&A session with the client, verify the hardware configuration

Who can use it?
– IBMers, Business Partners, clients

How do I use it?
– Contact Lab Services for the nomination form; send the form in
– Participate in an assessment call with the Virtualization Program team
– Work with a dedicated Lab Services technical resource to design the solution before the sale

Page 51: Power Blades Implementation


Service Voucher for IBM i on Power Blade

• Let IBM Systems Lab Services and Training help you install i on blade!
• 1 service voucher for each Power blade AND IBM i license purchased
• http://www.ibm.com/systems/i/hardware/editions/services.html


Further Reading

IBM i on Blade Read-me First: http://www.ibm.com/systems/power/hardware/blades/ibmi.html
IBM i on Blade Supported Environments: http://www.ibm.com/systems/power/hardware/blades/ibmi.html
IBM i on Blade Performance Information: http://www.ibm.com/systems/i/advantages/perfmgmt/resource.html
Service vouchers: http://www.ibm.com/systems/i/hardware/editions/services.html
IBM i on Blade Training: http://www.ibm.com/systems/i/support/itc/educ.html


© IBM Corporation 1994-2007. All rights reserved.
References in this document to IBM products or services do not imply that IBM intends to make them available in every country.

Trademarks of International Business Machines Corporation in the United States, other countries, or both can be found on the World Wide Web at http://www.ibm.com/legal/copytrade.shtml.

Intel, Intel logo, Intel Inside, Intel Inside logo, Intel Centrino, Intel Centrino logo, Celeron, Intel Xeon, Intel SpeedStep, Itanium, and Pentium are trademarks or registered trademarks of Intel Corporation or its subsidiaries in the United States and other countries.

Linux is a registered trademark of Linus Torvalds in the United States, other countries, or both.
Microsoft, Windows, Windows NT, and the Windows logo are trademarks of Microsoft Corporation in the United States, other countries, or both.
IT Infrastructure Library is a registered trademark of the Central Computer and Telecommunications Agency, which is now part of the Office of Government Commerce.
ITIL is a registered trademark, and a registered community trademark, of the Office of Government Commerce, and is registered in the U.S. Patent and Trademark Office.
UNIX is a registered trademark of The Open Group in the United States and other countries.
Java and all Java-based trademarks are trademarks of Sun Microsystems, Inc. in the United States, other countries, or both.
Other company, product, or service names may be trademarks or service marks of others.

Information is provided "AS IS" without warranty of any kind.

The customer examples described are presented as illustrations of how those customers have used IBM products and the results they may have achieved. Actual environmental costs and performance characteristics may vary by customer.

Information concerning non-IBM products was obtained from a supplier of these products, published announcement material, or other publicly available sources and does not constitute an endorsement of such products by IBM. Sources for non-IBM list prices and performance numbers are taken from publicly available information, including vendor announcements and vendor worldwide homepages. IBM has not tested these products and cannot confirm the accuracy of performance, capability, or any other claims related to non-IBM products. Questions on the capability of non-IBM products should be addressed to the supplier of those products.

All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only.

Some information addresses anticipated future capabilities. Such information is not intended as a definitive statement of a commitment to specific levels of performance, function or delivery schedules with respect to any future products. Such commitments are only made in IBM product announcements. The information is presented here to communicate IBM's current investment and development activities as a good faith effort to help with our customers' future planning.

Performance is based on measurements and projections using standard IBM benchmarks in a controlled environment. The actual throughput or performance that any user will experience will vary depending upon considerations such as the amount of multiprogramming in the user's job stream, the I/O configuration, the storage configuration, and the workload processed. Therefore, no assurance can be given that an individual user will achieve throughput or performance improvements equivalent to the ratios stated here.

Prices are suggested U.S. list prices and are subject to change without notice. Starting price may not include a hard drive, operating system or other features. Contact your IBM representative or Business Partner for the most current pricing in your geography.

Photographs shown may be engineering prototypes. Changes may be incorporated in production models.

Trademarks and Disclaimers


This document was developed for IBM offerings in the United States as of the date of publication. IBM may not make these offerings available in other countries, and the information is subject to change without notice. Consult your local IBM business contact for information on the IBM offerings available in your area.

Information in this document concerning non-IBM products was obtained from the suppliers of these products or other public sources. Questions on the capabilities of non-IBM products should be addressed to the suppliers of those products.

IBM may have patents or pending patent applications covering subject matter in this document. The furnishing of this document does not give you any license to these patents. Send license inquiries, in writing, to IBM Director of Licensing, IBM Corporation, New Castle Drive, Armonk, NY 10504-1785 USA.

All statements regarding IBM future direction and intent are subject to change or withdrawal without notice, and represent goals and objectives only. The information contained in this document has not been submitted to any formal IBM test and is provided "AS IS" with no warranties or guarantees either expressed or implied.

All examples cited or described in this document are presented as illustrations of the manner in which some IBM products can be used and the results that may be achieved. Actual environmental costs and performance characteristics will vary depending on individual client configurations and conditions.

IBM Global Financing offerings are provided through IBM Credit Corporation in the United States and other IBM subsidiaries and divisions worldwide to qualified commercial and government clients. Rates are based on a client's credit rating, financing terms, offering type, equipment type and options, and may vary by country. Other restrictions may apply. Rates and offerings are subject to change, extension or withdrawal without notice.

IBM is not responsible for printing errors in this document that result in pricing or information inaccuracies.

All prices shown are IBM's United States suggested list prices and are subject to change without notice; reseller prices may vary.

IBM hardware products are manufactured from new parts, or new and serviceable used parts. Regardless, our warranty terms apply.

Any performance data contained in this document was determined in a controlled environment. Actual results may vary significantly and are dependent on many factors including system hardware configuration and software design and configuration. Some measurements quoted in this document may have been made on development-level systems. There is no guarantee these measurements will be the same on generally-available systems. Some measurements quoted in this document may have been estimated through extrapolation. Users of this document should verify the applicable data for their specific environment.

Revised September 26, 2006

Special notices


IBM, the IBM logo, ibm.com, AIX, AIX (logo), AIX 6 (logo), AS/400, BladeCenter, Blue Gene, ClusterProven, DB2, ESCON, IBM i, IBM i (logo), IBM Business Partner (logo), IntelliStation, LoadLeveler, Lotus, Lotus Notes, Notes, Operating System/400, OS/400, PartnerLink, PartnerWorld, PowerPC, pSeries, Rational, RISC System/6000, RS/6000, THINK, Tivoli, Tivoli (logo), Tivoli Management Environment, WebSphere, xSeries, z/OS, zSeries, AIX 5L, Chiphopper, Chipkill, Cloudscape, DB2 Universal Database, DS4000, DS6000, DS8000, EnergyScale, Enterprise Workload Manager, General Purpose File System, GPFS, HACMP, HACMP/6000, HASM, IBM Systems Director Active Energy Manager, iSeries, Micro-Partitioning, POWER, PowerExecutive, PowerVM, PowerVM (logo), PowerHA, Power Architecture, Power Everywhere, Power Family, POWER Hypervisor, Power Systems, Power Systems (logo), Power Systems Software, Power Systems Software (logo), POWER2, POWER3, POWER4, POWER4+, POWER5, POWER5+, POWER6, System i, System p, System p5, System Storage, System z, Tivoli Enterprise, TME 10, Workload Partitions Manager and X-Architecture are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at www.ibm.com/legal/copytrade.shtml

The Power Architecture and Power.org wordmarks and the Power and Power.org logos and related marks are trademarks and service marks licensed by Power.org.
UNIX is a registered trademark of The Open Group in the United States, other countries or both.
Linux is a registered trademark of Linus Torvalds in the United States, other countries or both.
Microsoft, Windows and the Windows logo are registered trademarks of Microsoft Corporation in the United States, other countries or both.
Intel, Itanium and Pentium are registered trademarks and Xeon is a trademark of Intel Corporation or its subsidiaries in the United States, other countries or both.
AMD Opteron is a trademark of Advanced Micro Devices, Inc.
Java and all Java-based trademarks and logos are trademarks of Sun Microsystems, Inc. in the United States, other countries or both.
TPC-C and TPC-H are trademarks of the Transaction Performance Processing Council (TPPC).
SPECint, SPECfp, SPECjbb, SPECweb, SPECjAppServer, SPEC OMP, SPECviewperf, SPECapc, SPEChpc, SPECjvm, SPECmail, SPECimap and SPECsfs are trademarks of the Standard Performance Evaluation Corp (SPEC).
NetBench is a registered trademark of Ziff Davis Media in the United States, other countries or both.
AltiVec is a trademark of Freescale Semiconductor, Inc.
Cell Broadband Engine is a trademark of Sony Computer Entertainment Inc.
InfiniBand, InfiniBand Trade Association and the InfiniBand design marks are trademarks and/or service marks of the InfiniBand Trade Association.
Other company, product and service names may be trademarks or service marks of others.

Revised April 24, 2008

Special notices (cont.)