Hints and Tips for implementing Storwize V7000 in an IBM i environment

Version 1.5: August 2013

Alison Pate, IBM Advanced Technical Sales Support, [email protected]
Jana Jamsek, IBM Advanced Technical Skills, Europe, [email protected]

Copyright IBM Corporation, August 10th 2013



Table of Contents

Introduction
IBM i external storage options
IBM i Storage Management
    Single level storage and object-oriented architecture
    Translation from 520-byte blocks to 512-byte blocks
Virtual I/O Server (VIOS) Support
    VIOS vSCSI support
        Requirements
        Implementation Considerations
    NPIV Support
        Requirements for VIOS_NPIV connection
        Implementation considerations
Native Support of Storwize V7000
    Requirements for native connection
    Implementation Considerations
    Direct Connection of Storwize V7000 to IBM i (without a switch)
Sizing for performance
Storwize V7000 Configuration options
Host Attachment
Multipath
    Description of IBM i Multipath
    Insight into Multipath with Native and VIOS_NPIV connection
    Insight into Multipath with VIOS vSCSI connection
Zoning SAN switches
Boot from SAN
IBM i mirroring for V7000 LUNs
Thin Provisioning
Real-time Compression
Solid State Drives (SSD)
Data layout
    LUN versus Disk arm
    LUN Size
    Adding LUNs to ASP
    Disk unit Serial number, type, model and resource name
    Identify which V7000 LUN is which disk unit in IBM i
Software
Performance Monitoring
Copy Services Considerations
Further References


Introduction

Midrange and large IBM i customers alike are implementing the IBM Storwize V7000 as external storage for their IBM i workloads. The Storwize V7000 not only provides a variety of disk drives, RAID levels, and connection types for an IBM i installation; it also offers options for flexible, well-managed high availability and disaster recovery solutions for IBM i.


This document provides hints and tips for implementing the Storwize V7000 with IBM i. The Storwize software is consistent across the entire Storwize family, including the SVC, V7000, and V3700, so the content applies to all the products in the family.

IBM i external storage options

More than ever before, there is a choice of storage solutions for IBM i. Details of supported configurations and software levels are provided by the System Storage Interoperation Center: http://www-03.ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes

Sue Baker also maintains a useful reference of supported servers, adapters, and storage systems on Techdocs:
– IBMers: http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS4563
– Business Partners: https://www-304.ibm.com/partnerworld/wps/servlet/ContentHandler/tech_PRS4563

IBM i Storage Management

Many computer systems require you to take responsibility for how information is stored on and retrieved from the disk units, as well as for managing the environment to balance disk utilization, enable disk protection, and maintain an even data spread for optimum performance.

Single level storage and object-oriented architecture

When you create a new file in a UNIX system, you must tell the system where to put the file and how big to make it. You must balance files across different disk units to provide good system performance. If you discover later that a file needs to be larger, you need to copy it to a location on disk that has enough space for the new, larger file. You may also need to move files between disk units to maintain system performance.


The IBM i server is different in that it takes responsibility for managing the information in auxiliary storage pools (also called disk pools or ASPs). When you create a file, you estimate how many records it should have. You do not assign it to a storage location; instead, the system places the file in the location that ensures the best performance. In fact, it normally spreads the data in the file across multiple disk units, and when you add more records to the file, the system automatically assigns additional space on one or more disk units. It therefore makes sense to use disk copy functions that operate on either the entire disk space or an iASP; PowerHA supports only an iASP-based copy.

IBM i uses a single-level storage, object-oriented architecture. It sees all disk space and the main memory as one storage area and uses the same set of virtual addresses to cover both main memory and disk space. Paging of the objects in this virtual address space is performed in 4 KB pages. However, data is usually blocked and transferred to storage devices in blocks larger than 4 KB; the blocking of transferred data depends on many factors, for example Expert Cache usage.

Translation from 520-byte blocks to 512-byte blocks

IBM i disks have a block size of 520 bytes. Most fixed block (FB) storage devices are formatted with a block size of 512 bytes, so a translation or mapping is required to attach them to IBM i. (The DS8000 supports IBM i with a native disk format of 520 bytes.) IBM i changes the data layout as follows to support 512-byte blocks (sectors) in external storage: for every page (8 × 520-byte sectors) it uses an additional, ninth


sector; it stores the 8-byte headers of the 520-byte sectors in the ninth sector, changing the previous 8 × 520-byte blocks into 9 × 512-byte blocks. Data that was previously stored in 8 sectors is now spread across 9, so the required disk capacity on the V7000 is 9/8 of the IBM i usable capacity; conversely, the usable capacity in IBM i is 8/9 of the capacity allocated in the V7000. Therefore, when attaching a Storwize V7000 to IBM i, whether through vSCSI, NPIV, or native attachment, this 520:512-byte block mapping means that you can use only 8/9 of the effective capacity. The impact of this translation on IBM i disk performance is negligible.
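As a quick check of this ratio, the usable capacity of any planned LUN size can be computed directly; the 80 GB figure below is just an illustrative size.

    # Worked example: IBM i usable capacity is 8/9 of the configured V7000
    # capacity, because every 4 KB page (8 x 520-byte sectors) occupies
    # 9 x 512-byte sectors on the V7000.
    echo "scale=1; 80 * 8 / 9" | bc    # an 80 GB LUN yields about 71.1 GB usable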

Virtual I/O Server (VIOS) Support

The Virtual I/O Server is part of the IBM PowerVM Editions hardware feature on IBM Power Systems. The Virtual I/O Server technology facilitates the consolidation of network and disk I/O resources and minimizes the number of physical adapters required in the IBM Power Systems server. It is a special-purpose partition that provides virtual I/O resources to its client partitions. The Virtual I/O Server owns the physical resources that are shared with clients; a physical adapter assigned to the VIOS partition can be used by one or more other partitions. The Virtual I/O Server can provide virtualized storage devices, storage adapters, and network adapters to client partitions running an AIX, IBM i, or Linux operating environment. The core I/O virtualization capabilities of the Virtual I/O Server are:

– Virtual SCSI
– Virtual Fibre Channel using NPIV (N_Port ID Virtualization)
– Virtual Ethernet bridge using Shared Ethernet Adapter (SEA)


VIOS vSCSI support

Requirements

The following are the requirements for connecting the V7000 to IBM i through VIOS vSCSI:

Hardware: POWER6 or later

Minimum software and microcode levels:
– IBM i 6.1.1 (IBM i 7.1 is required for PowerHA support)
– VIOS 2.2
– V7000 6.1.x

Implementation Considerations

The storage virtualization capabilities of PowerVM and the Virtual I/O Server are supported by the Storwize V7000 series as vSCSI backing devices in the Virtual I/O Server. Remember that if you use vSCSI devices, 8/9 of the configured LUN capacity will be usable for IBM i data (the 8 × 520-byte to 9 × 512-byte translation). Storwize V7000 LUNs surface as generic 6B22 devices to IBM i.

When using VIOS to virtualize storage, make sure that you have two VIOS partitions to provide alternate pathing in the event of a failure. Multipath across two Virtual I/O Servers is supported with IBM i 6.1.1 or later. More VIOS partitions may be needed to support multiple IBM i partitions. Make sure that you size the VIOS partitions and I/O adapters to support the anticipated throughput.

Optionally, it is recommended to implement AIX-level multipath in each VIOS using either SDDPCM or the base multipath I/O driver (MPIO), to provide multiple paths to the disks from each VIOS. You should configure alternate paths to the disk from each VIOS, zoned to provide access to each node canister.

FC adapter attributes

Specify the following attributes for each SCSI I/O Controller Protocol Device (fscsi) that connects a V7000 LUN for IBM i:
– fc_err_recov should be set to fast_fail.
– dyntrk should be set to yes.

Setting these two attributes determines how the AIX FC adapter driver and AIX disk driver handle certain fabric-related errors; without these values, the errors are handled differently, causing unnecessary retries. We recommend the same attribute values for both VIOS vSCSI and VIOS_NPIV connections.
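A minimal sketch of setting these attributes from the VIOS restricted shell follows; fscsi0 is an example device name, and -perm defers the change to the next reboot if the device is busy.

    # Set fabric error-recovery attributes on each fscsi device (run as padmin
    # in each VIOS; repeat for every fscsi device used by IBM i LUNs).
    chdev -dev fscsi0 -attr fc_err_recov=fast_fail dyntrk=yes -perm
    lsdev -dev fscsi0 -attr    # verify the new attribute values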


Disk device attributes

If the V7000 is connected through VIOS vSCSI, specify the following attributes for each hdisk device that represents a V7000 LUN connected to IBM i:
– reserve_policy should be set to no_reserve.
– queue_depth should be set to 32.
– algorithm should be set as follows:

– If SDDPCM is used in VIOS, set algorithm to load_balance.
– If the default AIX PCM is used, set algorithm to round_robin.

Setting reserve_policy to no_reserve is required in each VIOS when multipath across two or more VIOS partitions is implemented, to remove the SCSI reservation on the hdisk device. The other attribute values are recommended for performance reasons; a command sketch follows.
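The hdisk attributes can be set the same way from the VIOS restricted shell; hdisk2 is an example device, and load_balance is shown for the SDDPCM case (use round_robin with the default AIX PCM).

    # Remove the SCSI reservation and set queue depth and path algorithm on
    # each hdisk that backs an IBM i LUN (run as padmin in each VIOS).
    chdev -dev hdisk2 -attr reserve_policy=no_reserve queue_depth=32 algorithm=load_balance -perm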

NPIV Support

N_Port ID Virtualization (NPIV) virtualizes the Fibre Channel adapters on the Power server, allowing the same physical adapter port to be shared by multiple LPARs. NPIV requires VIOS to provide the virtualization; LUNs are mapped directly to the IBM i partition through a virtual Fibre Channel adapter.

Requirements for VIOS_NPIV connection

The following are the requirements for an NPIV connection of the V7000 to IBM i:


Hardware:
– POWER6 or later
– 8 Gb adapters in VIOS
– NPIV-enabled SAN switches

Minimum software and microcode levels:
– IBM i 7.1 TR6
– VIOS 2.2.2.1
– V7000 6.4.1.4
– PowerHA group PTF SF99706 level 3

Note: PowerHA group PTF SF99706 level 4 is required for managing PowerHA with the V7000 in the GUI, and for LUN-level switching with the V7000.

Implementation considerations

NPIV attachment of the Storwize V7000 provides full support for PowerHA and LUN-level switching. The default queue depth (also known as SNUM) for IBM i LUNs is 16. LUNs are presented to the host only from the owning I/O group ports. NPIV can support up to 64 active and 64 passive LUNs per virtual path, although 32 is recommended as a guideline for optimum performance. Remember that the physical path must still be sized to provide adequate throughput: up to 3 virtual paths per physical path is the maximum recommended, with the physical path sized to carry no more than 300 MB/sec or 1,200 IOPS.

It is possible to combine NPIV and native connections for a host connection, but it is not possible to combine NPIV and vSCSI connections.

Rules for VIOS_NPIV mapping

The following rules apply when mapping server virtual FC adapters to ports in VIOS for an NPIV connection (a mapping sketch follows this list):
– Map at most one virtual FC adapter from a given IBM i LPAR to a port in VIOS. You can map up to 64 virtual FC adapters, each from a different IBM i LPAR, to the same port in VIOS.
– When implementing solutions with IASPs, use different FC adapters for SYSBAS than for the IASP, and map them to different ports in VIOS.
– You can use the same port in VIOS for both NPIV mapping and a VIOS vSCSI connection.
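A minimal sketch of one such mapping in the VIOS restricted shell follows; vfchost0 (the server-side virtual FC adapter for the IBM i LPAR) and fcs0 are example device names.

    # Map the virtual FC server adapter for one IBM i LPAR to a physical port.
    vfcmap -vadapter vfchost0 -fcp fcs0
    lsmap -all -npiv    # verify the client WWPNs now log in through fcs0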


Native Support of Storwize V7000

Native connection of the Storwize V7000 to IBM i is now supported, either with or without a switch. PowerHA and LUN-level switching are supported.

Requirements for native connection

The following are the requirements for a native connection of the V7000 to IBM i:

Hardware: POWER7

Minimum software and microcode levels:
– IBM i 7.1 TR6 plus PTFs MF56600, MF56753, and MF57854, or IBM i 7.1 TR6 Resave 710-H
– V7000 code 6.4.1.4

Fabric attach: FC 5735/5273 (8 Gb adapters) or FC 5774/5276 (4 Gb adapters)
Direct attach: FC 5774/5276 (4 Gb adapters)

Implementation Considerations

– Device type 2145
– Flexible LUN sizes supported
– Boot from SAN supported
– Default queue depth (SNUM) of 16
– V7000 compression is supported by IBM i

It is possible to combine NPIV and native connections for a host connection, but it is not possible to combine NPIV and vSCSI connections. Migrating from vSCSI to a native connection requires that the system be powered off.

Direct Connection of Storwize V7000 to IBM i (without a switch)

– No switch required: one port in IBM i connects to one port in the Storwize V7000/V3700
– Good for smaller systems with few LPARs
– Only 4 Gb connections supported (FC 5774/5276); no NPIV support
– Storwize V7000, SVC, or V3700 with code 6.4.1.4 or later
– POWER7 only
– Minimum IBM i 7.1 TR6 plus PTFs MF56600, MF56753, MF57854


Sizing for performance

It is important to size a storage subsystem based on I/O activity rather than capacity requirements alone. This is particularly true of an IBM i environment because of its sensitivity to I/O performance. IBM has excellent tools for modeling the expected performance of your workload and configuration. This paper provides some guidelines and general words of wisdom; however, these are only a starting point for sizing with the appropriate tools.

The LUN size is flexible; choose the LUN size that gives you enough LUNs for good performance according to Disk Magic. A good size to start modeling with is 80 GB.

It is equally important to ensure that the sizing for your SAN configuration also accounts for the additional resources required when enabling Copy Services. Use Disk Magic to model the overheads of replication (Global Mirror and Metro Mirror), particularly if you are planning to enable Metro Mirror, and conduct a bandwidth sizing for Global Mirror and Metro Mirror. Note that Disk Magic does not support modeling FlashCopy (point-in-time copy), so do not size the system to the maximum recommended utilizations if you also want to exploit FlashCopy snapshots for backups.

You will need to collect IBM i performance data: generally, a week's worth of performance data for each system/LPAR, with the resulting reports sent in for the sizing. Each set of reports should include print files for the following:

– System Report – Disk Utilization (required)
– Component Report – Disk Activity (required)
– Resource Interval Report – Disk Utilization Detail (required)
– System Report – Storage Pool Utilization (optional)

Send the report print files as .txt files. If you are collecting from more than one IBM i system or LPAR, the reports should cover the same time period for each system/LPAR, if possible.

Storwize V7000 Configuration options Different hardware and RAID options are available for the Storwize V7000 and can be validated by Disk Magic. You should configure the RAID level and array width


according to the solution that you modeled in Disk Magic. As always, it is best to follow the default configuration options offered by the Storwize V7000 GUI configuration wizard: RAID 5 with a default array width of 7+P for SAS HDDs, RAID 6 with a default array width of 10+P+Q for Nearline HDDs, and RAID 1 with a default array width of 2 for SSDs. The recommendation is to create a dedicated storage pool for IBM i with enough managed disks, backed by a sufficient number of spindles, to handle the expected IBM i workload. Modeling with Disk Magic using actual customer performance data should be performed to size the storage system properly.

Host Attachment

IBM i logs into a Storwize V7000 node only once from each I/O adapter port on the IBM i LPAR.

Multiple paths between the switch and the Storwize V7000 provide some level of redundancy: if the path in use (the active path) fails, IBM i automatically starts using the other path. However, there is no way to force an IBM i partition to use a specific port, and if multiple partitions are all configured to use multiple paths between the switch and the Storwize V7000, the result is typically that all partitions use the same port on the Storwize V7000. The recommended option is to provide multipath support by using two VIOS partitions, each with a path to the Storwize V7000.


The same connection considerations apply when connecting with the native connection option, without VIOS.

Best practice guidelines:
– Isolate host connections from remote copy connections (Metro Mirror or Global Mirror) where possible.
– Isolate other host connections from IBM i host connections on a host port basis.
– Always have symmetric pathing by connection type (that is, use the same number of paths on all host adapters used by each connection type).
– Size the number of host adapters needed based on the expected aggregate maximum bandwidth and maximum IOPS (use Disk Magic or other common sizing methods based on actual or expected workload).

Multipath

Multipath provides greater resiliency for SAN-attached storage. IBM i supports up to 8 paths to each LUN. In addition to the availability benefit, lab performance testing has shown that two or more paths provide performance improvements compared to a single path. Typically, two paths to a LUN are the ideal balance of price and performance. The Disk Magic tool supports modeling only two paths. You might want to consider more than two paths for workloads with high wait time, or where high I/O rates to the LUNs are expected.

Description of IBM i Multipath

Multipath for a LUN is achieved by connecting the LUN to two or more ports that belong to different adapters in the IBM i partition. With a native connection, the ports used for multipath must be in different physical adapters in IBM i. With VIOS_NPIV, the virtual Fibre Channel adapters used for multipath must be assigned to different VIOS partitions; with a VIOS vSCSI connection, the virtual SCSI adapters used for multipath must likewise be assigned to different VIOS partitions. The following pictures show a high-level view of multipath for the different connection types; a detailed view of all paths is presented later in this section.


IBM i multipath provides resiliency if the hardware for one of the paths fails. It also provides a performance improvement, since multipath balances I/O among the paths in round-robin mode. Every LUN in the Storwize V7000 uses one V7000 node as its preferred node: I/O traffic to and from that LUN normally goes through the preferred node, and if that node fails, the I/O moves to the remaining node. With IBM i multipath, all the paths to a LUN through the preferred node are active, and the paths through the non-preferred node are passive; load balancing is employed among the paths to a LUN that go through its preferred node.

Insight into Multipath with Native and VIOS_NPIV connection

With native and VIOS_NPIV connections, multipath is achieved by assigning the same LUN to multiple physical or virtual FC adapters in IBM i. More precisely, the LUN is assigned to multiple WWPNs, each from one port of a physical or virtual FC adapter, with each virtual FC adapter assigned to a different VIOS. For simplicity, we limit the discussion here to multipath with two WWPNs. With the recommended switch zoning, four paths are established from a LUN to IBM i: two of the paths go through adapter 1 (in NPIV, also through VIOS 1) and two go through adapter 2 (in NPIV, also through VIOS 2); of the two paths through each adapter, one goes through the preferred node and one through the non-preferred node. Therefore, two of the four paths are active, each through a different adapter (and a different VIOS if NPIV is used), and two are passive, likewise each through a different adapter and VIOS. IBM i multipathing uses a round-robin algorithm to balance I/O among the active paths.

Picture 1 presents the detailed view of paths with a natively or VIOS_NPIV connected V7000. The solid lines are active paths and the dotted lines passive paths; red lines represent one switch zone and green lines the other. The screenshot below presents the IBM i view of paths to a LUN connected in


VIOS_NPIV. As the screenshot shows, two active and two passive paths are established to each LUN.

Insight into Multipath with VIOS vSCSI connection

With this type of connection, the LUN in the V7000 is assigned to multiple VIOS partitions, and IBM i establishes one path to the LUN through each VIOS. For simplicity, we limit the discussion here to multipath with two VIOS partitions. The LUN reports as an hdisk device in each VIOS. I/O from a VIOS (hdisk device) to the LUN uses all the paths established from that VIOS to the V7000; multipath across these paths, including load balancing and I/O through the preferred node, is handled by the VIOS multipath driver. The two hdisks that represent the LUN, one in each VIOS, are mapped to IBM i through different virtual SCSI adapters, and each reports in IBM i as a different path to the same LUN (disk unit). IBM i establishes multipath to the LUN using both paths; both are active, and round-robin load balancing is used for the I/O traffic.

Picture 2 presents the detailed view of paths with a VIOS vSCSI connected V7000. IBM i uses two paths to the same LUN, each through one VIOS to the relevant hdisk connected with a virtual SCSI adapter; both paths are active, and the IBM i load-balancing algorithm is used for I/O traffic. Each VIOS has 8 connections to the V7000, so 8 paths are established from each VIOS to the LUN; I/O through these paths is handled by the VIOS multipath driver. The screenshots show both paths for the LUN in IBM i, and the paths to the LUN in each VIOS.
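To verify the paths each VIOS sees to a V7000-backed hdisk, the AIX lspath command can be used from each VIOS (reached with oem_setup_env from the padmin shell); hdisk2 is an example name.

    # List the state of each path from this VIOS to the hdisk; with 8 zoned
    # connections you should see 8 paths, ideally all Enabled.
    lspath -l hdisk2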


Zoning SAN switches

With a native connection or a VIOS_NPIV connection, we recommend zoning the switches so that one WWPN of one IBM i port is in a zone with two ports of the V7000, one from each node canister. This ensures resiliency for the I/O to and from a LUN assigned to that WWPN: if the preferred node for that LUN fails, the I/O continues through the non-preferred node.

Note: A port in a physical or virtual FC adapter in IBM i has two WWPNs. For connecting external storage we use the first WWPN; the second WWPN is used for Live Partition Mobility (LPM). Therefore, zone both WWPNs if you plan to use LPM; otherwise, zone just the first WWPN.

When a VIOS vSCSI connection is used, we recommend zoning one physical port in VIOS with all available ports in the V7000, or with as many ports as possible to allow load balancing, keeping in mind that a maximum of 8 paths are available from VIOS to the V7000. The V7000 ports zoned with one VIOS port should be spread evenly between the V7000 node canisters.

Examples of zoning for a VIOS vSCSI connection:

Example 1: Use one port in VIOS and zone it with all 8 ports in the V7000, 4 from each canister. This uses all available ports, spread evenly between the canisters, without exceeding 8 paths from VIOS to the V7000.

Example 2: Use two adapters in VIOS and zone one port from each adapter with 4 V7000 ports, 2 from each canister. This balances the V7000 ports between the VIOS ports and between the V7000 node canisters, again without exceeding 8 paths from VIOS to the V7000.


The following pictures show the recommended switch zoning for the different connection types.
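A hedged CLI sketch of the native/NPIV zoning recommendation follows. Brocade switches are assumed here (the document does not specify a switch vendor), and the zone name, configuration name, and WWPNs are all placeholders.

    # One IBM i port WWPN zoned with one V7000 port from each node canister.
    zonecreate "ibmi_lpar1_v7k", "c0:50:76:00:00:00:00:01; 50:05:07:68:00:00:00:01; 50:05:07:68:00:00:00:02"
    cfgadd "prod_cfg", "ibmi_lpar1_v7k"
    cfgenable "prod_cfg"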

Boot from SAN

All connection options (native, VIOS_NPIV, and VIOS vSCSI) support boot from SAN. The LoadSource resides on a V7000 LUN that is connected the same way as the other LUNs; there are no special requirements for the LoadSource connection. When installing the IBM i operating system with disk capacity on the V7000, the installation prompts you to select one of the available V7000 LUNs for the LoadSource.

When migrating from internal disk drives or from another storage system to the V7000, you can use IBM i ASP balancing to migrate all of the disk capacity except the LoadSource. After the non-LoadSource data is migrated to the V7000 with ASP balancing, migrate the LoadSource by copying it from the previous disk unit to a LUN in the V7000; the V7000 LUN must be of equal or greater size than the disk unit previously used for the LoadSource. This migration method can be used with all V7000 connection types: native, VIOS_NPIV, and VIOS vSCSI.

IBM i mirroring for V7000 LUNs

Some customers prefer to use IBM i mirroring for resiliency; for example, they use IBM i mirroring between two V7000 systems, each connected through one VIOS. When starting IBM i mirroring with a VIOS-connected V7000, you should add the LUNs to the mirrored ASP in steps: first add the LUNs from two virtual adapters, each adapter connecting one to-be-mirrored half of the LUNs. After mirroring is started for those LUNs, add the LUNs from two new virtual adapters, each adapter again connecting one to-be-mirrored half, and so on. This


way you ensure that mirroring is started between the two V7000 systems and not among LUNs in the same V7000.

Thin Provisioning

IBM i can take advantage of thin provisioning because it is transparent to the server. However, first you need to provide adequate HDDs to sustain the required performance, regardless of whether the capacity is actually used. Second, while IBM i 7.1 and later do not pre-format LUNs, so initial allocations can be thin provisioned, there is no space reclamation, so the effectiveness of thin provisioning may decline over time. You still need to ensure that you have sufficient disks configured to maintain the performance of the IBM i workload. Thin provisioning may be more applicable to test or development environments.
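For reference, a thin-provisioned volume is created on the Storwize CLI with mkvdisk and the -rsize option; the pool name, volume name, and sizes below are illustrative only.

    # Create an 80 GB volume that initially allocates 20% real capacity and
    # grows automatically as data is written.
    mkvdisk -mdiskgrp IBMi_pool -iogrp 0 -size 80 -unit gb -rsize 20% -autoexpand -name IBMI_THIN01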

Real-time Compression

IBM i can take advantage of Real-time Compression (RtC) because it is transparent to the server. RtC is not for every workload, and careful planning is needed when considering it. IBM has several tools to evaluate the potential benefit of RtC for your environment; using the Comprestimator utility in combination with Disk Magic to determine whether RtC is applicable to a given workload is highly recommended. Comprestimator tool: http://www-304.ibm.com/support/customercare/sas/f/comprestimator/home.html

Solid State Drives (SSD)

Perhaps one of the most exciting innovations in enterprise storage is the SSD. We have only begun to explore the promising future of this technology. Solid-state storage means using a memory-type device for mass storage, rather than spinning disk or tape. First-to-market devices take the shape of standard hard disks, so they plug easily into existing disk systems.

IBM is making solid-state storage affordable, with innovative architectures, system and application integration, and management tools that enable effective use of solid-state storage. Solid-state technologies will continue to evolve and IBM researchers have been making significant breakthroughs. IBM will continue to bring the best implementations to our customers as innovation allows us to bring the full value of this technology to market.

Solid-state storage technology can have the following benefits:


– Significantly improved performance for hard-to-tune, I/O-bound applications, with no code changes required
– Reduced floor space
– Capacity that can be filled to nearly 100% without performance degradation
– Greater IOPS
– Faster access times
– Reduced energy use

Exploitation of SSDs with the Storwize V7000 is through Easy Tier. Even if you do not plan to install SSDs, you can still use Easy Tier to evaluate your workload and provide information on the benefit you might gain by adding SSDs in the future. Easy Tier is included with the Storwize V7000; on a Storwize V3700, however, it requires you to purchase a license and obtain a license key.

When using Easy Tier automated management, it is important to allow Easy Tier some space to move data: do not allocate 100% of the pool capacity, but leave some capacity unallocated for Easy Tier migrations. As a minimum, leave one extent free per tier in each storage pool; for optimum exploitation of future functions, plan to leave 10 extents free in total per pool.

There is also the option to create a disk pool of SSDs in the V7000 and create an IBM i ASP that uses disk capacity from the SSD pool; applications running in that ASP will experience a performance boost. Note that IBM i data relocation methods such as ASP balancing and media preference cannot be used with SSDs in the V7000.

Data layout

Selecting an appropriate data layout strategy depends on your primary objectives. Spreading workloads across all components maximizes the utilization of the hardware resources. However, when sharing resources it is always possible that performance problems arise due to contention on those resources. To protect critical workloads, isolate them, minimizing the chance that non-critical workloads can impact their performance.

A storage pool is a collection of managed disks from which volumes are created and presented to the IBM i system as LUNs. The primary property of a storage pool is the extent size, which by default is 1 GB with Storwize V7000 release 7.1 and 256 MB in earlier versions. The extent size is the smallest unit of allocation from the pool.


When you add managed disks to a pool, they should have similar performance characteristics:

– Same RAID level
– Roughly the same number of drives per array
– Same drive type (SAS, NL-SAS, or SSD, except when using Easy Tier)

This is because data from each volume is spread across all MDisks in the pool, so the volume performs approximately at the speed of the slowest MDisk in the pool. The exception is that with Easy Tier you can have two different tiers of storage in the same pool, but the MDisks within each tier should still have the same performance characteristics.

Isolation of workloads is most easily accomplished when each ASP or LPAR has its own managed storage pool; this ensures that you can place data where you intend. I/O activity should be balanced between the two nodes (controllers) of the Storwize V7000. Make sure that you isolate critical workloads: we strongly recommend placing only IBM i LUNs in any storage pool used by IBM i (rather than mixing them with non-IBM i LUNs). If you mix production and development workloads in storage pools, make sure that the customer understands that this may impact production performance.
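As a sketch of this isolation approach, a dedicated IBM i pool and equal-sized volumes can be created on the Storwize CLI; the pool name, volume name, extent size, and LUN size below are example values only.

    # Dedicated pool for IBM i (extent size in MB), then one of the
    # equal-sized volumes to present to the IBM i host as a LUN.
    mkmdiskgrp -name IBMi_pool -ext 256
    mkvdisk -mdiskgrp IBMi_pool -iogrp 0 -size 80 -unit gb -name IBMI_LUN01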

LUN versus Disk arm

A V7000 LUN connected to IBM i reports in IBM i as a disk unit, and IBM i storage management applies its management and performance functions as if the LUN were a disk arm. In fact, the LUN is typically spread across multiple physical disk arms in the V7000 disk pool, using some capacity from each arm, and all the disk arms in the disk pool are shared among all the LUNs defined in that pool. The following picture shows an example of a V7000 disk pool with three disk arrays of V7000 internal disk arms (MDisks) and a LUN created in the pool, with the LUN using an extent from each disk array in turn.

LUN Size

LUNs can be configured up to 2000 GB. The number of LUNs defined is typically related to the wait-time component of the response time. If there are insufficient LUNs,


wait time typically increases. The sizing process determines the correct number of LUNs required to provide the needed capacity while meeting performance objectives. With native attachment, the number of LUNs drives the requirement for more FC adapters on the IBM i because of IBM i addressing restrictions; remember that each path to a LUN counts toward the maximum addressable LUNs on each IBM i IOA.

For any ASP, define all the LUNs to be the same size. 80 GB is the recommended minimum LUN size, and a minimum of 6 LUNs for each ASP or LPAR is recommended. To support future product enhancements, create load source devices of at least 80 GB. A smaller number of larger LUNs reduces the number of I/O ports required on both the IBM i and the Storwize V7000. Remember that in an iASP environment you may exploit larger LUNs in the iASPs, but SYSBAS may require more, smaller LUNs to maintain performance.

Disk Magic does not always accurately predict the effective capacity of the ranks, depending on the DDM size selected and the number of spares assigned; the IBM tool Capacity Magic can be used to verify capacity and space utilization plans.

Adding LUNs to ASP

Adding a LUN to an ASP generates I/O activity on the rank as the LUN is formatted. If production work shares the same rank, you may see a performance impact. For this reason, it is recommended that you schedule adding LUNs to ASPs outside peak intervals.

Disk unit Serial number, type, model and resource name

Each IBM i disk unit representing a V7000 LUN has a unique serial number. With a natively or NPIV-connected V7000, the disk unit type is 2145; with a VIOS vSCSI connection, the type is 6B22. The disk unit model in either connection is 050. A resource name starting with DMP indicates that the disk unit is connected with multipath; if it is connected with a single path, the resource name starts with DD. The pictures below show IBM i disk units with native or NPIV connection, and with VIOS vSCSI connection.

Picture: Disk units in native or NPIV connection


Picture: Disk units in VIOS vSCSI connection

Identify which V7000 LUN is which disk unit in IBM i

There are many situations in which we want to identify which IBM i disk unit corresponds to which LUN in the V7000: for example, identifying which LUNs are the disk units in a particular IBM i auxiliary storage pool (ASP) in order to migrate that ASP to another disk pool in the V7000. When the V7000 is connected natively or with VIOS_NPIV, identify the LUNs as follows (a CLI sketch follows these steps):

a. In IBM i Dedicated Service Tools (DST) or System Service Tools (SST), look for the serial number of a disk unit. In the picture "Disk units in native or NPIV connection" we see serial numbers Y11C490001DC, Y11C490001DA, and so on.

b. The last 6 characters of the serial number are the last 6 characters of the LUN ID in the V7000. The picture below shows the corresponding LUN ID for the disk unit with serial number Y11C490001DA.

c. The first 6 characters of the disk unit serial number are a hash of the V7000 cluster ID.
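Given a disk unit serial number from DST/SST, the matching volume can be found on the Storwize CLI by filtering the concise lsvdisk view on its vdisk_UID column. In this sketch, the serial suffix 01DA is taken from the example above; "v7000" is a placeholder address, and superuser is the default Storwize administrative account.

    # Find the volume whose vdisk_UID ends with the last six characters of
    # the IBM i disk unit serial number.
    ssh superuser@v7000 'lsvdisk -delim :' | grep -i 01DA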

When the V7000 is connected through VIOS vSCSI, use the following steps to identify which disk unit is which LUN in the V7000 (a VIOS command sketch follows these steps):

a. In IBM i DST or SST, use "Display Disk Unit Details" to find the controller for an IBM i disk unit; the controller (Ctl) value indicates the LUN ID of the disk unit. An example


of disk units with controllers is shown below. In this example, disk unit 1 belongs to controller 1.

b. In VIOS, look for the LUN IDs of the mapped disk units. The picture below shows the vdisk unit with LUN ID 1:

c. In VIOS, look for the matching hdisk LUN IDs. The pictures below show the output of the lsmap command, which ties the mapped disk device to its hdisk, and the LUN ID of the hdisk matching the mapped disk unit disrec_sysbas_1:

d. In the V7000, look for the LUN whose UID corresponds to the serial number of the hdisk in VIOS. The picture below shows the V7000 LUN with the corresponding UID.
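A sketch of steps b and c in the VIOS restricted shell follows; vhost0 and hdisk2 are example device names.

    # Show the virtual target devices and their LUN IDs on one virtual SCSI
    # server adapter, then the backing hdisk's unique_id, which contains an
    # identifier that can be matched against the V7000 vdisk UID.
    lsmap -vadapter vhost0
    lsdev -dev hdisk2 -attr unique_id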


Software

It is essential to have up-to-date software levels installed: there are fixes that provide performance enhancements, correct performance reporting, and support new functions. As always, call the support center before installation to verify that you are current with fixes for the hardware that you are installing. It is also important to maintain current software levels so that you benefit from new fixes as they are developed. When updating storage subsystem LIC, also check whether any server software updates are required. Details of supported configurations and software levels are provided by the System Storage Interoperation Center: http://www-03.ibm.com/systems/support/storage/config/ssic/displayesssearchwithoutjs.wss?start_over=yes

Performance Monitoring

Once your storage subsystem is installed, it is essential to continue monitoring its performance. IBM i Performance Tools reports provide information on I/O rates and on response times to the server, allowing you to track trends in increased workload and changes in response time. Review these trends to ensure that your storage subsystem continues to meet your performance and capacity requirements, and make sure that you are current on fixes so that Performance Tools reports your external storage correctly.

If you have multiple servers attached to a storage subsystem, particularly if other platforms are attached in addition to IBM i, it is essential to have a performance tool that lets you monitor performance from the storage subsystem perspective. IBM TPC provides a comprehensive tool for managing the performance of Storwize V7000 systems. You should collect data from all attached storage subsystems in 15-minute


intervals; in the event of a performance problem, IBM will ask for this data, and without it the resolution of any problem may be prolonged. There is also a simple performance-management reporting interface available through the Storwize V7000 GUI, which provides a subset of the performance metrics available from TPC.

Copy Services Considerations

The Storwize V7000 has two options for Global Mirror: classic Global Mirror, and the Change Volumes enhancement, which allows a flexible, configurable RPO so that Global Mirror can be maintained during peak periods of bandwidth constraint. Change Volumes is not currently supported by PowerHA, so it is essential to size the bandwidth to accommodate the peaks or risk impacting production performance. There is currently a limit of 256 Global Mirror with Change Volumes relationships per system.

The current zoning guidelines for mirroring installations advise that a maximum of two ports on each SVC node or Storwize V7000 node canister be used for mirroring; the remaining two ports on the node/canister should not have any visibility to any other cluster. If you have been experiencing performance issues when mirroring is in operation, implementing zoning in this fashion might help alleviate the situation.

Consulting services are available from IBM STG Lab Services to assist in the planning and implementation of Storwize V7000 Copy Services in an IBM i environment: http://www-03.ibm.com/systems/services/labservices/platforms/labservices_i.html

Further References

For further detailed information on implementing Storwize V7000 in an IBM i environment, refer to the following references. The Redbooks can be downloaded from www.redbooks.ibm.com.

PowerHA references:
– PowerHA website: www.ibm.com/systems/power/software/availability/
– Lab Services: http://www-03.ibm.com/systems/services/labservices
– PowerHA SystemMirror for IBM i Cookbook: http://www.redbooks.ibm.com/abstracts/sg247994.html?Open
– Implementing PowerHA for IBM i: http://www.redbooks.ibm.com/abstracts/sg247405.html?Open
– IBM System Storage Copy Services and IBM i: A Guide to Planning and Implementation: http://www.redbooks.ibm.com/abstracts/sg247103.html?Open
– Is your ISV solution registered as ready for PowerHA? http://www-304.ibm.com/isv/tech/validation/power/index.html

Storwize V7000 references:
– Introducing the Storwize V7000: http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS4391
– External storage solutions for IBM i: http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS4605
– Power HA options for IBM i: http://w3-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/PRS4021
– Simple Configuration Example for Storwize V7000 FlashCopy and PowerHA SystemMirror for i: http://www.redbooks.ibm.com/abstracts/redp4923.html?Open

VIOS references:
– IBM i Virtualization and Open Storage: http://www-03.ibm.com/systems/resources/systems_i_Virtualization_Open_Storage.pdf
– IBM PowerVM Best Practices: http://www.redbooks.ibm.com/abstracts/sg248062.html?Open
– IBM PowerVM Virtualization Introduction and Configuration: http://www.redbooks.ibm.com/abstracts/sg247940.html
– IBM PowerVM Virtualization Managing and Monitoring: http://www.redbooks.ibm.com/abstracts/sg247940.html
– IBM i and Midrange External Storage: http://www.redbooks.ibm.com/abstracts/sg247668.html?Open
– Fibre Channel (FC) adapters supported by VIOS: http://www14.software.ibm.com/webapp/set2/sas/f/vios/documentation/datasheet.html
– Disk Zoning white paper: http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP101914

Acknowledgements


Thanks to Lamar Reavis, Byron Grossnickle, and William Wiegand (Storage ATS), Sue Baker (Power ATS), Kris Whitney (Rochester Development) and Selwyn Dickey and Brandon Rao (STG Lab Services).