
Technical white paper

OpenStack HP 3PAR StoreServ Block Storage Driver Configuration Best Practices
OpenStack Kilo update

Table of contents

Revision history
Executive summary
Introduction
HP 3PAR StoreServ Storage
Configuration
Volume types creation
Setting extra_specs or capabilities
    Extra_specs restrictions
Creating and setting qos_specs
    qos_specs restrictions
Multiple storage backend support and Block Storage configuration
iSCSI target port selection
Fibre Channel target port selection
Block Storage scheduler configuration with multi-backend
Block Storage scheduler configuration with driver filter and weigher
Volume types assignment
    Multiple backend requirements
Volume migration
Volume manage and unmanage
Volume retype
Security improvements
    CHAP support
    Configurable SSH Host Key Policy and Known Hosts File
Summary
For more information


Revision history

Rev Date Description

1.0 15-Apr-2014 Update for OpenStack Icehouse release

• Added QoS and Fibre Channel zoning

2.0 16-Oct-2014 Update for OpenStack Juno release

• Requires "hp3parclient" version 3.1.1 from the Python Package Index (PyPI).

• HP 3PAR FC OpenStack driver supports Match Set VLUNs (requires the Fibre Channel Zone Manager) instead of Host Sets.

• Admin Horizon UI now supports adding extra-specs and qos-specs settings.

• HP 3PAR iSCSI OpenStack driver now supports CHAP authentication.

• Configurable SSH Host Key Policy and known host file.

• Default HP 3PAR host persona was set to "1—Generic." It now defaults to "2—Generic-ALUA."

• Support added for manage/unmanage volumes.

• The <pool> is required for any <host> based options on the command line; for the HP 3PAR drivers, this is just a repeat of the driver backend name.

2.1 02-Feb-2015 Updated host personas to enum values to match HP 3PAR WSAPI values.

3.0 30-Apr-2015 Updated for the OpenStack Kilo release

• The hp3par_cpg setting in the cinder.conf can now contain multiple CPGs (pools).

• The hp3par:cpg extra-spec is now ignored; if it is used, a warning is posted to the log.

• Support added for Flash Cache; requires HP 3PAR OS 3.2.1 MU2, Web Services API version 1.4.2, and "hp3parclient" version 3.2.0 from PyPI.

• Support added for Thin Deduplication provisioned volumes; requires HP 3PAR OS 3.2.1 MU1 and Web Services API version 1.4.1.

• Block Storage scheduler configuration with driver filter and weigher.

• Both the Cisco and Brocade Fibre Channel Zone Manager drivers have configuration changes.


Executive summary

HP’s commitment to the OpenStack community brings the power of OpenStack® to the enterprise with new and enhanced offerings that enable enterprises to increase agility, speed innovation, and lower costs.

Since the Grizzly release, HP has been a top contributor to the advancement of the OpenStack project.1 HP’s contributions have focused on continuous integration and quality assurance, which has supported the development of a reliable and scalable cloud platform that is equipped to handle production workloads.

To support the need that many larger organizations and service providers have for enterprise-class storage, HP has developed the HP 3PAR StoreServ Block Storage Drivers, which support the OpenStack technology across both iSCSI and Fibre Channel (FC) protocols. This provides the flexibility and cost-effectiveness of a cloud-based open source platform to customers with mission-critical environments and high resiliency requirements.

Figure 1 shows the high-level components of a basic cloud architecture.

Figure 1. OpenStack cloud architecture

Introduction

This document describes best practices for new features in the OpenStack Kilo release, including configuring and using volume types, extra specs, quality of service (QoS) specs, and multiple backend support with the HP 3PAR StoreServ Block Storage Drivers.

The "HP3PARFCDriver" and "HP3PARISCSIDriver" are based on the Block Storage (Cinder) plug-in architecture, shown in figure 2. The drivers execute the volume operations by communicating with the HP 3PAR Storage system over HTTP or HTTPS and secure shell (SSH) connections. The connections communicate using the "hp3parclient" library, which is available from the Python Package Index (PyPI).
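For the Kilo release, that means installing version 3.2.0 or later of the client before configuring the drivers (per the revision history above). A minimal sketch of the install step; use whatever pip invocation fits your environment:

$ pip install "hp3parclient>=3.2.0"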

1 Stackalytics.com, “OpenStack Kilo Analysis,” April 2015.

stackalytics.com/?release=kilo&metric=commits&project_type=openstack


Figure 2. HP 3PAR iSCSI and FC drivers for OpenStack or Cinder

HP 3PAR StoreServ Storage

HP 3PAR StoreServ uses a single architecture, shown in figure 3, to deliver primary storage platforms for midrange, enterprise, and all-flash arrays.2

HP 3PAR StoreServ Block Storage Drivers can work with all arrays in the entire HP 3PAR StoreServ product family. HP 3PAR StoreServ Storage delivers key advantages for the OpenStack community:

• High performance to meet peak demands

• Non-disruptive scalability to easily support storage growth

• Bulletproof storage to reduce downtime

• Increased efficiency to help ensure no wasted storage

• Effortless storage administration to lower operational costs and reduce time to value

HP 3PAR added two new features in the latest release: Adaptive Flash Cache (AFC) and Thin Deduplication provisioning. The HP 3PAR implementation of AFC uses flash (SSD) storage as a level-2 read cache on the HP 3PAR StoreServ array.

The HP 3PAR Thin Deduplication software delivers inline, block-level deduplication without performance or capacity inefficiency tradeoffs. A built-in zero-detection mechanism drives efficient inline zero-block deduplication at the hardware layer.

Figure 3. HP 3PAR StoreServ Storage3

2 HP 3PAR StoreServ Storage: hp.com/go/3PAR
3 HP 3PAR StoreServ offering: hp.com/us/en/products/disk-storage/index.html?facet=3par-storage


Configuration

The HP 3PAR StoreServ Block Storage Drivers for iSCSI and Fibre Channel were introduced in the OpenStack Grizzly release. Since that release, several configuration improvements have been made, including the following:

Icehouse
• CPGs used by the HP 3PAR StoreServ Block Storage Drivers are no longer required to belong to a domain. The "hp3par_domain" configuration setting in the cinder.conf file has been removed.

• Added support to the HP 3PAR iSCSI OpenStack driver, which allows the selection of the best-fit target iSCSI port from a list of candidate ports.

• Enhanced quality of service features now use qos_specs instead of extra_specs.

• The Icehouse release requires "hp3parclient" version 3.0.0 from PyPI.

• The HP 3PAR FC OpenStack driver can now take advantage of the Fibre Channel Zone Manager feature in OpenStack that allows FC SAN zone or access control management. See the OpenStack configuration reference guide for details.

Juno
• The Juno release requires "hp3parclient" version 3.1.1 from PyPI.

• Added support to the HP 3PAR Fibre Channel OpenStack driver for Match Set VLUNs (requires the Fibre Channel Zone Manager) instead of Host Sets.

• Admin Horizon UI now supports adding extra-specs and qos-specs settings.

• The HP 3PAR iSCSI OpenStack driver now supports CHAP authentication.

• Configurable SSH Host Key Policy and known host file.

Kilo
• The Kilo release introduces support for pools. With Kilo or later, the hp3par_cpg setting in the cinder.conf file is used to define CPGs/pools. The pool name is the CPG name. The hp3par_cpg setting can now contain a comma-separated list of CPGs. This allows the scheduler to select a backend and a pool in its set of pools.

• The extra spec setting hp3par:cpg is ignored in Kilo. Instead, use the hp3par_cpg setting in the cinder.conf file to list the valid CPGs for a backend. If types referred to different CPGs with different attributes, those should be converted to multiple backends with the CPGs specified in the cinder.conf file.

• Added support for Flash Cache, which can be enabled for a volume with the “hp3par:flash_cache” extra-spec setting.

• Added support for Thin Deduplication volume provisioning, which can be used for provisioning a volume with the “hp3par:provisioning” extra-spec setting.

• The Fibre Channel Zone Manager feature in OpenStack allows FC SAN zone or access control management. See the OpenStack configuration reference guide for the latest configuration details for both Cisco and Brocade.

• The Dynamic Optimization license is required to support any feature that results in a volume changing provisioning type or CPG. This may apply to the volume migrate, retype, and manage commands.

Volume types creation

Block Storage volume types are labels that can be selected at volume creation time in OpenStack. These types can be created either in the Admin Horizon UI or on the command line, as shown:

$cinder --os-username admin --os-tenant-name admin type-create <name>

The <name> argument is the name of the new volume type. This example creates three volume types named gold, silver, and bronze:

$cinder --os-username admin --os-tenant-name admin type-create gold

$cinder --os-username admin --os-tenant-name admin type-create silver

$cinder --os-username admin --os-tenant-name admin type-create bronze


Setting extra_specs or capabilities

After the volume type names have been created, you can assign extra_specs, qos_specs, or capabilities to these types. The filter scheduler uses the extra_specs data to determine capabilities and the backend, and it enforces strict checking. Starting in the Icehouse release, any QoS-related settings, with the exception of the virtual volume set (VVS), must be set in qos_specs, as described in the next section, Creating and setting qos_specs.

The extra_specs or capabilities are set or unset per volume type, either in the Admin Horizon UI (new in Juno) or on the command line, as shown:

$cinder --os-username admin --os-tenant-name admin type-key <vtype> <action> [<key=value> [<key=value> ...]]

The argument <vtype> is the name or ID of the previously created volume type (e.g., gold, silver, or bronze). The argument <action> must be either "set" or "unset." The optional argument <key=value> is the extra_specs to set; only the key is necessary for unset.

Any or all of the following capabilities can be set on a volume type. They override the default values that were specified in the cinder.conf or are just additional capabilities that the HP 3PAR StoreServ Storage array offers. See the extra_specs restrictions section, which provides constraints on when the VVS and QoS settings are set for a single volume type.

• volume_backend_name—Assign a volume type to a particular Block Storage Driver and set the volume_backend_name key to match the value specified in the cinder.conf file for that Block Storage Driver.

Scoping with "hp3par:" is required for all HP 3PAR specific keys. The current list of supported HP 3PAR keys includes:

• hp3par:flash_cache—Valid values are true and false. Added in the Kilo release.

• hp3par:snap_cpg—Overrides the hp3par_cpg_snap setting in the cinder.conf file. If hp3par_cpg_snap is not set, it defaults to the hp3par_cpg setting.

• hp3par:provisioning—Defaults to thin provisioning. Valid values are thin, dedup, and full. In Kilo and later, dedup was added as a provisioning type for thin deduplication provisioned volumes.

• hp3par:persona—Defaults to "2—Generic-ALUA" persona. The valid values are: 1—Generic, 2—Generic-ALUA, 3—Generic-legacy, 4—HPUX-legacy, 5—AIX-legacy, 6—EGENERA, 7—ONTAP-legacy, 8—VMware®, 9—OpenVMS, 10—HPUX, and 11—Windows Server®. Before the Juno release the default was set to "1—Generic"; it now defaults to "2—Generic-ALUA."

Note: The HP 3PAR WSAPI requires these personas. The numerical values are different from what is displayed in the HP 3PAR Management Console and the HP 3PAR CLI.

Prior to Kilo, the CPG could be set using hp3par:cpg, as described in the following bullet. In Kilo and later, CPGs should be controlled by configuring separate backends with pools.

• (Obsolete) hp3par:cpg—Overrides the hp3par_cpg setting. Defaults to the hp3par_cpg setting in the cinder.conf file.

To use VVS settings, the HP 3PAR StoreServ Storage array must have an HP 3PAR Priority Optimization license installed.

• hp3par:vvs—The virtual volume set name that has been set up by the administrator that would have predefined QoS rules associated with it. If you specify extra_specs hp3par:vvs, the qos_specs minIOPS, maxIOPS, minBWS, and maxBWS settings are ignored.


"Set" examples:
$cinder type-key gold set hp3par:snap_cpg=SNAPCPG volume_backend_name=3par_FC

$cinder type-key silver set hp3par:provisioning=full volume_backend_name=3par_ISCSI

$cinder type-key bronze set hp3par:vvs=myvvs volume_backend_name=iscsi

"Unset" example:
$cinder type-key gold unset hp3par:snap_cpg

Use the following command to list all the volume types and extra_specs currently configured:

$cinder --os-username admin --os-tenant-name admin extra-specs-list

Extra_specs restrictions
Certain constraints apply when using one or more of the extra_specs documented above.

• If hp3par:snap_cpg is set per volume type, it must be in the same virtual domain as the back end’s CPGs on the HP 3PAR StoreServ Storage array.

• The hp3par:persona is set on a per-volume basis, but it is not actually used until that volume is attached to an instance and an HP 3PAR host is created. In this case, the persona of the first volume attached to the host is used. Additional volumes that have a different persona will still be attached, but their persona is ignored; they use the persona of the first attached volume.

• Errors occur if you attempt to use vvs or the qos setting without the Priority Optimization license installed on the HP 3PAR StoreServ Storage array.

• If you specify hp3par:vvs virtual volume set as an extra_spec and one or more of the qos settings (via qos_specs), the qos settings will be ignored and the volume will be created in the VVS specified.

• Volumes that have been cloned will only support extra specs keys hp3par:snap_cpg, hp3par:provisioning, and hp3par:vvs. The others are ignored. In addition, the comments section of the cloned volume in the HP 3PAR StoreServ Storage array will not be populated.

• If you specify hp3par:flash_cache, the HP 3PAR StoreServ Storage array must meet the following requirements:

– Firmware version HP 3PAR OS 3.2.1 MU2 and Web Services API version 1.4.2

– Adaptive Flash Cache license installed

– Available SSD Disks

– The assigned CPG for a Flash Cache volume must be set to device type of “SSD”

• Flash Cache must be enabled on the HP 3PAR StoreServ Storage array. This is done with the CLI command—“createflashcache <size>” (size must be in 16 GB increments). For example, “createflashcache 128g” will create 128 GB of Flash Cache for each node pair in the array.

• If you specify dedup as the hp3par:provisioning value, the HP 3PAR StoreServ Storage array must meet the following requirements:

– Firmware version HP 3PAR OS 3.2.1 MU1 and Web Services API version 1.4.1

– Thin Deduplication license installed

– Available SSD Disks

– The assigned CPG for a Thin Deduplication volume must be set to device type of “SSD”


Creating and setting qos_specs

The qos_specs need to be created and associated with a volume type. To use these QoS settings, the HP 3PAR StoreServ Storage array must have a Priority Optimization license installed. The HP 3PAR qos_specs (available since the Icehouse release) do not require a scoping prefix.

• minIOPS—Sets the QoS I/O issue count minimum goal. If not specified, there is no limit on I/O issue count.

• maxIOPS—Sets the QoS I/O issue count rate limit. If not specified, there is no limit on I/O issue count.

• minBWS—Sets the QoS I/O issue bandwidth minimum goal. If not specified, there is no limit on I/O issue bandwidth rate.

• maxBWS—Sets the QoS I/O issue bandwidth rate limit. If not specified, there is no limit on I/O issue bandwidth rate.

• latency—Sets the latency goal in milliseconds.

• priority—Sets the priority of the QoS rule over other rules. Defaults to normal; the valid values are low, normal, and high.

Any or all of the above capabilities can be set on a volume type. They override the default values that were specified in the cinder.conf or are just additional capabilities that the HP 3PAR StoreServ Storage array offers. See the Extra_specs restrictions section, which provides constraints on when the VVS and QoS settings are set for a single volume type.

Since the Icehouse release, minIOPS and maxIOPS must be used together to set I/O limits. Similarly, minBWS and maxBWS must be used together. If only one is set, the other will be set to the same value. For example, if a qos-create was called with only minIOPS=10000 being set, then maxIOPS would also be set to 10000.
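For example, creating a spec with only the minimum set (the spec name silver_iops is illustrative):

$cinder qos-create silver_iops minIOPS=10000

The resulting qos_specs carries both minIOPS=10000 and maxIOPS=10000.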

All qos_specs can be managed in the Admin Horizon UI or on the command line. Use the following command to list all the qos_specs currently configured:

$ cinder --os-username admin --os-tenant-name admin qos-list

The qos_specs can be created by using the qos-create command, following this format:

$cinder --os-username admin --os-tenant-name admin qos-create <name> <key=value> [<key=value> [<key=value> ...]]

The argument <name> is the name of the new QoS spec. The argument <key=value> is the key and value pair to create for this qos_specs. You must have at least one key=value pair.

After the qos_specs are created, you can also set or unset keys and values on the command line, following this format:

$cinder --os-username admin --os-tenant-name admin qos-key <qos_specs> <action> [<key=value> [<key=value> ...]]

The argument <qos_specs> is the ID of the qos_specs; you can retrieve it by running cinder qos-list. The argument <action> must be either set or unset. The argument <key=value> is the key and value to set; only the key is necessary for unset.

Next, connect the qos_specs to a volume type by making an association. You can associate the qos_specs ID with the volume type ID that is connected to a particular Block Storage Driver by issuing the following command:

$cinder --os-username admin --os-tenant-name admin qos-associate <qos_specs_id> <volume_type_id>

You can undo an association using the qos-disassociate command.

$cinder --os-username admin --os-tenant-name admin qos-disassociate <qos_specs_id> <volume_type_id>

To find the <qos_specs_id>, run the cinder qos-list command. To find the <volume_type_id>, run the cinder extra-specs-list command. The volume type used must also have a volume_backend_name assigned to it:

volume_backend_name=<volume backend name>


"Create" examples:
$cinder qos-create high_iops minIOPS=1000 maxIOPS=100000

$cinder qos-create high_bws maxBWS=5000

"Set" examples:
$cinder qos-key 563055a9-f17f-4553-8595-4a948b5bf010 set priority=high minIOPS=100000

$cinder qos-key d58adb0b-a282-43c5-8c13-550c38df31b8 set maxIOPS=2000 maxBWS=100

"Unset" examples:
$cinder qos-key 563055a9-f17f-4553-8595-4a948b5bf010 unset priority

$cinder qos-key d58adb0b-a282-43c5-8c13-550c38df31b8 unset maxIOPS maxBWS

When you want to unset a particular key value pair from a volume type, only the key is required.

"Associate" example:
$cinder qos-associate 563055a9-f17f-4553-8595-4a948b5bf010 71ca8337-5cbf-43f5-b634-c0b35808d9c4

Where 563055a9-f17f-4553-8595-4a948b5bf010 is the ID of the qos_specs and 71ca8337-5cbf-43f5-b634-c0b35808d9c4 is the ID of the volume type. These IDs can be found by running the cinder qos-list and cinder extra-specs-list commands.

"Disassociate" example:
$cinder qos-disassociate 563055a9-f17f-4553-8595-4a948b5bf010 71ca8337-5cbf-43f5-b634-c0b35808d9c4

qos_specs restrictions
Certain constraints apply when using one or more of the qos_specs documented in the Creating and setting qos_specs section.

• Errors occur if you attempt to use vvs or the qos setting without the Priority Optimization license installed on the HP 3PAR StoreServ Storage array.

• If you specify hp3par:vvs virtual volume set as an extra_spec and one or more of the qos settings, the qos settings are ignored and the volume is created in the VVS specified.

Multiple storage backend support and Block Storage configuration

Multiple backend support was added to OpenStack in the Grizzly release. Detailed instructions on setting up multiple backends can be found in the OpenStack Configuration Reference Guide.

The multi-backend configuration is done in the cinder.conf file. The enabled_backends flag has to be set up; it defines the names (separated by commas) of the config groups for the different backends. Each name is associated with one config group for a backend (e.g., [3parfc-1]). Each group must have a full set of the driver-required configuration options. Figure 4 shows a sample cinder.conf file for three different HP 3PAR StoreServ Storage array backends, configuring two Fibre Channel drivers and one iSCSI Cinder driver.

Note: Currently, the HP 3PAR drivers communicate with the HP 3PAR StoreServ Storage array over HTTP or HTTPS and SSH. This means that both the hp3par_username/password and san_login/password entries must be configured in the cinder.conf file.


Figure 4. Sample cinder.conf file

# List of backends that will be served by this node
enabled_backends=3parfc-1,3parfc-2,3pariscsi-1

[3parfc-1]
volume_driver=cinder.volume.drivers.san.hp.hp_3par_fc.HP3PARFCDriver
volume_backend_name=3par_FC
hp3par_api_url=https://10.10.22.241:8080/api/v1
hp3par_username=<username>
hp3par_password=<password>
hp3par_cpg=OpenStackCPG_RAID5_NL,cpggold1
san_ip=10.10.22.241
san_login=<san_username>
san_password=<san_password>

[3parfc-2]
volume_driver=cinder.volume.drivers.san.hp.hp_3par_fc.HP3PARFCDriver
volume_backend_name=3par_FC
hp3par_api_url=https://10.10.22.242:8080/api/v1
hp3par_username=<username>
hp3par_password=<password>
hp3par_cpg=OpenStackCPG_RAID6_NL,cpggold2
san_ip=10.10.22.242
san_login=<san_username>
san_password=<san_password>

[3pariscsi-1]
volume_driver=cinder.volume.drivers.san.hp.hp_3par_iscsi.HP3PARISCSIDriver
hp3par_iscsi_ips=10.10.220.253,10.10.220.254
hp3par_api_url=https://10.10.22.243:8080/api/v1
volume_backend_name=3par_ISCSI
hp3par_username=<username>
hp3par_password=<password>
hp3par_cpg=OpenStackCPG_RAID6_ISCSI
san_ip=10.10.22.243
san_login=<username>
san_password=<password>

In this configuration, both "3parfc-1" and "3parfc-2" have the same volume_backend_name. When a volume request comes in with the "3par_FC" backend name, the scheduler must choose which one is most suitable. This is done with the capacity filter scheduler; see details in the Block Storage scheduler configuration with multi-backend section. This example also includes a single iSCSI-based HP 3PAR Cinder driver with a different volume_backend_name.

In this configuration, both “3parfc-1” and “3parfc-2” also show multiple CPGs in their hp3par_cpg option. These CPGs are used as “pools” in Kilo.
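To confirm which pools each backend exposes to the scheduler, the cinder client's pool listing command can be used (a sketch; each pool is reported as <host>@<backend>#<CPG>):

$ cinder get-pools

For the configuration in figure 4, the 3parfc-1 backend would report pools such as mystack@3parfc-1#OpenStackCPG_RAID5_NL and mystack@3parfc-1#cpggold1 (the host name mystack is illustrative).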

iSCSI target port selection

The HP 3PAR iSCSI OpenStack driver provides the ability to select the best-fit target iSCSI port from a list of candidate ports. The first time a volume is attached to a host, all iSCSI ports configured for driver selection are examined for best fit. The port with the least active volumes attached is selected as the communication path to the HP 3PAR StoreServ Storage array. Any subsequent volumes attached to the same host will use the established target port.

To configure the candidate iSCSI ports used for best-fit selection, set the cinder.conf option hp3par_iscsi_ips to a comma-separated list of IP addresses. Do not use quotes around the list. For example, the section for the backend config group name [3pariscsi-1] in the cinder.conf file in figure 4 is as follows:

hp3par_iscsi_ips=10.10.220.253,10.10.220.254

If the single iSCSI cinder.conf option iscsi_ip_address is set, it will be included as a possible candidate for port selection at volume attach time.
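For example, both options can appear together in the backend section; a sketch based on figure 4 (the iscsi_ip_address value 10.10.220.252 is illustrative):

[3pariscsi-1]
iscsi_ip_address=10.10.220.252
hp3par_iscsi_ips=10.10.220.253,10.10.220.254

With this configuration, all three addresses are candidates at volume attach time.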


At driver startup, target iSCSI ports are verified with the HP 3PAR StoreServ Storage array to ensure each is a valid iSCSI port. If an invalid iSCSI port is identified, the following message is logged to the cinder-volume log file:

2013-07-02 08:50:50.934 WARNING cinder.volume.drivers.san.hp.hp_3par_iscsi [req-6c6e6807-5543-46dd-ba66-30149f24758d None None] Found invalid IP address(s) in configuration option(s) hp3par_iscsi_ips or iscsi_ip_address '10.10.22.230, 10.10.220.25'

If no valid iSCSI port is found, the following exception is logged and the driver fails:

2013-07-02 08:53:57.559 TRACE cinder.service InvalidInput: Invalid input received: At least one valid iSCSI IP address must be set.

Fibre Channel target port selection

Before the Juno release, the HP 3PAR FC OpenStack driver would always use all available FC ports on the HP 3PAR host when an instance was attached to a volume, even if only one FC path was available to that host. Now the HP 3PAR FC OpenStack driver can detect if only a single FC path is available. When a single FC path is detected, only a single VLUN is created, instead of one for every available NSP (node:slot:port) on the HP 3PAR host. This prevents an HP 3PAR host from using extra FC ports that are not needed. If multiple FC paths are available, all the ports are used.

To configure HP 3PAR OpenStack FC driver target port selection (added in Juno), the Fibre Channel Zone Manager needs to be configured, and zoning_mode=fabric must be set in cinder.conf to enable target port selection. If zoning_mode is not set in cinder.conf, all available FC ports are used. See the OpenStack Configuration Reference Guide for details.
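A minimal cinder.conf sketch of enabling fabric zoning (this assumes the Zone Manager itself is already configured in its own [fc-zone-manager] section, as described in the reference guide):

[DEFAULT]
zoning_mode=fabric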

Block Storage scheduler configuration with multi-backend

Multi-backend must be used with filter_scheduler enabled. Filter scheduler acts in two steps:

1. Filter scheduler filters the available backends. By default, AvailabilityZoneFilter, CapacityFilter, and CapabilitiesFilter are enabled.

2. Filter scheduler weighs the previously filtered backends. By default, the CapacityWeigher is enabled. The CapacityWeigher assigns high scores to backends with the most available space.

According to the filtering and weighing, the scheduler will be able to pick “the best” backend to handle the request. In that way, filter scheduler achieves the goal of explicitly creating volumes on specific backends using volume types.

From the Grizzly release forward, the default scheduler is the FilterScheduler.

(scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler)

So, the line does not need to be added to the cinder.conf file.
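If you do want the scheduler configuration to be explicit, a minimal cinder.conf sketch using the default filter and weigher names listed above:

scheduler_driver=cinder.scheduler.filter_scheduler.FilterScheduler
scheduler_default_filters=AvailabilityZoneFilter,CapacityFilter,CapabilitiesFilter
scheduler_default_weighers=CapacityWeigher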


Block Storage scheduler configuration with driver filter and weigher

The driver filter and weigher for the Block Storage scheduler is a feature (new in Kilo) that, when enabled, allows a filter function and a goodness function to be defined in your cinder.conf file. The two functions are used at volume creation time by the Block Storage scheduler to determine which backend is ideal for the volume. The filter function filters out backend choices that should not be considered at all. The goodness function ranks the filtered backends from 0 to 100. This feature should be used when the default Block Storage scheduling does not provide enough control over where volumes are created.

Enable the driver filter for the scheduler by adding DriverFilter to the scheduler_default_filters property in your cinder.conf file. Enabling the driver weigher is similar: add GoodnessWeigher to the scheduler_default_weighers property in your cinder.conf file. If you wish to include other OpenStack filters and weighers in your setup, make sure to add those to the scheduler_default_filters and scheduler_default_weighers properties as well.

Note: You can choose to have only the DriverFilter or GoodnessWeigher enabled in your cinder.conf file, depending on how much customization you want.

OpenStack supports various math operations that can be used in the filter and goodness functions. The currently supported list of math operations can be seen in table 1.

Table 1. Supported math operations for filter and goodness functions

Operations                    Type
+, -, *, /, ^                 standard math
not, and, or, &, |, !         logic
>, >=, <, <=, ==, <>, !=      equality
+, -                          sign
x ? a : b                     ternary
abs(x), max(x, y), min(x, y)  math helper functions

Several driver specific properties are available for use in the filter and goodness functions for an HP 3PAR backend. The currently supported list of HP 3PAR specific properties include:

• capacity_utilization—Percent of total space used on the HP 3PAR CPG.

• total_volumes—The total number of volumes on the HP 3PAR CPG.

Additional generic volume properties are available from OpenStack for use in the filter and goodness functions. These properties can be seen in the OpenStack Cloud Administrator Guide.

Note: Access the HP 3PAR specific properties by using the following format in your filter or goodness functions: capabilities.<property>


The sample cinder.conf file in figure 5 shows an example of how several HP 3PAR backends could be configured to use the driver filter and weigher from the Block storage scheduler.

Figure 5. Sample cinder.conf file showing driver filter and weigher usage

[DEFAULT]
scheduler_default_filters = DriverFilter
scheduler_default_weighers = GoodnessWeigher
enabled_backends = 3parfc-1, 3parfc-2, 3parfc-3

[3parfc-1]
hp3par_api_url = <api_url>
hp3par_username = <username>
hp3par_password = <password>
san_ip = <san_ip>
san_login = <san_username>
san_password = <san_password>
volume_backend_name = 3parfc
hp3par_cpg = CPG-1
volume_driver = cinder.volume.drivers.san.hp.hp_3par_fc.HP3PARFCDriver
filter_function = "capabilities.total_volumes < 10"
goodness_function = "(capabilities.capacity_utilization < 75) ? 90 : 50"

[3parfc-2]
hp3par_api_url = <api_url>
hp3par_username = <username>
hp3par_password = <password>
san_ip = <san_ip>
san_login = <san_username>
san_password = <san_password>
volume_backend_name = 3parfc
hp3par_cpg = CPG-2
volume_driver = cinder.volume.drivers.san.hp.hp_3par_fc.HP3PARFCDriver
filter_function = "capabilities.total_volumes < 10"
goodness_function = "(capabilities.capacity_utilization < 50) ? 95 : 45"

[3parfc-3]
hp3par_api_url = <api_url>
hp3par_username = <username>
hp3par_password = <password>
san_ip = <san_ip>
san_login = <san_username>
san_password = <san_password>
volume_backend_name = 3parfc
hp3par_cpg = CPG-3
volume_driver = cinder.volume.drivers.san.hp.hp_3par_fc.HP3PARFCDriver
filter_function = "capabilities.total_volumes < 20"
goodness_function = "(capabilities.capacity_utilization < 90) ? 75 : 40"

In figure 5 there are three HP 3PAR backends enabled in the cinder.conf file. The sample shows how you can use HP 3PAR specific properties to distribute volumes with more control than the default Block storage scheduler.

Note: Remember that you can combine the HP 3PAR specific properties with the generic volume properties provided by OpenStack. Also, the values used in the above sample are examples only; in your own environment you have full control over the filter and goodness functions that you create. Refer to the OpenStack Cloud Administrator Guide for more details and examples.


Volume types assignment

Use the following command or the Admin Horizon UI (new in Juno) to specify a volume_backend_name for each volume type you create. This links the volume type to a backend name.

$ cinder --os-username admin --os-tenant-name admin type-key gold set volume_backend_name=3parfc-1

The second volume type could be for an iSCSI driver volume type named silver.

$ cinder --os-username admin --os-tenant-name admin type-key silver set volume_backend_name=3pariscsi-1

Multiple key-value pairs can be specified when running the above command. For example, you could run the following command to give the gold volume type a VMware host persona and full provisioning:

$ cinder --os-username admin --os-tenant-name admin type-key gold set volume_backend_name=3parfc-1 hp3par:persona='8 - VMware' hp3par:provisioning=full

Multiple backend requirements
• In the Grizzly release, hard-coding the volume_backend_name was required, using either the HP3PARFCDriver or the HP3PARISCSIDriver. From the Havana release forward, you can name the volume_backend_name whatever you like.

• The hp3par_domain setting in the cinder.conf file was deprecated in the Havana release and removed in the Icehouse release. The driver now looks up the domain based on the CPG specified in the cinder.conf file (or in the hp3par:cpg extra-spec volume type setting, prior to Kilo only).

• Errors will occur if you try to attach volumes from different domains to the same HP 3PAR host.

Volume migration

Starting in the Icehouse release, unattached volumes can be migrated between different CPGs in the same HP 3PAR backend, directly within the backend. Volume migration requires that you have the Dynamic Optimization license installed on your HP 3PAR Storage array. First, configure Cinder to use multiple backends, as explained in the Block Storage scheduler configuration with multi-backend section. Using the command line, you can list the available driver instances, represented as "hosts" from the cinder.conf file, with the following command:

$cinder-manage host list

mystack mystack@3parfc-1

mystack@3parfc-2

mystack@3pariscsi-1

To see which HP 3PAR driver instance is managing a particular volume:

$cinder show <volume_id>

Where <volume_id> represents the volume ID, and the host is in the attribute os-vol-host-attr:host.

os-vol-host-attr:host mystack@3parfc-1#cpggold1

To migrate a volume to a different driver instance, and therefore to a different CPG, use the command:

$cinder migrate <volume_id> <host>#<pool>

Where <volume_id> represents the volume ID and <host> represents the driver instance. The <pool> is required. In the Juno release, for the HP 3PAR drivers the pool is just a repeat of the driver backend name. In Kilo and later, the HP 3PAR drivers use the CPG as the pool.

$cinder migrate 3e57599e-7327-4596-a45f-d29939c836cf mystack@3parfc-2#cpggold2


Note: Cinder migrate requires the hosts or drivers to have the same volume_backend_name in the cinder.conf file. Changing the example above so that all three drivers have the same volume_backend_name=3par would enable volume migration between all of them.

Volume manage and unmanage

Starting in the Juno release, HP 3PAR volumes can be managed and unmanaged. This allows importing non-OpenStack volumes that already exist on an HP 3PAR Storage array into OpenStack/Cinder, and exporting volumes, which removes them from the OpenStack/Cinder perspective while leaving them intact on the HP 3PAR Storage array. Using the command line, you can see the available driver instances, represented as "hosts," from the cinder.conf file. This host is where the HP 3PAR volume that you would like to manage resides. Use the following command:

$cinder-manage host list

mystack mystack@3parfc-1

mystack@3parfc-2

mystack@3pariscsi

To manage a volume that exists on the HP 3PAR array but is not already managed by OpenStack/Cinder, use the command:

$cinder manage --name <cinder name> <host>#<pool> <source-name>

Where <source-name> represents the name of the volume to manage, <cinder name> is optional and represents the OpenStack name, and <host> represents the driver instance. The <pool> is required. In Juno, for the HP 3PAR drivers, the pool is just a repeat of the driver backend name. In Kilo and later, the HP 3PAR drivers use one of the CPGs configured for the backend as the pool. The manage volume command also accepts an optional --volume-type parameter that performs a retype of the virtual volume after it is managed, as shown in the second example below.

$cinder manage --name volgold mystack@3parfc-2#cpggold2 volume123
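To apply a volume type as part of the same operation (using the gold volume type created earlier), add the optional parameter:

$cinder manage --name volgold --volume-type gold mystack@3parfc-2#cpggold2 volume123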

Note: Cinder manage will rename the volume on the HP 3PAR Storage array to a name that starts with "osv-" followed by a UUID, as this is required for OpenStack/Cinder to locate the volume under its management.

To unmanage a volume from OpenStack/Cinder and leave the volume intact on the HP 3PAR Storage array, use the command:

$ cinder unmanage <volume_id>

Where <volume_id> is the ID of the OpenStack/Cinder volume to unmanage:

$cinder unmanage 16ab6873-eb09-4522-8d0f-91aab83be34d

Note: Cinder unmanage will remove the OpenStack/Cinder volume from OpenStack, but the volume will remain intact on the HP 3PAR Storage array. The volume name will have "umn-" prefixed to it, followed by an encoded UUID. This is required because the HP 3PAR has name length and character limitations.


Volume retype

Volume retype is now available (since the Juno release). Retype only works if the volume stays on the same HP 3PAR Storage array. This allows retyping a volume, for example, from a "silver" volume type to a "gold" volume type. The HP 3PAR OpenStack drivers modify the volume's Snap CPG, provisioning type, persona, and QoS settings, as needed, to make the volume behave appropriately for the new volume type. The ability to change a volume's CPG existed prior to Kilo. In Kilo and later, separately configured backends with CPGs (as pools) should be used to allow the scheduler to select the appropriate CPG. Volume retype also requires that you have the Dynamic Optimization license enabled on your HP 3PAR Storage array.

Use caution when using the optional "--migration-policy on-demand," because this falls back to copying the entire volume (using dd over the network) to the Cinder node and then to the destination HP 3PAR Storage array. The Cinder node also has to have enough space available to store the entire volume during the migration. We recommend that you use the default "--migration-policy never" when retype is used.
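A sketch of the recommended form, reusing the volume ID from the migration example and the gold volume type (both illustrative):

$cinder retype 3e57599e-7327-4596-a45f-d29939c836cf gold --migration-policy never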

Note: Volume retype will not be allowed if the volume has snapshots and the retype would require a change to the Snap CPG or User CPG. The volume_backend_name in cinder.conf must be the same between the source and destination volume types when "--migration-policy" is set to "never." This is the default and recommended retype method.

Security improvements

CHAP support
CHAP (Challenge-Handshake Authentication Protocol) support was added to the HP 3PAR iSCSI driver in the Juno release and is one-way authentication (it sets the CHAP initiator on the HP 3PAR Storage array). The hp3par_iscsi_chap_enabled option in the cinder.conf must be set to True to enable iSCSI CHAP support. The current HP 3PAR host will have the CHAP setting automatically added the next time an iSCSI volume is attached.
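For example, in the iSCSI backend section of the cinder.conf from figure 4:

[3pariscsi-1]
hp3par_iscsi_chap_enabled=True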

Configurable SSH Host Key Policy and Known Hosts File
Both OpenStack Cinder and the HP 3PAR client were enhanced in the Juno release to allow for configuring the SSH Host Key Policy and Known Hosts File. This adds configuration options for ssh_hosts_key_file and strict_ssh_host_key_policy in cinder.conf.

The strict_ssh_host_key_policy option defaults to False. When False, Cinder and the HP 3PAR Client will use auto-add policy like previous versions. Auto-add allows new hosts to be added, but will raise an exception if a host that was already known starts sending a different host key. When strict_ssh_host_key_policy=True, Cinder and the HP 3PAR Client will use reject policy. With reject policy, the host must already be recorded in your known host file and match the recorded host key.

The ssh_hosts_key_file option defaults to $state_path/ssh_known_hosts (state_path is a config option that defaults to /var/lib/cinder). This setting allows you to specify the known hosts file to use for both Cinder and HP 3PAR Client SSH connections. The previous default was to use the system host keys. The client will try to create the configured file if it does not exist. If strict_ssh_host_key_policy=True, then this file needs to be pre-populated with trusted known host keys. When using strict_ssh_host_key_policy=False (the default), new hosts will be appended to the file automatically.
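A minimal sketch of a strict configuration (the path shown is the documented default location; with reject policy the file must be pre-populated with trusted host keys):

strict_ssh_host_key_policy=True
ssh_hosts_key_file=/var/lib/cinder/ssh_known_hosts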



Summary

HP is a Platinum member of The OpenStack Foundation. HP has integrated OpenStack open source cloud platform technology into its enterprise solutions to enable customers and partners to build enterprise-grade private, public, and hybrid clouds.

The Kilo release continues HP's contributions to the Cinder project, enhancing core Cinder capabilities as well as extending the HP 3PAR StoreServ Block Storage Driver. The focus continues to be on adding enterprise functionality such as Thin Deduplication provisioning, Adaptive Flash Cache, and enhanced Block Storage scheduling based on filter and goodness functions from the drivers. The HP 3PAR StoreServ Block Storage Drivers support the OpenStack technology across both iSCSI and Fibre Channel protocols.

For more information

HP
• HP press release

OpenStack
• OpenStack website
• OpenStack documentation
• OpenStack cloud administrator guide
• OpenStack Block Storage—HP 3PAR

HP 3PAR Storage array
• HP 3PAR StoreServ Storage family
• HP 3PAR Fibre Channel and iSCSI drivers

HP Cloud
• HP Helion
• HP Helion OpenStack Community
• HP Converged Cloud
• HP CloudSystem brochure

To help us improve our documents, provide feedback at hp.com/solutions/feedback.

Learn more at hp.com/go/helion