
Source: content.mirantis.com/rs/451-RBY-185/images/EMCRunbook.docx.pdf



  

 

Installation runbook for EMC VNX, XtremIO, ScaleIO 

 

Test Result:   

Test Date:  2/12/2015 

Partner Name:  EMC 

Product Name:  VNX, XtremIO, ScaleIO 

Product Version:  VNX 5800, XtremIO Gen 2, ScaleIO 1.31 

MOS Version:  6.0 

MOX Version (if applicable):   

OpenStack version:  Juno on CentOS 6.5 

Product Type:  EMC VNX, XtremIO, ScaleIO 

Partner Contact Name:  Eoghan Kelleher 

Partner Contact Phone:  508-193-5004 

Partner Contact Email:  [email protected] 

Partner Main Support Number:   

Partner Main Support Email:   

Certification Lab Location:   RTP, NC 

   


                   

Contents

Reviewers
Document History
Support contacts
1. Introduction
   1.1 Objective
   1.2 Target Audience
2. Product Overview
3. Joint reference architecture
4. Networking
   4.1 Physical & Logical network topology
5. Node specifications
6. Installation and Configuration
   6.1 Overview of MOS installation steps
   6.2 MOS Installation in details
   6.3 Creation of OpenStack environment
   6.4 MOS Deployment
   6.5 <Driver name> Installation steps
7. Testing
   7.1 Test tools
   7.2 Test cases
       7.2.1 Deployment modes and configuration options
       7.2.2 Functional testing
       7.2.3 Performance testing
       7.2.4 Negative testing
   7.3 Test results (if Fuel HealthCheck is used)

           

Reviewers  

Date    Name    E-mail

Document History  

Version    Revision Date    Description
0.1        2/13/2015        Initial version
0.2        2/24/2015        Updated with configuration details and overview for Cinder drivers
0.3        3/5/2015         Corrected zoning requirements for XtremIO

 

Support contacts  

Name    Phone    E-mail

<E.g., <Partner> could respond in X hours, Y days a week from Monday to Friday>

1. Introduction

This document serves as a detailed deployment guide for the EMC VNX, XtremIO, and ScaleIO Cinder drivers with Mirantis OpenStack (MOS) Juno on CentOS 6.5. It describes the reference architecture, the installation steps for the certified MOS Cinder drivers, known limitations, and the testing procedures.

1.1 Objective

The objective of Mirantis OpenStack certification is to provide Mirantis program partners with a consistent and unified approach for acceptance of their solution into the Mirantis Technology Partner Program. Certification is designed within the context of Mirantis OpenStack infrastructure, including the Mirantis Fuel deployment tool and supported cloud reference architectures.

1.2 Target Audience

<TBD>


2. Product Overview

This solution, encompassing the EMC VNX, XtremIO, and ScaleIO Cinder drivers, describes how to install, configure, and use each of the listed Cinder drivers with MOS 6.0. It outlines the configuration items required to successfully integrate the EMC Cinder drivers with MOS, and presents the tested and validated architecture for each driver.

3. Joint reference architecture

Placeholder for link to RA document when it is released. The following diagram outlines the system architecture that was used to validate this solution.

  

4. Networking

4.1 Physical & Logical network topology  


 

5. Installation and Configuration

5.1 Overview of MOS installation steps

Mirantis OpenStack was deployed as a multi-node HA CentOS 6.5 based stack with KVM as the hypervisor. Three nodes were dedicated to the HA controllers and seven nodes to Compute. Neutron VLAN tenant networking with hard VLAN splinters was used for this deployment. Because external arrays (VNX and XtremIO) and EMC's virtual SAN product ScaleIO provided Cinder block storage, Ceph was not used and Cinder iSCSI nodes were not deployed. All servers used for the deployment have dual NICs for network connectivity and dual HBAs for FC connectivity. Public network addresses were assigned to all nodes to allow the required connectivity with the external arrays.

5.2 MOS Installation in details

This section outlines the detailed installation steps used with Fuel and MOS. Create a VM with 2 vCPUs, 4 GB of RAM, 100 GB of disk space, and a NIC on the PXE network. Attach the Mirantis ISO and boot into the installer, making sure to press Tab at the Mirantis installer screen to set the boot option "showmenu=yes". During the installation, configure PXE with the following values:


5.3 Creation of OpenStack environment

This section outlines the creation of the OpenStack environment used to test EMC VNX, XtremIO, and ScaleIO with MOS. Create a new environment based on OpenStack Juno running on CentOS 6.5:


 Select multi-node HA for the OpenStack deployment:

 


Select KVM for the Hypervisor:

 Select Neutron with VLAN segmentation for the tenant networking:

 


Accept all defaults for Cinder and do not install Ceph:

Additional services were not required for this solution and were not installed or tested:


 

 

5.4 MOS Deployment

For the Neutron network settings, VLANs with hard splinters were enabled:

Public network addresses were assigned to all OpenStack nodes:


Cinder LVM over iSCSI for volumes was disabled since EMC arrays are used to provide block storage to Cinder.

   Networking was configured as outlined below:


  

 Three nodes were selected for Controllers:

Seven nodes were selected for Compute:


 For all Compute nodes, only the first disk was selected for the base operating system. The second disk was manually unallocated for use with ScaleIO SDS storage:  

 Disks for Controller nodes were left unchanged from the default MOS selection. The OpenStack controller nodes also served as the management nodes for the ScaleIO cluster. Networks were configured as per the diagram below for all OpenStack nodes in the MOS deployment:


  

5.5 VNX Cinder driver Installation steps

The VNX Cinder drivers are included in the MOS release of OpenStack Juno, and no extra packages are required for the drivers to function as Cinder backends. For the VNX iSCSI driver, the iscsi-initiator-utils package was manually installed on all OpenStack controller nodes to allow Copy Image to Volume. For Fibre Channel (FC) connectivity, the following packages were manually installed on all OpenStack nodes:

● sysfsutils
● sg3_utils

The following packages were also installed manually to provide support for device multi-pathing with VNX:

● device-mapper-multipath-libs
● device-mapper-multipath
● kpartx
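As a sketch, a typical /etc/multipath.conf device stanza for a VNX array (CLARiiON family, SCSI vendor string "DGC") resembles the following. The exact settings are an assumption here and should be confirmed against EMC's host connectivity guides for your array and host OS.

```shell
# Illustrative multipath.conf device stanza for VNX (vendor "DGC").
# Settings are an example only; verify against EMC host connectivity docs.
cat > /tmp/multipath.conf <<'EOF'
blacklist {
    devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
}
devices {
    device {
        vendor "DGC"
        product ".*"
        product_blacklist "LUNZ"
        path_grouping_policy group_by_prio
        path_checker emc_clariion
        hardware_handler "1 alua"
        prio alua
        failback immediate
    }
}
EOF
# On a real node the file lives at /etc/multipath.conf; reload with:
#   service multipathd restart
```

The "LUNZ" blacklist entry matters on VNX: the array presents a placeholder LUNZ device until a real LUN is assigned, and multipath should ignore it.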

5.5.1 VNX Cinder driver Overview

The VNX Cinder driver supports both iSCSI and FC; a brief overview of each is given below. Both drivers use EMC Naviseccli to communicate directly with the VNX array, so Naviseccli must be installed on all OpenStack controllers, with authentication configured for the VNX management network.

5.5.2 VNX iSCSI driver Configuration

The VNX iSCSI driver is enabled by setting the Cinder volume driver in cinder.conf to the EMC iSCSI driver, as outlined in the EMC VNX Direct Driver section of the OpenStack Configuration Reference Guide for Juno. Storage pools are created manually on the VNX array and provide the storage from which Cinder provisions volumes.
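As an illustration, a minimal backend fragment for the VNX iSCSI driver might look like the sketch below. The array address, credentials, and pool name are placeholders, and the authoritative option list is the EMC VNX Direct Driver section of the Juno Configuration Reference.

```shell
# Illustrative sketch only: a VNX iSCSI backend fragment.  In a real
# deployment this is merged into /etc/cinder/cinder.conf; all values
# (management IP, credentials, pool name) are placeholders.
cat > /tmp/cinder_vnx_iscsi.conf <<'EOF'
[DEFAULT]
volume_driver = cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver
# VNX storage-processor management address (placeholder)
san_ip = 192.0.2.10
san_login = sysadmin
san_password = sysadmin
# Path to the locally installed Navisphere CLI
naviseccli_path = /opt/Navisphere/bin/naviseccli
# Pre-created storage pool on the VNX that backs Cinder volumes
storage_vnx_pool_name = Pool_Cinder
# Auto-register host iSCSI initiators on the array
initiator_auto_registration = True
EOF
# Restart the cinder-volume service after editing the real cinder.conf.
```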


For VNX to provide iSCSI storage to hosts, those hosts must have their iSCSI initiators registered on the VNX array. The VNX driver options include auto-registration: when it is enabled, every host initiator that logs in to the VNX array is registered automatically and no further steps are required on the array.

The VNX driver also supports multiple backends, which lets users map different backing storage pools on the VNX array to different Cinder backends, for example a flash-backed storage pool for the highest performance tier and an HDD-backed pool for other storage. If multiple backends are configured, users must create a Cinder volume type for each backend.

Once the driver has been configured and the cinder-volume service has been restarted, users can start provisioning volumes with Cinder. Volumes are created in the storage pool tied to the volume backend. On volume attachment, the following steps attach the LUN to the instance:

● The VNX driver will create a Storage Group (using Naviseccli to talk directly to the array) for the host initiator of the Compute node requesting the attachment.

● When the storage group is created, it will add the requested LUN to the storage group.

● When added, it initiates an iSCSI login from the Compute node to the VNX which will present the LUN as a raw device to the host.

From there, nova-compute attaches the LUN to the instance. Volume detachment first detaches the volume from the instance via Nova and, once that completes, removes the backing LUN from the VNX storage group.
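The multi-backend arrangement described above can be sketched as follows. The backend section names, pool names, and volume type names here are invented for this example; the driver options follow the Juno multi-backend convention of one configuration section per backend.

```shell
# Illustrative multi-backend layout for cinder.conf; section names,
# pool names, and backend names are placeholders for this sketch.
cat > /tmp/cinder_backends.conf <<'EOF'
[DEFAULT]
enabled_backends = vnx-flash,vnx-hdd

[vnx-flash]
volume_driver = cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver
storage_vnx_pool_name = Pool_Flash
volume_backend_name = VNX_Flash

[vnx-hdd]
volume_driver = cinder.volume.drivers.emc.emc_cli_iscsi.EMCCLIISCSIDriver
storage_vnx_pool_name = Pool_HDD
volume_backend_name = VNX_HDD
EOF

# A volume type is then tied to each backend (run against a live cloud):
#   cinder type-create flash
#   cinder type-key flash set volume_backend_name=VNX_Flash
#   cinder type-create hdd
#   cinder type-key hdd set volume_backend_name=VNX_HDD
```

Users then request `cinder create --volume-type flash 10` (for example) and the scheduler places the volume on the matching backend.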

5.5.3 VNX FC driver Configuration

The VNX FC driver is enabled by setting the Cinder volume driver in cinder.conf to the EMC FC driver, as outlined in the EMC VNX Direct Driver section of the OpenStack Configuration Reference Guide for Juno. Storage pools are created manually on the VNX array and provide the storage from which Cinder provisions volumes.

As with iSCSI hosts, FC hosts must also be registered on the VNX array. With the auto initiator registration setting outlined in the VNX Direct Driver documentation, no manual steps are required on the VNX array other than creating the storage pools that back Cinder volume provisioning.

FC configurations require FC zoning entries for all hosts in the environment. For the Juno release of OpenStack, Brocade provides an auto-zoning feature, which was used in the testing of this solution. When it is enabled, the Brocade driver creates zones automatically as required, and the only remaining FC requirement is cabling the host WWNs to the Brocade fabric.

VNX FC, like the iSCSI driver, uses Naviseccli to create storage groups and to add and remove LUNs from storage groups. On a volume attachment request in this environment, the following steps occur:

● If a zone does not exist for the required Compute host, Brocade first creates a zone mapping the host HBAs to the VNX array.


● Once the zoning has been created, the VNX driver will create a storage group which will include the host FC initiators and the LUN required for the attach.

● When the LUN has been assigned to the storage group, a device rescan on the Compute host will surface the LUN. This is handled by nova-compute and once the LUN successfully surfaces on the Compute host it is attached to the instance by nova-compute. 
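The FC driver plus Brocade auto-zoning setup described above might be sketched as the following cinder.conf fragment. The fabric name, switch address, and credentials are placeholders, and the zone-manager option names follow the Juno Fibre Channel Zone Manager documentation; confirm them against that reference before use.

```shell
# Illustrative VNX FC + Brocade zone-manager fragment for cinder.conf.
# Fabric name, switch address, and credentials are placeholders.
cat > /tmp/cinder_vnx_fc.conf <<'EOF'
[DEFAULT]
volume_driver = cinder.volume.drivers.emc.emc_cli_fc.EMCCLIFCDriver
# Delegate zone creation/removal to the FC zone manager
zoning_mode = fabric

[fc-zone-manager]
zone_driver = cinder.zonemanager.drivers.brocade.brcd_fc_zone_driver.BrcdFCZoneDriver
fc_fabric_names = fabricA

[fabricA]
fc_fabric_address = 192.0.2.20
fc_fabric_user = admin
fc_fabric_password = password
zoning_policy = initiator-target
EOF
```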

5.6 XtremIO Cinder driver Installation steps

The XtremIO Cinder drivers are included in the MOS release of OpenStack Juno, and no extra packages were required for the drivers to function correctly. For the XtremIO iSCSI driver, the iscsi-initiator-utils package was manually installed on all OpenStack controller nodes. For Fibre Channel (FC) connectivity, the following packages were manually installed on all OpenStack nodes:

● sysfsutils
● sg3_utils

The following packages were also installed manually to provide support for device multi-pathing with XtremIO:

● device-mapper-multipath-libs
● device-mapper-multipath
● kpartx

5.6.1 XtremIO driver Configuration

Configuration of the EMC XtremIO Cinder driver is outlined in the OpenStack Configuration Reference Guide for Juno. To configure XtremIO FC or iSCSI as the backend for Cinder volumes, the volume_driver in cinder.conf must be set to either the XtremIO FC or the XtremIO iSCSI driver. In addition, login credentials with administrative privileges on the XtremIO array must be specified for the XtremIO Cinder driver to communicate with the array.

For iSCSI environments, no other steps are required: the XtremIO driver automatically creates host initiator groups when needed and handles all mapping of LUNs to OpenStack nodes. When a volume attachment is requested, the XtremIO driver ensures the host initiator has an initiator group on the array containing a mapping to the XtremIO LUN. It then initiates an iSCSI login from the host initiator to the array, and the new LUN is presented to the host as a raw device. Once the LUN mapping has completed, nova-compute rescans until the new LUN surfaces, then maps the LUN directly to the instance.

For FC environments, all hosts require connectivity to the FC fabric from the host HBAs, so zoning entries were created from all OpenStack nodes to the XtremIO array. In an FC environment, a volume attach likewise results in the XtremIO driver creating an initiator group containing the host HBA and mapping the requested LUN to that initiator group. When complete, it hands control back to nova-compute, which issues a device rescan on the host. When the LUN surfaces, nova-compute maps the LUN directly to the instance.
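An XtremIO backend fragment might look like the sketch below. The management address and credentials are placeholders, and the driver class names should be confirmed against the Juno Configuration Reference (the FC variant is XtremIOFibreChannelDriver in the same module).

```shell
# Illustrative XtremIO iSCSI backend fragment for cinder.conf.
# The management IP and credentials are placeholders; for FC, swap the
# driver class for the XtremIO Fibre Channel driver in the same module.
cat > /tmp/cinder_xtremio.conf <<'EOF'
[DEFAULT]
volume_driver = cinder.volume.drivers.emc.xtremio.XtremIOIscsiDriver
# XtremIO Management Server (XMS) address and admin credentials
san_ip = 192.0.2.30
san_login = admin
san_password = password
EOF
# Restart cinder-volume after merging this into /etc/cinder/cinder.conf.
```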


5.7 ScaleIO Cinder driver Installation steps

All ScaleIO software and Cinder drivers were downloaded manually from support.emc.com. The following steps were taken to install ScaleIO on MOS:

1. ScaleIO MDM, Callhome, SDC, and GW packages installed on the first OpenStack controller node
2. Java installed on the GW node
3. ScaleIO MDM, Callhome, and SDC installed on the second OpenStack controller
4. ScaleIO Tie-Breaker and SDC installed on the third controller node
5. ScaleIO SDS and SDC packages installed on all Compute nodes

When the ScaleIO components were successfully installed, the ScaleIO cluster was configured with all SDS nodes providing storage to the cluster using dedicated local HDDs on each Compute node. The steps for installing and configuring the ScaleIO Cinder driver are outlined in the ScaleIO v1.31 User Guide.

5.7.1 ScaleIO Overview and driver Configuration

ScaleIO is a software-only solution that uses the local hard disks of existing servers to create a massively scalable storage cluster. ScaleIO has two main components that manage its underlying storage: the ScaleIO Data Server (SDS), which provides the underlying storage on each SDS host, and the ScaleIO Data Client (SDC), which exposes ScaleIO volumes as block devices to servers running the SDC. ScaleIO uses a proprietary protocol to attach volumes to ScaleIO hosts; in a Mirantis deployment this protocol runs over the existing storage network.

The instructions for downloading, installing, and configuring the ScaleIO OpenStack driver must be obtained directly from EMC. In our test environment, all ScaleIO management components were installed on the OpenStack controllers, and all Compute nodes in the tested solution provided ScaleIO SDS and SDC. Full instructions for configuring ScaleIO can be found in the ScaleIO v1.31 User Guide on EMC's support website. For the purposes of this solution, all ScaleIO networking was configured on the br-storage network.

Once the ScaleIO cluster has been successfully configured, configuring the ScaleIO driver requires the following:

● The ScaleIO libvirt drivers must be manually copied to all Compute nodes, and the ScaleIO filters must be specified in nova/rootwrap.d. When complete, the nova-compute services must be restarted.
● The ScaleIO Cinder drivers must be copied manually to each OpenStack controller host, and the ScaleIO filters file must be copied to /cinder/rootwrap.d.
● In addition, a ScaleIO configuration file is created in /etc/cinder that includes the credentials and address of the ScaleIO Gateway together with the ScaleIO protection domains and storage pools to be used with Cinder.
● When all Cinder host configuration is complete, the cinder-volume services must be restarted to successfully load the ScaleIO Cinder driver.
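The ScaleIO configuration file mentioned above might look like the sketch below. The section and field names here are hypothetical illustrations of the kind of content the file carries (Gateway address, credentials, protection domain, storage pools); the authoritative format is defined by the EMC-supplied driver package and the ScaleIO v1.31 User Guide.

```shell
# Hypothetical sketch of the ScaleIO driver configuration file kept in
# /etc/cinder.  Field names and values are illustrative, not authoritative;
# consult the EMC-supplied driver documentation for the real format.
cat > /tmp/cinder_scaleio.config <<'EOF'
[scaleio]
# ScaleIO Gateway (REST server) address and credentials -- placeholders
rest_server_ip = 192.0.2.40
rest_server_username = admin
rest_server_password = password
# Protection domain and storage pool(s) exposed to Cinder
protection_domain_name = pd1
storage_pools = pd1:pool1
EOF
# Restart cinder-volume on every controller once the driver, filters,
# and this configuration file are in place.
```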

ScaleIO does not support multiple Cinder backends, but it allows the use of multiple ScaleIO storage pools with the Cinder pool-aware scheduler; this is an optional item in the ScaleIO Cinder configuration file. The OpenStack flow for volume attachments with ScaleIO is as follows:

● Cinder volume creation calls the ScaleIO add-volume command.
● Attaching a volume to an instance calls the ScaleIO map_volume_to_instance, which connects the LUN to the Compute host.
● After the volume has been attached to the Compute host, nova-compute attaches the volume to the instance.

6. Testing

6.1 Test tools

Manual tests were run against all supported Cinder functionality for each of the EMC drivers. FC and iSCSI were tested for both the VNX and XtremIO drivers. Tests included API/CLI tests and UI tests of all supported functions. The testing carried out to verify this solution was functional in nature and did not involve performance or scale testing. In addition to the manual tests, a custom set of automation scripts was written in Python to test all functionality of the VNX, XtremIO, and ScaleIO drivers. The automated tests were run together with the manual testing.

6.2 VNX Test cases

The table below outlines the VNX tests that were completed for this solution.

Create Cinder volume with iSCSI Pool-based backend
    Create Cinder volumes of varying sizes and verify the volumes are created successfully and the volume size matches the requested size
Create Cinder volume with FC Pool-based backend
    Create Cinder volumes of varying sizes and verify the volumes are created successfully and the volume size matches the requested size
Create Cinder volume with iSCSI Array-based backend
    Create Cinder volumes of varying sizes and verify the volumes are created successfully and the volume size matches the requested size
Create Cinder volume with FC Array-based backend
    Create Cinder volumes of varying sizes and verify the volumes are created successfully and the volume size matches the requested size
List cinder volumes
    Verify Cinder list returns the correct list of created volumes and volume names
Delete cinder volume
    Verify Cinder volumes can be deleted, and verify the volume is removed from Cinder and from the VNX array
Snapshot cinder volume
    Verify Cinder volume snapshots can be taken. Verify also that data written to the volume is successfully saved in the volume snapshot
List volume snapshots
    Verify Cinder snapshots can be successfully listed and all existing snapshots are listed
Delete volume snapshots
    Verify Cinder volume snapshots can be successfully deleted, and verify the snapshot is removed from the snapshot list
Attach volume
    Verify Cinder volumes can be attached to instances. Verify this with multiple instances and multiple volumes per instance
Detach volume
    Verify detach successfully detaches volumes from instances. Verify this with multiple instances and multiple volumes per instance
Create volume from snapshot
    Verify a volume can be created from a volume snapshot, that the new volume can be attached to an instance, and that the snapshotted data is present on the volume
Copy image to volume (create bootable volume)
    Verify an image from Glance can be copied to a Cinder volume, and that an instance can then be booted from the volume
Boot from bootable volume
    Verify that a Nova instance can be created using a bootable Cinder volume as the source, that the instance starts successfully, and that user login to the instance is possible
Extend volume
    Verify extend expands a volume to the requested size. Verify on multiple volumes, and verify volumes can be extended multiple times
Clone volume
    Verify Cinder volumes can be cloned and that cloned volumes contain the same data as the source
Migrate volume
    Verify Cinder volumes can be successfully migrated
Retype volume
    Verify a volume can be retyped to a different volume type
Get Volume Stats
    Verify get volume stats displays Cinder volume stats
Create THIN volume
    Verify that when THIN is specified in the volume extra specs, only thin volumes are created
Create THICK volume
    Verify that when THICK is specified in the volume extra specs, only thick volumes are created
Create COMPRESSED volume
    Verify that when COMPRESSED is specified in the volume extra specs, only compressed volumes are created
Create DEDUPLICATED volume
    Verify that when DEDUPLICATED is specified in the volume extra specs, only deduplicated volumes are created
Make volume read-only
    Verify a volume can be set to read-only and that the volume cannot be written to when attached to an instance

  


6.3 XtremIO Test cases

The table below outlines all XtremIO test cases that were completed.

Create Cinder volume with iSCSI Pool-based backend
    Create Cinder volumes of varying sizes and verify the volumes are created successfully and the volume size matches the requested size
Create Cinder volume with FC Pool-based backend
    Create Cinder volumes of varying sizes and verify the volumes are created successfully and the volume size matches the requested size
Create Cinder volume with iSCSI Array-based backend
    Create Cinder volumes of varying sizes and verify the volumes are created successfully and the volume size matches the requested size
Create Cinder volume with FC Array-based backend
    Create Cinder volumes of varying sizes and verify the volumes are created successfully and the volume size matches the requested size
List cinder volumes
    Verify Cinder list returns the correct list of created volumes and volume names
Delete cinder volume
    Verify Cinder volumes can be deleted, and verify the volume is removed from Cinder and from the XtremIO array
Snapshot cinder volume
    Verify Cinder volume snapshots can be taken. Verify also that data written to the volume is successfully saved in the volume snapshot
List volume snapshots
    Verify Cinder snapshots can be successfully listed and all existing snapshots are listed
Delete volume snapshots
    Verify Cinder volume snapshots can be successfully deleted, and verify the snapshot is removed from the snapshot list
Attach volume
    Verify Cinder volumes can be attached to instances. Verify this with multiple instances and multiple volumes per instance
Detach volume
    Verify detach successfully detaches volumes from instances. Verify this with multiple instances and multiple volumes per instance
Create volume from snapshot
    Verify a volume can be created from a volume snapshot, that the new volume can be attached to an instance, and that the snapshotted data is present on the volume
Copy image to volume (create bootable volume)
    Verify an image from Glance can be copied to a Cinder volume, and that an instance can then be booted from the volume
Boot from bootable volume
    Verify that a Nova instance can be created using a bootable Cinder volume as the source, that the instance starts successfully, and that user login to the instance is possible
Extend volume
    Verify extend expands a volume to the requested size. Verify on multiple volumes, and verify volumes can be extended multiple times
Clone volume
    Verify Cinder volumes can be cloned and that cloned volumes contain the same data as the source

6.4 ScaleIO Test cases

The table below outlines all ScaleIO test cases that were completed.

Create Cinder volume with iSCSI Pool-based backend
    Create Cinder volumes of varying sizes and verify the volumes are created successfully and the volume size matches the requested size
Create Cinder volume with FC Pool-based backend
    Create Cinder volumes of varying sizes and verify the volumes are created successfully and the volume size matches the requested size
Create Cinder volume with iSCSI Array-based backend
    Create Cinder volumes of varying sizes and verify the volumes are created successfully and the volume size matches the requested size
Create Cinder volume with FC Array-based backend
    Create Cinder volumes of varying sizes and verify the volumes are created successfully and the volume size matches the requested size
List cinder volumes
    Verify Cinder list returns the correct list of created volumes and volume names
Delete cinder volume
    Verify Cinder volumes can be deleted, and verify the volume is removed from Cinder and from the backing ScaleIO storage
Snapshot cinder volume
    Verify Cinder volume snapshots can be taken. Verify also that data written to the volume is successfully saved in the volume snapshot
List volume snapshots
    Verify Cinder snapshots can be successfully listed and all existing snapshots are listed
Delete volume snapshots
    Verify Cinder volume snapshots can be successfully deleted, and verify the snapshot is removed from the snapshot list
Attach volume
    Verify Cinder volumes can be attached to instances. Verify this with multiple instances and multiple volumes per instance
Detach volume
    Verify detach successfully detaches volumes from instances. Verify this with multiple instances and multiple volumes per instance
Create volume from snapshot
    Verify a volume can be created from a volume snapshot, that the new volume can be attached to an instance, and that the snapshotted data is present on the volume
Copy image to volume (create bootable volume)
    Verify an image from Glance can be copied to a Cinder volume, and that an instance can then be booted from the volume
Boot from bootable volume
    Verify that a Nova instance can be created using a bootable Cinder volume as the source, that the instance starts successfully, and that user login to the instance is possible
Extend volume
    Verify extend expands a volume to the requested size. Verify on multiple volumes, and verify volumes can be extended multiple times
Clone volume
    Verify Cinder volumes can be cloned and that cloned volumes contain the same data as the source

6.5 Deployment modes and configuration options

The following table contains the minimum combination of Mirantis OpenStack deployment options.


 

OS      Mode  HV   Network (VLAN)  Storage (Ceph)
CentOS  HA    KVM  x               -

 

6.6 Functional testing

6.7 Performance testing

Performance testing was not performed as part of this solution.

6.8 Negative testing

Negative testing was not performed as part of this solution.

6.9 Test results (if Fuel HealthCheck is used)