
Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform

Architecture Guide
Version 1

Dell EMC Service Provider Solutions


Contents

List of Figures
List of Tables
Trademarks
Notes, cautions, and warnings

Chapter 1: Overview
    Executive summary
    Key benefits
    Key differentiators

Chapter 2: Architecture overview
    Overview
    Software
        Red Hat Enterprise Linux Server 7.6
        Red Hat OpenStack Platform version 13
        Red Hat Ceph Storage 3.2
    Hardware
        Solution Admin Host
        OpenStack Controller nodes
        OpenStack Converged nodes
        Intel NVMe 3.2TB P4600
        Intel XXV710 Dual Port 25GbE
    Network layout
    Physical network

Chapter 3: Red Hat Ceph Storage for HCI
    Introduction to Red Hat Ceph Storage
        RADOS
        Pools
        Placement Groups
        CRUSH
        Objectstore
        Red Hat Ceph Storage Dashboard introduction
    Hyper-Converged Infrastructure (HCI) Ceph storage

Chapter 4: Deployment considerations
    Converged Nodes with integrated Ceph storage
    Resource isolation
    Performance tuning

Chapter 5: Deployment
    Before you begin
        Tested BIOS and firmware versions
        Disk layout
        Virtual Disk Creation
        Software requirements
    Deployment workflow
    Solution Admin Host
        Prepare Solution Admin Host
        SAH deployment overview
        Kickstart file customization
        Creating image
        Presenting the image to the RHEL OS installation process
        Deploy SAH node
    Red Hat OpenStack Platform Director
        Kickstart file customization
        Red Hat OpenStack Platform Director as VM deployment
    Undercloud Deployment
        Configure the Undercloud
        Install Undercloud
    Configure and deploy the Overcloud
        Prepare the nodes registration file
        Register and introspect the nodes
        Configure networking
        Configure cluster
        Configure the static IPs
        Configure the Virtual IPs
        Configure the NIC interfaces
        Prepare and deploy the Overcloud
    Red Hat Ceph Storage Dashboard deployment and configuration (optional)
        Red Hat Ceph Storage Dashboard deployment
        Ceph dashboard VM configuration

Chapter 6: Validation and testing
    Manual validation
        Test Glance image service
        Testing Nova compute provisioning service
        Test Cinder block storage service
        Test Swift object storage service
        Accessing the instance test
    Tempest test suite
        Configure Tempest
        Run Tempest tests
        Summary

Chapter 7: Performance measuring
    Overview
    Performance tools
    Test cases and test reports
        Network performance
        Compute performance
        Storage performance
    Conclusion

Appendix A: Bill of Materials
    Bill of Materials - SAH node
    Bill of Materials - 3 Controller nodes
    Bill of Materials - 3 Converged nodes
    Bill of Materials - 1 Dell EMC Networking S3048-ON switch
    Bill of Materials - 2 Dell EMC Networking S5248-ON switches

Appendix B: Environment Files
    Heat templates and environment yaml files
        network-environment.yaml
        static-vip-environment.yaml
        static-ip-environment.yaml
        nic_environment.yaml
        dell-environment.yaml
    Nodes registration json file
        instackenv.json
    Undercloud configuration file
        undercloud.conf

Appendix C: References
    To learn more


List of Figures

Figure 1: Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform key differentiators
Figure 2: Architecture for RHOSP version 13 over HCI
Figure 3: SAH node layout
Figure 4: HCI network layout
Figure 5: Physical network
Figure 6: HCI Ceph storage
Figure 7: Verify Virtual Disk
Figure 8: Workflow for RHOSP deployment over HCI
Figure 9: Check RHEL iso and image file
Figure 10: Install Redhat
Figure 11: Provide kickstart file
Figure 12: Network throughput vs frame size
Figure 13: Network latency vs frame size
Figure 14: Network jitter vs frame size
Figure 15: 4KB memory read IOPs
Figure 16: 4KB memory write IOPs
Figure 17: 4KB memory latency
Figure 18: 4K storage random IOPs
Figure 19: 4K storage sequential IOPs
Figure 20: 4K storage random latency
Figure 21: 4K storage sequential latency


List of Tables

Table 1: RHOSP deployment elements
Table 2: Solution Admin Host hardware configuration – Dell EMC PowerEdge R640
Table 3: Controller nodes hardware configuration – Dell EMC PowerEdge R640
Table 4: OpenStack Converged nodes hardware configuration – Dell EMC PowerEdge R740xd
Table 5: Logical Networks
Table 6: Placement Group configuration summary
Table 7: Validated firmware versions
Table 8: BIOS configuration
Table 9: Switches firmware version
Table 10: Disk configuration for Controller and SAH Node
Table 11: Disk configuration for Converged Node
Table 12: Kickstart file variables
Table 13: Kickstart file variables
Table 14: Undercloud variables
Table 15: List of environment files
Table 16: instackenv.json file parameters
Table 17: network-environment.yaml file parameters
Table 18: dell-environment.yaml file parameters
Table 19: static-ip-environment.yaml file parameters
Table 20: static-vip-environment.yaml file parameters
Table 21: nic_environment.yaml file parameters
Table 22: dashboard.cfg file parameters
Table 23: Bill of Materials - SAH node
Table 24: Bill of Materials - 3 Controller nodes
Table 25: Bill of Materials - 3 Converged nodes
Table 26: Bill of Materials - 1 Dell EMC Networking S3048-ON switch
Table 27: Bill of Materials - 2 Dell EMC Networking S5248-ON switches


Trademarks

Copyright © 2019 Dell EMC or its subsidiaries. All rights reserved.

The information in this publication is provided “as is.” Dell EMC makes no representations or warranties of any kind with respect to the information in this publication, and specifically disclaims implied warranties of merchantability or fitness for a particular purpose.

Use, copying, and distribution of any software described in this publication requires an applicable software license.

Red Hat®, Red Hat Enterprise Linux®, the Shadowman logo, JBoss, OpenShift, Fedora, the Infinity logo, and RHCE are trademarks or registered trademarks of Red Hat, Inc., registered in the U.S. and other countries. Linux® is the registered trademark of Linus Torvalds in the U.S. and other countries. Oracle® and Java® are registered trademarks of Oracle Corporation and/or its affiliates.

Intel® and Xeon® are registered trademarks of Intel Corporation.

Dell EMC believes the information in this document is accurate as of its publication date. The information is subject to change without notice.

Spirent Temeva®, Cloudstress®, MethodologyCenter® and TrafficCenter® are registered trademarks of Spirent Communications Inc. All rights reserved. Specifications subject to change without notice.

DISCLAIMER: The OpenStack® Word Mark and OpenStack Logo are either registered trademarks/service marks or trademarks/service marks of the OpenStack Foundation, in the United States and other countries, and are used with the OpenStack Foundation's permission. We are not affiliated with, endorsed or sponsored by the OpenStack Foundation or the OpenStack community.


Notes, cautions, and warnings

Note: A Note indicates important information that helps you make better use of your system.

CAUTION: A Caution indicates potential damage to hardware or loss of data if instructions are not followed.

Warning: A Warning indicates a potential for property damage, personal injury, or death.

This document is for informational purposes only and may contain typographical errors and technical inaccuracies. The content is provided as is, without express or implied warranties of any kind.


Chapter 1: Overview

Topics:

• Executive summary
• Key benefits
• Key differentiators

Dell EMC and Red Hat have worked closely together to build an enterprise-scale hyper-converged infrastructure architecture guide ideally suited for customers who are looking for performance and ease of management.

This architecture guide provides prescriptive guidance and recommendations for complete configuration, sizing, bill-of-materials, and deployment details.


Executive summary

This architecture guide describes a Hyper-Converged Infrastructure approach in which Red Hat OpenStack Platform 13 and Red Hat Ceph Storage share a single node hardware configuration, so that each converged node runs both OpenStack Nova compute and Ceph storage services.

Communication Service Providers inherently have distributed operating environments, whether multiple large-scale core datacenters, hundreds or thousands of central offices and Edge locations, or even customer premises equipment running the same infrastructure services in remote and branch offices as in the core datacenter. However, remote and branch offices can have unique challenges, such as less space, power, and cooling, and fewer (or no) technical staff on-site. Organizations in this situation require powerful integrated services on a single, easily scaled environment.

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform is designed to address these challenges by integrating compute and storage together on a single aggregated cluster, making it a well-suited solution for low-footprint remote or central office installations and Edge computing. Dell EMC Hyper-Converged Infrastructure for Red Hat OpenStack Platform is designed to enable organizations to deploy and manage distributed infrastructure centrally, enabling remote locations to benefit from high-performing systems without requiring extensive or highly specialized on-site technical staff.

This architecture guide defines hardware and software building block details including, but not limited to, Red Hat OpenStack Platform configuration, network switch configuration, and all software and hardware components.

This all-NVMe configuration is optimized for block storage performance.

Key benefits

The Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform offers several benefits that help Service Providers reduce CAPEX/OPEX (capital expenditures/operating expenditures) and simplify planning and procurement:

• Infrastructure consolidation. A smaller hardware footprint eases power, cooling, and deployment requirements, reducing CAPEX.

• Operational efficiency. A single supported rack is easier to train personnel to manage and configure, resulting in lower OPEX overhead.

• Fully engineered, validated, tested and documented by Dell EMC.
• Based on Dell EMC PowerEdge R-Series servers, specifically the Dell EMC PowerEdge R640 and Dell EMC PowerEdge R740xd server models recommended for this architecture guide, which include Intel Xeon processors, Intel NVMe disks, and Intel 25GbE network interface cards.


Key differentiators

Figure 1: Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform key differentiators

The Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform has some major enhancements over the regular Dell EMC Ready Architecture Guide:

• Implementation model. Compute and storage are deployed in a Hyper-Converged Infrastructure approach. As a result, both services and their associated OpenStack services are deployed and managed as a single entity.

• Server model. Dell EMC PowerEdge R640 and Dell EMC PowerEdge R740xd servers are used in this architecture guide. They represent the most cutting-edge range of the Dell EMC PowerEdge R-Series, with technology optimized for all kinds of workloads and sophisticated built-in protection at every step.

• Hardware resources. Optimized for Hyper-Converged Infrastructure, this Ready Architecture Guide combines scalability, robustness, and efficiency by leveraging the following Intel components:

• Intel Platinum 8160 Skylake processors. Used for compute and storage needs. This 64-bit, 24-core x86 multi-socket high-performance server microprocessor provides 48 cores per node, which maximizes the concurrent execution of multi-threaded applications.

• Intel 25GbE adapters. Used for all network communications. The flexible and scalable Intel XXV710 network adapter offers broad interoperability, critical performance optimizations, and increased agility. A couple of ports have also been reserved for future use by NFV-oriented optimizations such as SR-IOV or OVS-DPDK.


• Intel P4600 NVMe drives. A key component of the Red Hat Ceph Storage backend. This NAND SSD drive is optimized for the data caching needs of cloud storage and, more particularly, software-defined solutions. It helps modernize the data center by combining performance, capacity, manageability, and scalability.

• RAM optimized. Memory is a key concern when it comes to virtualization, and even more so with a Hyper-Converged Infrastructure. Each compute/storage server is configured with 384GB of RAM, delivering optimal performance and available resources for both compute and storage services.

The following sections of this architecture guide cover these key differentiators in detail.


Chapter 2: Architecture overview

Topics:

• Overview
• Software
• Hardware
• Network layout
• Physical network

Undercloud and Overcloud deployment elements are part of the Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform. This Ready Architecture Guide uses Red Hat OpenStack Platform (RHOSP) 13. The Red Hat OpenStack Platform implementation of Hyper-Converged Infrastructure (HCI) uses Red Hat Ceph Storage version 3.2 as the storage provider.

This overview of the deployment process for Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform on Dell EMC PowerEdge R640 and Dell EMC PowerEdge R740xd server hardware and network outlines the following:

• Software requirements
• Hardware requirements
• Dell EMC networking switch requirements


Overview

This chapter describes the complete architecture for Red Hat OpenStack Platform version 13 over Hyper-Converged Infrastructure. Figure 2: Architecture for RHOSP version 13 over HCI shows the components used in the Undercloud and Overcloud in detail, together with the SAH node and its logical networks. This section also describes the hardware and software components.

Figure 2: Architecture for RHOSP version 13 over HCI illustrates the Ready Architecture for an RHOSP version 13 deployment over HCI.

Figure 2: Architecture for RHOSP version 13 over HCI

Note: For optimal performance and high availability, we recommend a minimal configuration of seven nodes, as follows:

1. Solution Admin Host (SAH)
2. Controller node one
3. Controller node two
4. Controller node three
5. Converged node one
6. Converged node two
7. Converged node three

Solution Admin Host (SAH)

The Solution Admin Host (SAH) is the central administration server, with internal bridged networks for the virtual machines (VMs) responsible for performing management operations across the domain. Figure 3: SAH node layout shows a brief layout of the SAH node. It is a physical server that hosts the Undercloud VMs needed to deploy and operate the OpenStack Overcloud. The SAH communicates with the node managers to perform management operations across the domain. It is physically connected to the networks listed below; for more details, refer to Table 5: Logical Networks.


Figure 3: SAH node layout

Undercloud. The Undercloud is the OSP director (TripleO) node. It is a single-VM OpenStack installation that includes components for provisioning and managing the Overcloud.

HCI Overcloud. The Overcloud is the end-user RHOSP environment created using the Undercloud. The HCI Overcloud has only two types of roles:

• Controller. A node that provides administration, networking, and high availability for the OpenStack Overcloud environment.

• ComputeHCI. A true hyper-converged role designed to run compute and storage services, such as OpenStack Nova compute and Ceph storage, in tandem. This role has a direct application for Edge computing for Telcos. We refer to this role as converged throughout this architecture guide; a sketch of how both roles appear at deployment time follows.
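Both roles are selected when the Overcloud is deployed by passing a roles definition and a set of environment files to the deploy command. The following is a minimal, hypothetical sketch only; the environment files actually used by this architecture (network-environment.yaml, nic_environment.yaml, dell-environment.yaml, and others) are listed in Appendix B, and node counts such as ControllerCount and ComputeHCICount are set inside those files.

```bash
#!/bin/bash
# Hypothetical sketch of an HCI Overcloud deployment on RHOSP 13, run as the
# stack user on the director VM. File names and paths are illustrative.
source ~/stackrc

# ceph-ansible.yaml enables the director-driven Ceph deployment; the roles
# file contains the Controller and ComputeHCI role definitions described above.
openstack overcloud deploy --templates \
  -r ~/templates/roles_data.yaml \
  -e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \
  -e ~/templates/network-environment.yaml \
  -e ~/templates/nic_environment.yaml \
  -e ~/templates/static-ip-environment.yaml \
  -e ~/templates/static-vip-environment.yaml \
  -e ~/templates/dell-environment.yaml \
  --ntp-server 0.centos.pool.ntp.org
```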

Table 1: RHOSP deployment elements summarizes the main elements of a basic RHOSP 13 deployment.

Table 1: RHOSP deployment elements

• RHEL 7.6 (Undercloud). Dell EMC PowerEdge R-Series servers need an operating system that can handle high-density networks. The Red Hat Enterprise Linux operating system is the best-optimized operating system for these servers.
• KVM (Undercloud). An open-source virtualization technology that is part of Linux-flavored operating systems and turns the host into a hypervisor running a myriad of individual virtual machines. This virtualization layer runs on the SAH and hosts the director VM as well as the dashboard VM. These two VMs share the Solution Admin Host resources to run their applications.
• Swift (Undercloud). Swift provides object storage for multiple OpenStack Platform components, including:
  • Image storage for the Glance image service
  • Introspection data for the Ironic bare-metal service
  • Deployment plans for the Mistral workflow service
• Keystone (Undercloud, Overcloud). In the Undercloud, Keystone is the identity service that provides authentication and grants access to the director's components. In the Overcloud, it also maps users and groups to projects and resources based on a catalog in which every OpenStack service endpoint is referenced.
• Ironic (Undercloud). Ironic provides bare metal as a service (BaaS) to provision and manage physical machines for end users. Ironic uses iPXE to provision physical machines. This solution uses the Ironic iDRAC driver to manage all Overcloud servers.
• Glance (Undercloud, Overcloud). In the Undercloud, Glance stores the images used by bare-metal machines during introspection and Overcloud deployment. In the Overcloud, it is also used to store the VM images that other OpenStack services use as templates to deploy VM instances.
• Nova (Undercloud, Overcloud). In the Undercloud, Nova manages the bare-metal instances that make up the infrastructure services used by the Overcloud administrator. In the Overcloud, it is used to deploy and manage large numbers of virtual machines and other instances that handle computing tasks. It enables enterprises and service providers to offer on-demand compute resources by provisioning and managing large networks of virtual machines. Compute resources are accessible via APIs for developers and users.
• Neutron (Undercloud, Overcloud). In the Undercloud, Neutron controls networking for managing bare-metal machines. In the Overcloud, Neutron offers networking capabilities in a complex cloud environment and helps ensure that the components of an OpenStack environment can communicate with each other quickly and efficiently.
• Heat (Undercloud, Overcloud). In the Undercloud, Heat provides orchestration and configuration of nodes based on customized templates. In the Overcloud, it allows developers to store the requirements of a cloud application as templates that contain definitions of the resources necessary for a particular application. The flexible template language can specify compute, storage, and networking configurations, as well as detailed post-deployment activity, automating the full provisioning of infrastructure for services and applications.
• Ceph (Overcloud). Ceph is used as the backend storage by Cinder for persistent block storage and by the Swift API for object-level access. It also stores images for Glance and ephemeral storage for Nova.
• Ansible (Undercloud). Ansible is used by the OSP director to install and configure the Undercloud. When deploying the Overcloud, it is also used by ceph-ansible to deploy and configure the Ceph cluster.

Software

Red Hat Enterprise Linux Server 7.6

Dell EMC PowerEdge R-Series servers need an operating system that can handle high-density networks and offer a scalable, fault-tolerant platform for the development of cloud-enabled, high-throughput workloads. The Red Hat Enterprise Linux operating system is the best-optimized operating system for these servers. Dell EMC recommends using Red Hat Enterprise Linux Server 7.6 for deployment, which includes the following:

• Security and compliance
• Performance and efficiency
• Platform manageability
• Stability and reliability
• Multi-platform support
• Application experience

Note: Verify the RHEL version in use against the standard compatibility requirements; a quick check is sketched below.
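As a minimal, illustrative sketch (assuming the node has already been registered with subscription-manager), the installed release and the release lock used for updates can be confirmed as follows:

```bash
# Confirm the installed RHEL release on a node.
cat /etc/redhat-release

# Show the release the node is locked to for updates (expected to report 7.6
# for this architecture); requires an active subscription-manager registration.
subscription-manager release --show
```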

Red Hat OpenStack Platform version 13

Red Hat OpenStack Platform provides the foundation to build a private or public Infrastructure-as-a-Service (IaaS) cloud on Red Hat Enterprise Linux. Red Hat OpenStack Platform 13 (RHOSP 13) is based on the OpenStack Queens release. It includes additional features packaged to turn available physical hardware into a private, public, or hybrid cloud platform, including:

• Overcloud node provisioning
• Fully containerized services
• Bare-metal (Ironic) service
• Ceph storage support by default
• Integration of real-time KVM (RT-KVM, Kernel-based Virtual Machine) with the Compute service
• High availability

Red Hat Ceph Storage 3.2

Ceph is open-source, distributed object, block, and file storage that runs on readily available hardware. It is designed to be scalable and to have no single point of failure.

Reasons to choose Ceph storage over traditional alternatives are:

• Unified storage - It supports block, object, and file storage in one system.
• Flexible configuration - Adjust as application load and deployment demands change in the cloud.
• Open foundation - Built on a shared, open development process and ecosystem rather than a proprietary one.


Red Hat Ceph Storage is an essential component of Hyper-Converged Infrastructure (HCI). For more detail, refer to Red Hat Ceph Storage for HCI.

Hardware

Solution Admin Host

Table 2: Solution Admin Host hardware configuration – Dell EMC PowerEdge R640

• Platform: Dell EMC PowerEdge R640
• CPU: 2 x Intel® Xeon® Gold 6126 2.6GHz, 12C/24T, 10.4GT/s, 19.25M cache, Turbo, HT (125W), DDR4-2666
• RAM (minimum): 192GB (12 x 16GB RDIMM, 2666MT/s, dual rank)
• LOM: 2 x 1GbE, 2 x Intel X710 10GbE SFP+
• Add-in network: 2 x Intel XXV710 DP 25GbE DA/SFP+
• Disk: 6TB SAS (10 x 600GB 15k RPM, SAS 12Gbps)
• Storage controller: PERC H740P RAID Controller
• RAID: RAID 10

OpenStack Controller nodes

Table 3: Controller nodes hardware configuration – Dell EMC PowerEdge R640

• Platform: Dell EMC PowerEdge R640
• CPU: 2 x Intel® Xeon® Gold 6126 2.6GHz, 12C/24T, 10.4GT/s, 19.25M cache, Turbo, HT (125W), DDR4-2666
• RAM: 192GB (12 x 16GB RDIMM, 2666MT/s, dual rank)
• LOM: 2 x 1GbE, 2 x Intel X710 10GbE SFP+
• Add-in network: 2 x Intel XXV710 DP 25GbE DA/SFP+
• Disk: 6TB SAS (10 x 600GB 15k RPM, SAS 12Gbps)
• Storage controller: PERC H740P RAID Controller
• RAID layout: RAID 10

OpenStack Converged nodes

Table 4: OpenStack Converged nodes hardware configuration – Dell EMC PowerEdge R740xd

• Platform: Dell EMC PowerEdge R740xd
• CPU: 2 x Intel® Xeon® Platinum 8160 2.1GHz, 24C/48T, 10.4GT/s, 33M cache, Turbo, HT (150W), DDR4-2666
• RAM: 384GB (12 x 32GB RDIMM, 2666MT/s, dual rank)
• LOM: 2 x 1GbE, 2 x Intel X710 10GbE SFP+
• Add-in network: 4 x Intel XXV710 DP 25GbE DA/SFP+
• OS disk: 240GB (2 x 240GB SSD SATA, 2.5in, HP, S4600)
• Disk: 25.6TB NVMe (8 x 3.2TB NVMe, Mixed Use, P4610)
• Storage controller: PERC H740P RAID Controller
• RAID layout: RAID 1

Intel NVMe 3.2TB P4600

The Intel P4600 Mainstream NVMe SSDs are advanced data center SSDs optimized for mixed read-write performance, endurance, and strong data protection. They are designed for greater performance and endurance in a cost-effective design, and to support a broader set of workloads. NVMe drives are also optimized for heavy multi-threaded workloads by using internal parallelism and many other improvements, such as enlarged I/O queues. The Intel P4600 NVMe drives have the following key characteristics:

• Suitable for mixed read-write workloads.
• Variable sector size and end-to-end data-path protection.
• Enhanced power-loss data protection.

Note: Our performance testing was conducted with the P4600 because the P4610 was not orderable at the time the servers were acquired. Please use the P4610 instead of the P4600.

Intel XXV710 Dual Port 25GbE

The Intel XXV710 Dual Port 25GbE adapter delivers excellent performance for 25GbE connectivity and is backwards compatible with 1/10GbE, making migration to higher speeds easier. It also provides a foundation for server connectivity, offering broad interoperability, critical performance optimization, and increased agility for Telecommunications, Cloud, and Enterprise IT network solutions.

Interoperability. Multiple speeds and media types for broad compatibility, backed by extensive testing and validation.

Agility. Both kernel and Data Plane Development Kit (DPDK) drivers for scalable packet processing.
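As a hedged, illustrative check (interface names such as p1p1 follow this guide's naming convention and may differ on other systems), the driver and link speed in use on an XXV710 port can be inspected as follows:

```bash
# Show the kernel driver and firmware bound to a 25GbE port
# (XXV710 ports using the in-kernel driver report i40e).
ethtool -i p1p1

# Confirm the negotiated link speed (expected: 25000Mb/s).
ethtool p1p1 | grep -i speed
```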


Network layout

Figure 4: HCI network layout

Figure 4: HCI network layout illustrates the network layout for the Hyper-Converged Infrastructure.

The Bond0 interface is associated with the p1p1 and p2p1 physical interfaces and is paired with the br-tenant and br-int virtual bridges. Bond0 is used to communicate with the Internal API, Tenant, and Storage networks.

The Bond1 interface is associated with the physical interfaces p1p2 and p2p2 and with br-ex, and is paired with the br-int virtual bridge, which communicates with instances. The Bond1 interface is used for the External network and the Storage Cluster network. Open vSwitch bridges are mapped between physical and virtual interfaces.

The Bond2 interface is reserved for NFV workloads with dedicated NICs for either SR-IOV or OVS-DPDK. This interface handles NFV workloads on the hyper-converged nodes.

Table 5: Logical Networks describes the network layout:

Note: All the VLANs described here are used explicitly for the Overcloud network and are bound to differ per end-user configuration. The environment file network-environment.yaml configures the VLANs that are passed to the Undercloud to deploy and manage the Overcloud.

Table 5: Logical Networks

• Internal API network (VLAN 140). The Internal API network is used for communication between the OpenStack services.
• iDRAC network (VLAN 110). Used to manage bare-metal nodes remotely using dracclient.
• Provisioning/IPMI network (VLAN 120). The provisioning network is used to provision bare-metal servers so that they can communicate with the Networking service for DHCP, PXE boot, and other requirements.
• Public/External network (untagged). The OSP director VM uses the Public/External network to download software updates for the Overcloud. The administrator of the OSP director VM uses this network to access the Undercloud and manage the Overcloud. This network is also responsible for floating IP address management across the tenant instances, which external users use to access them. The converged nodes do not need to be directly connected to the external network; their instances communicate via the Tenant network with the controllers, which then route external traffic on their behalf to the external network.
• Storage Clustering network (VLAN 180). The backend storage network to which Ceph routes its heartbeat, object replication, and recovery traffic. The Ceph OSDs use this network to balance data according to the replication policy. This private network only needs to be accessed by the OSDs.
• Storage network (VLAN 170). The frontend storage network where Ceph clients (through the Glance API, Cinder API, or Ceph CLI) access the Ceph cluster. Ceph monitors operate on this network.
• Tenant network (VLAN 130). This is the network for allocating IP addresses to tenant instances. OpenStack tenants create private networks provided by VLANs configured on the underlying physical switch. This network facilitates communication across tenant instances.


Physical network

Figure 5: Physical network

Figure 5: Physical network illustrates the physical network wiring for an RHOSP deployment in a converged infrastructure.

Stacks of S5248-ON switches uplink to an external network, with an S3048-ON management switch. The deployment also consists of the SAH node and a cluster of three Controller nodes and three Converged nodes. For seamless communication, the interfaces are wired as follows:

1. iDRAC interfaces connect to the S3048-ON switch for all nodes. This is used to access the iDRAC session for each node.
2. Interface em3 connects to the S3048-ON switch via VLAN 120 for all nodes, to provision the bare-metal servers.
3. A bridge, Bond0, is set up between the first port of the two 25GbE NICs on all nodes. VLANs 130, 140, and 170 also use this interface. This bridge is connected as a Link Aggregation Control Protocol (LACP) connection.
4. A bridge, Bond1, is set up between the second port of the 25GbE NICs on all nodes. VLAN 180 uses this interface on the Converged nodes, whereas the Controllers access the Public network through this bridge. This bridge is also connected as an LACP connection.
5. On the Converged nodes, a bridge, Bond2, is set up between two of the four remaining 25GbE ports (two additional dual-port NICs). It is reserved for future NFV operations such as SR-IOV or OVS-DPDK and is not currently used; the remaining two ports stay free. (A post-deployment check of these bonds is sketched below.)
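Once the Overcloud is deployed, the bonds described above can be verified on a node. The following is a hypothetical sketch only; the bond and bridge names should be checked against the deployed nic_environment.yaml:

```bash
# List the Open vSwitch bridges and their bonded member ports.
sudo ovs-vsctl show

# Show LACP negotiation and member link state for the tenant-side bond.
sudo ovs-appctl bond/show bond0

# Show the negotiated LACP partner details for the external/storage-cluster bond.
sudo ovs-appctl lacp/show bond1
```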


Chapter 3: Red Hat Ceph Storage for HCI

Topics:

• Introduction to Red Hat Ceph Storage
• Hyper-Converged Infrastructure (HCI) Ceph storage

Ceph is a widely used open-source storage platform. It provides high performance, reliability, and scalability. The Ceph distributed storage system provides an interface for object, block, and file-level storage.

This chapter describes Red Hat Ceph Storage and its integration with the Controller and Converged nodes in a Hyper-Converged Infrastructure.


Introduction to Red Hat Ceph Storage

Red Hat Ceph Storage is an open-source, petabyte-scale distributed storage system primarily designed for object and block data, which can be scaled with commodity hardware. It can provide unstructured object storage for cloud workloads, which can be accessed either through a native API or by using the Amazon S3 or OpenStack Object Storage (Swift) API protocols. Block-based storage is managed by either a native block-based protocol or by using the iSCSI protocol. The latest version of Red Hat Ceph Storage can also provide client access to a network file system (NFS) by using its native Ceph File System (CephFS).

Red Hat Ceph Storage offers the following features:

• Multi-site and disaster recovery options.
• Flexible storage policies.
• Data durability and resiliency via erasure coding or replication.
• Deployment in a containerized environment.

Red Hat Ceph Storage can be integrated with OpenStack Nova compute to store cloud instance disks. Glance manages the images used to deploy cloud instances. Cinder provides persistent storage for cloud instances.

Red Hat Ceph Storage is based on a modular and distributed architecture that contains the following:

• An object storage backend named Reliable Autonomic Distributed Object Store (RADOS).
• A variety of access methods to interact with RADOS – RADOS Block Device (RBD), RADOS Gateway (RGW), and CephFS.

RADOS

The RADOS system stores data as objects in logical storage pools, and uses the Controlled Replication Under Scalable Hashing (CRUSH) data placement algorithm to automatically determine where each object should be stored.

RADOS, the Ceph storage backend, is based on the following daemons, which can be scaled easily to meet the requirements of any deployed architecture.

• Monitors (MONs). Daemons responsible for maintaining a master copy of the cluster map, which contains information about the state of the Ceph cluster and its configuration. When the number of active monitors falls below the required quorum, the entire cluster becomes inaccessible for any client operation that requires data integrity.

• Object Storage Devices (OSDs). The building blocks of a Ceph storage cluster. They connect a storage device to the Ceph storage cluster. An individual storage server may run multiple OSD daemons and can provide multiple OSDs to the cluster. Each OSD daemon provides a storage device which is normally formatted with an Extents File System (XFS). A newer feature called BlueStore, introduced in Red Hat Ceph Storage, permits raw access to local storage devices. The replication of objects to multiple OSDs is handled automatically. One OSD is called the primary OSD, and a Ceph client reads or writes data from the primary OSD. Secondary OSDs play an important role in ensuring the resilience of data in the event of a failure in the cluster. The primary OSD functions are:

  • Serves I/O requests.
  • Replicates and protects the data.
  • Rebalances the data to ensure performance.
  • Recovers the data in case of a failure.

  Secondary OSD functions are always under the control of a primary OSD, and all secondary OSDs are capable of becoming the primary OSD.

• Ceph Managers (MGRs). Gather a collection of statistics of the Ceph storage cluster. There is no impact on client I/O operations if the Ceph Manager daemon fails. However, to avoid this scenario, a minimum of two Ceph Managers is recommended.
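Once the cluster is deployed, the state of these daemons can be checked with the standard Ceph CLI. In this containerized architecture the commands are typically executed inside a Ceph monitor container on a Controller node; the container name below is illustrative.

$ docker exec ceph-mon-$(hostname -s) ceph -s        # overall cluster health, MON/MGR/OSD counts
$ docker exec ceph-mon-$(hostname -s) ceph osd tree  # OSD layout and up/down status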


Pools

Ceph pools are logical partitions of the Ceph storage cluster, and are used to store objects under a common name tag. Each pool is assigned a specific number of hash buckets to group objects together for storage. These hash buckets are called Placement Groups (PGs).

The number of placement groups assigned to each pool can be configured independently to fit any type of data. This number is configured at the time of pool creation; it can be increased dynamically but can never be decreased.

The CRUSH algorithm is used to select the OSDs that will serve the data for a pool. Permissions such as read, write or execute can be set at the pool level.

When you first deploy a cluster without creating a pool, Ceph uses the default pools for storing data. A pool provides you with:

• Resilience
• Placement Groups
• CRUSH rules
• Snapshots
• Set ownership

Placement Groups

A Placement Group (PG) aggregates a series of objects into a hash bucket, or group, and is mapped to a set of OSDs. An object belongs to only one PG, and all objects belonging to the same PG return the same hash result.

The placement strategy is known as the CRUSH placement rule. When a client writes an object to a pool, it uses the pool's CRUSH placement rule to determine the object's placement group, and uses the cluster map to calculate which OSD(s) the object is written to.

When OSDs are added or removed from a cluster, placement groups are automatically rebalanced between operational OSDs.

You can set the number of placement groups for the pool. The number of placement groups per OSD in a Hyper-Converged Infrastructure (HCI) environment is set to 200 for optimal usage of the OSDs/NVMe SSDs.

CRUSH

When you store data in a pool, a CRUSH rule set mapped to the pool enables CRUSH to identify a rule for the placement of the object and its replicas (or chunks for erasure coded pools) in your cluster. CRUSH rules can be customized.

Objectstore

Ceph is an ecosystem of technologies offering three different storage models - object storage, block storage and filesystem storage. Ceph's approach is to treat object storage as its foundation, and provide block and filesystem capabilities as layers built upon that foundation. Object stores keep data in a flat, non-hierarchical namespace where each piece of data is identified by an arbitrary unique identifier. Any other details about the piece of data are stored along with the data itself, as metadata.

ObjectStore is an abstract interface for storing the data and has two implementations - FileStore (legacy) and BlueStore. BlueStore stores objects directly on the block devices without any file system interface, which improves the performance of the cluster, while FileStore is the legacy approach to storing objects in Ceph.

Red Hat Ceph Storage Dashboard introduction

The Red Hat Ceph Storage Dashboard is a built-in, web-based Ceph management and monitoring application. It is used to administer various aspects and objects of the cluster. This web-based dashboard application runs on a virtual machine known as the Red Hat Ceph Storage Dashboard, which is deployed on the Solution Admin Host. As the Hyper-Converged Infrastructure uses Red Hat Ceph Storage as its storage mechanism, it is very important to manage and monitor the Ceph cluster. After the deployment of the Overcloud, the user can deploy the Red Hat Ceph Storage Dashboard to perform management and monitoring of Ceph storage through a web-based application.

Deployment and configuration of the dashboard uses JetPack scripts.

Note: Please refer to https://github.com/dsp-jetpack/JetPack for more information on JetPack 13.1. Please refer to Red Hat Ceph Storage Dashboard deployment and configuration (optional) on page 58 for deployment and configuration of the Red Hat Ceph Storage Dashboard.

Hyper-Converged Infrastructure (HCI) Ceph storage

Currently, three types of nodes are required for an OpenStack deployment:

• Controller
• Compute
• Storage

A separate network node may also be required. The hardware footprint of this traditional layout is high. Hyper-Converged Infrastructure is an ideal solution to address this problem: HCI integrates compute and storage services on the same hardware, reducing cost and effort significantly.

Ceph is a preferred and proven storage for HCI because it supports block and object storage. In an HCI environment, Ceph runs all of its services in one cluster. This solution is ideal for 5G and Edge computing; these applications require small setups that perform high-volume operations.

Figure 6: HCI Ceph storage


The Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform uses Red Hat Ceph Storage as the storage provider. The architecture includes co-located compute and Ceph storage services. Ceph cluster configuration features:

• The Ceph monitor daemon running on Controller nodes maintains a master copy of the cluster map.
• The OSD daemon running on Converged nodes stores objects on Ceph, alongside the KVM module required for instance spawning.

HCI uses Red Hat Ceph Storage features to deliver a highly reliable, scalable, easily manageable and performance-optimized Ready Architecture for HCI. Features include:

• Usage of NVMe SSDs reduces latency with higher IOPS and potentially lower power consumption.
• An optimal count of two Ceph OSDs per NVMe SSD, based on performance statistics.
• BlueStore, a new and high-performance backend ObjectStore for OSDs.


Chapter 4: Deployment considerations

Topics:

• Converged Nodes with integrated Ceph storage
• Resource isolation
• Performance tuning

This chapter highlights the key elements covered during the design phase, as well as the reasoning behind those choices.


Converged Nodes with integrated Ceph storage

1. Replication Factor - NVMe SSDs have higher reliability than rotational disks. They also have a higher MTBF (Mean Time Between Failures) and a lower bit error rate. 2x replication is recommended in production when deploying OSDs on NVMe, versus the 3x replication used with legacy rotational disks. This architecture guide uses a 2x replication factor.

2. OSD Sizing - To reduce tail latency in 4K write transfers, it is recommended to run more than one OSD per drive for NVMe SSDs. It has been proven that it is not possible to take full advantage of NVMe SSD bandwidth with a single OSD. Four partitions per SSD drive is the optimum that gives the best possible performance. The downside is that each OSD adds memory overhead to the system, which results in less RAM available for Nova. This architecture guide uses two OSDs per NVMe, which is a good compromise and provides enough performance without reducing RAM availability for the compute engine.

3. Ceph Journal consideration - In an all-flash Ceph cluster using Intel NVMe SSDs, separating the journal from the OSD datastore usually does not produce additional benefits. In these all-flash configurations, a Ceph journal is frequently co-located on the same NVMe drive in a different partition from the OSD data. This maintains a simple configuration and also limits the scope of any drive failure in an operational cluster. This architecture guide uses a co-located journal on each NVMe disk.

4. CPU sizing - Ceph OSD processes can consume a large amount of CPU while doing block operations, and the recommended ratio of CPUs per NVMe is 10:1, assuming a 2GHz CPU. As the Ceph OSDs are hosted on the same node that the compute engine uses, it is even more important to have a good number of CPUs available to allow intensive workloads and to protect the VMs which are run by Nova. With the Intel Platinum 8160 2.1GHz, 48 cores are available to both OpenStack Nova Compute and Ceph storage, ending up with 96 vCores with Hyper-Threading enabled. This architecture guide uses a ratio of 12:1 CPUs per NVMe.

5. Memory sizing - OSDs do not require much RAM for regular operations (500MB of RAM per daemon instance); however, during recovery they need significantly more RAM (~3GB per 1TB of storage per daemon). Generally, more RAM is better. With that in mind, RAM usage for the OSDs should not exceed 24GB in case of a severe failure in a node, which still leaves plenty of RAM available for Nova usage. This architecture guide uses 384GB of RAM on each Converged node.

6. Networking - A 25GbE network is required to leverage the maximum performance benefits of an NVMe-based Hyper-Converged Infrastructure platform. This architecture guide uses the Intel XXV710 25GbE adapter on all nodes.

Resource isolation

Resource isolation for Ceph OSDs

Limiting the amount of CPU and memory for each Ceph OSD is important so that resources remain free for the OpenStack Nova Compute process. When reserving memory and CPU resources for Ceph on hyper-converged nodes, each containerized OSD should be limited in GB of RAM and in vCPUs.

ceph_osd_docker_memory_limit constrains the memory available to a container. If the host supports swap memory, then it can be larger than physical RAM. If a limit of 0 is specified, the container's memory is not limited. To allow maximum performance while preserving memory for other usage, a maximum of 10GB can be allocated for each OSD container.

ceph_osd_docker_cpu_limit limits the CPU usage of a container. By default, containers run with the full CPU resource. This flag tells the kernel to restrict the container's CPU usage to the quota you specify. A maximum of four vCPUs per OSD can be allocated (eight vCPUs per NVMe physical disk).

This architecture guide uses the following parameters to optimize the Ceph OSD containers:

CephAnsibleExtraConfig:
  ceph_osd_docker_memory_limit: 10g
  ceph_osd_docker_cpu_limit: 4

The deployment with these parameters is done by modifying the two above values in the dell-environment.yaml heat template.


Performance tuning

Nova reserved memory

Nova reserved memory is the amount of memory reserved for the host to perform its own operations. This value should normally be tuned to maximize the number of guests while protecting the host. For a Hyper-Converged Infrastructure, it should maximize guests while protecting both the host and Ceph.

We can figure out the reserved memory with the following formula, which is the approach recommended by Red Hat:

left_over_mem = mem - (GB_per_OSD * osds)
number_of_guests = left_over_mem / (average_guest_size + GB_overhead_per_guest)
nova_reserved_memory_MB = 1024 * ((number_of_guests * GB_overhead_per_guest) + (GB_per_OSD * osds))

This architecture guide follows this approach with values and calculation as described below:

336 = 384 - (3 * 16)

Given our nodes with 384GB of RAM and 16 OSDs per node, and assuming that each OSD consumes 3GB of RAM, that is 48GB of RAM for Ceph, leaving 336GB of RAM for Nova Compute.

134 = 336 / (2 + 0.5)

If each guest uses an average of 2GB of RAM, the remaining memory alone could host 168 guest machines. However, there is additional overhead for each guest machine running on the hypervisor. Assuming this overhead is 500MB, the maximum number of 2GB guest machines that could be run is 134.

115000 = 1000 * ((134 * 0.5 ) + (3 * 16))

Thus, reserved_host_memory_mb would equal 115000. The parameter value must be in megabytes (MB).

This value is defined in the dell-environment.yaml file as described below:

nova::compute::reserved_host_memory: 115000

CPU allocation ratio

This option specifies the virtual CPU to physical CPU allocation ratio. Nova's cpu_allocation_ratio parameter is used by the Nova scheduler when choosing which compute nodes will run guest machines, before it considers a node unable to handle any more guest machines. This matters because the Nova scheduler does not take into account the CPU needs of the Ceph OSD services running on the same node. Modifying the cpu_allocation_ratio parameter allows Ceph to have the CPU resources it needs to operate effectively, without those CPU resources being given to Nova Compute.

We can determine the cpu_allocation_ratio with the following formula, which is the approach recommended by Red Hat:

num_of_non_ceph_cores = total_num_of_cores - (num_of_cores_per_osd * num_osds)
num_of_guest_machines_vcpus = num_of_non_ceph_cores / avg_guest_machine_cpu_utilization
cpu_allocation_ratio = num_of_guest_machines_vcpus / total_num_of_cores

Given our nodes with 48 cores and 16 OSDs per node (Hyper-Threading enabled), 32 cores are left for Nova.

32 = 48 - (1 * 16)


Assuming that each guest machine utilizes 10% of its core, we end up with 320 available vCPUs.

320 = 32 / 0.1

Finally, we can consider using a cpu_allocation_ratio of 6.667

6.667 = 320 / 48

This value is defined in the dell-environment.yaml file as described below:

nova::cpu_allocation_ratio: 6.7

Note: Red Hat provides a script, nova_mem_cpu_calc.py, that performs all of these calculations.
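The awk sketch below is only an illustration of the arithmetic above using this guide's values; it is not the Red Hat script.

$ awk 'BEGIN {
    mem_gb = 384; osds = 16; gb_per_osd = 3          # Converged node RAM and OSD assumptions
    avg_guest_gb = 2; overhead_gb = 0.5              # average guest size and per-guest overhead
    cores = 48; cores_per_osd = 1; guest_util = 0.1  # physical cores and CPU assumptions

    left_over = mem_gb - gb_per_osd * osds                            # 336 GB left after Ceph
    guests = int(left_over / (avg_guest_gb + overhead_gb))            # 134 guests
    reserved_mb = 1000 * (guests * overhead_gb + gb_per_osd * osds)   # 115000 MB
    ratio = ((cores - cores_per_osd * osds) / guest_util) / cores     # 6.667

    printf "reserved_host_memory: %d MB\ncpu_allocation_ratio: %.3f\n", reserved_mb, ratio
  }'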

Ceph placement group

A Placement Group (PG) aggregates a series of objects into a hash bucket, or group, and is mapped to a set of OSDs. An object belongs to only one PG, and all objects belonging to the same PG return the same hash result.

Note: For detailed information, please refer to Placement Groups on page 26.

To design the sizing of each placement group, we can use the Ceph PG calculator at https://ceph.com/pgcalc/ to identify the optimal value. We use this tool to calculate the number of PGs per pool.

Table 6: Placement Group configuration summary

Pool name    Replication factor    OSD #    %Data    Target PGs/OSD    PG #
volumes      2                     48       63.00    200               4096
vms          2                     48       25.00    200               1024
images       2                     48       12.00    200               512

Where:

• Pool name: Name of the pool.
• Replication factor: Number of replicas the pool will have. Two is the recommended value when using NVMe disks.
• OSD #: Number of OSDs on which this pool will have PGs. 48 is the entire cluster OSD count.
• %Data: The approximate percentage of data which will be contained in this pool for that specific OSD set.
• Target PGs/OSD: The target number of PGs per OSD; 200 is used here to account for expected future growth of the cluster.
• PG #: Number of PGs to create.

Note: Keep in mind that the PG count can be increased, but NEVER decreased without destroying and recreating the pool.
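The PG # values above can be approximated with a short sketch of the sizing rule commonly documented for the PG calculator (raw = Target PGs/OSD x OSD # x %Data / replicas, rounded to a power of two, stepping down only when the lower power of two is within 25% of the raw value). This is an illustration, not the calculator itself.

$ awk 'BEGIN {
    osds = 48; target = 200; replicas = 2
    split("volumes vms images", pool); split("0.63 0.25 0.12", pct)
    for (i = 1; i <= 3; i++) {
      raw = target * osds * pct[i] / replicas
      pg = 1; while (pg < raw) pg *= 2       # next power of two at or above raw
      if (pg / 2 >= raw * 0.75) pg /= 2      # step down if the lower power is within 25%
      printf "%-8s pg_num = %d\n", pool[i], pg
    }
  }'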

The values calculated above are defined in the dell-environment.yaml file as described below.

CephPools: [{"name": "volumes", "pg_num": 4096, "pgp_num": 4096},
            {"name": "vms", "pg_num": 1024, "pgp_num": 1024},
            {"name": "images", "pg_num": 512, "pgp_num": 512}]

BlueStore

BlueStore is a new storage backend for Ceph. It gives better performance (roughly 2x for writes), full data checksumming, and built-in compression. BlueStore stores objects directly on the block devices without any file system interface, which improves the performance of the cluster. It provides features like efficient block device usage, direct management of storage devices, metadata management with RocksDB, multi-device support, no large double writes, efficient copy-on-write and inline compression.

The BlueStore backend supports three storage devices - a primary storage device, a write-ahead log (WAL) device, and a database device. It can manage either one, two, or all three storage devices.

Modify the dell-environment.yaml with the following parameters to enable BlueStore as the Ceph backend.

CephAnsibleDisksConfig:
  osd_scenario: lvm
  devices:
    - /dev/nvme0n1
    - /dev/nvme1n1
    - /dev/nvme2n1
    - /dev/nvme3n1
    - /dev/nvme4n1
    - /dev/nvme5n1
    - /dev/nvme6n1
    - /dev/nvme7n1
CephAnsibleExtraConfig:
  osd_objectstore: bluestore
  osds_per_device: 2
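After the Overcloud deployment completes, one way to confirm that the OSDs were created with the BlueStore backend and with two OSDs per NVMe device is to query the OSD metadata from a host with Ceph admin access; the commands below are illustrative.

$ ceph osd metadata 0 | grep osd_objectstore   # expected to report "bluestore"
$ ceph osd tree                                # 16 OSDs per Converged node (2 per NVMe)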


Chapter 5: Deployment

Topics:

• Before you begin
• Deployment workflow
• Solution Admin Host
• Red Hat OpenStack Platform Director
• Undercloud Deployment
• Configure and deploy the Overcloud
• Red Hat Ceph Storage Dashboard deployment and configuration (optional)

The Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform utilizes Dell EMC PowerEdge R-Series servers for the deployment of RHOSP version 13. Key features of Dell EMC PowerEdge R-Series rack servers:

• Automate productivity
• Comprehensive security

This chapter also describes the BIOS and network configuration, and the installation prerequisites with the proper configuration for Controller and Converged nodes. Additionally, the entire deployment process, including the manual creation of the SAH and Director node for the Undercloud and the deployment of the Overcloud, is part of this chapter. Lastly, this chapter describes performance tuning parameters, resource isolation and how node placement works.


Before you begin

This section outlines the requirements for setting up an environment to provision Red Hat OpenStack Platform 13. It comprises the requirements for setting up the director and accessing it, and the hardware requirements for hosts that the director provisions for OpenStack services. The Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform is a companion to this deployment and provides a detailed description of the underlying Red Hat OpenStack Platform for this ready architecture, its hardware and software components, and deployment methodologies.

Tested BIOS and firmware versions

Table 7: Validated firmware versions

Components Versions

BIOS version 1.6.13

iDRAC with Lifecycle Controller 3.30.30.30

Power supply 00.1B.53

Intel(R) Ethernet 25G 2P XXV710 adapter 18.8.9

Intel(R) Gigabit 4P X710/I350 rNDC 18.8.9

PERC H740P Mini 50.5.0-1750

Table 8: BIOS configuration

Parameter                   Controller node                                   Converged node                                    SAH node
PXE Device settings         Device 3 enabled                                  Device 3 enabled                                  Default
PXE device interface        Integrated NIC 1 Port 3 partition 1               Integrated NIC 1 Port 3 partition 1               Default
Virtualization technology   Default                                           Enabled                                           Enabled
UEFI Boot sequence          Integrated RAID Controller 1: red;                Integrated RAID Controller 1: red;                Integrated RAID Controller 1: red;
                            PXE Device 3: Integrated NIC 1 Port 3 partition   PXE Device 3: Integrated NIC 1 Port 3 partition   PXE Device 3: Integrated NIC 1 Port 3 partition

Note: The UEFI Boot settings values should be selected (check the corresponding check boxes).

Table 9: Switches firmware version

Product Version

S3048-ON firmware Cumulus Linux OS 3.7.1

S5248-ON firmware Cumulus Linux OS 3.7.1

Disk layout

Disk layout for the Controller, SAH and Converged nodes


Table 10: Disk configuration for Controller and SAH Node

Components Disk information

Layout RAID-10

Media type HDD

Capacity 2791 GB

Physical disk 10

Physical disk name Physical Disk 0:1:0 to Physical Disk 0:1:9

Table 11: Disk configuration for Converged Node

Components Disk information

Layout RAID-1

Media type SSD

Capacity 223 GB

Physical disk 2

Physical disk name Solid State Disk 0:1:0

Solid State Disk 0:1:1

Virtual Disk Creation

The following procedure describes the steps required for successfully creating the virtual disk on which the Operating System will be installed. Depending on the type of node you are preparing, the virtual disk layout may differ. Refer to the tables presented in the previous section for detailed information.

Virtual Disk creation procedure for Controller, SAH and Converged nodes:

1. Log in to the iDRAC web user interface via the iDRAC IP of the physical host.
2. Expand the Configuration tab.
3. Select Storage Configuration.
4. Make sure the PERC H740P Mini controller is selected in the drop-down list.
5. Under Virtual Disk Configuration, click on Create Virtual Disk.
6. Give a name to the Virtual Disk and select the appropriate RAID layout.
7. Select the appropriate Physical Disks and click on Add to Pending Operations.
8. Once back on the Configuration page, click on Apply Now.
9. Watch the status of the creation in the Job Queue.

To verify that the Virtual Disks have been created:

1. Go to the iDRAC and log in with the given credentials.
2. On the Dashboard, select Storage.
3. Click on Virtual Disks; the Virtual Disk details will be displayed, as shown in the screenshot below.


Figure 7: Verify Virtual Disk

Note: The display can differ depending on which node's Virtual Disk is being verified.

Software requirements

The software requirements include:

• Red Hat Enterprise Linux 7.6
• Red Hat OpenStack Platform version 13
• Red Hat Ceph Storage 3.2

Note:

The user needs to be aware of the subscription Pool IDs at this stage.

Please contact a Dell EMC sales representative for any software components required to perform these steps.

Deployment workflow

Figure 8: Workflow for RHOSP deployment over HCI

Figure 8 illustrates the workflow of the RHOSP deployment over Hyper-Converged Infrastructure. The activity involves deployment of the SAH node, configuring and installing the Undercloud, and finally deployment of the Overcloud. The rest of this chapter describes the deployment in detail.


Solution Admin Host

Prepare Solution Admin Host

The SAH (Solution Admin Host) node holds the Director VM and serves the purpose of configuring and deploying OpenStack platform services to the Controller and Converged nodes. For detailed information, please refer to the Overview on page 15 section.

The Solution Admin Host requires a jumphost, either Linux or Windows. Preparation of the Solution Admin Host begins with the installation of Red Hat Enterprise Linux Server 7.6.

Creating Virtual Disks is an essential prerequisite for SAH Node deployment.

Note: For the virtual disk layout, please refer to the Virtual Disk Creation section on page 36.

SAH deployment overview

A kickstart mechanism provides automated deployment of the SAH node. The installation process can be accomplished using virtual CD/DVD media. The SAH node kickstarts the Undercloud deployment and creates the Dashboard VM required for the Overcloud.

Kickstart file customization

This kickstart file performs the following steps when properly configured:

• Partitions the disks
• Sets SELinux to permissive mode
• Disables firewalld, and uses iptables
• Disables NetworkManager
• Configures networking for the rest of the nodes, including:
  • Bonding
  • Bridges
  • Static IP addresses
  • Gateway
  • Name resolution
  • NTP service
• Registers the system using the Red Hat Subscription Manager

Additionally, there are some requirements that must be satisfied prior to installation of the Operating System:

• A Red Hat subscription
• Access to the Subscription Manager hosts

1. From a Linux host, clone the public repository of the Ready Architecture https://github.com/dsp-jetpack/JetPack/ using Git.
2. Switch to the HCI_OSP13 branch https://github.com/dsp-jetpack/JetPack/tree/HCI_OSP13.
3. Edit the osp_sah.ks file located in the kickstart folder.
4. Modify it according to your needs and set the following variables:

Table 12: Kickstart file variables

Variables Description

HostName The FQDN of the server, e.g., sah.acme.com.

SystemPassword The root user password for the system.


SubscriptionManagerUser The user credential when registering with Subscription Manager.

SubscriptionManagerPassword The user password when registering with Subscription Manager.

SubscriptionManagerPool The pool ID used when attaching the system to an entitlement.

SubscriptionManagerProxy Optional proxy server to use when attaching the system to an entitlement.

SubscriptionManagerProxyPort Optional port for the proxy server.

SubscriptionManagerProxyUser Optional user name for the proxy server.

SubscriptionManagerProxyPassword Optional password for the proxy server.

Gateway The default gateway for the system.

NameServers A comma-separated list of nameserver IP addresses.

NTPServers A comma-separated list of time servers. These can be IP addresses or FQDNs.

TimeZone The time zone in which the system resides.

anaconda_interface The public interface that allows connection to Red Hat Subscription services. For 10GbE or 25GbE Intel NICs, "em4" (the fourth NIC on the motherboard) should be used.

extern_bond_name The name of the bond that provides access to the external network.

extern_boot_opts The boot options for the bond on the external network. Typically, there is no need to change this variable.

extern_bond_opts The bonding options for the bond on the external network. Typically, there is no need to change this variable.

extern_ifaces A space-delimited list of interface names to bond together for the bond on the external network.

internal_bond_name The name of the bond that provides access for all internal networks.

internal_boot_opts The boot options for the bond on the internal network. Typically, there is no need to change this variable.

internal_bond_opts The bonding options for the bond on the internal network. Typically, there is no need to change this variable.

internal_ifaces A space-delimited list of interface names to bond together for the bond on the internal network.

mgmt_bond_name The VLAN interface name for the management network. Typically, there is no need to change this variable.

prov_bond_name The VLAN interface name for the provisioning network.


prov_boot_opts The boot options for the provisioning VLAN interface. Typically, there is no need to change this variable.

stor_bond_name The VLAN interface name for the storage network.

stor_boot_opts The boot options for the storage VLAN interface. Typically, there is no need to change this variable.

pub_api_bond_name The VLAN interface name for the public API interface.

pub_api_boot_opts The boot options for the public API VLAN interface. Typically, there is no need to change this variable.

priv_api_bond_name The VLAN interface name for the private API interface.

priv_api_boot_opts The boot options for the private API VLAN interface. Typically, there is no need to change this variable.

br_mgmt_boot_opts The bonding options, IP address and netmask for the management bridge.

br_prov_boot_opts The bonding options, IP address and netmask for the provisioning bridge.

br_stor_boot_opts The bonding options, IP address and netmask for the storage bridge.

br_pub_api_boot_opts The bonding options, IP address and netmask for the public API bridge.

br_priv_api_boot_opts The bonding options, IP address and netmask for the private API bridge.

prov_network The network IP address for the provisioning network for use by the NTP server.

prov_netmask The netmask for the provisioning network for use by the NTP server.
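Purely as an illustration, a fragment of such variable settings could look like the following. The exact syntax and surrounding content are defined by the osp_sah.ks template itself, and every value shown is a placeholder.

HostName="sah.example.local"
Gateway="100.67.139.1"
NameServers="8.8.8.8,8.8.4.4"
NTPServers="0.centos.pool.ntp.org"
TimeZone="UTC"
anaconda_interface="em4"
extern_ifaces="p2p1 p3p1"
internal_ifaces="p2p2 p3p2"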

Creating image

1. Create .img file

$ dd if=/dev/zero of=osp_ks.img bs=1M count=1

2. Create file system

$ mkfs -t ext3 -F osp_ks.img

mke2fs 1.42.9 (28-Dec-2013)
Discarding device blocks: done
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
Stride=0 blocks, Stripe width=0 blocks
128 inodes, 1024 blocks
51 blocks (4.98%) reserved for the super user
First data block=1
Maximum filesystem blocks=1048576
1 block group
8192 blocks per group, 8192 fragments per group
128 inodes per group

Allocating group tables: done
Writing inode tables: done
Writing superblocks and filesystem accounting information: done

3. Create a directory where the image file will be mounted

$ mkdir /mnt/usb

4. Mount the filesystem in the usb directory

$ mount -o loop osp_ks.img /mnt/usb

5. Copy the SAH kickstart file to the usb directory

$ cp osp_sah.ks /mnt/usb/

6. Unmount the filesystem

$ umount /mnt/usb

7. Copy the .img file to a host from which you have access to the SAH node iDRAC user interface.

Presenting the image to the RHEL OS installation process

Note: At this stage, the RHEL 7.6 ISO file needs to be downloaded to the host from which you will access the SAH iDRAC user interface.

From a host that has access to the SAH iDRAC user interface,

1. Connect to the SAH iDRAC using the appropriate credentials.
2. Refer to Table 8: BIOS configuration on page 35 and verify that the BIOS parameters are set appropriately.
3. Click on Launch Virtual Console.
4. On the console, click on Connect Virtual Media.
5. On the Virtual Media window, browse for the RHEL 7.6 ISO image and click on Map Device. This will map the RHEL 7.6 ISO as a virtual DVD.
6. On the same window, browse to the image containing the kickstart file created in the previous section and click on Map Device. This will map the osp_ks.img file as a virtual removable disk.

Figure 9: Check RHEL iso and image file


Deploy SAH node

1. From the console, click on Boot and select Virtual CD/DVD/ISO.
2. Click on Power, then select Power On System.
3. Boot from the DVD.

• Select Install Red Hat Enterprise Linux 7.6 and press 'e' to edit the selected item.

Figure 10: Install Red Hat

• Edit the command line when prompted. Add the following:

inst.ks=hd:sdb:/osp_sah.ks

Figure 11: Provide kickstart file

• Exit and continue the boot process (Ctrl+X).
• After successful deployment of the SAH node, from the console, click on Virtual Media and then Disconnect Virtual Media.


Red Hat OpenStack Platform Director

Director is a single-system OpenStack installation that comprises components for provisioning and managing the OpenStack nodes that form your OpenStack environment (the Overcloud). For detailed information on Red Hat OpenStack Platform Director, please refer to Red Hat OpenStack Platform Director on page 43.

Kickstart file customization

This kickstart file performs the following steps when properly configured:

• Partitions the disk
• Sets SELinux to enforcing mode
• Configures iptables to ensure the following services can pass traffic:
  • HTTP
  • HTTPS
  • DNS
  • TFTP
  • TCP port 8140
• Configures networking, including:
  • Static IP addresses
  • Gateway
  • Name resolution
  • NTP time service
• Registers the system using the Red Hat Subscription Manager
• Installs the Red Hat OpenStack Platform Director installer

1. Log in to the SAH node as root.
2. Clone the public repository of the Ready Architecture https://github.com/dsp-jetpack/JetPack/ using Git.
3. Switch to the HCI_OSP13 branch https://github.com/dsp-jetpack/JetPack/tree/HCI_OSP13.
4. Edit the director.ks file located in the kickstart folder.
5. From the director kickstart template provided as part of the Ready Architecture, modify it according to your needs and set the following variables:

Table 13: Kickstart File Variables

Variables Description

rootpassword The root user password for the system.

timezone The timezone the system is in.

smuser The user credential when registering with Subscription manager.

smpassword The user password when registering with Subscription Manager. The password must be enclosed in single quotes if it contains certain special characters.

smpool The Red Hat OpenStack Platform Director (Virtual Node) pool ID used when attaching the system to an entitlement.

hostname The FQDN of the Director Node.

gateway The default gateway for the system.

nameserver A comma-separated list of nameserver IP addresses.


ntpserver The SAH node's provisioning IP address. The SAH node is an NTP server, and will synchronize to the NTP servers specified in the SAH node's kickstart file.

user The ID of an admin user to create for installing Red Hat OpenStack Platform Director. The default admin user is osp_admin.

password The password for the osp_admin user.

eth0 This line specifies the IP address and network mask for the public API network. The line begins with eth0, followed by at least one space, the IP address of the VM on the public API network, another set of spaces, and then the network mask.

eth1 This line specifies the IP address and network mask for the provisioning network. The line begins with eth1, followed by at least one space, the IP address of the VM on the provisioning network, another set of spaces, and then the network mask.

eth2 This line specifies the IP address and network mask for the management network. The line begins with eth2, followed by at least one space, the IP address of the VM on the management network, another set of spaces, and then the network mask.

eth3 This line specifies the IP address and network mask for the private API network. The line begins with eth3, followed by at least one space, the IP address of the VM on the private API network, another set of spaces, and then the network mask.

6. Save the file under /tmp.
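For example, following the format described above for the ethN lines, illustrative (placeholder) entries could look like the following; the addresses are examples only, not validated values.

eth0 192.168.190.20  255.255.255.0
eth1 192.168.120.14  255.255.255.0
eth2 192.168.110.20  255.255.255.0
eth3 192.168.140.20  255.255.255.0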

Red Hat OpenStack Platform Director as VM deployment

1. Create the iso directory where the RHEL 7.6 iso file will be located.

$ mkdir -p /store/data/iso

2. Create the images directory where the VM image will be located.

$ mkdir /store/data/images

3. Copy the RHEL 7.6 ISO file to the /store/data/iso directory.
4. Create the director VM as follows:

$ virt-install --name director --ram 32768 --vcpus 8 --hvm --os-type linux --os-variant rhel7 \
  --disk /store/data/images/director.img,bus=virtio,size=80 --network bridge=br-pub-api \
  --network bridge=br-prov --network bridge=br-mgmt --network bridge=br-priv-api \
  --initrd-inject /tmp/director.ks --extra-args ks=file:/director.ks --noautoconsole --graphics spice \
  --autostart --location /store/data/iso/rhel-server-7.6-x86_64-dvd.iso

Note: Please refer to the Overview on page 15 for an overview of the network bridges.

5. Once the deployment is triggered, progress can be monitored using virt-viewer.

$ virt-viewer director


6. The status of the VM can be verified by using the virsh list command.

$ virsh list --all
 Id    Name       State
----------------------------------------------------
 -     director   shut off

Note: The VM will appear as shut off when the installation completes.

7. Once the VM is installed and shut off, start the Director VM.

$ virsh start director

Note: It can take a few minutes before the Director VM is pingable.

8. After successful installation, the Director VM is accessible through SSH using the appropriate credentials

Note: You need to use the credentials that you specified in the kickstart file.

Undercloud Deployment

Configure the Undercloud

1. Log in to the director VM as root.
2. Install python-tripleoclient and ceph-ansible.

$ yum -y install python-tripleoclient ceph-ansible

3. Switch to the osp_admin user, which is configured for passwordless access.

$ su -l osp_admin

4. Create directories for templates and images.

$ mkdir ~/images
$ mkdir ~/templates

5. Copy the Undercloud configuration sample file and save it as undercloud.conf.

$ cp /usr/share/instack-undercloud/undercloud.conf.sample ~/undercloud.conf

6. Modify the undercloud.conf file according to your needs.

Table 14: Undercloud variables

Parameters Description

undercloud_hostname Defines the fully-qualified host name for the Undercloud.

local_ip The IP address and prefix of the Director Node on the provisioning network in Classless Inter-Domain Routing (CIDR) format (xx.xx.xx.xx/yy). This must be the IP address used for eth1 in director.cfg. The prefix used here must correspond to the netmask for eth1 as well (usually 24).

subnets List of routed network subnets for provisioning and introspection.


local_subnet Name of the local subnet where the PXE and DHCP interfaces reside.

local_interface Name of the network interface responsible for PXE booting the Overcloud instances.

masquerade_network The network address and prefix of the Director Node on the provisioning network in CIDR format (xx.xx.xx.xx/yy). This must be the network used for eth1 in director.cfg. The prefix used here must correspond to the netmask for eth1 as well (usually 24).

inspection_enable_uefi To support the UEFI boot method.

enable_ui To enable the TripleO user interface.

ipxe_enabled To support iPXE for deployment and introspection.

scheduler_max_attempts Maximum number of attempts (30) when deploying the Overcloud instances.

clean_nodes To wipe the disks of the Converged nodes when data already exists.

cidr Network CIDR for the Neutron-managed subnet for Overcloud instances.

dhcp_start The starting IP address on the provisioning network to use for Overcloud nodes.

Note: Ensure the IP address of the Director Node is not included.

dhcp_end The ending IP address on the provisioning network for Overcloud nodes.

inspection_iprange An IP address range on the provisioning network to use during node inspection.

Note: This should not overlap with the dhcp_start/dhcp_end range.

gateway IP address of the gateway used by the Overcloud instances. Generally the Undercloud ctl-plane IP.

Note: For more modification details, please refer to the appendix Undercloud configuration file on page 92.

7. Save and exit the file.
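For reference, a trimmed, illustrative undercloud.conf fragment is shown below. The parameter names come from the table above; the values and section layout are examples only - the validated file is reproduced in the Undercloud configuration file appendix on page 92.

[DEFAULT]
undercloud_hostname = director.example.local
local_ip = 192.168.120.13/24
local_interface = eth1
subnets = ctlplane-subnet
local_subnet = ctlplane-subnet
masquerade_network = 192.168.120.0/24
inspection_enable_uefi = true
ipxe_enabled = true
enable_ui = true
scheduler_max_attempts = 30
clean_nodes = true

[ctlplane-subnet]
cidr = 192.168.120.0/24
dhcp_start = 192.168.120.20
dhcp_end = 192.168.120.120
inspection_iprange = 192.168.120.121,192.168.120.150
gateway = 192.168.120.13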

Install Undercloud

1. Start the undercloud installation.

$ openstack undercloud install

2. When successful the displayed output will be similar to the following.

2019-01-29 10:48:29,615 INFO: Logging to /home/osp_admin/.instack/install-undercloud.log
2019-01-29 10:48:29,730 INFO: Checking for a FQDN hostname...
2019-01-29 10:48:29,748 INFO: Static hostname detected as director.oss.labs
2019-01-29 10:48:29,764 INFO: Transient hostname detected as director.oss.labs
2019-01-29 11:00:47,078 INFO: Created flavor "swift-storage" with profile "swift-storage"
2019-01-29 11:00:47,078 INFO: Configuring Mistral workbooks
2019-01-29 11:00:59,358 INFO: Mistral workbooks configured successfully
2019-01-29 11:01:50,544 INFO: Configuring an hourly cron trigger for tripleo-ui logging
2019-01-29 11:01:52,118 INFO:
#############################################################################
Undercloud install complete.

The file containing this installation's passwords is at
/home/osp_admin/undercloud-passwords.conf.

There is also a stackrc file at /home/osp_admin/stackrc.

These files are needed to interact with the OpenStack services, and should be secured.

#############################################################################

3. Download the images which are required to install the Overcloud.

$ source stackrc
$ sudo yum install rhosp-director-images rhosp-director-images-ipa

4. Extract the images to the ~/images directory.

$ cd ~/images
$ for i in /usr/share/rhosp-director-images/overcloud-full-latest-13.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-13.0.tar ; do tar -xvf $i ; done

5. Upload the images to Glance.

$ openstack overcloud image upload --image-path /home/osp_admin/images

Image "overcloud-full-vmlinuz" was uploaded.+--------------------------------------+------------------------+-------------+---------+--------+| ID | Name | Disk Format | Size | Status |+--------------------------------------+------------------------+-------------+---------+--------+| 74abf8fe-686f-47d2-8fe4-12c92c88c2a5 | overcloud-full-vmlinuz | aki | 6639920 | active |+--------------------------------------+------------------------+-------------+---------+--------+Image "overcloud-full-initrd" was uploaded.+--------------------------------------+-----------------------+-------------+----------+--------+| ID | Name | Disk Format | Size | Status |+--------------------------------------+-----------------------+-------------+----------+--------+| 94e8715e-232a-4ca0-bad3-e674dfb6264e | overcloud-full-initrd | ari | 62457227 | active |

Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform | Architecture Guide | Version 1

Page 48: Infrastructure on Red Hat OpenStack Platform Dell EMC Ready ... · This architecture guide describes a Red Hat OpenStack Platform and Red Hat Ceph Storage approach for single node

48 | Deployment

+--------------------------------------+-----------------------+-------------+----------+--------+Image "overcloud-full" was uploaded.+--------------------------------------+----------------+-------------+------------+--------+| ID | Name | Disk Format | Size | Status |+--------------------------------------+----------------+-------------+------------+--------+| 64981da1-498a-421a-8613-9acc38ff125a | overcloud-full | qcow2 | 1347420160 | active |+--------------------------------------+----------------+-------------+------------+--------+Image "bm-deploy-kernel" was uploaded.+--------------------------------------+------------------+-------------+---------+--------+| ID | Name | Disk Format | Size | Status |+--------------------------------------+------------------+-------------+---------+--------+| 625a1fac-9fa6-46b2-9abb-465e744b808a | bm-deploy-kernel | aki | 6639920 | active |+--------------------------------------+------------------+-------------+---------+--------+Image "bm-deploy-ramdisk" was uploaded.+--------------------------------------+-------------------+-------------+-----------+--------+| ID | Name | Disk Format | Size | Status |+--------------------------------------+-------------------+-------------+-----------+--------+| 9e653fad-cd55-4b14-9c30-2488ed5b239c | bm-deploy-ramdisk | ari | 420527022 | active |+--------------------------------------+-------------------+-------------+-----------+--------+

6. Verify successful uploading of Overcloud images to Glance.

$ openstack image list

+--------------------------------------+------------------------+--------+
| ID                                   | Name                   | Status |
+--------------------------------------+------------------------+--------+
| 625a1fac-9fa6-46b2-9abb-465e744b808a | bm-deploy-kernel       | active |
| 9e653fad-cd55-4b14-9c30-2488ed5b239c | bm-deploy-ramdisk      | active |
| 64981da1-498a-421a-8613-9acc38ff125a | overcloud-full         | active |
| 94e8715e-232a-4ca0-bad3-e674dfb6264e | overcloud-full-initrd  | active |
| 74abf8fe-686f-47d2-8fe4-12c92c88c2a5 | overcloud-full-vmlinuz | active |
+--------------------------------------+------------------------+--------+

7. Get ID of subnet used as ctl-plane.

$ openstack subnet list

+--------------------------------------+-----------------+--------------------------------------+------------------+
| ID                                   | Name            | Network                              | Subnet           |
+--------------------------------------+-----------------+--------------------------------------+------------------+
| 23f88d67-0f42-42a4-b508-19b411404ee3 | ctlplane-subnet | 26ee3e35-accb-4e16-8b53-1f06288c6ed1 | 192.168.120.0/24 |
+--------------------------------------+-----------------+--------------------------------------+------------------+

8. Add a DNS entry to the subnet.

$ openstack subnet set 23f88d67-0f42-42a4-b508-19b411404ee3 --dns-nameserver 8.8.8.8

9. Create registries for Overcloud and Ceph images in containerized environment.

$ openstack overcloud container image prepare --output-env-file /home/osp_admin/overcloud_images.yaml \--namespace=registry.access.redhat.com/rhosp13 \-e /usr/share/openstack-tripleo-heat-templates/environments/ceph-ansible/ceph-ansible.yaml \-e /usr/share/openstack-tripleo-heat-templates/environments/services-docker/ironic.yaml \--set ceph_namespace=registry.access.redhat.com/rhceph --set ceph_image=rhceph-3-rhel7 \--tag-from-label 13-0

Configure and deploy the Overcloud

This topic describes the steps needed to successfully deploy the Overcloud. The following procedures are discussed in the order they need to be executed:

1. Prepare the nodes registration file.
2. Configure networking.
3. Configure the cluster.
4. Configure the static IPs.
5. Configure the virtual IPs.
6. Configure the nodes' NICs.
7. Register and introspect the nodes.
8. Prepare and deploy the Overcloud.

In order to deploy the Overcloud successfully, a few heat template files are needed. Some of them have to be altered, others will keep their default values.

The directory structure stores the files per functionality and application. The templates that the user needs to modify are in two directories: ~/templates/overcloud/ and ~/templates/nic-configs/.

Please refer to https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/13/html-single/hyper-converged_infrastructure_guide/index for overall deployment configuration management.

Note: Only the files that you need to modify will be covered in this chapter. However, you can refer to the Environment Files on page 83 appendix for a complete overview of the environment files used with this architecture guide.

The following table describes the functionality of each environment file.


Table 15: List of environment files

Filename                                   Location                               Description                                                 User customized
network-environment.yaml (on page 84)      ~/templates/overcloud/                 Defines the network environment for all OpenStack services  Yes
dell-environment.yaml (on page 89)          ~/templates/                           Defines all cluster parameters including Ceph               Yes
static-ip-environment.yaml (on page 88)     ~/templates/overcloud/                 Defines IPs reserved for each node per OpenStack network    Yes
static-vip-environment.yaml (on page 87)    ~/templates/overcloud/                 Defines IPs reserved for OpenStack services requiring HA    Yes
storage-environment.yaml                    ~/templates/overcloud/environments/    Defines the storage backend used by Nova and Cinder         No
nic_environment.yaml (on page 89)           ~/templates/nic-configs/               Defines the network interface bonding mapping               Yes
controller.yaml                             ~/templates/nic-configs/               Defines the network configuration for Controller nodes      No
computeHCI.yaml                             ~/templates/nic-configs/               Defines the network configuration for Converged nodes       Yes
overcloud_images.yaml                       ~/                                     Defines the Docker image locations for all OpenStack        No
                                                                                   related services
puppet-pacemaker.yaml                       ~/templates/overcloud/environments     Defines the HA OpenStack configuration with Pacemaker       No
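The environment files above are ultimately passed to the Overcloud deployment command. The exact command, arguments and any additional files used by the validated deployment are defined by the JetPack tooling and the remaining steps of this chapter; the sketch below only illustrates how the files in Table 15 are typically supplied with -e options.

$ openstack overcloud deploy --templates ~/templates/overcloud \
    -e ~/overcloud_images.yaml \
    -e ~/templates/overcloud/environments/puppet-pacemaker.yaml \
    -e ~/templates/overcloud/environments/storage-environment.yaml \
    -e ~/templates/overcloud/network-environment.yaml \
    -e ~/templates/overcloud/static-ip-environment.yaml \
    -e ~/templates/overcloud/static-vip-environment.yaml \
    -e ~/templates/nic-configs/nic_environment.yaml \
    -e ~/templates/dell-environment.yaml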

Prepare the nodes registration file

In order to register the physical nodes on which the Overcloud will be deployed, the following actions need to be executed:

1. On the Director VM, navigate to the osp_admin home directory.
2. Clone the public repository of the Ready Architecture https://github.com/dsp-jetpack/JetPack/ using Git.
3. Switch to the HCI_OSP13 branch https://github.com/dsp-jetpack/JetPack/tree/HCI_OSP13.
4. Edit the instackenv.json file.
5. Collect the physical information for each node as described in the following table.

Note: The IP, VLAN and MTU values are examples; a user can configure and deploy per their network requirements.

Table 16: instackenv.json file parameters

Parameter     Sample Value                                       Description
name          control-0                                          Name of the node.
capabilities  node:control-0,boot_option:local,boot_mode:uefi    Capabilities of the node. UEFI boot mode enabled.
root_device   {"size":"2791"}                                    Size of the virtual boot disk. It should be either 2791 or 223 depending on the type of node.
pm_addr       192.168.110.12                                     IPMI iDRAC IP address.
pm_password   xxxxx                                              IPMI iDRAC password.
pm_type       pxe_drac                                           DRAC mode.
pm_user       root                                               IPMI iDRAC user.

6. Save the file under the osp_admin home directory.

Note: Please refer to Appendix B for more details about the file content.
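A single illustrative node entry, built only from the keys and sample values in Table 16 (all values are placeholders for your environment), might look like this:

{
  "nodes": [
    {
      "name": "control-0",
      "capabilities": "node:control-0,boot_option:local,boot_mode:uefi",
      "root_device": {"size": "2791"},
      "pm_type": "pxe_drac",
      "pm_addr": "192.168.110.12",
      "pm_user": "root",
      "pm_password": "xxxxx"
    }
  ]
}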

Register and introspect the nodes

To register and introspect the nodes:

Note: At this time, all physical nodes need to be configured properly, including the BIOS settings and the virtual disk created as described in the Before you begin on page 35 section.

1. From the osp_admin home directory, source the Undercloud environment file.

$ source ~/stackrc

2. Register the nodes in the Undercloud.

$ openstack overcloud node import ~/instackenv.json

3. Wait for successful registration.

Started Mistral Workflow tripleo.baremetal.v1.register_or_update. Execution ID: 1a85356c-7059-47c2-8342-0a7209e5d62d
Waiting for messages on queue 'tripleo' with no timeout.
6 node(s) successfully moved to the "manageable" state.
Successfully registered node UUID 2a71f666-61ac-498d-a1e5-be6b340c3f69
Successfully registered node UUID 518ea65f-a415-47fe-8839-637e7ab0f83f
Successfully registered node UUID 7b4f7223-4fe3-4100-9a35-8175ace438b4
Successfully registered node UUID f5a1b33a-0017-4712-8638-35d070a418a1
Successfully registered node UUID 374e9765-2cdf-4986-b9cb-fc8ae35e4de0
Successfully registered node UUID ba730382-2453-4a5b-b5db-1542dee710bc

4. Launch the introspection.

$ openstack overcloud node introspect --all-manageable --provide

5. Monitor introspection while it is running.

$ journalctl -l -u openstack-ironic-inspector -u openstack-ironic-inspector-dnsmasq -u openstack-ironic-conductor -f

6. When the introspection ends, all bare-metal nodes should be marked as available.

$ openstack baremetal node list
+--------------------------------------+--------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name         | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+--------------+---------------+-------------+--------------------+-------------+
| 1cc47781-2b62-4fe2-8ce4-6a10c609ff4a | control-0    | None          | power off   | available          | False       |
| adaca128-1f2c-4eb6-a1bb-ae0b93fe2d2f | control-1    | None          | power off   | available          | False       |
| 2effb861-e759-4e5b-8744-16ab9d859cf0 | control-2    | None          | power off   | available          | False       |
| 26cb068a-84d6-4fb6-9433-9d061a7c6adb | computeHCI-0 | None          | power off   | available          | False       |
| e3c2906f-d590-4ae0-927f-4267312545b6 | computeHCI-1 | None          | power off   | available          | False       |
| 48309885-ebcd-41dd-8a9f-7cecfe3f1a0e | computeHCI-2 | None          | power off   | available          | False       |
+--------------------------------------+--------------+---------------+-------------+--------------------+-------------+

Configure networking

To configure network environment parameters:

1. On the Director VM, from the osp_admin home directory, copy all files needed for the upcoming deployment.

$ cp -R JetPack/templates/ templates/

2. Edit the templates/overcloud/network-environment.yaml file.
3. Search the CHANGEME sections to make changes.
4. Make changes as described in the following table.

Note: The IP, VLAN and MTU values are a set of examples; the user can configure and deploy them per their network requirements.

Table 17: network-environment.yaml file parameters

Parameter Name Default Value Description

NeutronGlobalPhysnetMtu 1500 MTU value for Neutron networks

ManagementNetCidr 192.168.110.0/24 CIDR block for the Management network

InternalApiNetCidr 192.168.140.0/24 CIDR block for the Private API network.

TenantNetCidr 192.168.130.0/24 CIDR block for the Tenant network. For future support of Generic Routing Encapsulation (GRE) or VXLAN networks.

StorageNetCidr 192.168.170.0/24 CIDR block for the Storage network.

StorageMgmtNetCidr 192.168.180.0/24 CIDR block for the Storage Clustering network.

ExternalNetCidr 100.67.139.0/26 CIDR block for the External network.

ManagementAllocationPools [{'start': '192.168.110.30', 'end': '192.168.110.45'}]

IP address range on the Management network for use by the iDRAC DHCP server.


InternalApiAllocationPools [{'start': '192.168.140.50', 'end': '192.168.140.120'}]

IP address range for the Private API network.

TenantAllocationPools [{'start': '192.168.130.50', 'end': '192.168.130.120'}]

IP address range for the Tenant network. Not used unless you wish to configure Generic Routing Encapsulation (GRE) or VXLAN networks.

StorageAllocationPools [{'start': '192.168.170.50', 'end': '192.168.170.120'}]

IP address range for the Storage network.

StorageMgmtAllocationPools [{'start': '192.168.180.50', 'end': '192.168.180.120'}]

IP address range for the Storage Clustering network.

ExternalAllocationPools [{'start': '100.67.139.20', 'end': '100.67.139.50'}]

IP address range for the External network.

ExternalInterfaceDefaultRoute 100.67.139.1 Router gateway on the External network.

ManagementNetworkGateway 192.168.110.1 The IP address of the gateway onthe Management network.

ProvisioningNetworkGateway 192.168.120.1 The IP address of the gateway on the Provisioning network, which allows access to the Management network.

ControlPlaneDefaultRoute 192.168.120.13 Router gateway on the provisioning network (or Undercloud IP address).

ControlPlaneSubnetCidr 24 CIDR of the control plane network.

EC2MetadataIp 192.168.120.13 IP address of the Undercloud.

DnsServers ["8.8.8.8"] DNS servers for the Overcloudnodes to use.

InternalApiNetworkVlanID 140 VLAN ID of the Private API network.

StorageNetworkVlanID 170 VLAN ID of the Storage network.

StorageMgmtNetworkVlanID 180 VLAN ID of the Storage Clustering network.

TenantNetworkVlanID 130 VLAN ID of the Tenant network. For future support of Generic Routing Encapsulation (GRE) or VXLAN networks.

ExternalNetworkVlanID 1391 VLAN ID of the External network.


NeutronExternalNetworkBridge " " Empty string for External VLAN, or br-ex if on the native VLAN.

ExternalNetworkMTU 1500 MTU Value for External network

InternalApiMTU 1500 MTU Value for Internal API network

StorageNetworkMTU 1500 MTU Value for Storage network

TenantNetworkMTU 1500 MTU Value for Tenant network

ProvisioningNetworkMTU 1500 MTU Value for Provisioning network

ManagementNetworkMTU 1500 MTU Value for Management network

DefaultBondMTU 1500 MTU Value for Default bonds

Configure cluster

To configure cluster environment parameters:

1. On the Director VM, from the osp_admin home directory, edit the templates/dell-environment.yaml file

2. Search the CHANGEME sections to make changes.
3. Make changes as described in the following table.

Table 18: dell-environment.yaml file parameters

Parameter Name Default Value Description

NeutronPublicInterface bond1 Bond interface for external access

NeutronNetworkType vlan Tenant network type

OvercloudComputeHCIFlavor baremetal Flavor used for Converged role

OvercloudControllerFlavor baremetal Flavor used for Controller role

ComputeHCICount 3 Number of Converged nodes

ControllerCount 3 Number of Controller nodes

CephPools [{"name": "volumes","pg_num": 4096,"pgp_num": 4096},{"name": "vms","pg_num": 1024,"pgp_num": 1024},{"name": "images","pg_num": 512,"pgp_num": 521}]

Ceph PG values

CephPoolDefaultSize 2 Default Ceph pool size

osd_scenario lvm Storage backend scenario


devices - /dev/nvme0n1 - /dev/nvme1n1 - /dev/nvme2n1 - /dev/nvme3n1 - /dev/nvme4n1 - /dev/nvme5n1 - /dev/nvme6n1 - /dev/nvme7n1

List of NVMe disks to use as OSDs

osd_objectstore bluestore Object store to use

osds_per_device 2 Number of OSDs per NVMe disk

osd_max_backfills 1 Limit on backfill processes running simultaneously

ceph_osd_docker_memory_limit 10 Limit the amount of memory for each OSD docker process

ceph_osd_docker_cpu_limit 4 Limit the number of vCPUs for each OSD docker process

cpu_allocation_ratio 6.7 CPU Allocation ratio

NovaReservedHostMemory 115000 Amount of memory reserved for Nova (in MB)
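For orientation, the OpenStack-level entries from the table map onto the parameter_defaults section of templates/dell-environment.yaml roughly as in the sketch below (values are the defaults listed above; the Ceph OSD settings such as osd_scenario, devices and osds_per_device are passed through to ceph-ansible elsewhere in the same file and are omitted here):

parameter_defaults:
  NeutronPublicInterface: bond1
  NeutronNetworkType: vlan
  OvercloudControllerFlavor: baremetal
  OvercloudComputeHCIFlavor: baremetal
  ControllerCount: 3
  ComputeHCICount: 3
  CephPoolDefaultSize: 2
  CephPools:
    - {"name": "volumes", "pg_num": 4096, "pgp_num": 4096}
    - {"name": "vms", "pg_num": 1024, "pgp_num": 1024}
    - {"name": "images", "pg_num": 512, "pgp_num": 512}
  # Memory (in MB) held back from Nova for Ceph OSD and host processes
  NovaReservedHostMemory: 115000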

Configure the static IPs

To configure static IP isolation:

1. On the Director VM, from the osp_admin home directory, edit the templates/overcloud/static-ip-environment.yaml file.

2. Make changes as described in the following table.

Note: These IP, VLAN and MTU values are a set of examples; the user can configure and deploy them per their network requirements.

Table 19: static-ip-environment.yaml file parameters

Parameter Name Sub-parameter Name Default Value Description

ControllerIPs tenant - 192.168.130.12 - 192.168.130.13 - 192.168.130.14

List of tenant IPs for Controller nodes on the tenant network

ControllerIPs internal_api - 192.168.140.12 - 192.168.140.13 - 192.168.140.14

List of internal IPs for Controller nodes on the Internal API network

ControllerIPs storage - 192.168.170.12 - 192.168.170.13 - 192.168.170.14

List of storage IPs for Controller nodes on the storage network

ControllerIPs external - 100.67.139.12 - 100.67.139.13 - 100.67.139.14

List of external IPs for Controller nodes on the external network

ComputeHCIIPs tenant - 192.168.130.15 - 192.168.130.16 - 192.168.130.17

List of tenant IPs for Converged nodes on the tenant network


ComputeHCIIPs internal_api - 192.168.140.15 - 192.168.140.16 - 192.168.140.17

List of internal IPs for Converged nodes on the Internal API network

ComputeHCIIPs storage - 192.168.170.15 - 192.168.170.16 - 192.168.170.17

List of storage IPs for Converged nodes on the storage network

ComputeHCIIPs storage_mgmt - 192.168.180.15 - 192.168.180.16 - 192.168.180.17

List of storage mgmt IPs for Converged nodes on the storage management network

Configure the Virtual IPs

To configure the service Virtual IPs:

1. On the Director VM, from the osp_admin home directory, edit the templates/overcloud/static-vip-environment.yaml file.

2. Search the CHANGEME sections to make changes.
3. Make changes as described in the following table.

Note: The IP, VLAN and MTU values are a set of examples; the user can configure and deploy them per their network requirements.

Table 20: static-vip-environment.yaml file parameters

Parameter Name Default Value Description

redis 192.168.140.49 Virtual IP for the redis service

ControlPlaneIP 192.168.120.121 Virtual IP for the provisioning network

InternalApiNetworkVIP 192.168.140.121 Virtual IP for the internal API network

ExternalNetworkVIP 100.67.139.62 Virtual IP for the Public API network

StorageNetworkVIPs 192.168.170.121 Virtual IP for the storage network

StorageMgmtNetworkVIPs 192.168.180.122 Virtual IP for the storage management network

Configure the NIC interfaces

Steps to configure the NIC interfaces:

1. On the Director VM, from the osp_admin home directory, edit the templates/nic-configs/nic_environment.yaml file.

2. Search the CHANGEME sections to make changes.
3. Make changes as described in the following table.


Table 21: nic_environment.yaml file parameters

Parameter Name Default Value Description

ControllerProvisioningInterface em3 Provisioning interface name for Controller nodes

ControllerBond0Interface1 p1p1 Bond 0 Interface #1 name for Controller nodes

ControllerBond0Interface2 p2p1 Bond 0 Interface #2 name for Controller nodes

ControllerBond1Interface1 p1p2 Bond 1 Interface #1 name for Controller nodes

ControllerBond1Interface2 p2p2 Bond 1 Interface #2 name for Controller nodes

ControllerBondInterfaceOptions mode=802.3ad miimon=100 xmit_hash_policy=layer3+4 lacp_rate=1

Bonding mode for Controller nodes

ComputeHCIProvisioningInterface em3 Provisioning interface name for Converged nodes

ComputeHCIBond0Interface1 p2p1 Bond 0 Interface #1 name for Converged nodes

ComputeHCIBond0Interface2 p3p1 Bond 0 Interface #2 name for Converged nodes

ComputeHCIBond1Interface1 p2p2 Bond 1 Interface #1 name for Converged nodes

ComputeHCIBond1Interface2 p3p2 Bond 1 Interface #2 name for Converged nodes

ComputeHCIBondInterfaceOptions mode=802.3ad miimon=100 xmit_hash_policy=layer3+4 lacp_rate=1

Bonding mode for Converged nodes

ComputeHCIBond2Interface1 p6p1 Bond 2 interface #1 name for Converged nodes (for SRIOV/OVS-DPDK future usage)

ComputeHCIBond2Interface2 p7p1 Bond 2 interface #2 name for Converged nodes (for SRIOV/OVS-DPDK future usage)
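For illustration, the Controller entries above would appear in nic_environment.yaml roughly as in the following sketch (the ComputeHCI entries follow the same pattern with their interface names from the table):

parameter_defaults:
  ControllerProvisioningInterface: em3
  ControllerBond0Interface1: p1p1
  ControllerBond0Interface2: p2p1
  ControllerBond1Interface1: p1p2
  ControllerBond1Interface2: p2p2
  ControllerBondInterfaceOptions: "mode=802.3ad miimon=100 xmit_hash_policy=layer3+4 lacp_rate=1"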

Prepare and deploy the Overcloud

Note: Before beginning the Overcloud deployment, to avoid a power-cycling timeout issue which may occur during deployment, it is recommended to increase the default value of the following parameter.


1. On the director VM, change the value of the post_deploy_get_power_state_retry_interval to 60

# sudo sed -i 's/#post_deploy_get_power_state_retry_interval.*/post_deploy_get_power_state_retry_interval = 60/' /etc/ironic/ironic.conf

2. Restart all Ironic services

# sudo systemctl restart openstack-ironic*

To prepare and deploy the Overcloud:

1. Copy the TripleO heat-templates directory to the osp_admin templates directory.

$ cp -R /usr/share/openstack-tripleo-heat-templates/* ~/templates/overcloud

2. Generate a custom roles_data.yaml file that includes the HCI role.

$ openstack overcloud roles generate -o /home/osp_admin/templates/roles_data.yaml Controller ComputeHCI

3. Finally, launch the Overcloud deployment.

$ openstack overcloud deploy --log-file ~/overcloud_deployment.log \
  -t 120 --stack R139-HCI --templates ~/templates/overcloud \
  -e ~/templates/overcloud/environments/ceph-ansible/ceph-ansible.yaml \
  -e ~/templates/overcloud/environments/ceph-ansible/ceph-rgw.yaml \
  -r ~/templates/roles_data.yaml \
  -e ~/templates/overcloud/environments/network-isolation.yaml \
  -e ~/templates/overcloud/network-environment.yaml \
  -e /home/osp_admin/templates/nic-configs/nic_environment.yaml \
  -e ~/templates/overcloud/static-ip-environment.yaml \
  -e ~/templates/overcloud/static-vip-environment.yaml \
  -e ~/templates/overcloud/node-placement.yaml \
  -e ~/templates/overcloud/environments/storage-environment.yaml \
  -e ~/overcloud_images.yaml \
  -e ~/templates/dell-environment.yaml \
  -e ~/templates/overcloud/environments/puppet-pacemaker.yaml \
  --libvirt-type kvm --ntp-server 192.168.120.8

4. Wait for the Overcloud deployment to complete successfully.

Stack R139-HCI CREATE_COMPLETE
Host 100.67.139.62 not found in /home/osp_admin/.ssh/known_hosts
Started Mistral Workflow tripleo.deployment.v1.get_horizon_url. Execution ID: 9d0e7fad-2cef-4552-9079-9b07f91cea13
Overcloud Endpoint: http://100.67.139.62:5000/
Overcloud Horizon Dashboard URL: http://100.67.139.62:80/dashboard
Overcloud rc file: /home/osp_admin/R139-HCIrc
Overcloud Deployed

Red Hat Ceph Storage Dashboard deployment and configuration (optional)

The following section illustrates steps to deploy and configure the Red Hat Ceph Storage Dashboard VM, which is optional and based on the JetPack 13.1 deployment.

Red Hat Ceph Storage Dashboard deployment

1. Log in to the SAH node as root.
2. From the directory where the JetPack repository has been cloned, switch to the master branch.


3. Change directory to JetPack/src/mgmt .

[root@sah ~]# cd JetPack/src/mgmt/

4. Edit the dashboard.cfg configuration file in the JetPack/src/mgmt/ directory with the following settings.

Table 22: dashboard.cfg file parameters

Parameter Name Description

rootpassword The root user password for the Ceph Dashboard VM.

smuser The user credential when registering with Subscription Manager.

smpassword The user password when registering with Subscription Manager. The password must be enclosed in single quotes if it contains certain special characters.

smpool The pool ID used when attaching the Ceph Dashboard VMto an entitlement.

hostname The FQDN of the Ceph Dashboard VM.

gateway The default gateway for the Ceph Dashboard VM.

nameserver A comma-separated list of nameserver IP addresses.

eth0 This line specifies the IP address and network mask for the public API network. The line begins with eth0, followed by at least one space, the IP address, another set of spaces, and then the network mask.

eth1 (if required) This line specifies the IP address and network mask for the storage network. The line begins with eth1, followed by at least one space, the IP address, another set of spaces, and then the network mask.

5. Deploy the Ceph dashboard VM

$ python deploy-dashboard-vm.py dashboard.cfg /store/data/iso/rhel-server-7.6-x86_64-dvd.iso

Starting install...
Retrieving file .treeinfo...   | 1.9 kB  00:00:00
Retrieving file vmlinuz...     | 6.3 MB  00:00:00
Retrieving file initrd.img...  |  52 MB  00:00:00
Allocating 'dashboard.img'     | 100 GB  00:00:01
Domain installation still in progress. You can reconnect to the console to complete the installation process.

6. After deployment, the VM will be in a shut-off state. Start the dashboard VM.

$ virsh start dashboard
Domain dashboard started

7. Verify the Ceph dashboard VM is up and running.

$ virsh list --all
 Id    Name                           State
----------------------------------------------------
 1     director                       running
 13    dashboard                      running

Ceph dashboard VM configuration

1. Log in to the Director VM as osp_admin.
2. From the directory where the JetPack repository has been cloned, switch to the master branch.
3. Copy the pilot directory from the JetPack folder to the osp_admin home directory.

$ cp -R JetPack/src/pilot .

4. Copy the network-environment.yaml file to the pilot/templates directory.

$ cp /home/osp_admin/templates/overcloud/network-environment.yaml /home/osp_admin/pilot/templates/network-environment.yaml

5. Copy the undercloud.conf file to the pilot directory.

$ cp /home/osp_admin/undercloud.conf /home/osp_admin/pilot/

6. Replace ceph-storage with computeHCI in the pilot/subscription.json file

$ sed -i 's/ceph-storage/computeHCI/' pilot/subscription.json

7. Replace storage with computeHCI in the pilot/config_dashboard.py file

$ sed -i 's/"storage" in node.fqdn/"computehci" in node.fqdn/' pilot/config_dashboard.py

8. Browse to the pilot directory located under the osp_admin home directory.

$ cd ~/pilot

9. Run the script provided as part of JetPack to configure the dashboard.

$ python config_dashboard.py <dashboard-public-ip> <password> <smuser> <smpasswd> <physical-pool-id> <ceph-pool-id>
INFO:config_dashboard:Configuring Ceph Storage Dashboard on 100.67.139.185 (dashboard.labs.dell)
INFO:config_dashboard:Identifying Ceph nodes (Monitor and OSD nodes)
INFO:config_dashboard:r139a-hci-controller-0 (192.168.170.12) is a Ceph node
INFO:config_dashboard:r139a-hci-controller-1 (192.168.170.13) is a Ceph node
INFO:config_dashboard:r139a-hci-controller-2 (192.168.170.14) is a Ceph node
INFO:config_dashboard:r139a-hci-computehci-1 (192.168.170.16) is a Ceph node
INFO:config_dashboard:r139a-hci-computehci-2 (192.168.170.17) is a Ceph node
INFO:config_dashboard:r139a-hci-computehci-0 (192.168.170.15) is a Ceph node
INFO:config_dashboard:Preparing the subscription json file.
INFO:config_dashboard:Register the overcloud nodes.
INFO:__main__:Registering control 192.168.120.104 with CDN
INFO:__main__:Disabling all repos on control 192.168.120.104
INFO:__main__:Enabling the following repos on control 192.168.120.104: [u'rhel-7-server-rpms', u'rhel-7-server-extras-rpms', u'rhel-ha-for-rhel-7-server-rpms', u'rhel-7-server-openstack-13-rpms', u'rhel-7-server-openstack-13-devtools-rpms', u'rhel-7-server-rhceph-3-mon-rpms', u'rhel-7-server-rhceph-3-tools-rpms']
INFO:__main__:Registering control 192.168.120.124 with CDN
INFO:__main__:Disabling all repos on control 192.168.120.124
INFO:__main__:Enabling the following repos on control 192.168.120.124: [u'rhel-7-server-rpms', u'rhel-7-server-extras-rpms', u'rhel-ha-for-rhel-7-server-rpms', u'rhel-7-server-openstack-13-rpms', u'rhel-7-server-openstack-13-devtools-rpms', u'rhel-7-server-rhceph-3-mon-rpms', u'rhel-7-server-rhceph-3-tools-rpms']
INFO:__main__:Registering control 192.168.120.129 with CDN
INFO:__main__:Disabling all repos on control 192.168.120.129
INFO:__main__:Enabling the following repos on control 192.168.120.129: [u'rhel-7-server-rpms', u'rhel-7-server-extras-rpms', u'rhel-ha-for-rhel-7-server-rpms', u'rhel-7-server-openstack-13-rpms', u'rhel-7-server-openstack-13-devtools-rpms', u'rhel-7-server-rhceph-3-mon-rpms', u'rhel-7-server-rhceph-3-tools-rpms']
INFO:__main__:Registering computeHCI 192.168.120.115 with CDN
INFO:__main__:Disabling all repos on computeHCI 192.168.120.115
INFO:__main__:Enabling the following repos on computeHCI 192.168.120.115: [u'rhel-7-server-rpms', u'rhel-7-server-extras-rpms', u'rhel-ha-for-rhel-7-server-rpms', u'rhel-7-server-openstack-13-rpms', u'rhel-7-server-openstack-13-devtools-rpms', u'rhel-7-server-rhceph-3-mon-rpms', u'rhel-7-server-rhceph-3-osd-rpms', u'rhel-7-server-rhceph-3-tools-rpms']
INFO:__main__:Registering computeHCI 192.168.120.108 with CDN
INFO:__main__:Disabling all repos on computeHCI 192.168.120.108
INFO:__main__:Enabling the following repos on computeHCI 192.168.120.108: [u'rhel-7-server-rpms', u'rhel-7-server-extras-rpms', u'rhel-ha-for-rhel-7-server-rpms', u'rhel-7-server-openstack-13-rpms', u'rhel-7-server-openstack-13-devtools-rpms', u'rhel-7-server-rhceph-3-mon-rpms', u'rhel-7-server-rhceph-3-osd-rpms', u'rhel-7-server-rhceph-3-tools-rpms']
INFO:__main__:Registering computeHCI 192.168.120.112 with CDN
INFO:__main__:Disabling all repos on computeHCI 192.168.120.112
INFO:__main__:Enabling the following repos on computeHCI 192.168.120.112: [u'rhel-7-server-rpms', u'rhel-7-server-extras-rpms', u'rhel-ha-for-rhel-7-server-rpms', u'rhel-7-server-openstack-13-rpms', u'rhel-7-server-openstack-13-devtools-rpms', u'rhel-7-server-rhceph-3-mon-rpms', u'rhel-7-server-rhceph-3-osd-rpms', u'rhel-7-server-rhceph-3-tools-rpms']
INFO:config_dashboard:Preparing hosts file on Ceph Storage Dashboard.
INFO:config_dashboard:Preparing hosts file on Ceph nodes.
INFO:config_dashboard:Preparing remote access on the Ceph Storage Dashboard.
INFO:config_dashboard:Preparing remote access on the Ceph Storage Dashboard.
INFO:config_dashboard:Preparing ansible host file on Ceph Storage Dashboard.
INFO:config_dashboard:Adding Monitors Stanza to Ansible hosts file
INFO:config_dashboard:Adding RadosGW Stanza to Ansible hosts file
INFO:config_dashboard:Adding OSD Stanza to Ansible hosts file
INFO:config_dashboard:Adding MGR Stanza to Ansible hosts file
INFO:config_dashboard:Adding Graphana Stanza to Ansible hosts file
INFO:config_dashboard:Preparing /etc/ceph/ceph.conf file on Ceph nodes.
INFO:config_dashboard:Preparing the Ceph Storage Cluster for data collection.
INFO:config_dashboard:Installing the Ceph Storage Dashboard.
INFO:config_dashboard:Ceph Storage Dashboard configuration is complete
INFO:config_dashboard:You may access the Ceph Storage Dashboard at:
INFO:config_dashboard:    http://100.67.139.185:3000,
INFO:config_dashboard:with user 'admin' and password 'admin'.
INFO:config_dashboard:Add new ports to iptables ceph nodes
INFO:config_dashboard:Unregister the overcloud nodes.
INFO:register_overcloud:Unregistering control 192.168.120.104 with CDN
INFO:register_overcloud:Unregistering control 192.168.120.124 with CDN
INFO:register_overcloud:Unregistering control 192.168.120.129 with CDN
INFO:register_overcloud:Unregistering computeHCI 192.168.120.115 with CDN
INFO:register_overcloud:Unregistering computeHCI 192.168.120.108 with CDN
INFO:register_overcloud:Unregistering computeHCI 192.168.120.112 with CDN
INFO:config_dashboard:Preparing the Ceph Storage Cluster prometheus service.
INFO:config_dashboard:Restarting prometheus service on node (dashboard.labs.dell)

10. Browse to the Ceph dashboard URL shown in the preceding log output and verify access.
11. You can change the default login credentials (admin/admin).


Chapter 6: Validation and testing

Topics:

• Manual validation
• Tempest test suite

This chapter illustrates the optional manual sanity-check procedure, including instructions for configuring and running the Tempest test suite.

Tempest is OpenStack's official test suite for validating all OpenStack services post deployment.

Tempest validates the Dell EMC Ready Architecture for the deployment of Red Hat OpenStack Platform on Hyper-Converged Infrastructure.


Manual validation

The following illustrates post-deployment validation of Overcloud OpenStack services through the creation and validation of networks, subnets, and instances.

This section includes instructions for creating the networks and testing a majority of your RHOSP environment using Glance (configured with Red Hat Ceph Storage), Cinder and Nova.

Note: You must complete these steps prior to creating instances and volumes and testing the functional operations of OpenStack.

Testing OpenStack services

1. Log into the Director VM as osp_admin using the user name and password specified when creating the node, and source the overcloudrc file (the file is named after the stack defined when deploying the Overcloud).

$ cd ~/
$ source <overcloud_name>rc

2. Set up a new project.

$ openstack project create <project name>

3. Create a new user for sanity testing.

$ openstack user create --project <project name> --password <password> --email <email id> <user name>

4. Create the tenant network by executing the following commands.

$ openstack network create <tenant_network_name>

5. Create the tenant subnet on the tenant network.

$ openstack subnet create <vlan_name> --network <tenant_network_name> \
  --subnet-range <vlan_network>

6. Create the Router.

$ openstack router create <tenant_router>

7. Before you add the tenant network interface, you will need the subnet's ID. Execute the following command to display it.

$ openstack network list

+--------------------------------------+----------------------------------------------------+--------------------------------------+
| ID                                   | Name                                               | Subnets                              |
+--------------------------------------+----------------------------------------------------+--------------------------------------+
| 4164e0ba-07fe-4b2e-b5fa-01181987ab9f | public                                             | fd43cb2b-b746-4443-9a81-ad99e36431df |
| 8e36a5dd-383e-4415-9be6-d91d9fedb023 | HA network tenant 8b6fe7f3af074ccb9285043bb2f3cf5b | 86fd1a5b-d02d-42a9-9808-b3bfdac6f422 |
| e44c6fe7-19d4-40a3-ae13-330ee7fb49cf | tenant_net1                                        | cfc4cbec-ea71-4384-9179-9dc7bc6d8c9e |
+--------------------------------------+----------------------------------------------------+--------------------------------------+

8. Add the tenant network interface between the router and the tenant network.

$ openstack router add subnet <tenant_router> <subnet_id>

9. Create the external network by executing the following commands.

$ openstack network create <external_network_name> --external \
  --provider-network-type vlan --provider-physical-network physext \
  --provider-segment <external_vlan_id>

10. Create the external subnet with floating IP addresses on the external network

$ openstack subnet create <external_subnet_name> --network <external_network_name> \
  --allocation-pool start=<start_ip>,end=<end_ip> \
  --no-dhcp --gateway <gateway_ip> --subnet-range <external_vlan_network>

11. Set the external network gateway for the router.

$ openstack router set <tenant_router_name> --external-gateway <external_network_name>

Test Glance image service

1. Create and upload the Glance image.

$ openstack image create <image_name> --disk-format <format> \
  --container-format <format> --public --file <file_path>
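For example, to upload a previously downloaded CirrOS test image (the image name and file path here are illustrative):

$ openstack image create cirros --disk-format qcow2 \
  --container-format bare --public --file ~/cirros-0.4.0-x86_64-disk.img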

2. List available images to verify that your image was uploaded successfully.

$ openstack image list

3. To view more detailed information about an image, use the identifier of the image from the output of the openstack image list command above.

$ openstack image show <id>

Testing Nova compute provisioning service

Launch an instance using the boot image that you uploaded:

1. Get the ID of the flavor you will use.

$ openstack flavor list

2. Get the image ID.

$ openstack image list

3. Get the tenant network ID.

$ openstack network list

4. Generate a key pair. The command below registers a new key pair; if you try using an existing key pair name in the command, it fails.

$ openstack keypair create --public-key ~/.ssh/id_rsa.pub <key_name>


Note: MY_KEY.pem is an output file created by the Nova keypair-add command, and will be used later.

5. Create an instance using the Nova boot command.

$ openstack server create --flavor <flavor_id> --image <image_id> --nic net-id=<tenant_net_id> --key-name <key_name> <instance_name>

Note: Replace the IDs, the instance name, and the key name with your own values.

6. List the instance you created:

$ openstack server list

Test Cinder block storage service

1. Create a new volume.

$ openstack volume create --image centos --bootable --read-write --size 10 $VOL_NAME-$i

2. Verify list of volumes created.

$ openstack volume list

3. Attach the newly created volume to the instance.

$ openstack server add volume $server_id $volume_id --device /dev/vdb

Test Swift object storage service

Verify operation of the Object Storage service.

• Show the service status.

$ swift stat

Expected output.

Account: AUTH_6f049e55ab9b49ca9ee342ed4c17a86b
Containers: 13
Objects: 2066
Bytes: 10602595
Containers in policy "policy-0": 13
Objects in policy "policy-0": 2066
Bytes in policy "policy-0": 10602595
Meta Temp-Url-Key: 60b16566fd14c10017ce78124af6e028
X-Account-Project-Domain-Id: default
X-Openstack-Request-Id: tx584d13dce54d4462b5e91-005c3f8a46
X-Timestamp: 1546424543.31946
X-Trans-Id: tx584d13dce54d4462b5e91-005c3f8a46
Content-Type: application/json; charset=utf-8
Accept-Ranges: bytes

Accessing the instance test

1. Find the active Controller by executing the following commands from the Director Node.

$ cd ~/
$ source stackrc
$ nova list    (make note of the Controller IPs)
$ ssh heat-admin@<controller_ip>


$ sudo -i
# pcs cluster status

The displayed output will be similar to the following.

+--------------------------------------+-----------------------+--------+------------+-------------+--------------------------+
| ID                                   | Name                  | Status | Task State | Power State | Networks                 |
+--------------------------------------+-----------------------+--------+------------+-------------+--------------------------+
| cfe21aea-91be-49bb-931f-5061e4be397d | r139-hci-computehci-0 | ACTIVE | -          | Running     | ctlplane=192.168.120.134 |
| 64b94937-7a29-4950-af9e-d9980502d90d | r139-hci-computehci-1 | ACTIVE | -          | Running     | ctlplane=192.168.120.135 |
| e14e34ae-fdce-4865-bd8c-a9e5a6dbf9af | r139-hci-computehci-2 | ACTIVE | -          | Running     | ctlplane=192.168.120.136 |
| 8d1ecfde-47f0-4112-baf8-8877416a8a82 | r139-hci-controller-0 | ACTIVE | -          | Running     | ctlplane=192.168.120.141 |
| fd7f6bf6-b6e8-4154-b68f-7f92c274a29b | r139-hci-controller-1 | ACTIVE | -          | Running     | ctlplane=192.168.120.127 |
| a419f86d-5e86-490a-a583-62e14d7c5508 | r139-hci-controller-2 | ACTIVE | -          | Running     | ctlplane=192.168.120.129 |
+--------------------------------------+-----------------------+--------+------------+-------------+--------------------------+

2. Initiate an SSH session to the active Controller, as heat-admin.
3. Find the instances by executing the following command:

$ sudo -i
$ ip netns

The displayed output will be similar to the following.

qdhcp-0a5a594a-f442-4e33-b025-0ba65969ab09 (id: 2)
qrouter-bb00b972-f67c-45ba-a573-ad5d7e8debc5 (id: 1)
qdhcp-2e43972b-0778-4cc3-be64-9dcc9789863b (id: 0)

4. Access an instance namespace by executing the following command:

$ ip netns exec <namespace> bash

5. Verify that the namespace is the desired tenant network, by executing the following command.

$ ip a

The displayed output will be similar to the following.

1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast qlen 1000
    link/ether fa:16:3e:5b:59:d2 brd ff:ff:ff:ff:ff:ff
    inet 192.168.202.36/24 brd 192.168.202.255 scope global eth0
    inet6 fe80::f816:3eff:fe5b:59d2/64 scope link
       valid_lft forever preferred_lft forever


6. Ping the IP address of the instance.

$ ip netns exec qdhcp-0a5a594a-f442-4e33-b025-0ba65969ab09 ping 192.168.202.36

PING 192.168.202.36 (192.168.202.36) 56(84) bytes of data.
64 bytes from 192.168.202.36: icmp_seq=1 ttl=64 time=1.06 ms
64 bytes from 192.168.202.36: icmp_seq=2 ttl=64 time=0.158 ms
^C
--- 192.168.202.36 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1001ms
rtt min/avg/max/mdev = 0.158/0.611/1.064/0.453 ms

7. SSH into the instance, as the centos user, using the keypair generated above.

$ sudo ip netns exec qdhcp-0a5a594a-f442-4e33-b025-0ba65969ab09 ssh -i ~/MY_KEY.pem centos@192.168.202.36

Tempest test suite

Tempest is OpenStack's official test suite, which includes a set of integration tests to run against an OpenStack cluster. Tempest runs against every service in every OpenStack project to catch failures that could occur from merged changes. Verified post-installation Tempest API tests include:

• Project creation operations - Scenarios include creation of a project, update, deletion of a project, creation of a project with an unauthorized name, an empty name, or a duplicate name, listing projects, and deletion of a non-existent project.
• User creation operations - List current users, get users, and list users by name.
• Network - Create, update and delete networks, create a port on a non-existent network, show a non-existent network, bulk create, and delete networks.
• Image upload and launch - Activate, deactivate, delete, register and upload an image.
• Create floating IPs in a private network - External network, fixed IP address, and a list of floating IPs.
• Volume creation and attachment to a VM - Create, get, update and delete.

Configure Tempest

1. Log in to the OSP Director VM as the osp_admin user.
2. Clone the Tempest repository into the home directory /home/osp_admin/ from GitHub.

$ git clone https://git.openstack.org/openstack/tempest

3. Install tempest.

$ sudo pip install tempest/

4. Verify version.

$ tempest --version

5. Source the admin credentials in Overcloud.

$ source ~/overcloudrc

6. Create and initialize the tempest workspace.

$ tempest init cloud-01


7. List the existing workspace.

$ tempest workspace list

8. Generate the /etc/tempest.conf file.

$ discover-tempest-config --deployer-input ~/tempest-deployer-input.conf \
  --debug --create identity.uri $OS_AUTH_URL \
  identity.admin_password $OS_PASSWORD \
  --network-id 3ba5a660-5172-4ea0-bb1c-72c0c14f87b6

9. Verify and modify the tempest.conf file according to your network, image, and URL details.
10. Verify the configuration file.

$ tempest verify-config -o /etc/tempest.conf
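For reference, the key options that discover-tempest-config populates look roughly like the following minimal excerpt (an illustrative sketch; the actual values come from the sourced overcloudrc credentials and the --network-id passed above):

[auth]
admin_username = admin
admin_password = <admin password>
admin_project_name = admin

[identity]
uri = <overcloud Keystone endpoint, i.e. $OS_AUTH_URL>

[network]
public_network_id = 3ba5a660-5172-4ea0-bb1c-72c0c14f87b6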

Run Tempest tests

Run Tempest for services from the existing Tempest workspace

$ cd cloud-01

$ stestr run --black-regex '\[.*\bslow\b.*\]' '^tempest\.(api)'

Summary

The main objective of the Tempest API tests is to ensure that our Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform is compatible with the OpenStack APIs. Tempest API tests ensure that deployment of the HCI cloud does not interrupt any OpenStack API functionality.


Chapter 7: Performance measuring

Topics:

• Overview
• Performance tools
• Test cases and test reports
• Conclusion

This chapter details the testing methodology used throughout the experimentation. It includes the tools and workloads used and the rationale for their choice. It also provides the benchmark results along with the bottleneck analysis.


Overview

A myriad of cloud computing and networking technology developments delivers a wide variety of choices for an equally diverse group of information system organizations.

Cloud services performance significantly impacts future functionality and execution of information infrastructure.

A thorough evaluation of cloud service performance is crucial and beneficial to both service providers and consumers.

The following chapter outlines the performance test methodology and the graphical representation of the results in three different areas:

• Network performance: evaluation of throughput, latency and jitter of the network traffic between virtual machines.

• Compute performance: evaluation of memory throughput and memory latency.

• Storage performance: evaluation of IOPS and latency.

Performance tools

The Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform performance is measured using the following tools:

1. Spirent: Spirent Temeva (Test Measure Validate) is a Software-as-a-Service (SaaS) platform providing a dashboard to configure, measure, analyze and share test metrics. This tool measures network and compute performance.

2. FIO: A popular open-source I/O workload generator and benchmarking tool. This tool measures storage performance.

Test cases and test reports

The following test scenarios are performed after setup configuration. Refer to https://www.spirent.com/products/temeva for how to set up and configure Temeva.

The following sections detail the test case experimentation and corresponding analytical metrics performed.

Network performance

Test case 1

Objective: Measure network throughput (Mbits/sec) and L2/L3 network latency (ms) between instances on same/different compute hosts and network.

Description: The test is performed to calculate network throughput between instances using four sets of combinations:

1. Two VMs residing on the same compute host and same network
2. Two VMs residing on the same compute host and different networks
3. Two VMs residing on different compute hosts and the same network
4. Two VMs residing on different compute hosts and different networks

Unidirectional network traffic is generated between two VMs (referred to as East/West traffic) for a range of frame sizes - 64B, 256B, 512B, 1024B and 1518B - each iterating over a duration of 60 seconds. For same-network or different-network traffic, the learning mode is configured as L2 or L3 respectively. The number of flows per port is set to 1000.


Figure 12: Network throughput vs frame size on page 72 shows the test result, with the x-axis showing standard Ethernet frame sizes and the y-axis showing the corresponding maximum network throughput. Figure 13: Network latency vs frame size depicts the network latency behavior, with the x-axis showing standard Ethernet frame sizes and the y-axis showing the corresponding latency.

Figure 12: Network throughput vs frame size

Figure 13: Network latency vs frame size


Analysis and inference: Network throughput increases as the frame size increases and reaches its maximum at the standard Ethernet frame size, i.e., 1518B. Latency is high for small frame sizes and minimal at the standard Ethernet frame size, i.e., 1518B.

When two VMs are present on the same compute host but on different networks, packets flow through Linux Bridge, OVS, and virtual routers. Since routing is involved, it requires network layer (layer 3) processing, with higher latency.

Packets flow through Linux Bridge, OVS, and the physical infrastructure when two VMs are on different compute hosts and the same network. Activity is limited to layer 2, with no substantial delay as with layer 3.

A frame size greater than or equal to 1024B yields maximum throughput and minimum, consistent L2/L3 network latency.

Throughput scales with frame size up to roughly 75% of the maximum throughput between VMs on the same compute host. The placement of VMs on hyper-converged nodes is the bottleneck to high network throughput performance.

Enabling NFV features like OVS-DPDK in Hyper-Converged Infrastructure enhances the packet processing and forwarding rate, optimizing the HCI solution.

Test case 2

Objective: Measure L2/L3 network jitter (ms) and L2/L3 network latency (ms) between instances on the same/different compute hosts and networks.

Description: The test is performed to calculate network jitter between instances using two sets of combinations:

1. Both VMs spawned in the same network (L2).
2. The two VMs spawned in different networks (L3).

Unidirectional network traffic is generated between two VMs (referred to as east/west traffic) for a range of frame sizes - 64B, 256B, 512B, 1024B and 1518B - running over 60 seconds.

Learning mode is configured as L2 or L3 for either same or different network traffic for 1000 flows per port.

Figure 14 shows the test result, with the x-axis showing standard Ethernet frame sizes and the y-axis showing the corresponding network jitter.

Figure 14: Network jitter vs frame size


Analysis and inference: When the VMs are present on different networks, packets go through Linux Bridge, OVS, and virtual routers. Packets moving through different paths increase the packet delay variation (jitter).

When the VMs are present on the same network, a packet goes through Linux Bridge and OVS, following a single path and reducing delay variation.

Compute performance

Test Case 3

Objective: Measure compute performance in terms of memory IOPS (millions) and latency (us).

Description: The test is performed on an increasing count of agent VMs, each with four vCPUs, 4GB RAM and a 20GB disk, hosted on a single compute node. Read and write memory IOPS with a 4KB block size and different access patterns (random and sequential) are stressed to the maximum value on a group of VMs ranging in count from 1 to 30, which yields the maximum possible IOPS supported by the infrastructure at that point in time.

The graph shows the test result, with the x-axis showing the number of instances (1 to 30) and the y-axis showing the average memory read IOPS in millions.

Figure 15: 4KB memory read IOPs

The graph shows the test result, with the x-axis showing the number of instances (1 to 30) and the y-axis showing the average memory write IOPS in millions.


Figure 16: 4KB memory write IOPs

The graph shows the test result, with the x-axis showing the number of instances (1 to 30) and the y-axis showing the corresponding memory latency (us) for random/sequential read and write operations.

Figure 17: 4KB memory latency

Analysis and inference: Memory read/write IOPS for both access patterns - random and sequential - depicted in the figures above show the maximum possible IOPS per instance. Increasing the number of OpenStack instances decreases the average IOPS. The bends in the graph denote the bottleneck for a particular number of instances under test for memory IOPS. IOPS can be improved significantly by enabling huge pages and NUMA on hyper-converged compute nodes.


A write operation consumes more IOPS than a read operation regardless of the memory access pattern. The latency of both random write and sequential write for a 4KB block size is consequently higher than random read and sequential read.

Storage performance

Test case 4:

Objective: Measure storage performance in terms of IOPS and latency (ms).

Description: The test measures storage performance in terms of IOPS and latency on a configured Ceph cluster that uses 16 OSDs (2 OSDs per NVMe SSD disk) per node, i.e., 48 OSDs in total across three nodes.

The test performs a series of FIO benchmarks on a group of VMs ranging in count from 40 to 240. The VMs under test are configured with one vCPU, 1024MB RAM and a 10GB disk on a single compute host.

FIO benchmark details follow:

• I/O Engine: libaio
• I/O mode: direct I/O
• Block size: 4KB
• I/O depth: 64
• Number of jobs per VM: 8
• File size: 512MB
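For illustration, an equivalent standalone fio invocation with these settings would look roughly like the following (the job name, test file path, and the random-write pattern are placeholders; --rw is adjusted for the other read/write and random/sequential variants):

$ fio --name=hci-4k-randwrite --ioengine=libaio --direct=1 \
  --rw=randwrite --bs=4k --iodepth=64 --numjobs=8 \
  --size=512m --filename=/mnt/test/fio.dat --group_reporting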

The graph shows the test result, with the x-axis showing the number of instances (40 to 240) and the y-axis showing random read/write IOPS.

Figure 18: 4K storage random IOPs

The graph shows the test result, with the x-axis showing the number of instances (40 to 240) and the y-axis showing sequential read/write IOPS.


Figure 19: 4K storage sequential IOPs

The graph shows the test result, with the x-axis showing the number of instances (40 to 240) and the y-axis showing random read/write latency.

Figure 20: 4K storage random latency

The graph shows the test result, with the x-axis showing the number of instances (40 to 240) and the y-axis showing sequential read/write latency.


Figure 21: 4K storage sequential latency

Analysis and inference: Write operations are more expensive than read operations for smaller block sizes. Ceph acknowledges a client write only after the data has been entirely written to the required number of OSDs, which in our case is 2x replication, i.e., primary and secondary OSDs.

For read operations, the client communicates with the acting/primary OSD. The result with 240 VMs on a single node illustrates a gradual but minimal decrease in average IOPS as the number of compute resources increases. Latency is higher for a random pattern than a sequential pattern for a given number of instances.

There is a consistent graphical curve for random latency. Sequential latency shows a linear incremental curve with an increase in virtual compute resources.

Faster NVMe drives could improve performance but may shift the load to CPUs.

Conclusion

The Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform is designed with a hyper-converged approach by colocating Ceph and compute services.

Dell EMC PowerEdge R740xd and Dell EMC PowerEdge R640 servers with Intel 25GbE networking provide a concrete performance baseline and state-of-the-art hardware.

Software-defined storage Red Hat Ceph Storage 3.2 with the BlueStore backend enabled is well suited for use cases where performance is a critical element.

Finally, Intel NVMe drives offer robustness and an improved model of performance-driven SSD drives.

The performance testing methodology for the Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform was supplied by Dell EMC's trusted partner Spirent.

The biggest challenge of the Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform is optimal tuning of memory, CPU cores, and the Ceph OSD-to-disk ratio to address resource distribution and contention. The section Performance tuning on page 31 illustrates performance tuning parameters defining a flexible and optimized architecture. Each use case requirement is modifiable for the customer's infrastructure. The simulated testing methodology for measuring various performance metrics applies to a myriad of devices.

Performance improves by enabling NFV-oriented features like huge pages for high memory I/O applications and CPU pinning for NUMA-aware nodes, with additional functionality like OVS-DPDK for intelligent packet forwarding and processing.


Appendix A: Bill of Materials

Topics:

• Bill of Materials - SAH node
• Bill of Materials - 3 Controller nodes
• Bill of Materials - 3 Converged nodes
• Bill of Materials - 1 Dell EMC Networking S3048-ON switch
• Bill of Materials - 2 Dell EMC Networking S5248-ON switches

This appendix provides the bill of materials information necessary to purchase the proper hardware to deploy the Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform.


Bill of Materials - SAH node

Table 23: Bill Of Materials - SAH node

Function Description

Platform Dell EMC PowerEdge R640

CPU 2 x Intel® Xeon® Gold 6126 2.6G,12C/24T,10.4GT/s, 19.25MCache,Turbo,HT (125W) DDR4-2666

RAM 192GB RAM (12 x 16GB RDIMM, 2666MT/s, Dual Rank)

LOM 2 x 1Gb, 2 x Intel X710 10GbE SFP+

Add-in network 2 x Intel XXV710 DP 25GbE DA/SFP+

Disk 10 x 600GB, 15K SAS,12Gb,512n,2.5,HP

Storage controller PERC H740P

RAID layout RAID10

Bill of Materials - 3 Controller nodes

Table 24: Bill Of Materials - 3 Controller nodes

Function Description

Platform Dell EMC PowerEdge R640

CPU 2 x Intel® Xeon® Gold 6126 2.6G,12C/24T,10.4GT/s, 19.25MCache,Turbo,HT (125W) DDR4-2666

RAM 192GB RAM (12 x 16GB RDIMM, 2666MT/s, Dual Rank)

LOM 2 x 1Gb, 2 x Intel X710 10GbE SFP+

Add-in network 2 x Intel XXV710 DP 25GbE DA/SFP+

Disk 10 x 600GB, 15K SAS,12Gb,512n,2.5,HP

Storage controller PERC H740P

RAID layout RAID10

Bill of Materials - 3 Converged nodes

Table 25: Bill Of Materials - 3 Converged nodes

Function Description

Platform Dell EMC PowerEdge R740xd

CPU 2 x Intel® Xeon® Platinum 8160 2.1G,24C/48T,10.4GT/s, 33MCache,Turbo,HT (150W) DDR4-2666

RAM 384GB RAM (12 x 32GB RDIMM, 2666MT/s, Dual Rank)


LOM                 2 x 1Gb, 2 x Intel X710 10GbE SFP+
Add-in network      4 x Intel XXV710 DP 25GbE DA/SFP+
Disk                8 x 3.2TB NVMe Mixed Use Express Flash, P4610
                    2 x 240GB SATA SSD, 2.5in, HP, S4600
Storage controller  PERC H740P
RAID layout         RAID 1 (Operating System disk only)

Bill of Materials - 1 Dell EMC Networking S3048-ON switch

Table 26: Bill of Materials - 1 Dell EMC Networking S3048-ON switch

Product                     Description
S3048-ON                    48 line-rate 1000BASE-T ports, 4 line-rate 10GbE SFP+ ports
Redundant power supply      AC or DC power supply
Fans                        Fan module, I/O panel to PSU airflow, or fan module, PSU to I/O panel airflow
Validated operating system  Cumulus Linux OS 3.7.1

Bill of Materials - 2 Dell EMC Networking S5248-ON switches

Table 27: Bill of Materials - 2 Dell EMC Networking S5248-ON switches

Product                     Description
S5248-ON                    100GbE, 40GbE, and 25GbE ports
Redundant power supply      AC or DC power supply
Fans                        Fan module, I/O panel to PSU airflow, or fan module, PSU to I/O panel airflow
Validated operating system  Cumulus Linux OS 3.7.1


Appendix B: Environment Files

Topics:

• Heat templates and environment yaml files
• Nodes registration json file
• Undercloud configuration file

This appendix provides details of the files that must be modified to deploy the overcloud:

• yaml files (.yaml)
• instackenv file (.json)
• undercloud (.conf)


Heat templates and environment yaml files

There are two types of yaml files: heat templates and environment files. Heat templates are responsible for OpenStack orchestration. An environment file affects the runtime behavior of a template; it provides a way to override resource implementations and a mechanism to supply the parameters that the service needs.

Note: The following are sample files. The actual configurations are site specific; the reserved IP address blocks and the values of the configuration parameters are unique to each site.
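As a minimal illustration of how an environment file interacts with the heat templates, the fragment below overrides one resource implementation and supplies one parameter value. The resource name, path, and parameter are hypothetical and are shown only to make the mechanism concrete; the real files used by this architecture follow.

# Illustrative environment file (resource name, path, and parameter are examples only)
resource_registry:
  # Substitute an alternate implementation for a resource type used by the plan
  OS::TripleO::ExampleService: ./templates/example-service.yaml

parameter_defaults:
  # Override a parameter declared by the heat templates
  ExampleServiceCount: 2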

network-environment.yaml

resource_registry:
  OS::TripleO::Network::Ports::StorageMgmtVipPort: ./network/ports/ctlplane_vip.yaml
  OS::TripleO::Controller::Ports::StorageMgmtPort: ./network/ports/noop.yaml

parameter_defaults:
  # CHANGEME: Change the following to the desired MTU for Neutron Networks
  NeutronGlobalPhysnetMtu: 1500

  # CHANGEME: Change the following to the CIDR for the Management network
  ManagementNetCidr: 192.168.110.0/24

  # CHANGEME: Change the following to the CIDR for the Private API network
  InternalApiNetCidr: 192.168.140.0/24

  # CHANGEME: Change the following to the CIDR for the Tenant network
  TenantNetCidr: 192.168.130.0/24

  # CHANGEME: Change the following to the CIDR for the Storage network
  StorageNetCidr: 192.168.170.0/24

  # CHANGEME: Change the following to the CIDR for the Storage Clustering network
  StorageMgmtNetCidr: 192.168.180.0/24

  # CHANGEME: Change the following to the CIDR for the External network
  ExternalNetCidr: 100.67.139.0/26

  # CHANGEME: Change the following to the DHCP ranges for the iDRACs to use on
  # the Management network
  ManagementAllocationPools: [{'start': '192.168.110.30', 'end': '192.168.110.45'}]

  # The allocation pools below are used to dynamically assign DHCP IP addresses
  # to the various networks on the overcloud nodes as they are provisioned. If
  # using static IPs instead, see static-ip-environment.yaml.

  # CHANGEME: Change the following to the DHCP range to use on the
  # Private API network
  InternalApiAllocationPools: [{'start': '192.168.140.50', 'end': '192.168.140.120'}]

  # CHANGEME: Change the following to the DHCP range to use on the
  # Tenant network


  TenantAllocationPools: [{'start': '192.168.130.50', 'end': '192.168.130.120'}]

  # CHANGEME: Change the following to the DHCP range to use on the
  # Storage network
  StorageAllocationPools: [{'start': '192.168.170.50', 'end': '192.168.170.120'}]

  # CHANGEME: Change the following to the DHCP range to use on the
  # Storage Clustering network
  StorageMgmtAllocationPools: [{'start': '192.168.180.50', 'end': '192.168.180.120'}]

  # CHANGEME: Change the following to the DHCP range to use on the
  # External network
  ExternalAllocationPools: [{'start': '100.67.139.20', 'end': '100.67.139.50'}]

  # CHANGEME: Set to the router gateway on the external network
  ExternalInterfaceDefaultRoute: 100.67.139.1

  # CHANGEME: Set to the router gateway on the management network
  ManagementNetworkGateway: 192.168.110.1

  # CHANGEME: Set to the IP of the gateway on the provisioning network which
  # will allow access to the management network
  ProvisioningNetworkGateway: 192.168.120.1

  # CHANGEME: Set to the router gateway on the provisioning network (or Undercloud IP)
  ControlPlaneDefaultRoute: 192.168.120.13

  # CHANGEME: Set to the CIDR of the control plane network
  ControlPlaneSubnetCidr: "24"

  # CHANGEME: Set to the IP address of the Undercloud
  EC2MetadataIp: 192.168.120.13

  # CHANGEME: Set to the DNS servers to use for the overcloud nodes (maximum 2)
  DnsServers: ["8.8.8.8"]

  # CHANGEME: Change the following to the VLAN ID to use on the
  # Private API network
  InternalApiNetworkVlanID: 140

  # CHANGEME: Change the following to the VLAN ID to use on the
  # Storage network
  StorageNetworkVlanID: 170

  # CHANGEME: Change the following to the VLAN ID to use on the
  # Storage Clustering network
  StorageMgmtNetworkVlanID: 180

  # CHANGEME: Change the following to the VLAN ID to use on the
  # Tenant network
  TenantNetworkVlanID: 130

  # CHANGEME: Change the following to the VLAN ID to use on the
  # External network


  ExternalNetworkVlanID: 139

  # CHANGEME: Change the following to the MTU value to use on the
  # External network
  ExternalNetworkMTU: 1500
  # CHANGEME: Change the following to the MTU value to use on the
  # internal network
  InternalApiMTU: 1500
  # CHANGEME: Change the following to the MTU value to use on the
  # Storage network
  StorageNetworkMTU: 1500
  # CHANGEME: Change the following to the MTU value to use on the
  # StorageMgmtNetwork network
  StorageMgmtNetworkMTU: 1500
  # CHANGEME: Change the following to the MTU value to use on the
  # TenantNetwork network
  TenantNetworkMTU: 1500
  # CHANGEME: Change the following to the MTU value to use on the
  # Provisioning network
  ProvisioningNetworkMTU: 1500
  # CHANGEME: Change the following to the MTU value to use on the
  # Management network
  ManagementNetworkMTU: 1500
  # CHANGEME: Change the following to the MTU value to use on the
  # Default Bonds MTU
  DefaultBondMTU: 1500

  # CHANGEME: Change the following to mtu size used for the floating network
  ExtraConfig:
    neutron::plugins::ml2::physical_network_mtus: ['physext:1500']
    # neutron::plugins::ml2::physical_network_mtus: physext:1500

  # CHANGEME: Set to empty string for External VLAN, br-ex if on native VLAN of br-ex
  NeutronExternalNetworkBridge: "''"

  ServiceNetMap:
    NeutronTenantNetwork: tenant
    CeilometerApiNetwork: internal_api
    AodhApiNetwork: internal_api
    GnocchiApiNetwork: internal_api
    MongoDbNetwork: internal_api
    CinderApiNetwork: internal_api
    CinderIscsiNetwork: storage
    GlanceApiNetwork: storage
    GlanceRegistryNetwork: internal_api
    KeystoneAdminApiNetwork: ctlplane  # allows undercloud to config endpoints
    KeystonePublicApiNetwork: internal_api
    NeutronApiNetwork: internal_api
    HeatApiNetwork: internal_api
    NovaApiNetwork: internal_api
    NovaMetadataNetwork: internal_api
    NovaVncProxyNetwork: internal_api
    SwiftMgmtNetwork: storage  # Changed from storage_mgmt
    SwiftProxyNetwork: storage
    SaharaApiNetwork: internal_api
    HorizonNetwork: internal_api
    MemcachedNetwork: internal_api
    RabbitMqNetwork: internal_api
    RedisNetwork: internal_api
    MysqlNetwork: internal_api
    CephClusterNetwork: storage_mgmt
    CephPublicNetwork: storage
    CephRgwNetwork: storage
    ControllerHostnameResolveNetwork: internal_api


    ComputeHostnameResolveNetwork: internal_api
    BlockStorageHostnameResolveNetwork: internal_api
    ObjectStorageHostnameResolveNetwork: internal_api
    CephStorageHostnameResolveNetwork: storage
    NovaColdMigrationNetwork: internal_api
    NovaLibvirtNetwork: internal_api

static-vip-environment.yaml

resource_registry:
  OS::TripleO::Network::Ports::NetVipMap: ./network/ports/net_vip_map_external.yaml
  OS::TripleO::Network::Ports::ExternalVipPort: ./network/ports/noop.yaml
  OS::TripleO::Network::Ports::InternalApiVipPort: ./network/ports/noop.yaml
  OS::TripleO::Network::Ports::StorageVipPort: ./network/ports/noop.yaml
  OS::TripleO::Network::Ports::StorageMgmtVipPort: ./network/ports/noop.yaml
  OS::TripleO::Network::Ports::RedisVipPort: ./network/ports/from_service.yaml

parameter_defaults:
  ServiceVips:
    # CHANGEME: Change the following to the VIP for the redis service on the
    # Private API/internal_api network.
    # Note that this IP must lie outside the InternalApiAllocationPools range
    # specified in network-environment.yaml.
    redis: 192.168.140.49

  # CHANGEME: Change the following to the VIP on the Provisioning network
  # Note that this IP must lie outside the dhcp_start/dhcp_end range
  # specified in undercloud.conf.
  ControlPlaneIP: 192.168.120.251

  # CHANGEME: Change the following to the VIP on the Private API network.
  # Note that this IP must lie outside the InternalApiAllocationPools range
  # specified in network-environment.yaml.
  InternalApiNetworkVip: 192.168.140.121

  # CHANGEME: Change the following to the VIP on the Public API network.
  # Note that this IP must lie outside the ExternalAllocationPools range
  # specified in network-environment.yaml.
  ExternalNetworkVip: 100.67.139.62

  # CHANGEME: Change the following to the VIP on the Storage network
  # Note that this IP must lie outside the StorageAllocationPools range
  # specified in network-environment.yaml.
  StorageNetworkVip: 192.168.170.121

  # CHANGEME: Change the following to the VIP on the Provisioning network.
  # The Storage Clustering network is not connected to the controller nodes,
  # so the VIP for this network must be mapped to the provisioning network.
  # Note that this IP must lie outside the dhcp_start/dhcp_end range
  # specified in undercloud.conf.
  StorageMgmtNetworkVip: 192.168.120.252


static-ip-environment.yaml

resource_registry:
  OS::TripleO::Controller::Ports::ExternalPort: ./network/ports/external_from_pool.yaml
  OS::TripleO::Controller::Ports::InternalApiPort: ./network/ports/internal_api_from_pool.yaml
  OS::TripleO::Controller::Ports::StoragePort: ./network/ports/storage_from_pool.yaml
  OS::TripleO::Controller::Ports::StorageMgmtPort: ./network/ports/noop.yaml
  OS::TripleO::Controller::Ports::TenantPort: ./network/ports/tenant_from_pool.yaml

  OS::TripleO::ComputeHCI::Ports::ExternalPort: ./network/ports/noop.yaml
  OS::TripleO::ComputeHCI::Ports::InternalApiPort: ./network/ports/internal_api_from_pool.yaml
  OS::TripleO::ComputeHCI::Ports::StoragePort: ./network/ports/storage_from_pool.yaml
  OS::TripleO::ComputeHCI::Ports::StorageMgmtPort: ./network/ports/storage_mgmt_from_pool.yaml
  OS::TripleO::ComputeHCI::Ports::TenantPort: ./network/ports/tenant_from_pool.yaml

parameter_defaults:
  # Specify the IPs for the overcloud nodes on the indicated networks below.
  # The IPs are listed in the order: node0, node1, node2 for each network.
  #
  # Note that the IPs chosen must lie outside the allocation pools defined in
  # network-environment.yaml, and must not collide with the IPs assigned to
  # other nodes or networking equipment on the network, such as the SAH,
  # OSP Director node, Ceph Storage Admin node, etc.
  ControllerIPs:
    tenant:
      - 192.168.130.12
      - 192.168.130.13
      - 192.168.130.14
    internal_api:
      - 192.168.140.12
      - 192.168.140.23
      - 192.168.140.14
    storage:
      - 192.168.170.12
      - 192.168.170.13
      - 192.168.170.14
    external:
      - 100.67.139.12
      - 100.67.139.13
      - 100.67.139.14

  ComputeHCIIPs:
    tenant:
      - 192.168.130.15
      - 192.168.130.16
      - 192.168.130.17
    internal_api:
      - 192.168.140.15
      - 192.168.140.16
      - 192.168.140.17
    storage:
      - 192.168.170.15
      - 192.168.170.16
      - 192.168.170.17


    storage_mgmt:
      - 192.168.180.15
      - 192.168.180.16
      - 192.168.180.17

nic_environment.yaml

resource_registry:
  OS::TripleO::Controller::Net::SoftwareConfig: ./controller.yaml
  OS::TripleO::ComputeHCI::Net::SoftwareConfig: ./computeHCI.yaml

############# To be modified by EndUser ######################
parameter_defaults:
  # CHANGEME: Change the interface names in the following lines for the
  # controller nodes provisioning interface and to include in the controller
  # nodes bonds
  ControllerProvisioningInterface: em3
  ControllerBond0Interface1: p1p1
  ControllerBond0Interface2: p2p1
  ControllerBond1Interface1: p1p2
  ControllerBond1Interface2: p2p2
  # The bonding mode to use for controller nodes
  ControllerBondInterfaceOptions: mode=802.3ad miimon=100 xmit_hash_policy=layer3+4 lacp_rate=1

  # CHANGEME: Change the interface names in the following lines for the
  # compute nodes provisioning interface and to include in the compute
  # nodes bonds
  ComputeHCIProvisioningInterface: em3
  ComputeHCIBond0Interface1: p3p1
  ComputeHCIBond0Interface2: p2p1
  ComputeHCIBond1Interface1: p3p2
  ComputeHCIBond1Interface2: p2p2
  ComputeHCIBond2Interface1: p6p1
  ComputeHCIBond2Interface2: p7p1
  # The bonding mode to use for compute nodes
  ComputeHCIBondInterfaceOptions: mode=802.3ad miimon=100 xmit_hash_policy=layer3+4 lacp_rate=1
############## Modification Ends Here #####################

dell-environment.yaml

resource_registry:
  OS::TripleO::NodeUserData: /home/osp_admin/templates/wipe-disks.yaml

parameter_defaults:
  # Defines the interface to bridge onto br-ex for network nodes
  NeutronPublicInterface: bond1

  # The tenant network type for Neutron
  NeutronNetworkType: vlan
  ## >neutron-disable-tunneling no mapping.

  # The neutron ML2 and OpenvSwitch VLAN mapping ranges to support
  NeutronNetworkVLANRanges: physint:201:250,physext


  # The logical to physical bridge mappings to use.
  # Defaults to mapping the external bridge on hosts (br-ex) to a physical name (datacenter).
  # You would use this for the default floating network
  NeutronBridgeMappings: physint:br-tenant,physext:br-ex

  # Flavor used as the HCI compute
  OvercloudComputeHCIFlavor: baremetal

  # Flavor to use for the Controller nodes
  OvercloudControllerFlavor: baremetal
  # Flavor to use for the Ceph storage nodes
  OvercloudCephStorageFlavor: baremetal
  # Flavor to use for the Swift storage nodes
  OvercloudSwiftStorageFlavor: baremetal
  # Flavor to use for the Cinder nodes
  OvercloudBlockStorageFlavor: baremetal

  # Number of HCI Compute nodes
  ComputeHCICount: 3

  # Number of Controller nodes
  ControllerCount: 3

  # To customize the domain name of the overcloud nodes, change "localdomain"
  # in the following line to the desired domain name.
  CloudDomain: oss.labs

  # Set to true to enable Nova usage of Ceph for ephemeral storage.
  # If set to false, Nova uses the storage local to the compute.
  NovaEnableRbdBackend: true
  # devices:
  #   - /dev/sda2

  # Configure Ceph Placement Group (PG) values for the indicated pools
  CephPools: [{"name": "volumes", "pg_num": 4096, "pgp_num": 4096}, {"name": "vms", "pg_num": 1024, "pgp_num": 1024}, {"name": "images", "pg_num": 512, "pgp_num": 512}]
  CephAnsiblePlaybookVerbosity: 1

  CephPoolDefaultSize: 2

  CephConfigOverrides:
    journal_size: 10240
    journal_collocation: true
    # Below parameter added with respect to HCI environment
    mon_max_pg_per_osd: 300

  CephAnsibleDisksConfig:
    osd_scenario: lvm
    devices:
      - /dev/nvme0n1
      - /dev/nvme1n1
      - /dev/nvme2n1
      - /dev/nvme3n1
      - /dev/nvme4n1
      - /dev/nvme5n1
      - /dev/nvme6n1
      - /dev/nvme7n1

  CephAnsibleExtraConfig:
    osd_objectstore: bluestore


    osds_per_device: 2
    osd_recovery_op_priority: 3
    osd_recovery_max_active: 3
    osd_max_backfills: 1
    ceph_osd_docker_memory_limit: 10g
    ceph_osd_docker_cpu_limit: 4

  ComputeHCIExtraConfig:
    cpu_allocation_ratio: 6.7

  ComputeHCIParameters:
    NovaReservedHostMemory: 115000

  NovaComputeExtraConfig:
    nova::migration::libvirt::live_migration_completion_timeout: 800
    nova::migration::libvirt::live_migration_progress_timeout: 150

  ControllerExtraConfig:
    nova::api::osapi_max_limit: 10000
    nova::rpc_response_timeout: 180
    nova::keystone::authtoken::revocation_cache_time: 300
    neutron::rpc_response_timeout: 180
    neutron::keystone::authtoken::revocation_cache_time: 300
    cinder::keystone::authtoken::revocation_cache_time: 300
    glance::api::authtoken::revocation_cache_time: 300
    tripleo::profile::pacemaker::database::mysql::innodb_flush_log_at_trx_commit: 0
    tripleo::haproxy::haproxy_default_maxconn: 10000
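After the environment files above are populated, they are passed to the overcloud deployment with -e options. The command below is representative only; the exact template path, file list, order, and additional options depend on the deployment (the /home/osp_admin/templates path follows the wipe-disks.yaml reference above).

# Representative deployment command (file list and order are illustrative)
openstack overcloud deploy --templates \
  -e /home/osp_admin/templates/network-environment.yaml \
  -e /home/osp_admin/templates/static-vip-environment.yaml \
  -e /home/osp_admin/templates/static-ip-environment.yaml \
  -e /home/osp_admin/templates/nic_environment.yaml \
  -e /home/osp_admin/templates/dell-environment.yaml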

Nodes registration json file

instackenv.json

{ "nodes": [ { "name": "control-0", "capabilities": "node:control-0,boot_option:local,boot_mode:uefi", "root_device": {"size":"2791"}, "pm_addr": "192.168.110.12", "pm_password": "xxxxxxxx", "pm_type": "pxe_drac", "pm_user": "root" }, { "name": "control-1", "capabilities": "node:control-1,boot_option:local,boot_mode:uefi", "root_device": {"size":"2791"}, "pm_addr": "192.168.110.13", "pm_password": "xxxxxxxx", "pm_type": "pxe_drac", "pm_user": "root" }, { "name": "control-2", "capabilities": "node:control-2,boot_option:local,boot_mode:uefi", "root_device": {"size":"2791"}, "pm_addr": "192.168.110.14",


"pm_password": "xxxxxxxx", "pm_type": "pxe_drac", "pm_user": "root" },

{ "name": "computeHCI-0", "capabilities": "node:computeHCI-0,boot_option:local,boot_mode:uefi", "root_device": {"size":"223"}, "pm_addr": "192.168.110.15", "pm_password": "xxxxxxxx",

"pm_type": "pxe_drac", "pm_user": "root" },

{ "name": "computeHCI-1", "capabilities": "node:computeHCI-1,boot_option:local,boot_mode:uefi", "root_device": {"size":"223"}, "pm_addr": "192.168.110.16", "pm_password": "xxxxxxxx", "pm_type": "pxe_drac", "pm_user": "root" },

{ "name": "computeHCI-2", "capabilities": "node:computeHCI-2,boot_option:local,boot_mode:uefi", "root_device": {"size":"223"}, "pm_addr": "192.168.110.17", "pm_password": "xxxxxxxx", "pm_type": "pxe_drac", "pm_user": "root" } ]}

Undercloud configuration file

undercloud.conf

[DEFAULT]

#
# From instack-undercloud
#

# Fully qualified hostname (including domain) to set on the
# Undercloud. If left unset, the current hostname will be used, but
# the user is responsible for configuring all system hostname settings
# appropriately. If set, the undercloud install will configure all
# system hostname settings. (string value)
undercloud_hostname = director.OSS.LABS

# IP information for the interface on the Undercloud that will be
# handling the PXE boots and DHCP for Overcloud instances. The IP
# portion of the value will be assigned to the network interface
# defined by local_interface, with the netmask defined by the prefix


# portion of the value. (string value)
local_ip = 192.168.120.13/24

# Virtual IP or DNS address to use for the public endpoints of
# Undercloud services. Only used with SSL. (string value)
# Deprecated group/name - [DEFAULT]/undercloud_public_vip
#undercloud_public_host = 192.168.24.2

# Virtual IP or DNS address to use for the admin endpoints of
# Undercloud services. Only used with SSL. (string value)
# Deprecated group/name - [DEFAULT]/undercloud_admin_vip
#undercloud_admin_host = 192.168.24.3

# DNS nameserver(s) to use for the undercloud node. (list value)
#undercloud_nameservers =

# List of ntp servers to use. (list value)
#undercloud_ntp_servers =

# DNS domain name to use when deploying the overcloud. The overcloud
# parameter "CloudDomain" must be set to a matching value. (string
# value)
#overcloud_domain_name = localdomain

# List of routed network subnets for provisioning and introspection.
# Comma separated list of names/tags. For each network a section/group
# needs to be added to the configuration file with these parameters
# set: cidr, dhcp_start, dhcp_end, inspection_iprange, gateway and
# masquerade. Note: The section/group must be placed before or after
# any other section. (See the example section [ctlplane-subnet] in the
# sample configuration file.) (list value)
subnets = ctlplane-subnet

# Name of the local subnet, where the PXE boot and DHCP interfaces for
# overcloud instances is located. The IP address of the
# local_ip/local_interface should reside in this subnet. (string
# value)
local_subnet = ctlplane-subnet

# Certificate file to use for openstack service SSL connections.
# Setting this enables SSL for the openstack API endpoints, leaving it
# unset disables SSL. (string value)
#undercloud_service_certificate =

# When set to True, an SSL certificate will be generated as part of
# the undercloud install and this certificate will be used in place of
# the value for undercloud_service_certificate. The resulting
# certificate will be written to
# /etc/pki/tls/certs/undercloud-[undercloud_public_host].pem. This
# certificate is signed by CA selected by the
# "certificate_generation_ca" option. (boolean value)
#generate_service_certificate = false

# The certmonger nickname of the CA from which the certificate will be
# requested. This is used only if the generate_service_certificate
# option is set. Note that if the "local" CA is selected the
# certmonger's local CA certificate will be extracted to /etc/pki/ca-
# trust/source/anchors/cm-local-ca.pem and subsequently added to the
# trust chain. (string value)
#certificate_generation_ca = local

# The kerberos principal for the service that will use the
# certificate. This is only needed if your CA requires a kerberos
# principal. e.g. with FreeIPA. (string value)


#service_principal =

# Network interface on the Undercloud that will be handling the PXE
# boots and DHCP for Overcloud instances. (string value)
local_interface = eth1

# MTU to use for the local_interface. (integer value)
local_mtu = 1500

# DEPRECATED: Network that will be masqueraded for external access, if
# required. This should be the subnet used for PXE booting. (string
# value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: With support for routed networks, masquerading of the
# provisioning networks is moved to a boolean option for each subnet.
masquerade_network = 192.168.120.0/24

# Path to hieradata override file. If set, the file will be copied
# under /etc/puppet/hieradata and set as the first file in the hiera
# hierarchy. This can be used to custom configure services beyond what
# undercloud.conf provides (string value)
#hieradata_override =

# Path to network config override template. If set, this template will
# be used to configure the networking via os-net-config. Must be in
# json format. Templated tags can be used within the template, see
# instack-undercloud/elements/undercloud-stack-config/net-
# config.json.template for example tags (string value)
#net_config_override =

# Network interface on which inspection dnsmasq will listen. If in
# doubt, use the default value. (string value)
# Deprecated group/name - [DEFAULT]/discovery_interface
#inspection_interface = br-ctlplane

# Whether to enable extra hardware collection during the inspection
# process. Requires python-hardware or python-hardware-detect package
# on the introspection image. (boolean value)
inspection_extras = true

# Whether to run benchmarks when inspecting nodes. Requires
# inspection_extras set to True. (boolean value)
# Deprecated group/name - [DEFAULT]/discovery_runbench
inspection_runbench = false

# Whether to support introspection of nodes that have UEFI-only
# firmware. (boolean value)
inspection_enable_uefi = true

# Makes ironic-inspector enroll any unknown node that PXE-boots
# introspection ramdisk in Ironic. By default, the "fake" driver is
# used for new nodes (it is automatically enabled when this option is
# set to True). Set discovery_default_driver to override.
# Introspection rules can also be used to specify driver information
# for newly enrolled nodes. (boolean value)
#enable_node_discovery = false

# The default driver or hardware type to use for newly discovered
# nodes (requires enable_node_discovery set to True). It is
# automatically added to enabled_drivers or enabled_hardware_types
# accordingly. (string value)
#discovery_default_driver = ipmi


# Whether to enable the debug log level for Undercloud openstack
# services. (boolean value)
#undercloud_debug = true

# Whether to update packages during the Undercloud install. (boolean
# value)
#undercloud_update_packages = true

# Whether to install Tempest in the Undercloud. (boolean value)
enable_tempest = true

# Whether to install Telemetry services (ceilometer, gnocchi, aodh,
# panko ) in the Undercloud. (boolean value)
#enable_telemetry = false

# Whether to install the TripleO UI. (boolean value)
enable_ui = true

# Whether to install requirements to run the TripleO validations.
# (boolean value)
enable_validations = true

# Whether to install the Volume service. It is not currently used in
# the undercloud. (boolean value)
#enable_cinder = false

# Whether to install novajoin metadata service in the Undercloud.
# (boolean value)
#enable_novajoin = false

# Array of host/port combiniations of docker insecure registries.
# (list value)
#docker_insecure_registries =

# One Time Password to register Undercloud node with an IPA server.
# Required when enable_novajoin = True. (string value)
#ipa_otp =

# Whether to use iPXE for deploy and inspection. (boolean value)
# Deprecated group/name - [DEFAULT]/ipxe_deploy
ipxe_enabled = true

# Maximum number of attempts the scheduler will make when deploying
# the instance. You should keep it greater or equal to the number of
# bare metal nodes you expect to deploy at once to work around
# potential race condition when scheduling. (integer value)
# Minimum value: 1
scheduler_max_attempts = 30

# Whether to clean overcloud nodes (wipe the hard drive) between
# deployments and after the introspection. (boolean value)
clean_nodes = false

# DEPRECATED: List of enabled bare metal drivers. (list value)
# This option is deprecated for removal.
# Its value may be silently ignored in the future.
# Reason: Please switch to hardware types and the
# enabled_hardware_types option.
#enabled_drivers = pxe_ipmitool,pxe_drac,pxe_ilo

# List of enabled bare metal hardware types (next generation drivers).
# (list value)
#enabled_hardware_types = ipmi,redfish,ilo,idrac


# An optional docker 'registry-mirror' that will be configured in
# /etc/docker/daemon.json. (string value)
#docker_registry_mirror =

# List of additional architectures enabled in your cloud environment.
# The list of supported values is: ppc64le (list value)
#additional_architectures =

# Enable support for routed ctlplane networks. (boolean value)
#enable_routed_networks = false

[auth]

#
# From instack-undercloud
#

# Password used for MySQL root user. If left unset, one will be
# automatically generated. (string value)
#undercloud_db_password = <None>

# Keystone admin token. If left unset, one will be automatically
# generated. (string value)
#undercloud_admin_token = <None>

# Keystone admin password. If left unset, one will be automatically
# generated. (string value)
#undercloud_admin_password = <None>

# Glance service password. If left unset, one will be automatically
# generated. (string value)
#undercloud_glance_password = <None>

# Heat db encryption key (must be 16, 24, or 32 characters). If left
# unset, one will be automatically generated. (string value)
#undercloud_heat_encryption_key = <None>

# Heat service password. If left unset, one will be automatically
# generated. (string value)
#undercloud_heat_password = <None>

# Heat cfn service password. If left unset, one will be automatically
# generated. (string value)
#undercloud_heat_cfn_password = <None>

# Neutron service password. If left unset, one will be automatically
# generated. (string value)
#undercloud_neutron_password = <None>

# Nova service password. If left unset, one will be automatically
# generated. (string value)
#undercloud_nova_password = <None>

# Ironic service password. If left unset, one will be automatically
# generated. (string value)
#undercloud_ironic_password = <None>

# Aodh service password. If left unset, one will be automatically
# generated. (string value)
#undercloud_aodh_password = <None>

# Gnocchi service password. If left unset, one will be automatically
# generated. (string value)


#undercloud_gnocchi_password = <None>

# Ceilometer service password. If left unset, one will be
# automatically generated. (string value)
#undercloud_ceilometer_password = <None>

# Panko service password. If left unset, one will be automatically
# generated. (string value)
#undercloud_panko_password = <None>

# Ceilometer metering secret. If left unset, one will be automatically
# generated. (string value)
#undercloud_ceilometer_metering_secret = <None>

# Ceilometer snmpd read-only user. If this value is changed from the
# default, the new value must be passed in the overcloud environment
# as the parameter SnmpdReadonlyUserName. This value must be between 1
# and 32 characters long. (string value)
#undercloud_ceilometer_snmpd_user = ro_snmp_user

# Ceilometer snmpd password. If left unset, one will be automatically
# generated. (string value)
#undercloud_ceilometer_snmpd_password = <None>

# Swift service password. If left unset, one will be automatically
# generated. (string value)
#undercloud_swift_password = <None>

# Mistral service password. If left unset, one will be automatically
# generated. (string value)
#undercloud_mistral_password = <None>

# Rabbitmq cookie. If left unset, one will be automatically generated.
# (string value)
#undercloud_rabbit_cookie = <None>

# Rabbitmq password. If left unset, one will be automatically
# generated. (string value)
#undercloud_rabbit_password = <None>

# Rabbitmq username. If left unset, one will be automatically
# generated. (string value)
#undercloud_rabbit_username = <None>

# Heat stack domain admin password. If left unset, one will be
# automatically generated. (string value)
#undercloud_heat_stack_domain_admin_password = <None>

# Swift hash suffix. If left unset, one will be automatically
# generated. (string value)
#undercloud_swift_hash_suffix = <None>

# HAProxy stats password. If left unset, one will be automatically
# generated. (string value)
#undercloud_haproxy_stats_password = <None>

# Zaqar password. If left unset, one will be automatically generated.
# (string value)
#undercloud_zaqar_password = <None>

# Horizon secret key. If left unset, one will be automatically
# generated. (string value)
#undercloud_horizon_secret_key = <None>


# Cinder service password. If left unset, one will be automatically
# generated. (string value)
#undercloud_cinder_password = <None>

# Novajoin vendordata plugin service password. If left unset, one will
# be automatically generated. (string value)
#undercloud_novajoin_password = <None>

[ctlplane-subnet]

#
# From instack-undercloud
#

# Network CIDR for the Neutron-managed subnet for Overcloud instances.
# (string value)
# Deprecated group/name - [DEFAULT]/network_cidr
cidr = 192.168.120.0/24

# Start of DHCP allocation range for PXE and DHCP of Overcloud
# instances on this network. (string value)
# Deprecated group/name - [DEFAULT]/dhcp_start
dhcp_start = 192.168.120.121

# End of DHCP allocation range for PXE and DHCP of Overcloud instances
# on this network. (string value)
# Deprecated group/name - [DEFAULT]/dhcp_end
dhcp_end = 192.168.120.250

# Temporary IP range that will be given to nodes on this network
# during the inspection process. Should not overlap with the range
# defined by dhcp_start and dhcp_end, but should be in the same ip
# subnet. (string value)
# Deprecated group/name - [DEFAULT]/inspection_iprange
inspection_iprange = 192.168.120.21,192.168.120.120

# Network gateway for the Neutron-managed network for Overcloud
# instances on this network. (string value)
# Deprecated group/name - [DEFAULT]/network_gateway
gateway = 192.168.120.13

# The network will be masqueraded for external access. (boolean value)
#masquerade = false
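With undercloud.conf placed in the deployment user's home directory, the undercloud is installed (or re-applied after a configuration change) with the standard director command, shown here for illustration.

# Install or update the undercloud using the settings in ~/undercloud.conf
openstack undercloud install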


Appendix C: References

Topics:

• To learn more

Additional information can be found at https://www.dell.com/support/article/us/en/19/sln310368/dell-emc-ready-architecture-for-red-hat-openstackplatform?lang=en

Note: If you need additional services or implementation help, please contact your Dell EMC representative.


To learn more

Additional information on the Dell EMC Ready Architecture for Hyper-Converged Infrastructure on Red Hat OpenStack Platform can be found at https://www.dell.com/support/article/us/en/19/sln310368/dell-emc-ready-architecture-for-red-hat-openstackplatform?lang=en or by emailing [email protected].

Copyright © 2019 Dell EMC or its subsidiaries. All rights reserved. Trademarks and trade names may be used in this document to refer to either the entities claiming the marks and names or their products. Specifications are correct at date of publication but are subject to availability or change without notice at any time. Dell EMC and its affiliates cannot be responsible for errors or omissions in typography or photography. Dell EMC's Terms and Conditions of Sales and Service apply and are available on request. Dell EMC service offerings do not affect consumer's statutory rights.

Dell EMC, the DELL EMC logo, the DELL EMC badge, and PowerEdge are trademarks of Dell EMC.
