Mastering OpenStack - Episode 07 - Compute Nodes

Page 1: Mastering OpenStack - Episode 07 - Compute Nodes

Presentation By:

Roozbeh Shafiee

Summer 2015

IRAN OpenStack Users Group

MASTERING OPENSTACK

(Episode 07)

Compute Nodes

Page 2: Mastering OpenStack - Episode 07 - Compute Nodes

Agenda:

● Choosing CPU

● Choosing Hypervisor / Container

● Overcommitting

● Instance Storage Solutions

● Friendly Advice

● Iran OpenStack Community

Page 3: Mastering OpenStack - Episode 07 - Compute Nodes

Choosing CPU

Page 4: Mastering OpenStack - Episode 07 - Compute Nodes

Virtualization Support:

First, check that the CPU supports hardware virtualization in the server's BIOS:

● VT-x for Intel chips

● AMD-V for AMD chips

Most modern servers support this feature, but it may be disabled. Check the BIOS documentation for how to enable it.
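As a quick check on a Linux compute node, the vmx (Intel VT-x) and svm (AMD-V) flags in /proc/cpuinfo show whether the CPU advertises hardware virtualization; whether the BIOS has it enabled is a separate check. A minimal Python sketch, illustrative only and not part of OpenStack:

# Detect hardware virtualization support by inspecting /proc/cpuinfo.
# The "vmx" flag indicates Intel VT-x; "svm" indicates AMD-V.
def hw_virtualization_flag(cpuinfo_path="/proc/cpuinfo"):
    with open(cpuinfo_path) as cpuinfo:
        for line in cpuinfo:
            if line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                if "vmx" in flags:
                    return "vmx"   # Intel VT-x
                if "svm" in flags:
                    return "svm"   # AMD-V
    return None

if __name__ == "__main__":
    flag = hw_virtualization_flag()
    print(flag or "no hardware virtualization flag found")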

Page 5: Mastering OpenStack - Episode 07 - Compute Nodes

CPU Cores:

● The number of cores that the CPU has also affects the decision. It’s common for

current CPUs to have up to 12 cores.

● If an Intel CPU supports Hyper-Threading, those 12 cores appear to the operating system as 24 logical cores.

● If you purchase a server that supports multiple CPUs, the number of cores is

further multiplied.
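Put as arithmetic, a minimal Python sketch of the sizing described above (the dual-socket count is an assumption for illustration; the 12 cores and Hyper-Threading doubling come from the slide):

# Logical cores visible on one compute node = sockets * cores per socket * threads per core
sockets = 2            # assumed dual-socket server
cores_per_socket = 12  # example from the slide
threads_per_core = 2   # Hyper-Threading enabled
print(sockets * cores_per_socket * threads_per_core)  # 48 logical cores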

Page 6: Mastering OpenStack - Episode 07 - Compute Nodes

Multithread Considerations:

Hyper-Threading is Intel’s proprietary simultaneous multithreading

implementation used to improve parallelization on their CPUs. You might

consider enabling Hyper-Threading to improve the performance of multithreaded

applications.

Whether you should enable Hyper-Threading on your CPUs depends upon your

use case. For example, disabling Hyper-Threading can be beneficial in intense

computing environments. We recommend that you do performance testing with

your local workload with both Hyper-Threading on and off to determine what is

more appropriate in your case.

Page 7: Mastering OpenStack - Episode 07 - Compute Nodes

Choosing Hypervisor / Container

Page 8: Mastering OpenStack - Episode 07 - Compute Nodes

Choosing Hypervisor / Container

A hypervisor or container provides software to manage virtual machine access to

the underlying hardware.

The hypervisor performs three operational tasks on virtual machines:

● Creation

● Management

● Monitoring

Page 9: Mastering OpenStack - Episode 07 - Compute Nodes

Supported Hypervisors / Containers

OpenStack Compute supports many hypervisors to various degrees, including:

● KVM

● LXD

● LXC

● QEMU

● VMware ESX / ESXi

● Xen

● Microsoft Hyper-V

● Docker
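For reference, the hypervisor is selected per compute node in nova.conf. A minimal sketch for KVM via the libvirt driver, with option names as used in OpenStack releases of this era (change virt_type to qemu, lxc, or xen as needed):

[DEFAULT]
compute_driver = libvirt.LibvirtDriver

[libvirt]
virt_type = kvm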

Page 10: Mastering OpenStack - Episode 07 - Compute Nodes

Factors in your choice of Hypervisor / Container

Probably the most important factor in your choice of hypervisor or container is

your current usage or experience. Aside from that, there are practical concerns to

do with feature parity, documentation, and the level of community experience.

For example, KVM is the most widely adopted hypervisor in the OpenStack

community. Besides KVM, more deployments run Xen, LXC, VMware, and Hyper-V

than the others listed.

Page 11: Mastering OpenStack - Episode 07 - Compute Nodes

Multi or Single Hypervisor / Container

It is also possible to run multiple hypervisors in a single deployment using host

aggregates or cells. However, an individual compute node can run only a single

hypervisor at a time.

Page 12: Mastering OpenStack - Episode 07 - Compute Nodes

Overcommitting

Page 13: Mastering OpenStack - Episode 07 - Compute Nodes

What is Overcommitting?

OpenStack allows you to overcommit CPU and RAM on compute nodes. This

allows you to increase the number of instances you can have running on your

cloud, at the cost of reducing the performance of the instances.

Page 14: Mastering OpenStack - Episode 07 - Compute Nodes

Allocation Ratios

OpenStack Compute uses the following ratios by default:

● CPU allocation ratio: 16:1

● RAM allocation ratio: 1.5:1
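These defaults correspond to the cpu_allocation_ratio and ram_allocation_ratio options, which in OpenStack releases of this era live in the [DEFAULT] section of nova.conf. A minimal sketch with the defaults written out explicitly:

[DEFAULT]
cpu_allocation_ratio = 16.0
ram_allocation_ratio = 1.5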

Page 15: Mastering OpenStack - Episode 07 - Compute Nodes

CPU Allocation Ratio

The default CPU allocation ratio of 16:1 means that the scheduler allocates up

to 16 virtual cores per physical core.

For example, if a physical node has 12 cores, the scheduler sees 192 available

virtual cores. With typical flavor definitions of 4 virtual cores per instance, this

ratio would provide 48 instances on a physical node.

Page 16: Mastering OpenStack - Episode 07 - Compute Nodes

The Formula for CPU Allocation Ratio

The formula for the number of virtual instances on a compute node is:

(OR * PC) / VC

OR: CPU overcommit ratio (virtual cores per physical core)

PC: Number of physical cores

VC: Number of virtual cores per instance
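A minimal Python sketch of this formula, using the 12-core example from the previous slide (the function name is illustrative):

# (OR * PC) / VC: CPU-bound instance capacity of one compute node
def instances_per_node(overcommit_ratio, physical_cores, vcpus_per_instance):
    return (overcommit_ratio * physical_cores) // vcpus_per_instance

# 16:1 overcommit, 12 physical cores, 4 virtual cores per instance
print(instances_per_node(16, 12, 4))  # 48 instances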

Page 17: Mastering OpenStack - Episode 07 - Compute Nodes

RAM Allocation Ratio

Similarly, the default RAM allocation ratio of 1.5:1 means that the scheduler

allocates instances to a physical node as long as the total amount of RAM

associated with the instances is less than 1.5 times the amount of RAM available

on the physical node.

For example, if a physical node has 48 GB of RAM, the scheduler allocates

instances to that node until the sum of the RAM associated with the instances

reaches 72 GB (such as nine instances, in the case where each instance has 8 GB

of RAM).
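The same arithmetic, as a minimal Python sketch using the numbers from this slide:

# RAM-bound instance capacity: ratio * physical RAM, divided by RAM per instance
ram_allocation_ratio = 1.5
physical_ram_gb = 48
instance_ram_gb = 8

schedulable_ram_gb = ram_allocation_ratio * physical_ram_gb   # 72 GB
print(int(schedulable_ram_gb // instance_ram_gb))             # 9 instances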

Page 18: Mastering OpenStack - Episode 07 - Compute Nodes

Instance Storage Solutions

Page 19: Mastering OpenStack - Episode 07 - Compute Nodes

Factors in your choice of Storage

In general, the questions you should ask when selecting storage are as follows:

● What is the platter count you can achieve?

● Do more spindles result in better I/O despite network access?

● Which one results in the best cost-performance scenario you’re aiming for?

● How do you manage the storage operationally?

Page 20: Mastering OpenStack - Episode 07 - Compute Nodes

Temporary-Style Storage Solutions

As part of the procurement for a compute cluster, you must specify some

storage for the disk on which the instantiated instance runs. There are three

main approaches to providing this temporary-style storage, and it is important

to understand the implications of the choice.

They are:

● Off compute node storage - shared file system

● On compute node storage - shared file system

● On compute node storage - nonshared file system

Page 21: Mastering OpenStack - Episode 07 - Compute Nodes

Off Compute Node Storage – Shared File System

In this option, the disks storing the running instances are hosted in servers outside

of the compute nodes.

If you use separate compute and storage hosts, you can treat your compute hosts

as “stateless.” As long as you don’t have any instances currently running on a

compute host, you can take it offline or wipe it completely without having any

effect on the rest of your cloud.

This simplifies maintenance for the compute hosts.

Page 22: Mastering OpenStack - Episode 07 - Compute Nodes

Off Compute Node Storage – Shared File System

There are several advantages to this approach:

● If a compute node fails, instances are usually easily recoverable.

● Running a dedicated storage system can be operationally simpler.

● You can scale to any number of spindles.

● It may be possible to share the external storage for other purposes.

Page 23: Mastering OpenStack - Episode 07 - Compute Nodes

Off Compute Node Storage – Shared File System

The main downsides to this approach are:

● Depending on design, heavy I/O usage from some instances can affect unrelated

instances.

● Use of the network can decrease performance.

Page 24: Mastering OpenStack - Episode 07 - Compute Nodes

On Compute Node Storage – Shared File System

In this option, each compute node is specified with a significant amount of disk

space, but a distributed file system ties the disks from each compute node into

a single mount.

The main advantage of this option is that:

● It scales to external storage when you require additional storage.

Page 25: Mastering OpenStack - Episode 07 - Compute Nodes

On Compute Node Storage – Shared File System

This option has several downsides:

● Running a distributed file system can make you lose your data locality compared

with nonshared storage.

● Recovery of instances is complicated by the dependency on multiple hosts.

● The chassis size of the compute node can limit the number of spindles able to be

used in a compute node.

● Use of the network can decrease performance.

Page 26: Mastering OpenStack - Episode 07 - Compute Nodes

On Compute Node Storage – Nonshared File System

In this option, each compute node is specified with enough disks to store the

instances it hosts.

There are two main reasons why this is a good idea:

● Heavy I/O usage on one compute node does not affect instances on other

compute nodes.

● Direct I/O access can increase performance.

Page 27: Mastering OpenStack - Episode 07 - Compute Nodes

On Compute Node Storage – Nonshared File System

This has several downsides:

● If a compute node fails, the instances running on that node are lost.

● The chassis size of the compute node can limit the number of spindles able to be

used in a compute node.

● Migrations of instances from one node to another are more complicated and rely

on features that may not continue to be developed.

● If additional storage is required, this option does not scale.

Page 28: Mastering OpenStack - Episode 07 - Compute Nodes

Friendly Advice

Page 29: Mastering OpenStack - Episode 07 - Compute Nodes

Benefits of On or Off Compute Node Storage

Running a shared file system on a storage system apart from the compute nodes

is ideal for clouds where reliability and scalability are the most important factors.

Running a shared file system on the compute nodes themselves may be best in a

scenario where you have to deploy to preexisting servers for which you have little

to no control over their specifications.

Running a nonshared file system on the compute nodes themselves is a good

option for clouds with high I/O requirements and low concern for reliability.

Page 30: Mastering OpenStack - Episode 07 - Compute Nodes

Different Configurations

For a fixed budget, it makes sense to have different configurations for your

compute nodes and your storage nodes: invest in CPU and RAM on the compute

nodes, and in disks for block storage on the storage nodes.

Page 31: Mastering OpenStack - Episode 07 - Compute Nodes

Limited Resources

If you are more restricted in the number of physical hosts you have available for

creating your cloud and you want to be able to dedicate as many of your hosts as

possible to running instances, it makes sense to run compute and storage on the

same machines.

Page 32: Mastering OpenStack - Episode 07 - Compute Nodes

Issues With Live Migration

Live migration can also be done with nonshared storage, using a feature known

as KVM live block migration. While an earlier implementation of block-based

migration in KVM and QEMU was considered unreliable, there is a newer, more

reliable implementation of block-based live migration as of QEMU 1.4 and

libvirt 1.0.2 that is also compatible with OpenStack.
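For example, with the python-novaclient CLI of that era, block-based live migration is requested with the --block-migrate flag (the instance and host names below are placeholders):

nova live-migration --block-migrate INSTANCE_NAME_OR_ID TARGET_HOST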

Page 33: Mastering OpenStack - Episode 07 - Compute Nodes

Iran OpenStack Community

Page 34: Mastering OpenStack - Episode 07 - Compute Nodes

Stay in Touch and Join Us:

● Home Page: OpenStack.ir

● Meetup Page: Meetup.com/Iran-OpenStack

● Mailing List: [email protected]

● Twitter: @OpenStackIR , #OpenStackIRAN

● IRC Channel on FreeNode: #OpenStack-ir

Page 35: Mastering OpenStack - Episode 07 - Compute Nodes

Roozbeh Shafiee

Iran OpenStack Community Manager

[email protected]

OpenStack.ir

Thank You

We need to work together to build a better community