Ceph: de facto storage backend for OpenStack
OpenStack Summit 2013, Hong Kong

Openstack Summit HK - Ceph defacto - eNovance


by Sébastien Han


Page 1: Openstack Summit HK - Ceph defacto - eNovance

Ceph: de facto storage backend for OpenStack

OpenStack Summit 2013, Hong Kong

Page 2: Openstack Summit HK - Ceph defacto - eNovance

Whoami
💥 Sébastien Han
💥 French Cloud Engineer working for eNovance
💥 Daily job focused on Ceph and OpenStack
💥 Blogger

Personal blog: http://www.sebastien-han.fr/blog/
Company blog: http://techs.enovance.com/

Worldwide offices coverage
We design, build and run clouds – anytime, anywhere

Page 3: Openstack Summit HK - Ceph defacto - eNovance

Ceph
What is it?

Page 4: Openstack Summit HK - Ceph defacto - eNovance

The project

➜ Unified distributed storage system

➜ Started in 2006 as a PhD by Sage Weil

➜ Open source under LGPL license

➜ Written in C++

➜ Build the future of storage on commodity hardware

Page 5: Openstack Summit HK - Ceph defacto - eNovance

Key features

➜ Self managing/healing

➜ Self balancing

➜ Painless scaling

➜ Data placement with CRUSH

Page 6: Openstack Summit HK - Ceph defacto - eNovance

CRUSH: Controlled Replication Under Scalable Hashing

➜ Pseudo-random placement algorithm

➜ Statistically uniform distribution

➜ Rule-based configuration (see the placement sketch below)
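As a rough illustration only (this is not the real CRUSH code, just a toy Python sketch of the idea), the snippet below shows how rule-based, pseudo-random placement lets every client compute where data lives by hashing, instead of asking a central lookup table; the function and OSD names are invented for the example.

```python
# Toy sketch of CRUSH-style placement (NOT the real algorithm).
# Every client hashes the same inputs, so the location of a placement
# group is computed everywhere instead of stored in a central table.
import hashlib

def score(pg, replica, osd):
    """Deterministic pseudo-random score for a (pg, replica, osd) triple."""
    key = ("%s:%s:%s" % (pg, replica, osd)).encode()
    return int.from_bytes(hashlib.md5(key).digest()[:8], "big")

def place(pg, osds, replicas=3):
    """Pick `replicas` distinct OSDs for a placement group."""
    chosen = []
    for r in range(replicas):
        candidates = [o for o in osds if o not in chosen]
        chosen.append(max(candidates, key=lambda o: score(pg, r, o)))
    return chosen

osds = ["osd.%d" % i for i in range(12)]
print(place(42, osds))  # same answer on every client that runs this
```

Because the score is a hash, placement is statistically uniform across OSDs, and adding or removing an OSD only moves the data whose winning score changes.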

Page 7: Openstack Summit HK - Ceph defacto - eNovance

Overview

Page 8: Openstack Summit HK - Ceph defacto - eNovance

Building a Ceph cluster
General considerations

Page 9: Openstack Summit HK - Ceph defacto - eNovance

How to start?

➜ Use case
• IO profile: bandwidth? IOPS? Mixed?
• Guaranteed IO: how many IOPS or how much bandwidth do I want to deliver per client?
• Usage: do I use Ceph standalone or combined with another software solution?

➜ Amount of data (usable, not raw)
• Replica count
• Failure ratio: how much data am I willing to re-balance if a node fails?
• Do I have a data growth plan? (see the sizing sketch after this list)

➜ Budget :-)
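To make the "usable, not raw" point concrete, here is a back-of-the-envelope sizing sketch; every number in it is made up for illustration and is not a recommendation from the talk.

```python
# Back-of-the-envelope sizing (all numbers are illustrative).
usable_tb     = 100    # data the users actually need to store
replica_count = 3      # raw capacity is multiplied by the replica count
growth_factor = 1.5    # planned data growth over the cluster lifetime
fill_ceiling  = 0.75   # keep headroom so a failed node can re-balance

raw_tb = usable_tb * replica_count * growth_factor / fill_ceiling
print("Provision roughly %.0f TB of raw disk" % raw_tb)  # ~600 TB

nodes = 10
print("Losing one node re-balances about %.0f%% of the data" % (100.0 / nodes))
```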

Page 10: Openstack Summit HK - Ceph defacto - eNovance

Things that you must not do

➜ Don't put a RAID underneath your OSDs
• Ceph already manages the replication
• A degraded RAID degrades performance
• It reduces the usable space of the cluster

➜ Don't build high-density nodes with a tiny cluster
• Losing a single node means a lot of data to re-balance
• Risk of filling up the cluster

➜ Don't run Ceph on your hypervisors (unless you're broke)

Page 11: Openstack Summit HK - Ceph defacto - eNovance

State of the integration
Including Havana’s best additions

Page 12: Openstack Summit HK - Ceph defacto - eNovance

Why is Ceph so good?

It unifies OpenStack components

Page 13: Openstack Summit HK - Ceph defacto - eNovance

Havana’s additions

➜ Complete refactor of the Cinder driver:
• librados and librbd usage (see the sketch after this list)
• Flatten volumes created from snapshots
• Clone depth

➜ Cinder backup with a Ceph backend:
• Backing up within the same Ceph pool (not recommended)
• Backing up between different Ceph pools
• Backing up between different Ceph clusters
• RBD stripe support
• Differential backups

➜ Nova libvirt_images_type = rbd
• Boot all the VMs directly in Ceph
• Volume QoS
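Since the refactored Cinder driver talks to the cluster through librados and librbd rather than shelling out to the rbd CLI, the sketch below shows what that library usage looks like with the Python bindings. It is only an illustrative sketch: the pool and image names are invented, and it mirrors (rather than copies) what the driver does when it creates a volume from a snapshot and flattens it.

```python
# Minimal librados/librbd sketch (pool and image names are hypothetical).
import rados
import rbd

cluster = rados.Rados(conffile="/etc/ceph/ceph.conf")
cluster.connect()
ioctx = cluster.open_ioctx("volumes")   # pool a Cinder RBD backend might use

try:
    r = rbd.RBD()
    # Create a 10 GiB format-2 image with layering enabled so it can be cloned.
    r.create(ioctx, "volume-demo", 10 * 1024 ** 3,
             old_format=False, features=rbd.RBD_FEATURE_LAYERING)

    # Snapshot the volume and protect the snapshot so it can serve as a parent.
    parent = rbd.Image(ioctx, "volume-demo")
    parent.create_snap("snap-demo")
    parent.protect_snap("snap-demo")
    parent.close()

    # Copy-on-write clone: a new volume created from the snapshot.
    r.clone(ioctx, "volume-demo", "snap-demo", ioctx, "volume-from-snap",
            features=rbd.RBD_FEATURE_LAYERING)

    # Flattening copies the parent data into the clone and detaches it
    # (the behaviour behind "flatten volumes created from snapshots").
    clone = rbd.Image(ioctx, "volume-from-snap")
    clone.flatten()
    clone.close()
finally:
    ioctx.close()
    cluster.shutdown()
```

The Nova RBD image backend builds on the same clone mechanism, which is what lets all the VMs boot directly in Ceph when libvirt_images_type is set to rbd.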

Page 14: Openstack Summit HK - Ceph defacto - eNovance

Today’s Havana integration

Page 15: Openstack Summit HK - Ceph defacto - eNovance

Is Havana the perfect stack?

Page 16: Openstack Summit HK - Ceph defacto - eNovance

Well, almost…

Page 17: Openstack Summit HK - Ceph defacto - eNovance

What’s missing?

➜ Direct URL download for Nova

• Already in the pipeline, probably for 2013.2.1

➜ Nova snapshot integration

• Use native RBD snapshots

https://github.com/jdurgin/nova/commits/havana-ephemeral-rbd

Page 18: Openstack Summit HK - Ceph defacto - eNovance

Icehouse and beyond
Future

Page 19: Openstack Summit HK - Ceph defacto - eNovance

Tomorrow’s integration

Page 20: Openstack Summit HK - Ceph defacto - eNovance

Icehouse roadmap

➜ Implement “bricks” for RBD

➜ Re-implement snapshotting function to use RBD snapshot

➜ RBD on Nova bare metal

➜ Volume migration support

➜ RBD stripes support

« J » potential roadmap
➜ Manila support

Page 21: Openstack Summit HK - Ceph defacto - eNovance

Ceph, what’s coming up?
Roadmap

Page 22: Openstack Summit HK - Ceph defacto - eNovance

Firefly

➜ Tiering - cache pool overlay

➜ Erasure code

➜ Ceph OSD on ZFS

➜ Full support of OpenStack Icehouse

Page 23: Openstack Summit HK - Ceph defacto - eNovance

Many thanks!

Questions?

Contact: [email protected]
Twitter: @sebastien_han
IRC: leseb