Ceph: de facto storage backend for OpenStack – OpenStack Summit 2013, Hong Kong


Page 1: Ceph: de facto storage backend for OpenStack

Ceph: de facto storage backend for OpenStack

OpenStack Summit 2013, Hong Kong

Page 2: Ceph: de facto storage backend for OpenStack

Whoami
💥 Sébastien Han
💥 French Cloud Engineer working for eNovance
💥 Daily job focused on Ceph and OpenStack
💥 Blogger

Personal blog: http://www.sebastien-han.fr/blog/
Company blog: http://techs.enovance.com/

Worldwide offices coverage
We design, build and run clouds – anytime, anywhere

Page 3: Ceph: de facto storage backend for OpenStack

Ceph
What is it?

Page 4: Ceph: de facto storage backend for OpenStack

The project

➜ Unified distributed storage system

➜ Started in 2006 as a PhD by Sage Weil

➜ Open source under LGPL license

➜ Written in C++

➜ Build the future of storage on commodity hardware

Page 5: Ceph: de facto storage backend for OpenStack

Key features

➜ Self managing/healing

➜ Self balancing

➜ Painless scaling

➜ Data placement with CRUSH

Page 6: Ceph: de facto storage backend for OpenStack

Controlled Replication Under Scalable Hashing

➜ Pseudo-random placement algorithm (sketched below)

➜ Statistically uniform distribution

➜ Rule-based configuration
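To make the first two properties concrete, here is a minimal Python sketch of hash-based placement. It is not Ceph's CRUSH implementation (CRUSH adds weights, bucket hierarchies and the rule-based configuration mentioned above); it only shows how any client can compute an object's OSDs deterministically, without a central lookup table, while spreading objects roughly uniformly. The object and OSD names are invented for the example.

```python
import hashlib

def place(object_name, osds, replicas=3):
    """Toy placement via rendezvous hashing: NOT Ceph's CRUSH code, just the idea.

    Each client hashes (object, osd) pairs and keeps the highest-scoring OSDs,
    so placement is deterministic, pseudo-random and needs no central lookup.
    """
    def score(osd):
        return int(hashlib.sha1(f"{object_name}:{osd}".encode()).hexdigest(), 16)
    return sorted(osds, key=score, reverse=True)[:replicas]

# Hypothetical five-OSD cluster; every client computes the same answer.
print(place("rbd_data.2ae8944a.0000000000000000", [f"osd.{i}" for i in range(5)]))
```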

Page 7: Ceph: de facto storage backend for OpenStack

Overview

Page 8: Ceph: de facto storage backend for OpenStack

Building a Ceph cluster
General considerations

Page 9: Ceph: de facto storage backend for OpenStack

How to start?

➜ Use case
• IO profile: bandwidth? IOPS? Mixed?
• Guaranteed IOs: how many IOPS or how much bandwidth do I want to deliver per client?
• Usage: is Ceph used standalone or combined with another software solution?

➜ Amount of data (usable, not raw) – see the sizing sketch below
• Replica count
• Failure ratio: how much data am I willing to rebalance if a node fails?
• Do I have a data growth plan?

➜ Budget :-)
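As a rough illustration of the "usable, not raw" point, here is a hedged back-of-the-envelope sketch; the replica count, fill ratio and cluster size are assumptions, not figures from the slides.

```python
def usable_capacity_tb(raw_tb, replica_count=3, target_fill_ratio=0.7):
    """Back-of-the-envelope sizing: usable space is raw capacity divided by the
    replica count, kept below a fill ratio so the cluster can absorb the
    rebalancing caused by a failed node without running full."""
    return raw_tb / replica_count * target_fill_ratio

# Example: 10 nodes x 12 x 4 TB drives = 480 TB raw (assumed numbers)
print(usable_capacity_tb(480))  # -> 112.0, i.e. roughly 112 TB comfortably usable
```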

Page 10: Ceph: de facto storage backend for OpenStack

Things that you must not do

➜ Don't put RAID underneath your OSDs
• Ceph already manages replication
• A degraded RAID hurts performance
• It reduces the usable space of the cluster

➜ Don't build high-density nodes in a tiny cluster
• Failure considerations and amount of data to rebalance
• Risk of filling the cluster

➜ Don't run Ceph on your hypervisors (unless you're broke)

Page 11: Ceph: de facto storage backend for OpenStack

State of the integration
Including the best of Havana’s additions

Page 12: Ceph: de facto storage backend for OpenStack

Why is Ceph so good?

It unifies OpenStack components

Page 13: Ceph: de facto storage backend for OpenStack

Havana’s additions

➜ Complete refactor of the Cinder driver (see the librbd sketch below):
• Librados and librbd usage
• Flatten volumes created from snapshots
• Clone depth

➜ Cinder backup with a Ceph backend:
• Backing up within the same Ceph pool (not recommended)
• Backing up between different Ceph pools
• Backing up between different Ceph clusters
• RBD stripe support
• Differential backups

➜ Nova libvirt_images_type = rbd
• Boot all VMs directly in Ceph
• Volume QoS
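To give a feel for what "librados and librbd usage" means underneath the refactored driver, here is a minimal sketch using the python-rados/python-rbd bindings: create a format-2 volume image, snapshot it, clone the snapshot, then flatten the clone. This is not the Cinder driver code; the pool and image names are invented, and it assumes a reachable cluster and /etc/ceph/ceph.conf.

```python
import rados
import rbd

GiB = 1024 ** 3

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('volumes')            # hypothetical pool name
try:
    # Format-2 image with layering enabled, so it can be snapshotted and cloned.
    rbd.RBD().create(ioctx, 'volume-demo', 10 * GiB,
                     old_format=False, features=rbd.RBD_FEATURE_LAYERING)

    image = rbd.Image(ioctx, 'volume-demo')
    try:
        image.create_snap('snap-demo')
        image.protect_snap('snap-demo')          # clones require a protected snapshot
    finally:
        image.close()

    # Clone the snapshot into a new image, then flatten it so it no longer
    # depends on its parent ("flatten volumes created from snapshots").
    rbd.RBD().clone(ioctx, 'volume-demo', 'snap-demo',
                    ioctx, 'volume-demo-clone',
                    features=rbd.RBD_FEATURE_LAYERING)
    clone = rbd.Image(ioctx, 'volume-demo-clone')
    try:
        clone.flatten()
    finally:
        clone.close()
finally:
    ioctx.close()
    cluster.shutdown()
```

In Havana, the driver exposes this clone/flatten behaviour through cinder.conf options such as rbd_flatten_volume_from_snapshot and rbd_max_clone_depth.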

Page 14: Ceph: de facto storage backend for OpenStack

Today’s Havana integration

Page 15: Ceph: de facto storage backend for OpenStack

Is Havana the perfect stack?

…

Page 16: Ceph: de facto storage backend for OpenStack

Well, almost…

Page 17: Ceph: de facto storage backend for OpenStack

What’s missing?

➜ Direct URL download for Nova

• Already in the pipeline, probably for 2013.2.1

➜ Nova snapshot integration

• Ceph snapshots

https://github.com/jdurgin/nova/commits/havana-ephemeral-rbd

Page 18: Ceph: de facto storage backend for OpenStack

Icehouse and beyond
Future

Page 19: Ceph: de facto storage backend for OpenStack

Tomorrow’s integration

Page 20: Ceph: de facto storage backend for OpenStack

Icehouse roadmap

➜ Implement “bricks” for RBD

➜ Re-implement snapshotting function to use RBD snapshot

➜ RBD on Nova bare metal

➜ Volume migration support

➜ RBD stripes support

« J » potential roadmap
➜ Manila support

Page 21: Ceph: de facto storage backend for OpenStack

Ceph, what’s coming up?
Roadmap

Page 22: Ceph: de facto storage backend for OpenStack

Firefly

➜ Tiering - cache pool overlay

➜ Erasure coding

➜ Ceph OSD on ZFS

➜ Full support of OpenStack Icehouse

Page 23: Ceph: de facto storage backend for OpenStack

Many thanks!

Questions?

Contact: [email protected]
Twitter: @sebastien_han
IRC: leseb