
Page 1: Openstack with ceph

Inktank

OpenStack with Ceph

Page 2: Openstack with ceph

Ian Colle
Ceph Program Manager, Inktank
[email protected] | @ircolle | www.linkedin.com/in/ircolle | ircolle on freenode
inktank.com | ceph.com

Who is this guy?

Page 3: Openstack with ceph

People need storage solutions that…
•  …are open
•  …are easy to manage
•  …satisfy their requirements
   - performance
   - functional
   - financial (cha-ching!)

Selecting the Best Cloud Storage System

Page 4: Openstack with ceph

Hard Drives Are Tiny Record Players and They Fail Often
(photo: jon_a_ross, Flickr / CC BY 2.0)

Page 5: Openstack with ceph

[Diagram: one failing drive, multiplied by 1 MILLION drives, means failures roughly 55 times per day]

Page 6: Openstack with ceph

“That’s why I use Swift in my OpenStack implementation.” Hmm, what about block storage?

I got it!

Page 7: Openstack with ceph

• Persistent
  - More familiar to users
• Not tied to a single host
  - Decouples compute and storage
  - Enables live migration
• Extra capabilities of the storage system
  - Efficient snapshots
  - Different types of storage available
  - Cloning for fast restore or scaling

Benefits of Block Storage

Page 8: Openstack with ceph

Ceph reduces administration costs
  - “Intelligent devices” use a peer-to-peer mechanism to detect failures and react automatically, rapidly ensuring replication policies are still honored if a node becomes unavailable.
  - Swift requires an operator to notice a failure and update the ring configuration before redistribution of data starts.

Ceph guarantees the consistency of your data
  - Even with large volumes of data, Ceph ensures clients get a consistent copy from any node within a region.
  - Swift’s replication model means users may get stale data, even within a single site, because asynchronous replication lags as the volume of data builds up.

Ceph over Swift

Page 9: Openstack with ceph

Swift has quotas; we do not (coming this Fall).
Swift has object expiration; we do not (coming this Fall).

Swift over Ceph

Page 10: Openstack with ceph

Ceph: provides object AND block storage in a single system that is compatible with the Swift and Cinder APIs and is self-healing without operator intervention.

Swift: if you use Swift, you still have to provision and manage a totally separate system to handle your block storage (in addition to paying the poor guy to go update the ring configuration).

Total Solution Comparison

Page 11: Openstack with ceph

OpenStack I know, but what is Ceph?

Page 12: Openstack with ceph

philosophy:
• OPEN SOURCE
• COMMUNITY-FOCUSED

design:
• SCALABLE
• NO SINGLE POINT OF FAILURE
• SOFTWARE BASED
• SELF-MANAGING

Page 13: Openstack with ceph

RADOS A Reliable, Autonomous, Distributed Object Store comprised of self-healing, self-managing, intelligent storage nodes

LIBRADOS A library allowing apps to directly access RADOS, with support for C, C++, Java, Python, Ruby, and PHP

RBD (RADOS Block Device) A reliable and fully-distributed block device, with a Linux kernel client and a QEMU/KVM driver

CEPH FS A POSIX-compliant distributed file system, with a Linux kernel client and support for FUSE

RGW (RADOS Gateway) A bucket-based REST gateway, compatible with S3 and Swift

[Diagram labels: APP (LIBRADOS), APP (RGW), HOST/VM (RBD), CLIENT (CEPH FS)]

Page 14: Openstack with ceph

[Architecture stack from Page 13, repeated with RADOS highlighted]

Page 15: Openstack with ceph

Monitors:
• Maintain the cluster map
• Provide consensus for distributed decision-making
• Must be an odd number
• Do not serve stored objects to clients

OSDs:
• One per disk (recommended)
• At least three in a cluster
• Serve stored objects to clients
• Intelligently peer to perform replication tasks
• Support object classes

Page 16: Openstack with ceph

[Diagram: each OSD runs on top of a filesystem (btrfs, xfs, or ext4) on its own disk; three monitors (M) sit alongside the OSDs]

Page 17: Openstack with ceph

[Diagram: a human administrator interacts with the three monitors]

Page 18: Openstack with ceph

RADOS A Reliable, Autonomous, Distributed Object Store comprised of self-healing, self-managing, intelligent storage nodes

LIBRADOS A library allowing apps to directly access RADOS, with support for C, C++, Java, Python, Ruby, and PHP

RBD (RADOS Block Device) A reliable and fully-distributed block device, with a Linux kernel client and a QEMU/KVM driver

CEPH FS A POSIX-compliant distributed file system, with a Linux kernel client and support for FUSE

RGW (RADOS Gateway) A bucket-based REST gateway, compatible with S3 and Swift

APP APP HOST/VM CLIENT

Page 19: Openstack with ceph

LIBRADOS:
• Provides direct access to RADOS for applications
• C, C++, Python, PHP, Java
• No HTTP overhead
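
As a quick illustration of that "no HTTP overhead" path, here is a minimal librados sketch in Python. It is a sketch only: the /etc/ceph/ceph.conf path and the pool name "data" are assumptions for illustration.

# Minimal librados sketch (Python bindings). Assumes a running cluster,
# a readable /etc/ceph/ceph.conf, and an existing pool named "data".
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()                    # native protocol, no HTTP in the path

ioctx = cluster.open_ioctx('data')   # I/O context bound to the "data" pool
ioctx.write_full('hello-object', b'Hello, RADOS!')  # store an object
print(ioctx.read('hello-object'))    # read it back: b'Hello, RADOS!'

ioctx.close()
cluster.shutdown()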

Page 20: Openstack with ceph

[Diagram: an APP links LIBRADOS and speaks the native protocol directly to the monitors and OSDs]

Page 21: Openstack with ceph

[Architecture stack from Page 13, repeated with RGW highlighted]

Page 22: Openstack with ceph

[Diagram: APPs speak REST to RGW instances, which use LIBRADOS to talk natively to the cluster]

Page 23: Openstack with ceph

RADOS Gateway:
• REST-based interface to RADOS
• Supports buckets, accounting
• Compatible with S3 and Swift applications
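
Because RGW is S3-compatible, stock S3 tooling works against it unchanged. Below is a minimal sketch using the classic Python boto library; the endpoint and credentials are placeholders (real ones come from radosgw-admin user create).

# Talking to RGW through its S3-compatible API with boto.
# Host, port, and keys are placeholders for illustration.
import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',
    aws_secret_access_key='SECRET_KEY',
    host='rgw.example.com', port=80,
    is_secure=False,                  # plain HTTP, just for the example
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)

bucket = conn.create_bucket('my-bucket')      # the bucket lives in RADOS
key = bucket.new_key('hello.txt')
key.set_contents_from_string('Hello, RGW!')   # stored as RADOS objects
print(key.get_contents_as_string())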

Page 24: Openstack with ceph

[Architecture stack from Page 13, repeated with RBD highlighted]

Page 25: Openstack with ceph

[Diagram: a VM runs on a virtualization container that links LIBRBD and LIBRADOS to reach the cluster]

Page 26: Openstack with ceph

RADOS Block Device:
• Storage of virtual disks in RADOS
• Allows decoupling of VMs and containers
• Live migration!
• Images are striped across the cluster
• Boot support in QEMU, KVM, and OpenStack Nova (more on that later!)
• Mount support in the Linux kernel
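
For a feel of the block interface from code, here is a minimal python-rbd sketch; the pool name "volumes" and the image name are assumptions for illustration.

# Creating a 1 GiB RBD image and writing to it (python-rbd bindings).
# Assumes a running cluster, /etc/ceph/ceph.conf, and a "volumes" pool.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('volumes')

rbd.RBD().create(ioctx, 'vm-disk-1', 1024 ** 3)   # 1 GiB, thin provisioned

with rbd.Image(ioctx, 'vm-disk-1') as image:
    image.write(b'...boot sector bytes...', 0)    # write at offset 0
    print(image.size())                           # 1073741824

ioctx.close()
cluster.shutdown()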

Page 27: Openstack with ceph

[Architecture stack from Page 13, repeated]

Page 28: Openstack with ceph

What Makes Ceph Unique? Part one: CRUSH

Page 29: Openstack with ceph

[Diagram: an APP holding an object faces a wall of storage nodes; where should the data go?]

Page 30: Openstack with ceph

[Diagram: the APP connected to every storage node]

Page 31: Openstack with ceph

[Diagram: nodes partitioned by name ranges (A-G, H-N, O-T, U-Z); an object named F* is sent to the A-G node]

Page 32: Openstack with ceph

Objects are mapped to placement groups: hash(object name) % num_pg

Placement groups are mapped to OSDs: CRUSH(pg, cluster state, rule set)
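
Those two lines are the whole placement story, and they can be sketched in a few lines of Python. The snippet below is a toy stand-in, not the real CRUSH algorithm: it only shows how a deterministic, pseudo-random function lets every client compute the same placement independently, with no lookup table and no central metadata server.

# Toy illustration of Ceph's two-step placement -- NOT the real CRUSH code.
import hashlib
import random

NUM_PGS = 64                                # pg count for the pool
OSDS = ['osd.%d' % i for i in range(12)]    # the "cluster state": known OSDs
REPLICAS = 3                                # from the rule set

def object_to_pg(name):
    """Step 1: hash(object name) % num_pg."""
    return int(hashlib.md5(name.encode()).hexdigest(), 16) % NUM_PGS

def pg_to_osds(pg, osds, replicas):
    """Step 2: stand-in for CRUSH(pg, cluster state, rule set).
    Seeded deterministically, so every client computes the same
    answer on its own."""
    rng = random.Random('%d|%s' % (pg, ','.join(osds)))
    return rng.sample(osds, replicas)

pg = object_to_pg('my-object')
print(pg, pg_to_osds(pg, OSDS, REPLICAS))   # identical on every client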

Page 33: Openstack with ceph

[Diagram: objects hashing into placement groups, which CRUSH maps onto OSDs]

Page 34: Openstack with ceph

CRUSH:
• Pseudo-random placement algorithm
• Ensures even distribution
• Repeatable, deterministic
• Rule-based configuration
  - Replica count
  - Infrastructure topology
  - Weighting
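
To make "rule-based configuration" concrete, this is roughly what a rule looks like in a decompiled CRUSH map (classic, pre-Luminous syntax). The replica count itself is set per pool; the rule constrains where those replicas may land.

# A replicated rule from a decompiled CRUSH map (sketch).
rule replicated_ruleset {
    ruleset 0
    type replicated
    min_size 1
    max_size 10
    step take default                    # start at the "default" root bucket
    step chooseleaf firstn 0 type host   # N distinct hosts, one OSD from each
    step emit
}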

Pages 35-38: Openstack with ceph

[Figure-only slides illustrating CRUSH-based placement across the cluster]

Page 39: Openstack with ceph

What Makes Ceph Unique? Part two: thin provisioning

Page 40: Openstack with ceph

[Figure-only slide]

Page 41: Openstack with ceph

HOW DO YOU SPIN UP THOUSANDS OF VMs INSTANTLY AND EFFICIENTLY?

Pages 42-44: Openstack with ceph

[Figure-only slides illustrating copy-on-write cloning of a master image]
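
The mechanism behind those figures is copy-on-write layering: snapshot a golden image once, then clone it cheaply per VM. Here is a python-rbd sketch; the pool and image names are assumptions, and the golden image must have been created with layering enabled.

# Thin-provisioned VM disks via RBD layering: snapshot once, clone many.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('images')

with rbd.Image(ioctx, 'golden') as golden:
    golden.create_snap('base')       # point-in-time snapshot of the image
    golden.protect_snap('base')      # clones require a protected snapshot

# Each clone starts as metadata only; blocks are copied on first write.
for i in range(1000):
    rbd.RBD().clone(ioctx, 'golden', 'base', ioctx, 'vm-%d' % i,
                    features=rbd.RBD_FEATURE_LAYERING)

ioctx.close()
cluster.shutdown()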

Page 45: Openstack with ceph

How Does Ceph Work with OpenStack?

Page 46: Openstack with ceph

• RBD support was initially added in Cactus; features and integration have increased with each subsequent release
• You can use both the Swift (object/blob store) and Keystone (identity service) APIs to talk to RGW
• Cinder (block storage as a service) talks directly to RBD (see the configuration sketch below)
• Nova (cloud computing controller) talks to RBD via the hypervisor
• Coming in Havana: the ability to create a volume from an RBD image via the Horizon UI

Ceph / OpenStack Integration
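
For a concrete picture of the Cinder wiring, a minimal cinder.conf sketch (era-appropriate; the driver path and option names vary by release, and the pool and cephx user names here are assumptions):

# cinder.conf fragment pointing Cinder at RBD (sketch; check your release).
[DEFAULT]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes                       # RADOS pool backing Cinder volumes
rbd_user = volumes                       # cephx user Cinder authenticates as
rbd_secret_uuid = <libvirt secret uuid>  # lets Nova/libvirt attach volumes
glance_api_version = 2                   # enables copy-on-write from Glance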

Page 47: Openstack with ceph
Page 48: Openstack with ceph

What is Inktank? I really like your polo shirt. Please tell me what it means!

Page 49: Openstack with ceph
Page 50: Openstack with ceph

• The majority of Ceph contributors
• Formed by Sage Weil (CTO), the creator of Ceph, in 2011
• Funded by DreamHost and other investors (Mark Shuttleworth, etc.)

Who?

Page 51: Openstack with ceph

• To ensure the long-term success of Ceph
• To help companies adopt Ceph through services, support, training, and consulting

Why?

Page 52: Openstack with ceph

• Guide the Ceph roadmap
  - Hosting a virtual Ceph Design Summit in early May
• Standardize the Ceph development and release schedule
  - Quarterly stable releases, interim releases every 2 weeks
    * May 2013 – Cuttlefish: RBD incremental snapshots!
    * Aug 2013 – Dumpling: disaster recovery (multisite), admin API
    * Nov 2013 – some really cool cephalopod name that starts with an E
• Ensure quality
  - Maintain the Teuthology test suite
  - Harden each stable release via extensive manual and automated testing
• Develop reference and custom architectures for implementation

What?

Page 53: Openstack with ceph

• Inktank is a strategic partner for Dell in Emerging Solutions
• The Emerging Solutions Ecosystem Partner Program is designed to deliver complementary cloud components
• As part of this program, Dell and Inktank provide:
  > Ceph Storage Software
    - Adds scalable cloud storage to the Dell OpenStack-powered cloud
    - Uses Crowbar to provision and configure a Ceph cluster (Yeah, Crowbar!)
  > Professional Services, Support, and Training
    - Collaborative support for Dell hardware customers
  > Joint Solution
    - Validated against Dell Reference Architectures via the Technology Partner program

Inktank/Dell Partnership

Page 54: Openstack with ceph

• Try Ceph and tell us what you think!
  - http://ceph.com/resources/downloads/
  - http://ceph.com/resources/mailing-list-irc/
  - Ask, if you need help
  - Help others, if you can!
• Ask your company to start dedicating dev resources to the project!
  - http://github.com/ceph
• Find a bug (http://tracker.ceph.com) and fix it!
• Participate in our Ceph Design Summit!

What do we want from you??

Page 55: Openstack with ceph

We’re planning the next release of Ceph and would love your input. What features would you like us to include?

iSCSI?

Live Migration?

One final request…

Page 56: Openstack with ceph

Questions?


Ian Colle
Ceph Program Manager, Inktank
[email protected] | @ircolle | www.linkedin.com/in/ircolle | ircolle on freenode
inktank.com | ceph.com