RED HAT CEPH
TECHNICAL OVERVIEW
James Rankin
Senior Solutions Architect
GAME-CHANGING TECHNOLOGY
DESIGNED FOR MODERN WORKLOADS
EXPAND ON DEMAND
WORKS WITH INNOVATIVE TECHNOLOGY LIKE CISCO UCS
LOWER $/TB
BEFORE WE BEGIN
RED HAT STORAGE FOR ON-PREMISE
SCALE OUT ARCHITECTURES USING X86 HARDWARE
SCALE UP CAPACITY
SCALE OUT PERFORMANCE, CAPACITY, AND AVAILABILITY
[Diagram: x86 servers (CPU/MEM), each adding 1TB of capacity as the cluster grows]
THE OPEN SOFTWARE-DEFINED STORAGE PLATFORM FOR THE MODERN HYBRID DATACENTER
TRADITIONAL STORAGE          THE RED HAT WAY
HARDWARE CENTRIC             SOFTWARE CENTRIC
LOCKED DOWN                  DISRUPTIVE ARCHITECTURE
EXPENSIVE                    INEXPENSIVE
PROPRIETARY                  OPEN SOURCE
VENDOR CONTROLLED            COMMUNITY DRIVEN
CEPH & INKTANK
● Red Hat acquired Inktank, the company behind Ceph
● Ceph is scale-out, software-defined storage
– Unlike GlusterFS, Ceph's main focus is block storage
– Architecturally, Ceph is an object store (RADOS)
– Ceph assembles block images (similar to LUNs) from chunks of objects
● GlusterFS and Ceph are complementary technologies that address different use cases
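The "block images from chunks of objects" idea above can be sketched with simple arithmetic: a block image is striped across fixed-size objects, so any byte offset in the image maps to one object and an offset within it. This is a minimal illustration, not the real RBD implementation; the object-naming scheme below is made up.

```python
# Toy sketch of mapping a block-image offset to an object chunk.
# 4 MB is RBD's default object size; the name format here is illustrative.
OBJECT_SIZE = 4 * 1024 * 1024

def image_offset_to_object(image_name, offset):
    """Map a byte offset in a block image to (object name, offset in object)."""
    index = offset // OBJECT_SIZE          # which chunk of the image
    return f"{image_name}.{index:016x}", offset % OBJECT_SIZE

# A read at 9 MB into the image lands 1 MB into the image's third object.
name, off = image_offset_to_object("vm-disk-1", 9 * 1024 * 1024)
print(name, off)  # vm-disk-1.0000000000000002 1048576
```

Because each chunk is an independent RADOS object, reads and writes to different parts of one image spread across many OSDs.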
CEPH ARCHITECTURE
RADOS
A reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes

LIBRADOS
A library allowing apps to directly access RADOS, with support for C, C++, Java, Python, Ruby, and PHP

RBD
A reliable and fully-distributed block device, with a Linux kernel client and a QEMU/KVM driver

CEPH FS
A POSIX-compliant distributed file system, with a Linux kernel client and support for FUSE

RADOSGW
A bucket-based REST gateway, compatible with S3 and Swift

[Diagram: APP over LIBRADOS, APP over RADOSGW, HOST/VM over RBD, CLIENT over CEPH FS — all built on RADOS]
OSD & MONITOR NODES
[Diagram: OSD nodes — each OSD daemon sits on a filesystem (btrfs, xfs, or ext4) on its own disk; monitor nodes (M) run alongside]
OSD & MONITOR NODES
Monitors:
• Maintain cluster membership and state
• Provide consensus for distributed decision-making
• Small, odd number
• These do not serve stored objects to clients

OSDs:
• 10s to 10000s in a cluster
• One per disk (or one per SSD, RAID group…)
• Serve stored objects to clients
• Intelligently peer to perform replication and recovery tasks
CRUSH
[Diagram: object data hashed into placement groups, which CRUSH maps onto OSDs]
hash(object name) % num pg
CRUSH(pg, cluster state, rule set)
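The two formulas above can be sketched in a few lines: step 1 hashes the object name into a placement group, step 2 computes the OSDs for that group from the cluster state. This is a toy stand-in, not the real CRUSH algorithm — `crush_stub()` below uses simple rendezvous-style hashing just to show the key property that placement is calculated, never looked up in a central table.

```python
# Toy two-step placement: hash(object name) % num_pg, then a deterministic
# PG -> OSDs function. NUM_PG, the OSD count, and replica count are made up.
import hashlib

NUM_PG = 128
OSDS = [f"osd.{i}" for i in range(12)]  # hypothetical 12-OSD cluster
REPLICAS = 3

def pg_for(object_name):
    """Step 1: hash(object name) % num_pg."""
    h = int(hashlib.md5(object_name.encode()).hexdigest(), 16)
    return h % NUM_PG

def crush_stub(pg, osds=OSDS, replicas=REPLICAS):
    """Step 2 stand-in: deterministically rank OSDs for a PG, take the first 3."""
    ranked = sorted(osds, key=lambda o: hashlib.md5(f"{pg}:{o}".encode()).hexdigest())
    return ranked[:replicas]

pg = pg_for("my-object")
placement = crush_stub(pg)
# Any client repeating the calculation gets the same answer — no lookup service.
assert crush_stub(pg) == placement
print(pg, placement)
```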
CRUSH
• Pseudo-random placement algorithm
• Fast calculation, no lookup
• Repeatable, deterministic
• Statistically uniform distribution
• Stable mapping
– Limited data migration on change
• Rule-based configuration
– Infrastructure topology aware
– Adjustable replication
– Weighting
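The "stable mapping / limited data migration" bullets can be demonstrated with a toy placement function (rendezvous hashing, standing in for the real CRUSH code): when one OSD is added, only the placement groups whose new best choice is that OSD get remapped, rather than a wholesale reshuffle.

```python
# Toy demonstration of stable mapping: grow a 10-OSD cluster by one OSD
# and count how many of 1024 PGs move. Not the real CRUSH algorithm.
import hashlib

def place(pg, osds):
    """Deterministically pick the primary OSD for a placement group."""
    return max(osds, key=lambda o: hashlib.md5(f"{pg}:{o}".encode()).hexdigest())

before = [f"osd.{i}" for i in range(10)]
after = before + ["osd.10"]  # add one OSD

moved = sum(place(pg, before) != place(pg, after) for pg in range(1024))
# Only PGs whose winner is now osd.10 move — roughly 1/11 of them.
print(f"{moved}/1024 placement groups remapped")
```

Every remapped PG lands on the new OSD, which is exactly the data that must migrate to fill it; everything else stays put.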
CALAMARI
● Graphical management tool for Ceph Enterprise
● Quickly determine cluster health, utilization, etc.
● Red Hat open sourced Calamari (only ~30 days after acquisition)
USE CASE: WEB APP STORAGE
WEB APPLICATION
APP SERVER
APP SERVER
APP SERVER
CEPH STORAGE CLUSTER (RADOS)
CEPH OBJECT GATEWAY (RGW)
APP SERVER
S3/Swift S3/Swift S3/Swift librados
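In this pattern the app servers talk to RGW over plain HTTP using the S3 (or Swift) API. The sketch below only builds such a request with the standard library to show its shape; the endpoint and bucket are made up, nothing is sent, and a real deployment would sign requests (e.g. AWS Signature v4 via an S3 SDK).

```python
# Sketch: what a bucket-based PUT to the Ceph Object Gateway looks like.
# RGW_ENDPOINT and the bucket/key names are hypothetical; the request is
# constructed but never sent, and no authentication signature is added.
import urllib.request

RGW_ENDPOINT = "http://rgw.example.com"  # hypothetical gateway host

def put_object_request(bucket, key, data):
    """Build an S3-style path-addressed PUT request for one object."""
    req = urllib.request.Request(
        url=f"{RGW_ENDPOINT}/{bucket}/{key}",
        data=data,
        method="PUT",
    )
    req.add_header("Content-Type", "application/octet-stream")
    return req

req = put_object_request("user-uploads", "avatar.png", b"...")
print(req.get_method(), req.full_url)  # PUT http://rgw.example.com/user-uploads/avatar.png
```

The fourth app server in the diagram skips the HTTP layer entirely and uses librados to talk to RADOS directly.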
USE CASE: COLD STORAGE
APPLICATION
CACHE POOL (REPLICATED)
BACKING POOL (ERASURE CODED)
CEPH STORAGE CLUSTER
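The point of pairing a replicated cache pool with an erasure-coded backing pool is raw-capacity efficiency for cold data, and the arithmetic is simple: n-way replication stores n raw bytes per user byte, while a k-data/m-coding erasure profile stores (k+m)/k. The pool parameters below are illustrative, not defaults from the slide.

```python
# Raw-capacity overhead: 3x replication vs. an example k=8, m=3 EC profile.
def replicated_overhead(copies):
    """Raw bytes stored per byte of user data with n-way replication."""
    return float(copies)

def ec_overhead(k, m):
    """Raw bytes per user byte with k data chunks + m coding chunks."""
    return (k + m) / k

print(replicated_overhead(3))        # 3.0x raw capacity
print(round(ec_overhead(8, 3), 3))   # 1.375x raw capacity
```

The trade-off is that erasure-coded reads and writes cost more CPU and I/O, which is why the hot working set sits in the replicated cache pool.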
USE CASE: RHEL BLOCK STORAGE
RHEL
kmod-rbd
CEPH STORAGE CLUSTER (RADOS)
CEPH BLOCK DEVICE (RBD)
RHEL
kmod-rbd
RHEL
kmod-rbd
USE CASE: OPENSTACK
RHEL-OSP
KEYSTONE SWIFT CINDER GLANCE NOVA
CEPH STORAGE CLUSTER (RADOS)
CEPH OBJECT GATEWAY (RGW)
CEPH BLOCK DEVICE (RBD)
KVM
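As one concrete piece of the wiring above, Cinder is pointed at the cluster through its RBD driver. This is a minimal, illustrative backend section; the pool name, user, and secret UUID are placeholders for a real deployment's values.

```ini
# Illustrative cinder.conf backend section for Ceph RBD (values are placeholders)
[ceph]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_user = cinder
rbd_secret_uuid = <libvirt secret UUID>
```

With this in place, Nova/KVM attaches Cinder volumes as RBD images directly, so no data passes through an intermediate volume server.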
THANK YOU
“...A DISRUPTIVE AND UNSTOPPABLE FORCE.”
–IDC REPORT