
GOODBYE, XFS: BUILDING A NEW, FASTER STORAGE BACKEND FOR CEPH

SAGE WEIL – RED HAT
2017.09.12


2

OUTLINE

● Ceph background and context
– FileStore, and why POSIX failed us
● BlueStore – a new Ceph OSD backend
● Performance
● Recent challenges
● Current status, future
● Summary


MOTIVATION


4

CEPH

● Object, block, and file storage in a single cluster

● All components scale horizontally

● No single point of failure

● Hardware agnostic, commodity hardware

● Self-manage whenever possible

● Open source (LGPL)

● “A Scalable, High-Performance Distributed File System”

● “performance, reliability, and scalability”


5

● Luminous v12.2.0 released two weeks ago

● Erasure coding support for block and file (and object)

● BlueStore (our new storage backend)


6

CEPH COMPONENTS

● RGW (OBJECT): A web services gateway for object storage, compatible with S3 and Swift
● RBD (BLOCK): A reliable, fully-distributed block device with cloud platform integration
● CEPHFS (FILE): A distributed file system with POSIX semantics and scale-out metadata management
● LIBRADOS: A library allowing apps to directly access RADOS (C, C++, Java, Python, Ruby, PHP)
● RADOS: A software-based, reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes and lightweight monitors


7

OBJECT STORAGE DAEMONS (OSDS)

[Diagram: a cluster of OSD daemons, each running on top of a local filesystem (xfs, btrfs, or ext4) on its own disk, alongside a small set of monitor (M) daemons.]


8

OBJECT STORAGE DAEMONS (OSDS)

[Diagram: the same OSD stack, now showing the FileStore layer inside each OSD, sitting between the OSD and its local filesystem (xfs, btrfs, ext4) on disk.]


9

IN THE BEGINNING

[Timeline, 2004–2016: the earliest ObjectStore backends, FakeStore and EBOFS.]


10

OBJECTSTORE AND DATA MODEL

● ObjectStore

– abstract interface for storing local data

– EBOFS, FakeStore

● Object – “file”

– data (file-like byte stream)

– attributes (small key/value)

– omap (unbounded key/value)

● Collection – “directory”

– actually a shard of a RADOS pool

– Ceph placement group (PG)

● All writes are transactions

– Atomic + Consistent + Durable

– Isolation provided by OSD

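To make this data model concrete, here is a minimal, self-contained C++ sketch of an object store whose writes are expressed as atomic transactions over byte data, attrs, and omap. The names and signatures (Store, Transaction, queue_transaction, ...) are simplified stand-ins for illustration, not Ceph's actual ObjectStore API.

#include <cstdint>
#include <functional>
#include <iostream>
#include <map>
#include <string>
#include <vector>

// Simplified stand-ins for Ceph's collection/object handles (hypothetical names).
using CollectionId = std::string;   // e.g. a PG shard like "0.6_head"
using ObjectId     = std::string;

struct Object {
  std::string data;                          // file-like byte stream
  std::map<std::string, std::string> attrs;  // small key/value attributes
  std::map<std::string, std::string> omap;   // unbounded key/value pairs
};

class Store;

// A transaction is an ordered list of mutations applied as one unit by the store.
struct Transaction {
  std::vector<std::function<void(Store&)>> ops;
  void write(CollectionId c, ObjectId o, uint64_t off, const std::string& bytes);
  void setattr(CollectionId c, ObjectId o, const std::string& k, const std::string& v);
  void omap_setkey(CollectionId c, ObjectId o, const std::string& k, const std::string& v);
};

class Store {
  // collection (PG shard) -> object name -> object
  std::map<CollectionId, std::map<ObjectId, Object>> colls_;
public:
  Object& get(const CollectionId& c, const ObjectId& o) { return colls_[c][o]; }
  // In a real backend this must be atomic, consistent, and durable;
  // the sketch just applies the ops in order.
  void queue_transaction(Transaction& t) {
    for (auto& op : t.ops) op(*this);
  }
};

void Transaction::write(CollectionId c, ObjectId o, uint64_t off, const std::string& bytes) {
  ops.push_back([=](Store& s) {
    auto& data = s.get(c, o).data;
    if (data.size() < off + bytes.size()) data.resize(off + bytes.size());
    data.replace(off, bytes.size(), bytes);
  });
}
void Transaction::setattr(CollectionId c, ObjectId o, const std::string& k, const std::string& v) {
  ops.push_back([=](Store& s) { s.get(c, o).attrs[k] = v; });
}
void Transaction::omap_setkey(CollectionId c, ObjectId o, const std::string& k, const std::string& v) {
  ops.push_back([=](Store& s) { s.get(c, o).omap[k] = v; });
}

int main() {
  Store store;
  Transaction t;                                              // mirrors the journal dump shown later:
  t.write("0.6_head", "object23", 0, std::string(16, 'x'));   // write some bytes
  t.setattr("0.6_head", "object23", "_", "object_info");      // update an attribute
  t.omap_setkey("0.6_head", "pg_log", "0000000005.06", "log entry");  // append to a log
  store.queue_transaction(t);
  std::cout << store.get("0.6_head", "object23").data.size() << " bytes written\n";
}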


11

LEVERAGE KERNEL FILESYSTEMS!

[Timeline, 2004–2016: FakeStore gives way to FileStore + btrfs, then FileStore + XFS; EBOFS is eventually removed.]


12

FILESTORE

● FileStore

– evolution of FakeStore

– PG = collection = directory

– object = file

● Leveldb

– large xattr spillover

– object omap (key/value) data

● /var/lib/ceph/osd/ceph-123/
  current/
    meta/
      osdmap123
      osdmap124
    0.1_head/
      object1
      object12
    0.7_head/
      object3
      object5
    0.a_head/
      object4
      object6
    omap/
      <leveldb files>


13

POSIX FAILS: TRANSACTIONS

● Most transactions are simple

– write some bytes to object (file)

– update object attribute (file xattr)

– append to update log (kv insert)

...but others are arbitrarily large/complex

● Serialize and write-ahead txn to journal for atomicity

– We double-write everything!

– Lots of ugly hackery to make replayed events idempotent

[
  {
    "op_name": "write",
    "collection": "0.6_head",
    "oid": "#0:73d87003:::benchmark_data_gnit_10346_object23:head#",
    "length": 4194304,
    "offset": 0,
    "bufferlist length": 4194304
  },
  {
    "op_name": "setattrs",
    "collection": "0.6_head",
    "oid": "#0:73d87003:::benchmark_data_gnit_10346_object23:head#",
    "attr_lens": {
      "_": 269,
      "snapset": 31
    }
  },
  {
    "op_name": "omap_setkeys",
    "collection": "0.6_head",
    "oid": "#0:60000000::::head#",
    "attr_lens": {
      "0000000005.00000000000000000006": 178,
      "_info": 847
    }
  }
]

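The write-ahead pattern and the replay problem can be sketched generically. This is not FileStore's actual replay code; the sequence-number bookkeeping below stands in for its more involved idempotency guards, but it shows why every byte is written twice.

#include <cstdint>
#include <iostream>
#include <string>
#include <vector>

// Generic write-ahead journaling sketch: every transaction is serialized to a
// journal (first write) before being applied to the backend (second write).
// On restart, entries at or below the backend's persisted "last applied"
// sequence number are skipped so that replay is idempotent.
struct JournalEntry {
  uint64_t seq;
  std::string description;   // stands in for the serialized transaction
};

struct Backend {
  uint64_t last_applied_seq = 0;            // persisted alongside the data
  std::vector<std::string> applied;
  void apply(const JournalEntry& e) {
    applied.push_back(e.description);
    last_applied_seq = e.seq;
  }
};

void replay(Backend& b, const std::vector<JournalEntry>& journal) {
  for (const auto& e : journal) {
    if (e.seq <= b.last_applied_seq) continue;   // already on disk: skip
    b.apply(e);                                  // re-apply the tail
  }
}

int main() {
  std::vector<JournalEntry> journal = {
      {1, "write object23 [0,4MB)"}, {2, "setattrs object23"}, {3, "omap_setkeys pglog"}};
  Backend b;
  b.apply(journal[0]);            // suppose a crash happened after entry 1 reached the fs
  replay(b, journal);             // entries 2 and 3 are replayed, entry 1 is skipped
  std::cout << b.applied.size() << " ops applied in total\n";  // prints 3
}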


14

POSIX FAILS: KERNEL CACHE

● Kernel cache is mostly wonderful

– automatically sized to all available memory

– automatically shrinks if memory needed

– efficient

● No cache implemented in app, yay!

● Read/modify/write vs write-ahead

– write must be persisted

– then applied to file system

– only then can you read it


15

POSIX FAILS: ENUMERATION

● Ceph objects are distributed by a 32-bit hash

● Enumeration is in hash order

– scrubbing

– data rebalancing, recovery

– enumeration via librados client API

● POSIX readdir is not well-ordered

● Need O(1) “split” for a given shard/range

● Build directory tree by hash-value prefix

– split big directories, merge small ones

– read entire directory, sort in-memory

…
DIR_A/
DIR_A/A03224D3_qwer
DIR_A/A247233E_zxcv
…
DIR_B/
DIR_B/DIR_8/
DIR_B/DIR_8/B823032D_foo
DIR_B/DIR_8/B8474342_bar
DIR_B/DIR_9/
DIR_B/DIR_9/B924273B_baz
DIR_B/DIR_A/
DIR_B/DIR_A/BA4328D2_asdf
…
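A toy illustration of the hashed directory scheme above (not FileStore's actual HashIndex code): the path is built from the leading hex nibbles of the 32-bit hash, and "splitting" a directory just pushes its entries one nibble deeper.

#include <cstdint>
#include <cstdio>
#include <iostream>
#include <string>

// Build a FileStore-style hashed path from the object's 32-bit hash.
// depth = how many leading nibbles have been split out into directories.
std::string hashed_path(uint32_t hash, const std::string& name, int depth) {
  std::string path;
  for (int i = 0; i < depth; ++i) {
    int nibble = (hash >> (28 - 4 * i)) & 0xf;   // i-th leading hex nibble
    char buf[8];
    std::snprintf(buf, sizeof(buf), "DIR_%X/", nibble);
    path += buf;
  }
  char leaf[16];
  std::snprintf(leaf, sizeof(leaf), "%08X_", static_cast<unsigned>(hash));
  return path + leaf + name;
}

int main() {
  // depth 1 vs depth 2, matching the DIR_A/... and DIR_B/DIR_8/... example above
  std::cout << hashed_path(0xA03224D3, "qwer", 1) << "\n";   // DIR_A/A03224D3_qwer
  std::cout << hashed_path(0xB823032D, "foo", 2) << "\n";    // DIR_B/DIR_8/B823032D_foo
}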


16

THE HEADACHES CONTINUE

● New FileStore+XFS problems continue to surface

– FileStore directory splits lead to throughput collapse when an entire pool’s PG directories split in unison

– Cannot bound deferred writeback work, even with fsync(2)

– QoS efforts thwarted by deep queues and periodicity in FileStore throughput

– {RBD, CephFS} snapshots triggering inefficient 4MB object copies to create object clones


17

ACTUALLY, OUR FIRST PLAN WAS BETTER

[Timeline, 2004–2016: FakeStore → FileStore + btrfs → FileStore + XFS; EBOFS removed along the way; NewStore and then BlueStore appear at the end of the timeline.]


BLUESTORE


19

BLUESTORE

● BlueStore = Block + NewStore

– consume raw block device(s)

– key/value database (RocksDB) for metadata

– data written directly to block device

– pluggable inline compression

– full data checksums

● Target hardware

– Current gen HDD, SSD (SATA, SAS, NVMe)

● We must share the block device with RocksDB

[Architecture diagram: BlueStore implements the ObjectStore interface; data is written directly to the BlockDevice, while metadata goes to RocksDB, which runs on the BlueFS mini-filesystem via the BlueRocksEnv shim and shares the same BlockDevice.]


20

ROCKSDB: BlueRocksEnv + BlueFS

● class BlueRocksEnv : public rocksdb::EnvWrapper

– passes “file” operations to BlueFS

● BlueFS is a super-simple “file system”

– all metadata lives in the journal

– all metadata loaded in RAM on start/mount

– journal rewritten/compacted when it gets large

– no need to store block free list

[Diagram: BlueFS on-disk layout: a superblock, journal extents, and data extents; the journal is a sequence of metadata records ("file 10", "file 11", "file 12", "file 12", "file 13", "rm file 12", "file 13", ...).]

● Map “directories” to different block devices

– db.wal/ – on NVRAM, NVMe, SSD

– db/ – level0 and hot SSTs on SSD

– db.slow/ – cold SSTs on HDD

● BlueStore periodically balances free space
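A rough sketch of the "all metadata lives in the journal" idea: mounting replays a log of records to rebuild the in-memory file table, so no separate on-disk metadata structures are needed. The record names and layout below are invented for illustration; real BlueFS records also track directories and allocation extents.

#include <cstdint>
#include <iostream>
#include <map>
#include <variant>
#include <vector>

// Journal records: a file was created/updated, or a file was removed.
struct FileUpdate { uint64_t ino; uint64_t size; };
struct FileRemove { uint64_t ino; };
using Record = std::variant<FileUpdate, FileRemove>;

struct BlueFSLikeFS {
  std::map<uint64_t, uint64_t> file_table;   // ino -> size, held entirely in RAM

  // Mount = replay the journal from the start and rebuild all metadata.
  void mount(const std::vector<Record>& journal) {
    file_table.clear();
    for (const auto& rec : journal) {
      if (const auto* up = std::get_if<FileUpdate>(&rec))
        file_table[up->ino] = up->size;        // create or update
      else if (const auto* rm = std::get_if<FileRemove>(&rec))
        file_table.erase(rm->ino);             // remove
    }
  }
};

int main() {
  // mirrors the journal in the diagram: file 10, 11, 12, 12, 13, rm 12, 13
  std::vector<Record> journal = {
      FileUpdate{10, 4096}, FileUpdate{11, 8192}, FileUpdate{12, 100},
      FileUpdate{12, 200},  FileUpdate{13, 300},  FileRemove{12},
      FileUpdate{13, 400}};
  BlueFSLikeFS fs;
  fs.mount(journal);
  std::cout << "files after mount: " << fs.file_table.size() << "\n";  // 10, 11, 13
}

When the journal grows large it is rewritten as a compact snapshot of the current file table, which is the compaction step mentioned above.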


METADATA


22

ONODE – OBJECT

● Per object metadata

– Lives directly in key/value pair

– Serializes to 100s of bytes

● Size in bytes

● Attributes (user attr data)

● Inline extent map (maybe)

struct bluestore_onode_t {
  uint64_t size;
  map<string,bufferptr> attrs;
  uint64_t flags;

  // extent map metadata
  struct shard_info {
    uint32_t offset;
    uint32_t bytes;
  };
  vector<shard_info> shards;

  bufferlist inline_extents;
  bufferlist spanning_blobs;
};



23

CNODE – COLLECTION

● Collection metadata

– Interval of object namespace

Collection key (pool, hash → bits):

C<12,3d3e0000> “12.e3d3” = <19>

Object keys (pool, hash, name, snap):

O<12,3d3d880e,foo,NOSNAP> = …
O<12,3d3d9223,bar,NOSNAP> = …
O<12,3d3e02c2,baz,NOSNAP> = …
O<12,3d3e125d,zip,NOSNAP> = …
O<12,3d3e1d41,dee,NOSNAP> = …
O<12,3d3e3832,dah,NOSNAP> = …

struct spg_t {
  uint64_t pool;
  uint32_t hash;
};

struct bluestore_cnode_t {
  uint32_t bits;
};

● Nice properties
– Ordered enumeration of objects

– We can “split” collections by adjusting collection metadata only

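A highly simplified sketch of why splitting is metadata-only: object keys are already sorted by hash, a cnode only records which hash prefix it owns, and a split just writes child cnodes with one more bit, so no object keys move. (The real key encoding bit-reverses the hash so that a PG's objects are contiguous; that detail is omitted here, and the values below are invented.)

#include <cstdint>
#include <iostream>
#include <map>
#include <string>

// A collection descriptor: a hash prefix (left-aligned) plus its length in bits.
struct cnode {
  uint32_t hash;   // prefix value
  uint32_t bits;   // number of significant leading bits
  bool contains(uint32_t object_hash) const {
    uint32_t mask = bits ? ~uint32_t(0) << (32 - bits) : 0;
    return (object_hash & mask) == (hash & mask);
  }
};

int main() {
  // objects keyed by hash; values are names (illustrative hashes)
  std::map<uint32_t, std::string> objects = {
      {0x3d000000, "foo"}, {0x3d400000, "bar"}, {0x3dc00000, "baz"}};

  cnode parent{0x3d000000, 8};                          // owns prefix 0x3d
  cnode child0{0x3d000000, 9}, child1{0x3d800000, 9};   // split: one more bit

  for (const auto& [h, name] : objects)
    std::cout << name << ": in parent? " << parent.contains(h)
              << ", lands in child " << (child1.contains(h) ? 1 : 0) << "\n";
  // foo and bar stay in child0's range; baz falls into child1 -- no keys were rewritten.
}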


24

BLOB AND SHAREDBLOB

● Blob

– Extent(s) on device

– Checksum metadata

– Data may be compressed

● SharedBlob

– Extent ref count on cloned blobs

struct bluestore_blob_t {
  uint32_t flags = 0;

  vector<bluestore_pextent_t> extents;
  uint16_t unused = 0;  // bitmap

  uint8_t csum_type = CSUM_NONE;
  uint8_t csum_chunk_order = 0;
  bufferptr csum_data;

  uint32_t compressed_length_orig = 0;
  uint32_t compressed_length = 0;
};

struct bluestore_shared_blob_t {
  uint64_t sbid;
  bluestore_extent_ref_map_t ref_map;
};
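A sketch of the per-chunk checksumming the blob metadata describes: the blob is divided into chunks of (1 << csum_chunk_order) bytes with one checksum per chunk, so a small read only verifies the chunks it touches. Plain CRC-32 is used here for brevity; BlueStore's default csum_type is crc32c and csum_data is packed more compactly than a std::vector.

#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <iostream>
#include <vector>

// Standard bitwise CRC-32 (IEEE, reflected polynomial), used here only as a
// stand-in checksum for the example.
static uint32_t crc32(const uint8_t* data, size_t len) {
  uint32_t crc = 0xFFFFFFFFu;
  for (size_t i = 0; i < len; ++i) {
    crc ^= data[i];
    for (int k = 0; k < 8; ++k)
      crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1u)));
  }
  return ~crc;
}

struct blob_csum {
  uint8_t csum_chunk_order = 12;          // 4 KB chunks
  std::vector<uint32_t> csum_data;        // one value per chunk

  size_t chunk_size() const { return size_t(1) << csum_chunk_order; }

  void compute(const std::vector<uint8_t>& blob) {
    csum_data.clear();
    for (size_t off = 0; off < blob.size(); off += chunk_size()) {
      size_t n = std::min(chunk_size(), blob.size() - off);
      csum_data.push_back(crc32(blob.data() + off, n));
    }
  }

  // Verify only the chunks touched by a read of [off, off+len).
  bool verify(const std::vector<uint8_t>& blob, size_t off, size_t len) const {
    size_t first = off / chunk_size();
    size_t last  = (off + len + chunk_size() - 1) / chunk_size();
    for (size_t c = first; c < last && c < csum_data.size(); ++c) {
      size_t coff = c * chunk_size();
      size_t n = std::min(chunk_size(), blob.size() - coff);
      if (crc32(blob.data() + coff, n) != csum_data[c]) return false;
    }
    return true;
  }
};

int main() {
  std::vector<uint8_t> blob(16 * 1024, 0xab);
  blob_csum cs;
  cs.compute(blob);
  blob[5000] ^= 1;                         // simulate a bit flip on disk
  std::cout << "verify: " << (cs.verify(blob, 4096, 4096) ? "ok" : "FAILED") << "\n";
}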


25

EXTENT MAP

● Map object extents → blob extents

● Serialized in chunks

– stored inline in onode value if small

– otherwise stored in adjacent keys

● Blobs stored inline in each shard

– unless it is referenced across shard boundaries

– “spanning” blobs stored in onode key


...

O<,,foo,,> = onode + inline extent map

O<,,bar,,> = onode + spanning blobs

O<,,bar,,0> = extent map shard

O<,,bar,,4> = extent map shard

O<,,baz,,> = onode + inline extent map

...

[Diagram: an object's logical offset range is covered by extent map shard #1 and shard #2, each mapping to blobs; a blob referenced from both shards is stored as a "spanning" blob in the onode key.]
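A minimal sketch of the logical-offset → blob lookup. An in-memory std::map stands in for a decoded extent map shard; the field names are illustrative, not BlueStore's actual types.

#include <cstdint>
#include <iostream>
#include <map>

struct Extent {
  uint64_t length;       // bytes of the object covered by this extent
  int      blob_id;      // which blob holds the data
  uint64_t blob_offset;  // offset of the data inside that blob
};

using ExtentMap = std::map<uint64_t, Extent>;  // key = logical offset

// Find the extent containing a logical offset, if any.
const Extent* lookup(const ExtentMap& em, uint64_t logical_off,
                     uint64_t* offset_in_extent) {
  auto it = em.upper_bound(logical_off);       // first extent starting after the offset
  if (it == em.begin()) return nullptr;
  --it;                                        // candidate extent starting at or before it
  if (logical_off >= it->first + it->second.length) return nullptr;  // hole
  *offset_in_extent = logical_off - it->first;
  return &it->second;
}

int main() {
  ExtentMap em;
  em[0]       = {0x10000, /*blob*/1, 0};       // [0, 64K)  -> blob 1
  em[0x10000] = {0x8000,  /*blob*/2, 0x4000};  // [64K, 96K) -> blob 2 at offset 16K
  uint64_t off_in_ext = 0;
  if (const Extent* e = lookup(em, 0x12000, &off_in_ext))
    std::cout << "blob " << e->blob_id << " at blob offset 0x" << std::hex
              << (e->blob_offset + off_in_ext) << "\n";
}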


DATA PATH


27

DATA PATH BASICS

Terms

● TransContext

– State describing an executing transaction

● Sequencer

– An independent, totally ordered queue of transactions

– One per PG


Three ways to write

● New allocation

– Any write larger than min_alloc_size goes to a new, unused extent on disk

– Once that IO completes, we commit the transaction

● Unused part of existing blob

● Deferred writes

– Commit temporary promise to (over)write data with transaction

● includes data!

– Do async (over)write

– Then clean up temporary k/v pair
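A simplified sketch of that dispatch decision (hypothetical names, not BlueStore's actual write path; the real policy also considers alignment and blob state):

#include <cstdint>
#include <iostream>

// Hypothetical, simplified model of the "three ways to write" decision.
constexpr uint64_t MIN_ALLOC_SIZE = 64 * 1024;  // e.g. 64 KB on HDD (configurable)

enum class WriteStrategy { NewAllocation, UnusedPartOfBlob, Deferred };

// has_unused_space: does an existing blob already have never-written space
// covering this range?  In the sketch the caller simply tells us.
WriteStrategy choose_strategy(uint64_t length, bool has_unused_space) {
  if (length > MIN_ALLOC_SIZE)
    return WriteStrategy::NewAllocation;    // write to fresh extents, then commit the txn
  if (has_unused_space)
    return WriteStrategy::UnusedPartOfBlob; // fill never-written space in place
  return WriteStrategy::Deferred;           // stash data in the kv txn, overwrite async
}

int main() {
  struct Case { uint64_t len; bool unused; } cases[] = {
      {4 * 1024 * 1024, false}, {4 * 1024, true}, {4 * 1024, false}};
  for (auto c : cases)
    std::cout << c.len << " bytes -> strategy "
              << static_cast<int>(choose_strategy(c.len, c.unused)) << "\n";
}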


28

IN-MEMORY CACHE

● OnodeSpace per collection

– in-memory name → Onode map of decoded onodes

● BufferSpace for in-memory Blobs

– all in-flight writes

– may contain cached on-disk data

● Both buffers and onodes have lifecycles linked to a Cache

– TwoQCache – implements 2Q cache replacement algorithm (default)

● Cache is sharded for parallelism

– Collection → shard mapping matches OSD's op_wq

– same CPU context that processes client requests will touch the LRU/2Q lists

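A compact sketch of a sharded cache keyed by collection: shard selection by collection hash keeps a PG's cache traffic on one shard and one lock. A plain LRU is used here to stay short; BlueStore's default replacement policy is the 2Q variant mentioned above.

#include <cstddef>
#include <functional>
#include <iostream>
#include <list>
#include <mutex>
#include <string>
#include <unordered_map>
#include <vector>

class ShardedCache {
  struct Shard {
    std::mutex lock;
    std::list<std::pair<std::string, std::string>> lru;  // front = most recently used
    std::unordered_map<std::string,
        std::list<std::pair<std::string, std::string>>::iterator> index;
    size_t max_entries = 1024;
  };
  std::vector<Shard> shards_;

  Shard& shard_for(const std::string& collection) {
    return shards_[std::hash<std::string>{}(collection) % shards_.size()];
  }

public:
  explicit ShardedCache(size_t nshards) : shards_(nshards) {}

  void put(const std::string& collection, const std::string& key,
           const std::string& value) {
    Shard& s = shard_for(collection);
    std::lock_guard<std::mutex> g(s.lock);
    auto it = s.index.find(key);
    if (it != s.index.end()) s.lru.erase(it->second);
    s.lru.emplace_front(key, value);
    s.index[key] = s.lru.begin();
    if (s.lru.size() > s.max_entries) {          // evict least recently used
      s.index.erase(s.lru.back().first);
      s.lru.pop_back();
    }
  }

  bool get(const std::string& collection, const std::string& key, std::string* out) {
    Shard& s = shard_for(collection);
    std::lock_guard<std::mutex> g(s.lock);
    auto it = s.index.find(key);
    if (it == s.index.end()) return false;
    s.lru.splice(s.lru.begin(), s.lru, it->second);  // move to front
    *out = it->second->second;
    return true;
  }
};

int main() {
  ShardedCache cache(8);
  cache.put("12.e3d3", "onode:foo", "decoded onode bytes");
  std::string v;
  std::cout << (cache.get("12.e3d3", "onode:foo", &v) ? v : "miss") << "\n";
}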


PERFORMANCE


30

HDD: RANDOM WRITE

[Charts: Bluestore vs Filestore HDD random write throughput (MB/s) and IOPS, plotted against IO size (4 to 4096). Series: Filestore, Bluestore (wip-bitmap-alloc-perf), BS (Master-a07452d), BS (Master-d62a4948), Bluestore (wip-bluestore-dw).]


31

HDD: SEQUENTIAL WRITE

[Charts: Bluestore vs Filestore HDD sequential write throughput (MB/s) and IOPS, against the same IO sizes and series as above; the IOPS chart carries a "WTH" annotation.]


32

RGW ON HDD+NVME, EC 4+2

[Chart: 4+2 erasure coding RadosGW write tests (32 MB objects, 24 HDD/NVMe OSDs on 4 servers, 4 clients). Throughput (MB/s) for 1, 4, 128, and 512 buckets with 1 or 4 RGW servers, plus a rados bench baseline. Series: Filestore 512 KB chunks, Filestore 4 MB chunks, Bluestore 512 KB chunks, Bluestore 4 MB chunks.]


33

ERASURE CODE OVERWRITES

● Luminous allows overwrites of EC objects

– Requires two-phase commit to avoid “RAID-hole” like failure conditions

– OSD creates temporary rollback objects

● clone_range $extent to temporary object
● overwrite $extent with new data

● BlueStore can do this easily…

● FileStore literally copies the data (it reads the to-be-overwritten region and writes it back out)


34

BLUESTORE vs FILESTORE
3X vs EC 4+2 vs EC 5+1

[Chart: RBD 4K random write IOPS (16 HDD OSDs, 8 × 32 GB volumes, 256 IOs in flight), comparing Bluestore and Filestore (HDD/HDD), each with 3× replication, EC 4+2, and EC 5+1.]


OTHER CHALLENGES


36

USERSPACE CACHE

● Built ‘mempool’ accounting infrastructure

– easily annotate/tag C++ classes and containers

– low overhead

– debug mode provides per-type (vs per-pool) accounting

● Requires configuration

– bluestore_cache_size (default 1GB)

● not as convenient as auto-sized kernel caches

● Finally have meaningful implementation of fadvise NOREUSE (vs DONTNEED)
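A rough sketch of mempool-style accounting: a tagged allocator adds whatever it hands out to a per-pool counter, so "annotating" a container is just swapping in the allocator. The names below (accounted_allocator, pool_bluestore_cache) are invented for illustration; Ceph's mempool library is more elaborate (per-type accounting in debug mode, sharded counters, and so on).

#include <atomic>
#include <cstddef>
#include <iostream>
#include <vector>

namespace mempool {

struct pool_t {
  std::atomic<long long> allocated_bytes{0};
  std::atomic<long long> allocated_items{0};
};

inline pool_t pool_bluestore_cache;   // one counter set per logical pool

// Allocator that charges every allocation to the pool it is tagged with.
template <typename T, pool_t* Pool>
struct accounted_allocator {
  using value_type = T;
  template <typename U>
  struct rebind { using other = accounted_allocator<U, Pool>; };

  accounted_allocator() = default;
  template <typename U>
  accounted_allocator(const accounted_allocator<U, Pool>&) {}

  T* allocate(std::size_t n) {
    Pool->allocated_bytes += static_cast<long long>(n * sizeof(T));
    Pool->allocated_items += static_cast<long long>(n);
    return static_cast<T*>(::operator new(n * sizeof(T)));
  }
  void deallocate(T* p, std::size_t n) {
    Pool->allocated_bytes -= static_cast<long long>(n * sizeof(T));
    Pool->allocated_items -= static_cast<long long>(n);
    ::operator delete(p);
  }
  template <typename U>
  bool operator==(const accounted_allocator<U, Pool>&) const { return true; }
  template <typename U>
  bool operator!=(const accounted_allocator<U, Pool>&) const { return false; }
};

// "Annotating" a container: just swap in the accounted allocator.
using cache_buffer = std::vector<char, accounted_allocator<char, &pool_bluestore_cache>>;

}  // namespace mempool

int main() {
  mempool::cache_buffer buf(1 << 20);   // roughly 1 MB of cached data
  std::cout << "bluestore_cache bytes: "
            << mempool::pool_bluestore_cache.allocated_bytes.load() << "\n";
}

Per-pool totals like this are what allow the cache to be trimmed back toward bluestore_cache_size, since there is no kernel to shrink it automatically.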


37

MEMORY EFFICIENCY

● Careful attention to struct sizes: packing, redundant fields

● Checksum chunk sizes

– client hints to expect sequential read/write → large csum chunks

– can optionally select weaker checksum (16 or 8 bits per chunk)

● In-memory red/black trees (e.g., std::map<>) are bad for CPU caches

– low temporal write locality → many CPU cache misses, failed prefetches

– use btrees where appropriate

– per-onode slab allocators for extent and blob structs


38

ROCKSDB

● Compaction

– Overall impact grows as total metadata corpus grows

– Invalidates rocksdb block cache (needed for range queries)

– Awkward to control priority on kernel libaio interface

● Many deferred write keys end up in L0

● High write amplification

– SSDs with low-cost random reads care more about total write overhead

● Open to alternatives for SSD/NVM


STATUS


40

STATUS

● Stable and recommended default in Luminous v12.2.z (just released!)

● Migration from FileStore

– Reprovision individual OSDs, let cluster heal/recover

● Current efforts

– Optimizing for CPU time

– SPDK support!

– Adding some repair functionality to fsck


FUTURE


42

FUTURE WORK

● Tiering

– hints to underlying block-based tiering device (e.g., dm-cache, bcache)?

– implement directly in BlueStore?

● host-managed SMR?

– maybe...

● Persistent memory + 3D NAND future?

● NVMe extensions for key/value, blob storage?


43

SUMMARY

● Ceph is great at scaling out
● POSIX was a poor choice for storing objects
● Our new BlueStore backend is so much better
– Good (and rational) performance!
– Inline compression and full data checksums
● Designing internal storage interfaces without a POSIX bias is a win... eventually
● We are definitely not done yet
– Performance!
– Other stuff

● We can finally solve our IO problems ourselves


44

BASEMENT CLUSTER

● 2 TB 2.5” HDDs

● 1 TB 2.5” SSDs (SATA)

● 400 GB SSDs (NVMe)

● Luminous 12.2.0

● CephFS

● Cache tiering

● Erasure coding

● BlueStore

● Untrained IT staff!



THANK YOU!

Sage Weil
CEPH PRINCIPAL ARCHITECT

[email protected]

@liewegas