Ceph Day London 2014 – Best Practices for Ceph-powered Implementations of Storage-as-a-Service


Paul Brook and Steve Smith, Dell


Dell and CEPH

Steve Smith: Steve_l_smith@dell.com – Twitter @SteveSAtDell

Paul Brook: Paul_brook@dell.com – Twitter @PaulBrookAtDell

Ceph Day London – October 22nd 2014

Agenda

• Why we are here – we sell CEPH support

• You need hardware to sit this on. Here are some ideas

• Some best practices shared with CEPH colleagues this year

• A concept – Research Data (we would like your input)


Dell is a certified reseller of Red Hat-Inktank Services, Support and Training.

• Need to access and buy Red Hat Services & Support?

15+ Years of Red Hat and Dell

• Red Hat 1-year / 3-year subscription packages
  – Inktank Pre-Production subscription
  – Gold (24x7) subscription

• Red Hat Professional Services
  – Ceph Pro Services Starter Pack
  – Additional service-day options
  – Ceph training from Red Hat


Or… you can download CEPH for free


Components Involved

http://docs.openstack.org/training-guides/content/module001-ch004-openstack-architecture.html


Dell OpenStack Cloud Solution



Best Practices (well… some)


With acknowledgement and thanks to Kyle and Mark at InkTank

Planning your Ceph Implementation

• Business requirements
  – Budget considerations, organisational commitment
  – Replacing enterprise SAN/NAS for cost saving
  – XaaS use cases for massive-scale, cost-effective storage
  – Avoid lock-in – use open source and industry standards
  – Steady-state vs. spike data usage

• Sizing requirements
  – What is the initial storage capacity?
  – What is the expected growth rate?

• Workload requirements
  – Does the workload need high performance, or is it more capacity-focused?
  – What are the IOPS/throughput requirements?
  – What applications will run on the Ceph cluster?
  – What type of data will be stored?


Architectural considerations – redundancy and replication

• Trade-off between cost and reliability (use-case dependent)

• How many node failures can be tolerated?

• In a multi-rack scenario, should a whole rack failure be tolerated?

• Is there a need for multi-site data replication?

• Erasure coding gives more usable capacity from the same raw disks, at the cost of more CPU load (see the sketch below)

• Plan for redundancy of the monitor nodes – distribute across fault zones

• 3 copies ≈ 8 nines (99.999999%) availability – less than 1 second of downtime per year

• Many, many things affect performance – in Ceph, above Ceph, and below Ceph
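The two options can be compared with a minimal CLI sketch, assuming a Firefly-era (2014) cluster; the pool and profile names are hypothetical:

    # Replicated pool: 3 copies (the 8-nines case above), 3x raw overhead
    ceph osd pool set mypool size 3

    # Erasure-coded pool: k=4 data + m=2 coding chunks per object,
    # i.e. 1.5x raw overhead instead of 3x, at the cost of extra CPU for encoding
    ceph osd erasure-code-profile set ec42 k=4 m=2 ruleset-failure-domain=rack
    ceph osd pool create ecpool 256 256 erasure ec42

With k=4, m=2 the pool survives any 2 lost chunks, so a whole rack can fail provided the chunks are spread across at least 6 racks.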


Understanding Your Workload


CEPH Architecture Refresh


Understanding Ceph (1)


Understanding Ceph (2)


Understanding The Storage Server


Multi-Site Issues

• Within a CEPH cluster, RADOS enforces strong consistency

• The writer waits for the ACK, which is sent only after the primary copy, the replicated copies, and the journals have all been written

• On a WAN this might extend latencies unacceptably.

• Alternatives

• For S3/Swift systems, federate gateways between CEPH clusters; cross-cluster replication is eventually consistent

• For remote backup, use RBD with sync agents and incremental snapshots (see the sketch below)
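A minimal sketch of the incremental-snapshot route, using the standard rbd export-diff / import-diff tools; the pool, image, snapshot and host names are hypothetical, and the destination image is assumed to already exist with the same size:

    # Seed the backup site with everything up to a base snapshot
    rbd snap create rbd/vol1@base
    rbd export-diff rbd/vol1@base - | ssh backup-site rbd import-diff - rbd/vol1

    # Steady state: only the changes since the last snapshot cross the WAN
    rbd snap create rbd/vol1@daily1
    rbd export-diff --from-snap base rbd/vol1@daily1 - \
      | ssh backup-site rbd import-diff - rbd/vol1

Because the diff is applied asynchronously, the backup site lags the primary but never blocks client writes.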


Recommended Storage Server Configurations

(CEPH and InkTank recommendations are a bit out of date)

• CPU – 1 core-GHz per OSD, so a 2 x 8-core Intel Haswell 2.0GHz server could support 32 OSDs; less for AMD (see the sizing sketch below)

• Memory – 2GB per OSD; must be ECC

• Disk Controller – SAS or SATA without expander for data and journal; RAID 1 for operating system disks

• Data Disks – size doesn't matter: rebuilds happen across hundreds of placement groups; 12 disks seems a good number

• Journal Disks – SSDs, write-optimised
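As a sanity check, the rules of thumb above can be turned into a rough sizing sketch; the server shape is the one assumed on this slide:

    # 1 core-GHz per OSD, 2GB ECC RAM per OSD
    SOCKETS=2; CORES=8; GHZ=2.0
    OSDS=$(echo "$SOCKETS * $CORES * $GHZ" | bc)   # 2 x 8 x 2.0 = 32 OSDs
    RAM=$(echo "$OSDS * 2" | bc)                   # 64GB minimum
    echo "max OSDs: $OSDS, RAM needed: ${RAM}GB ECC"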


Intel Processors


Memory Considerations

[Diagram: DIMM channel layout, channels C0–C7 across two sockets/NUMA nodes]

• Always populate all channels – in groups of 8; anything less loses significant memory bandwidth

• Speed drops at 3 DIMMs per channel (3DPC), sometimes at 2DPC; use dual-rank RDIMMs for maximum performance and expandability

• Pin each process and its data to the same NUMA node (see the sketch below), but let OS processes float – or try Hyper-Threading

• Sensible memory is now 64GB (8 x 8GB RDIMMs)
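A minimal sketch of the pinning, assuming numactl is installed; the NUMA node and OSD id are hypothetical:

    # Bind OSD 12's CPU scheduling and memory allocation to NUMA node 0,
    # ideally the node closest to its disk controller and NIC
    numactl --cpunodebind=0 --membind=0 ceph-osd -i 12 --cluster ceph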

DreamObjects Hardware Specs

[Diagram: 90x storage nodes, 4x RADOS gateways, 3x management nodes, 2x load balancers]

Storage Node (x90): Dell PowerEdge R515 – 6-core AMD CPU, 32GB RAM, 2x 300GB SAS drives (OS), 12x 3TB SATA drives, 2x 10GbE, 1x 1GbE, IPMI

Management Node (x3): Dell PowerEdge R415 – 2x 1TB SATA, 1x 10GbE


Ceph Gateway Server

• Gateway does CRC32 and MD5 checksumming
  – Now included in Intel AVX2 on Haswell

• 64GB memory (minimum sensible)

• 2 separate 10GbE NICs, 1 for client comms, 1 for store/retrieve

• Make sure you have enough file handles – the default is 100; you should start at 4096! (see the sketch below)

• Load balancing with multiple gateways
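A minimal sketch of raising the limits; the gateway is assumed to be fronted by a web server here, and the "apache" user name is an assumption that depends on your distribution:

    # /etc/security/limits.conf – per-user open-file limits
    apache  soft  nofile  4096
    apache  hard  nofile  8192

    # ceph.conf – open-file ceiling Ceph sets for its own daemons
    [global]
        max open files = 131072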


Ceph Cluster Monitors

• Best practice: deploy the monitor role on dedicated hardware
  – Not resource-intensive but critical – stewards of the cluster
  – Using separate hardware ensures no contention for resources

• Make sure monitor processes are never starved for resources
  – If running a monitor process on shared hardware, fence off resources

• Deploy an odd number of monitors (3 or 5)
  – An odd number is needed for quorum voting
  – Clusters < 200 nodes work well with 3 monitors
  – Larger clusters may benefit from 5
  – The main reason to go to 7 is redundancy across fault zones

• Add redundancy to monitor nodes as appropriate (see the sketch below)
  – Make sure the monitor nodes are distributed across fault zones
  – Consider refactoring fault zones if you need more than 7 monitors
  – Build in redundant power, cooling, disk
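A minimal ceph.conf sketch for a 3-monitor quorum with one monitor per fault zone; the host names and addresses are hypothetical:

    [global]
        # one monitor in each of three racks/fault zones
        mon initial members = mon-rack1, mon-rack2, mon-rack3
        mon host = 10.0.1.11, 10.0.2.11, 10.0.3.11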



Networking Overview

• Plan for low latency and high bandwidth

• Use 10GbE switches within the rack

• Use 40GbE uplinks between racks in the datacentre

• Use more bandwidth at the backend compared to the front end

• Enable Jumbo frames

• Replication is done by the storage nodes, not the client

• The client writes to the primary OSD and its journal

• The primary writes to the replicas over the back-end network

• The back-end network also carries recovery and rebalancing traffic (see the sketch below)
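A minimal sketch of the front-end/back-end split in ceph.conf, plus jumbo frames on the cluster-facing NIC; the subnets and device name are hypothetical, and the switch ports must be set to MTU 9000 as well:

    [global]
        public network  = 10.10.0.0/24   # client traffic
        cluster network = 10.20.0.0/24   # replication, recovery, rebalancing

    # enable jumbo frames on the cluster-facing interface
    ip link set dev eth1 mtu 9000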



Potential Dell Server Hardware Choices

• Rackable Storage Node
  – Dell PowerEdge R720XD, or new 13G R730/R730XD

• Bladed Storage Node
  – Dell PowerEdge C8000XD disk sled and PowerEdge C8220 CPU sled
  – 2x Xeon E5-2687 CPU, 128GB RAM
  – 2x 400GB SSD drives (OS and optionally journals)
  – 12x 3TB NL-SAS drives
  – 2x 10GbE, 1x 1GbE, IPMI

• Monitor Node
  – Dell PowerEdge R415
  – 2x 1TB SATA
  – 1x 10GbE



Mixed Use Deployments

• For simplicity, dedicate hardware to a specific role
  – That may not always be practical (e.g., small clusters)
  – If needed, multiple functions can be combined on the same hardware

• Multiple Ceph roles (e.g., OSD+RGW, OSD+MDS, Mon+RGW)
  – Balance IO-intensive with CPU/memory-intensive roles
  – If both roles are relatively light (e.g., Mon and RGW) they can be combined

• Multiple applications (e.g., OSD+Compute, Mon+Horizon)
  – In an OpenStack environment, components may need to be mixed
  – Follow the same logic of balancing IO-intensive with CPU-intensive roles


Super-size CEPH

• Lots of disk space

• CEPH Rules apply

• Great for cold dark storage

• Surprisingly popular with Customers

• 3PB raw in a rack!

R730/R730XD or R720/R720XD

PowerVault JBOD


Other Design Guidelines

• Use simple components; don't buy more than you need
  – Save money on RAID, redundant NICs, and power supplies – buy more disks instead

• Keep networks as flat as possible (East-West)
  – VLANs don't scale
  – Use software-defined networking for multi-tenancy in the cloud

• Design the fault zones carefully for no single point of failure (see the sketch below)
  – Rack
  – Row
  – Datacentre
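A minimal sketch of expressing a rack-level fault zone in CRUSH; the rule and pool names are hypothetical, and it assumes hosts are already grouped under rack buckets in the CRUSH map:

    # create a rule that places each replica in a distinct rack
    ceph osd crush rule create-simple rack-ha default rack

    # point a pool at the new rule (id as shown by 'ceph osd crush rule dump')
    ceph osd pool set mypool crush_ruleset 1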



Research Data: Beta Slides



Concept: Get started?

Keep, Search, Collaborate, Publish

Research Data & Publications

Digital – Pre-Publication (any format?)

Digital – Other (any format?)

How to tag metadata?

How long to store?

Which file types to store?

How to search?

Data security?

How to collaborate?


"Holding a tin cup below a Niagara Falls of data!"

Data keeps on coming… and coming… and coming…

Has anyone else had this problem and already solved it?

Open source is the best protection for longevity. "Web 2.0/social has already solved the scale-storage problem."



Solve problems one at a time

[Diagram labels: OpenStack Layer (Access); CEPH Storage; Identity Management; Governance; Policy & Control; PUBLISH: existing publishing routes; Start Here]

