Software Defined Storage, Big Data and Ceph. What is all the fuss about?
Kamesh Pemmaraju, Sr. Product Mgr, Dell
Neil Levine, Dir. of Product Mgmt, Red Hat
OpenStack Summit Atlanta, May 2014
CEPH
CEPH UNIFIED STORAGE

OBJECT STORAGE: S3 & Swift, Multi-tenant, Keystone, Geo-Replication, Native API
BLOCK STORAGE: OpenStack, Linux Kernel, iSCSI, Clones, Snapshots
FILE SYSTEM: CIFS/NFS, HDFS, Distributed Metadata, Linux Kernel, POSIX
CEPHFS: A distributed file system with POSIX semantics and scale-out metadata management
RGW: A web services gateway for object storage, compatible with S3 and Swift
RBD: A reliable, fully-distributed block device with cloud platform integration
ARCHITECTURE
LIBRADOS: A library allowing apps to directly access RADOS (C, C++, Java, Python, Ruby, PHP)
RADOS: A software-based, reliable, autonomous, distributed object store comprised of self-healing, self-managing, intelligent storage nodes and lightweight monitors
Clients: APP, HOST/VM, CLIENT
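Since librados exposes the cluster directly to applications, a minimal Python sketch of that access path looks roughly like the following (it assumes the python-rados bindings, a reachable ceph.conf and keyring, and a pool named "data", all of which are placeholders):

# Minimal librados sketch; the pool name "data" and the object key are examples.
import rados

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
try:
    ioctx = cluster.open_ioctx('data')              # I/O context on a pool
    ioctx.write_full('hello-object', b'hello from librados')
    print(ioctx.read('hello-object'))               # read the object back
    ioctx.close()
finally:
    cluster.shutdown()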
COMPONENTS

Interfaces: S3/Swift, host/hypervisor, iSCSI, CIFS/NFS, SDK
Storage cluster: monitors and Object Storage Daemons (OSDs) spread across commodity nodes, serving object, block, and file storage
THE PRODUCT
INKTANK CEPH ENTERPRISE: WHAT'S INSIDE?

• Ceph Object and Ceph Block
• Calamari
• Enterprise Plugins (2014)
• Support Services

Subscription-based; priced on capacity; single price for all protocols
USE CASE: OPENSTACK
OpenStack (Keystone, Swift, Cinder, Glance, and Nova APIs) sits on top of the Ceph Object Gateway (RGW) and the Ceph Block Device (RBD), both backed by the Ceph Storage Cluster (RADOS); the hypervisor (QEMU/KVM) attaches RBD devices directly.
USE CASE: OPENSTACK
OpenStack (Keystone, Swift, Cinder, Glance, and Nova APIs) uses the Ceph Object Gateway (RGW) and the Ceph Block Device (RBD); RBD backs Cinder volumes, Nova ephemeral disks, and copy-on-write snapshots of Glance images consumed by the hypervisor (QEMU/KVM).
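As a rough illustration of the copy-on-write behavior the Glance/Cinder integration relies on, here is a hedged sketch with the python-rbd bindings; the pool name, image names, and size are assumptions for the example, not the OpenStack drivers' actual code:

# Hedged sketch of RBD copy-on-write cloning; names and sizes are examples.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('images')
rbd_inst = rbd.RBD()
rbd_inst.create(ioctx, 'golden-image', 10 * 1024**3, old_format=False)  # format 2 is required for cloning
image = rbd.Image(ioctx, 'golden-image')
image.create_snap('base')            # snapshot the base image
image.protect_snap('base')           # a snapshot must be protected before cloning
image.close()
# The clone shares unmodified data with its parent (copy-on-write).
rbd_inst.clone(ioctx, 'golden-image', 'base', ioctx, 'vm-volume-1',
               features=rbd.RBD_FEATURE_LAYERING)
ioctx.close()
cluster.shutdown()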
USE CASE: OPENSTACK
Red Hat Enterprise Linux OpenStack Platform running against the Ceph Storage Cluster (RADOS), Ceph Object Gateway (RGW), and Ceph Block Device (RBD). CERTIFIED!
USE CASE: CLOUD STORAGE
Web application app servers talk S3/Swift to multiple Ceph Object Gateways (RGW), all backed by the same Ceph Storage Cluster (RADOS).
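Because RGW speaks the S3 dialect, a stock S3 client can be pointed straight at the gateway. A hedged sketch with the boto library follows; the endpoint, credentials, and bucket name are placeholders:

# Hedged example of talking S3 to the Ceph Object Gateway with boto.
import boto
import boto.s3.connection

conn = boto.connect_s3(
    aws_access_key_id='ACCESS_KEY',                  # placeholder RGW user key
    aws_secret_access_key='SECRET_KEY',
    host='rgw.example.com', port=80, is_secure=False,
    calling_format=boto.s3.connection.OrdinaryCallingFormat(),
)
bucket = conn.create_bucket('demo-bucket')           # bucket is stored in RADOS
key = bucket.new_key('hello.txt')
key.set_contents_from_string('hello from RGW')
print([k.name for k in bucket.list()])               # list objects in the bucket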
USE CASE: WEBSCALE APPLICATIONS
Web application app servers talk the native protocol (librados) directly to the Ceph Storage Cluster (RADOS), with no gateway in the data path.
ROADMAP: INKTANK CEPH ENTERPRISE
1.2 (Ceph 0.80 "Firefly", May 2014)
• CEPH: Erasure Coding, Cache Tiering, User Quotas, RHEL7 Support
• CALAMARI: UI Management

2.0 (Ceph "H-Release", Q4 2014)
• CEPH: CephFS, HDFS Support, Hyper-V
• PLUGINS: Call Home, Support Analytics

2015
• VMware, NFS/CIFS, iSCSI, RBD Mirroring, SNMP, QoS
USE CASE: PERFORMANCE BLOCK
KVM/RHEV clients access a replicated cache pool placed in front of a replicated backing pool, both inside the Ceph Storage Cluster.
USE CASE: PERFORMANCE BLOCK
In writeback mode, KVM/RHEV clients read and write against the cache tier, and data is flushed down to the replicated backing pool in the Ceph Storage Cluster.
USE CASE: PERFORMANCE BLOCK
In read-only mode, KVM/RHEV writes go straight to the replicated backing pool while reads are served from the cache tier in the Ceph Storage Cluster.
USE CASE: ARCHIVE / COLD STORAGE
Applications write through a replicated cache pool that fronts an erasure-coded backing pool in the Ceph Storage Cluster, keeping a small hot tier in front of inexpensive bulk capacity.
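The Firefly-era layout above can be driven from the ceph CLI; a hedged Python sketch follows. Pool names, placement-group counts, and the choice of writeback mode are illustrative assumptions, not a tuned configuration:

# Hedged sketch: erasure-coded backing pool with a replicated cache tier,
# created by shelling out to the ceph CLI. Names and PG counts are examples.
import subprocess

def ceph(*args):
    subprocess.run(['ceph', *args], check=True)

ceph('osd', 'pool', 'create', 'cold-storage', '128', '128', 'erasure')  # EC backing pool
ceph('osd', 'pool', 'create', 'hot-cache', '128', '128')                # replicated cache pool
ceph('osd', 'tier', 'add', 'cold-storage', 'hot-cache')                 # attach the cache tier
ceph('osd', 'tier', 'cache-mode', 'hot-cache', 'writeback')             # or 'readonly'
ceph('osd', 'tier', 'set-overlay', 'cold-storage', 'hot-cache')         # route client I/O through the cache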
ROADMAP: INKTANK CEPH ENTERPRISE
1.2 (Ceph 0.77 "Firefly", April 2014)
• CEPH: Cache Tiering, Erasure Coding, User Quotas, RHEL7 Support
• CALAMARI: UI Management

2.0 (Ceph 0.87 "H-Release", September 2014)
• CEPH: CephFS, HDFS, Hyper-V
• PLUGINS: Call Home, Support Analytics

2015
• VMware, NFS/CIFS, iSCSI, QoS, SNMP, RBD Mirroring
USE CASE: DATABASES

MySQL / MariaDB on RHEL7 uses the RBD kernel module to attach a Ceph Block Device (RBD) over the native protocol, backed by the Ceph Storage Cluster (RADOS).
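A hedged python-rbd sketch of carving out a block device for a database volume; the pool name, image name, and 100 GiB size are assumptions for the example:

# Hedged sketch: create an RBD image to back a MySQL/MariaDB data volume.
import rados
import rbd

cluster = rados.Rados(conffile='/etc/ceph/ceph.conf')
cluster.connect()
ioctx = cluster.open_ioctx('db-volumes')                 # example pool name
rbd.RBD().create(ioctx, 'mysql-data', 100 * 1024**3)     # 100 GiB block device
ioctx.close()
cluster.shutdown()
# On the RHEL7 host the image would then be mapped with the kernel module
# (e.g. "rbd map db-volumes/mysql-data"), formatted, and mounted for the database.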
USE CASE: HADOOP
Hadoop nodes on RHEL7 mount the Ceph File System (CephFS) with the kernel module, getting POSIX semantics over the native protocol, backed by the Ceph Storage Cluster (RADOS).
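A hedged sketch of mounting CephFS with the kernel client from Python; the monitor address, mount point, and secret file path are placeholder assumptions:

# Hedged sketch: mount CephFS via the RHEL7 kernel client.
import subprocess

subprocess.run([
    'mount', '-t', 'ceph',
    'mon1.example.com:6789:/',       # monitor address and CephFS root (placeholder)
    '/mnt/cephfs',                   # local mount point
    '-o', 'name=admin,secretfile=/etc/ceph/admin.secret',
], check=True)
# Hadoop jobs can then read and write under /mnt/cephfs, or use the
# Hadoop CephFS bindings in place of HDFS.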
INKTANK UNIVERSITY

• Training for proof-of-concept or production users
• Online training for cloud builders and storage administrators
• Instructor-led with a virtual lab environment

Virtual public sessions: May 21–22 (European time zone) and June 4–5 (US time zone)
REGISTER TODAY: www.inktank.com/university
Ceph Reference Architectures and Case Study
Outline

• Planning your Ceph implementation
• Choosing targets for Ceph deployments
• Reference Architecture Considerations
• Dell Reference Configurations
• Customer Case Study
Planning your Ceph Implementation

• Business requirements
– Budget considerations, organizational commitment
– Avoiding lock-in: use open source and industry standards
– Enterprise IT use cases
– Cloud applications/XaaS use cases for massive-scale, cost-effective storage
– Steady-state vs. spike data usage
• Sizing requirements
– What is the initial storage capacity?
– What is the expected growth rate?
• Workload requirements
– Does the workload need high performance, or is it more capacity-focused?
– What are the IOPS/throughput requirements?
– What type of data will be stored?
– Ephemeral vs. persistent data; object, block, or file?
How to Choose Target Use Cases for Ceph
Performance vs. capacity, traditional IT vs. cloud applications:

• Traditional IT: High performance (traditional SAN); Virtualization and private cloud (traditional SAN/NAS); NAS & object content store (traditional NAS)
• Cloud applications: XaaS compute cloud / open-source block (Ceph target); XaaS content store / open-source NAS/object (Ceph target)
Architectural Considerations: Redundancy and Replication

• Trade-off between cost and reliability (use-case dependent)
• Use CRUSH configurations to map out your failure domains and performance pools (a minimal sketch follows this list)
• Failure domains
– Disk (OSD and OS)
– SSD journals
– Node
– Rack
– Site (replication at the RADOS level, block replication; consider latencies)
• Storage pools
– SSD pool for higher performance
– Capacity pool
• Plan for failure domains of the monitor nodes
• Consider failure replacement scenarios, lowered redundancies, and performance impacts
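For the failure-domain bullet above, a minimal sketch of a replicated pool whose CRUSH rule spreads copies across racks, driven from the ceph CLI; the rule name, pool name, PG count, and replica count are illustrative assumptions:

# Hedged sketch: a CRUSH rule with rack as the failure domain, plus a pool using it.
import subprocess

def ceph(*args):
    subprocess.run(['ceph', *args], check=True)

# Simple rule: place replicas on OSDs chosen from distinct racks under the default root.
ceph('osd', 'crush', 'rule', 'create-simple', 'rack-spread', 'default', 'rack')
# Replicated pool bound to that rule, keeping three copies.
ceph('osd', 'pool', 'create', 'vm-images', '256', '256', 'replicated', 'rack-spread')
ceph('osd', 'pool', 'set', 'vm-images', 'size', '3')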
Server Considerations

• Storage node:
– One OSD per HDD, 1–2 GB RAM and roughly 1 GHz of a core per OSD (see the sizing sketch below)
– SSDs for journaling and for the cache tiering feature in Firefly
– Erasure coding will increase usable capacity at the expense of additional compute load
– SAS JBOD expanders for extra capacity (beware of extra latency and oversubscribed SAS lanes)
• Monitor nodes (MON): an odd number for quorum; the service can be hosted on storage nodes for smaller deployments, but larger installations will need dedicated nodes
• Dedicated RADOS Gateway nodes for large object store deployments, and federated gateways for multi-site
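The storage-node rules of thumb above translate into a quick back-of-the-envelope check; the node counts, disk sizes, and replication factor below are example numbers, not a recommendation:

# Sizing sketch from the rules of thumb: one OSD per HDD, 1-2 GB RAM per OSD,
# roughly 1 GHz of a core per OSD. Hardware counts below are examples only.
hdds_per_node = 12
hdd_tb = 4
nodes = 9
replicas = 3

osds = nodes * hdds_per_node
raw_tb = osds * hdd_tb
usable_tb = raw_tb / replicas

print(f"OSDs: {osds}")
print(f"RAM per node: {hdds_per_node * 2} GB (at 2 GB per OSD)")
print(f"CPU per node: ~{hdds_per_node} GHz of aggregate core capacity")
print(f"Raw: {raw_tb} TB; usable at {replicas}x replication: {usable_tb:.0f} TB")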
Networking Considerations

• Dedicated or shared network
– Be sure to involve the networking and security teams early when designing your networking options
– Network redundancy considerations
– Dedicated client and OSD networks
– VLANs vs. dedicated switches
– 1 GbE vs. 10 GbE vs. 40 GbE!
• Networking design
– Spine and leaf
– Multi-rack
– Core fabric connectivity
– WAN connectivity and latency issues for multi-site deployments
Ceph Additions Coming to the Dell Red Hat OpenStack Solution: Pilot Configuration

Components
• Dell PowerEdge R620/R720/R720XD servers
• Dell Networking S4810/S55 switches, 10GbE
• Red Hat Enterprise Linux OpenStack Platform
• Dell ProSupport
• Dell Professional Services
• Available with or without High Availability

Specs at a glance
• Node 1: Red Hat OpenStack Manager
• Node 2: OpenStack Controller (2 additional controllers for HA)
• Nodes 3-8: OpenStack Nova Compute
• Nodes 9-11: Ceph, 12 x 3 TB raw storage
• Network switches: Dell Networking S4810/S55
• Supports ~170-228 virtual machines

Benefits
• Rapid on-ramp to OpenStack cloud
• Scale-up, modular compute and storage blocks
• Single point of contact for solution support
• Enterprise-grade OpenStack software package
Storage bundles

Example Ceph Dell Server Configurations

Type | Size | Components
Performance | 20 TB | R720XD; 24 GB DRAM; 10 x 4 TB HDD (data drives); 2 x 300 GB SSD (journal)
Capacity | 44 TB / 105 TB* | R720XD; 64 GB DRAM; 10 x 4 TB HDD (data drives); 2 x 300 GB SSD (journal); plus MD1200 with 12 x 4 TB HDD (data drives)
Extra Capacity | 144 TB / 240 TB* | R720XD; 128 GB DRAM; 12 x 4 TB HDD (data drives); plus MD3060e (JBOD) with 60 x 4 TB HDD (data drives)
What Are We Doing To Enable?

• Dell, Red Hat and Inktank have partnered to bring a complete enterprise-grade storage solution for RHEL-OSP + Ceph
• The joint solution provides:
– Co-engineered and validated Reference Architecture
– Pre-configured storage bundles optimized for performance or capacity
– Storage enhancements to existing OpenStack bundles
– Certification against RHEL-OSP
– Professional Services, Support, and Training
› Collaborative support for Dell hardware customers
› Deployment services & tools
UAB Case Study
Overcoming a data deluge
Inconsistent data management across research teams hampers productivity
• Growing data sets challenged available resources
• Research data distributed across laptops, USB drives, local servers, HPC clusters
• Transferring datasets to HPC clusters took too much time and clogged shared networks
• Distributed data management reduced researcher productivity and put data at risk
Solution: a storage cloud
Centralized storage cloud based on OpenStack and Ceph

• Flexible, fully open-source infrastructure based on Dell reference design
− OpenStack, Crowbar and Ceph
− Standard PowerEdge servers and storage
− 400+ TBs at less than 41¢ per gigabyte
• Distributed scale-out storage provisions capacity from a massive common pool
− Scalable to 5 petabytes
• Data migration to and from HPC clusters via dedicated 10Gb Ethernet fabric
• Easily extendable framework for developing and hosting additional services
− Simplified backup service now enabled
“We’ve made it possible for users to satisfy their own storage needs with the Dell private cloud, so that their research is not hampered by IT.”
David L. Shealy, PhD
Faculty Director, Research Computing
Chairman, Dept. of Physics
Building a research cloud
Project goals extend well beyond data management
• Designed to support emerging data-intensive scientific computing paradigm
– 12 x 16-core compute nodes
– 1 TB RAM, 420 TBs storage
– 36 TBs storage attached to each compute node
• Virtual servers and virtual storage meet HPC
− Direct user control over all aspects of the application environment
− Ample capacity for large research data sets
• Individually customized test/development/production environments
− Rapid setup and teardown
• Growing set of cloud-based tools & services
− Easily integrate shareware, open source, and commercial software
“We envision the OpenStack-based cloud to act as the gateway to our HPC resources, not only as the purveyor of services we provide, but also enabling users to build their own cloud-based services.”
John-Paul Robinson, System Architect
Research Computing System (Next Gen)
A cloud-based computing environment with high speed access to dedicated and dynamic compute resources
Cloud services layer: a virtualized server and storage computing cloud based on OpenStack, Crowbar and Ceph. The OpenStack nodes connect to the UAB Research Network over 10Gb Ethernet, and to the HPC clusters and HPC storage over DDR and QDR InfiniBand.
THANK YOU!
Contact Information
Reach Kamesh and Neil for additional information:
Dell.com/OpenStack
Dell.com/Crowbar
Inktank.com/Dell
@kpemmaraju
@neilwlevine

Visit the Dell and Inktank booths in the OpenStack Summit Expo Hall