
Page 1: SUSE Enterprise Storage Roadmap

SUSE Enterprise Storage Roadmap

Andreas Jaeger

Senior Product Manager

[email protected]

Lars Marowsky-Brée

Distinguished Engineer

[email protected]

Page 2

Agenda

1. SUSE Overall Strategy

2. SUSE Enterprise Storage Roadmap

3. Q&A

Page 3

SUSE Overall Strategy

Page 4

Open Source At The Heart Of Our SDI And Application Delivery Approach

[Stack diagram:]
• Container and Application Platforms: Platform as a Service, Container Management
• Software-Defined Infrastructure: Multimodal Operating System; Infrastructure & Lifecycle Management
• Physical Infrastructure: Compute, Storage, Networking; Public Cloud
• Tooling shown: Uyuni, Salt, Prometheus/Grafana

Page 5

SUSE Enterprise Storage — Architecture

[Architecture diagram: client servers (Windows, Linux, Unix) running applications, file shares, and object storage clients connect over the cluster network to RADOS (the common object store) through three interfaces: block devices (RBD, iSCSI), object storage (S3, Swift), and the file interface (CephFS). Storage servers run OSDs; monitor nodes (MON) maintain cluster state.]
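The three interfaces in the diagram all sit on top of the same RADOS cluster. As an illustrative sketch of that unification (pool, image, and object names below are placeholders, and the commands assume a running cluster with admin credentials):

```shell
# Block: create a 1 GiB RADOS Block Device image in a pool
rbd create mypool/myimage --size 1024

# Object: store a raw object directly in RADOS
rados -p mypool put myobject ./file.txt

# File: show the state of the CephFS file system
ceph fs status
```

S3 and Swift clients reach the same object store through the RADOS Gateway rather than talking to RADOS directly.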

Page 6

SUSE Enterprise Storage

Last 12-Month Accomplishments

• Launched SUSE Enterprise Storage 6 in June 2019 (the 9th release)
• SUSE team driving the Ceph Dashboard upstream project
• Latest upstream release: Ceph Nautilus; 8 of the top 20 Ceph contributors are from SUSE
• Early customers coming back with much larger deployments:
  • One of our largest APJ customers quadrupled their deployment over the last two years, to 40 PB
  • Our first customer has grown their deployment to 15 times its initial size
• Revenue for FY19 doubled relative to FY18

[Diagram: monitor nodes, a management node, and storage nodes forming a unified cluster providing object storage, block storage, and a file system. Attributes: unified open source software on x86 and Arm; resilient & self-healing; high performance; massively scalable; public-cloud-like pricing; hardware flexibility; reduced IT costs.]

Page 7

Use Case Focused Solutions

Partnership ecosystem:
• Backup to disk solution (Data Protector)
• Compliant archives
• File sync and share appliance
• HPC storage

Certified reference architectures:
• Cloud & containers: SUSE OpenStack Cloud, SUSE CaaS Platform + SUSE Enterprise Storage
• Analytics: CSPs + SUSE Enterprise Storage

(Spanning classic workloads through cloud-native workloads.)

Page 8

SUSE Enterprise Storage Roadmap

Page 9

SUSE Enterprise Storage 7

Outlook

Deployment changes

Overview

Rook

Cephadm

Dashboard

Octopus

Windows Driver

Page 10

SUSE Enterprise Storage Outlook

Page 11

Deployment: Two Options For SUSE Enterprise Storage 7

Side-by-side stacks:
• SES 7 Rook: Ceph / Rook / CaaSP / SLES / hardware
• SES 7 cephadm: Ceph / cephadm / SLES / hardware

• Same code base for both options
• Shared container images for both stacks
• The Ceph Dashboard supports both stacks, thanks to ceph-orchestrator

Page 12

Deployment: Update Stack

Side-by-side stacks:
• SES 6 DeepSea: Ceph / DeepSea (Salt) / SLES / hardware
• SES 7 cephadm: Ceph / cephadm / SLES / hardware

• SES 7 will use a new stack based on an upstream framework (cephadm) to deploy containers and configure them
• Upstream community work replaces three install stacks (DeepSea, ceph-deploy, ceph-ansible) with a common one
• The new framework will be the base for additional convenient deployment and day-2 operations

Page 13

Deployment: Update Stack

Side-by-side stacks:
• SES 6 DeepSea: Ceph / DeepSea (Salt) / SLES / hardware
• SES 7 Rook: Ceph / Rook / CaaSP / SLES / hardware

• Runs on Kubernetes using the upstream Rook framework
• Kubernetes enables self-managing, self-scaling, and self-healing operations
• Runs containers
• Allows colocation of storage and workload in a single cluster

Page 14

Ceph & Kubernetes

Page 15

What Is Cloud Native

Definition¹: “Cloud native technologies empower organizations to build and run scalable applications in modern, dynamic environments such as public, private, and hybrid clouds. Containers, service meshes, microservices, immutable infrastructure, and declarative APIs exemplify this approach. These techniques enable loosely coupled systems that are resilient, manageable, and observable. Combined with robust automation, they allow engineers to make high-impact changes frequently and predictably with minimal toil.”

Features: container packaged; microservice oriented; dynamically managed.

¹ CNCF Cloud Native Definition v1.0, see https://github.com/cncf/foundation/blob/master/charter.md

Page 16

Ceph And Cloud Native?

Two complementary views:
• Ceph provides persistent storage for cloud-native workloads: Kubernetes consumes Ceph through the Container Storage Interface
• Ceph as a cloud-native application: Ceph itself runs on Kubernetes (cloud native = container packaged, dynamically managed, microservices oriented)
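From the application side, the Container Storage Interface path looks like an ordinary volume claim against a Ceph-backed StorageClass. A minimal sketch, assuming a cluster where ceph-csi is installed; the StorageClass name `rook-ceph-block` is a hypothetical example (an RBD-backed class such as Rook typically creates), not a value from the deck:

```shell
# Claim a 10 GiB Ceph RBD volume for a workload
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-rbd-claim
spec:
  accessModes:
    - ReadWriteOnce             # RBD volumes are single-writer block devices
  resources:
    requests:
      storage: 10Gi
  storageClassName: rook-ceph-block   # hypothetical Ceph RBD StorageClass
EOF
```

Any pod that mounts `ceph-rbd-claim` then gets a dynamically provisioned RBD image without knowing anything about Ceph.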

Page 17

Ceph as Cloud Native on Kubernetes

Benefits:
• Hyper-convergence: colocation of storage and compute; provides file, block, and object storage to Kubernetes containers
• Use of K8s facilities: self-managing, self-scaling, and self-healing storage services, built using Kubernetes facilities

Page 18

Run Ceph on Kubernetes

Ceph And Rook

• Rook is a cloud-native storage orchestrator
• Rook expands Kubernetes to run storage: deployment, configuration, provisioning, scaling, upgrading
• Rook is not in the data path
• Rook automates storage management tasks and manages the Ceph daemons
• Rook is a CNCF incubating project and part of the Ceph ecosystem
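Concretely, "expanding Kubernetes to run storage" means a Ceph cluster is declared as a Kubernetes custom resource that the Rook operator reconciles. A minimal, illustrative CephCluster spec, assuming the Rook operator is already installed; the image tag and paths are assumptions, not values from the deck:

```shell
# Declare a Ceph cluster; the Rook operator deploys and manages the daemons
kubectl apply -f - <<'EOF'
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v15        # a Ceph Octopus container image (illustrative tag)
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3                    # three monitors for quorum
  storage:
    useAllNodes: true           # let Rook discover eligible nodes...
    useAllDevices: true         # ...and empty devices for OSDs automatically
EOF
```

The declarative spec is what makes the self-managing behavior possible: Rook continuously drives the running cluster toward this desired state.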

Page 19

Rook Architecture

Page 20

Disaggregated: Kubernetes + Ceph

The cluster is split:

• Dedicated storage nodes for Ceph

• Dedicated "compute" nodes for container workloads

[Diagram: master nodes (1, 3, 5, …); worker nodes (>1) dedicated to Ceph; worker nodes (>1) dedicated to containers.]

Page 21

Co-located: Kubernetes + Ceph

• All nodes run both Ceph and normal containers
• The most common deployment co-locates storage and compute

[Diagram: master nodes (1, 3, 5, …); worker nodes (>1) running both Ceph and containers.]

Page 22

Two Set-Ups: Separate Or Hyperconverged

Separate clusters:
• A single Ceph cluster provides storage to multiple K8s clusters and clients
• The Ceph cluster is set up without Kubernetes in SES 7

Hyperconverged cluster:
• Hyperconverged Ceph and K8s cluster
• Provides storage only to the combined cluster, not to external clients

Page 23

Outlook: Kubernetes Everywhere (Beyond SES 7)

Separate clusters:
• A single Ceph cluster provides storage to multiple K8s clusters and external clients
• The Ceph cluster itself runs on Kubernetes (SES 7: runs on SUSE CaaS Platform)
• Ceph/Rook will provide RBD, CephFS, and an S3 object store to Kubernetes in the same cluster

Plan for SES 7: provide S3 object storage externally. Other external interfaces later:
• Block storage (RBD, iSCSI)
• File storage (CephFS, NFS, Samba)

[Diagram: K8s storage cluster serving a separate K8s cluster.]
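Externally provided S3 object storage, as planned for SES 7, would be consumed like any other S3 endpoint. A hedged sketch; the user ID, endpoint URL, bucket name, and file are hypothetical placeholders:

```shell
# On the cluster: create an S3 user for the RADOS Gateway
radosgw-admin user create --uid=demo --display-name="Demo user"

# From an external client: talk to RGW with any S3-compatible tool, e.g. the AWS CLI
aws --endpoint-url http://rgw.example.com s3 mb s3://demo-bucket
aws --endpoint-url http://rgw.example.com s3 cp backup.tar s3://demo-bucket/
```

The external client needs only the RGW endpoint and the generated access/secret keys, not any knowledge of the Kubernetes cluster behind it.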

Page 24

SUSECON Sessions
• Rook Ceph components and architecture [HOL-1196]
• SUSE Enterprise Storage 6 on SUSE CaaS Platform [CAS-1150]

Page 25

Cephadm And Octopus

Page 26

Cephadm Based Installation Vision

• Common installation framework, replacing DeepSea, ceph-ansible, and ceph-deploy
• ceph-bootstrap: small Salt layer for OS dependencies, bootstrapping, and OS upgrades
• cephadm will give a complete “day 2” management experience:
  • Start with a minimal bootstrap process: one monitor, one manager
  • Bring up the Ceph Dashboard
  • The rest is day-2 operation, done via CLI or GUI: adding OSDs, RGWs, more monitors and managers, etc.
• Allows integration with SUSE Manager
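The bootstrap-then-day-2 flow above can be sketched with the cephadm and orchestrator CLIs. The IP address, host name, device path, and realm/zone names are illustrative placeholders, not values from the deck, and the commands assume a live host with cephadm installed:

```shell
# Minimal bootstrap: starts one monitor and one manager, brings up the dashboard
cephadm bootstrap --mon-ip 192.168.1.10

# Everything else is a day-2 operation via the orchestrator CLI (or the GUI)
ceph orch host add node2                     # enrol a second host
ceph orch apply mon 3                        # grow the monitor count
ceph orch daemon add osd node2:/dev/sdb      # add an OSD on a specific device
ceph orch apply rgw myrealm myzone           # deploy RADOS gateways
```

The same `ceph orch` commands are what the Ceph Dashboard drives under the hood, which is why the CLI and GUI paths stay equivalent.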

Page 27

Architecture

1. ceph-bootstrap (Salt): package installation; OS configuration such as firewall, NTP, and tuning; configures SSH keys and bootstraps the first node
2. mgr/cephadm: orchestration; calls cephadm on remote hosts over SSH connections
3. cephadm: deploys Ceph components (containers, system services, etc.) and deploys monitoring

Page 28

Cephadm: Container-Based

• Uses the same containers as Rook
• Will make updating and upgrading Ceph easier:
  • Automates updates of point releases
  • Container isolation eases upgrading of co-located services
  • Makes Ceph upgrades independent of OS upgrades

Page 29

Ceph Dashboard For Day 2

• Octopus: first deployment functions
  • OSD deployment
  • View hosts, deployed daemons, and disks, and their status
  • Improved pools, RGW, CephFS, and iSCSI management
  • Multiple user account security
• Later:
  • Guided install (after bootstrap)
  • Add hosts to the cluster, remove hosts
  • Guided upgrades

Page 30

Further Octopus Improvements

• Performance improvements:

• Librbd caching improvements

• RGW bucket listing

• Telemetry module reports more information

Page 31

SUSECON Sessions

• Improving software-defined-storage outcomes through telemetry insights [SUP-1312]
• Managing & Monitoring Ceph with the Ceph Dashboard [SUP-1086]

Page 32

Windows Driver

Page 33

Windows Driver

• Provide Block Storage directly to Windows clients

• Provide Block Storage to Hyper-V and thus Windows guests

• CephFS Driver: Under evaluation

Page 34

Windows Ceph Comparison, SES 6 To SES 7

SES 6 iSCSI Gateway: protocol conversion from iSCSI to native Ceph. The Windows Ceph RBD client connects through an iSCSI gateway to the SES cluster (OSD and MON nodes).

SES 7 Windows RBD Driver: native protocol connection to block storage. The client connects directly to the SES cluster, which simplifies deployment and improves performance.

Page 35

Windows Driver: Current Work

• Block storage: today iSCSI; new: RBD
• Block storage to Hyper-V: today iSCSI; new: RBD (allows booting from RBD)
• Block storage to Windows guests via the hypervisor: today iSCSI; new: RBD

Page 36

Windows Driver: Outlook

• File storage via the hypervisor: today SMB and NFS; under evaluation: CephFS

Page 37

SUSECON Session

• Ceph in a Windows world [TUT-1121]

Page 38

SUSE Enterprise Storage Roadmap

Page 39

Development Focus Areas: SUSE Enterprise Storage

Manageability, interoperability, efficiency, availability:
• Ease of installation
• GUI-based monitoring & management
• Unified block, file & object
• Fabric interconnects
• Cache tiering
• Containerization
• Hierarchical storage management
• Backup/archive
• Continuous data protection
• Remote replication

Page 40

Release Lifecycle

• SUSE Enterprise Storage 5: GA 10/2017, EOS Q4/2020 (Ceph Luminous, SLES 12 SP3)
• SUSE Enterprise Storage 6: GA 05/2019, EOS Q3/2021 (Ceph Nautilus, SLES 15 SP1)
• SUSE Enterprise Storage 7: GA Q3/2020, EOS Q3/2022 (Ceph Octopus, SLES 15 SP2)
• SUSE Enterprise Storage 8: GA Q2/2021, EOS Q3/2023 (Ceph Pacific, SLES 15 SP3)
• SUSE Enterprise Storage 9: GA Q2/2022, EOS Q3/2024 (Ceph “Q”, SLES 15 SP4)

Lifecycle of SUSE Enterprise Storage: yearly releases to incorporate new upstream Ceph features and hardware support. Each release is supported for two years; version X is supported until 3 months after the release of version X+2. Seamless update from one version to the next.

Page 41

SUSE Enterprise Storage: Overall Themes

• Simplify (run the business): make SDS easier to run, operate, update, and maintain
• Modernize (change the business): enterprise-grade Ceph
• Accelerate (scale the business): attach storage everywhere

Focus areas: ease of use, management, availability, efficiency.

Features planned across releases SES 6 (2019) through SES 9 (2022):
• Containerized deployment
• K8s orchestration by Rook
• Native Windows client drivers
• Disk management via the Ceph dashboard
• Graceful system shutdown
• CephFS snapshots
• AIOps-driven management
• NVMe over fabrics
• Bidirectional public cloud sync
• Data de-duplication
• Performance optimization (Crimson)
• Phone-home metrics & error analysis

Page 42

Recap: Highlights In SUSE Enterprise Storage 7

Native Windows client driver:
• Simplifies deployment of SES
• Improves performance

Ceph Dashboard disk handling:
• Allows customers to easily add and replace OSDs (disks)

Two deployment options:

Rook/Kubernetes:
• Hyperconverged setup
• Self-healing, scaling of services

Cephadm install stack:
• Base for advanced day-2 operations
• Patch management with SUSE Manager
• Updating versions will become easier

Page 43

Menu @ SUSECON

• Architecting Ceph clusters [TUT-1296]
• Architecting Ceph solutions [BP-1073]
• Ceph in a Windows world [TUT-1121]
• Ceph Security [DEV-1384]
• Design and Implementation of Large Scale SUSE Enterprise Storage [BP-1240]
• Geo-redundancy with SUSE Enterprise Storage [BP-1052]
• Improving software-defined-storage outcomes through telemetry insights [SUP-1312]
• Increasing SUSE Enterprise Storage performance for iSCSI for hypervisors [CAS-1398]
• Living on the edge: SUSE Enterprise Storage for multi-cloud environments [BOV-1213]
• Managing & Monitoring Ceph with the Ceph Dashboard [SUP-1086]
• Multi-cluster object storage: The future [BP-1319]
• Optimizing Ceph deployments for high performance [BP-1072]
• Pitfalls to avoid when deploying SUSE Enterprise Storage [BP-1261]
• Rook Ceph components and architecture [HOL-1196]
• SUSE Enterprise Storage 6 on SUSE CaaS Platform [CAS-1150]
• SUSE Enterprise Storage from design to implementation [SUP-1093]
• SUSE Enterprise Storage Solutions and ecosystem [BOV-1281]
• Troubleshooting SES deployments [SUP-1416]
• Using Ceph for persistent storage on a Kubernetes platform [BP-1133]

AND: Technology Showcase: SES Booth

Page 44

General Disclaimer

This document is not to be construed as a promise by any participating company to develop, deliver, or market a product. It is not a commitment to deliver any material, code, or functionality, and should not be relied upon in making purchasing decisions. SUSE makes no representations or warranties with respect to the contents of this document, and specifically disclaims any express or implied warranties of merchantability or fitness for any particular purpose. The development, release, and timing of features or functionality described for SUSE products remains at the sole discretion of SUSE. Further, SUSE reserves the right to revise this document and to make changes to its content, at any time, without obligation to notify any person or entity of such revisions or changes. All SUSE marks referenced in this presentation are trademarks or registered trademarks of SUSE, LLC, Inc. in the United States and other countries. All third-party trademarks are the property of their respective owners.

Page 45