Ceph Day Beijing - Storage Modernization with Intel and Ceph

Page 1: Ceph Day Beijing - Storage Modernization with Intel and Ceph

Michael Greene

VICE PRESIDENT, SOFTWARE AND SERVICES GROUP, INTEL

GENERAL MANAGER, SYSTEM TECHNOLOGIES AND OPTIMIZATION

@greene1of5

June 10, 2017

*Other names and brands may be claimed as the property of others.

Page 2: Ceph Day Beijing - Storage Modernization with Intel and Ceph

Information Growth: From now until 2020, the size of the digital universe will roughly double every two years*

Complexity: What we do with data is changing; traditional storage infrastructure does not solve tomorrow’s problems

Cloud: Shifting of IT services to cloud computing and next-generation platforms

New Technologies: Emergence of flash storage, new storage media and software-defined environments

Trends driving the need for Storage Modernization

Source: IDC – The Digital Universe of Opportunities: Rich Data and the Increasing Value of the Internet of Things, April 2014

Page 3: Ceph Day Beijing - Storage Modernization with Intel and Ceph

Enterprise IT Storage End-User Pain Points

Surging capacity is the primary challenge and the major driver of storage needs

Source: 451 Research, Voice of the Enterprise: Storage Q4 2015

Typical end-user storage pain points:

• Data/Capacity
• Inadequate Performance
• Licensing Cost & Maintenance Cost
• Disaster Recovery
• Multiple Storage Silos

Pain-point categories: Costs; Provisioning and Configuration; Performance & Capabilities; Data Silos

Page 4: Ceph Day Beijing - Storage Modernization with Intel and Ceph

Intel’s role in storage

Advance the Industry
• Open Source & Standards

Build an Open Ecosystem
• Intel® Storage Builders
• Cloud & enterprise partner storage solution architectures: 80+ partners

End User Solutions
• Cloud, Enterprise: helping customers to enable cloud storage
• Next-gen solution architectures: Intel solution architects have deep expertise on Ceph for low-cost and high-performance usage

Intel Technology Leadership
• Storage-optimized platforms: Intel® Xeon® E5-2600 v4 platform, Intel® Xeon® processor D-1500 platform, Ethernet controllers (10/40/100Gig), Intel® SSDs for DC & Cloud
• Storage-optimized software: Intel® Intelligent Storage Acceleration Library, Intel® Storage Performance Development Kit, Intel® Cache Acceleration Software, VSM, COSBench, CeTune
• SSD & non-volatile memory: interfaces (SATA, NVMe PCIe); form factors (2.5”, M.2, U.2, PCIe AIC); new technologies (3D NAND, Intel® Optane™)

Software and workloads used in performance tests may have been optimized for performance only on Intel microprocessors. Performance tests, such as SYSmark and MobileMark, are measured using specific computer systems, components, software, operations and functions. Any change to any of those factors may cause the results to vary. You should consult other information and performance tests to assist you in fully evaluating your contemplated purchases, including the performance of that product when combined with other products.

Page 5: Ceph Day Beijing - Storage Modernization with Intel and Ceph

Ceph* in PRC

Ceph* is very important in PRC

– Redevelopment based on the upstream code

– More companies move to Open Source storage solutions

Intel and Red Hat* held three Ceph* Days in Beijing and Shanghai

– 1000+ attendees from 500+ companies

– A vibrant community and ecosystem

Growing number of PRC code contributors.

– Alibaba*, China Mobile*, Chinac*, eBay*, H3C*, Istuary*, KylinCloud*, LETV*, Tencent*, UMCloud*, UnitedStack*, XSKY*, ZTE*


Page 6: Ceph Day Beijing - Storage Modernization with Intel and Ceph

Ceph* at Intel – Our 2017 Ceph Focus Areas

Optimize for Intel® platforms, flash and networking:
• Hardware offloads through QAT & SoCs
• 3D XPoint™ enabling
• IA-optimized storage libraries to reduce latency (ISA-L, SPDK)

Performance Profiling, Analysis and Community Contributions

Ceph* Enterprise readiness and Hardening

End-customer POCs

Enable IA-optimized, Ceph-based storage solutions: go to market

Intel® Storage Acceleration Library (Intel® ISA-L)

Intel® Storage Performance Development Kit (Intel® SPDK)

Intel® Cache Acceleration Software (Intel® CAS)

Virtual Storage Manager

CeTune Ceph Profiler


Page 7: Ceph Day Beijing - Storage Modernization with Intel and Ceph

Ceph* performance trend with SSD

18.5x per-node performance improvement in Ceph all-flash array!

Chart: Ceph 4K random-write per-node performance optimization history (per-node throughput, IOPS):

• 588.25 – 4x SNB_UP, 3x S3700, 10x HDD (Ceph 0.80.1)
• 3,673 – 4x IVB_DP, 6x S3700 (Ceph 0.86): 6.2x gain
• 13,573.75 – 5x HSW_DP, 1x P3700, 4x S3510 (Ceph 0.86 + jemalloc): 3.7x gain
• 57,093.4 – 5x BDW_DP, 1x P3700, 4x P3520 (Ceph 10.0.5): 4.21x gain
• 68,000 – BlueStore, Ceph 12.0.0 + NUMA optimization: 1.19x gain

Refer to P14-17 for detailed configurations.
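The stage-to-stage gains on the chart follow directly from the per-node throughput numbers; a quick sanity check (values and stage labels taken from the chart on this slide):

```python
# Per-node 4K random-write IOPS for each hardware/software stage on the chart.
stages = [
    ("4x SNB_UP, 3x S3700, 10x HDD (0.80.1)", 588.25),
    ("4x IVB_DP, 6x S3700 (0.86)", 3673.0),
    ("5x HSW_DP, 1x P3700, 4x S3510 (0.86 + jemalloc)", 13573.75),
    ("5x BDW_DP, 1x P3700, 4x P3520 (10.0.5)", 57093.4),
    ("BlueStore 12.0.0 + NUMA opt.", 68000.0),
]

# Gain of each stage over the previous one.
gains = [b / a for (_, a), (_, b) in zip(stages, stages[1:])]
print([f"{g:.2f}x" for g in gains])  # → ['6.24x', '3.70x', '4.21x', '1.19x']

# Overall improvement within the all-flash configurations: from the first
# all-SSD stage (3,673 IOPS on IVB) to the latest (68,000 IOPS).
print(f"{68000 / 3673:.1f}x")  # → 18.5x
```

The headline 18.5x is thus measured from the first all-flash configuration, not from the original HDD-backed cluster.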

Page 8: Ceph Day Beijing - Storage Modernization with Intel and Ceph

The 1st Optane Ceph All-Flash Array Cluster!

Intel® Optane™ + TLC 3D NAND

2.8M IOPS for 4K random read with extremely low latency!

0.9 ms average latency, 2.25 ms 99.99% tail latency

2.25x performance improvement compared with P3700 + 4x P3520 on HSW_DP

20x reduction in 99.99% tail latency compared with P3700

Refer to P17 for detailed configuration.

All-flash demo at OpenStack Summit Boston!

For details check out the poster chat during the Ceph Day

Page 9: Ceph Day Beijing - Storage Modernization with Intel and Ceph

Call to action

• Participate in the open source community and its storage projects

• Try our tools – and give us feedback

• CeTune: https://github.com/01org/CeTune

• Virtual Storage Manager: https://01.org/virtual-storage-manager

• COSBench: https://github.com/intel-cloud/cosbench

• Optimize Ceph* for efficient SDS solutions!


Page 10: Ceph Day Beijing - Storage Modernization with Intel and Ceph

Have a productive Ceph Day* Beijing!

Big thank you to:

• Speakers from Intel, Red Hat*, QCT*, XSKY*, Inspur*, Alibaba*, ZTE*, China Mobile*…

• Ceph.com and ceph.org.cn for the support

• Your participation

欢迎 (Welcome)

Page 11: Ceph Day Beijing - Storage Modernization with Intel and Ceph

Legal noticesNo license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.

Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.

This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.

The products and services described may contain defects or errors known as errata which may cause deviations from published specifications. Current characterized errata are available on request.

Intel, the Intel logo, 3D XPoint, and Optane are trademarks of Intel Corporation in the U.S. and/or other countries.

*Other names and brands may be claimed as the property of others

© 2017 Intel Corporation.


Page 12: Ceph Day Beijing - Storage Modernization with Intel and Ceph
Page 13: Ceph Day Beijing - Storage Modernization with Intel and Ceph

Backup


Page 14: Ceph Day Beijing - Storage Modernization with Intel and Ceph

Ceph* All-Flash SATA configuration – IVB (E5-2680 v2) + 6x S3700

Compute nodes:
• 2 nodes with Intel® Xeon® processor X5570 @ 2.93 GHz, 128 GB memory
• 1 node with Intel® Xeon® processor E5-2680 @ 2.8 GHz, 56 GB memory

Storage nodes:
• Intel® Xeon® processor E5-2680 v2, 32 GB memory
• 1x SSD for OS
• 6x 200 GB Intel® SSD DC S3700
• 2 OSD instances per drive

Workloads:
• Fio with librbd
• 20x 30 GB volumes per client
• 4 test cases: 4K random read & write; 64K sequential read & write
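The workload above maps naturally onto a fio job file using the librbd engine. Below is a minimal sketch of the 4K random-write case; the pool and image names, queue depth, and runtime are illustrative assumptions (not from the deck), and the 30 GB RBD image must already exist:

```ini
; Hypothetical fio job approximating the 4K random-write test case.
[global]
ioengine=rbd        ; fio's librbd engine
clientname=admin    ; Ceph client to authenticate as
pool=rbd            ; assumed pool name
rbdname=fio_vol_01  ; assumed pre-created 30 GB RBD image
rw=randwrite
bs=4k
iodepth=64
runtime=300
time_based

[4k-randwrite]
```

The 64K sequential cases would change only `rw` (to `read`/`write`) and `bs` (to `64k`); one such job would be run per volume, 20 per client, as described above.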

Test environment: 4 Ceph* storage nodes (CEPH1–CEPH4; 12 OSDs each, one MON), 2x 10Gb NIC on the storage side; 4 FIO client nodes (CLIENT 1–4), 1x 10Gb NIC each.

Page 15: Ceph Day Beijing - Storage Modernization with Intel and Ceph

Ceph* All-Flash SATA configuration – HSW (E5-2699 v3) + P3700 + S3510

Test environment: 5 Ceph* storage nodes (CEPH1–CEPH5; 8 OSDs each, one MON), 2x 10Gb NIC on the storage side; 5 FIO client nodes (CLIENT 1–5), 1x 10Gb NIC each.

5x Client node:
• Intel® Xeon® processor E5-2699 v3 @ 2.3 GHz, 64 GB memory
• 10Gb NIC

5x Storage node:
• Intel® Xeon® processor E5-2699 v3 @ 2.3 GHz, 64 GB memory
• 1x Intel® SSD DC P3700 800 GB for journal (U.2)
• 4x 1.6 TB Intel® SSD DC S3510 as data drives
• 2 OSDs per S3510 SSD

Workloads:
• Fio with librbd
• 20x 30 GB volumes per client
• 4 test cases: 4K random read & write; 64K sequential read & write

Page 16: Ceph Day Beijing - Storage Modernization with Intel and Ceph

Ceph* All-Flash 3D NAND configuration – HSW (E5-2699 v3) + P3700 + P3520

5x Client node:
• Intel® Xeon® processor E5-2699 v3 @ 2.3 GHz, 64 GB memory
• 10Gb NIC

5x Storage node:
• Intel® Xeon® processor E5-2699 v3 @ 2.3 GHz, 128 GB memory
• 1x 400 GB SSD for OS
• 1x Intel® SSD DC P3700 800 GB for journal (U.2)
• 4x 2.0 TB Intel® SSD DC P3520 as data drives
• 2 OSD instances per P3520 SSD

Test environment: 5 Ceph* storage nodes (CEPH1–CEPH5; 8 OSDs each, one MON), 2x 10Gb NIC on the storage side; 5 FIO client nodes (CLIENT 1–5), 1x 10Gb NIC each.

Workloads:
• Fio with librbd
• 20x 30 GB volumes per client
• 4 test cases: 4K random read & write; 64K sequential read & write

Page 17: Ceph Day Beijing - Storage Modernization with Intel and Ceph

Ceph* All-Flash Optane configuration – BDW (E5-2699 v4) + Optane + P4500

8x Client node:
• Intel® Xeon® processor E5-2699 v4 @ 2.3 GHz, 64 GB memory
• 1x X710 40Gb NIC

8x Storage node:
• Intel® Xeon® processor E5-2699 v4 @ 2.3 GHz
• 256 GB memory
• 1x 400 GB SSD for OS
• 1x Intel® SSD DC P4800 375 GB as WAL and RocksDB
• 8x 2.0 TB Intel® SSD DC P4500 as data drives
• 2 OSD instances per P4500 SSD
• Ceph 12.0.0 with Ubuntu 14.01
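Placing the WAL and RocksDB on the Optane drive corresponds to BlueStore's separate `block.wal` and `block.db` devices. The deck does not show the deployment settings; as one hedged illustration, this layout could be expressed with the Luminous-era `bluestore_block_*` options in ceph.conf, where the device paths below are entirely hypothetical:

```ini
# Hypothetical ceph.conf fragment for this layout: BlueStore data on a
# P4500, WAL and RocksDB on partitions of the Optane P4800.
[osd]
osd objectstore = bluestore
bluestore block wal path = /dev/nvme0n1p1   ; P4800 partition for the WAL
bluestore block db path  = /dev/nvme0n1p2   ; P4800 partition for RocksDB
```

Keeping the write-ahead log and metadata database on the lowest-latency device is what lets the bulk TLC 3D NAND drives serve data without sitting in the synchronous write path.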

Test environment: 8 Ceph* storage nodes (CEPH1–CEPH8; 16 OSDs each, one MON), 2x 40Gb NIC on the storage side; 8 FIO client nodes (CLIENT 1–8), 1x 40Gb NIC each.

Workloads:
• Fio with librbd
• 20x 30 GB volumes per client
• 4 test cases: 4K random read & write; 64K sequential read & write