
© 2015 IBM Corporation

Improving Oracle I/O Throughput with Linux on System z

David Simpson - Oracle Technical Specialist, IBM ([email protected])


Copyright and Trademark Information

• For IBM – can be found at http://www.ibm.com/legal/us/en/copytrade.shtml

• For Oracle – can be found at http://www.oracle.com/us/legal/index.html

• Any performance results/observations in this presentation are purely for education and planning purposes. No test results should be construed as indicative of any particular customer workload or benchmark result.


Agenda

• What’s new with I/O & IBM z13

• I/O Improvement Tips

• When to use Flash?

• Comments/Questions


FICON (ECKD) / FCP comparison (1)

[Chart: Normalized transactional throughput (0–160%) at 4 CPUs and 6 CPUs, FICON (20 aliases) vs. FCP (rr_min_io=100)]

Source: Juergen Doelle, http://www-03.ibm.com/support/techdocs/atsmastr.nsf/WebIndex/WP102234


FICON (ECKD) / FCP comparison (2)

[Chart: Normalized CPU cost per transactional throughput (0–120%) at 4 CPUs and 6 CPUs, FICON (20 aliases) vs. FCP (rr_min_io=100)]


FICON (ECKD) / FCP comparison (3)

• FCP offers better throughput

• ECKD/FICON uses less CPU per transaction

• You have to tune both environments

• Recommendation: it depends


ASM or LVM File Systems?

• LVM – Logical Volume Manager in Linux

• ASM – Automatic Storage Management, provided by Oracle

  – Oracle RAC One Node and Oracle RAC require ASM

• Overall recommendation: ASM

• Don’t combine both!

       LVM                                ASM
pro    • Direct control over settings     • Automated, out-of-the-box environment
         and layout                       • Very good integration with Oracle
       • Can choose the file system
con    • Complex setup                    • RMAN required for backup


Oracle 12c ASM

ASM memory target:

• In Oracle 12c, the ASM instance now has a default memory target of 1 GB (256 MB default for Oracle 11gR2).

• If set to a lower target, the setting is ignored unless overridden with a hidden parameter.

• The community has observed good results setting it to 750 MB (possibly lower in light-utilization workloads).

$ sqlplus "/ as sysasm"
alter system set "_asm_allow_small_memory_target"=true scope=spfile;
alter system set memory_target=750m scope=spfile;
alter system set memory_max_target=750m scope=spfile;
exit
# service ohasd stop
# service ohasd start

Source: http://www.rocworks.at/wordpress/?p=271
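After the restart, the reduced target can be confirmed from the ASM instance. A minimal sketch (assumes the ASM environment, e.g. ORACLE_SID=+ASM, is already set in the shell):

```shell
# Connect to the ASM instance and display the memory target parameters
# that were changed above; the output should show 750M after the restart.
sqlplus -s "/ as sysasm" <<'EOF'
show parameter memory_target
EOF
```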


File System Recommendations

• Supported and Recommended File Systems on Linux (MOS: 236826.1)

  – SUSE: ext3 or xfs (NEW) recommended for database files (xfs is great for concurrent writes)

  – Red Hat 6.x: ext4 still recommended for database files; XFS is starting to be made available. Stay tuned for Oracle updates in this area.

  – Reiser (the default) does not perform that well with Oracle databases.

[Chart: SLES 11 ext3 vs. SLES 11 xfs]
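For the xfs recommendation above, a minimal setup might look like the following sketch; the device, mount point, and the noatime option are illustrative assumptions, not from the slides:

```shell
# Create an xfs file system for Oracle data files and mount it;
# /dev/vg_oracle/lv_oradata and /u01/oradata are placeholder names.
mkfs.xfs /dev/vg_oracle/lv_oradata
mkdir -p /u01/oradata
# noatime avoids access-time updates on every read of a data file.
mount -o noatime /dev/vg_oracle/lv_oradata /u01/oradata
```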


Database File System Performance

• Red Hat 5.5 to Red Hat 6.0 (ext4) file system improvements.

Source: Red Hat – A Performance Comparison Between RHEL 5 and RHEL 6 on System z


Database Files on File Systems: Disable Read-Ahead

• Oracle parameter for database files on file systems: filesystemio_options=setall

  – Provides asynchronous & direct I/O (avoids the Linux file system cache)

• Reduce Linux read-ahead for LVM file systems:

  – lvchange -r none <lv device name>
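A sketch of applying both settings; the device name is a placeholder, and the parameter change only takes effect after a database restart:

```shell
# Enable asynchronous + direct I/O for database files on file systems.
sqlplus -s "/ as sysdba" <<'EOF'
alter system set filesystemio_options=setall scope=spfile;
EOF

# Check current read-ahead (in 512-byte sectors) for the logical volume,
# then disable it; /dev/vg_oracle/lv_data is a placeholder device name.
blockdev --getra /dev/vg_oracle/lv_data
lvchange -r none /dev/vg_oracle/lv_data
blockdev --getra /dev/vg_oracle/lv_data
```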


Sample multipath.conf

defaults {
    dev_loss_tmo 90                    # zSeries specific: seconds to wait before marking a path bad
    fast_io_fail_tmo 5                 # zSeries specific: length of time to wait before failing I/O
    # uid_attribute "ID_SERIAL"        # use uid_attribute instead of getuid for SLES 11 SP3+ & RHEL 7
    max_fds "max"                      # Red Hat 6.3+, SLES 11 SP3+
    path_selector "round-robin 0"      # round-robin for SLES 11 SP2+ and Red Hat 6.x
    # path_selector "service-time 0"   # default for SLES 11 SP3+, Red Hat 7+
    path_grouping_policy "multibus"    # SLES 11 SP1+ and Red Hat 6.x
    # rr_min_io 100                    # rr_min_io for older Linux distros (Red Hat 5.x, SLES 11 SP1 & older)
    # rr_min_io_rq 1                   # RamSan RS-710/810; use 4 for RamSan RS-720/820
    # rr_min_io_rq 15                  # IBM XIV
    rr_min_io_rq 100                   # IBM DS8000; rr_min_io_rq for newer Linux (SLES 11 SP2+, RHEL 6+)
}

IBM customer: “With your multipath settings I got 552 MB/s (up from 505 MB/s).”
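After editing /etc/multipath.conf, the settings can be applied and checked along these lines (a sketch; older distributions use the interactive form `multipathd -k"reconfigure"` instead):

```shell
# Ask the running multipathd daemon to re-read /etc/multipath.conf.
multipathd reconfigure

# Show the resulting multipath devices, path groups, and the
# active path_selector, to confirm the new settings took effect.
multipath -ll
```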


Caching data as close to the DB as possible (random read)

• The service times shown reflect the expected random read service time for an 8 KB block of data; some values are estimated.


IBM z13 Increased CPU Cache


Oracle Cache, Memory and I/O Access


Introducing DS8870 Flash Technology: Flash Optimized Improvements

• 4x faster flash performance in 50% less space than existing flash options

• Accelerate database performance by up to 3.2x with industry-leading DS8870 availability

• Shrink batch times by up to 10%

• Unmatched flexibility with new flash optimization and flash drive capacity options

• Faster replication: 70% better response than disk


DS8870 SSD vs. HPFE: Four Array (RAID-5, 6+P)

[Charts: sequential read/write throughput (GB/s) and 4 KB random read/write rates (KIO/s), SSD vs. HPFE; HPFE is 1.9x and 3.8x faster, respectively]

Note: similar HPFE performance measurements were attained on CKD and Open systems


System z & IBM FlashSystem: Highest Reliability, Maximum Performance

Why IBM FlashSystem for Linux on System z?

• Extreme performance: cut I/O wait time 80%+; latency under 100 microseconds

• IBM MicroLatency™: servers, applications and databases are FASTER; go from 7 milliseconds to 700 microseconds

• Highest reliability levels: purpose-built, enterprise architecture; enterprise reliability with no application or architecture changes

• Macro efficiency: benefits & economics outweigh disk; reduce floor space, power & cooling

Performance of Linux on System z with FlashSystem: I/O-bound relational databases can benefit from IBM FlashSystem over spinning disks.

• 21x reduction in response times*
• 9x improvement in I/O wait times*
• 2x improvement in CPU utilization*
* IBM internal test results

Now you can leverage the “Economies of Scale” of Flash:
• Easily added to your existing SAN
• Accelerate application performance
• Gain greater system utilization
• Lower software & hardware cost
• Save power / cooling / floor space
• Drive value out of Big Data

Would you like to demo this architecture? You can now demo hardware either in person or virtually. Demo location: Benchmark Center in Poughkeepsie, NY.

IBM FlashSystem is certified (reference SSIC) to attach to Linux on System z, with or without an SVC, to meet your business objectives.


Database – I/O Analysis of Oracle AWR Report


Databases – I/O Analysis (pg 2)


Databases – I/O Analysis (pg 3)


Databases – I/O Analysis (pg 4)


Acceleration of Database with IBM FlashSystem

After switching to FlashSystem (05:27 PM), disk I/O wait disappears and waiting is now on host CPU. This graph shows the effect of FlashSystem’s low latency and how it increases host CPU utilization.


Questions?