
Deploy and Test DB2 LUW 10.1 in

Linux VM Using HUAWEI S5000T

and Dorado Storage

This document describes how to deploy and test the DB2 V10.1 database on Red Hat Enterprise Linux 6 with VMware virtualization, using the HUAWEI mid-range SAN storage OceanStor S5500T and the solid-state storage OceanStor Dorado5100. The detailed deployment and test steps are given, and a series of best practices is introduced.

Jarvis WANG / [email protected]

Yanping ZOU / [email protected]

Enterprise Application Solution Group, IT Storage Solution SDT

2012-12-4 Version 1.1


Contents

1 Preface
2 Features
  2.1 DB2 LUW 10.1
  2.2 VMware vSphere 5.0
  2.3 Red Hat Enterprise Linux 6
  2.4 OceanStor T Series Storage
  2.5 OceanStor Dorado
3 Summary
  3.1 Architecture
  3.2 Storage configuration
4 Deploy and test on traditional storage
  4.1 Prepare to install DB2
  4.2 Install database software
  4.3 Load and test on SMS table spaces
  4.4 Migrate objects to DMS table spaces
  4.5 Resize the buffer pool
5 Migrate objects to solid storage
  5.1 Close database and VM
  5.2 Map Dorado LUNs to VM
  5.3 Create file system and logical volumes
  5.4 Migrate redo log files
  5.5 Migrate table space
6 Test results summary
  6.1 SMS and DMS table space performance
  6.2 Performance effect of buffer pool
  6.3 HDD and SSD performance
  6.4 Migration performance analyzing
7 Best practices
  7.1 Storage performance and capacity
  7.2 Table space design
  7.3 Buffer pool design
  7.4 Table and index design
  7.5 Transaction log design
  7.6 Tiering storage design
  7.7 High availability and reliability
  7.8 OS and instance parameters
8 Terms and Abbreviations


1 Preface

Server virtualization and solid-state storage make datacenter management more cost effective while improving performance and reducing power consumption, so running DB2 on virtualized servers with solid-state storage can deliver a better return on investment (ROI). HUAWEI Dorado storage systems are standard, highly reliable SAN arrays that offer very high transaction performance (600K IOPS for the Dorado5100, 100K IOPS for the Dorado2100) with response times below 2 ms, which makes them an excellent choice for a DB2 LUW 10.1 database. This document describes how to deploy and test the DB2 V10.1 database on Red Hat Enterprise Linux 6 with VMware virtualization, using the HUAWEI mid-range SAN storage OceanStor S5500T and the solid-state storage OceanStor Dorado5100.

Three tests are also performed in this white paper: a comparison of DMS and SMS table space performance, a test of the performance effect of different buffer pool sizes and layouts, and a comparison of HDD and SSD performance.

This technical paper serves as a reference for deploying and configuring DB2 on HUAWEI storage. Knowledge of VMware virtualization, HUAWEI SAN storage, and solid-state storage is assumed.

A summary of contents:

Chapter 2 – Features of DB2 LUW 10.1, VMware ESXi 5.0, Red Hat 6, OceanStor

S5000T, and Dorado storage

Chapter 3 – Test environment and Configurations summary

Chapter 4 – Install and test DB2 LUW 10.1 on traditional storage

Chapter 5 – Migrate database schema objects to solid storage

Chapter 6 – Test results summary

Chapter 7 – Best practices when using DB2 with HUAWEI SAN storage

Chapter 8 – Terms and Abbreviations


2 Features

2.1 DB2 LUW 10.1

Figure 2-1 Instance and database architecture

All activities in DB2 are carried out by Engine Dispatchable Units (EDUs), which are implemented as threads. The DB2 agent is the most common EDU; it processes most SQL operations. In a multi-processor environment, each application may be served by multiple subagents. All agents and subagents are managed by a thread pool, which prevents threads from being created and destroyed too frequently. Agents process transaction requests and write data change records into the log buffer, then wait for the logger to write those records from the log buffer to the log files. While processing transactions, agents read the required data from the table spaces into the buffer pool, where they update it or generate new data. Changed pages in the buffer pool are called dirty pages. When buffer pool usage exceeds a threshold or a soft checkpoint is issued, the page cleaners write the dirty pages back to the table spaces. The instance also prefetches data asynchronously: agents place prefetch requests in the prefetch queue, the prefetchers consolidate these requests, and large blocks of data are then read from the table spaces in parallel. The deadlock detector is an EDU that periodically inspects database locks and, when a deadlock has occurred, notifies the agents so they can handle it.

Processes

Figure 2-2 DB2 processes

DB2 processes include connection managers (db2tcpcm and db2ipccm), primary process

(db2sysc), and management processes (db2wdog and db2acd). When a client connects to the

database, it first sets up a connection with the connection manager. The connection manager

retrieves an agent (db2agent) from the agent pool to communicate with the client. If the client

requires parallel processing, the DB2 agent invokes multiple sub-agents (db2agentp and

db2agents) to work with it. A database also has prefetch threads (db2pfchr), page-cleaning threads (db2pclnr), log read/write threads (db2loggr and db2loggw), and a deadlock detection thread (db2dlock).

You can run the "db2pd -edus" command in the operating system to view the DB2 processes or threads. They can be categorized as follows:

Listening threads

These are called connection managers in DB2. They accept clients' requests to set up connections and allocate agents to the clients. Each communication protocol is monitored by its own thread.

− db2ipccm: monitors local client IPC connections.


− db2tcpcm: monitors TCP/IP connections.

− db2tcpcmd: monitors connections initiated by TCP/IP discovery tools.

Agent threads

They are used to process clients' requests. DB2 allocates an agent thread (db2agent) for each connection. For some operations, an agent thread may invoke multiple sub-agent threads and assign parts of the client's request to them. Sub-agent threads can be categorized as follows:

− db2agentp: sub-agent in a partition database environment.

− db2agents: database sub-agent that has the Intra-query Parallelism feature enabled.

− db2agenta: sub-agent that is linked to applications but is still idle.

− db2agnti: independent sub-agent that is used to monitor transactions.

− db2agnsc: sub-agent that is used to restart the database after it is closed unexpectedly.

− db2agentg: network gateway sub-agent that is used to connect remote databases.

Fenced-mode process (db2fmp)

It executes stored procedures and user-defined functions outside the database engine's firewall (in fenced mode).

User-defined code execution process (db2vend)

Users can supply code that performs a series of operations on behalf of an EDU; the db2vend process executes this user-defined code outside the engine.

Database threads

Various database threads work in concert to complete database operations. These threads are listed below:

− db2dlock: deadlock inspection thread.

− db2loggr: reads transaction logs, processes transactions, and recovers data.

− db2loggw: writes transaction logs to log files.

− db2logmgr: log manager.

− db2logts: records table space change history and stores the information in the "DB2TSCHG.HIS" file, which is located in the log directory.

− db2pfchr: pre-fetch thread.

− db2pclnr: page cleaning thread.

− db2redom: Redo Master thread. It reads log records when the database is recovering, and assigns

the log records to Redo Worker.

− db2redow: Redo Worker thread. It processes log records when the database is recovering.

− db2stmm: automatic memory management thread.

− db2evm: events monitor thread.

− db2bm: backup recovery thread.

Instance threads

DB2 instance management processes include:

− db2sysc: database main process, which controls other EDUs.


− db2wdog: database watchdog, which processes unexpected exits.

− db2thcln: thread cleaner, which cleans resources when the EDU exits unexpectedly.

− db2acd: Automatic Computing Daemon. It executes management tasks such as health monitoring.

Memory

Figure 2-3 DB2 memory architecture

A DB2 instance can manage multiple databases, each database can serve multiple applications, and each application can be executed by multiple agents. The DB2 memory areas, from the largest scope to the smallest, are therefore as follows:

Instance memory

The Database Manager (DBM) is also called the DB2 instance. The memory used by the DBM includes database memory, monitor memory, and the audit buffer.

Database memory

Memory space required for running the database. The most important areas in database memory are the buffer pools and the log buffer. Every application connected to the database uses part of this memory.

Application memory

Memory space required for executing applications. Application memory includes three parts: application shared memory, memory accessed by each application independently, and the memory required for executing agents.

Agent memory

It includes the agent stack, sort memory, and Java memory.
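To see how these memory areas are actually allocated at run time, the db2mtrk memory tracker can report instance, database, and agent memory in one view; a minimal sketch (db2inst1 is the instance owner created later in this paper):

su - db2inst1
# Report instance-level (-i) and database-level (-d) memory pools in verbose form (-v)
db2mtrk -i -d -v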


Storage

Figure 2-4 DB2 storage architecture

DB2 storage includes table spaces, transaction log files, archive log files, and backup sets. There are three kinds of table spaces: automatic storage table spaces, SMS (system managed space) table spaces, and DMS (database managed space) table spaces. A table space is striped across one or more containers; a container can be a directory, a file, or a raw device. Automatic storage groups and SMS table spaces use directories as their containers, while DMS table spaces use files or raw devices. DB2 logical storage objects such as tables, indexes, and partitions are created in table spaces.

Space for objects is allocated in extents, each made up of a number of consecutive pages. The page is the basic I/O unit; its size is specified when a table space is created and can be 4 KB, 8 KB, 16 KB, or 32 KB. OLTP databases usually use a relatively small page size, whereas OLAP databases use a relatively large page size.
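As an illustration only (the buffer pool, table space, and container names and sizes below are hypothetical, not taken from the test configuration), a small-page table space suited to OLTP and a large-page table space suited to OLAP could be defined like this:

-- 4 KB pages and a small extent for random OLTP access
CREATE BUFFERPOOL bp_oltp SIZE 262144 PAGESIZE 4K;
CREATE TABLESPACE ts_oltp PAGESIZE 4K
  MANAGED BY DATABASE USING (FILE '/opt/db2/usr1/ts_oltp_01' 10 G)
  EXTENTSIZE 32 BUFFERPOOL bp_oltp;

-- 32 KB pages and a large extent for sequential OLAP scans
CREATE BUFFERPOOL bp_olap SIZE 32768 PAGESIZE 32K;
CREATE TABLESPACE ts_olap PAGESIZE 32K
  MANAGED BY DATABASE USING (FILE '/opt/db2/usr2/ts_olap_01' 20 G)
  EXTENTSIZE 128 BUFFERPOOL bp_olap;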

DB2 has two logging modes: circular logging and archive logging. In circular logging mode, the log files are reused. Circular logging supports only crash recovery: if a DB2 instance crashes for some reason, such as a power failure or user error, the next database restart uses information in the log files to bring the database back to a consistent point. In archive logging mode, full log files are copied to the archive log path. The advantage of archive logging is that rollforward recovery can use both archived and active logs to restore a database either to the end of the logs or to a specific point in time, so the archived log files can be used to recover changes made after a database backup was taken. This differs from circular logging, where you can recover only to the time of the backup and all changes made after that are lost.


High-Availability Shared Storage Cluster

Figure 2-5 DB2 Storage Cluster

DB2 High Availability (HA) is a failover cluster in which the primary and standby servers share disks. In the HA structure, the primary and standby nodes attach to the same storage, and Tivoli System Automation (TSA) maintains the resources (mount points, virtual IP addresses, and instances). When the resources on the primary node fail, TSA restarts them on the standby node to recover the cluster from the failure. The primary and standby nodes must share the home directories of the DB2 administration and instance users, and these directories can be mounted on only one node at a time. The DB2 HA structure contains the following components:

db2haicu

DB2 High Availability Instance Configuration Utility, which is the configuration tool for the

DB2 fault recovery cluster.

TSA

TSA is IBM's cluster management software. From DB2 V9.5 onward, TSA is installed automatically along with the DB2 database software.


High-Availability Data Replication Cluster

Figure 2-6 DB2 data replication cluster

High Availability Disaster Recovery (HADR) is a high-availability data replication technology integrated into the DB2 database. It keeps the primary and standby servers synchronized by shipping logs, and it ensures service continuity by automatically re-routing clients to the standby server when the primary server fails. The primary and standby servers can be synchronized in any of four modes:

SYNC

A log write is considered successful only when the log has been written to the primary database and a response has been received from the backup database confirming that the log has also been written to the backup database's log files. In this mode, the transaction response time is the longest, but the protection against losing transactions is the strongest.

NEARSYNC

A log write is considered successful only when the log has been written to the primary database and a response has been received from the backup database confirming that the log has been received by the backup database. In this mode, the logs received by the backup database may not yet have been written to log files on its storage devices. The transaction response time is shorter than in SYNC mode, and transactions are lost only if both servers fail.

ASYNC

A log write is considered successful when the log has been written to the primary database and handed to the network for sending, without waiting for confirmation that the log has been received by the backup database. In this mode, the transaction response time is shorter than in NEARSYNC mode, but the possibility of losing transactions is higher.

SUPERASYNC

A log write is considered successful as soon as the log has been written to the primary database; transaction commits never wait for the log to be shipped to the backup database. In this mode, the transaction response time is the shortest, but the possibility of losing transactions is the highest.


The automatic client re-route mechanism lets a client reconnect automatically to the backup server when its connection with the primary server is interrupted. To configure automatic client re-route, run the "UPDATE ALTERNATE SERVER" command to register the backup database information. When a connection between a client and the primary database is established, the backup database configuration is sent to the DB2 client, so when the client's connection with the primary database is interrupted, it uses this information to connect to the backup database, minimizing the impact of database failures. The DB2 client completes this process automatically, without user intervention.
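For reference, both the synchronization mode and the alternate (backup) server used by automatic client re-route are set with ordinary CLP commands; a minimal sketch, assuming the database name hwdb used later in this paper and a hypothetical standby host name and port:

# Choose one of SYNC / NEARSYNC / ASYNC / SUPERASYNC (set on both servers)
db2 update db cfg for hwdb using HADR_SYNCMODE NEARSYNC
# Register the standby as the alternate server so clients can re-route automatically
db2 update alternate server for database hwdb using hostname standby-host port 50010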

Multi-Partition Cluster

Figure 2-7 Multi-Partition Cluster

DB2 Data Partitioning Feature (DPF) uses the share-nothing structure. A database is divided

into independent partitions in this share-nothing structure. Each partition has its own

resources such as memory, CPU, disk, data, index, configuration file, and transaction logs.

DPF reduces the competition between nodes for shared resources and allows the database to

support larger amounts of data and more user access through effective expansion. The

database partitions can also be deployed on the same node. These partitions on the same node

are called logical partitions. Each node in a multi-partition cluster can have multiple logical

partitions.

Data is distributed across the partitions according to a hash algorithm, and each partition is responsible only for processing its own data. When a user performs an SQL operation, the partition the user is connected to acts as the coordinator node: it processes the user's request and, according to the partitioning key, divides the request into sub-tasks for the different partitions to execute. The coordinator node then collects the execution results from the partitions and returns them to the user. The partitions are transparent to applications.

DB2 pureScale feature

The DB2 pureScale feature provides a shared-disk architecture that is used to transparently

scale OLTP clusters without application changes while maintaining the highest availability

levels available on distributed platforms.


It is primarily used to create active/active scale-out OLTP clusters. Members in a DB2 pureScale environment can be dynamically added or removed according to business demand.

New storage features in DB2 LUW 10.1

Multi-Temperature Storage Support

You can create separate storage groups on disks with different response times, create automatic storage table spaces on each storage group, and then place hotter tables and indexes in the table spaces on the faster storage. When the temperature of an object changes, simply alter its table space to use another storage group and rebalance it to migrate the data (see the SQL sketch at the end of this section).

Deep compression enhancements

− A Lempel-Ziv (LZ) based algorithm creates a static compression dictionary at the table level

− Data stays compressed on pages, whether in the buffer pool, in the table space, or in the archive logs

− Compression is available in several forms: table-level row compression, page-level row compression, and so on

− Archive logs are compressed at the time of archive movement.

Multi-Core parallelism enhancements

Data is scanned by multiple subagents per partition regardless of the cores-per-partition mapping, partitioned indexes are scanned in parallel, and a subagent is assigned a new range as soon as it completes its work on the current range.

pureScale enhancements

Workload manager and range partitioning are now supported in pureScale in DB2 10.1.

HADR enhancements

− Multiple standby: now you can have up to three standby databases in an HADR

group and all can be read only.

− Replay time delay.

− Log spooling: received logs can be spooled to disk on the standby so that spikes in primary throughput do not stall log shipping.
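To illustrate the multi-temperature storage workflow mentioned above, the following sketch uses hypothetical storage group names and paths; the statements themselves are standard DB2 10.1 DDL:

-- Storage groups on fast (SSD) and slow (HDD) storage paths (hypothetical paths)
CREATE STOGROUP sg_hot ON '/db2/ssd';
CREATE STOGROUP sg_cold ON '/db2/hdd';

-- Automatic storage table spaces placed on each storage group
CREATE TABLESPACE ts_hot MANAGED BY AUTOMATIC STORAGE USING STOGROUP sg_hot;
CREATE TABLESPACE ts_cold MANAGED BY AUTOMATIC STORAGE USING STOGROUP sg_cold;

-- When the data in ts_hot cools down, move the table space and rebalance its containers
ALTER TABLESPACE ts_hot USING STOGROUP sg_cold;
ALTER TABLESPACE ts_hot REBALANCE;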

2.2 VMware vSphere 5.0

Storage DRS

This feature delivers the DRS benefits of resource aggregation, automated initial placement,

and bottleneck avoidance to storage. You can group and manage similar datastores as a single

load-balanced storage resource called a datastore cluster. Storage DRS makes disk (VMDK)

placement and migration recommendations to avoid I/O and space utilization bottlenecks on

the datastores in the cluster.


Profile-driven storage

This solution allows you to have greater control and insight into characteristics of your

storage resources. It also enables virtual machine storage provisioning to become independent

of specific storage available in the environment. You can define virtual machine placement

rules in terms of storage characteristics and monitor a virtual machine's storage placement

based on these administrator-defined rules.

Storage Awareness

A new set of APIs that allows vCenter Server to detect capabilities of a storage device,

making it easier to select the appropriate storage disk for virtual machine placement. Storage

capabilities, such as RAID level, thin or thick provisioning, replication state, and so on, can

now be made visible with vCenter Server.

VMFS5

VMFS5 is a new version of vSphere Virtual Machine File System that offers improved

scalability and performance, and provides internationalization support. With VMFS5, you can

create a 64TB datastore on a single extent. RDMs in physical compatibility mode with the

size larger than 2TB can now be presented to a virtual machine. In addition, on SAN storage

hardware that supports vStorage APIs - Array Integration (also known as VAAI), ESXi 5.0

uses the atomic test and set (ATS) locking mechanism for VMFS5 datastores. Using this

mechanism can improve performance, although the degree of improvement depends on the

underlying storage hardware.

Array Integration: Thin Provisioning

Reclaim blocks of a thin-provisioned LUN when a virtual disk is deleted or migrated. You can

also preallocate space on thin-provisioned LUNs and receive advanced warnings and error

messages when a datastore on a thin-provisioned LUN starts to fill up. The behavior of a full

thin-provisioned disk is also improved. Only virtual machines that are trying to allocate new

blocks on a full thin-provisioned datastore are paused. Virtual machines that do not require

additional blocks on the thin-provisioned disk continue to run.

Swap to Host Cache

The VMkernel scheduler is modified to allow ESXi swap to extend to local or network SSD

devices, which enables memory overcommitment and minimizes performance impact. The

VMkernel automatically recognizes and tags SSD devices that are local to ESXi or are on the

network.

2TB+ LUN support

vSphere 5.0 provides support for 2TB+ VMFS datastores. Very large VMFS5 datastores of up to 64TB can be created on a single storage device without additional extents.


Storage vMotion snapshot support.

Allows you to use Storage vMotion for a virtual machine in snapshot mode with associated

snapshots. You can better manage storage capacity and performance by using flexibility of

migrating a virtual machine along with its snapshots to a different datastore. A new Storage

vMotion mechanism uses a mirror driver, which synchronizes the source disk to the

destination disk, making the migration quicker.

Software FCoE

vSphere 5.0 introduces support for a software Fibre Channel over Ethernet (FCoE) driver.

Other new features in VMware vSphere 5.0 are less related to the I/O subsystem and are outside the scope of this white paper. Refer to the following link for more information about the new features in VMware vSphere 5.0:

http://www.vmware.com/support/vsphere5/doc/vsphere-esx-vcenter-server-50-new-features.html

2.3 Red Hat Enterprise Linux 6

File system

The next generation Ext filesystem, Ext4, is the default filesystem for Red Hat Enterprise

Linux 6. Ext4 combines the stability of Ext3 with significant scalability (up to 16TB) and

performance enhancements.

The optional XFS filesystem is available for customers deploying even larger, specialized

environments with high-end servers and storage arrays.

The optional GFS2 file system is designed for high-availability clusters with 2-16 nodes, and

now includes support for clustered Samba deployments.

I/O subsystem

Many new features in the I/O subsystem cover interconnects (FCoE, iSCSI, etc.) and hardware/software optimizations (SR-IOV, NPIV, topology awareness, thin provisioning, block discard, VSAN fabrics, etc.). LVM (Logical Volume Manager) enhancements include online resizing of mirrored volumes, dynamic multipath load balancing, and snapshot rollbacks.

Storage topology awareness allows higher level software (drivers, logical volume

management, filesystems, virtual guests and applications) to interrogate the storage hardware

to identify optimal I/O blocking patterns – offering the opportunity to optimize

performance based on physical storage capabilities.

Resource Management

The new Control Group (cgroups) feature of Red Hat Enterprise Linux 6 offers a powerful

way to allocate processor, memory, and I/O resources among applications and virtual guests.

Cgroups provide a generic framework for plug-in controllers that manage resources such as memory, scheduling, CPUs, network traffic, and I/O. Cgroups become increasingly important


as system sizes grow, by ensuring that high-priority tasks are not starved of resources by

lower priority tasks.

Other new features in Red Hat Enterprise Linux 6 are less related to the I/O subsystem and are outside the scope of this white paper. Refer to the following link for more information about the new features in Red Hat Enterprise Linux 6:

http://www.redhat.com/f/pdf/rhel-6-features.pdf

2.4 OceanStor T Series Storage

Figure 2-8 OceanStor T Series Storage

Huawei OceanStor S5000T unified storage system is a new-generation storage product for

mid-range and high-end storage applications. It boasts integration of block-level and file-level

data storage, support for a variety of storage protocols, and GUI-based central storage

management. Delivering leading performance, enhanced efficiency, maximized return on

investment, and all-in-one solutions, the OceanStor S5000T is ideally applicable to scenarios

such as large-database OLTP/OLAP, high-performance computing, digital media, internet

applications, central storage, backup, disaster recovery, and data migration.

Unification

Unified SAN and NAS

Supports SAN and NAS storage protocols and both structured and unstructured data within one storage system

Unified protocols

Compatible with various storage networks and protocols, including iSCSI, FC, NFS,

CIFS, HTTP, and FTP

GUI-based central storage management

Provides a graphical user interface for central management of files and data blocks. The

wizards guide users through every configuration


Flexibility and Reliability

Upgrade

Users can easily upgrade block-level storage to unified storage

Hot swapping modules

Users can hot-swap controllers, fans, power supplies, I/O modules, and hard disks

without compromising ongoing user services

Diversified disk types

Supports SAS, NL SAS, SATA, FC and SSD disks, fitting into various scenarios

Reliable architecture

Full component redundancy prevents single points of failure. Data coffer and file system

mirror improve system reliability

Advanced I/O port scalability and flexibility

Supports up to 12 I/O modules with a maximum of 48 I/O ports. Supported I/O port

types include 4 Gbps or 8 Gbps FC, 1 Gbps iSCSI, 10 Gbps iSCSI(TOE), 10Gbps FCoE

and 6 Gbps SAS ports

Data protection

Provides seamless integration with Enterprise Vault and NetBackup to achieve disaster

recovery through remote replication

Application optimization

HostAgent implements application-level fast backup/recovery and DR verification for mainstream applications, such as Oracle, DB2, Exchange Server, and SQL Server

Economy and Efficiency

Multi-node clustering

Active-active nodes achieve simultaneous operating among nodes and provide parallel

access to data

SmartCache acceleration

Improves performance by using SSD as secondary cache. The IOPS could be improved

multiple times in mixed workload environment

Automatic thin provisioning

HyperThin supports automatic capacity expansion for improved disk utilization so that

customers can buy storage devices on demand, reducing the total cost of ownership

DST

Dynamic storage tiering (DST) implements transparent bidirectional dynamic data

migration between various storage media based on file access frequency or a time policy.

It maximizes return on investment

File-based deduplication

Detects duplicate data by hash fingerprints, reducing the consumed storage space


Proactive remote service

The Cloud Service provides system health checks, warnings, and alarm notifications. The back-end troubleshooting experts minimize adverse impacts on a system and reduce system downtime

2.5 OceanStor Dorado

Figure 2-9 OceanStor Dorado

Huawei OceanStor Dorado5100 is a SAN solid-state storage array for the enterprise-class high-performance storage market. The Dorado5100 uses exclusively solid-state media together with a dual-controller design to provide a compelling user experience. It meets the requirements of applications such as large-scale databases, high-performance computing, and VDI for highly reliable, high-performance storage.

Outstanding Performance

IOPS

The Dorado5100 delivers 600,000 transaction IOPS, more than a traditional array with 2,000 15K RPM SAS disks

Access Speed

Access latency is a low 500 μs, just 5% of traditional arrays

Energy Savings


Low power consumption

Typical power consumption is as low as 110 W per U, an energy saving of up to 90% over traditional arrays with equivalent performance

Intelligent CPU clock speed control


The Dorado5100 intelligently controls the clock frequency of the CPU based on processor workload

16-speed intelligent fan speed control:

Fan speed is regulated intelligently based on system temperature to reduce fan noise and

power consumption, increasing the environmental flexibility of the equipment

Stability and Reliability

Media safeguards

Technologies such as wear leveling, bad block management, and random scrambling

greatly extend media service life to deliver an MTBF of greater than 1 million hours

Data protection

The Dorado5100 uses rigorous 32-bit error correcting code (ECC) in 1-KB blocks and a

threshold warning function so that errors are corrected as soon as they are discovered. A

variety of RAID levels are also offered to further improve data reliability

Redundant architecture

The dual Active/Active controller provides interruption-free redundancy to ensure

operational reliability and availability

Lower TCO

Protect your investment

There is no need to change software versions or your current application architecture.

This means you can easily incorporate the Dorado5100 into your existing IT architecture

while continuing to leverage your investment

Reduce power use:

Enjoy the benefits of memory system power savings of 90% annually

Save space

The Dorado5100 uses 95% less space than traditional arrays with equivalent

performance, greatly reducing cabinet costs

Simplify management

The Dorado5100 offers user-friendly management and maintenance, supports both GUI

and CLI management methods, supports visual, text message, and email warnings


3 Summary

3.1 Architecture

Figure 3-1 Architecture

1 x HUAWEI Tecal RH5885 server
  − 4 x Intel(R) Xeon(R) CPU E7-4820 @ 2.00GHz
  − 32 x 16GB memory
  − 2 x 8Gbps FC HBA
  − VMware ESXi 5.0 installed, hosting a VM running Red Hat Enterprise Linux 6.3
  − TPCCRunner V1.00
  − DB2 10.1 for Linux x64

2 x HUAWEI Tecal RH2288 servers
  − Each with 2 x Intel(R) Xeon(R) CPU E5-2260 @ 2.00GHz
  − 8 x 8GB memory
  − Each running Enterprise Linux Server 6.3 x86_64
  − TPCCRunner V1.00

1 x HUAWEI OceanStor S5500T storage system
  − 16GB cache
  − 48 x 600GB 10K RPM SAS disks

1 x HUAWEI OceanStor Dorado5100 storage system
  − 96GB cache
  − 24 x 400GB eMLC SSD disks

2 x 8Gbps FC switches
1 x HUAWEI S5700 1Gbps Ethernet switch
1 x DBA management PC

3.2 Storage configuration

The following figure shows the RAID group configuration of the OceanStor S5500T. In total, six LUNs are created and mapped to the DB2 database host.

Figure 3-2 OceanStor S5500T storage configuration


The following figure shows the RAID group configuration of the Dorado5100. In total, three LUNs are created and mapped to the DB2 database host.

Figure 3-3 Dorado storage configuration


4 Deploy and test on traditional storage

This chapter describes the deployment steps and two tests on traditional (HDD) storage: a comparison of SMS and DMS table spaces, and a test of the effect of buffer pool size and layout. The first test is performed to find the performance difference between SMS and DMS table spaces; the second is performed to find the performance effect of different buffer pool sizes and layouts. The detailed results are analyzed in chapter 6.

4.1 Prepare to install DB2

Step1 Build test environment

Build the test environment as described in section 3.1. Create the RAID groups and LUNs as described in section 3.2, and map these LUNs to the RH5885 server.

Step2 Install operating system

Install VMware ESXi 5.0 on the RH5885 server.

Create a 1TB datastore using the LUN 'SYS'. Create a VM with a 30GB OS disk and a 512GB data disk on that datastore, and install the Red Hat Enterprise Linux 6.3 x64 operating system using the 'minimal system' option.

Add the following LUNs to the VM as raw device mappings (RDMs).

LUN     Virtual Device    Disk
ARC1    SCSI (1:1)        Disk 3
USR1    SCSI (2:0)        Disk 4
BAK1    SCSI (2:1)        Disk 5
USR2    SCSI (3:0)        Disk 6
BAK2    SCSI (3:1)        Disk 7
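The raw device mappings above can be added through the vSphere Client. If they are created from the ESXi command line instead, a physical-compatibility RDM pointer file can be generated with vmkfstools; the device identifier and paths below are placeholders:

# Create a pass-through (physical compatibility) RDM pointer for one LUN on the SYS datastore
vmkfstools -z /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx /vmfs/volumes/SYS/db2vm/ARC1_rdm.vmdk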

Step3 Change kernel parameter

# Edit the file '/etc/sysctl.conf' to configure the memory and other kernel parameters.

vi /etc/sysctl.conf

-------------------

# 256 * Mem_GB


kernel.shmmni=131072

# Mem_Bytes

kernel.shmmax=547608330240

# Mem_Bytes / 4096

kernel.shmall=133693440

# 250 256000 32 256*Mem_GB

kernel.sem=250 256000 32 131072

# 1024 * Mem_GB

kernel.msgmni=524288

vm.swappiness=0

vm.overcommit_memory=0

# Mem * 90% / HUGEPAGE_SIZE

vm.nr_hugepages=236032

--------------------

# Make the kernel parameters take effect immediately

sysctl -p

Step4 Add host name attribute

# Edit the file '/etc/sysconfig/network' to configure the host name

vi /etc/sysconfig/network

-------------------

HOSTNAME=db2srv

-------------------

# Edit the file '/etc/hosts' to add host name entries

vi /etc/hosts

-------------------

127.0.0.1 localhost

129.27.221.11 db2srv

-------------------


Step5 Install necessary packages and software

# Install AIO and C++ library, and VNC Server

yum install libaio.x86_64 libstdc++.x86_64 tigervnc-server.x86_64

# Download and install JRE 7

yum install jre-7u7-linux-x64.rpm

# Download and unpack TPCCRunner test tool

unzip TPCCRunner_BIN_V1.00.zip

mv TPCCRunner_BIN_V1.00 /opt/TPCCRunner

Step6 Disable SELinux and Firewall

# Edit the SELinux configuration file '/etc/selinux/config' to disable SELinux, turn off the firewall service, and reboot the system to make the settings take effect

vi /etc/selinux/config

-------------------

SELINUX=disabled

-------------------

chkconfig iptables off

reboot

Step7 Create File System

# Create ext3 file systems on devices from S5500T storage

printf "y\r" | mkfs.ext3 /dev/sdb # USR1

printf "y\r" | mkfs.ext3 /dev/sdc # BAK1

printf "y\r" | mkfs.ext3 /dev/sdd # HOME

printf "y\r" | mkfs.ext3 /dev/sde # ARC

printf "y\r" | mkfs.ext3 /dev/sdf # USR2

printf "y\r" | mkfs.ext3 /dev/sdg # BAK2

# Add auto-mount entries to '/etc/fstab'

vi /etc/fstab

--------------

/dev/sdd /opt/db2/home ext3 defaults 0 0

/dev/sde /opt/db2/arc ext3 defaults 0 0

/dev/sdb /opt/db2/usr1 ext3 defaults 0 0


/dev/sdc /opt/db2/bak1 ext3 defaults 0 0

/dev/sdf /opt/db2/usr2 ext3 defaults 0 0

/dev/sdg /opt/db2/bak2 ext3 defaults 0 0

--------------

# Create directory for the mount points

mkdir -p /opt/db2/home

mkdir -p /opt/db2/arc

mkdir -p /opt/db2/usr1

mkdir -p /opt/db2/bak1

mkdir -p /opt/db2/usr2

mkdir -p /opt/db2/bak2

# Mount the file systems

mount -a

Step8 Create Group and Users

# Create a group 'dasadm1'

groupadd dasadm1

# Add user 'dasusr1' to group 'dasadm1', create its home directory in '/home/'

useradd -g dasadm1 dasusr1 -b /home

passwd dasusr1

# Add user 'db2fenc1' to group 'dasadm1', create its home directory under '/opt/db2/home'

useradd -g dasadm1 db2fenc1 -b /opt/db2/home

passwd db2fenc1

# Add user 'db2inst1' to group 'dasadm1', create its home directory under '/opt/db2/home'

useradd -g dasadm1 db2inst1 -b /opt/db2/home

passwd db2inst1

# Change the owner of the mounted file systems to user 'db2inst1' and group 'dasadm1'

chown db2inst1:dasadm1 /opt/db2/arc

chown db2inst1:dasadm1 /opt/db2/usr1


chown db2inst1:dasadm1 /opt/db2/usr2

chown db2inst1:dasadm1 /opt/db2/bak1

chown db2inst1:dasadm1 /opt/db2/bak2

# Change the directory permissions to 'drwxr-xr-x'

chmod 755 /opt/db2/arc

chmod 755 /opt/db2/usr1

chmod 755 /opt/db2/usr2

chmod 755 /opt/db2/bak1

chmod 755 /opt/db2/bak2

Step9 Change limit

# Edit the limits configuration file to set the locked-memory, process-count, and open-file limits for user 'db2inst1'.

vi /etc/security/limits.conf

--------------------------------

# Mem_Bytes * 90%

db2inst1 soft memlock 483393536

db2inst1 hard memlock 483393536

db2inst1 soft nproc 16384

db2inst1 hard nproc 65536

db2inst1 soft nofile 16384

db2inst1 hard nofile 65536

--------------------------------

-- End.

4.2 Install database software

Set the environment variable 'DISPLAY' to point to a valid X server, then unpack the DB2 V10.1 Linux installation package and install the DB2 software on the server:

tar -zxf v10.1_linuxx64_server.tar.gz

cd server

./db2setup
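db2setup starts the graphical DB2 Setup wizard; after the wizard completes, the installation can be checked from the command line. A minimal sketch using standard DB2 utilities:

# List the installed DB2 copies and their installation paths
db2ls
# As the instance owner, confirm the instance level and start the instance
su - db2inst1
db2level
db2start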


4.3 Load and test on SMS table spaces

This section describes the detailed steps to load and test an OLTP workload on SMS table spaces. The benchmark workload is TPC-C-like, a widely used model for OLTP testing. The loading and testing tool 'TPCCRunner' is open-source software written in Java and available on https://sourceforge.net. The loading and testing scripts and configuration files are attached to the original PDF file; the following table lists each attachment and its description.

Attachment                 Description
create_db.sql              Create the database
create_sms_tbs.sql         Create the SMS table spaces
create_tb.sql              Create the tables
create_idx.sql             Create the indexes
loader.properties          TPCCRunner Loader configuration file
sms.master.properties      TPCCRunner Master configuration file
sms.slave1.properties      TPCCRunner Slave configuration file
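The attached SQL scripts themselves are not reproduced in this text. For orientation only, an SMS table space definition such as the ones in create_sms_tbs.sql might look like the sketch below; the table space name and container directories are assumptions (bpc is one of the buffer pools resized later in section 4.5):

-- Hypothetical SMS table space striped over directories on the two data file systems
CREATE BUFFERPOOL bpc SIZE 524288 PAGESIZE 4K;
CREATE TABLESPACE ts_cust PAGESIZE 4K
  MANAGED BY SYSTEM USING ('/opt/db2/usr1/cust', '/opt/db2/usr2/cust')
  EXTENTSIZE 128 PREFETCHSIZE AUTOMATIC BUFFERPOOL bpc;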

Step1 Change registry environments and database manager parameters

su - db2inst1

# Use large/huge page for memory allocation

db2set DB2_LARGE_PAGE_MEM=DB

# Trade storage capacity for maximum insert and update performance

db2set DB2MAXFSCRSEARCH=1

# Set max parallel I/O to 32 for any device

db2set DB2_PARALLEL_IO=*:32

# Set 512GB * 90% = 461GB memory for instance

db2 update dbm cfg using INSTANCE_MEMORY 120848384

Step2 Create database

# Create a directory 'hwdb' in which to create the database
mkdir ~/hwdb
# Create database 'hwdb'

db2 -t -f sql/create_db.sql

# Start database


db2 connect to hwdb

# Set 461GB - 1GB = 460GB of memory for the database

db2 update db cfg for hwdb using DATABASE_MEMORY 120586240

# Set the number of prefetchers and page cleaners to 1

db2 update db cfg for hwdb using NUM_IOSERVERS 1

db2 update db cfg for hwdb using NUM_IOCLEANERS 1

# Set the default extent size to 512KB

db2 update db cfg for hwdb using DFT_EXTENT_SZ 128

# Set five 64MB log files, issue soft checkpoint when 2 logs full

db2 update db cfg for hwdb using LOGFILSIZ 16384

db2 update db cfg for hwdb using LOGPRIMARY 5 SOFTMAX 200

db2 update db cfg for hwdb using LOGSECOND 250

# Set log archive to DISK directory

db2 update db cfg for hwdb using LOGARCHMETH1 "DISK:/opt/db2/arc"

# Restart database to make the parameters valid

db2 terminate

db2 connect to hwdb

# Enabling archive logging puts the database in backup-pending state, so back it up now

db2 backup db hwdb to /opt/db2/bak1,/opt/db2/bak2

db2 connect to hwdb

Step3 Load database

# Create SMS table spaces

db2 -t -f sql/create_sms_ts.sql

# Create TPC-C tables

db2 -t -f sql/create_tb.sql

# Populate TPC-C tables

cd /opt/TPCCRunner


java -cp bin/:lib/db2jcc4.jar iomark.TPCCRunner.Loader

loader.properties

cd -

# Create indexes and update statistics

db2 -t -f sql/create_idx.sql

db2 reorgchk update statistics

Step4 Execute TPCC-LIKE Test

# Start TPCCRunner Master program

java -cp bin/:lib/db2jcc4.jar iomark.TPCCRunner.Master

sms.master.properties > ~/log/sms.tpcc.log &

# On client1, start TPCCRunner Slave1

java -cp bin/:lib/db2jcc4.jar iomark.TPCCRunner.Slave

sms.slave1.properties

# Collect I/O and CPU performance

iostat -kx 60 30 > ~/log/sms.iostat.log &

mpstat -P ALL 60 30 > ~/log/sms.mpstat.log &

-- End.

4.4 Migrate objects to DMS table spaces

This section describes the steps to migrate the TPC-C schema objects from SMS table spaces to DMS table spaces, and the performance tests performed on the DMS table spaces to find out which type of table space performs better. The migration and testing scripts and configuration files are attached to the original PDF file; the following table lists each attachment and its description.

Attachment                 Description
create_bak_ts.sql          Create the backup table spaces
backup_tb.sql              Back up the TPC-C tables
drop_tb.sql                Drop the TPC-C tables
drop_ts.sql                Drop the table spaces
mklv.sh                    Make the logical volumes
create_dms_ts.sql          Create the DMS table spaces
create_tb.sql              Create the TPC-C tables
restore_tb.sql             Restore the TPC-C tables
create_idx.sql             Create the indexes
dms.master.properties      TPCCRunner Master configuration file
dms.slave1.properties      TPCCRunner Slave configuration file

Step1 Backup and then clean test data

# Create a new schema 'db2bak' to perform the backup operation

db2 create schema db2bak;

# Create backup table space

db2 -t -f sql/create_bak_ts.sql

# Backup TPC-C tables to schema 'db2bak'

db2 -t -f sql/backup_tb.sql

# Drop original TPC-C tables

db2 -t -f sql/drop_tb.sql

# Drop original SMS table spaces

db2 -t -f sql/drop_ts.sql

Step2 Create DMS table space and restore data

# Switch to user 'root'

su - root

# Unmount data directory

umount /opt/db2/usr1

umount /opt/db2/usr2

# Remove the auto mount registries

cat /etc/fstab | grep -v usr > fstab.new

mv fstab.new /etc/fstab

# Make logical volumes

sh mklv.sh

# Exit to user 'db2inst1'

exit


# Create DMS table spaces

db2 -t -f sql/create_dms_ts.sql

# Create TPC-C tables on DMS table spaces

db2 -t -f sql/create_tb.sql

# Restore data for new TPC-C tables

db2 -t -f sql/restore_tb.sql

# Re-create indexes and update statistics

db2 -t -f sql/create_idx.sql

db2 reorgchk update statistics

Step3 Execute TPCC-LIKE Test

# Start TPCCRunner Master program

java -cp bin/:lib/db2jcc4.jar iomark.TPCCRunner.Master dms.master.properties > ~/log/dms.tpcc.log &

# On client1, start TPCCRunner Slave1

java -cp bin/:lib/db2jcc4.jar iomark.TPCCRunner.Slave dms.slave1.properties

# Collect I/O and CPU performance

ls -l /dev/vgusr/ > ~/log/dms.iostat.log

iostat -kx 60 30 >> ~/log/dms.iostat.log &

mpstat -P ALL 60 30 > ~/log/dms.mpstat.log &

-- End.

4.5 Resize the buffer pool

This chapter describes the tests performed on the database with different buffer pool sizes and

layouts. The tests are performed to find out the performance effect of the buffer pool and to

provide a reference for deploying and testing DB2 on HUAWEI storage.


Step1 Test a range of buffer pool sizes

# Set the total size of buffer pool to 20G

db2 alter bufferpool bpdft size 524288;

db2 alter bufferpool bpc size 524288;

db2 alter bufferpool bpol size 2097152;

db2 alter bufferpool bps size 2097152;

# start TPCCRunner Master program

java -cp bin/:lib/db2jcc4.jar iomark.TPCCRunner.Master dms.master.properties

# on client1, start TPCCRunner Slave1

java -cp bin/:lib/db2jcc4.jar iomark.TPCCRunner.Slave dms.slave1.properties

# Re-run TPCCRunner with the corresponding number of users according
# to the response time, to find out the maximum possible user load

# Set the total size of buffer pool to 40G, 160G, and 320G respectively,
# and re-run the test as above

# Set the total size of buffer pool to 40G

db2 alter bufferpool bpdft size 1048576;

db2 alter bufferpool bpc size 1048576;

db2 alter bufferpool bpol size 4194304;

db2 alter bufferpool bps size 4194304;

# Set the total size of buffer pool to 160G

db2 alter bufferpool bpdft size 4194304;

db2 alter bufferpool bpc size 4194304;

db2 alter bufferpool bpol size 16777216;

db2 alter bufferpool bps size 16777216;

# Set the total size of buffer pool to 320G

db2 alter bufferpool bpdft size 8388608;

db2 alter bufferpool bpc size 8388608;

db2 alter bufferpool bpol size 33554432;

db2 alter bufferpool bps size 33554432;


Step2 Set the default buffer pool to 'ibmdefaultbp' for all the TPC-C table spaces

# Set the “ibmdefaultbp” buffer pool to 80G

db2 alter bufferpool ibmdefaultbp size 20971520;

# Compose the buffer pool: assign all TPC-C table spaces to 'ibmdefaultbp'

db2 alter tablespace tsdft bufferpool ibmdefaultbp;
db2 alter tablespace tsc0 bufferpool ibmdefaultbp;
db2 alter tablespace tsc1 bufferpool ibmdefaultbp;
db2 alter tablespace tsc2 bufferpool ibmdefaultbp;
db2 alter tablespace tsc3 bufferpool ibmdefaultbp;
db2 alter tablespace tsc4 bufferpool ibmdefaultbp;
db2 alter tablespace tsc5 bufferpool ibmdefaultbp;
db2 alter tablespace tsc6 bufferpool ibmdefaultbp;
db2 alter tablespace tsc7 bufferpool ibmdefaultbp;
db2 alter tablespace tsc8 bufferpool ibmdefaultbp;
db2 alter tablespace tsc9 bufferpool ibmdefaultbp;
db2 alter tablespace tsol0 bufferpool ibmdefaultbp;
db2 alter tablespace tsol1 bufferpool ibmdefaultbp;
db2 alter tablespace tsol2 bufferpool ibmdefaultbp;
db2 alter tablespace tsol3 bufferpool ibmdefaultbp;
db2 alter tablespace tsol4 bufferpool ibmdefaultbp;
db2 alter tablespace tsol5 bufferpool ibmdefaultbp;
db2 alter tablespace tsol6 bufferpool ibmdefaultbp;
db2 alter tablespace tsol7 bufferpool ibmdefaultbp;
db2 alter tablespace tsol8 bufferpool ibmdefaultbp;
db2 alter tablespace tsol9 bufferpool ibmdefaultbp;
db2 alter tablespace tss0 bufferpool ibmdefaultbp;
db2 alter tablespace tss1 bufferpool ibmdefaultbp;
db2 alter tablespace tss2 bufferpool ibmdefaultbp;
db2 alter tablespace tss3 bufferpool ibmdefaultbp;
db2 alter tablespace tss4 bufferpool ibmdefaultbp;
db2 alter tablespace tss5 bufferpool ibmdefaultbp;
db2 alter tablespace tss6 bufferpool ibmdefaultbp;
db2 alter tablespace tss7 bufferpool ibmdefaultbp;
db2 alter tablespace tss8 bufferpool ibmdefaultbp;
db2 alter tablespace tss9 bufferpool ibmdefaultbp;


# start TPCCRunner Master program

java -cp bin/:lib/db2jcc4.jar iomark.TPCCRunner.Master dms.master.properties

# on client1, start TPCCRunner Slave1

java -cp bin/:lib/db2jcc4.jar iomark.TPCCRunner.Slave dms.slave1.properties

# Re-run TPCCRunner with the corresponding number of users according
# to the response time, to find out the maximum possible user load

-- End.


5 Migrate objects to solid storage

This chapter describes the detailed steps to migrate TPC-C objects from traditional storage to

solid storage. In each migration step, a TPC-C-like test is performed to find the change of

transaction throughput and response time. The migration and testing scripts and configuration

files are attached in the PDF file. The following table lists the attachment names and

descriptions.

Attachment              Description

mklv_drd.sh Make logical volume on devices from Dorado

master.2000.properties TPCCRunner master configuration file

slave1.2000.properties TPCCRunner slave1 configuration file

migrate_tsdft.sql Migrate objects from table space ‘tsdft’ to ‘tsfdft’

migrate_tscN.sql Migrate objects from table space 'tsc[0-9]' to 'tsfc[0-9]'

migrate_tsolN.sql Migrate objects from table space 'tsol[0-9]' to 'tsfol[0-9]'

migrate_tssN.sql Migrate objects from table space 'tss[0-9]' to 'tsfs[0-9]'

5.1 Close database and VM

# Close database and instance

db2 force application all

db2stop

# Shutdown VM

su - root

halt

5.2 Map Dorado LUNs to VM

Add the following LUNs to the VM as RAW disk mappings.

LUN Virtual Device ID

FSYS SCSI (1:2)


FUSR1 SCSI (2:2)

FUSR2 SCSI (3:2)

Power on the VM. Because the original disk sequence may change, the entries in '/etc/fstab' and

'/etc/rc.d/rc.local' should be carefully updated to point to the corresponding devices; then

reboot the system.
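
Before editing the files, it can help to confirm which device name corresponds to which LUN. A minimal check, assuming the lsscsi package is available (the commands below are illustrative and not part of the original procedure):

# List SCSI devices with their sizes, and the persistent by-id names, to match them with the mapped LUNs
lsscsi -s
ls -l /dev/disk/by-id/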

After reboot, issue the following commands to start instance and database.

db2start

db2 connect to hwdb

5.3 Create file system and logical volumes

# Switch to user 'root'

su - root

# Change the I/O scheduler to 'noop' for the newly added solid-state devices

echo 'echo noop > /sys/block/sdg/queue/scheduler' >> /etc/rc.d/rc.local

echo 'echo noop > /sys/block/sdj/queue/scheduler' >> /etc/rc.d/rc.local

echo 'echo noop > /sys/block/sdd/queue/scheduler' >> /etc/rc.d/rc.local

sh /etc/rc.d/rc.local

# Make logical volumes on devices from LUN 'FUSR1' and 'FUSR2'

sh mklv_drd.sh

# Create ext3 file system on device from LUN 'FSYS'

printf "y\r" | mkfs.ext3 /dev/sdg # FSYS

# Add auto mount points to '/etc/fstab'

vi /etc/fstab

--------------

/dev/sdg /opt/db2/fsys ext3 defaults 0 0

--------------

# Create mount directory

mkdir -p /opt/db2/fsys

# Mount the newly created file system


mount -a

# Change the owner of the newly mounted file system to user 'db2inst1' and group 'dasadm1'

chown db2inst1:dasadm1 -R /opt/db2/fsys

# Change the directory privilege to 'drwxr-xr-x'

chmod 755 -R /opt/db2/fsys

# Exit to user 'db2inst1'

exit

5.4 Migrate redo log files

# Change the log path, restart the database, and record the restart time

db2 update db cfg using NEWLOGPATH "/opt/db2/fsys"

db2 terminate

db2 connect to hwdb

# Start TPCCRunner Master program

java -cp bin/:lib/db2jcc4.jar iomark.TPCCRunner.Master master.2000.properties > ~/log/log2ssd.2000.tpcc.log &

# On client1, start TPCCRunner Slave1

java -cp bin/:lib/db2jcc4.jar iomark.TPCCRunner.Slave slave1.2000.properties

# Collect I/O and CPU performance

ls -l /dev/vgfusr/ > ~/log/log2ssd.2000.iostat.log

iostat -kx 60 20 >> ~/log/log2ssd.2000.iostat.log &

mpstat -P ALL 60 20 > ~/log/log2ssd.2000.mpstat.log &

5.5 Migrate table space

Step1 Migrate objects on table space “tsdft”

# Migrate objects on table space 'tsdft' to SSD table space 'tsfdft',
# and record the execution time of the migration script

db2 -t -f sql/migrate_tsdft.sql | tee ~/log/dft2ssd.migrate.log


# Start TPCCRunner Master program

java -cp bin/:lib/db2jcc4.jar iomark.TPCCRunner.Master master.2000.properties > ~/log/dft2ssd.2000.tpcc.log &

# On client1, start TPCCRunner Slave1

java -cp bin/:lib/db2jcc4.jar iomark.TPCCRunner.Slave slave1.2000.properties

# Collect I/O and CPU performance

ls -l /dev/vgfusr/ > ~/log/dft2ssd.2000.iostat.log

iostat -kx 60 20 >> ~/log/dft2ssd.2000.iostat.log &

mpstat -P ALL 60 20 > ~/log/dft2ssd.2000.mpstat.log &

# Re-run TPCCRunner with more clients, slaves, and number of users,
# to find out the maximum possible user load; save the test
# results and performance logs to files with prefix 'dft2ssd.max'

# Start TPCCRunner Master program

java -cp bin/:lib/db2jcc4.jar iomark.TPCCRunner.Master dft2ssd.master.properties > ~/log/dft2ssd.max.tpcc.log &

# On clients, start TPCCRunner Slaves

java -cp bin/:lib/db2jcc4.jar iomark.TPCCRunner.Slave dft2ssd.slaveN.properties

# Collect I/O and CPU performance

ls -l /dev/vgfusr/ > ~/log/dft2ssd.max.iostat.log

iostat -kx 60 20 >> ~/log/dft2ssd.max.iostat.log &

mpstat -P ALL 60 20 > ~/log/dft2ssd.max.mpstat.log &

Step2 Migrate objects on table space “tsc[0-9]”

# Migrate objects on table space 'tsc[0-9]' to SSD table space 'tsfc[0-9]',
# and record the execution time of the migration script

db2 -t -f sql/migrate_tscN.sql | tee ~/log/c2ssd.migrate.log

# Start TPCCRunner Master program

java -cp bin/:lib/db2jcc4.jar iomark.TPCCRunner.Master master.2000.properties > ~/log/c2ssd.2000.tpcc.log &

# On client1, start TPCCRunner Slave1


java -cp bin/:lib/db2jcc4.jar iomark.TPCCRunner.Slave slave1.2000.properties

# Collect I/O and CPU performance

ls -l /dev/vgfusr/ > ~/log/c2ssd.2000.iostat.log

iostat -kx 60 20 >> ~/log/c2ssd.2000.iostat.log &

mpstat -P ALL 60 20 > ~/log/c2ssd.2000.mpstat.log &

# Re-run TPCCRunner with more clients, slaves, and number of users,
# to find out the maximum possible user load; save the test
# results and performance logs to files with prefix 'c2ssd.max'

# Start TPCCRunner Master program

java -cp bin/:lib/db2jcc4.jar iomark.TPCCRunner.Master c2ssd.master.properties > ~/log/c2ssd.max.tpcc.log &

# On clients, start TPCCRunner Slaves

java -cp bin/:lib/db2jcc4.jar iomark.TPCCRunner.Slave c2ssd.slaveN.properties

# Collect I/O and CPU performance

ls -l /dev/vgfusr/ > ~/log/c2ssd.max.iostat.log

iostat -kx 60 20 >> ~/log/c2ssd.max.iostat.log &

mpstat -P ALL 60 20 > ~/log/c2ssd.max.mpstat.log &

Step3 Migrate objects on table space “tsol[0-9]”

# Migrate objects on table space 'tsol[0-9]' to SSD table space 'tsfol[0-9]',
# and record the execution time of the migration script

db2 -t -f sql/migrate_tsolN.sql | tee ~/log/ol2ssd.migrate.log

# Start TPCCRunner Master program

java -cp bin/:lib/db2jcc4.jar iomark.TPCCRunner.Master master.2000.properties > ~/log/ol2ssd.2000.tpcc.log &

# On client1, start TPCCRunner Slave1

java -cp bin/:lib/db2jcc4.jar iomark.TPCCRunner.Slave slave1.2000.properties

# Collect I/O and CPU performance

ls -l /dev/vgfusr/ > ~/log/ol2ssd.2000.iostat.log


iostat -kx 60 20 >> ~/log/ol2ssd.2000.iostat.log &

mpstat -P ALL 60 20 > ~/log/ol2ssd.2000.mpstat.log &

# Re-run TPCCRunner with more clients, slaves, and number of users,
# to find out the maximum possible user load; save the test
# results and performance logs to files with prefix 'ol2ssd.max'

# Start TPCCRunner Master program

java -cp bin/:lib/db2jcc4.jar iomark.TPCCRunner.Master ol2ssd.master.properties > ~/log/ol2ssd.max.tpcc.log &

# On clients, start TPCCRunner Slaves

java -cp bin/:lib/db2jcc4.jar iomark.TPCCRunner.Slave ol2ssd.slaveN.properties

# Collect I/O and CPU performance

ls -l /dev/vgfusr/ > ~/log/ol2ssd.max.iostat.log

iostat -kx 60 20 >> ~/log/ol2ssd.max.iostat.log &

mpstat -P ALL 60 20 > ~/log/ol2ssd.max.mpstat.log &

Step4 Migrate objects on table space “tss[0-9]”

# Migrate objects on table space 'tss[0-9]' to SSD table space 'tsfs[0-9]',
# and record the execution time of the migration script

db2 -t -f sql/migrate_tssN.sql | tee ~/log/s2ssd.migrate.log

# Start TPCCRunner Master program

java -cp bin/:lib/db2jcc4.jar iomark.TPCCRunner.Master master.2000.properties > ~/log/s2ssd.2000.tpcc.log &

# On client1, start TPCCRunner Slave1

java -cp bin/:lib/db2jcc4.jar iomark.TPCCRunner.Slave slave1.2000.properties

# Collect I/O and CPU performance

ls -l /dev/vgfusr/ > ~/log/s2ssd.2000.iostat.log

iostat -kx 60 20 >> ~/log/s2ssd.2000.iostat.log &

mpstat -P ALL 60 20 > ~/log/s2ssd.2000.mpstat.log &

# Re-run TPCCRunner with more clients, slaves, and number of users,
# to find out the maximum possible user load; save the test
# results and performance logs to files with prefix 's2ssd.max'


# Start TPCCRunner Master program

java -cp bin/:lib/db2jcc4.jar iomark.TPCCRunner.Master s2ssd.master.properties > ~/log/s2ssd.max.tpcc.log &

# On clients, start TPCCRunner Slaves

java -cp bin/:lib/db2jcc4.jar iomark.TPCCRunner.Slave s2ssd.slaveN.properties

# Collect I/O and CPU performance

ls -l /dev/vgfusr/ > ~/log/s2ssd.max.iostat.log

iostat -kx 60 20 >> ~/log/s2ssd.max.iostat.log &

mpstat -P ALL 60 20 > ~/log/s2ssd.max.mpstat.log &

-- End.


6 Test results summary

6.1 SMS and DMS table space performance

The following chart shows the maximum supported active users when the TPC-C schema objects

are placed on SMS table spaces and on DMS table spaces. The performance of DMS table spaces is

better than that of SMS table spaces, so choose DMS when performance is the factor you mainly

focus on.

Figure 6-1 Maximum active users with SMS table space and DMS table space

6.2 Performance effect of buffer pool

The following chart shows the change of maximum active users (Act.Users) when setting

different buffer pool sizes for the database. The number of active users increases as the

buffer pool size grows.

(Data shown in Figure 6-1: SMS 2050 active users; DMS 2500 active users.)


Figure 6-2 Maximum active users with different size of buffer pool

The following chart shows the change of maximum active users (Act.Users) when splitting or

composing the buffer pool with 80G of memory. Splitting is a little better than composing, but

more complex. Follow the best practices in the last chapter of this paper to configure your

buffer pool.

Figure 6-3 Maximum active users when splitting or composing the buffer pool

(Data shown in Figure 6-2: 20G 1680, 40G 1950, 80G 2500, 160G 2800, 320G 2900 active users.)

(Data shown in Figure 6-3: Splitting 2500 active users; Composing 2350 active users.)


6.3 HDD and SSD performance

The following chart shows the change of response time when gradually migrating TPC-C schema

objects from HDDs to SSDs with 2000 active users. The response time (avg_db_rt) drops from

200ms to 22ms, that is, to 11% of the original value.

Figure 6-4 Change of response time with gradually migrating each table space

Description of the horizontal axis in the chart:

- HDD: all objects are on the HDDs

- log2ssd: after migrating redo log files to the SSDs

- dft2ssd: after migrating objects on table space “tsdft” to the SSDs

- c2ssd: after migrating objects on table space “tsc0” – “tsc9” to the SSDs

- ol2ssd: after migrating objects on table space “tsol0” – “tsol9” to the SSDs

- s2ssd: after migrating objects on table space “tss0” – “tss9” to the SSDs

(Data shown in Figure 6-4, avg_db_rt: HDD 200 ms, log2ssd 200 ms, dft2ssd 142 ms, c2ssd 129 ms, ol2ssd 123 ms, s2ssd 22 ms.)


The following chart shows the change of maximum active users when gradually migrating TPC-C

schema objects from HDDs to SSDs. The number of active users (Act.Users) increases from 2000 to

16000, an eight-fold increase (to 800% of the original value).

Figure 6-5 Maximum active users with migrating each table space

Description of the horizontal axis in the chart:

- HDD: all objects are on the HDDs

- log2ssd: after migrating redo log files to the SSDs

- dft2ssd: after migrating objects on table space “tsdft” to the SSDs

- c2ssd: after migrating objects on table space “tsc0” – “tsc9” to the SSDs

- ol2ssd: after migrating objects on table space “tsol0” – “tsol9” to the SSDs

- s2ssd: after migrating objects on table space “tss0” – “tss9” to the SSDs

(Data shown in Figure 6-5, Act.Users: HDD 2000, log2ssd 2000, dft2ssd 2535, c2ssd 3550, ol2ssd 3800, s2ssd 16000.)


6.4 Migration performance analyzing

When migrating DB2 objects, the migration window is very important. This chapter analyzes

the performance of the "LOAD FROM CURSOR" migration method, as well as the average

throughput when the time to re-create indexes and run RUNSTATS is included.

The following table shows the LOAD throughput and AVERAGE throughput of the 3 big

tables in the TPC-C model; the average throughput is 18 MB/s to 24 MB/s.

Table       Rows            Table Size (MB)   LOAD Time / TOTAL Time (s)   LOAD Throughput / AVERAGE Throughput (MB/s)

CUSTOMER    300,000,000     191837            4763 / 9428                  40.20 / 20.35

ORDER_LINE  3,046,349,357   238111            2339 / 12797                 101.81 / 18.61

STOCK       1,000,000,000   325729            5769 / 13598                 56.56 / 23.95

"TOTAL" time includes "LOAD" time, index re-creating time, and RUNSTATS time.

LOAD Throughput is the throughput in LOAD phase. AVERAGE Throughput is the average in the whole migration phase.


7 Best practices

7.1 Storage performance and capacity

HUAWEI provides a series of storage systems from low-end to high-end, and also provides solid

storage with low latency and very high performance. Before deploying a DB2 database, identifying

the performance and capacity needs is essential for choosing the storage model and configuring

the type and number of disks.

Estimate the storage resources based on the performance factor first, and then consider the

capacity factor.

OLTP performance design

In DB2 OLTP databases, the I/O pattern is high-frequency single-block random read/write

with little sequential multi-block pre-fetch; the read ratio is typically 40%-80%. For storage,

the main performance indicator is how many I/O requests can be processed per second.

We use the following formula to calculate RAID performance, and then estimate the disk spindle

requirements for user table spaces.

RAID level   OLTP IOPS formula             Example: 8 x 15K RPM SAS disks, 60% OLTP read ratio

RAID10       IOPS_DISK * N / (2 - R)       200 * 8 / (2 - 0.6) = 1142.85

RAID5        IOPS_DISK * N / (4 - 3R)      200 * 8 / (4 - 3 * 0.6) = 727.27

RAID6        IOPS_DISK * N / (6 - 5R)      200 * 8 / (6 - 5 * 0.6) = 533.33

IOPS_DISK stands for the I/O-per-second performance of a single disk. For HUAWEI storage, we use 8000 for SLC

SSD, 5000 for MLC SSD, 200 for 15K RPM HDD, 150 for 10K RPM HDD, and 50 for 7.2K RPM HDD.

N stands for the number of disks in the RAID group.

R stands for the OLTP read ratio (0-1).

Please consider the performance improvement from storage cache hits when using the formula.


OLAP performance design

In DB2 OLAP databases, the I/O pattern is multi-stream sequential read. For storage, the

main performance indicator is how much data can be transferred per second.

We use the following formula to calculate RAID performance, and then estimate the disk spindle

requirements for user table spaces.

RAID level   OLAP throughput formula (MB/s)        Example: 2 x 8Gbps FC, 8 x 15K RPM SAS disks

RAID10       MIN ( MBPS_PATH , MBPS_DISK * N )     MIN (1600, 50 * 8) = 400

RAID5        MIN ( MBPS_PATH , MBPS_DISK * N )     MIN (1600, 50 * 8) = 400

RAID6        MIN ( MBPS_PATH , MBPS_DISK * N )     MIN (1600, 50 * 8) = 400

MBPS_DISK stands for the multi-stream sequential read throughput of a single disk. For HUAWEI storage, we use

100 for SSD, 50 for 15K RPM HDD, 30 for 10K RPM HDD, and 20 for 7.2K RPM HDD.

N stands for the number of disks in the RAID group.

MBPS_PATH stands for the total throughput of the connections between all database hosts and the storage.

After estimating the disk spindle requirements for user table spaces, calculate the size of

tables and indexes to choose a proper disk type and capacity.

Size of tables and indexes

We use the following formula to calculate the size of tables and indexes:

ETS = ARS * (CNR + RID * PYO * 365) / FF * (1+BGY)^(PYO-1) * ( 1 + GOH )

* ETS – Estimated Table Size

* ARS – Average Record Size

* CNR – Current Number of Records

* RID – Records Increase per Day

* PYO – Planed Years of Ownership

* FF – Fill Factor

* BGY – Business Growth per Year

* GOH – Global Over Head
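
As an illustrative calculation only (the input values below are assumptions, not measurements from this test environment), with ARS = 200 bytes, CNR = 10,000,000, RID = 50,000, PYO = 3, FF = 0.9, BGY = 0.2, and GOH = 0.1:

ETS = 200 * (10,000,000 + 50,000 * 3 * 365) / 0.9 * (1 + 0.2)^(3-1) * (1 + 0.1)
    = 200 * 64,750,000 / 0.9 * 1.44 * 1.1
    = 12.95 GB / 0.9 * 1.44 * 1.1
    = 22.8 GB (approximately)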

After estimating the user table spaces, you can move on to the other database areas. For

performance and reliability, put user table spaces, transaction log files, the archive log path,

and the backup path on separate physical disks.


Transaction log files are small, important, and latency sensitive. Commonly, use RAID10 with

four 15K RPM disks to service the transaction log, and do not share the RAID group with other I/O

intensive areas.

For OLTP database, temporary table spaces should be larger than the biggest index in the

database. For OLAP database, temporary table spaces should be as large as user table spaces.

Use SMS table space for temporary table spaces. Use RAID10 for temporary table spaces to

improve the write performance for sorting operation.

Keep the capacity of archive log path larger than the total size of log files generated between

two database full backup operations. Use RAID5 to store archive log.

Keep the capacity of backup path larger than the total size of backup sets generated in the

backup retain period. Use RAID5 to store backup set.

Enough space should be left for diagnostic log files of the database.

After all estimates are done, choose the storage series based on the performance test report provided

by HUAWEI, configure enough disk resources, and run benchmarks on nonproduction systems to

estimate the resource consumption of the workload and to plan the needed capacity.

7.2 Table space design

DMS or SMS table space

Choose DMS table space for performance and SMS table space for easier maintenance. DMS

table spaces using device containers perform the best. DMS table spaces with file containers,

or SMS table spaces, are also reasonable choices for OLTP workloads if maximum

performance is not required. Using DMS table spaces with file containers, where FILE

SYSTEM CACHING is turned off, can perform at a level comparable to DMS raw table

space containers.

Page size

For OLTP applications that perform random row read and write operations, use a smaller page

size because it does not waste buffer pool space with unwanted rows. A 4KB page size is

recommended by HUAWEI for most OLTP databases.

For OLAP applications that perform sequential row scan operations, use a larger page size

because it is most efficient for transferring data. A 32KB page size is recommended by

HUAWEI for most OLAP databases.

Extent size and Pre-fetch size

With little or no sequential I/O expected, the settings for the EXTENTSIZE and the

PREFETCHSIZE parameters on the CREATE TABLESPACE statement do not have a substantial

effect on I/O efficiency. The value of the PREFETCHSIZE parameter on the CREATE TABLESPACE

statement should be set to the value of the EXTENTSIZE parameter multiplied by the

number of device containers, and EXTENTSIZE should be set to the value of the LUN stripe size

configured in the storage system.


OVERHEAD and TRANSFERRATE

The OVERHEAD setting on the CREATE TABLESPACE statement specifies the I/O

controller overhead and the disk seek and latency time. This value is used to determine the cost of

I/O during query optimization. Set OVERHEAD to 5 for 15K RPM disks and 0.5 for SSDs.

The TRANSFERRATE setting on the CREATE TABLESPACE statement specifies the time

to read one page into memory. This value is also used to determine the cost of I/O during query

optimization. The default value is usually adequate.
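
The following statement is a minimal sketch combining the page size, extent size, pre-fetch size, and OVERHEAD recommendations above. The table space name, container paths, and sizes are illustrative assumptions and must be adapted to your own layout (4KB pages, 512KB extents, two containers):

CREATE TABLESPACE ts_example
  PAGESIZE 4 K
  MANAGED BY DATABASE
  USING (FILE '/opt/db2/usr1/ts_example_c0.dbf' 50 G,
         FILE '/opt/db2/usr2/ts_example_c1.dbf' 50 G)
  EXTENTSIZE 128        -- 128 x 4KB pages = 512KB, matching the LUN stripe size
  PREFETCHSIZE 256      -- EXTENTSIZE x number of containers (2)
  OVERHEAD 5            -- 15K RPM disks; use 0.5 for SSDs
  NO FILE SYSTEM CACHING;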

Data placement in table spaces

Create database objects that need to be recovered together in the same table space for easier

backup and restore capabilities.

Assign a buffer pool to temporary table spaces for their exclusive use to increase the

performance of activities such as sorts or joins. Create one system temporary table space for

each page size. Use SMS table spaces for temporary table spaces.

Define smaller buffer pools for seldom-accessed data or for applications that require random

access into a large table.

Store LOB or LONG data in SMS table spaces or DMS spaces with file containers so that file

system caching might provide buffering and, as a result, better performance.

Create a single file system on each LUN and dedicate it to a single partition DB2 database.

RAID and LUN configuration

Choose RAID10 for write intensive OLTP database and RAID5 for OLAP database.

Use 15K RPM disks or SSDs for user table spaces.

The number of disks in a RAID group should be no more than 12. For RAID5, configuring 5, 7, 9,

or 11 disks is recommended. For RAID6, configuring 6, 8, 10, or 12 disks is recommended.

For RAID10, any even number of disks up to 12 is reasonable.

Distribute the owning controller of the LUNs evenly across the two storage controllers.

For write intensive RAID5 or RAID6, the stripe size should be smaller than or equal to 1MB. For

read intensive RAID5 or RAID6, configure the LUN stripe width (stripe unit) to 512KB. For

RAID10, configure the LUN stripe width (stripe unit) to 512KB.

Choose “intelligent pre-fetch” for OLTP LUNs and “none pre-fetch” for OLAP LUNs.

Choose “write back with mirroring” for HDD LUNs and “write through” for SSD LUNs.

7.3 Buffer pool design

Use AUTOMATIC for the NUM_IOCLEANERS, NUM_IOSERVERS and PREFETCHSIZE

parameters.

Use the self-tuning memory manager (STMM) and other automatic features to provide stability and strong performance.


Buffer pool hit ratios are a fundamental metric for buffer pool monitoring. They give an

important overall measure of how effectively the system is in using memory to reduce disk

I/O. Hit ratios of 80-85% or higher for data and 90-95% or higher for indexes are typically

considered good for an OLTP environment. These ratios can be calculated for individual

buffer pools using data from the buffer pool snapshot or the “db2pd -bufferpools” command.

Keep frequently used read-only or read-mostly data in a single table space. Do not mix read-

only or read-mostly with heavily write intensive tables.

The default value of chngpgs_thresh configuration parameter is 60%, which is normally too

high for OLTP workloads. A value between 20% and 40% is more appropriate.

Use the improved proactive page cleaning algorithm by setting the

DB2_USE_ALTERNATE_PAGE_CLEANING registry variable to YES. This new

algorithm eliminates bursty cleaning that is generally associated with the chngpgs_thresh and

softmax database configuration parameters. If you set this registry variable to YES, the

setting of the chngpgs_thresh configuration parameter has no effect.
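
A minimal sketch of these recommendations as commands for the 'hwdb' database used in this paper (the values are starting points, not tuned results; the registry change takes effect after an instance restart):

db2 update db cfg for hwdb using SELF_TUNING_MEM ON
db2 update db cfg for hwdb using NUM_IOCLEANERS AUTOMATIC NUM_IOSERVERS AUTOMATIC
db2 update db cfg for hwdb using CHNGPGS_THRESH 30
db2set DB2_USE_ALTERNATE_PAGE_CLEANING=YES
# Check buffer pool hit ratios after a test run
db2pd -db hwdb -bufferpools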

7.4 Table and index design

Range partitioned table

Table partitioning can be used for large tables to provide easier maintenance and better query

performance. The DB2 optimizer performs range elimination and scans only the relevant

partitions to improve the query performance. Online maintenance of range partitioned table is

intended to be easier and reduce overall administration costs on large tables because of the

following features:

BACKUP, RESTORE, and RUNSTATS commands can be run at the individual table

partition level.

Table partitions can be easily rolled in and rolled out of the database.

Flexible index placement.

Use range partitioned tables under the following conditions:

Your application requires a larger table capacity.

Your data can be logically organized into several data partitions based on one or more

column value ranges.

Your application requires fast online roll-in and roll-out of a large range of data.

Your business requires backup and restore of individual data partitions instead of an entire

table. Placing data partitions in different table spaces allows backing up and restoring

a specific range of data.

You want increased query performance through partition elimination and local indexes.

Your business objectives include better data lifecycle management.
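
Where these conditions apply, a range partitioned table can be defined as in the following sketch (the table, columns, and date range are illustrative assumptions, not part of the TPC-C schema used in this paper):

CREATE TABLE sales_hist (
    sale_date DATE NOT NULL,
    amount    DECIMAL(12,2)
) PARTITION BY RANGE (sale_date)
  (STARTING FROM ('2012-01-01') ENDING ('2012-12-31') EVERY 3 MONTHS);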


Index

Use an index only where a clear advantage for frequent access exists.

Use the columns that best match the most frequently used queries as index keys.

Use include columns for two or more columns that are frequently accessed together to enable

index-only access for queries.

Use partitioned indexes for partitioned tables.

Create a clustered index on columns that have range predicates.

Specify a PCTFREE value to reduce the need for index reorganization. For OLTP workloads

with significant insert or update operations, use a large PCTFREE value.
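
An illustrative index following these guidelines (the table and column names are assumptions; note that INCLUDE columns require a unique index in DB2):

CREATE UNIQUE INDEX idx_order_id ON orders (order_id)
    INCLUDE (customer_id, order_date)   -- enables index-only access for frequent queries
    PCTFREE 20                          -- leave free space to delay index reorganization
    COLLECT STATISTICS;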

7.5 Transaction log design

Circular logging or archive logging

In development and test environments, where the ability to roll forward through the transaction

logs is not essential, use circular logging to simplify database administration.

Use archive logging in production environments to be able to perform many recovery

operations, including online backup, incremental backup, online restore, point-in-time

roll-forward, and the RECOVER DATABASE command.

Size and number of log files and soft checkpoint max

The size of log files is defined by the LOGFILSIZ database configuration parameter. The

database configuration parameter SOFTMAX is used to influence the number of logs that

need to be recovered following a crash (such as a power failure). The default value of

SOFTMAX is 100, which means the database manager will try to keep the number of logs

that need to be recovered to 1.

You should configure LOGFILSIZ and SOFTMAX depending on your acceptable recovery

window. For small and medium OLTP databases, setting LOGFILSIZ to 64MB and

SOFTMAX to 100 or 200 results in a reasonable crash recovery window.

The number of log files is defined by the LOGPRIMARY configuration parameter, the value of

which depends on the transaction length and the frequency of commits. For OLTP databases,

commits are frequent: if circular logging is used, set LOGPRIMARY to 10

or more; if archive logging is used, set LOGPRIMARY to 5 or less. For databases

with very long transactions, increase the value of LOGPRIMARY. In all cases, set the

LOGSECOND configuration parameter to a relatively large value to avoid log-full conditions.
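
As a sketch of these settings for the 'hwdb' database used earlier in this paper (64MB log files and archive-logging values; adjust to your own recovery window and transaction volume):

db2 update db cfg for hwdb using LOGFILSIZ 16384 SOFTMAX 200
db2 update db cfg for hwdb using LOGPRIMARY 5 LOGSECOND 250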

Mirroring log path

You can set an alternate path for transaction logging by using the mirrorlogpath database

configuration parameter. If this parameter is set, the database manager creates active log files

in both the log path and the mirror log path. All log data is written to both paths, increasing

protection from accidental loss of a log file.
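
For example (the mirror path below is an assumption; place it on a physical device separate from the primary log path):

db2 update db cfg for hwdb using MIRRORLOGPATH /opt/db2/mirrorlog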


RAID and LUN configuration

Choosing RAID10 with 4 disks for log storage is reasonable for most databases.

Choose a 512KB stripe width (stripe unit) and the "write back with mirroring" write policy.

Because of the "write back" policy, the log sync latency is always very low, no matter whether

you use HDDs or SSDs.

7.6 Tiering storage design

Disk types

HUAWEI storage supports 5 kinds of disks with different speeds: 7.2K RPM magnetic disk, 10K

RPM magnetic disk, 15K RPM magnetic disk, MLC solid disk, and SLC solid disk. In addition,

HUAWEI servers support PCIE solid cards with the lowest latency and the highest I/O

throughput. The performance of the 6 kinds of media is listed below; choose among them for

DB2 database objects with different I/O characteristics and access frequencies.

Media type              OLTP Latency   OLTP IOPS   OLAP Throughput

7.2K RPM magnetic disk  20 ms          30          20 MB/s

10K RPM magnetic disk   15 ms          150         30 MB/s

15K RPM magnetic disk   10 ms          200         50 MB/s

MLC solid disk          2 ms           5000        80 MB/s

SLC solid disk          1 ms           8000        100 MB/s

Solid PCIE card         0.1 ms         100000      1 GB/s

Frequently accessed objects

To upgrade or optimize your system, first use the 'db2pd -tcbstats index' command to identify

the size and access frequency of all tables and indexes in the current database, and then migrate

the hottest objects to solid storage step by step until the performance is ideal for your

business.

For a newly-deployed system, run a benchmark test on the system, use the 'db2pd -tcbstats

index' command to identify the size and access frequency of all tables and indexes in the current

database, and then migrate the hottest objects to solid storage.
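
For example (a sketch; the database name and log directory follow the conventions used earlier in this paper):

db2pd -db hwdb -tcbstats index > ~/log/tcbstats.log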

Log files

The log file sync latency is reasonable when putting log files on HDD LUNs with the "write back"

policy, so it is unnecessary to put log files on SSD LUNs when your budget is limited.

Archive log destination and backup destination

7.2K RPM NL-SAS disk is reasonable for archive log destination and backup destination.


7.7 High availability and reliability

Identify and eliminate single points of failure (SPOF) in the business infrastructure. The

following figure shows a typical no-SPOF SAN network for a DB2 shared-disk cluster. In the

SAN network, all components are redundant, including servers, HBAs, SAN switches, and

storage controllers.

Figure 7-1 High availability and reliability architecture

Implement high availability and disaster recovery solutions in all layers of business

infrastructure such as, database, application, and middleware.

Use separate high performing disks for data, transaction logs, and archived logs.

Use mirrored logs for redundancy.

Create a backup and restore plan for backing up databases, table spaces, and transaction logs.

Create enough hot spares to avoid data loss after a possible RAID degradation.

Some or all aspects of reliability, availability, and scalability can be achieved by

implementing the following solutions.

Shared disk cluster

It provides high availability on node failure in the cluster. This solution provides only high

availability and does not offer scalability, disaster recovery, or protection against disk

corruption.


Disk mirroring technology

There are many solutions that provide commercial disk mirroring technology for

implementing high availability or disaster recovery with shared disk cluster solution. However,

these solutions do not completely protect you against disk corruption. If the source disk is

corrupted, the corrupted data is propagated to the target as well. Moreover, this solution does

not offer instantaneous failover capability, which is critical for 24x7 business.

DB2 High Availability Disaster Recovery feature

It is a low-cost and easy to manage replication solution. It provides high availability and

disaster recovery solution for both partial and complete site failures. It also provides

instantaneous failover.

DB2 pureScale® feature

It is a shared disk architecture that allows business enterprise to transparently scale OLTP

clusters dynamically on demand. It provides unlimited capacity, reliability, and continuous

availability.

Partitioned database environments

A partitioned database environment is a shared-nothing architecture that allows the database

manager to scale to hundreds of terabytes of data and hundreds of CPUs across multiple

database partitions to form a single, large database server. These partitions can be located

within a single server, across several physical machines, or a combination. The database data

is distributed across multiple database partitions, offering tremendous scalability and

workload parallelization across these partitions.

Typical OLTP workloads are short-running transactions that access a few random rows of a table.

Partitioned database environments are better suited for data warehouse and business

intelligence workloads.

Virtual Hypervisor cluster

When using DB2 with a virtual hypervisor such as VMware or Hyper-V, creating a failover

cluster at the hypervisor level also provides high availability and reliability

for the DB2 database.

Backup policy

Create a backup and restore plan for backing up databases, table spaces, and transaction logs.

Incremental backup

Consider enabling the trackmod database configuration parameter for incremental backups to

track database modifications so that the BACKUP DATABASE command can determine

which subsets of database pages should be included in the backup image for either database

backups or table space backups.
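
A minimal sketch for the 'hwdb' database (the backup path is the one used earlier in this paper; a full backup taken after enabling trackmod is required before the first incremental backup):

db2 update db cfg for hwdb using TRACKMOD YES
db2 backup db hwdb online to /opt/db2/bak1 include logs
db2 backup db hwdb online incremental to /opt/db2/bak1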

7.8 OS and instance parameters

Queue depth

When using Linux in an FC SAN network, the default queue depth for a LUN is 32, and it is

unnecessary to change it for most scenarios. If you find it is too small for your business, install the newest FC HBA drivers and set a larger queue depth when installing them.


When using VMware ESX 4 or 5 as the virtualization hypervisor, the default queue depth for a

LUN is 32; create more LUNs to increase the total queue depth. The default queue depth for a

virtual SCSI adapter is 64, so distributing I/O intensive virtual disks evenly across several SCSI

adapters is recommended.

In AIX, set AIO parameter “maxservers” to a reasonable value (typically 10 *

Num_of_CPUs), set disk parameter “queue_depth” to a reasonable value (typically 32), and

set adapter parameter “num_cmd_elems” to a reasonable value (typically 128) to eliminate

possible queuing.

The DB2 registry variable DB2_PARALLEL_IO should be set to the same value as the

disk queue depth (typically 32).
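
For example (a sketch; '*' applies the setting to all table spaces):

db2set DB2_PARALLEL_IO=*:32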

Linux block device I/O scheduler

DB2 is increasingly deployed on the Linux operating system. There are 4 kinds of block device

I/O schedulers: "noop", "anticipatory", "deadline" and "cfq". For traditional magnetic disk

LUNs, choose the "deadline" scheduler to reduce I/O latency. For solid disks, choose the "noop"

scheduler to simplify the I/O path and improve performance.

The block device I/O scheduler can be set globally for all devices or for a specific device. Adding

"elevator=xxx" at the end of the kernel line in the grub configuration file and rebooting the OS makes the scheduler

"xxx" valid on all devices. The command "echo xxx > /sys/block/sdx/queue/scheduler" changes the

I/O scheduler of "/dev/sdx" to "xxx" immediately, but it is restored to the default after reboot.

Multi-path best practices

Install HUAWEI UltraPath software with the newest version for multi-path selection when

deploy DB2 on HUAWEI storage only.

When using HUAWEI storage with other vendor‟s, use OS default multi-path feature or

install UltraPath depending on the software‟s compatibility list.

When using VMware ESX hypervisor, using “VMW_SATP_LSI” plug-in with

“VMW_PSP_RR” policy for path selection, and config the round-robin policy as “iops” for

each HUAWEI device.

Use the following commands in the ESXi 5.0 command line to set the multi-path policy for HUAWEI storage

devices; the settings take effect on the next reboot:

esxcli storage nmp satp rule add --satp=VMW_SATP_LSI --vendor="HUAWEI" --description

"HUAWEI SAN Storage"

esxcli storage nmp satp set --default-psp=VMW_PSP_RR --satp VMW_SATP_LSI

Use the following commands to set the round-robin policy for "VMW_PSP_RR" after reboot:

for lun in `esxcli storage nmp device list | grep HUAWEI | sed 's/^.*(//g' | sed 's/)//g'`; do

esxcli storage nmp psp roundrobin deviceconfig set --device $lun --type=iops --iops=1 --bytes=1048576


esxcli storage nmp psp roundrobin deviceconfig get --device $lun

done

Large/Huge page

Use large/huge pages to allocate the instance memory when the size of the instance memory is larger

than 100GB.

To use huge pages in Linux, first set the kernel parameter "vm.nr_hugepages" to allocate memory

for huge pages, then set "memlock" for the instance user in the limit configuration file "/etc/security/limits.conf", and

finally set the DB2 registry variable "DB2_LARGE_PAGE_MEM" to "DB".
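
A minimal sketch of the three steps (the huge page count is an assumption; 102400 x 2MB pages reserve about 200GB, and 'db2inst1' is the instance owner used in this paper):

echo "vm.nr_hugepages = 102400" >> /etc/sysctl.conf
sysctl -p
echo "db2inst1 soft memlock unlimited" >> /etc/security/limits.conf
echo "db2inst1 hard memlock unlimited" >> /etc/security/limits.conf
db2set DB2_LARGE_PAGE_MEM=DB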

Amount of FSCRs to search

The DB2 registry variable DB2MAXFSCRSEARCH specifies the number of free space

control records (FSCRs) to search when adding a record to a table. The default is to search five

FSCRs, which has a significant effect on performance. Modifying this value allows you to

balance insert speed with space reuse. Use large values to optimize for space reuse; use small

values to optimize for insert speed. Setting it to 1 FSCR is recommended by HUAWEI

because performance is usually a bigger concern than storage capacity nowadays.
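
For example:

db2set DB2MAXFSCRSEARCH=1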


8 Terms and Abbreviations

ATS Atomic Test and Set

CIFS Common Internet File System

CLI Command Line Interface

CPU Central Processing Unit

DBM Database Manager

DDR Double Data Rate

DHCP Dynamic Host Configuration Protocol

DMS Database Managed Space

DPF Data Partitioning Feature

DST Dynamic storage tiering

EDUs Engine Dispatchable Units

ECC Error Correcting Code

eMLC Enterprise Multi-level Cell

ESXi Bare-metal embedded hypervisor that runs directly on server hardware

FC Fibre Channel

FCoE Fibre Channel over Ethernet

FTP File Transfer Protocol

GUI Graphical user interface

HBA Host Bus Adapter

HDD Hard Disk Drive

HTTP Hypertext Transfer Protocol

IOPS I/O per second


iSCSI SCSI over IP

ISM Integrated Management Software of Huawei Unified Storage System

LUN Logical Unit Number

LZ Lempel-Ziv

MLC Multi-level cell

NFS Network File System

NL SAS Nearline Serial Attached SCSI

OLAP Online Analytical Processing

OLTP Online Transaction Processing

RDM Raw Device Mapping

PCIE Peripheral Component Interconnect Express

RAID Redundant Array of Inexpensive Disks

ROI Return on investment

RPM Revolutions per Minute

SAN Storage Area Network

SAS Serial Attached SCSI

SATA Serial ATA

SLC Single-Level Cell

SMS System Managed Space

SPOF Single point of failure

SSD Solid State Disk

STMM Self-tuning memory manager

VAAI vStorage APIs for Array Integration

VDI Virtual Desktop Infrastructure

VM Virtual Machine

VMFS Virtual Machine File System


Copyright © Huawei Technologies Co., Ltd. 2012. All rights reserved.

No part of this document may be reproduced or transmitted in any form or by any means without prior written consent of Huawei Technologies Co., Ltd.

Trademark Notice

HUAWEI and the HUAWEI logo are trademarks or registered trademarks of Huawei Technologies Co., Ltd.

Other trademarks, product, service and company names mentioned are the property of their respective owners.

General Disclaimer

The information in this document may contain predictive statements including, without limitation,

statements regarding the future financial and operating results, future product portfolio, new

technology, etc. There are a number of factors that could cause actual results and developments to

differ materially from those expressed or implied in the predictive statements. Therefore, such

information is provided for reference purpose only and constitutes neither an offer nor an

acceptance. Huawei may change the information at any time without notice.

HUAWEI TECHNOLOGIES CO., LTD.

Huawei Industrial Base

Bantian Longgang

Shenzhen 518129, P.R. China

Tel: +86-755-28780808

www.huawei.com

PROVIDED BY HUAWEI STORAGE PERFORMANCE LAB