
Page 1

Wowza! Great Tips I've Learned About DB2 LUW 10

Melanie Stopfer

IBM Software Group

EMAIL: [email protected]

TWITTER: @mstopfer1

LinkedIn: Melanie Stopfer

Melanie Stopfer is a Consulting Learning Specialist and Developer for IBM Software Group. As a Certified DB2 10.1 Advanced Technical Expert and Learning Facilitation Specialist, she has provided in-depth technical support to IM customers since 1988, specializing in recovery, performance, and database upgrade and migration best practices. In 2009, Melanie was the first DB2 LUW speaker to be inducted into the IDUG Speaker Hall of Fame and was again selected Best Overall Speaker at IDUG EMEA 2011 and 2012 and IDUG NA 2012.


Page 2

Objectives:

• Describe DB2 10 performance tips.

• Utilize DB2 10 problem determination tricks.

• Analyze DB2 10 tips to improve utilities.

• Learn DB2 10 monitoring tricks to improve performance and problem determination.

• Implement commonly overlooked DB2 10 features.


Abstract:

Look at all the great hints and tips I’ve learned about DB2 LUW 10. Melanie will present tips and tricks that she has learned about DB2 LUW. Many times these small but great enhancements are overlooked. The goal is to provide you with an arsenal of tips you can use in various database administration situations. Come learn DB2 LUW 10 ideas, tips and tricks that you need to know.


Page 3

Monitor logging stats using 10.1 table function MON_GET_TRANSACTION_LOG

Log Used (Meg) Log Space Free (Meg) Pct Used Max Log Used (Meg) Max Sec. Used (Meg)
-------------- -------------------- -------- ------------------ -------------------
             2                   55        4                  2                   0

LOG_TO_REDO_FOR_RECOVERY LOG_HELD_BY_DIRTY_PAGES
------------------------ -----------------------
                 2765074                 2765074

SELECT
  int(total_log_used/1024/1024) as "Log Used (Meg)",
  int(total_log_available/1024/1024) as "Log Space Free (Meg)",
  int((float(total_log_used) / float(total_log_used+total_log_available))*100) as "Pct Used",
  int(tot_log_used_top/1024/1024) as "Max Log Used (Meg)",
  int(sec_log_used_top/1024/1024) as "Max Sec. Used (Meg)",
  log_to_redo_for_recovery,
  log_held_by_dirty_pages
FROM table (MON_GET_TRANSACTION_LOG(-2)) as tlog

Great for scripting and history tracking. Previously: get snapshot for database | grep -i log

Love It

The MON_GET_TRANSACTION_LOG table function returns information about the transaction logging subsystem for the currently connected database. The function returns some new monitor statistics like LOG_HADR_WAIT_TIME, which shows the time spent waiting on HADR processing for a database.

The table function MON_GET_TRANSACTION_LOG can be used to replace the SNAPDB view for access to log related statistics.
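A minimal sketch of watching the HADR-related logging elements from the same function (assuming the LOG_HADR_WAIT_TIME and LOG_HADR_WAITS_TOTAL elements described above; the per-wait average arithmetic is illustrative):

SELECT MEMBER,
       LOG_HADR_WAIT_TIME,      -- time spent waiting on HADR log shipping
       LOG_HADR_WAITS_TOTAL,    -- number of HADR log-shipping waits
       CASE WHEN LOG_HADR_WAITS_TOTAL > 0
            THEN LOG_HADR_WAIT_TIME / LOG_HADR_WAITS_TOTAL
            ELSE 0 END AS AVG_WAIT_PER_FLUSH
FROM TABLE (MON_GET_TRANSACTION_LOG(-2)) AS tlog;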

Page 4

MON_GET_AUTO_MAINT_QUEUE table function – Get information about the automatic maintenance jobs

SELECT MEMBER, QUEUE_POSITION,

JOB_STATUS, JOB_TYPE,

VARCHAR(DB_NAME, 10) AS DB_NAME,

OBJECT_TYPE,

VARCHAR(OBJECT_SCHEMA, 10) AS OBJECT_SCHEMA,

VARCHAR(OBJECT_NAME, 10) AS OBJECT_NAME

FROM TABLE(MON_GET_AUTO_MAINT_QUEUE()) AS T

ORDER BY MEMBER, QUEUE_POSITION ASC

MEMBER QUEUE_POSITION JOB_STATUS JOB_TYPE DB_NAME OBJECT_TYPE OBJECT_SCHEMA OBJECT_NAME
------ -------------- ---------- -------- ------- ----------- ------------- -----------
     0              1 EXECUTING  RUNSTATS SAMPLE  TABLE       TEST          EMPLOYEE
     0              2 QUEUED     REORG    SAMPLE  TABLE       TEST          T1
     0              3 QUEUED     REORG    SAMPLE  TABLE       TEST          BLAH

What ASYNC reorg, runstats, backups scheduled?

OTHER COLUMNS: JOB_DETAILS, JOB_PRIORITY, MAINT_WINDOW_TYPE,

QUEUE_ENTRY_TIME, EXECUTION_START_TIME, EARLIEST_START_TIME

The MON_GET_AUTO_MAINT_QUEUE table function returns information about all automatic maintenance jobs (with the exception of real-time statistics which does not submit jobs on the automatic maintenance queue) that are currently queued for execution by the autonomic computing daemon (db2acd).

There is a separate automatic maintenance queue per member. Note: Jobs in the automatic maintenance queue are ordered first by earliest start time (such as start of next maintenance window where job may run), priority (for entries with same earliest start time) and queue entry time (for entries with the same earliest start time and priority).

The MON_GET_AUTO_MAINT_QUEUE interface provides a drill down for when the state is AUTOMATED, providing details about where the maintenance job is in the auto maintenance queue, and what other jobs are ahead of the job in the queue. The MON_GET_AUTO_MAINT_QUEUE table function does not report any automatic maintenance jobs if automatic maintenance is not enabled.

DBNAME, JOB_TYPE, JOB_DETAILS, JOB_PRIORITY, MAINT_WINDOW_TYPE, QUEUE_POSITION, QUEUE_ENTRY_TIME, EXECUTION_START_TIME, EARLIEST_START_TIME are also columns that may be selected.
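A hedged sketch for checking when queued jobs are expected to run, using the timing columns listed above:

SELECT MEMBER, JOB_TYPE, JOB_PRIORITY,
       QUEUE_ENTRY_TIME, EARLIEST_START_TIME
FROM TABLE(MON_GET_AUTO_MAINT_QUEUE()) AS T
WHERE JOB_STATUS = 'QUEUED'
ORDER BY EARLIEST_START_TIME, JOB_PRIORITY;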

Page 5

MON_GET_RTS_RQST table function – Retrieve information about real-time statistics requests

SELECT MEMBER,

QUEUE_POSITION,

REQUEST_STATUS,

REQUEST_TYPE,

OBJECT_TYPE,

VARCHAR(OBJECT_SCHEMA, 10) AS SCHEMA,

VARCHAR(OBJECT_NAME, 10) AS NAME

FROM TABLE(MON_GET_RTS_RQST()) AS T

ORDER BY MEMBER, QUEUE_POSITION ASC

MEMBER QUEUE_POSITION REQUEST_STATUS REQUEST_TYPE  OBJECT_TYPE SCHEMA NAME
------ -------------- -------------- ------------- ----------- ------ --------
     0              1 EXECUTING      COLLECT_STATS TABLE       TEST   EMPLOYEE
     0              2 QUEUED         COLLECT_STATS TABLE       TEST   T1
     0              3 QUEUED         WRITE_STATS   TABLE       TEST   T3
     0              - PENDING        WRITE_STATS   TABLE       TEST   BLAH
     1              - PENDING        COLLECT_STATS TABLE       TEST   DEPT
     1              - PENDING        WRITE_STATS   TABLE       TEST   SALES
     2              - PENDING        WRITE_STATS   TABLE       TEST   SALARY

What SYNC runstats scheduled?

Other Columns: REQUEST_TIME, QUEUE_ENTRY_TIME, EXECUTION_START_TIME

The MON_GET_RTS_RQST table function returns information about all real-time statistics requests that are pending on all members, and the set of requests that are currently being processed by the real-time statistics daemon (such as on the real-time statistics processing queue). The queue for processing real-time statistics requests exists only on a single member.

The MON_GET_RTS_RQST table function does not report any real-time statistics requests if real-time statistics collection is not enabled.

REQUEST_STATUS VARCHAR(10)

One of:

• PENDING - Request is waiting to be picked up by the real-time statistics daemon

• QUEUED - Request has been gathered by the real-time statistics daemon and is awaiting processing

• EXECUTING - Request is currently being processed by the real-time statistics daemon

REQUEST_TIME, QUEUE_ENTRY_TIME, EXECUTION_START_TIME are also columns that may be returned in the select list.
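A quick way to see how much real-time statistics work is backed up per member, sketched from the columns above:

SELECT MEMBER, REQUEST_STATUS, COUNT(*) AS REQUESTS
FROM TABLE(MON_GET_RTS_RQST()) AS T
GROUP BY MEMBER, REQUEST_STATUS
ORDER BY MEMBER;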

Page 6

DB2_HISTORY_FILTER

• Operating system: All

• Specifies operations that will not modify the recovery history file.

• Use registry variable to reduce potential contention on history file by filtering out operations.

• Specify using a comma-separated list. Possible values are:

• G: Reorg operations

• L: Load operations

• Q: Quiesce operations

• T: Alter table space operations

• U: Unload operations

• NULL (default)

• db2set DB2_HISTORY_FILTER=T,L
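A minimal sketch of setting and verifying the variable from the command line (registry changes take effect at the next instance restart):

db2set DB2_HISTORY_FILTER=T,L
db2set DB2_HISTORY_FILTER
db2stop
db2start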

Page 7

[Slide diagram: ADMIN_MOVE_TABLE processing phases. 1 INIT PHASE: create triggers, target and staging tables. 2 COPY PHASE: rows copied from the SOURCE table to the TARGET table while the online workload continues to INSERT, UPDATE and DELETE; keys of rows changed by the online workload are captured via triggers into the SYSTOOLS.ADMIN_MOVE_TABLE staging table (tabschema, tabname, key, value). 3 REPLAY PHASE: rows with keys present in the staging table are re-copied from the source table. 4 SWAP PHASE: rename Target -> Source.]

REVIEW: ADMIN_MOVE_TABLE: Processing phases

The ADMIN_MOVE_TABLE stored procedure uses a multi-phase approach to moving a table. The procedure can be requested to automatically process all phases based on a single call or an administrator can use a set of calls to control when each phase begins.

The basic phases used are:

•The INIT phase, performs setup work like creating a staging table to record any changes to the source table that are made during the move. It can also create a target table for the move. The progress of the procedure is reflected in the SYSTOOLS.ADMIN_MOVE_TABLE control table. A set of triggers are created to capture the changes in the source table and store information in the staging table.

•The COPY phase, copies all of the data from the source table to the target table.

•The REPLAY phase updates the target table with changed data from the source table, based on the staging table contents.

•The SWAP phase finalizes the move by renaming the target table to replace the source table name. This phase does require exclusive control of the source table.
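A hedged sketch of driving the phases individually, using the same 11-parameter signature as the MOVE example on the next page (MEL.SALES is a hypothetical table; for the phased calls only the schema, table name, and operation arguments matter here):

CALL SYSPROC.ADMIN_MOVE_TABLE('MEL','SALES',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,'INIT');
CALL SYSPROC.ADMIN_MOVE_TABLE('MEL','SALES',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,'COPY');
CALL SYSPROC.ADMIN_MOVE_TABLE('MEL','SALES',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,'REPLAY');
CALL SYSPROC.ADMIN_MOVE_TABLE('MEL','SALES',NULL,NULL,NULL,NULL,NULL,NULL,NULL,NULL,'SWAP');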

Page 8

ADMIN_MOVE_TABLE Improvements

• DB2 10.1 Fix Pack 2

• Referential Integrity Restriction on base table removed

• ADMIN_MOVE_TABLE improved performance

• Prior to DB2 10.1 Fix Pack 2:

• Foreign keys (referential constraints) were not supported, either parent or child.

• To move a table with foreign keys, capture the foreign keys using the db2look command, then drop the foreign keys, perform the move operation, and recreate the keys.

• SYNTAX:

call SYSPROC.ADMIN_MOVE_TABLE( 'MDC', 'HIST1', 'MDCTSP2', 'MDCTSPI', 'MDCTSP2', NULL, NULL, NULL, NULL, 'KEEP,REORG', 'MOVE' )

DB2 10.1 Fix Pack 2 removes the Referential Integrity restriction on the base table.

Prior to DB2 10.1 Fix Pack 2, foreign keys (referential constraints) were not supported, either on the parent or child table. To move a table with foreign keys, you could capture the foreign keys using the db2look command, then drop the foreign keys, perform the move operation, and then recreate the keys.
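For that pre-Fix Pack 2 workaround, the DDL including the foreign keys can be captured before dropping them; a sketch (the database name SAMPLE and output file are illustrative):

db2look -d SAMPLE -e -z MDC -t HIST1 -o hist1_ddl.sql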

Page 9

[Slide diagram: HADR Primary and Standby servers. On the Primary, the DB2 engine's log writer fills the log buffer and log data flows over TCP/IP to the Standby; on the Standby, the log reader feeds the replay master, shredder, redo master and redo workers, which replay changes against the tables and indexes. Both the standby log buffer and the primary TCP/IP buffer are shown FILLED.]

1. On Standby, the log entry for a large offline REORG must be processed completely.

2. On Standby, transactions are delayed and the standby log buffer becomes full. A larger log buffer area may help to avoid transaction delays on Primary.

3. If TCP/IP buffers on the Primary system are filled, transactions now must be kept in DB2 buffers until normal flow of logged data to Standby can be resumed. Commit processing on Primary is delayed.

Back pressure causing transaction delays on HADR Primary?

DB2_HADR_BUF_SIZE = multiple of primary log buffer. Default = 2X.

The example shows a potential performance problem that could occur by running an offline REORG for a large table on the Primary database. The logged data for the offline REORG is very small compared to the work necessary to complete the REORG. When that log entry arrives at the Standby and is replayed, the entire REORG operation is performed immediately. While the REORG is being performed, additional changes from the Primary arrive and begin to fill the log buffer. If the log buffer is completely full, the Standby will not be able to receive more changes. This could cause log data to fill the TCP/IP buffers on the Primary which will force the log data to be kept in DB2 buffers. Regardless of the mode, asynchronous, near-synchronous, or synchronous, the commit processing on the Primary will be stalled at this point.

In all modes except SUPERASYNC, if the log buffer memory on the standby becomes filled, there can be some delays on commit processing on the Primary database. These delays can cause the primary and standby database to drop out of peer state. In SUPERASYNC mode the Standby is never considered to reach a peer state.

Super Asynchronous mode can be used to reduce impact when replay is slow and the log buffer gets filled. In other HADR modes, a full log buffer can cause transactions to be blocked.

The DB2 Registry variable, DB2_HADR_BUF_SIZE, can be used to request a specific log buffer size on the Standby database. This variable specifies the standby log receiving buffer size in unit of log pages. If not set, DB2 will use two times the primary logbufsz for the standby receiving buffer size. This variable should be set in the standby instance. It is ignored by the primary database. If HADR synchronization mode is ASYNC, during peer state, a slow standby may cause the send operation on the primary to stall and therefore block transaction processing on the primary. If a primary/standby power mismatch is only temporary, such peaks in primary transaction load can be absorbed (buffered) by a larger log-receiving buffer on the standby.

Starting in DB2 Version 9.5 Fix Pack 1, when the DB2 registry variable DB2_HADR_PEER_WAIT_LIMIT is set, the HADR primary database will break out of the peer state if logging on the primary database has been blocked for the specified number of seconds because of log replication to the standby. When this limit is reached, the Primary database will break the connection to the Standby database. If the peer window is disabled, the Primary will enter disconnected state and logging resumes. If the peer window is enabled, the primary database will enter disconnected peer state, in which logging continues to be blocked. The Primary database leaves disconnected peer state upon re-connection or peer window expiration. Logging resumes once the Primary database leaves disconnected peer state.
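A sketch of the two registry settings discussed above (values illustrative; DB2_HADR_BUF_SIZE is specified in log pages and read by the standby instance, DB2_HADR_PEER_WAIT_LIMIT in seconds on the primary instance):

db2set DB2_HADR_BUF_SIZE=4096
db2set DB2_HADR_PEER_WAIT_LIMIT=120

The first is set on the standby, the second on the primary; both take effect at the next instance restart.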

Page 10

How can you relieve HADR built-up back pressure?

• TIP: Implement Log Spooling on Standby database (10.1)

• Helps avoid back-pressure issues on primary database caused by sudden spikes in logging activity on standby

• HADR_SPOOL_LIMIT - determines maximum amount of log data allowed to be spooled to disk on HADR standby

• Default 0, log spooling disabled; if set to -1, no defined limit

• When set to number of 4K pages, sets limit on gap between replay processing and latest log changes

• Log records will use active log space on Standby
• Log files automatically deleted when no longer required

• Enabling log spooling does not compromise HADR protection of transactions

• Data from primary is still replicated in log form to standby using specified sync mode

• Could lead to longer takeover time. Standby cannot assume role of primary until replay of spooled logs finishes

The hadr_spool_limit configuration parameter enables log spooling on the HADR standby database. Log spooling allows transactions on the HADR primary to make progress without having to wait for the log replay on HADR standby. Log data that is sent by the primary is then written, or spooled, to disk on the standby if it falls behind in log replay. The standby can later on read the log data from disk. This allows the system to better tolerate either a spike in transaction volume on the primary, or a slow down of log replay (due to the replay of particular type of log records) on the standby.

When making use of log spooling, ensure that adequate disk space is provided to the active log path of the standby database. There must be enough disk space to hold the active logs, which is determined by the logprimary, logsecond, logfilsiz, and hadr_spool_limit configuration parameters.

The default value of 0 means no spooling. This means that the standby can be behind the primary up to the size of the log receive buffer. When the buffer is full, it is possible that new transactions on the primary will be blocked because the primary cannot send any more log data to the standby system.
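A minimal sketch of enabling spooling on the standby database (the database name TESTDB and the 5000-page value are illustrative; hadr_spool_limit is specified in 4K pages):

UPDATE DB CFG FOR testdb USING HADR_SPOOL_LIMIT 5000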

Page 11

For Reference Only: Student Notes

A value of -1 means unlimited spooling (as much as supported by the disk space available). If you use a high value for hadr_spool_limit, consider that a large gap between the log position of the primary and the log replay position on the standby might lead to a longer takeover time, because the standby cannot assume the role of the new primary until the replay of the spooled logs finishes.

Use of log spooling does not compromise the HADR protection provided by the HADR feature. Data from the primary is still replicated in log form to the standby using the specified sync mode; it just takes time to apply (through log replay) the data to the table spaces.

Page 12

HADR status report – how much pressure?

Standby Buffer Usage

PRIMARY_LOG_FILE,PAGE,POS = S0000060.LOG, 975, 493196000

STANDBY_LOG_FILE,PAGE,POS = S0000060.LOG, 975, 493196000

HADR_LOG_GAP(bytes) = 113966

STANDBY_REPLAY_LOG_FILE,PAGE,POS = S0000060.LOG, 729, 492192678

STANDBY_RECV_REPLAY_GAP(bytes) = 900600

PRIMARY_LOG_TIME = 04/26/2012 11:06:06.000000 (1335452766)

STANDBY_LOG_TIME = 04/26/2012 11:06:06.000000 (1335452766)

STANDBY_REPLAY_LOG_TIME = 04/26/2012 11:06:06.000000 (1335452766)

STANDBY_RECV_BUF_SIZE(pages) = 200

STANDBY_RECV_BUF_PERCENT = 23

STANDBY_SPOOL_LIMIT(pages) = 5000
PEER_WINDOW(seconds) = 60

PEER_WINDOW_END = 04/26/2012 11:07:14.000000 (1335452834)

READS_ON_STANDBY_ENABLED = Y

STANDBY_REPLAY_ONLY_WINDOW_ACTIVE = N

Spool Limit

db2pd -db testdb -hadr

If 100, indicates receive buffer full. If log spooling, standby can continue to receive logs.

If value of gap reaches combined value of standby’s receive buffer size and standby’s spool limit, standby stops receiving logs and blocks primary if in peer state.

There are some elements that describe the HADR related wait times.

•LOG_HADR_WAIT_CUR - The length of time, in seconds, that the logger has been waiting on an HADR log shipping request. A value of 0 is returned if the logger is not waiting. When the wait time reaches the value that is returned in the PEER_WAIT_LIMIT field, HADR breaks out of peer state to unblock the primary database.

•LOG_HADR_WAIT_RECENT_AVG - The average time, in seconds, for each log flush.

•LOG_HADR_WAIT_ACCUMULATED - The accumulated time, in seconds, that the logger has spent waiting for HADR to ship logs.

•LOG_HADR_WAITS_COUNT - The total count of HADR wait events in the logger. The count is incremented every time the logger initiates a wait on HADR log shipping, even if the wait returns immediately. As a result, this count is effectively the number of log flushes while the databases are in peer state.

•STANDBY_RECV_BUF_PERCENT - shows the percentage of standby log receive buffer that is currently being used. Even if this value is 100, indicating that the receive buffer is full, the standby can continue to receive logs if you enabled log spooling.

•STANDBY_SPOOL_LIMIT - should match the hadr_spool_limit configuration value for the standby database.

•STANDBY_RECV_REPLAY_GAP - average, in bytes, of the gap between the standby log receive position and the standby log replay position. If the value of this gap reaches the combined value of the standby’s receive buffer size and the standby’s spool limit, the standby stops receiving logs and blocks the primary if it is in peer state.
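The same fields can be pulled in SQL with the 10.1 MON_GET_HADR table function; a hedged sketch (run from the connected database, column names assumed to match the db2pd fields above):

SELECT HADR_STATE,
       STANDBY_RECV_BUF_PERCENT,
       STANDBY_SPOOL_LIMIT,
       STANDBY_RECV_REPLAY_GAP
FROM TABLE (MON_GET_HADR(NULL));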

Page 13

Review DB2 Compression feature – prior to DB2 10.1

• Data row compression was introduced in DB2 9.1
• COMPRESS option for CREATE and ALTER TABLE is used to specify compression

• DB2 9.1 Classic Row Compression uses a static dictionary:

• Dictionary stored in data object, about 100K in size

• Compression Dictionary needs to be built before a row can be compressed

• Dictionary can be built or rebuilt using REORG TABLE offline, which also compresses existing data

• DB2 9.5
• Automatic Dictionary Creation: dictionary automatically builds when compressed table reaches threshold size (about 2 MB). Applies to SQL INSERT, IMPORT and LOAD.

• XML inline data compressed. Queue Replication supported

• DB2 9.7
• Compression for Indexes, Temporary data, XDA XML data, Inline LOBs, and SQL Replication.

• Index compression applies several space saving techniques, no dictionary

• XML compression uses a second static dictionary based on XML data

• Compression intended to reduce:

• Disk storage requirements, I/Os for scanning tables, buffer pool memory storage

Management of page-level dictionaries is fully automatic

•Creation happens when page is almost full

•Recreation may be triggered when page becomes full again

•Recreation is only triggered if actual compression ratio is significantly less than projected compression ratio (based on last dictionary build)

•Deletion happens if record is placed in a committed empty page

Database manager can decide to skip (re-)building page-level dictionaries

•Decision is based on runtime stats (generated savings)

•For dictionary recreation, age of existing dictionary is also considered

Goal: No noticeable CPU overhead if adaptive compression generates little savings

Page 14

Review: DB2 10.1 Adaptive Compression - Page-Level Dictionaries

• Rows are inserted into a page and compressed with existing static dictionary

• When page is almost full, page gets compressed

1. Detect common recurring patterns in original records & build dictionary structure

2. Build compressed page by compressing all existing records

3. Insert page compression dictionary (special record)

4. Insert more compressed records in additional free space

[Slide diagram: an original page next to the compressed page, which contains the page compression dictionary as a special record.]

CREATE/ALTER TABLE … COMPRESS NO | COMPRESS YES ADAPTIVE | COMPRESS YES STATIC

Compression is applied for each row individually. Combination of two compression algorithms. Both algorithms detect different kinds of patterns: Globally recurring byte sequences and locally recurring byte sequences. Compression via table-level dictionary is applied first. When records are placed in a page, compression via page-level dictionary is applied. Rows are compressed during DML operations and admin tasks such as: INSERT, UPDATE, IMPORT, LOAD, REORG, and REDISTRIBUTE.

Management of page-level dictionaries is fully automatic. Creation happens when page is almost full. Recreation may be triggered when page becomes full again. Recreation is only triggered if actual compression ratio is significantly less than projected compression ratio (based on last dictionary build). Deletion happens if record is placed in a committed empty page.

Database manager can decide to skip (re-)building page-level dictionaries. Decision is based on runtime stats (generated savings). For dictionary recreation, age of existing dictionary is also considered.

Goal: No noticeable CPU overhead if adaptive compression generates little savings
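A minimal sketch of turning adaptive compression on for one table and confirming the mode in the catalog (MEL.SALES is a hypothetical table; existing rows are not converted until the table is reorganized, as the following pages discuss):

ALTER TABLE mel.sales COMPRESS YES ADAPTIVE;

SELECT ROWCOMPMODE FROM SYSCAT.TABLES
WHERE TABSCHEMA = 'MEL' AND TABNAME = 'SALES';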

Page 15

Adaptive Compression Performance Considerations (1)

• Data Drift – phenomenon of new data inserted and updated without compression because static compression dictionary no longer fully represents subsequent recurring patterns

• Option 1: REORG TABLE … RESETDICTIONARY
• Requires the table to be offline and at least two times the space

• A static dictionary has a maximum size; once it is completely filled, even a rebuilt dictionary may not be able to hold all patterns.

• Option 2: CREATE/ALTER TABLE … COMPRESS YES ADAPTIVE
• Page-level dictionaries automatically and transparently rebuilt as data page fills up; reclaimed page space used for future inserts

• Fewer physical I/O operations required for reading data, because stored on fewer pages.

• Same buffer pool memory holds more data.

• Fewer I/O operations leads to decreased processor usage.

Constantly evolving new data values inserted into a table will not be defined in the static compression dictionary, so recurring patterns for those data values will not be compressed. Data drift is the term used to describe the phenomenon of new data being inserted and updated without compression because the compression dictionary no longer fully represents subsequent recurring patterns. Data drift causes the compression rate for classic compression to degrade slightly over time.

The REORG TABLE … RESETDICTIONARY command can be used to rebuild the static table-level compression dictionary. However, the table reorg requires the table to be offline.

Adaptive compression, in addition to static compression, creates separate compression dictionaries at the page level. The page level compression dictionaries are dynamic and automatically and transparently rebuilt as a data page fills up. The reclaimed storage space frees up a significant amount of space on the page, so new rows can be inserted on the current page rather than starting a new page.

Dynamically maintained page level compression dictionaries are an excellent mechanism for addressing data drift.

Fewer physical I/O operations are required for reading the data in a compressed table, because the same number of rows is stored on fewer pages. The same buffer pool memory can hold more data. Fewer I/O operations also leads to a decrease in processor usage.
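A sketch of the offline rebuild for a drifted table (MEL.SALES is hypothetical; the RUNSTATS afterward refreshes PCTPAGESSAVED for the monitoring described later):

REORG TABLE mel.sales RESETDICTIONARY;
RUNSTATS ON TABLE mel.sales;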

Page 16

Adaptive Compression Performance Considerations (2)

• Adaptive compression assesses recurring patterns across an entire data row, rather than individual columns.

• PREFERRED PRACTICE: To maximize compression potential, arrange columns when creating the table so recurring patterns occur.

• For example: make columns like City, State, and PostalCode contiguous

• PREFERRED PRACTICE: Pre-sorting data by cluster column during the ETL process improves adaptive compression with more recurring patterns on same page.

Adaptive compression assesses recurring patterns across an entire row of data, rather than individual columns. To maximize compression potential, when creating a table, consider arranging its columns so that recurring patterns are likely to occur.

For example, if your table will have columns for city, state, and postal code, create the table so these columns are contiguous. This way sets up the table for recurring patterns better than city, telephone number, state, and postal code, for example.

Page 17

Adaptive Compression Performance Considerations (3)

• Identify ideal candidate tables for adaptive compression:
• tables with significant amounts of data

• tables with substantial anticipated growth

• PREFERRED PRACTICE: Identify the top space consumers by listing tables from biggest to smallest. Identify which tables are not compressed or only using static compression.

• SYSCAT.TABLES.ROWCOMPMODE = S (static), A (adaptive), blank (no compression)

• SYSPROC.ADMIN_GET_TAB_INFO.DATA_OBJECT_P_SIZE calculates base table storage size requirements in kilobytes

SELECT S.TABSCHEMA, S.TABNAME,

SUM (X.DATA_OBJECT_P_SIZE)/1024/1024 AS STORAGESIZE_GB,

S.ROWCOMPMODE

FROM TABLE (SYSPROC.ADMIN_GET_TAB_INFO ('PAYROLL', '')) X

JOIN SYSCAT.TABLES S ON S.TABSCHEMA = X.TABSCHEMA AND S.TABNAME = X.TABNAME

GROUP BY S.TABSCHEMA, S.TABNAME, S.ROWCOMPMODE

ORDER BY STORAGESIZE_GB DESC;

Before applying adaptive compression to existing tables, an important step is to first assess which tables are ideal candidates for compression: classic or adaptive. Ideal candidates for adaptive compression are tables that contain significant amounts of data or tables for which substantial growth is anticipated over time.

Use the SQL administration function ADMIN_GET_TAB_INFO to return an ordered list of table names and table data object sizes for a particular schema. Display the current compression settings and the row compression mode that are in use by joining the ROWCOMPMODE column from the SYSCAT.TABLES view.

The above query identifies the top-space consumers by listing the tables from biggest to smallest. This helps the DBA identify which tables to start working with.

Page 18

SYSPROC.ADMIN_GET_TAB_INFO.DATA_OBJECT_P_SIZE

• Data object physical size.

• Amount of disk space physically allocated for the table, reported in kilobytes.

• For MDC and ITC tables, this size includes the size of the block map object.

• The size returned takes into account full extents allocated for the table and includes the EMP extents for objects created in DMS table spaces.

• This size represents the physical size of the base table only. Space consumed by LOB data, Long Data, Indexes and XML objects is reported by other columns.

See definition above for Data_Object_P_Size.
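To size a complete table, the companion size columns can be added up; a hedged sketch assuming the INDEX/LOB/LONG/XML physical-size columns of ADMIN_GET_TAB_INFO:

SELECT VARCHAR(TABNAME, 20) AS TABNAME,
       (DATA_OBJECT_P_SIZE + INDEX_OBJECT_P_SIZE +
        LOB_OBJECT_P_SIZE + LONG_OBJECT_P_SIZE +
        XML_OBJECT_P_SIZE) / 1024 AS TOTAL_P_SIZE_MB
FROM TABLE (SYSPROC.ADMIN_GET_TAB_INFO('PAYROLL', '')) AS X;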

Page 19

Adaptive Compression Performance Considerations (4)

• PREFERRED PRACTICE: Estimate space savings
• SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO

• Based on assumption will run: REORG TABLE … RESETDICTIONARY

• Computes space savings estimate for all tables in a schema, for example PAYROLL

• TIP: Table name should be specified in second argument because runtime is long when querying entire schema.

SELECT TABSCHEMA, TABNAME, ROWCOMPMODE,

PCTPAGESSAVED_CURRENT, -- current compression savings

PCTPAGESSAVED_STATIC, -- estimated static compression savings

PCTPAGESSAVED_ADAPTIVE -- estimated adaptive compression savings

FROM TABLE (SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO ('PAYROLL', 'TAB1'));

With candidate tables identified, you can estimate space savings with the ADMIN_GET_TAB_COMPRESS_INFO administrative table function. The percentages returned are based on an assumption that the following command will be run:

REORG TABLE … RESETDICTIONARY

The SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO function computes the estimates for all tables in a particular schema if a table name is not specified in the second parameter. In practical uses, a table name should be specified because the run time is long when querying an entire schema.

Page 20

Adaptive Compression – Convert Existing Tables

• ALTER TABLE … COMPRESS YES ADAPTIVE

• Does not apply adaptive compression to existing table data for tables upgraded from 9.5/9.7

• New tables created in DB2 10.1: COMPRESS YES is COMPRESS YES ADAPTIVE

• REORG TABLE … RESETDICTIONARY

• TIP: Make sure you use a LARGE table space, since compression may make rows short. A REGULAR table space holds a maximum of 255 rows per page.

• SYSCAT.TABLESPACES.DATATYPE = A (regular), L (large)

• ALTER TABLESPACE … CONVERT TO LARGE

• Alternative: Use ADMIN_MOVE_TABLE procedure

• Keeps table online until SWAP phase

• By default, dictionary built before COPY operation using Bernoulli sampling from source table.

• Default sample size is 20MB.

• TIP: Amount of data sampled can be specified using DEEPCOMPRESSION_SAMPLE.

• If you specify 0, standard Automatic dictionary creation (ADC) is used

• Can specify how much data (in KB) sampled when creating compression dictionary

• If REORG specified, RESETDICTIONARY option causes dictionary to be built based on complete scan during the SWAP phase.

• Optimal dictionary created, but the reorg adds time to complete moving the table

• TIP: If you require an XML compression dictionary, then REORG … LONGLOBDATA is the only method.
• REORG_USE_TEMPSPACE - Can specify a temporary table space for REORG. If not specified, uses same table space as table being reorganized.

• REORG performed before exclusive lock acquired on source table, so would not impact access to source by applications.

Existing tables that are either uncompressed or compressed with static compression remain that way after upgrading to DB2 10. They can be converted to use adaptive compression with ALTER TABLE ... COMPRESS YES ADAPTIVE and REORG TABLE … RESETDICTIONARY.

By default, the target table will have the same compression option as the source table, so a source table with COMPRESS set to YES would cause the new target table to have COMPRESS set to YES. If the source table has compression set off and you want to implement data compression for the target table, you can do one of the following:

•You could manually create the target table with COMPRESS YES and invoke the procedure using the syntax that allows you to specify your target table by name.

•You could run the procedure in multi-step mode and use the ALTER TABLE statement to alter the target table before the COPY phase is started.

The ADMIN_MOVE_TABLE stored procedure moves the data in an active table into a new table object with the same name, while the data remains online and available for access. This procedure can also be used to get a more efficient compression dictionary built for a table with a very short loss of access to the table. The default sample size is 20MB of data. The sample size can be adjusted by running the ADMIN_MOVE_TABLE_UTIL procedure with a key of DEEPCOMPRESSION_SAMPLE. If the ADMIN_MOVE_TABLE_UTIL procedure is used to set the DEEPCOMPRESSION_SAMPLE field to 0, DB2 will use its standard Automatic dictionary creation (ADC) method to create the dictionary for the target table. If the REORG option is used, the RESETDICTIONARY option will cause a dictionary to be built based on a complete scan during the SWAP phase. This would generate an optimal dictionary for the target table but extend the time required to complete the move. The REORG is performed before the exclusive lock is acquired on the source table, so it would not impact access to the source by applications.

•REORG - This option sets up an extra offline REORG on the target table before performing the swap. If you require an optimal XML compression dictionary, then REORG is the only method. You may set the REORG option at any point up to and including the SWAP phase.

•REORG_USE_TEMPSPACE - If you call the REORG option in the table move, you can also specify a temporary table space for the USE clause of the REORG command. If a value is not specified here, the REORG command uses the same table space as the table being reorganized.

•DEEPCOMPRESSION_SAMPLE - If the source table has compression enabled, this field specifies how much data (in KB) is sampled when creating a dictionary for compression. A value of 0 means no sampling is done and the default of 20 MB is used.

LONGLOBDATA is not required even if the table contains long or LOB columns. The default is to avoid reorganizing these objects because it is time consuming and does not improve clustering. LONGLOBDATA is required when Long field and LOB data are to be reorganized. Running a reorganization with the LONGLOBDATA option on tables with XML columns will reclaim unused space and thereby reduce the size of the XML storage object. This parameter is required when converting existing LOB data into inlined LOB data.

Page 21

Adaptive Compression Performance Considerations (5)

• Monitor Compression Ratios
• Over time, table-level compression ratio might gradually decline

• TIP: Advantage of analyzing SYSCAT.TABLES.PCTPAGESSAVED after RUNSTATS is execution time may be less than ADMIN_GET_TAB_COMPRESS_INFO

• PREFERRED PRACTICE: Create Compression History Table

CREATE TABLE MEL.COMPRESSION_HISTORY (TABSCHEMA VARCHAR (128),TABNAME VARCHAR(128), PCTPAGESSAVED SMALLINT, M_DATE DATE);

• RUNSTATS ON TABLE schema.tablename

• INSERT INTO MEL.COMPRESSION_HISTORY
  SELECT TABSCHEMA, TABNAME, PCTPAGESSAVED, CURRENT DATE
  FROM SYSCAT.TABLES
  WHERE TABSCHEMA NOT LIKE 'SYS%';

• SELECT PCTPAGESSAVED, M_DATE
  FROM MEL.COMPRESSION_HISTORY
  WHERE TABSCHEMA = 'schema' AND TABNAME = 'tabname'
  ORDER BY M_DATE ASC;

• Analyze table's data drift and schedule table REORG … RESETDICTIONARY if needed.
• Rebuilds BOTH static table-level dictionary and page-level compression dictionaries

As existing table data is manipulated and new data is inserted into tables, page-level compression dictionaries are created and dynamically updated as needed. However, the existing table-level compression dictionary is static. Over time, the table-level compression ratio might decline.

Monitor the compression ratios of the table over time and choose one of the following methods to monitor compression ratios:

•Running SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO to estimate space savings. The advantage of this method is that the current compression savings can be determined with a single function call.

•Running RUNSTATS for a given table, then querying the PCTPAGESSAVED column of the SYSCAT.TABLES catalog view. The advantage of this method is that execution time might be less than the SYSPROC.ADMIN_GET_TAB_COMPRESS_INFO function.

When a table exhibits too much data drift, schedule a table reorganization and rebuild the table-level compression dictionary and page-level compression dictionaries.

Page 22

[Slide diagram (V10.1): layered view of Partition -> Table Space -> Storage Group -> Physical Disk under automatic storage. The partitioned table SALES spans quarters 2012Q1 back to 2006Q3 in table spaces 14 through 1. Table space 14 sits in stogrp_hot (storage path /sales/datahot, SSD RAID array); table spaces 13-11 in stogrp_warm (storage path /sales/datawarm, FC/SAS RAID array); table spaces 10-1 in stogrp_cold (storage path /sales/datacold, SATA RAID array).]

Review: Introducing Storage Groups

CREATE STOGROUP stogrp_hot ON '/sales_datahot'

With the introduction of storage groups, there is now an extra layer of abstraction between table spaces and disks. You can group table spaces sharing similar characteristics into the same group of storage paths. After you create storage groups that map to the different classes of storage in your database management system, you can assign automatic storage table spaces to those storage groups, based on which table spaces have hot or cold data.

You can dynamically reassign a table space to a different storage group as the data changes or your business direction changes.

SSD = Solid State Drive

FC = Fiber Channel

SAS = Serial Attached SCSI

SATA = Serial ATA

Page 23

Why should you care about Storage Groups?

• ALTER DATABASE statement has been deprecated and might be removed in a future release.

• CREATE STOGROUP or ALTER STOGROUP statements provide same functionality as the ALTER DATABASE statement and more.

• Databases created prior to DB2 10.1 had an implicit default storage group of storage paths specified at database creation.

• CREATE DB dbalias ON /SP1, /SP2 DBPATH ON /P1

• Only 1 group of storage paths is utilized by all AMS table spaces defined in database!

• In DB2 10.1 when create database, storage path(s) specified at database creation are associated with default storage group IBMSTOGROUP.

• TIP: If no storage paths specified, DBPATH is default storage path.

• TIP:

• ALTER DATABASE … ADD STORAGE ON /SP3, adds storage to default storage group.

• ALTER STORAGE GROUP … ADD <storage path> adds storage to specific storage group and table spaces associated with that storage group.

• TIP: Automatic storage table spaces required for storage group definitions.

Although you can create a database specifying the AUTOMATIC STORAGE NO clause, the AUTOMATIC STORAGE clause is deprecated and might be removed from a future release. Databases that are created specifying the AUTOMATIC STORAGE NO clause of the CREATE DATABASE command do not have storage groups associated with them.

The system managed spaces (SMS) table space type is now deprecated for permanent table spaces that are defined by the user.

You can still specify the SMS type for catalog table spaces and temporary table spaces. Automatic storage continues to use the SMS type for temporary table spaces. The recommended table space types for user table spaces are automatic storage or database managed spaces (DMS). In previous releases, SMS permanent table spaces were used because they were simple to create and manage. To create an SMS table space, you do not have to specify an initial size, but you must ensure that there is enough free disk space. The size and growth of the container files are managed at the operating system level. However, SMS table spaces do not perform as well as DMS table spaces.

With the introduction of automatic storage, management of DMS table spaces was simplified by providing a function that automatically resizes containers. IBM continues to invest and develop in automatic storage and DMS table spaces.

The ALTER DATABASE statement has been deprecated and might be removed in a future release. The CREATE STOGROUP or ALTER STOGROUP statements provide the same functionality as the ALTER DATABASE statement and more.

In DB2 Version 10.1, you can issue the ALTER STOGROUP statement to add or remove storage paths to any storage group. In addition, you can use this statement to change the definition and attributes of a storage group. Use the CREATE STOGROUP statement to create a new storage group and assign storage paths to it.

With the ALTER DATABASE statement, you can only add or remove storage paths to the default storage group for the database. You cannot indicate a specific storage group.

Page 24

Multi-Temperature Storage Performance Considerations (1)

• PREFERRED PRACTICE: Define performance values OVERHEAD and DEVICE READ RATE at storage group level.

• Eliminates storage parameter changes at table space level since table spaces move between various storage group temperatures.

• Default Overhead = 6.725 milliseconds. Default Device Read Rate = 100 megabytes per second.

• UPGRADE TIP: Alter tablespace OVERHEAD and TRANSFERRATE values to INHERIT.

• If TRANSFERRATE and OVERHEAD not specified for automatic storage table space,default is INHERIT.

CREATE STOGROUP stogrp_hot ON '/sales_datahot'
  OVERHEAD 0.75 DEVICE READ RATE 500;

CREATE STOGROUP stogrp_warm ON '/sales_datawarm'
  OVERHEAD 3.4 DEVICE READ RATE 112;

CREATE STOGROUP stogrp_cold ON '/sales_datacold'
  OVERHEAD 4.2 DEVICE READ RATE 100;

CREATE TABLESPACE current_mth USING STOGROUP stogrp_hot
  OVERHEAD INHERIT TRANSFERRATE INHERIT;

CREATE TABLESPACE previous_mth USING STOGROUP stogrp_warm
  OVERHEAD INHERIT TRANSFERRATE INHERIT;

CREATE TABLESPACE old_mth USING STOGROUP stogrp_cold
  OVERHEAD INHERIT TRANSFERRATE INHERIT;

If TRANSFERRATE and OVERHEAD are not specified for an automatic storage table space, the default is to INHERIT the value from the storage group it is using.

OVERHEAD number-of-milliseconds - Specifies the I/O controller usage and disk seek and latency time. This value is used to determine the cost of I/O during query optimization. The value of number-of-milliseconds is any numeric literal (integer, decimal, or floating point). If this value is not the same for all storage paths, set the value to a numeric literal which represents the average for all storage paths that belong to the storage group. If the OVERHEAD clause is not specified, the OVERHEAD will be set to 6.725 milliseconds.

DEVICE READ RATE number-megabytes-per-second- Specifies the device specification for the read transfer rate in megabytes per second. This value is used to determine the cost of I/O during query optimization. The value of number-megabytes-per-second is any numeric literal (integer, decimal, or floating point). If this value is not the same for all storage paths, set the value to a numeric literal which represents the average for all storage paths that belong to the storage group. If the DEVICE READ RATE clause is not specified, the DEVICE READ RATE will be set to the built-in default of 100 megabytes per second.
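If device characteristics change later, the values can be adjusted once at the storage group and inherited by its table spaces; a minimal sketch (numbers illustrative):

ALTER STOGROUP stogrp_hot OVERHEAD 0.8 DEVICE READ RATE 600;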

Page 25

Multi-Temperature Storage – How Do I Convert?

• PREFERRED PRACTICE: Converting DMS table space to Automatic Storage

ALTER TABLESPACE dms01 MANAGED BY AUTOMATIC STORAGE
USING STOGROUP stogrp01

• All new growth comes from Storage Group’s automatic storage paths.

• EASE OF USE – NO EXPLICIT REBALANCE REQUIRED! Implicit online rebalance from existing containers or storage group to the automatic storage paths associated with the new storage group.

• If USING STOGROUP clause not specified, extents rebalanced to default storage group.

• Default storage group IBMSTOGROUP associated with storage path(s) defined when create database.

• Use SYSCAT.STOGROUPS.DEFAULTSG = 'Y' to determine default storage group.

• Cannot drop current default storage group. Can drop IBMSTOGROUP storage group if not designated as default. If you drop IBMSTOGROUP, you can create it again with that name.

• Can also explicitly define default storage group with: CREATE/ALTER STOGROUP stogrpx SET AS DEFAULT

• TIP: With addition of storage groups within automatic storage:

• limit for storage paths is increased from 128 storage paths per database to 128 storage paths per storage group.

• 256 storage groups can be defined per database.

Automatic storage table spaces are required when assigning table spaces to use storage groups with CREATE/ALTER TABLESPACE … USING STOGROUP syntax.

You can convert your database-managed space (DMS) table spaces in a database to use Automatic Storage. Using Automatic Storage simplifies your storage management tasks.

The ALTER TABLESPACE MANAGED BY AUTOMATIC STORAGE statement enables automatic storage for a database managed (DMS) table space. Once automatic storage is enabled, no further container operations can be executed on the table space. The table space being converted cannot be using RAW (DEVICE) containers.

If the USING STOGROUP clause is not included when converting from a DMS table space to an automatic storage table space then the default storage group is specified.

Using the CREATE TABLESPACE statement or ALTER TABLESPACE statement, you can specify or change the storage group a table space uses. If a storage group is not specified when creating a table space, then the default storage group is used. A specific storage group can be selected with the USING STOGROUP option of CREATE TABLESPACE. The storage paths defined when the database is created are assigned to a default storage group named IBMSTOGROUP. This will be the default storage group for automatic storage table spaces when USING STOGROUP is not specified. The ALTER STOGROUP statement option SET AS DEFAULT can be used to change which storage group is the default for a database.
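A quick catalog check of the storage groups and which one is the default, sketched from the SYSCAT.STOGROUPS view referenced on the slide:

SELECT VARCHAR(SGNAME, 18) AS SGNAME, DEFAULTSG
FROM SYSCAT.STOGROUPS;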

Page 26

Multiple Storage Groups for Default Table Spaces

CREATE DATABASE tp1 AUTOMATIC STORAGE YES

ON /tp1_syscat

DBPATH on /tp1_dbp1

CATALOG TABLESPACE MANAGED BY AUTOMATIC STORAGE

TEMPORARY TABLESPACE MANAGED BY AUTOMATIC STORAGE

USER TABLESPACE MANAGED BY AUTOMATIC STORAGE;

CREATE STOGROUP DFTUSERSG
  ON '/tp1_dft1', '/tp1_dft2', '/tp1_dft3'
  SET AS DEFAULT;

ALTER TABLESPACE USERSPACE1 USING STOGROUP DFTUSERSG;

CREATE STOGROUP SYSTEMSG
  ON '/tp1_tmp1', '/tp1_tmp2', '/tp1_tmp3';

ALTER TABLESPACE TEMPSPACE1 USING STOGROUP SYSTEMSG;

RENAME STOGROUP IBMSTOGROUP TO SYSCATSG;


Page 27

Multi-Temperature Storage Performance Considerations (2)

• Data Priority Identification

• TIP: DB2 10 WLM with multi-temperature storage allows you to assign workload priority based on the data priority. Previous DB2 versions allowed WLM to assign workload priority based on organizational group, role or application.

• Use DATA TAG n, where n is a priority value of 1 to 9. NONE is the default; 0 means no data tag is specified.

• Use as part of WLM configuration in a work class definition, or reference within a threshold definition with the CREATE WORK CLASS SET and CREATE THRESHOLD statements.

DATA TAG integer-constant or DATA TAG NONE - Specifies a tag for the data for table spaces using this storage group unless explicitly overridden by the table space definition. This value can be used as part of a WLM configuration in a work class definition or referenced within a threshold definition. For more information, see the CREATE WORK CLASS SET and CREATE THRESHOLD statements. Valid values for integer-constant are integers from 1 to 9. If NONE is specified, there is no data tag.

Data priority identification allows data tags to be defined at both the storage group and table space levels. This data tag is an integer value in the range of 1 to 9. It is used by DB2 Workload Manager (WLM) to assign workload priority based on the data tag. Unlike previous workload definitions in earlier versions of DB2, which determined priority based on an organization group or application, DB2 10 WLM with multi-temperature storage now allows workload priority based on the data priority. In this way queries can be prioritized through the data accessed, rather than the group or application running the query.
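A minimal sketch of tagging the storage tiers so WLM can prioritize by data priority (tag values illustrative; a table space can override the inherited tag with its own DATA TAG clause):

ALTER STOGROUP stogrp_hot DATA TAG 1;
ALTER STOGROUP stogrp_cold DATA TAG 9;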

Page 28

Great Idea! Range Partition Tables & Storage Groups (1)

• PREFERRED PRACTICE: Put each range in a separate table space.

• Allows incremental backup to track modifications at table space level and page level for the current range with frequent changes. TRACKMOD for older ranges with no changes will show that the table space may be skipped during backup.

• Allows individual ranges to be assigned to different multi-temperature storage groups.

• Allows isolation of a specific table space with a specific buffer pool.

CREATE STOGROUP stogrp_hot ON '/sales_datahot' OVERHEAD 0.75 DEVICE READ RATE 500;

CREATE STOGROUP stogrp_warm ON '/sales_datawarm' OVERHEAD 3.4 DEVICE READ RATE 112;

CREATE STOGROUP stogrp_cold ON '/sales_datacold' OVERHEAD 4.2 DEVICE READ RATE 100;

CREATE TABLESPACE tbspQtr3 USING STOGROUP stogrp_hot OVERHEAD INHERIT TRANSFERRATE INHERIT;

CREATE TABLESPACE tbspQtr2 USING STOGROUP stogrp_warm OVERHEAD INHERIT TRANSFERRATE INHERIT;

CREATE TABLESPACE tbspQtr1 USING STOGROUP stogrp_cold OVERHEAD INHERIT TRANSFERRATE INHERIT;

CREATE TABLE mel.sales ( … ) PARTITION BY RANGE (sales_date)
  (PART "2013Q1" STARTING FROM ('01/01/2013') IN tbspQtr1,
   PART "2013Q2" STARTING FROM ('04/01/2013') IN tbspQtr2,
   PART "2013Q3" STARTING FROM ('07/01/2013') IN tbspQtr3 ENDING AT ('09/30/2013'))
  COMPRESS YES ADAPTIVE;

CREATE INDEX mel.ix1 ON mel.sales (sales_date) PARTITIONED;

LOAD FROM … INTO mel.sales … (initial population of table data with load into mel.sales)

The first step to the range partition creation process is the table space definition with assignment to three different storage groups. Because movement between storage group temperatures is at the table space level, a table space will be defined for each table partition within the table. This allows the partitions for the table to be moved easily between storage groups as needed.
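Because each range has its own table space, aging a quarter from one temperature to the next is a single statement; a sketch using the objects above (the move triggers an implicit online rebalance of the extents):

ALTER TABLESPACE tbspQtr3 USING STOGROUP stogrp_warm;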

Page 29

Table Space Modification Status

• db2pd -tablespaces trackmodstate

• TrackmodState column displays status of table space with respect to last backup

• MON_GET_TABLESPACE

• new monitor element named tbsp_trackmod_state

1. CLEAN - No modifications occurred in tablespace since previous backup.

2. DIRTY - Table space contains data that needs to be picked up by the next backup.

3. ININCREMENTAL - Table space contains modifications that were copied into an incremental backup.

4. READFULL - Latest table space modification state change was caused by a dirty table space that is being read by a full backup that might not have completed successfully, or is currently in progress.

5. READINCREMENTAL - Latest table space modification state change was caused by a dirty table space that is being read by an incremental backup that might not have completed successfully, or is currently in progress.

6. N/A or UNAVAILABLE - Trackmod configuration parameter is set to No.

DB2 9.7 FP5

The db2pd -tablespaces command and the MON_GET_TABLESPACE table function provide information about the modification status of table spaces. You can use this information to make better decisions about how you perform backups.

You can now specify the trackmodstate option for the db2pd -tablespaces command to display the status of the table space with respect to the last backup. In the output, a new TrackmodState column is displayed, which can have one of six values for each table space: Clean, Dirty, InIncremental, ReadFull, ReadIncremental, and n/a.

The MON_GET_TABLESPACE table function is updated with a new monitor element named tbsp_trackmod_state. It reports the status of the table space by displaying one of the six values mentioned previously, except that n/a is replaced by UNAVAILABLE for the new monitor element.

To receive information on the modification status of table spaces, you must set the trackmod configuration parameter to Yes.
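For instance, a minimal sketch (the database name tp1 is hypothetical): enable tracking, then check the per-table-space state with the 10.1 monitor function:

-- Enable table space modification tracking
UPDATE DB CFG FOR tp1 USING TRACKMOD YES;

-- Query the modification state for all table spaces
SELECT VARCHAR(TBSP_NAME,20) AS TBSP_NAME, TBSP_TRACKMOD_STATE
FROM TABLE(MON_GET_TABLESPACE('',-1)) AS t;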

You can use this monitor element to determine the modification status of a table space. The status of a table space can be in one of the following states:

1. CLEAN - No modifications occurred in the tablespace since the previous backup. If an incremental or delta backup is executed at this time, no data pages from this tablespace would be backed up.

2. DIRTY - Table space contains data that needs to be picked up by the next backup.

3. ININCREMENTAL - Table space contains modifications that were copied into an incremental backup. The table space is DIRTY relative to a full backup, so a future incremental backup needs to include some pages from this pool; it is CLEAN relative to a delta backup, so a future delta backup does not need to include any pages from this pool.

4. READFULL - The latest table space modification state change was caused by a dirty table space that is being read by a full backup that might not have completed successfully, or is currently in progress.

5. READINCREMENTAL - The latest table space modification state change was caused by a dirty table space that is being read by an incremental backup that might not have completed successfully, or is currently in progress.

6. UNAVAILABLE - The trackmod configuration parameter is set to No. Therefore, no table space modification status information is available.


Great Idea! Range Partition Tables & Storage Groups (2)

• DDL to add new quarter partition to range partition table

• PREFERRED PRACTICE: When attaching a new partition, create new table within the desired target storage group through table space definition.

• Prevents a post-ATTACH rebalance operation that ALTER TABLESPACE … USING STOGROUP … would otherwise trigger!

CREATE TABLESPACE tbspQtr4 USING STOGROUP stogrp_hot OVERHEAD INHERIT TRANSFERRATE INHERIT;

CREATE TABLE mel.newqtr ( …) IN tbspQtr4 COMPRESS YES ADAPTIVE;

CREATE INDEX mel.newqtr_idx ON mel.newqtr (sales_date);

LOAD FROM /fil OF IXF REPLACE INTO mel.newqtr;

• TIP: Complete data integrity checking, including range validation and other constraints checking, through application logic independent of DB2.

• TIP: Static compression dictionary is built at the partition level. If compression is enabled on the partitioned table, make sure it is enabled on the standalone table before loading data. Load's automatic dictionary creation (ADC) builds the compression dictionary.

• TIP: Specify REQUIRE MATCHING INDEXES on ATTACH:
ALTER TABLE mel.sales ATTACH PARTITION 2013Q4 STARTING FROM ('10/01/2013') ENDING AT ('12/31/2013') FROM TABLE mel.newqtr REQUIRE MATCHING INDEXES;

• TIP: SET INTEGRITY FOR mel.sales ALL IMMEDIATE UNCHECKED;

• Use ADMIN_TAB_COMPRESS_INFO to verify if REORG … RESET DICTIONARY yields better compression.

When a new range of data is available, a new standalone table is created, loaded, validated and then attached to the range partition table as a new range. When preparing for the ATTACH operation, the table containing the new table partition must have the same column and data type information as the range partitioned table.

When adding a partition to an existing partitioned table, create the new table within the desired target storage group through its table space definition prior to the ATTACH operation. This prevents a post-ATTACH rebalance operation.

With DB2 10.1, an IX-level table lock is used for the ALTER TABLE ATTACH statement rather than the super-exclusive Z lock used with DB2 9.7. If applications are using either static SQL or dynamic SQL in repeatable read isolation, the ALTER TABLE ATTACH will need to wait for the application statement to complete.

BUILD MISSING INDEXES - Specifies that if the source table does not have indexes that correspond to the partitioned indexes on the target table, a SET INTEGRITY operation builds partitioned indexes on the new data partition to correspond to the partitioned indexes on the existing data partitions. Indexes on the source table that do not match the partitioned indexes on the target table are dropped during attach processing.

REQUIRE MATCHING INDEXES - Specifies that the source table must have indexes to match the partitioned indexes on the target table; otherwise, an error is returned (SQLSTATE 428GE) and information is written to the administration log about the indexes that do not match. If the REQUIRE MATCHING INDEXES clause is not specified and the indexes on the source table do not match all the partitioned indexes on the target table, the following behavior occurs:

• For indexes on the target table that do not have a match on the source table and are either unique indexes or XML indexes that are defined with REJECT INVALID VALUES, the ATTACH operation fails (SQLSTATE 428GE).

• For all other indexes on the target table that do not have a match on the source table, the index object on the source table is marked invalid during the attach operation. If the source table does not have any indexes, an empty index object is created and marked as invalid. The ATTACH operation will succeed, but the index object on the new data partition is marked as invalid. Typically, SET INTEGRITY is the next operation to run against the data partition. SET INTEGRITY will force a rebuild, if required, of the index object on data partitions that were recently attached. The index rebuild can increase the time required to bring the new data online. Information is written to the administration log about the indexes that do not match.


Range Partition Tables Performance Considerations

PREFERRED PRACTICE: Make all indexes Partitioned (default) on Range Partition Table.

• Exclusive lock only on new partition when ALTER TABLE ATTACH/ADD. (V10.1)

• Exclusive lock only on old partition when ALTER TABLE DETACH. (V9.7.1)

• Reorg table on specific partitions – not entire table. (V9.7.1)
REORG TABLE …. ON DATA PARTITION ….

• SET INTEGRITY …. ALL IMMEDIATE UNCHECKED so applications can access new data faster. (V10.1)

• Performance, Performance, Performance!

With DB2 10.1, the IMMEDIATE UNCHECKED option of SET INTEGRITY can be used to bypass range and integrity checks, so that the new data added by the ALTER TABLE ATTACH statement becomes available faster.

If data integrity checking, including range validation and other constraints checking, can be done through application logic that is independent of the data server before an attach operation, newly attached data can be made available for use much sooner. You can optimize the data roll-in process by using the SET INTEGRITY…ALL IMMEDIATE UNCHECKED statement to skip range and constraints violation checking. In this case, the table is brought out of SET INTEGRITY pending state, and the new data is available for applications to use immediately, as long as there are no nonpartitioned user indexes on the target table. If there are nonpartitioned indexes (except XML columns path indexes) on the table to maintain after an attach operation, the SET INTEGRITY…ALL IMMEDIATE UNCHECKED statement behaves as though it were a SET INTEGRITY…IMMEDIATE CHECKED statement.
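Putting the roll-in steps together, a sketch using the object names from the earlier slides; it assumes range and constraint validation already happened in the application:

-- Attach the staged quarter (IX table lock only as of 10.1)
ALTER TABLE mel.sales ATTACH PARTITION 2013Q4
  STARTING FROM ('10/01/2013') ENDING AT ('12/31/2013')
  FROM TABLE mel.newqtr REQUIRE MATCHING INDEXES;
COMMIT;

-- Integrity was verified outside DB2, so skip range and constraint checks
SET INTEGRITY FOR mel.sales ALL IMMEDIATE UNCHECKED;
COMMIT;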


Great Idea! Range Partition Tables & Storage Groups (3)

• View all storage groups and their paths defined in database:

• SELECT STORAGE_GROUP_NAME, DB_STORAGE_PATH

FROM TABLE (ADMIN_GET_STORAGE_PATHS('',-1)) AS S

• db2pd -db dbalias -storagegroups

• TIP: Query table spaces and associated storage groups prior to rebalancing:

• SELECT TBSPACE, SGNAME FROM SYSCAT.TABLESPACES

• Move partitions to new storage groups:

• ALTER TABLESPACE tbspQtr3 USING STOGROUP stogrp_warm;

• ALTER TABLESPACE tbspQtr2 USING STOGROUP stogrp_cold;

• Implicit online rebalance from existing storage group to new storage group.

Multi-temperature data storage can provide fast access to data. You can manage your IT budget more efficiently by configuring your database so that only frequently accessed data (hot data) is stored on expensive fast storage, such as solid-state drives (SSD), and infrequently accessed data (cold data) is stored on slower, less-expensive storage, such as low-rpm hard disk drives.

As hot data cools down and is accessed less frequently, you can dynamically move it to the slower storage, thereby extending the life of your less expensive storage assets that are used for storing warm and cold data.

In database systems, there is a strong tendency for a relatively small proportion of data to be hot data and the majority of the data to be cold data. Current data is often considered to be hot data, but it typically becomes cold as it ages. These sets of multi-temperature data pose considerable challenges to DBAs who want to optimize the use of fast storage by trying not to store cold data there. As a data warehouse consumes more storage, optimizing the use of fast storage becomes increasingly important for managing storage costs. With your hot data stored on your fastest storage assets, multi-temperature data storage can help to reduce the time it takes to retrieve your most frequently accessed data, while reducing the cost of storing infrequently accessed warm and cold data.


Great Idea! Range Partition Tables & Storage Groups (4)

• TIP: Monitor storage group rebalance operation

SELECT TBSP_NAME, REBALANCER_MODE, REBALANCER_STATUS, REBALANCER_EXTENTS_REMAINING, REBALANCER_EXTENTS_PROCESSED, REBALANCER_START_TIME
FROM TABLE (MON_GET_REBALANCE_STATUS('', -1))

• REBALANCER_MODE

• FWD_REBAL – occurs when new containers are added or the size of existing containers increases. Data movement starts with the first extent in the table space and ends with the HWM extent.

• REV_REBAL – occurs when containers removed or reduced in size and data must move out of space that is freed. Data movement starts at HWM extent and moves in reverse order through table space, ending with first extent in table space.

• FWD_REBAL_OF_2PASS or REV_REBAL_OF_2PASS – forward rebalance followed by reverse rebalance. Status shows which phase of two-pass rebalance in progress.

• TIP: If rebalance is affecting application throughput, consider:
ALTER TABLESPACE tablespace_name REBALANCE SUSPEND;
ALTER TABLESPACE tablespace_name REBALANCE RESUME;

• TIP: If the database is deactivated while a table space rebalance is in suspended state, database reactivation resumes the rebalance.

A major advance in DB2 Version 10.1 is the ability to create storage groups, which are groups of storage paths. A storage group contains storage paths with similar characteristics. Some critical attributes of the underlying storage to consider when creating or altering a storage group are available storage capacity, latency, data transfer rates, and the degree of RAID protection.

These storage groups can be used to create different classes of storage (multi-temperature storage classes) where frequently accessed (or hot) data is stored in storage paths residing on fast storage, while infrequently accessed (or cold) data is stored in storage paths residing on slower, less expensive storage. After you create storage groups that map to the different classes of storage in your database management system, you can assign automatic storage table spaces to those storage groups, based on which table spaces have hot or cold data. You can use storage groups to physically partition table spaces managed by automatic storage. You can dynamically reassign a table space to a different storage group by using the ALTER TABLESPACE statement with the USING STOGROUP option.

A database-managed table space can be converted to an automatic storage table space by executing an ALTER TABLESPACE statement and specifying the MANAGED BY AUTOMATIC STORAGE clause on the table space. Note that after this is done, it is necessary to perform a rebalance operation on the table space by executing an ALTER TABLESPACE statement and specifying the REBALANCE clause on the table space. In 10.1, the rebalance operation is enhanced so that you can manually SUSPEND and RESUME a rebalance during performance-sensitive periods.

You can take further advantage of organizing your data into storage groups by configuring DB2 workload manager (WLM) to prioritize activities based on the priority of the data being accessed. The ADMIN_GET_STORAGE_PATHS table function can be used to get the list of automatic storage paths for each database storage group, including file system information for each storage path. Other table functions that have been added or modified to support the monitoring of storage groups include: MON_GET_REBALANCE_STATUS, MON_GET_TABLESPACE and MON_GET_CONTAINER.
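As a quick illustration of one of the monitoring functions named above, a hedged sketch that lists containers per table space; the column list is kept to basics:

SELECT VARCHAR(TBSP_NAME,20) AS TBSP_NAME,
       VARCHAR(CONTAINER_NAME,50) AS CONTAINER_NAME,
       TOTAL_PAGES
FROM TABLE(MON_GET_CONTAINER('',-1)) AS t;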


Do you know about unused disk space held by Indexes?

• Standard DB2 tables use single storage object to hold all indexes for a table

• Each index is supported by a set of pages that may reside in extents with pages from other indexes

• Pages within an index can become empty when data rows are deleted

• Empty pages are still considered part of the index and hold disk storage

• When an index is dropped, pages assigned to the index become available for use

• Pages from the dropped index remain allocated to the index object for that table

• REORGCHK report will not reflect pages freed by a dropped index

• Other indexes on the same table could reuse the pages

• If a new index is created on the table, the pages could be reused

• With DB2 10.1, the REORG utility can rebuild all indexes for a table and release unused pages back to the table space

DB2 uses a single storage object to manage the indexes for a standard table. The extents used to contain the index storage object can contain pages from multiple indexes. Index pages may become empty when large sets of data rows are deleted.

If an index is dropped, the pages used by that index are available for reuse by other indexes and can be used to support a new index on the same table, but the pages remain allocated to the index object.

In Version 10.1, you can use the new online index reorganization functionality to reclaim unused index space on tables that reside in DMS table spaces. This functionality is available through the following options (see the command sketch after this list):

• Issuing the REORG INDEX FOR TABLE or REORG INDEXES ALL FOR TABLE command with the new RECLAIM EXTENTS clause.

• Calling the db2Reorg API and specifying the new DB2REORG_INDEX_RECLAIM_EXTENTS value for the reorgFlags parameter in the db2ReorgStruct data structure.

• Setting automatic index reorganization and specifying the reclaimExtentsSizeForIndexObjects attribute in the ReorgOptions element in the input XML file.
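A minimal sketch of the first option, reusing the mel.sales table from the earlier slides:

-- Reclaim empty extents from the index object while allowing writes
REORG INDEXES ALL FOR TABLE mel.sales ALLOW WRITE ACCESS RECLAIM EXTENTS;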


Characteristics for Online Index reorganization prior to 10.1

• REORG utility with INDEXES ALL FOR TABLE used to rebuild all indexes on a table

• ALLOW WRITE ACCESS option does not block application changes to the table during index reorganization

• Default mode is ALLOW READ ACCESS

• Exclusive table lock necessary to swap new index object for old one at end of processing

• Index reorganization is a single long-running transaction, which impacts active log space utilization

• New index object needs to be allocated in the same table space as the original index object

• Requires twice the index disk space

• Can generate I/O workload to build the index object copy

The visual shows some of the characteristics for the index reorganization processing with DB2 9.7.

The ALLOW WRITE ACCESS option allows the indexes for a table to be reorganized with minimal loss of access to the table, but an exclusive lock is required to swap to the new set of indexes at the end of processing.

The REORG utility uses an index copy during index reorganization, which requires the index table space to be able to hold both copies at the same time.

A lengthy index reorganization could impact database log space requirements, since it runs as a long running single transaction.


DB2 10.1 RECLAIM EXTENTS reclaims unused index object space

• REORG INDEXES ….. RECLAIM EXTENTS

• Phase 1 - Collocation

• Move pages around within index object to form empty extents

• During this phase, reorg will do periodic commits

• Phase 2 - Drain

• Waits for concurrent access to release table locks

• Does not prevent new application requests from starting

• Does not create new log records in this phase

• Phase 3 – Extent reclaim

• Free extents in index object released to table space

• Some utilities and commands compatible:

• IMPORT (insert), EXPORT, BACKUP, RUNSTATS

• ALTER TABLESPACE

• Some utilities not compatible:

• IMPORT (replace), LOAD

With DB2 10.1, the REORG utility provides the option RECLAIM EXTENTS, which moves index pages around within the index object to form empty extents, then frees these empty extents from exclusive use by the index object, making the space available for use by other database objects within the table space. Extents are reclaimed from the index object back to the table space.


Checking indexes for reclaimable space

INDEX_NAME           IID    INDEX_OBJECT_L_SIZE  INDEX_OBJECT_P_SIZE  RECLAIMABLE_SPACE
-------------------- ------ -------------------- -------------------- --------------------
HIST1IX1             1      2784                 2784                 480
HIST1IX2             2      2784                 2784                 480
HIST2IX1             1      2784                 2784                 448
HIST2IX2             2      2784                 2784                 448

4 record(s) selected.

select varchar(indname,20) as index_name, iid, index_object_l_size, index_object_p_size, reclaimable_space
from table( admin_get_index_info('','CLPM',NULL) ) as T1

• ADMIN_GET_INDEX_INFO.RECLAIMABLE_SPACE

• TIP: Estimate of disk space that can be reclaimed from the entire index object by running REORG INDEXES with RECLAIM EXTENTS.

• For indexes in DMS/AMS table spaces

The ADMIN_GET_INDEX_INFO table function can be used to query index status information. The RECLAIMABLE_SPACE column provides an estimate of disk space, in kilobytes, that can be reclaimed from the entire index object by running the REORG INDEXES or REORG INDEX command with the RECLAIM EXTENTS option.


Using REORG INDEXES to reclaim index space

• PREFERRED PRACTICE:

REORG indexes all for table clpm.hist1 allow write access CLEANUP ALL RECLAIM EXTENTS;

REORG indexes all for table clpm.hist2 allow write access CLEANUP ALL RECLAIM EXTENTS;

select varchar(indname,20) as index_name, iid, index_object_l_size, index_object_p_size, reclaimable_space
from table( admin_get_index_info('','CLPM',NULL) ) as T1

INDEX_NAME           IID    INDEX_OBJECT_L_SIZE  INDEX_OBJECT_P_SIZE  RECLAIMABLE_SPACE
-------------------- ------ -------------------- -------------------- --------------------
HIST1IX1             1      2240                 2240                 0
HIST1IX2             2      2240                 2240                 0
HIST2IX1             1      2272                 2272                 0
HIST2IX2             2      2272                 2272                 0

4 record(s) selected.

A new mechanism to reclaim unused space in indexes has been introduced to provide a more efficient way to free space for indexes that reside in DMS table spaces. Deleting a substantial amount of data from tables on a regular basis results in unused space in tables and associated indexes. This space cannot be used by any other object in the same table space until reorganization takes place.

In Version 10.1, you can use the new online index reorganization functionality to reclaim unused index space on tables that reside in DMS table spaces. This functionality is available through the following options:

• Issuing the REORG INDEX FOR TABLE or REORG INDEXES ALL FOR TABLE command with the new RECLAIM EXTENTS clause.

• Calling the db2Reorg API and specifying the new DB2REORG_INDEX_RECLAIM_EXTENTS value for the reorgFlags parameter in the db2ReorgStruct data structure.

• Setting automatic index reorganization and specifying the reclaimExtentsSizeForIndexObjects attribute in the ReorgOptions element in the input XML file.

INDEX index-name RECLAIM EXTENTS Specifies the index to reorganize and reclaim extents that are not being used. This action moves index pages around within the index object to create empty extents, then frees these empty extents from exclusive use by the index object and makes the space available for use by other database objects within the table space. Extents are reclaimed from the index object back to the table space. ALLOW READ ACCESS is the default; however, all access modes are supported.

TABLE table-name RECLAIM EXTENTS Specifies the table to reorganize and reclaim extents that are not being used. The table-name variable must specify a multidimensional clustering (MDC) or insert time clustering (ITC) table. The name or alias in the form: schema.table-name can be used. The schema is the user name under which the table was created. If you omit the schema name, the default schema is assumed.

For REORG TABLE RECLAIM EXTENTS when the ON DATA PARTITION clause is specified, the access clause only applies to the named partition. Users can read from and write to the rest of the table while the extents on the specified partition are being reclaimed. This situation also applies to the default access levels.
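For the table-level variant, a sketch assuming mel.sales_mdc is an MDC (or ITC) table; the table name is hypothetical and the default access level applies:

-- Reclaim unused extents from an MDC/ITC table back to the table space
REORG TABLE mel.sales_mdc RECLAIM EXTENTS;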


Reclaiming Unused Space in Index Table Spaces

• TIP: Consider reducing HWM of index table space after REORG INDEXES … RECLAIM EXTENTS.

SELECT VARCHAR(TBSP_NAME,12) AS TS_NAME, TBSP_TOTAL_PAGES, TBSP_USED_PAGES, TBSP_PAGE_TOP, RECLAIMABLE_SPACE_ENABLED FROM TABLE (MON_GET_TABLESPACE('', -1)) AS ts

TS_NAME     TOTAL_PAGES USED_PAGES HIGH_WATER_MARK RECLAIMABLE_SPACE_ENABLED
----------- ----------- ---------- --------------- -------------------------
USERSPACE1         8192       3264            3264                         0
TP1DMSH            4608       3120            3120                         0
TP1DMSAD          12000       6400            6400                         0
TP1DMSINDEX       20000       7040           13984                         1
TSP1INDEX          8192       5760            5760                         1
TSP2INDEX          8192       2048            2048                         1

• For Automatic Storage table spaces:

• ALTER TABLESPACE … REDUCE MAX

• For DMS, requires 2 steps to release unused extents:

1. ALTER TABLESPACE dmstsx LOWER HIGH WATER MARK

2. ALTER TABLESPACE dmstsx REDUCE (ALL CONTAINERS 100 M)

• TIP: Reduces backup image storage size and backup elapsed time

• DB2 Backup utility must back up all extents up to the table space high water mark extent

RECLAIMABLE_SPACE_ENABLED=1 indicates the table space supports reducing the HWM

The ability to use ALTER TABLESPACE options to lower the high water mark depends on whether the table space supports reclaimable storage. The MON_GET_TABLESPACE table function can be used to query various table space statistics. The column RECLAIMABLE_SPACE_ENABLED can be checked for a value of '1', indicating the table space supports reclaiming space. The report could be used to locate any table spaces that show a high water mark that is much higher than the number of pages currently in use.

To lower the high water mark, you must have an Automatic Storage table space that was created with DB2 9.7 or later. Reclaimable storage is not available in table spaces created with earlier versions of the DB2 product. You can see which table spaces in a database support reclaimable storage using the MON_GET_TABLESPACE table function.

For a DMS or Automatic Storage table space created in DB2 9.7 or later, you can use reclaimable storage to return unused storage to the system for reuse. Reclaiming storage is an online operation; it does not impact the availability of data to users. You can reclaim the unused storage at any time by using the ALTER TABLESPACE statement with the REDUCE option:

• For Automatic Storage table spaces, the REDUCE option has sub options to specify whether to reduce storage by the maximum possible amount or by a percentage of the current table space size.

• For DMS table spaces, first use the ALTER TABLESPACE statement with the LOWER HIGH WATER MARK option, and then the ALTER TABLESPACE statement with the REDUCE, RESIZE, DROP and associated container operation clauses.

For DMS, two steps are required to release unused extents:
1. ALTER TABLESPACE dmstsx LOWER HIGH WATER MARK
2. ALTER TABLESPACE dmstsx REDUCE (ALL CONTAINERS 100 M)


Directories for Database Path storage changed

• Partition-global directory

• The partition-global directory has the path: your_instance/NODExxxx/SQLxxxxx

• /database/inst481/NODE0000/SQL00001:

• db2rhist.asc

• db2rhist.bak

• db2rhist.lock

• HADR

• load

• LOGSTREAM0000 (Default database logs)

• MEMBER0000

• SQLDBCONF

• SQLOGAB

• SQLOGCTL.GLFH.1

• SQLOGCTL.GLFH.2

• SQLOGCTL.GLFH.LCK

• SQLSGF.1

• SQLSGF.2

• SQLSPCS.1

• SQLSPCS.2

• /database/inst481/NODE0000/SQL00001/LOGSTREAM0000:

• S0000000.LOG

• S0000001.LOG

• S0000002.LOG

• Member-specific directory

• The member-specific directory has the path: your_instance/NODExxxx/SQLxxxxx/MEMBERxxxx

• /database/inst481/NODE0000/SQL00001/MEMBER0000:

• db2event

• DB2TSCHG.HIS

• HADR

• SQLBP.1

• SQLBP.2

• SQLDBCONF

• SQLINSLK

• SQLOGCTL.LFH.1

• SQLOGCTL.LFH.2

• SQLOGMIR.LFH

• SQLTMPLK

• Local database Directory

• /database/inst481/NODE0000/sqldbdir:

• sqldbbak

• sqldbdir

• sqldbins

Notes: Database directories and files

When you create a database, information about the database, including default information, is stored in a directory hierarchy. The hierarchical directory structure is created for you. You can specify the location of the structure by specifying a directory path or drive for the CREATE DATABASE command; if you do not specify a location, a default location is used.

In the directory that you specify as the database path in the CREATE DATABASE command, a subdirectory that uses the name of the instance is created. Within the instance-name subdirectory, the partition-global directory is created. The partition-global directory contains global information associated with your new database. The partition-global directory is named NODExxxx/SQLyyyyy, where xxxx is the data partition number and yyyyy is the database token (numbered >=1).

Under the partition-global directory, the member-specific directory is created. The member-specific directory contains local database information. The member-specific directory is named MEMBERxxxx, where xxxx is the member number. In a DB2® pureScale® environment, there is a member-specific directory for each member, called MEMBER0000, MEMBER0001, and so on. In a partitioned database environment, member numbers have a one-to-one mapping with their corresponding partition number, therefore there is one NODExxxx directory per member and partition. Member-specific directories are always named MEMBERxxxx and they always reside under the partition-global directory. An Enterprise Server Edition environment runs on a single member, and has one member-specific directory, called MEMBER0000.

Partition-global directory: The partition-global directory has the path your_instance/NODExxxx/SQLxxxxx and contains the following files:

• Global deadlock write-to-file event monitor files that specify either a relative path or no path at all.

• Table space information files. The files SQLSPCS.1 and SQLSPCS.2 contain table space information. These files are duplicates of each other for backup purposes.

• Storage group control files. The files SQLSGF.1 and SQLSGF.2 contain storage group information associated with the automatic storage feature of a database. These files are duplicates of each other for maintenance and backup purposes. The files are created for a database when you create the database using the CREATE DATABASE command or when you upgrade a nonautomatic storage database to DB2 Version 10.1 or later.

• Temporary table space container files. The default location for new containers is under the instance/NODExxxx/<db-name> directory. The files are managed locally by each member. The table space file names are made unique for each member by inserting the member number into the file name, for example: /storage path/SAMPLEDB/T0000011/C0000000.TMP/SQL00002.MEMBER0001.TDA

• The global configuration file. The global configuration file, SQLDBCONF, contains database configuration parameters that refer to single, shared resources that must remain consistent across the database. Do not edit this file. To change configuration parameters, use the UPDATE DATABASE CONFIGURATION and RESET DATABASE CONFIGURATION commands.

• History files. The DB2RHIST.ASC history file and its backup DB2RHIST.BAK contain history information about backups, restores, loading of tables, reorganization of tables, altering of a table space, and other changes to a database. The DB2TSCHG.HIS file contains a history of table space changes at a log-file level. For each log file, DB2TSCHG.HIS contains information that helps to identify which table spaces are affected by the log file. Table space recovery uses information from this file to determine which log files to process during table space recovery. You can examine the contents of history files in a text editor.

• Logging-related files. The global log control files, SQLOGCTL.GLFH.1 and SQLOGCTL.GLFH.2, contain recovery information at the database level, for example, information related to the addition of new members while the database is offline and maintaining a common log chain across members. The log files themselves are stored in the LOGSTREAMxxxx directories (one for each member) in the partition-global directory.

• Locking files. The instance database lock files, SQLINSLK and SQLTMPLK, help to ensure that a database is used by only one instance of the database manager.

• Automatic storage containers.


Changes to paths used for log archive and retrieval

• If LOGARCHMETH1=DISK is set, the path used differs from previous releases. Archive log files are placed in a path that includes:

• Instance name

• Database name

• Node

• Log Stream

• Log Chain ID

/home/inst481/archive/inst481/TP1/NODE0000/LOGSTREAM0000/C0000001/

• When archived logs retrieved, a different directory structure used for Enterprise Edition and pureScale databases:

• New subdirectory LOGSTREAMnnnn added to path where logs retrieved, which could be active log path, mirror log path or overflow log path.

Log path: /dblogs/inst481/tp1/NODE0000/LOGSTREAM0000/
Logs retrieved in path: /dblogs/inst481/tp1/NODE0000/LOGSTREAM0000/LOGSTREAM0000/

For example, if you set the logarchmeth1 configuration parameter to DISK:/u/dbuser/archived_logs, the archive log files are placed in the /u/dbuser/archived_logs/instance/dbname/nodename/logstream/chainid/ directory.
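A minimal sketch (the database name tp1 is hypothetical):

-- Archive logs to disk; the directory tree shown above is created under this path
UPDATE DB CFG FOR tp1 USING LOGARCHMETH1 DISK:/u/dbuser/archived_logs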


The backup file naming changed

• File name for backup images on disk includes:

• Database alias

• Type of backup (0=full, 3=table space, 4=copy from table load)

• Instance name

• Database Partition (DBPARTnnn, nnn is 000 for single partition DB)

• Timestamp of backup (YYYYMMDDHHMMSS)

• Sequence number (for multiple files)

MUSICDB.0.DB2.DBPART000.20120522120112.001

The naming convention for backup utility file output has changed in DB2 10.1.


Compression for archived logs – saving storage is a good thing!

• Archived log files contain a significant amount of data and these archives can grow quickly

• If modified data is already in compressed tables, logging is reduced by virtue of including compressed record images in log records

• Compression of archived log files further increases storage savings, even in these environments

• Archive logs are compressed based on database configuration options

• LOGARCHCOMPR1 – controls compression for first set of archive logs (logarchmeth1)

• LOGARCHCOMPR2 – controls compression for second set of archive logs (logarchmeth2)

• Default is OFF, activate with ON

• Works when logarchmethx is DISK, TSM, or VENDOR

• Can be configured online

• Does not require DB2 Storage Optimization Feature

Archived logs can be compressed in DB2 10.1 and save storage space.
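For example, a sketch (database name tp1 is hypothetical); the parameter can be updated online:

-- Compress logs archived through logarchmeth1
UPDATE DB CFG FOR tp1 USING LOGARCHCOMPR1 ON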


REVIEW: System-period Temporal Table

CREATE TABLE employees

(empID INTEGER NOT NULL PRIMARY KEY,

dept VARCHAR(50),

...,

system_start TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW BEGIN,
system_end TIMESTAMP(12) NOT NULL GENERATED ALWAYS AS ROW END,

trans_id TIMESTAMP(12) GENERATED ALWAYS AS TRANSACTION START ID,

PERIOD SYSTEM_TIME (system_start, system_end) ) IN emp_space;

CREATE TABLE employees_history LIKE employees IN hist_space COMPRESS YES WITH RESTRICT ON DROP;

ALTER TABLE employees ADD VERSIONING USE HISTORY TABLE employees_history;

• History table can be configured to suit your needs, e.g. partitioning, compression, table space, etc., but must be in the same database

Another example:

DROP TABLE employees;

CREATE TABLE employees

(empID INTEGER NOT NULL PRIMARY KEY,

dept VARCHAR(50),

system_start TIMESTAMP(12) NOT NULL GENERATED AS ROW BEGIN,

system_end TIMESTAMP(12) NOT NULL GENERATED AS ROW END,

trans_id TIMESTAMP(12) GENERATED AS TRANSACTION START ID,

PERIOD SYSTEM_TIME (system_start, system_end));

CREATE TABLE employees_history LIKE employees WITH RESTRICT ON DROP;

ALTER TABLE employees ADD VERSIONING USE HISTORY TABLE employees_history;

INSERT INTO employees (empID, dept) VALUES (12345,'J3'), (67890, 'K25');

UPDATE employees SET dept='M24' WHERE empID=12345;

DELETE FROM employees WHERE empID=67890;

UPDATE employees SET dept='M15' WHERE empID=12345;


system_time values are always set by DB2!

REVIEW: Insert and Update

EmpID Dept System_start System_end

12345 J13 11/15/1995 12/30/9999

67890 K25 11/15/1995 12/30/9999

On 11/15/1995, Employee 12345 and 67890 were hired into the department J13 & K25.

INSERT INTO employees (empID, dept) VALUES (12345,'J13'), (67890,'K25')

EmpID Dept System_start System_end

12345 J13 11/15/1995 01/31/1998

EmpID Dept System_start System_end

12345 M24 01/31/1998 12/30/9999

67890 K25 11/15/1995 12/30/9999

On 1/31/1998, Employee 12345 moved to department M24.

UPDATE employees SET dept='M24' WHERE empID=12345

System validity period: [inclusive, exclusive]

Note: only date portion of TIMESTAMP value shown in examples to simplify display

Table: employees

Tables: employees_history and employees


Note: only the date portion of TIMESTAMP values are shown in these examples, to keep things easy to read.

When a row in the base table (employees) is updated, the before-image of the row is inserted into the history table.

All periods use the inclusive-exclusive model, also called the closed-open model. This means that the end point of the period is not included in the validity period. For example, the following period from 1995-11-15 to 1998-01-31 indicates that employee 12345 was in department J13 until and including 1998-01-30 but no longer on 1998-01-31, which is the start date in his new department (M24).

EmpID Dept System_start System_end

12345 J13 11/15/1995 01/31/1998


REVIEW: Delete and Update

EmpID Dept System_start System_end

12345 J13 11/15/1995 01/31/1998

67890 K25 11/15/1995 03/31/2000

On 3/31/2000, Employee 67890 left the company.

DELETE FROM employees WHERE empID=67890

67890 was in K25 from 11/15/1995 to 3/31/2000

EmpID Dept System_start System_end

12345 M24 01/31/1998 12/30/9999

EmpID Dept System_start System_end

12345 J13 11/15/1995 01/31/1998

12345 M24 01/31/1998 05/31/2000

67890 K25 11/15/1995 03/31/2000

On 5/31/2000, Employee 12345 joined the department M15.

UPDATE employees SET dept='M15' WHERE empID=12345

12345 was in M24 from 1/31/1998 to 5/31/2000

EmpID Dept System_start System_end

12345 M15 05/31/2000 12/30/9999

Tables: employees_history and employees


Any update or delete on the base table (employees) causes DB2 to transparently insert the before images of all affected rows into the history table. In this process, DB2 automatically sets the system start and end timestamps to maintain a precise temporal history of data changes.

Users and applications insert, update, and delete rows only in the base table. The maintenance of the history table is best left to DB2. The only exception is when you periodically need to archive or purge old rows from the history table.
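When that periodic purge is needed, a hedged sketch; the seven-year retention is an assumption for illustration only:

-- Purge history rows whose validity ended more than 7 years ago
DELETE FROM employees_history
WHERE system_end < CURRENT TIMESTAMP - 7 YEARS;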


REVIEW: Querying a System-period temporal table

1. Which department is employee 12345 in (right now)?
SELECT dept FROM employees WHERE empID=12345;

2. Which department was employee 12345 in on 12/01/1997?
SELECT dept FROM employees FOR SYSTEM_TIME AS OF '12/01/1997' WHERE empID=12345;

3. How many departments has employee 12345 worked in since 1995?
SELECT COUNT(DISTINCT dept) FROM employees FOR SYSTEM_TIME FROM '1995-01-01' TO CURRENT_TIMESTAMP WHERE empID=12345;

EmpID Dept System_start System_end

12345 M15 05/31/2000 12/30/9999

M15

J13

3

EmpID Dept System_start System_end

12345 J13 11/15/1995 01/31/1998

12345 M24 01/31/1998 05/31/2000

67890 K25 11/15/1995 03/31/2000

Without the FOR SYSTEM_TIME clause, the query reads the current data only.

Tables: employees_history and employees


Users and applications typically query only the base table (employees). If you run a regular SQL statement (without any new temporal clause) against the base table, the query reads only current data and the history is not touched.

If you want to query the table as of a certain point in time in the past, you extend the reference to the base table with the FOR SYSTEM_TIME AS OF clause. This causes DB2 to also access the history table as needed.

You can not only retrieve rows that were valid at a specific point in time, you can also look at a certain period of time to see all rows that were valid between two points in time.


REVIEW: Time Ranges Using SYSTEM_TIME FROM…TO… or BETWEEN…AND….

1. Search within an inclusive-exclusive range of time:
SELECT * FROM employees FOR SYSTEM_TIME FROM '1995-01-01' TO '1998-01-31' WHERE empID=12345;

2. Search within an inclusive-inclusive range of time:
SELECT * FROM employees FOR SYSTEM_TIME BETWEEN '1995-01-01' AND '1998-01-31' WHERE empID=12345;

EmpID Dept System_start System_end

12345 M15 05/31/2000 12/30/9999

EmpID Dept System_start System_end

12345 J13 11/15/1995 01/31/1998

12345 M24 01/31/1998 05/31/2000

67890 K25 11/15/1995 03/31/2000

Tables: employees_history and employees

Query 1 result:
12345 J13 11/15/1995 01/31/1998
1 record(s) selected

Query 2 result:
12345 J13 11/15/1995 01/31/1998
12345 M24 01/31/1998 05/31/2000
2 record(s) selected


System Temporal and BiTemporal Table Security (1)

• History Table associated with System Temporal Tables and Bitemporal Tables

• Inherits its privileges from the base table

• PREFERRED PRACTICE:

• Grant required application privileges on base table and application accesses history table transparently

• Especially true for INSERT, UPDATE, and DELETE privileges

• Do not want applications outside of DB2 engine to modify history table

• Would adversely affect data-row-tracking mechanism of STT and BiTemporal tables

Because the history tables for system-period temporal and bitemporal tables inherit their access privileges from the base table, a preferred practice is not to explicitly grant privileges on the history tables. This preference is especially true for INSERT, UPDATE, and DELETE privileges, because you do not want external applications outside of the DB2 10 database to modify the history tables. External history table modification can adversely affect the data-row-tracking mechanism of system-period temporal tables and bitemporal tables.
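As an illustration, a sketch using the employees tables from the earlier slides; appuser is a hypothetical authorization ID:

-- Grant only on the base table; the history table inherits the access
GRANT SELECT, INSERT, UPDATE, DELETE ON TABLE employees TO USER appuser;
-- Deliberately no explicit grants on employees_history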


System Temporal and BiTemporal Table Security (2)

• PREFERRED PRACTICE:

• Prevent history table from being implicitly dropped when base table dropped

• Use the WITH RESTRICT ON DROP clause

The WITH RESTRICT ON DROP clause prevents the history table from being implicitly dropped when the base table of the temporal table definition is dropped. This clause requires an explicit DROP TABLE statement to be executed against the history table to drop the table.

An alternative method to avoid dropping the history table when the base table is dropped is to use the ALTER TABLE … DROP VERSIONING statement against the base table before the DROP TABLE statement.
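A sketch of both approaches, assuming the employees/employees_history pair from the earlier slides:

-- Approach 1: break versioning first, then drop only the base table
ALTER TABLE employees DROP VERSIONING;
DROP TABLE employees;

-- The history table was created WITH RESTRICT ON DROP, so dropping it
-- later requires lifting the restriction explicitly
ALTER TABLE employees_history DROP RESTRICT ON DROP;
DROP TABLE employees_history;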


Temporal Table DDL

• PREFERRED PRACTICE:

• Ensure History Table matches base System Temporal Table or BiTemporal Table:

• CREATE TABLE hist_table LIKE base_table WITH RESTRICT ON DROP

• LIKE option prevents improper column order, misspelled column names and mismatched data types and lengths

• TIP:

• Remember time periods within Temporal Tables always use INCLUSIVE-EXCLUSIVE for start and end of period for both SYSTEM_TIME and BUSINESS_TIME

The easiest and preferred method to ensure the history table matches the base table for a system-period temporal table is to use the LIKE option on the CREATE TABLE statement. The LIKE option prevents any discrepancy in the history table definition such as improper column order, misspelled column names, and other similar types of mistakes.

Inclusive-exclusive means that the start of a period includes the start date that is specified (inclusive) and the period ends before the end time that is specified.


System Temporal and BiTemporal Table Performance

• History table has new rows added to end of table as rows are updated or deleted from base table

• PREFERRED PRACTICE:

• Improve performance of inserts and updates with APPEND ON:

• ALTER TABLE hist_table APPEND ON

• Check SYSCAT.TABLES.APPEND_MODE = 'Y'

For performance reasons, because the history table has new rows appended to the end of the table as rows are updated or deleted from the base table, a preferred practice is to use APPEND ON to improve the performance of INSERTS and UPDATES.
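For instance, a minimal sketch against the employees_history table from the earlier slides:

-- Turn on append mode for the history table
ALTER TABLE employees_history APPEND ON;

-- Verify the catalog flag
SELECT APPEND_MODE FROM SYSCAT.TABLES
WHERE TABSCHEMA = CURRENT SCHEMA AND TABNAME = 'EMPLOYEES_HISTORY';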


Summary:

• Describe DB2 10 performance tips.

• Utilize DB2 10 problem determination tricks.

• Analyze DB2 10 tips to improve utilities.

• Learn DB2 10 monitoring tricks to improve performance and problem determination.

• Implement commonly overlooked DB2 10 features.

Abstract:

Look at all the great hints and tips I’ve learned about DB2 LUW 10. Melanie will present tips and tricks that she has learned about DB2 LUW. Many times these small but great enhancements are overlooked. The goal is to provide you with an arsenal of tips you can use in various database administration situations. Come learn DB2 LUW 10 ideas, tips and tricks that you need to know.


Wowza! Great Tips I've Learned About DB2 LUW 10

EMAIL: [email protected]

TWITTER: @mstopfer1

LinkedIn: Melanie Stopfer

Thank you for attending.
