
Page 1: Build Your Own Benchmark Center - Teradata Downloads

Build Your Own Benchmark Center

Ready to Use Methodology & Tools

Randall Parman, Database Architect, DineEquity, Inc.

Doug Ebel, Benchmark & POC Center of Expertise, Teradata Corporation


Page 2:

“People who claim an investment yielded an improvement either measure it, guess it, or lie.”
-- Doug Ebel

• How benchmarks contribute to good EDW practice

• Lessons learned from different benchmark approaches

• How to define a good benchmark of the DBMS

• Tools for selecting Queries and Tables from DBQL

• Query Driver

> DBQL and ResUsage Table Usage

> Query Setup

> Test setup

> Monitoring and reporting results

> Benchmark Wrap-up – Saving results

• Conclusion

Case Study: Benchmark at DineEquity

• Compare 2-node 5500C with 4-node 2580

• Looking for wall clock time comparisons

• How queries selected

> Known long running queries

> Gut feel for complexity

• Run methods

> Serial

> All in parallel

> Multiple parallel streams

• Data generated for projected growth

Page 3:

Serial Exec.

12 Concurrent

Page 4:

24 Concurrent

60 Concurrent

Page 5:

Results of Benchmarking At DineEquity

• Provided a reasonable picture of performance

• Easy to control and manipulate

• Found that starting all queries at the same time did not provide a true picture of workload performance.

• Method for choosing queries was very random.

• Did “stumble” upon some workload management nuggets.

• Overall, process was good and led to request for doing this presentation.

How Benchmarks Contribute To Good EDW Practice

• Clearly understand impact of new release on all users
> Case study: 5400 TD 6.2 vs 2550 TD 12

- 2,132 production queries captured over 6 hr
- 2,013 completed in 25% of the time of 5400
- 16 ran longer (complex, redundant syntax, short duration)

• What is impact of multiple upgrades?
> Case study: SLES 9 -> 10, TD 12 -> 12.0.3, 18 -> 33 nodes

- Did SLES upgrade impact system? Or change in activity?
- Concern that node upgrades may hide performance problem
- 1,400 actual queries extracted from DBQL
- Proved SLES change < 1% performance
- Proved no negative impact of software service pack

• Forecast impact before going to production
• Provide cost justification for expense of upgrade
• Adjustment of capacity plan – might defer upgrade
• Inexpensive validation that patches are performance neutral
• Identify impacts of physical data model tuning

Page 6:

Lessons Learned From Different Benchmark Approaches

• “Take all tables” approach: “It’s too difficult to determine which tables are needed”
> Excess effort, space requirements, system processing

• Engineered queries: “Make that puppy pant”
> Not representative of real workload
> Success in artificial benchmark ≠ Success in production

• Manually executed from BI tool: “Everyone hit the Enter key at the count of three”
> Not reproducible
> Takes setup of more infrastructure (BI tool, network)
> Hides DBMS performance differences with the additional overhead of network and client platform, client disks

• Execute same queries multiple times
> Database gets faster after the first one
> Not representative of real world

What Impacts Benchmark Performance?

[Diagram: a client server with its client disk (RAID-0? RAID-1? RAID-5? RAID-10?) connects through switches/routers and the public LAN to Nodes 1-4 on the BYNET; a TD Managed Server attaches over a private LAN, which provides multiple connections plus Jumbo packet size.]

1. Client disk speed
2. Client server speed
3. Client application design
4. DBMS load utility used ✓
5. Client LAN bandwidth
6. Network congestion, propagation
7. Public LAN packet size
8. LAN bandwidth to node
9. Node speed ✓
10. Node O/S ✓
11. DBMS, DBMS release ✓
12. BYNET speed ✓

Page 7:

Too Often, When a Client BI/ETL Application Is Used, the DBMS Has Only a Minor Impact on Timing

[Chart: node CPU state (Idle, IOWait, UServ, UExec) by time of day, 12:55 to 13:35, y-axis 0 to 2,500. The ETL test ran 32 minutes with node CPU nearly 100% idle.]

Conclusion? Teradata good at being idle?

How To Focus On DBMS Measurement

• Data Loading

> Execute data load utilities against data prepared by the ETL application, separately from the ETL application

> Use dedicated server (or TD Managed Server)

> Use dedicated LAN, configure for Jumbo Packets

> Use multiple connections from ETL server to nodes

• Data Retrieval

> Capture queries from BI tool. Run without BI tool

> Set Retlimit and RetCancel in BTEQ to limit rows sent back to client platform

> Measure performance from Query StartTime to FirstRespTime on DBQL (resolution to .01 seconds)
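The StartTime-to-FirstRespTime measurement can be pulled straight from DBQL. A minimal sketch using standard DBC.QryLog columns; the UserName pattern is a hypothetical benchmark worker-ID filter, not something from the deck:

```sql
-- Elapsed time per query, measured inside the DBMS (resolution .01 sec),
-- independent of client and network overhead.
SELECT  QueryID,
        UserName,
        StartTime,
        FirstRespTime,
        (FirstRespTime - StartTime) DAY(2) TO SECOND(2) AS ElapsedTime,
        AMPCPUTime,
        TotalIOCount
FROM    DBC.QryLog
WHERE   UserName LIKE 'Usr_%'      -- hypothetical worker logon pattern
ORDER BY StartTime;
```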

Page 8:

• Benchmark for DW should be a subset of your vision or production

• Focus on the DBMS

• Minimize the client software: BI/ETL tools

[Diagram: the Vision flows Source System → Extract File → ETL Process → Load Data → Load Utilities → EDW (Detail Data → Transform → Semantic Data) → BI Tools → SQL → Results → User. The benchmark keeps only the DBMS subset: Load Data → Load Utilities → EDW (Detail Data → Transform → Semantic Data) → SQL → Results.]

Distilling Benchmark Objectives And Scope

How To Distill Benchmark Objectives And Scope

• A DBMS Benchmark should be the DBMS-related subset of your vision or production

> Should be smaller in scale than a full production parallel

> Ideally execute in 15-60 minutes to allow re-testing

• Make benchmark representative of your production

> #’s of heavy, medium and light users

> Queries selected from real production

> Queries modified for each execution to represent variety of interests (geographies, products, time periods)

> Should cover a few major tables plus their supporting lookups or dimensions.

Page 9:

Overview Of The Selection Methodology

[Diagram: the App DB and DBQL feed the TableIndex and QueryIndex tables in the xxx_Benchmark database.]

0. Turn on DBQL logging for queries, objects
1. ID Tables of key interest
2. Sample queries that use those tables
3. ID other tables needed by Queries
4. Eliminate tables with minor use and their queries
5. Run tests and/or export DDL & Data
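Step 0 is ordinary DBQL setup. One plausible form of the logging statement (available WITH options vary by Teradata release; on a busy production system, scope it to specific users rather than ALL):

```sql
-- Enable capture of objects, full SQL, explain text, and step detail.
BEGIN QUERY LOGGING WITH SQL, OBJECTS, EXPLAIN, STEPINFO ON ALL;

-- When the capture window is over:
END QUERY LOGGING ON ALL;
```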

Identifying Data And Queries Via DBQL

[Diagram: the procedures below populate TableIndex and QueryIndex in the xxx_Benchmark database.]

Call SelectTable('db','tablepattern');
Call SelectQueryUsingTables('querypattern', mintablecount, samplesize);
Call SelectTablesUsedByQueries();
Call ReduceMinorTablesPreview(MinUsage);
Call ReduceMinorTables(MinUsage);
Call QueryRenumber();
Call SelectNeededObjects();   -- emits Show table … / Show view …
Call SelectTableUsage();      -- counts of queries using each table
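The DBQL query behind a procedure like SelectQueryUsingTables can be sketched as a join from the object log to the selected tables. The TableIndex column names here are assumptions for illustration, not the tool's actual DDL:

```sql
-- Find queries that touch the tables of key interest:
-- join the DBQL object log to the chosen table list.
SELECT  o.QueryID,
        COUNT(DISTINCT o.ObjectTableName) AS TablesUsed
FROM    DBC.QryLogObjects o
JOIN    xxx_Benchmark.TableIndex t        -- assumed columns: DatabaseName, TableName
  ON    o.ObjectDatabaseName = t.DatabaseName
 AND    o.ObjectTableName    = t.TableName
WHERE   o.ObjectType = 'Tab'              -- tables only, not columns/views
GROUP BY o.QueryID
HAVING  COUNT(DISTINCT o.ObjectTableName) >= 2;   -- i.e. mintablecount
```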

Page 10:

Benchmark Query Driver: TdBenchV5

• Features:
> Tracks each test using TrackingTable + DBQL

> Supports single stream and multi-stream tests

> Has reporting views for analysis of results from DBQL and ResUsage by selecting a RunID

> Supports mixed workloads of heavy, medium, light queries and update processes

> Organizes DBQL, Resusage, and output log files

• Requirements:
> Runs on Windows and Windows Server platforms

> Implemented in DOS batch files; however, no Batch command skills are required

> Uses stored procedures. (For Teradata express DBMS on PC, need to be able to create stored procedures.)

Overview Of TD Query Driver

[Diagram: On Teradata, the xxx_Benchmark database holds TableIndex, QueryIndex, TestTracking, QueryHold, the QueryQueues, the Query Macros, and reporting views, alongside the App DB and DBQL. On a Windows PC/Server, the TdBench dialog starts a benchmark; a Control session updates TestTracking, populates QueryHold and the QueryQueues, and cancels sessions; Worker sessions execute the queries; log files are kept per run (Run001 files, Run002 files, …).]

Page 11:

Tracking Each Test – The TestTracking Table

• When test initiated, you are prompted for:

> File to run (.SQL, .BTQ, .BAT, or .LST) in the Scripts Directory

> Number of sessions

> Prefix of the Worker logon ID’s to be used

> Description of the test

> Duration of the test.

• Unique RunID is assigned from TestTracking

> Marks beginning & ending Timestamps for reporting from DBQL & ResUsage


Most Useful DBQL Tables For Benchmarks

• DBC.QryLog:
> One row for every query execution. Each query is identified by a QueryID and provides start/stop times, CPU usage, I/O, and information about skew.
> This is the source for all results reporting

• DBC.QryLogObjects:
> For each QueryID, one row for every database object referenced.
> This is the essential cross reference of tables ↔ queries.

• DBC.QryLogExplains:
> For each QueryID, captures explain text in a Large Object text field.
> Before and after analysis can explain performance changes.

• DBC.QryLogSteps:
> Provides detailed CPU, IO and duration for each step executed within the query.
> Helps to understand where time is being spent and whether estimates might be improved by statistics

• DBC.QryLogSQL:
> Contains complete SQL for the QueryID. Default for DBC.QryLog is to contain first 200 characters.

Page 12:

Other Log Tables For Benchmark

• ResUsage: enabled via CTL utility
> Recommend 30 second interval, 30 second collect
> ResUsage tables supported:
- DBC.ResUsageSpma: (Recommended) System-wide node information; provides a summary of overall system utilization incorporating the essential information from most of the other tables
- DBC.ResUsageSawt: (Optional) Data specific to AMP worker tasks. Use this when you want to monitor the utilization of the AWTs and determine if work is backing up because the AWTs are all being used
- DBC.ResUsageSvpr: (Optional) Data specific to each virtual processor and its file system.

• DBC.EventLog: (Always on)
> LogonOff information for finding interference to benchmark.

Begin With The End In Mind – You’re Going To Be Analyzing The Queries In DBQL

• Which DBQL QueryText is easier to parse for reporting? #1:

> SELECT DISTINCT a18.HOTEL_PROD_ID (NAMED HOTEL_PROD_ID ) , a18.HOTEL_DESC (NAMED HOTEL_DESC ) , pa19.nsearch_count …

> SELECT a14.DAY_CODE (NAMED DAY_CODE ) , a15.PRICE_TEST_VER_ID (NAMED PRICE_TEST_VER_ID ) , …

> SELECT a13.HOTEL_ID (NAMED HOTEL_ID ) , MAXIMUM ( a13.HOTEL_DESC ) (NAMED HOTEL_DESC ) , …

• Or, #2:
> Exec xxx_Benchmark.Query01;

> Exec xxx_Benchmark.Query02;

> Exec xxx_Benchmark.Query03;

Page 13:

Avoid Benchmark Constructs That Aren’t Realistic

• Which query sequence gives a more realistic test? #1?

> Exec xxx_Benchmark.Query15;

> Exec xxx_Benchmark.Query15;

> Exec xxx_Benchmark.Query15;

> Exec xxx_Benchmark.Query15;

• Or, #2?

> Exec xxx_Benchmark.Query15 (‘District21’,’2010-03’);

> Exec xxx_Benchmark.Query15 (‘District14’,’2010-02’);

> Exec xxx_Benchmark.Query15 (‘District08’,’2010-04’);

> Exec xxx_Benchmark.Query15 (‘District31’,’2010-03’);

Preparing Queries For Benchmarks

• Best: convert to Macros
> Allows for simple parameterization

> Identify with simple query number, e.g: .Query01, .Query02

> Store in the xxx_Benchmark database, give Select + Grant to app. DB’s

> Doesn’t work if query is a script with multiple DDL

• Next Best: Stored procedures
> Allows for multiple DDL statements

> Result queries need to be moved to top as cursors

> Declare the procedure with 1 or more result sets; instead of a final query, open a cursor and leave it open as you exit

• Also supported: External scripts
> Recommend adding /* .Querynnn */ marker after select

> If multiple steps, use fractions: /* .Querynnn.yyy */

> Ensure every identifier is unique

• Zero pad all identifiers so they sort correctly and are easier to parse
> E.g. .Query01, .Query02 … .Query10, or .Query01.01, .Query01.02

> not .Query1, .Query10, .Query11, … .Query2, .Query20, …
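Converting a captured query into a parameterized macro, as recommended above, is a single CREATE MACRO. A hypothetical example matching the .Query15('District21','2010-03') calls shown earlier; the table and column names are invented for illustration:

```sql
-- Parameterized macro: each execution varies district and month,
-- so repeated runs don't replay identical, cache-friendly SQL.
CREATE MACRO xxx_Benchmark.Query15
      (District VARCHAR(20), SalesMonth CHAR(7)) AS
( SELECT s.District_Id,
         SUM(s.Sales_Amt)
  FROM   App_DB.Daily_Sales s          -- hypothetical application table
  WHERE  s.District_Id = :District
  AND    s.Sales_Month = :SalesMonth
  GROUP BY s.District_Id;
);

EXEC xxx_Benchmark.Query15 ('District21', '2010-03');
```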

Page 14:

Testing Models

• Serial Execution:
> First execution can be against empty tables to validate setup and get a cross reference of column references to focus tuning
> Provides a baseline of each query’s performance characteristics. Repeat if tuning needed (Skewed, slow, etc)

• Concurrent Sessions:
> Increase # streams until Queries/Hour drops off

• Baseline Load:
> Export normal unit(s) of update (e.g. a day or several days)
> Delete normal unit(s) of data
> Load (each) normal unit of data

• Mixed Workload:
> Prepare by deleting unit(s) of data (e.g. a day or days)
> Start Concurrent test; after a minute, start the load(s) of data

Serial Execution: First Benchmark Test

• Serial:
> Simple execution of all queries
> Will provide best possible execution time: Stand-alone
> Pay attention to skewing. Multiple queries with same skew will be much worse when run together
> CPU seconds used over duration allows theoretical calculation of how many can run together without slowing (e.g. 33% CPU => 3)

[Gantt chart: Queries 1-10 executed one after another across roughly 80 minutes.]
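The inputs for the "33% CPU => 3" estimate can be read out of DBQL after the serial run. A sketch using standard DBC.QryLog columns; MaxAmpCPUTime * NumOfActiveAMPs is the usual skew-adjusted "impact CPU", and the UserName pattern is a hypothetical worker-ID filter:

```sql
-- Per serially-run query: duration and CPU seconds consumed.
-- Comparing CPU consumed against total node CPU capacity over the
-- duration estimates how many similar queries can overlap safely.
SELECT  QueryID,
        StartTime,
        (FirstRespTime - StartTime) DAY(2) TO SECOND(2) AS Duration,
        AMPCPUTime,                                    -- CPU seconds across all AMPs
        MaxAmpCPUTime * NumOfActiveAMPs AS ImpactCPU   -- skew-adjusted CPU
FROM    DBC.QryLog
WHERE   UserName LIKE 'Usr_%'                          -- hypothetical worker logons
ORDER BY StartTime;
```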

Page 15:

Avoid Benchmark Constructs That Aren’t Realistic

• Which workload model gives a more realistic test? #1?

• Or, #2?

• Which is easier for DBMS workload management?#1 or #2?

[Charts: two 80-minute workload timelines for Users 1-4, labeled #1 and #2.]

Models For Concurrent Workload

• Decay:
> Start all queries at once and time until last finishes
> Not realistic but often used because it is simple to set up
> Weakness: After small queries finish, long running queries run with little concurrency – mostly another serial test

[Gantt chart: decay model. Queries 1-10 all start at time 0; concurrency drops as shorter queries finish, until "End".]

Page 16:

Models For Concurrent Workload

• Fixed period:
> Start # sessions of heavy, medium, light in proportion to real users, running queries repeatedly for a fixed time period
> Need to run long enough so longest queries complete

[Gantt chart: fixed period model. Queries 1-10 repeat for the full test period, up to "End".]

Models For Concurrent Workload

• Fixed work:
> Line up a number of heavy, medium and light queries in proportion to real workload and initiate different #’s of sessions to execute the queries
> Need to have enough medium/light queries so heavy queries run with concurrency

[Gantt chart: fixed work model. A fixed set of Queries 1-10 is drained by the worker sessions until "End".]

Page 17:


How Does TdBench Work? Queue Table Usage

• Queue table used to stage work for Worker sessions

• Contributes a few seconds of CPU per hour

• Overhead? … Duration
> Adds .05 seconds between queries for local PC/Server

> .4 to .5 seconds from PC in Cincinnati to San Diego.

• Benefit? Allows adding workers for more concurrency without needing to change scripts

[Diagram: a BTEQ worker issues "Select QueryText from QueryQueue" against the Teradata queue table and receives the next query to execute.]

TdBench Queue Tables:

• QueryQueue
> Default queue of queries for worker sessions
> Workers logon and wait for queries to run
> Control session populates table from QueryHold to release the test
> For fixed period tests, last query in QueryQueue repopulates the QueryQueue

• QueryQueuexxx
> Optional, to separate queues of work for heavy, medium, light queries, or separate queues for different user types

• QueryHold
> Populated by Control session prior to test
> Holds queries for all queues
> Allows quick population or repopulation of queue table(s)
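Teradata queue tables make this pattern simple: the first column must be a Queue Insertion TimeStamp (QITS), and SELECT AND CONSUME blocks until a row arrives, then atomically removes it. A minimal sketch of what a QueryQueue-style table could look like; TdBench's actual DDL and the QueryHold layout may differ:

```sql
-- A FIFO queue table: workers block on SELECT AND CONSUME.
CREATE SET TABLE xxx_Benchmark.QueryQueue, QUEUE
( QITS      TIMESTAMP(6) NOT NULL DEFAULT CURRENT_TIMESTAMP(6),
  QueryText VARCHAR(4000)
);

-- Control session releases the test by populating the queue
-- (assumes QueryHold also carries a QueryText column):
INSERT INTO xxx_Benchmark.QueryQueue (QueryText)
SELECT QueryText FROM xxx_Benchmark.QueryHold;

-- Each worker loops on this: waits for, then consumes, the next query.
SELECT AND CONSUME TOP 1 QueryText FROM xxx_Benchmark.QueryQueue;
```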


Page 18:

Different Queue Table Usages

• Serial Execution:

> 1 Queue, 1 Worker

• Concurrent Option #1

> 1 Queue, Multiple workers

> Use for “Fixed Work”

> For Decay test, use #workers = #queries

• Concurrent Option #2

> Multiple workers, multiple queues

> Use for “Fixed Period”

[Diagram: one worker draining QueryQueue (serial); multiple workers on one QueryQueue (fixed work / decay); separate worker pools on QueryQueueHvy, QueryQueueMed, QueryQueueLte (fixed period).]

Other Test Implementations Supported

• BTEQ Script
> May be simplest conversion of scripts from BI tools

> Difficult to vary parameters for multiple sessions

> Can be used to call a different load utility via .OS command and return to BTEQ for insert/select from staging to core

> Can contain .hang command to delay start of load activity

> Initiated after all of the worker sessions logged on

• Batch Script (Windows .bat file)
> Most flexibility, but also most technical skill to write

> Passed parameters about RunID, Directory of Log path, etc to support coordination with other test outputs.

Page 19:

TdBench Behavior Based On Type Of Script (1 of 2)

• .SQL
> Each line is a complete query ending with a semicolon

> Often an exec of a macro or call to stored procedure

> All lines will be loaded to a queue table and read by Workers

> 4000 character/line limit (arbitrary, but must be a single line)

• .BTQ
> Any valid BTEQ commands or SQL statements

> Can use .OS command to launch processes (e.g. Fastload)

> Logon executed by Query Driver

> Each session runs in its own Window

> Should exec TestStop at end to help mark end of test.

• .BAT
> Passed RunID, Log path, Session count, Logon ID, Query Queue, Repeat count, Seq# of this session

> Should run BTEQ at end to execute TestStop to help mark end of test.

TdBench Behavior Based On Type Of Script (2 of 2) – The .LST File

• Used to run multiple types of scripts or multiple queues

• When used, TDBench only prompts for description of this run and duration

• .LST file tokens:

> 1. Name of a .SQL, .BTQ, or .BAT file. (Specify directory)

> 2. Number of sessions

> 3. User logon name pattern

> 4. Name of the query queue

> 5. Repeat count (Warning: .Repeat only does next SQL command. E.g. .Run is a BTEQ command, not SQL)

• Example:
> scripts\New200Hvy.sql 6 Usr_HVY% QueryQueueHvy 1

> scripts\New200Med.sql 10 Usr_Med% QueryQueueMed 1

> scripts\New200Lte.sql 24 Usr_Tac% QueryQueueTac 1

Page 20:

Directory Structure For Tdbench

TDBenchV5 Directory
  Batch files for setup & execution
  Readme.Doc - Instructions
  Logs Directory
    Run001 Directory - Logs for RunID 1
    Run002 Directory - Logs for RunID 2
    Run …
  Queries Directory
    Optional, 1 query per file; can be converted to macro
  Scripts Directory
    .SQL, .BTQ, .BAT, and .LST files
  Setup Directory
    Contains source for install


Executing And Monitoring The Query Driver

• Use Windows Server for High Performance tests

> Queries < 1 second

> Long running tests where you might get disconnected

• Laptop ok for tests with longer running queries

• In either case, the Control session should execute $R (Rush)

• Look for impact of Query Monitor on Test

> Use Windows task manager to look for high CPU use

> Calculate interval between queries for a single session

• Watch system via Viewpoint, TdManager or PMon

> Watch for significant number of idle worker sessions

Page 21:

Benchmark On 2 Systems: Laptop Screen For 2580, Monitor For 2650

10 Worker Windows + PMon

Page 22:

Monitoring The Test: Comparison Of 2 Machines

[Screenshots: machine #1 reaches max CPU; machine #2 reaches max I/O, with a visible CPU dropoff.]

Watch Test For Causes Of < 100% CPU

[PMon Graph and PMon Resource Usage screenshots: less than 100% CPU traced to skewed I/O (all tables used the same default) and to a node problem.]

Page 23:

Analyzing The Results For A Runid

• Views join captured log information against TestTracking
> Product join to find rows between start/stop timestamps
> Use with RunID=nnn or RunID in (nn1, nn2, …)

• Views:
> RptBenchSummary – Number of statements by error code
- Important for quick validation everything worked ok
> RptTestDetail – execution of each statement within test
- Use for serial tests
> RptTestSummary – # executions & times by query
- Use for concurrency tests
> ResUsage reports – select data within bounds of test +/- 2 min
- RptSpma – SPMA records, node level summary per interval
- RptSawt – Amp worker summary information per interval
- RptSvpr – Vproc summary information per interval
- RptLogonoff – all logon/logoff’s within test period
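The timestamp-bounded join these views imply can be sketched directly; this is a simplified illustration (the shipped RptTestDetail view certainly selects more columns, and the sketch name is invented to avoid colliding with it):

```sql
-- Report rows for a run: every DBQL query whose start time falls
-- inside the run's window. Filter by RunId at query time.
REPLACE VIEW xxx_Benchmark.RptTestDetailSketch AS
SELECT  t.RunId,
        t.TestName,
        q.QueryID,
        q.QueryText,
        q.StartTime,
        q.FirstRespTime,
        q.AMPCPUTime,
        q.TotalIOCount,
        q.ErrorCode
FROM    xxx_Benchmark.TestTracking t,
        DBC.QryLog q                 -- product join, then filtered
WHERE   q.StartTime BETWEEN t.ReleaseExecTime AND t.ActualStopTime;
```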

Select * From TestTracking;

RunId  TestName       StartTime            ReleaseExecTime      ActualStopTime
1      script01       2010-07-16 11:47:35  2010-07-16 11:47:35  2010-07-16 12:09:59
2      script01       2010-07-16 21:51:46  2010-07-16 21:51:46  2010-07-16 22:02:32
3      run_scripts2   2010-07-16 23:40:00  2010-07-16 23:40:23  2010-07-16 23:56:47
4      run_scripts3   2010-07-17 08:53:38  2010-07-17 08:53:38  2010-07-17 09:20:56
5      run_scripts5   2010-07-17 09:29:15  2010-07-17 09:29:16  2010-07-17 10:14:16
6      run_scripts10  2010-07-17 10:22:51  2010-07-17 10:22:51  2010-07-17 12:29:34
7      run_scripts5   2010-07-17 15:14:19  2010-07-17 15:14:19  2010-07-17 15:50:31

SessionCnt  RunTitle / TestDescription / RunNotes
1           Baseline the 2650
1           baseline with new aji
2           Checkout 2 sessions on 2650
3           Concurrent 3 sessions
5           5 concurrent sessions
10          10 session concurrent
5           Rerun 5 sessions with rule set 1: 2a, 2b, 3b, 8a only 2

Page 24:

Select * From RptBenchSummary

RunId  TestName      ErrorCode  StmtGroup         StartTime        Minutes  ExecCnt
1      script01      0          Exec Bench Query  Fri 07/16 11:47  22       19
1      script01      0          Other             Fri 07/16 11:47  22       25
2      script01      0          Exec Bench Query  Fri 07/16 21:51  11       20
2      script01      0          Other             Fri 07/16 21:51  11       1
3      run_scripts2  0          Exec Bench Query  Fri 07/16 23:40  16       38
3      run_scripts2  0          Other             Fri 07/16 23:40  16       52
4      run_scripts3  0          Exec Bench Query  Sat 07/17 08:53  27       57
4      run_scripts3  0          Other             Sat 07/17 08:53  27       75
5      run_scripts5  0          Exec Bench Query  Sat 07/17 09:29  45       95
5      run_scripts5  0          Other             Sat 07/17 09:29  45       125

TotRunSecs  AveResp  CpuTime     RunTitle
1,288.81    67.83    98,534.11   Baseline the 2650
1.12        0.04     1.38        Baseline the 2650
630.17      31.51    68,007.39   baseline with new aji
0.02        0.02     0.01        baseline with new aji
1,873.98    49.32    140,239.38  Checkout 2 sessions on 2650
35.33       0.68     2.72        Checkout 2 sessions on 2650
4,752.97    83.39    214,448.34  Concurrent 3 sessions
4.29        0.06     3.76        Concurrent 3 sessions
12,883.33   135.61   351,837.52  5 concurrent sessions
11.83       0.09     6.92        5 concurrent sessions

Select * From RptTestDetail Where Runid=2

stmt                                                                          RunId  TestName  SessionCnt  RunSecs  StartSecs  NumResultRows  TotalIOCount
update testtracking set ReleaseExecTime = current_timestamp where runid=0002; 2      script01  1           0.02     0.13       0              3
Query1a;                                                                      2      script01  1           13.94    4.39       2,198,113      2,233,291
Query1b;                                                                      2      script01  1           10.65    19.07      428,170        1,552,523
Query2a;                                                                      2      script01  1           190.33   30.32      105,485        23,226,119
Query2b;                                                                      2      script01  1           73.37    221.11     136,315        14,104,267

AMPCPU     ParserCPU  CpuTime    SpoolUsage       CpuSkew  IoSkew  ErrorCode  QueryID
0          0.01       0.01       0                0        0                  163691810215365766
1,085.74   1.32       1,087.06   11,870,501,888   0.06     0.06    0          163691810215365768
985.33     1.26       986.6      11,144,025,088   0.15     0.19    0          163691810215365769
24,236.50  0.7        24,237.20  244,722,140,672  0.07     0.09    0          163691810215365770
5,955.01   1.14       5,956.14   96,679,272,448   0.03     0.02    0          163691810215365771

Page 25:

Select * From RptTestSummary Where Runid = 6

RunId  Stmt      ErrorCode  ExecCnt  SumRunSecs  AveRunSecs  MaxRunSecs  SumCpuTime  CpuPctOfRun
6      QRead     0          209      49.23       0.24        6.54        0.82        0
6      Query1a;  0          10       2,392.95    239.3       447.95      10,490.28   1.58
6      Query1b;  0          10       949.41      94.94       141.01      9,090.46    1.37
6      Query2a;  0          10       16,714.67   1,671.47    1,877.33    237,021.24  35.69
6      Query2b;  0          10       11,125.07   1,112.51    1,580.04    59,932.10   9.02
6      Query3a;  0          10       5,054.93    505.49      893.88      48,974.60   7.37
6      Query3b;  0          10       5,768.90    576.89      978.05      57,502.53   8.66
6      Query4a;  0          10       357.31      35.73       74.8        2,517.04    0.38
6      Query4b;  0          10       1,129.11    112.91      189.94      6,125.42    0.92
6      Query4c;  0          10       761.66      76.17       149.95      11,879.40   1.79
6      Query4d;  0          10       474.23      47.42       115.82      4,520.66    0.68
…

Note impact of read from queue

Executed by remote laptop

Select Runid, TheTime, NodeId, CpuUexec, CpuUserv, CpuIoWait, CpuIdle from RptSpma where RunId = 21;

RunId  TheTime   NodeID  CPUUExec  CPUUServ  CPUIoWait  CPUIdle
21     12:15:00  102     2         25        217        71,756
21     12:15:00  103     2         18        126        71,854
21     12:15:00  104     0         11        115        71,874
21     12:15:00  105     3         20        171        71,806
21     12:15:00  106     1         10        149        71,840
21     12:15:00  107     0         15        132        71,853
21     12:15:00  108     2         13        119        71,866
21     12:15:00  109     1         17        126        71,856
21     12:15:00  110     2         15        138        71,845
21     12:15:30  102     38,219    1,094     1,814      30,873
21     12:15:30  103     36,077    1,105     1,826      32,994
21     12:15:30  104     36,815    1,109     1,871      32,208
21     12:15:30  105     37,180    1,128     1,922      31,794
21     12:15:30  106     36,948    1,070     1,734      32,249
21     12:15:30  107     35,981    1,114     1,868      33,040
21     12:15:30  108     35,461    1,080     1,828      33,635
21     12:15:30  109     34,466    1,090     1,900      34,553

[Excel pivot chart of the query output: stacked Sum of CPUUExec, CPUUServ, CPUIoWait, and CPUIdle per 30-second interval (TheTime 12:14:30 to 12:56:30), y-axis 0 to 700,000.]

Page 26:

Benchmark Wrap-up – Save Your Results

• Tables provided in xxx_Benchmark for History:
> Hist_DBQLExplainTbl, Hist_DBQLObjTbl, Hist_DBQLogTbl,
> Hist_DBQLSqlTbl, Hist_DBQLStepTbl, Hist_EventLog,
> Hist_ResUsageSawt, Hist_ResUsageSpma, Hist_ResUsageSvpr

• Stored procedures supporting history:
> HistSetCopyRuns

- Copies all new data from DBC within RunID’s

> HistSetCopyAll

- Copies all new data from DBC

> HistSetHist

- Changes views in xxx_Benchmark to point to history tables in xxx_Benchmark

> HistSetDBC

- Changes views in xxx_Benchmark to point to active tables in DBC.
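What HistSetCopyRuns does can be approximated with a plain INSERT … SELECT per DBQL table, bounded by a run's timestamps. A simplified sketch, assuming the Hist_ tables mirror the DBC base-table layout; the real procedure presumably also guards against recopying rows:

```sql
-- Copy one run's DBQL rows into the history table before DBQL is purged.
INSERT INTO xxx_Benchmark.Hist_DBQLogTbl
SELECT q.*
FROM   DBC.DBQLogTbl q,
       xxx_Benchmark.TestTracking t
WHERE  t.RunId = 21                        -- example run
AND    q.StartTime BETWEEN t.ReleaseExecTime AND t.ActualStopTime;
```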

Concluding Thoughts “Building Your Own Benchmark Center”

• “If you can not measure it, you can not improve it.”
-- Lord Kelvin, Mathematician, Physicist, 1824 - 1907

• “You don’t have to be a rocket scientist to do analysis. Find a simple tool and use it to the extent you need it.”
-- Randall Parman, Swamped DBA

• “There are two possible outcomes: if the result confirms the hypothesis, then you’ve made a measurement. If the result is contrary to the hypothesis, then you’ve made a discovery.”
-- Enrico Fermi, Nuclear Physicist, 1901 - 1954

• “If you don’t measure the right things, decisions may be wrong, you’ll have no basis for finding problems, and worse yet, if improvements happen, they will go unrecognized.”
-- Doug Ebel, Database Geek, 1950 - 20??

Page 27: