TD12 Enhancements


Managed Services- TD12 Features Guide

DATABASE TUNING TD12 FEATURES GUIDE

Prepared by GCC India (Mumbai) ADMIN-COE MS Team

Document Type: Database Tuning TD12 Features Guide
Date Created: December 1, 2008
Current Version: Version 1.1
Date Last Updated: January 19, 2009
Authors: ADMIN COE MS Team
Approved By: Manish Korgaonkar
Approved Date: January 19, 2009
Prepared By: GCC India (Mumbai) ADMIN COE MS Team


Preface

Purpose: This document walks through the enhancements in V2R12.0.0 as compared to V2R6.1.X.

Audience: The primary audience includes database and system administrators and application developers.

Prerequisites: You should be familiar with Teradata V2R6.1.X features.


Table of Contents

1 Introduction
2 Performance Enhancements
  2.1 Collect Statistics Improvements
    2.1.1 Better Cardinality Estimate
    2.1.2 Improved AMP-level Statistics
  2.2 Parameterized Statement Caching
    2.2.1 Where This Feature Applies
    2.2.2 Where This Feature Does Not Apply
    2.2.3 How This Feature Works
3 Database Enhancements
  3.1 Restartable Scandisk
    3.1.1 Usage
  3.2 Check Table Enhancements
    3.2.1 Usage of Check Table
    3.2.2 Differences among Checking Levels
    3.2.3 New Features in Teradata 12.0
    3.2.4 Checktable Checks Compressed Values
  3.3 Software Event Log
4 Security Enhancements
  4.1 Password Enhancements
    4.1.1 Password Enhancements
    4.1.2 How Does It Work?
    4.1.3 Data Dictionary Modifications
5 Enhancements to Utilities
  5.1 Normalized Resusage/DBQL Data and Ampusage Views for Coexistence Systems
    5.1.1 Resusage Data
    5.1.2 DBQL Data
    5.1.3 Ampusage View
  5.2 Tdpkgrm: New Option to Remove All Non-Current TD Packages
  5.3 MultiTool: New DIP Option
6 Backup and Recovery Enhancements
  6.1 Online Archive
    6.1.1 How Online Archive Works
    6.1.2 Usage
    6.1.3 Backup and Restore Scenarios
7 Teradata Tools and Utilities
  7.1 Teradata Parallel Transporter Updates
    7.1.1 Unicode UTF16 Support
    7.1.2 Array Support in Stream Operator
    7.1.3 ANSI and SQL DATE/TIME Data Types
    7.1.4 Delimiters
    7.1.5 Continue Job after an Error
    7.1.6 Checkpoint Files
    7.1.7 New Attributes Available in the Stream Operator
    7.1.8 Case-Insensitive Column Names
  7.2 Teradata Parallel Transporter API Updates
    7.2.1 Block Mode Exporting Using Export Operator
    7.2.2 Stream Operator Returns Number of Rows Inserted, Updated, and Deleted
    7.2.3 Array Support
    7.2.4 Support for UTF-16
    7.2.5 TPT API Can Now Be Called from an XSP
8 TASM
  8.1 Query Banding
    8.1.1 How to Set a Query Band
    8.1.2 Query Band and Workload Management
    8.1.3 Using Query Banding to Improve Resource Accounting
    8.1.4 Using Both the Session and Transaction Query Bands
  8.2 State Matrix
    8.2.1 Managing the System through a State Matrix
    8.2.2 System Conditions
    8.2.3 Operating Environments
    8.2.4 State
    8.2.5 Events
    8.2.6 Periods
  8.3 Global Exception / Multiple Exception
    8.3.1 Global Exception Directive
    8.3.2 Multiple Global Exception Directive
  8.4 Utility Management
9 Usability Features
  9.1 Complex Error Handling
  9.2 Multilevel Partitioned Primary Index
    9.2.1 Features of MLPPI
    9.2.2 MLPPI Table Joins and Optimizer Join Plans
  9.3 Schmon Enhancements
    9.3.1 Comparison of Options Available in TD6.1 and TD12.0
    9.3.2 Delay Modifier (-d)
    9.3.3 Display PG Usage
  9.4 Enhanced Explain Plan Details
  9.5 DBC Indexes Contains Join Index ID
  9.6 List All Global Temporary Tables
  9.7 ANSI Merge


1 Introduction

Teradata's mission is to provide an integrated, optimized, and extensible enterprise data warehouse solution to power better, faster decisions. Teradata 12.0 is a highly integrated solution that continues to advance Teradata further along in this mission. Teradata 12.0:
- Extends its lead in enterprise intelligence by supporting both strategic and operational intelligence.
- Continues to be the only true choice for concurrently using detailed data in operational applications, while using business intelligence and deep analytics to direct the business.
- Strengthens business logic processing capability, high availability, and performance of the EDW and ADW foundations.
- Enhances query performance.
- Advances its enterprise fit characteristics, including partner friendliness and ease of enterprise integration.
- Improves availability, supportability, and security.


2 Performance Enhancements

2.1 Collect Statistics Improvements

Internal enhancements to the way statistics are collected will capture a larger quantity of data demographic information with more accuracy, so that the Optimizer can create better query execution plans.

Description: This feature provides the following benefits:
- Improved decision support (DSS) query performance as a result of improved query execution plans.
- Query performance consistency with new releases of Teradata.
- Faster results for extremely complex, highly analytical DSS queries.

The statistics collection improvements allow the Optimizer to better estimate the cardinality (number of elements) in the data in the following ways:
- Increased number of statistics intervals
- Improved statistics collection for multi-column NULL values
- Improved AMP-level statistics


2.1.1 Better Cardinality Estimate

2.1.1.1 Increased Number of Statistics Intervals

Statistics are stored as a histogram (collection of occurrences of values), and the more granular the statistics, the better the query execution plans can be. In Teradata 12, the maximum number of intervals has been increased from 100 to 200, providing the Optimizer with a more detailed picture of the actual column data distribution for estimating purposes.
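As a quick way to see the interval count on a given system, statistics can be collected and then inspected with HELP STATISTICS. This is a minimal sketch; the database, table, and column names are illustrative assumptions:

COLLECT STATISTICS ON retail.item COLUMN (l_receiptdate);
HELP STATISTICS retail.item COLUMN (l_receiptdate);
/* The detailed output lists one row per interval: up to 100 intervals on TD 6.1, up to 200 on TD 12.0. */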

In TD6.1: Screen shot:


TD 6.1 Collect stats in 100 Intervals

In TD 6.1, statistics are collected in only 100 intervals, as displayed in the screen shot above.


TD12.0: Screen Shot:

T12 collect stats in 200 Intervals

In TD 12.0, statistics are collected in 200 intervals, as displayed in the screen shot above.

Advantages: Because statistics are stored as a histogram and the maximum number of intervals has been increased from 100 to 200, the Optimizer has a more detailed picture of the actual column data distribution for estimating purposes. Certain types of queries will experience improved performance, specifically:
- Queries involving a JOIN operation between tables
- Queries with conditions that have many distinct values
- Queries with a large number of IN-list values


2.1.1.2 Improved Statistics Collection for Multi-Column NULL Values

Prior to Teradata 12, the system counted rows with at least one NULL value as a NULL row, even if some columns in a row did have values. This improvement more accurately identifies and counts the following:
- All-NULL rows in multi-column statistics and multi-column index statistics (used by the Optimizer to detect skew)
- Partially NULL rows in multi-column statistics and multi-column index statistics (used by the Optimizer to estimate single-table selectivity)
- Unique rows in a multi-column scheme (used by the Optimizer to estimate the number of distinct hash values during a redistribution operation)

Example: To see the improved statistics counts, suppose statistics are collected on the table below:

Teradata 6.1: Statistics would indicate four NULL rows and two rows with unique values.
Teradata 12.0 and above: The statistics more accurately indicate two all-NULL rows and four rows with unique values.
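A minimal sketch of the kind of comparison the attached scripts demonstrate; the database, table, and column names here are illustrative assumptions, not the exact contents of the attached files:

CREATE TABLE demo.null_stats_test (col1 INTEGER, col2 INTEGER);
INSERT INTO demo.null_stats_test VALUES (NULL, NULL);   /* all-NULL row        */
INSERT INTO demo.null_stats_test VALUES (NULL, NULL);   /* all-NULL row        */
INSERT INTO demo.null_stats_test VALUES (1, NULL);      /* partially NULL row  */
INSERT INTO demo.null_stats_test VALUES (NULL, 2);      /* partially NULL row  */
INSERT INTO demo.null_stats_test VALUES (3, 4);         /* fully populated row */
INSERT INTO demo.null_stats_test VALUES (5, 6);         /* fully populated row */
COLLECT STATISTICS ON demo.null_stats_test COLUMN (col1, col2);
HELP STATISTICS demo.null_stats_test COLUMN (col1, col2);

On TD 6.1 the multi-column statistics report four NULL rows (any row containing a NULL) and two rows with unique values; on TD 12.0 the same data reports two all-NULL rows and four rows with unique values.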

For a comparison between TD 6.1 and TD 12.0, please use the scripts attached below:

CollectStats_All_NULLS.txt

CollectStats_UNIQUE_Values.txt

Collect Stats All Nulls

Collect Stats Unique Values


The screen shots below show the comparison between TD 6.1 and TD 12.0.

Number of All NULLS: In TD 6.1:

In the TD 6.1 result set, only a Number of Nulls column is reported.


Number of All NULLS: In TD 12.0:

In the TD 12.0 result set, a Number of All Nulls column is added.


Number of Rows with Unique Values Demo: In TD 6.1:

In TD 6.1 Collect Stats shows 2 unique values


Number of Rows with Unique Values Demo: In TD 12.0:

In TD 12.0 Collect Stats shows 4 unique values

Advantages: With the more refined count of all-NULL rows, the Optimizer's join plans are improved, especially for large tables where a significant number of rows contain NULLs. In addition, any data redistribution effort is more accurately estimated.

This improvement does not change procedures for collecting or dropping statistics or any associated timing for collecting statistics.


2.1.2 Improved AMP-level statistics

Prior to Teradata 12, average rows-per-value (RPV) statistics were obtained using a probability model, which often underestimated the actual rows. In Teradata 12, this measure is calculated exactly using the following formula:
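The formula appeared as an image in the original document and did not survive extraction. As a reconstruction, the exact calculation is simply the standard definition of average rows per value, computed directly from the collected demographics rather than from a probability model:

    Average RPV (per AMP) = number of rows on the AMP / number of distinct values on the AMP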

Advantages: The new RPV calculation formula makes the cost estimates for joins much more accurate. This improvement does not change procedures for collecting or dropping statistics or any associated timing for collecting statistics.

For a comparison between TD 6.1 and TD 12.0, please use the script attached below:

Improved Amp Level Statistics.txt


The screen shots below show the comparison between TD 6.1 and TD 12.0.

Average AMP RPV: In TD 6.1:

In the TD 6.1 result set, only the following columns are reported.


In TD 12.0:

In the TD 12.0 result set, an Average AMP RPV column is added.

2.2 Parameterized Statement Caching

This internal feature improves the logic that caches optimized plans for parameterized queries (SQL statements that include variables). In previous releases, the Optimizer did not evaluate the value of the parameter (the actual USING, CURRENT_DATE, or DATE value) when creating the query plan for a parameterized request. If the same request was resubmitted with different parameters, the old cached plan was used, often generating suboptimal plans. Query plans that remain the same regardless of the parameter values are called generic plans. With Teradata 12, the Optimizer first determines whether the request would benefit if the parameter values are evaluated. If so, then the Optimizer will include the parameter values when optimizing the request. Plans in which the Optimizer peeks at parameter values and generates a plan optimized for those values are called specific plans. For example, the Optimizer considers the user-supplied product_code value when generating a plan for the following request:

USING (x INT)
SELECT * FROM SalesHistory
WHERE product_code = :x OR store_number = 56;

2.2.1 Where this feature applies

Types of queries that are optimized with this feature include those in which:
- The query involves multiple tables.
- The query involves a single table and:
  - The ORed condition is on a UPI, USI, NUPI, or PPI column
  - The condition (either AND or OR) is on a NUSI column

With this feature, performance improvements have been observed in the following situations:
- Partition elimination
- Sparse Join Index access
- NUSI access
- Join plans

2.2.2 Where this feature does not apply

Note that this feature does not apply to requests involving an equality condition on a:
- Primary Index (NUPI or UPI)
- Unique Secondary Index (USI)
- Partitioned Primary Index (PPI) column

These queries are already highly optimized, and any evaluation of the parameter value is redundant and/or would not change the query plan.


2.2.3 How this feature works

This feature works over time, as requests are submitted to the Teradata Database, in the following manner:
- The first time a parameterized request is submitted, the system determines whether its parameter values should be evaluated in the first place.
- If so, the system peeks at the parameter values, and the Optimizer generates a specific query plan. If the parsing cost of this specific plan is small enough, then all subsequent submissions of this request will always result in specific plans.
- If the request hasn't already been marked as a specific-always request, the Optimizer will generate a generic plan the next time that request is submitted. The system compares the run times of the specific and generic plans, and then decides whether, from that point on:
  1) To execute the generic cached plan each time that request is submitted, or
  2) To generate a specific plan each time based on the new user-supplied values.

Attached below is a PPT describing how Parameterized Query Caching works:

Parameterized Query Caching.ppt

The CURRENT_DATE variable will be resolved for all queries, parameterized or otherwise, and replaced with the actual date prior to optimization. This will help in generating a more optimal plan in cases of partition elimination, sparse join indexes, or NUSIs that are based on CURRENT_DATE. For a parameterized request that uses CURRENT_DATE, a generic plan with CURRENT_DATE resolved will be referred to as DateSpecific Generic Plan. Similarly for a parameterized request that uses the CURRENT_DATE, a specific plan with CURRENT_DATE resolved will be referred to as DateSpecific Specific Plan.

The explain text, for queries for which CURRENT_DATE has been resolved, will show the resolved date in TD 12.0.


In TD 6.1:

Explain select * from retail.item where l_receiptdate = current_date;

Explanation
1) First, we lock a distinct retail."pseudo table" for read on a RowHash to prevent global deadlock for retail.item.
2) Next, we lock retail.item for read.
3) We do an all-AMPs RETRIEVE step from retail.item by way of an all-rows scan with a condition of ("retail.item.L_RECEIPTDATE = DATE") into Spool 1 (group_amps), which is built locally on the AMPs. The input table will not be cached in memory, but it is eligible for synchronized scanning. The size of Spool 1 is estimated with no confidence to be 6,018 rows. The estimated time for this step is 0.58 seconds.
4) Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request.
-> The contents of Spool 1 are sent back to the user as the result of statement 1. The total estimated time is 0.58 seconds.

In TD 12.0:

Explain select * from retail.item where l_receiptdate = current_date;

Explanation
1) First, we lock a distinct retail."pseudo table" for read on a RowHash to prevent global deadlock for retail.item.
2) Next, we lock retail.item for read.
3) We do an all-AMPs RETRIEVE step from retail.item by way of an all-rows scan with a condition of ("retail.item.L_RECEIPTDATE = DATE '2008-12-01'") into Spool 1 (group_amps), which is built locally on the AMPs. The input table will not be cached in memory, but it is eligible for synchronized scanning. The size of Spool 1 is estimated with no confidence to be 6,018 rows (806,412 bytes). The estimated time for this step is 0.59 seconds.
4) Finally, we send out an END TRANSACTION step to all AMPs involved in processing the request.
-> The contents of Spool 1 are sent back to the user as the result of statement 1. The total estimated time is 0.59 seconds.


3 Database Enhancements

3.1 Restartable Scandisk

3.1.1 Usage

Scandisk is a diagnostic tool designed to check for inconsistencies between key file system data structures such as the Master Index, Cylinder Index, and the Data Blocks. As an administrator, you can perform this procedure as preventive maintenance to validate the file system, as part of other maintenance procedures, or when users report file system problems. The SCANDISK command:
- Verifies that data block content matches the data descriptor.
- Checks that all sectors are allocated to one and only one of the bad sector list, the free sector list, or a data block.
- Ensures that the continuation bits are flagged correctly.

In TD 12.0, with the Restartable Scandisk feature, the Scandisk utility can be restarted either from a defined point or from the last table scanned. Restartability allows you to halt the Scandisk process during times of extremely heavy system use and then restart it at some later time (e.g. off-peak hours).

Syntax:

SCANDISK TAB[L[E]] starting_tableid [ starting_rowid ] [ TO ending_tableid [ ending_rowid ] ]
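A sketch of the restart workflow from a console session; the table ID is left as a placeholder because in practice you would supply the ID reported when the earlier scan was halted:

scandisk                              (full scan started earlier, halted during peak load)
scandisk table <starting_tableid>     (resume later, beginning at the table where the scan stopped)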


The comparison of the options available in the two versions can be seen by issuing the Help scandisk command (screen shots shown below).

In TD 6.1:

Syntax: SCANDISK [ /dispopt ] [ { DB | CI | MI | FREECIS } ] [ FIX ] [ /dispopt ]


In TD12:

Syntax: SCANDISK [ /dispopt ] [ { DB | CI | MI | FREECIS | WMI | WCI | WDB } ] [ /dispopt ] [ INQUIRE ] [ { NOCR | CR } ]

Where:
CI -- Scans the MI and CIs. If the scope is AMP or all tables, rather than selected tables, the free CIs are also scanned.
DB -- Scans the MI, CIs, and DBs. This is the default for the normal file system, which can be overridden by the CI, MI, or FREECIS options. If the scope is AMP or all tables, rather than selected tables, the free CIs are also scanned.
FREECIS -- Scans the free CIs only. This option also detects missing WAL and Depot cylinders.
MI -- Scans the MI only.
WCI -- Scans the WMI and WCIs.
WDB -- Scans the WMI, WCIs, and WDBs. This is the default for the WAL log, which can be overridden by the WCI or WMI options.
WMI -- Scans the WMI only.
INQUIRE -- Reports SCANDISK progress as a percentage of total time to completion and displays the number of errors encountered so far.
Interval -- An integer that defines the time interval, in seconds, at which to automatically display SCANDISK progress.
CR -- Specifies to use cylinder reads.
NOCR -- Specifies to use regular data block preloads instead of cylinder reads.


CR option: When we specify the SCANDISK CR option, the utility uses cylinder reads. This option is not supported in TD 6.1 (screen shot shown below).

In TD6.1:


In TD12.0:

WMI, WCI, WDB options:

Scandisk WMI: When we specify the WMI option, the utility scans the WMI only.
Scandisk WCI: When we specify the WCI option, the utility scans the WMI and WCIs.
Scandisk WDB: When we specify the WDB option, the utility scans the WMI, WCIs, and WDBs.

None of these three options (WMI, WCI, WDB) is supported in TD 6.1; the TD 6.1 screen shot below shows the error returned when they are used.


In TD6.1:

In TD12.0: Scandisk wci:


Scandisk wdb:

Scandisk wmi:


3.2 Check table Enhancements

Check Table is a console-startable utility and a diagnostic tool for Teradata DBS software. Check Table checks for inconsistencies among internal data structures, such as table headers, row identifiers and secondary indexes. Although Check Table identifies and isolates data inconsistencies and corruption, it cannot repair inconsistencies or data corruption.

3.2.1 Usage of Check table

- Compares primary and fallback copies of data and hashed secondary-index sub tables.
- Compares index sub tables to the corresponding data sub tables.
- Compares table headers across AMPs and compares table headers to information in the Data Dictionary.
- Validates that the set of tables found on all AMPs matches the set found in the Data Dictionary. This check is done only if a check of database DBC or ALL TABLES is specified.

3.2.2 Differences among Checking Levels

Three levels of checking are available. The differences among them are as follows:
- "LEVEL ONE" compares only counts of data and index sub tables.
- "LEVEL TWO" compares lists of index ids and data row ids.
- "LEVEL THREE" compares entire rows.
In addition, "LEVEL PENDINGOP" finds all the tables on which there are pending operations (e.g., MultiLoad, FastLoad, Table Rebuild).


3.2.3 New Features in Teradata 12.0

The following is new in Teradata 12.0: Check table now checks compressed values.

3.2.4 Checktable checks compressed values

In TD 6.1, as the screen shot below shows, requesting the compressed-value check generates a syntax error:

Syntax error generated in TD 6.1


In TD 12.0: Checktable checks compressed values.

Usage: The feature can be invoked from the Teradata Manager Checktable Utility menu item.

Compresscheck Syntax: Check at compresscheck;

CheckTable Syntax with TD12 Compresscheck


3.3 Software Event Log

During log system processing in Teradata Database 12.0, all Teradata messages are captured in the software event log so that all messages are available in one place.

Users will have the availability of:
- Additional diagnostic information.
- One-point access to a complete log repository for all sources.
- Querying the log repository.
- Common repository data across all platforms.
DBC.SW_EVENT_LOG can be archived and stored across different machines/platforms. It is a table.
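Because DBC.SW_EVENT_LOG is an ordinary dictionary table, it can be queried directly. A minimal sketch; SELECT * is used deliberately so that no specific column names are assumed:

SELECT * FROM DBC.SW_EVENT_LOG SAMPLE 100;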


4 Security Enhancements

4.1 Password Enhancements

4.1.1 Password Enhancements

TD 6.1 supported two forms of password encryption from previous releases, namely DES and SHA-256 truncated to 27. (This support will continue.) Teradata Database 12.0 includes the following password enhancements:
- Full implementation of 32-byte Secure Hashing Algorithm (SHA-256) encrypted passwords.
- A user-customizable list of words restricted for password use.

All passwords created on old releases will continue to work and will be changed to full SHA-256 encryption when next modified. The restricted passwords feature includes the new column, PasswordRestrictWords, in the table DBC.SysSecDefaults, having the following possible single-character values:
- N, n = Do not restrict any words from being contained within a password string. This is the default.
- Y, y = Restrict any word (case independent) that is listed in DBC.PasswordRestrictions from being a significant part of a password string.

Default Value: The parameter PasswordMaxChar in DBC.SysSecDefaults sets the maximum number of characters allowed in a valid password; its default value is 30. The screen shots below show the DBC.SysSecDefaults table, illustrating that the PasswordRestrictWords column does not exist in TD 6.1.


In TD6.1:

In TD12.0:

Note: The highlighted column is added for password restrictions


How to enable this feature: This feature can be enabled in two ways:
- System-wide, by using the new column, PasswordRestrictWords, in the table DBC.SysSecDefaults.
- By user profile, using the new CREATE/MODIFY PROFILE syntax clause RESTRICTWORDS = 'Y' | 'N'. The default is N.

The profile password attribute has precedence over the DBC.SysSecDefaults attribute.
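A minimal sketch of both methods; the restricted word and profile name are illustrative assumptions, and sites may prefer to load the restriction list through their own procedures:

/* Add a word to the system-wide restricted list (hypothetical word). */
INSERT INTO DBC.PasswordRestrictions (RestrictedWords) VALUES ('COMPANYNAME');

/* Require restricted-word checking for users assigned to a particular profile (hypothetical profile). */
MODIFY PROFILE finance_users AS PASSWORD = (RESTRICTWORDS = 'Y');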

4.1.2 How does it Work?

The following procedure is used to determine if a restricted word is within a password:
1. Remove all ASCII numbers and ASCII special characters from the beginning and end of the password string.
2. Use the resulting string (called StrippedPassword) in the query:

SELECT * FROM DBC.PasswordRestrictions
WHERE UPPER (RestrictedWords) = UPPER (StrippedPassword);

3. If there is a match, reject the password.

4.1.3 Data Dictionary Modifications

- DBC.PasswordRestrictions: A new system table with a single column, RestrictedWords.
- DBC.RestrictedWords: A new view, created via DIP scripts, for access to the system table DBC.PasswordRestrictions. This view will not have PUBLIC access, even for SELECT.
- DBC.SysSecDefaults: Contains one new field, PasswordRestrictWords. The default value for this field is N.


5 Enhancements to Utilities

5.1 Normalized Resusage/DBQL data and Ampusage views for coexistence systems

Teradata Database 12.0 provides normalized CPU time (which was not present in TD 6.1) in Resusage data, DBQL data, and the Ampusage view, providing more accurate performance statistics for mixed-node systems, particularly in the areas of CPU skewing and capacity planning. This feature adds the following fields to the ResUsageSpma table:
- CPUIdleNorm: Normalized idle time.
- CPUIOWaitNorm: Normalized time idle and waiting for I/O completion.
- CPUUServNorm: Normalized user service time.
- CPUUExecNorm: Normalized user execution time.
- NodeNormFactor: Per-node normalization factor.

To compare the columns from the ResUsageSpma table in TD6.1 and TD12.0, please find the documents attached below:

dbcResusagespmaTD61.rtf

dbcResusagespmaTD12.rtf


5.1.1 Resusage Data

Shows a normalized view of CPU performance across all the nodes in a coexistence system. This is useful in viewing the balance of the system.

5.1.2 DBQL Data

Logs normalized CPU data for DBQL Detail, STEPINFO, SUMMARY, and THRESHOLD logging. The following columns were not present in TD 6.1.

DBC.DBQLogTbl and DBC.QRYLOG view:
- AMPCPUTimeNorm: Normalized AMP CPU time for coexistence systems.
- MaxAMPCPUTimeNorm: Normalized maximum CPU time for an AMP.
- ParserCPUTimeNorm: Normalized parser CPU time for coexistence systems.
- MaxCPUAmpNumberNorm: Number of the AMP with the maximum normalized CPU time.
- MinAmpCPUTimeNorm: Normalized minimum CPU time for an AMP.
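Once DBQL detail logging is active, the normalized columns can be compared with the raw values to highlight CPU skew on a coexistence system. A minimal sketch using the view and columns named above:

SELECT QueryID, AMPCPUTime, AMPCPUTimeNorm, MaxAMPCPUTimeNorm
FROM DBC.QryLog
ORDER BY AMPCPUTimeNorm DESC;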

To compare the columns from DBQLogTbl and QRYLOG tables from TD6.1 and TD12.0, please find the documents attached below:

dbqlogtbl_TD61.rtf

dbqlogtblTD12.rtf

qrylogTD61.rtf

qrylogTD12.rtf

DBC.DBQLStepTbl and DBC.QRYLOGSTEPS view:
- CPUtimeNorm: Normalized AMP CPU time for coexistence systems.
- MaxAmpCPUTimeNorm: Normalized maximum CPU time for an AMP.
- MaxCPUAmpNumberNorm: Number of the AMP with the maximum normalized CPU time.
- MinAmpCPUTimeNorm: Normalized minimum CPU time for an AMP.


To compare the columns from DBQLStepTbl and QRYLOGSteps tables from TD6.1 and TD12.0, please find the documents attached below:

DBQLStepTbl_TD61.rtf

dbqlsteptblTD12.rtf

QRYLogSteps_TD61.rtf

qrylogstepsTD12.rtf

DBC.DBQLSummaryTbl and DBC.QRYLOGSUMMARY view:
- AMPCPUTimeNorm: Normalized AMP CPU time.
- ParserCPUTimeNorm: Normalized CPU time to parse queries.

To compare the columns from DBQLSummaryTbl and QRYLOGSummary tables from TD6.1 and TD12.0 , please find the documents attached below:

DBQLSummaryTbl_TD61.rtf

dbqlsummarytblTD12.rtf

QRYLogSummary_TD61.rtf

qrylogsummaryTD12.rtf

5.1.3 Ampusage View

The AMPUsage view shows a new normalized CPU time to display more accurate CPU statistics. In a coexistence system, AMPs running on faster nodes process data in less CPU time than AMPs running on nodes with slower CPUs, even though they are processing the same amount of data. The CPU time for different models therefore needs to be adjusted based on the relative CPU speed; this feature provides the normalized statistics. To compare the columns from the AMPUSAGE view in TD 6.1 and TD 12.0, please find the documents attached below:

ampusage_TD61.rtf

ampusageTD12.rtf


This feature adds the following field to the Acctg table:
- CPUNorm: Normalized AMP CPU seconds used by the user and account.

CPUNorm = CPU * Scaling Factor, where the scaling factor is gathered from tosgetpma().

5.2 Tdpkgrm: New option to remove all non-current TD packages

Tdpkgrm (Teradata Package) is used to remove non-current packages. Prior to Teradata 12.0, you had to manually remove each non-current package, which was time consuming.

New option to remove all non-current Teradata packages: Teradata 12.0 adds an option to tdpkgrm that allows you to remove all non-current Teradata packages of all components at once with the command line option -a, as shown below:

$ tdpkgrm -a

The screen shots below show that in TD 6.1 it was not possible to remove all non-current Teradata packages at once, because the -a command line option is not supported there.


The screen shots below show the comparison between TD 6.1 and TD 12.0.

In TD6.1:

In TD12.0:

If there are no non-current packages, then running tdpkgrm -a gives the message "No Teradata Software non current version available for removal".


5.3 MultiTool: New DIP option

A new DIP option, DIPPWD (Password Restrictions), has been added to Teradata Database 12.0 as part of the password enhancement feature; it was not available in TD 6.1 (screen shots shown below). This option allows the DBA to create the list of restricted words that are not allowed in new or modified passwords.

In TD6.1:


TD12 (Red arrow shows the new options in DIP)

DIPSQLJ: The SQLJ database and its views are used by the system to manage JAR files that implement Java external stored procedures. The DIP script used to create the SQLJ database and its objects is called DIPSQLJ. This script is run as part of DIPALL.

DIPDEM: Loads tables, stored procedures, and a UDF that enable propagation, backup, and recovery of database extensibility mechanisms (DEMs). DEMs include stored procedures, UDFs, and UDTs that are distributed as packages, which can be used to extend the capabilities of Teradata Database.

Note: DIPSQLJ and DIPDEM were supported from TD 6.2.


6 Backup and Recovery Enhancements

6.1 Online Archive

Online archive allows the archival of a running database; that is, a database (or tables within a database) can be archived in conjunction with concurrently executing update transactions for the tables in the database. Transactional consistency is maintained by tracking any changes to a table in a log such that changes applied to the table during the archive can be rolled back to the transactional consistency point after the restore.

6.1.1 How Online Archive Works

When online archive is initiated on either a table or a database, the system creates and maintains a separate log subtable which contains before-image change rows for each update applied to the table, instead of using permanent journaling. These change rows are stored with the archive data, so that the table can be rolled back to a consistent point in the event of a restore. These steps are performed automatically by the system. The user does not have to do anything other than specify that the online archive feature should be used.

Scenario: how the online feature works. We update a column on a table while that table is being backed up, which is possible in TD 12. A table named b_t was created in the AU database with a large number of records; in our scenario we have taken around 2 million records. The scripts below create the AU.b_t table and drive the scenario.

STEP 1: Scripts required for ONLINE backup, update, and restore -

arcTD12data.txt

update_qry.txt

restore.txt

STEP 2: Note the data that you are going to update while taking the backup -

DataBeforeBackup.JPG

STEP 3-A: Start the online backup using the script. STEP 3-B: As soon as archiving of the table has started, run the update script (update_qry.txt).


Following is the screen shot for STEP 3:

arcmain_update.JPG

STEP 4: Wait for the completion of both the backup and the update. Make sure that the Online Log Info output contains the data highlighted in the screen shot below.

OnlineLogInfo.JPG

The three lines displayed indicate that the table was archived online, the consistency point for that table (i.e. when online logging was started), and how many log bytes and rows were archived (indicating the amount of change in the table during the online archive). These lines are also displayed during a restore, copy, or analyze of the archive, as an indication that the archive was done online.

STEP 5: Check the column data in the database after taking the backup; the following is the screen shot:

DataAfterBackup.JPG

STEP 6: Delete all the records in the table before the restore.

DeleteDataAfterBackup.JPG

STEP 7: Restore the table using the restore script; the following is the screen shot:

Restore_Snapshot.JPG

STEP 8: Check the data after the restore; the following is the screen shot:

DataAfterRestore.JPG


6.1.2 Usage

6.1.2.1 DROP and DELETE

Dropping a table on which online archive logging is active is only allowed if the table is not being archived. It is possible to delete a database that already has online archive logging initiated on the database or on some tables in the database. DELETE DATABASE is not possible during the archive.

6.1.2.2 LOGGING ONLINE ARCHIVE ON Statement

The LOGGING ONLINE ARCHIVE ON statement is defined as:

LOGGING ONLINE ARCHIVE ON FOR databasename | databasename.tablename [[, {databasename | databasename.tablename} ] . . . ];

To execute the LOGGING ONLINE ARCHIVE ON statement, the user name specified in the logon statement must have one of the following privileges:
- The Archive privilege on the database or table that is being logged. The privilege may be granted to the user or to an active role for the user.
- Ownership of the database or table.
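For example, following the syntax above, online archive logging for the scenario table can be started with:

LOGGING ONLINE ARCHIVE ON FOR AU.b_t;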

6.1.2.3 LOGGING ONLINE ARCHIVE OFF Statement

The LOGGING ONLINE ARCHIVE OFF statement is defined as:

LOGGING ONLINE ARCHIVE OFF FOR databasename | databasename.tablename [[, {databasename | databasename.tablename} ] . . . ] [, OVERRIDE];

This command deletes the log subtables associated with the object. The OVERRIDE option allows online archive logging to be stopped by someone other than the user who set it. To execute the LOGGING ONLINE ARCHIVE OFF statement, the user name specified in the logon statement must have one of the following privileges:
- The Archive privilege on the database or table that is being logged. The privilege may be granted to the user or to an active role for the user.
- Ownership of the database or table.


6.1.2.4 Archive Statement

An ARCHIVE statement can start an online archive with the ONLINE option without being preceded by a LOGGING ONLINE ARCHIVE ON statement. This allows you to start the online archive immediately; in this case, online archive logging is started implicitly, and ARCMAIN starts online archive logging on the specified objects before it starts archiving. The ARC ARCHIVE/DUMP statement has been extended to support the online archive. The SQL DUMP statement doesn't support the new ONLINE option. The online archive feature adds the options ONLINE and KEEP LOGGING to the syntax:

ARCHIVE DATA TABLES . . .
  [, ONLINE] [, KEEP LOGGING] [, RELEASE LOCK ] [, INDEXES ] [, ABORT ]
  [, USE [ GROUP ] READ LOCK ] [, NONEMPTY DATABASE[S] ],
  FILE = name [, FILE = name];
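A minimal ARCMAIN job sketch for the scenario table, using the ONLINE option shown above; the logon string and archive file name are illustrative assumptions:

LOGON tdpid/dbadmin,dbapassword;
ARCHIVE DATA TABLES (AU.b_t), ONLINE, RELEASE LOCK, FILE = ARCHIVE1;
LOGOFF;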

6.1.3 Backup and Restore Scenarios

Following are some backup scenarios.

6.1.3.1 Database Level Backup

6.1.3.1.1 All Databases Backup (DBC Backup)

all-db_TD12.txt

all-db_TD12_log.txt

6.1.3.1.2 Single Database Backup

single-db_TD12.txt

single-db_TD12_log.txt

6.1.3.1.3 Single Database Backup with EXCLUDE Option

single-exclude_TD12.txt

single-exclude_TD12_log.txt

6.1.3.1.4 Multiple Databases with EXCLUDE Option


multiple-exlude_TD12.txt

multiple-exlude_TD12_og.txt

6.1.3.2 Table Level Backup

6.1.3.2.1 Single Table Backup

single_table_TD12.txt

single_table_TD12_log.txt

6.1.3.2.2 Multi Table Backup

multi-table_TD12.txt

multi-table_TD12_log.txt

6.1.3.3 Dictionary Backup

In the TD 12.0 database, database-level dictionary backup and table-level dictionary backup are not supported while taking an ONLINE backup.

6.1.3.3.1 Database level dictionary backup

The backup script and log file are attached below; the log file shows that online database-level dictionary backup is not supported.

db_dict_TD12.txt

db_dict_TD12_log.txt

6.1.3.3.2 Table level dictionary backup

The backup script and log file are attached below; the log file shows that online table-level dictionary backup is not supported.

tbl-dict_TD12.txt

tbl-dict_TD12_log.txt

6.1.3.4 Partition Level Backup

Please find the document below for the partition-level backup.

PartitionBackup.doc


6.1.3.5 Table Level Restore

6.1.3.5.1 Single Table Restore for Accidental Deletion of Data

There is no change in the restore script, but you can see in the log file that the table was archived with ONLINE enabled. Please find the restore and log scripts below.

restore_t2.txt

restore_t2_log.txt

6.1.3.5.2 Table Restore after Accidental Dropping of Table

rest_drop_tbl.txt

rest_drop_tbl_log.txt


7 Teradata Tools and Utilities

7.1 Teradata Parallel Transporter Updates

7.1.1 Unicode UTF16 Support

In TPT 8.1:

In TPT 12.0:

Sending UTF16 support output to a file - TPT 8.1:


TPT 12.0:

Output file of the UTF 16 support is attached

UTF16SUPPORT12.txt

7.1.2 Array Support in Stream Operator

Teradata PT 12.0 added a new ArraySupport attribute to the Stream operator that allows the use of the Array Support database feature for DML statements. Array support improves Stream driver performance. By default, this feature is automatically turned on if the database supports it. User action is not needed to take advantage of this feature.

With the ARRAYSUPPORT attribute enabled:

In TPT 8.1:

Attached is the log file in which we see that the Array support feature is not showing up in the Attribute definitions

ArraySupportTD61_ON_log.txt

In TD12.0:

Attached is the log file in which we see that the Array support feature is shown (which was not showing in TPT 8.1)

ArraySupportTD12_ON_Log.txt

With ARRAYSUPPORT attribute disabled

In TPT 8.1:

Attached is the log file in which we see that the Array support feature is not showing up in the Attribute definitions

ArraySupportTD61_OFF_log.txt

In TD12.0:


Attached is the log file in which we see that the Array support feature is shown (which was not showing in TPT 8.1)

ArraySupportTD12_OFF_Log.txt

7.1.3 ANSI and SQL DATE/TIME Data Types

Support is provided for ANSI DATE and TIME data types in TPT scripts. The following data types are now supported:
- TIME
- TIME WITH ZONE
- TIMESTAMP
- TIMESTAMP WITH ZONE
- INTERVAL YEAR
- INTERVAL MONTH
- INTERVAL DAY
- INTERVAL HOUR
- INTERVAL MINUTE
- INTERVAL SECOND
- INTERVAL YEAR TO MONTH
- INTERVAL DAY TO HOUR
- INTERVAL DAY TO MINUTE
- INTERVAL DAY TO SECOND
- INTERVAL HOUR TO MINUTE
- INTERVAL HOUR TO SECOND
- INTERVAL MINUTE TO SECOND

Prior to Teradata PT 12.0, if you wanted to define a column such as MyTimestamp TIMESTAMP(4), you would have to code it as:

MyTimestamp CHAR(24)

This character string was necessary to contain all of the characters of a timestamp with four digits of fractional-second precision. Now, you can simply specify it in the script as:

MyTimestamp TIMESTAMP(4)

7.1.4 Delimiters

Delimiters are usually used in conjunction with the Data Connector operator. The following attributes imply the use of delimiters in the data file:
- VARCHAR Format = Delimited
- VARCHAR TextDelimiter = | (this is the default delimiter)
- VARCHAR EscapeTextDelimiter = \ (this is the default escape delimiter)

If delimiters are expected to be embedded within delimited data, they must be preceded by the backslash ('\') escape character or an alternative designated escape character. The TextDelimiter attribute is used to specify the delimiter. The EscapeTextDelimiter attribute is used to change the default escape delimiter to something other than the backslash character (\).

Example:


hi|how|\|r|you

would be broken up into:

hi
how
|r
you
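A sketch of how these attributes might appear in a Data Connector producer definition in a TPT script; the operator name, schema name, and file name are illustrative assumptions:

DEFINE OPERATOR FILE_READER
TYPE DATACONNECTOR PRODUCER
SCHEMA INPUT_SCHEMA
ATTRIBUTES
(
    VARCHAR FileName = 'input.txt',        /* hypothetical input file           */
    VARCHAR OpenMode = 'Read',
    VARCHAR Format = 'Delimited',          /* records are delimited text        */
    VARCHAR TextDelimiter = '|',           /* field delimiter                   */
    VARCHAR EscapeTextDelimiter = '\'      /* escape character for embedded '|' */
);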

7.1.5 Continue Job after an Error

This option causes a job to continue after an error code is received; subsequent steps do not use the checkpoint file started by the previously failed step. If specified, this option continues the job even if a step returns an error. In TPT 8.1, this option is not supported (screen shot shown below).

In TD12.0:

The attached script has been run with and without the -n option.

Load2Streams.txt

From the screen shots below we can clearly see how the -n option continues the job even if an error is encountered.


7.1.6 Checkpoint Files

Instead of requiring users to manually remove checkpoint files from the designated TPT client path, a new utility, twbrmcp, is provided to remove these files given one of two possible parameters: user ID or job name. This feature is not supported in TPT 8.1. The enhancement makes it easier to remove checkpoint files and minimizes the risk of deleting the wrong files. The following is the syntax for the twbrmcp utility:

twbrmcp LogonID
or
twbrmcp JobName

Where:
LogonID - The user logon ID
JobName - The job name specified for the 'tbuild' command


Location of the checkpoint files:
- In UNIX environments, checkpoint files can be found at /usr/tbuild/08.01.00.00/checkpoint, or $TWB_ROOT/checkpoint if you are on a later version of TPT.
- In Windows environments, they can be found at C:\Program Files\NCR\Teradata Parallel Transporter\8.1\checkpoint, or %TWB_ROOT%\checkpoint if you are on a later version of TPT.

Deleting Checkpoint files using LOGONID


Removing checkpoint files with JOBNAME

7.1.7 New Attributes Available in the Stream Operator

Operator-level commands can be used to dynamically set the statement RATE and PERIODICITY for the Stream operator as a job is executing. These operator-level twbcmd commands allow you to change the Rate and/or Periodicity value for a Stream operator copy while the job is running. This command is not supported in TPT 8.1. For all platforms (except MVS) use the command as:

twbcmd jobname Stream Rate=2000, PERIODICITY=20

where jobname is the name of the target job. Stream is the only operator that supports these commands.


RATE and PERIODICITY values will be used for the Stream operator for this job. Note: This RATE and PERIODICITY value will be for the complete job.

With MVS: F Myjob APPL=Stream RATE=2000, PERIODICITY=20

Controlling the Stream operator at step level: The OperatorCommandId feature allows you to create identifiers associated with a specific Stream operator within a specific job step. This permits you to assign new Rate or Periodicity values to all instances of that operator within the step after the job has begun by using the twbcmd command. Normally, TPT generates a default value for OperatorCommandId composed of the operator name plus a process ID for each copy of the operator in the APPLY specification. Process ID numbers for operators appear on the command screen when the job step that contains the operator begins to execute, e.g. operator_name1234. Use the following command to identify an operator by its default ID and change the rate value:

twbcmd Jobname operator_name1234 RATE=2000

We can also assign our own identifier for subsequent referencing. To accomplish this, we can either:
- Declare the OperatorCommandId attribute in the DEFINE OPERATOR statement for the Stream operator, or
- Assign a value to the OperatorCommandID attribute in the APPLY statement (which overrides the attribute specified in the DEFINE OPERATOR statement).

It can be used as:

APPLY TO OPERATOR (Stream_Oper[2] ATTRIBUTES (OperatorCommandID='rate_step1')),

Then we can use the twbcmd utility to assign a Rate and/or Periodicity value to a specific Stream operator within a specific job step. When more than one copy of the Stream operator is to be active within a given job step, each can be assigned a separate OperatorCommandID:

APPLY TO OPERATOR (Stream_Oper[2] ATTRIBUTES (OperatorCommandID='rate_step1')),
APPLY TO OPERATOR (Stream_Oper[2] ATTRIBUTES (OperatorCommandID='rate_step2'))


The RATE or PERIODICITY change will apply to all the instances of the Stream operator running within that job step.

The screen shots below show how to change the rate.

Step 1: The new attribute has been added to the attached script:

lab7_1.txt

Step 2-a: Use the tbuild command to execute the job.

Step 2-b: Run the twbcmd command as soon as the job ID is generated in Step 2-a.

Step 3: After successfully executing the job, check the log using tlogview.


7.1.8 Case-Insensitive Column Names

Column names are now case-insensitive in TPT schema definitions and scripts. Prior to TD 12.0, you had to take care that column names matched in case between the schema and the scripts. The attached script runs successfully in TD 12.0 but gives semantic errors in TPT 8.1, where column names were case-sensitive. This saves a lot of time when scripts are really big.

CaseSensitiveTD61.txt

In TPT 8.1:

In TD12.0: Script attached

CaseSensitiveTD12.txt

7.2 Teradata Parallel Transporter API Updates

The Application Program Interface (API) is a feature of TPT which permits developers to create programs with a direct interface to the load and unload protocols used by the utilities. The following are new features which have been added to the API as a result of Teradata Release 12.

7.2.1 Block Mode Exporting Using the Export Operator
TPT API now supports the GetBuffer method to return a bufferload of rows to the TPT API application. GetBuffer retrieves a buffer full of rows from the Teradata Database. Exporting by the buffer provides a performance improvement over exporting one row at a time.

7.2.2 Stream Operator Returns Number of Rows Inserted, Updated and Deleted
The TD_Evt_ApplyCount event can now be used for the Stream driver to obtain the number of rows inserted, updated, or deleted for each DML Group statement in the job. Prior to Teradata PT API 12.0 the statistics for the Stream driver were only printed in the log. Now you can obtain these statistics via the GetEvent method in your application without having to look at the log. It is often more convenient to make a function call in your application to obtain this information than it is to view or parse a log file.

7.2.3 Array Support
Teradata PT 12.0 added a new ArraySupport attribute to the Stream Operator that allows the use of the Array Support database feature for DML statements. Array support improves Stream driver performance. By default, this feature is automatically turned on if the database supports it. User action is not needed to take advantage of this feature.

7.2.4 Support for UTF-16
By adding support for UTF-16 to the previous support of UTF-8, Teradata PT API 12.0 provides more options and flexibility.

7.2.5 TPT API Can Now Be Called from an XSP
Teradata PT API can now be called from an External Stored Procedure (XSP). An XSP is a code module which resides on the database and can be executed using the SQL CALL statement. This feature provides a potential performance improvement by minimizing data transfer between the client and the database server.

8 TASM

8.1 Query Banding

A set of name/value pairs that can be set on a session or transaction to identify the query's originating source, enabling improved workload management and classification.

8.1.1 How to Set a Query Band
A query band is a set of name/value pairs that can be set for a session or transaction.
Session Query Band: The query band is set for the session using the following syntax:

set query_band = 'EXTUserid=CV1;EXTGroup=Finance;UnitofWork=Fin123;' for session;
The session query band is stored in the session table and recovered after a system reset. The session query band remains set for the session until the session ends or the query band is set to NONE.
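The session query band can later be cleared by setting it to NONE, for example:

SET QUERY_BAND = NONE FOR SESSION;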

Transaction Query Band: The query band is set for a transaction using the following syntax:

BT;
set query_band = 'EXTUserid=CV1;EXTGroup=Finance;UnitofWork=Fin123;' for TRANSACTION;
INS ...;
INS ...;
ET;
The transaction query band is discarded when the transaction ends (commit, rollback, or abort) or the query band is set to NONE.
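A fuller sketch of the same pattern is shown below; the table fin_txn and the inserted values are hypothetical and only illustrate where the work sits inside the transaction:

BT;
SET QUERY_BAND = 'EXTUserid=CV1;EXTGroup=Finance;UnitofWork=Fin123;' FOR TRANSACTION;
INSERT INTO fin_txn VALUES (101, 'invoice');   /* hypothetical table and rows */
INSERT INTO fin_txn VALUES (102, 'payment');
ET;                                            /* the transaction query band is discarded here */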

8.1.1.1 GetQueryBand A system user defined function (UDF) is provided to return the current query band for the session by using the following syntax:

SEL GetQueryBand();

The screenshot below shows that the query band has been successfully set and is active at the session level. (It can be activated at the transaction level by using the corresponding command above.)

=S> indicates Session Level =T> indicates Transaction Level

8.1.1.2 GetQueryBandValue A system UDF is also provided to retrieve the value of a specified name in the query band. This can be used to retrieve the name of the end user.

SEL GetQueryBandValue(0,'ExtUserId');

The output below displays the QueryBand value (CV1) associated with the QueryBand name (ExtUserId), since CV1 was the value given for EXTUserid when the query band was set.
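The other pairs set in the same session query band can be retrieved the same way, for example:

SEL GetQueryBandValue(0,'EXTGroup');     /* returns Finance */
SEL GetQueryBandValue(0,'UnitofWork');   /* returns Fin123  */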

8.1.1.3 GetQueryBandPairs A system table function returns the name/value pairs in the query band in name and value columns.
Sel QBName (FORMAT 'X(20)'), QBValue (FORMAT 'X(20)') FROM TABLE (GetQueryBandPairs(0)) AS t1 ORDER BY 1;

The output below shows the QueryBand value associated with each QueryBand name set in the query band.

8.1.1.4 MonitorQueryBand An administrative UDF is provided to retrieve the query band for a specified session. The DBA can use this UDF to track down the originator of a blocking request or of a request using excessive resources.

SELECT MonitorQueryBand(1, 1207, 16383);

The output below shows the query band for the session with session number 1207.

Note: To obtain session information, use the dbc.sessioninfo dictionary table.
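For example, the session number for a given user can be looked up first and then used in the UDF call (a sketch; verify the column names against dbc.sessioninfo on your release):

SEL SessionNo, UserName, LogonSource
FROM dbc.sessioninfo
WHERE UserName = 'CV1';                    /* 'CV1' is the example user from this guide */
SELECT MonitorQueryBand(1, 1207, 16383);   /* same call as above, using the session found */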

8.1.2 Query Band and Workload Management
Teradata Dynamic Workload Management (DWM) offers the DBA the ability to automate workload control actions in the system. These actions are guided by the rules provided through the Teradata DWM Administrator.

Filters
Teradata DWM Filter Rules allow the administrator to control access to and from specific Teradata Database objects and users. There are two types of filter rules:
Object access filters limit access to all objects associated with the filter during a specified time period. Queries referencing objects associated with a filter during the time the filter applies are rejected.
Query Resource filters limit database resource usage for objects associated with the filter. Queries exceeding the resource usage limit (estimated number of rows, estimated processing time, types of joins, table scans) during the time the filter applies are rejected.
Query band name/value pairs can be associated with Teradata DWM Filter rules.

The screenshot below shows the additional options (Include QueryBand and Exclude QueryBand).

The screenshots below show, step by step, how to include a QueryBand in the WHO criteria. Select Include QueryBand in the WHO criteria.

Clicking Choose opens the Include QueryBand window, which allows you to load names and the QueryBand values associated with those names. (The Load Names option will be highlighted, as seen in the screenshot.)

Clicking the Load Names option shows all QueryBand names.

Selecting a particular QueryBand name highlights the Load Values option, which displays the values associated with the selected QueryBand name, as shown below:

Selecting the QueryBand value and clicking Add adds the query band to the QueryBand classification criteria.

If a name/value pair in the query band set for the session or transaction matches the query band pair associated with a filter rule, the filter rule is applied to the request. If two name/value pairs are selected in the QueryBand classification, they are ANDed, as shown below:

In the same way we can use the Exclude QueryBand option in the WHO criteria.

Excluding QueryBand in the WHERE criteria: The screenshots below show how to use the Exclude QueryBand option in the WHERE criteria.

Workload Definitions
Workload Definitions are another component of Teradata DWM. A workload definition (WD) is a type of rule that groups queries for management based on the query's operational properties. The attributes that can be defined for a WD are who (user, account, profile, etc.), what (CPU limits, estimated processing time, row counts, etc.), and where (databases, tables, macros, etc.). The attributes of the WDs are compared to the attributes of each incoming request, and the request is classified into a WD. The WD determines the priority of the request. Query band name/value pairs can be defined as additional who attributes. This enables us to solve the problems mentioned in the first section:
To set the priority of a request based on the end user when it is submitted through a connection pool
To assign different priorities to requests from the same application

The following is an example of how to use WDs to accomplish this.

WD Name: Marketing-Online    Priority: Tactical
Classification Criteria (Query Band):
  Name         Value
  EXTUserId    MG123 or DG456 or KS232 or WD333
  EXTGroup     Marketing
  Importance   Online

WD Name: Marketing-Batch    Priority: Normal
Classification Criteria (Query Band):
  Name         Value
  EXTUserId    MG123 or DG456 or KS232 or WD333
  EXTGroup     Marketing
  Importance   Batch

Note: Query Band classification criteria with different names (EXTUserId, EXTGroup, Importance) are ANDed, so all must be present in the query band to match the WD classification criteria. Query Band classification criteria with different values for the same name are ORed.

Given the above WDs:
SET QUERY_BAND='EXTUserId=MG123;EXTGroup=Marketing;Importance=Online;' for session;
SEL * FROM cust_table;
The request will be assigned to WD Marketing-Online and will run at the Tactical priority.

SET QUERY_BAND='EXTUserId=MG123;EXTGroup=Marketing;Importance=Batch;' for session;
SEL * FROM cust_table;
The request will be assigned to WD Marketing-Batch and will run at the Normal priority.

8.1.3 Using Query Banding to Improve Resource Accounting
Another use of query banding is to provide more information for system resource chargeback and accounting purposes, and to provide additional data so that the application requests that make up a "job" can be grouped for accounting. To facilitate this, when DBQL detail logging is enabled, the query band is written to the QueryBand field in DBC.DBQLogTbl.

sel queryband (FORMAT 'X(50)'), querytext (FORMAT 'X(30)')
from qrylog
order by StartTime, queryid;

Query banding UDFs can be used to extract accounting reports from the DBQL log table.
Sel t1.AMPCPUTime, t1.ParserCPUTime
from dbc.dbqlogtbl t1
where GetQueryBandValue(t1.queryband, 0, 'EXTUserId') = 'CV185018'
AND GetQueryBandValue(t1.queryband, 0, 'unitofwork') = 'Fin123'
AND t1.QueryBand is NOT NULL;

Output of the above query is as shown below:
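The same UDF can also be used to roll up resource usage by query band value, for example charging back total AMP CPU by EXTGroup (a sketch built from the same columns used above):

SEL GetQueryBandValue(t1.queryband, 0, 'EXTGroup') AS ExtGroup,
    SUM(t1.AMPCPUTime) AS TotalAMPCPU
FROM dbc.dbqlogtbl t1
WHERE t1.QueryBand IS NOT NULL
GROUP BY 1;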

8.1.4 Using Both the Session and Transaction Query Bands
Generally we expect users to use either the session query band or the transaction query band, depending on their needs. The session query band is used when you want the query band to be retained for the life of the session and recovered on a system reset. The transaction query band is used when you want the query band to be discarded at the end of the transaction. You can, however, set both at the same time. When both are set, they are concatenated together when displayed or written to the DBQL log. For example:
=T> JOB=RPT1;ACTION=Analysis; =S> EXTUserID=DT785;ORG=world;

One reason to use both in the same session is to have the transaction query band add additional pairs. For example, using the WD classification criteria in the previous section, if the session query band is set as follows:
SET QUERY_BAND='EXTUserId=MG123;EXTGroup=Marketing;' for session;

Then you would need to set the transaction query band to add the Importance name/value pair that determines which WD to use for the request:
SET QUERY_BAND='Importance=Online;' for transaction;
SEL * FROM cust_table;
A name/value pair in the transaction query band will override the pair in the session query band if the same name appears in both query bands. Using the same example as above, say we want to set the session query band so that the default is Importance=Batch:
SET QUERY_BAND='EXTUserId=MG123;EXTGroup=Marketing;Importance=Batch' for session;

Then, for a request that needs a higher priority, the transaction query band can be used to set the Importance name/value pair to the Online priority:
SET QUERY_BAND='Importance=Online;' for transaction;
SEL * FROM cust_table;
When Teradata DWM searches the query band associated with a request for comparisons with rules and classification criteria, it always searches the transaction query band first and stops when the name in the query band pair matches the name in the rule or WD.
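With both bands set as in this example, GetQueryBand() returns the concatenated result (a sketch; the output follows the =T>/=S> form shown earlier):

SEL GetQueryBand();
/* expected to show something like:
   =T> Importance=Online; =S> EXTUserId=MG123;EXTGroup=Marketing;Importance=Batch; */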

8.2 State Matrix

The state matrix is a two-dimensional diagram that can help you visualize how you want the system to behave in different situations. It extends active workload management to automatically detect, notify, and act on planned and unplanned system and enterprise events. TASM then automatically implements a new set of workload management rules specific to the detected events and resulting system condition.

8.2.1 Managing the System through a State Matrix
With the help of the state matrix, we can plan how we want the system to behave when certain events occur, such as time of day, critical applications executing, or changes in resource availability. The matrix is made up of two dimensions:
System Condition (SysCon): The condition or health of the system. For example, SysCons include system performance and availability considerations such as the number of AMPs in flow control or nodes down at system startup.
Operating Environment (OpEnv): The kind of work the system is expected to perform. It is usually indicative of time periods or workload windows when particular critical applications, such as a crucial data load or month-end jobs, are running.

The combination of a SysCon and an OpEnv references a specific state of the system. Associated with each state is a set of workload management behaviors, such as throttle thresholds, Priority Scheduler weights, and so on. When specific events occur, they can direct a change of SysCon or OpEnv, resulting in a state change and therefore an adjustment of workload management behavior.

An example of the state matrix is shown below:

Guidelines for defining the state matrix: Keep the size of the state matrix and the number of unique states to a minimum. If there are no clear-cut needs for managing with additional states, as may be the case when you first dive into workload management, it is recommended to simply utilize the default state, Base, referenced by the default operating environment and system condition pair. As the need for additional states becomes apparent, add them, but keep the total number of states to a minimum, because the state matrix supports gross-level, not granular-level, system management.

8.2.2 System Conditions
When a System Condition is defined, there is an option to associate it with a minimum duration. When an event results in this system condition being activated, the state will be transitioned appropriately. That state may have working values for tighter throttles, more restrictive priority scheduler weights, more filters, etc. A minimum duration can be set for the System Condition so that, regardless of the associated event status, the system will remain in the same state for at least the minimum duration, giving it a better chance of more fully working itself out of the situations that put it into a degraded state.

System conditions can be defined as below:

Select System Conditions and then New SysCon. This opens the System Condition window, in which we can give the name of the SysCon and the minimum duration for which the system should remain in the same state.

Click on OK and then on Accept.

In the same way we can add the SysCon Degraded. The screenshot below shows the state matrix with the System Conditions (Normal, which is the default, Base, and Degraded). (The Operating Environment Base is the default.)

Guidelines for System Condition Duration:

When a unique system condition is defined, there is an option to associate it with a minimum duration. For example, consider a system condition of RED associated with degraded health. When an event results in this system condition being activated, the state will be transitioned appropriately. That state may have working values for tighter throttles, more restrictive priority scheduler weights, more filters, etc. If, by invoking the state, the system immediately returns to good health (i.e., the events that resulted in the system condition are no longer valid), the system could conceivably realize another state transition that removes the more restrictive working values. By removing the more restrictive working values, the system could conceivably put itself right back into the RED state, triggering yet another state transition, and so on. A minimum duration can be set for the System Condition so that, regardless of the associated event status, the system will remain in the same state for at least the minimum duration, giving it a better chance of more fully working itself out of the situations that put it into a degraded state. It is recommended that System Conditions activated entirely by internal event detections, and not by external user-defined event detections, be set to have a minimum duration > 0, perhaps 10 minutes or so.

8.2.3 Operating Environments
Operating Environments can be defined as shown below. Select Operating Environments and click New OpEnv, which opens the Operating Environment window in which we can specify the name of the OpEnv and its description, as shown below:

Click OK and then Accept so that the new OpEnv is added, as shown below:

Likewise we can add the other OpEnv (End of Month Reporting).

The screenshot below shows the state matrix with the SysCon and OpEnv defined. (Base is the default state, which can be changed later.)

8.2.4 State
The combination of a SysCon and an OpEnv references a specific state of the system. Associated with each state is a set of workload management behaviors, such as throttle thresholds, Priority Scheduler weights, and so on. When specific events occur, they can direct a change of SysCon or OpEnv, resulting in a state change and therefore an adjustment of workload management behavior. A state can be defined as shown below: select States and click New State, which opens the State window in which the name of the state can be entered.

Click OK and then Accept to add the state.

8.2.5 Events
A System Condition, Operating Environment, or State can change as directed by event directives defined by the DBA.
Establishing Event Combinations and Associating Actions with the State Matrix for System Conditions: Once you have defined your state matrix, you will need to define the event combinations that will put you into the particular system conditions or operating environments defined in the state matrix.

First we need to define an event, as shown below. We create an event (NodeDown), which will create an event combination.

Then we associate a particular action with the event. As shown below, we select the event combination (NodeDown), select Change SysCon, and assign it to Busy.

The screenshot below shows the event combination NodeDownAction defined above, which will change the System Condition to Busy.

Likewise we create the other events (AWTLimitEvent) and associate the action to be taken with each event, as shown below:

The screenshot below shows the events that have been created: AWTLimitEvent and NodeDown.

The screenshot below shows the event combinations after creating the events and the associated actions to be taken against each event.

8.2.6 Periods
These are intervals of time during the day, week, or month. TDWM monitors the system time and automatically triggers an event when a period starts; the event lasts until the period ends. The screenshots below show how to create periods.

Select Periods and then click New Period, which opens the New Period window in which we can enter the period name.

After entering the Period Name and the Description, click OK

Uncheck the Everyday and 24 Hours options so that we can select the days of the week and the time, as shown below.

After specifying the days and the time, click Accept so that the period can be saved.

Likewise we can create other periods. The screenshot below shows two periods defined (EndOfMonthReporting and LoadWindowPeriod).

Establishing Event Combinations and Associating Actions with the State Matrix for Operating Environments: The screenshot below shows the Operating Environments defined.

Below we associate a particular action with an event. In this case, when the event LoadWindowPeriod is active, it will change the OpEnv to Load Window (9am to 4pm).

The screenshot below shows the event combination and the action that will be taken for this event combination.

Likewise we create the other Event Combinations and associate the Actions to be taken.

After defining all the necessary parameters, the state matrix will look as shown below:

Guidelines to establish Event Combinations and Associate Actions to the State Matrix: Once you have defined your state matrix, you will need to define the event combinations that will put you into the particular system conditions or operating environments defined in the state matrix. Event combinations are logical combinations of events that you have defined. The current release of Teradata DWM offers the following Event Types: Components Down Event Types, detected at system startup:

Node Down: Maximum percent of nodes down in a clique.
AMP Fatal: Number of AMPs reported as fatal.
PE Fatal: Number of PEs reported as fatal.
Gateway Fatal: Number of gateways reported as fatal.

AMP Activity Level Event Types. To avoid unnecessary detections, these must persist for a qualified amount of time you specify (default is 180 seconds) on at least the number of AMPs that you also specify:

AWT Limit: Number of AWTs in use for MSGWORKNEW and MSGWORKONE work on an AMP.
Flow Control: Number of AMPs in flow control.

Period Events: These are enabled or disabled depending on the current date/time relative to your defined periods of time. For example, if daytime is defined as daily 8am-6pm, the daytime period is enabled every day at 8am and disabled every day at 6pm.
User-Defined Events: These are enabled via openAPI or PMAPI calls to the database, and are disabled via an expiration timer given by the enable call, or through an explicit disable call.

Here we discuss guidelines for the usage and associated settings of event types meriting additional discussion as well as general event detection considerations.

Guidelines for the Node_Down Event Type: Consider that when a node goes down, its VPROCs migrate, increasing the amount of work required of the nodes that are still up. That translates to performance degradation. When your system is performing in a degraded mode, it is not unusual to want to throttle back lower-priority requests further, reassign priority scheduler weights, or enable filters to ensure that critical requests can still meet their Service Level Goals. Alternatively, or in addition, you may want to send a notification so that follow-on actions can occur.

The Node_Down Event Type threshold you define is the maximum percent of nodes down in a clique, and is representative of the performance degradation the system will incur. Consider a system configuration with mixed clique types, some with more nodes per clique than others, and some with Hot Standby Nodes (HSN).

Cliques 1 & 2: 8 5400 nodes each
Clique 3: 6 5400 nodes
Clique 4: 4 5450 nodes
Clique 5: 2 5450 nodes
Clique 6: 3 5500 nodes plus 1 HSN

If a single node were to go down, what is the associated performance degradation? It is roughly synonymous with the maximum percent of nodes down in a clique, and depends on which clique bears the down node:
Cliques 1 & 2: 1/8 = 12.5%
Clique 3: 1/6 = 16.7%
Clique 4: 1/4 = 25%
Clique 5: 1/2 = 50%
Clique 6: 0/3 = 0% (because the HSN took the burden of the down node)

In the example above, you probably don't want to take much, if any, action if cliques 1, 2, or 6 were to have a node down; however, if clique 4, or especially clique 5, were to experience a node down, that would be a very serious problem requiring immediate attention to resolve, and drastic workload management controls to assure that critical work can still be addressed during the degraded period.
Recommendation: If your system is designed to run with some amount of degradation (for example, many very large systems with hundreds of nodes may be sized expecting that there is always a single node down somewhere in the system), it is suggested to set the threshold such that the Node Down event will activate only when that degradation exceeds what was sized for. For example, if the example system above were sized to meet workload expectations as long as clique 5 did not experience a down node, you might set your Node_Down event to activate at a threshold > 25%, in other words, to activate only if clique 5 experienced a node down. At that time you could change your system condition appropriately. Below that threshold, you could define a second Node_Down event with a lower threshold and an action to Notify only. If you are only interested in sending a notification, you could simply rely on the Teradata Manager alert policy manager to send an alert. However, the Alert Policy Manager cannot notify to a queue. Also, when detecting a node down, the Alert Policy Manager cannot distinguish between the severity of a node going down in a clique with an HSN and a node going down in a 2-node clique. In general, assuming your system is NOT designed to expect nodes down (as is the case with many small to moderately sized systems), a good threshold to set the Down_Nodes Event Type Threshold to is roughly 24%.

Guidelines for AMP Activity Level Event Types (AWT Limit and Flow Control):

AWT Limit Usage Considerations: Consider setting up an event combination for when AWT_Limit exceeds a threshold of about 45-55 persistently for at least the default 180 seconds of qualification time. Further qualify it so that at least 20% of the AMPs must persist above the threshold, to avoid insignificant detections. As an action, send a notification to inform users or an application. It is not recommended that you have as an action to change system condition and