Transaction Analysis Workbench
Henry Kiesslich
Fundi Software Pty Ltd. [email protected]
Date of presentation: 02/11/2016
Session: HG
Transaction Analysis Workbench for z/OS, Version 1.3
with APAR PI65072 for IT Analytics
IBM Transaction Analysis Workbench: Bridging the divide between classic problem determination and IT analytics
November 2016
Workbench overview
Workbench solution
A single platform for managing problems with z/OS transactions
– Comprehensive performance analysis, with a pedigree in benchmark CICS and IMS
performance tools, adding IBM MQ, z/OS Connect, WAS, and DB2
– Tracing and profiling of transactions, even across subsystems
Minimal overhead
– Uses the logs and traces generated by z/OS and the various subsystems during normal
transaction processing
Simplifies collection and analysis
– Automatically selects the required log data from each subsystem
– Instantly combine and slice information sources in real time
– Automate problem determination steps and disseminate knowledge through workflows
Exposes logs and other z/OS traces to off-host analysis
– Extract, transform, and load (ETL) for Hadoop, Elastic and Splunk, as well as DB2
– Input for mobile workload pricing calculation
4
Data sources
Workbench architecture
5
– Data sources: batch logs from IMS, CICS, DB2, MQ, WAS, and z/OS
– Data processing engine: log selection; exception index and extract; knowledge base; interpreting and tracking
– User interfaces: ISPF dialog; Eclipse plug-in
– Insights: analytics (JSON/CSV ETL); daily and ad-hoc batch reporting; transaction lifecycle (interactive analysis)
Broad, deep coverage of log types
6
– IMS: IMS log and trace; IMS monitor; CQS log stream; IMS Connect event data (collected by IMS Connect Extensions); OMEGAMON ATF
– CICS: CMF performance class (SMF 110); CICS trace (DFHAUXT or GTF); VSAM forward recovery and autojournaling log streams
– DB2: DB2 log; DB2 accounting; DB2 performance trace (IFCIDs); Near Term History (collected by OMEGAMON XE for DB2)
– MQ, WAS: MQ log extract; MQ statistics (SMF 115-1, -2); MQ accounting (SMF 116); WAS request activity performance statistics (SMF 120-9)
– z/OS: SMF; OPERLOG; z/OS Connect (SMF 120-11); IRLM long lock detection (SMF 79-15)
All supported log types can be treated and processed in a consistent way: format, interpret, relate, select, reduce, ETL
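As a rough illustration of this consistent-processing idea, the same select and reduce steps can apply to any record once it is normalized to a common shape. The record layout and field names below are invented for the sketch; the real records are binary z/OS log records processed on the host:

```python
# Hypothetical, simplified model of a uniform log-record pipeline:
# every record, regardless of origin, is normalized to a dict with a
# source, a type code, a timestamp, and a field map, so the same
# select/reduce/export steps apply to all log types.

def normalize(source, code, ts, fields):
    """Format/interpret step: one shape for every log type."""
    return {"source": source, "code": code, "ts": ts, "fields": fields}

def select(records, predicate):
    """Select/reduce step: keep only the records of interest."""
    return [r for r in records if predicate(r)]

records = [
    normalize("IMS", "CA01", 1.0, {"trancode": "FBOIAT41", "resp": 72.61}),
    normalize("SMF", "110",  1.2, {"tran": "BANK", "resp": 0.02}),
    normalize("DB2", "058",  1.4, {"stmt": 1090}),
]

# One predicate, applied uniformly to IMS, SMF, and DB2 records alike.
slow = select(records, lambda r: r["fields"].get("resp", 0) > 0.5)
```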
Typical workflow
7
User reports a problem
Workbench
1. Create a session
2. Locate the log files (use automated file selection)
3. Create transaction index extracts for the time period
4. Submit batch reports
5. “Deep-dive” analysis using the ISPF log browser
Transaction Analysis Workbench automated file selection utility
Automated file selection
8
A system definition repository drives automated file selection for:
– DB2 logs, via the DB2 print log map utility and the DB2 bootstrap data set (BSDS)
– IMS logs (SLDS), via the DBRC API and the RECON data sets
– IMS Connect Extensions journals
– SMF log streams or dumped MANx files
– OMEGAMON XE for DB2 PE near-term history
Indexes
Collates transaction-related records by subsystem
Can be filtered to include exception transactions only
Extracts just the records that are of interest for performance and problem determination
For IMS, consolidates multiple record types into a single record
9
Workbench exception processing reads the original logs or extracts (e.g. the IMS log and SMF) and builds, for example:
– IMS transaction index
– CICS transaction index
– DB2 accounting index
– MQ accounting index
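The "exception transactions only" filtering can be sketched as follows. The consolidated index records here are hypothetical dicts; the real index records are binary structures, and the exception criteria (abend, response-time threshold) are set on the index-request panel:

```python
# Hypothetical index records: one consolidated record per transaction.
index = [
    {"tran": "FBOIAT41", "abend": False, "resp": 72.612943},
    {"tran": "FBOIAT41", "abend": False, "resp": 0.105},
    {"tran": "FBOIBT02", "abend": True,  "resp": 0.031},
]

THRESHOLD = 0.5  # seconds, as entered on the index-request panel

def is_exception(rec):
    # Exception criteria: transaction abend OR response time over threshold.
    return rec["abend"] or rec["resp"] > THRESHOLD

# Keeps the 72.6-second transaction and the abended one.
exceptions = [r for r in index if is_exception(r)]
```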
Transaction tracking finds and fits the pieces together
10
Filter criteria might be, for example, an IMS trancode or userid, a long response time, or a time of day.
Use values from your analytics software to identify a matching record on z/OS (for example, in the IMS transaction index). Workbench then finds related records across all log types (IMS log records, DB2 trace records, DB2 log records, taken from the IMS log, the DB2 log, and the SMF file or log stream) and displays them in the ISPF dialog log browser, showing the transaction lifecycle on z/OS in time sequence.
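The tracking step can be thought of as a correlate-and-merge over heterogeneous records. In this sketch, `token` stands in for a real correlation key such as the IMS recovery token (RecToken) or a DB2 URID; the records and field names are invented:

```python
# Records from different logs; 'token' stands in for a correlation key
# such as RecToken or URID. Tracking = select by token, sort by time,
# then show the elapsed time between consecutive lifecycle events.
records = [
    {"log": "IMS", "event": "Input Message",     "ts": 10.000000, "token": "T1"},
    {"log": "DB2", "event": "SQL FETCH",         "ts": 10.000500, "token": "T1"},
    {"log": "SMF", "event": "Accounting",        "ts": 11.438000, "token": "T1"},
    {"log": "IMS", "event": "Other transaction", "ts": 10.100000, "token": "T2"},
]

def track(records, token):
    related = sorted((r for r in records if r["token"] == token),
                     key=lambda r: r["ts"])
    lifecycle, prev = [], related[0]["ts"] if related else 0.0
    for r in related:
        # (log, event, elapsed time since the previous event)
        lifecycle.append((r["log"], r["event"], round(r["ts"] - prev, 6)))
        prev = r["ts"]
    return lifecycle

for log, event, elapsed in track(records, "T1"):
    print(f"{log:4} {event:20} +{elapsed}")
```

A jump in the elapsed column (here, over a second between the DB2 fetch and the SMF accounting record) is exactly the kind of anomaly the log browser surfaces.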
Log transformation
11
Any supported log format or index can be used as input to the process. A z/OS data source is transformed into two outputs for big data analytics (e.g. Splunk, Elastic, or Hadoop):
– Metadata: information about the data, such as field types and descriptions, including specific support for loading data into Elastic (Logstash configuration) and Hadoop (HCatalog table schema)
– Data: formats accepted natively by big data analytics: JSON (including JSON Lines) and CSV (comma-separated values)
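The data-plus-metadata split can be sketched in a few lines. This is a toy simplification: the real metadata is a generated Logstash configuration or HCatalog schema, and the records below are invented:

```python
import json

# Hypothetical extracted records.
rows = [
    {"tran": "BANK", "response": 11.12982, "cpu": 0.008967},
    {"tran": "PAY1", "response": 0.04210,  "cpu": 0.001102},
]

# Metadata: information *about* the data (field names and types),
# so the analytics platform can index and aggregate it correctly.
metadata = {k: type(v).__name__ for k, v in rows[0].items()}

# Data: JSON Lines, one record per line, accepted natively by
# Splunk, Elastic, and Hadoop tooling.
jsonl = "\n".join(json.dumps(r, sort_keys=True) for r in rows)
```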
Analysis on z/OS: Workbench “classic”
Traditional analysis on z/OS
Log selection
Exception reports
Combined log analysis
Automation through workflows
Eclipse GUI
13
IMS transaction index
14
Both IMS Performance Analyzer and Transaction Analysis Workbench can create an IMS transaction index from an IMS TM or DBCTL log, using the IMSINDEX command of the report and extract utility. An IMS transaction index created by one product can be used by either product:
• Reduce file size and processing costs
• Reporting and analysis
Reporting - IMS Transaction and System Analysis
Command ===>
Type of analysis:                           ————— Report Interval ——————
   Individual transaction detail            YYYY-MM-DD HH.MM.SS.TH
   Transaction statistical summary          From
   IMS system resources                     To
   Deadlock analysis
Focus of transaction analysis:
   Response time breakdown and CPU usage
   DLI calls
   Fast Path database and buffers
   Subsystem usage / CICS DBCTL
Select the IMS system to report against, or specify an IMS log file:
   1. System . . .        +
   2. Log File . .        +
15
Reporting
Key IMS reports (similar reports for other subsystems)
Interactive Log analysis using the ISPF dialog
Allows you to view transaction events across all the subsystems
Automatically obtain log data using automated file selection
Merges data sources from IMS, DB2, CICS and MVS
Displays records with friendly descriptions and useful information
Analyze transaction event latencies with relative and elapsed times
Use a filter to show required records only
Use tracking to isolate all the records associated with an individual transaction
Select a record to see more detail (full mapping)
Save the steps you took as a workflow: a guided problem determination template for reuse
with other problems
16
Eclipse GUI
– Part of Transaction Analysis Workbench
– Run reports and follow workflows
– Tabulate list reports and search for outliers
– Export result sets to CSV
– Suitable as a quick “turnkey” implementation for off-z/OS analysis
– For more advanced use cases, use the big data offering…
17
Example: IMS and DB2 analysis
Example: analysing an IMS/DB2 problem
Summary of the scenario:
– Filter IMS index records (CA01) to identify problem transactions
– Use tracking (TX) to include all records associated with the problem transaction
– See the logs from various sources merged in time sequence giving you the complete
story of the transaction
– Use the REXX API to build custom reports or other presentation formats for your results
19
IMS Transaction Index Request
Command ===>
Original Data Set . : FUW000.QADATA.FBOSP007.IMS.D131008.SLDS
IMS index . . . . . . 'JM3.FUW.INDEX'
Exception criteria:
/  Transaction ABEND
/  Response time threshold . . 0.5      (0.00001 to 999999 seconds)
———— Extract Interval —————
     YYYY-MM-DD HH.MM.SS.TH
From 2016-09-04 09.25.00.00
To   2016-09-04 09.40.00.00
Building an IMS transaction index
20
Manually enter the log data set or automatically select from systems/time periods
View all records or just exceptions
BROWSE FUW000.QADATA.FBOSP007.IMS.D131008.INDEX      Record 00000201 More: < >
Command ===>                                         Scroll ===> CSR
Navigate < 00.00.01.000000 >  Date/Time 2016-10-04 17.10.09.284086
/ ———— Filtering ————                                Tuesday 2016-10-04  LSN
————————————————————————————————————————————————————————————————————————————
TX CA01 IMS Transaction IMS-000000000021 UTC=17.10.09.284078
      TranCode=FBOIAT41 Program=FBOIAP41 Userid=FUNTRM10 LTerm=FUNTRM10
      Terminal=SC0TCP10 Region=0002 OrgUOWID=IDDG/CC1476B6713CB884 IMSRel=131
      RecToken=IDDG/0000000400000000 CPU=45.699549 InputQ=0.000309
      Process=72.612278 OutputQ=0.000356 TotalTm=72.612943 RegTyp=MPP
————————————————————————————————————————————————————————————————————————————
   CA01 IMS Transaction IMS-000000000025 UTC=17.15.19.060177
      TranCode=FBOIAT41 Program=FBOIAP41 Userid=FUNTRM10 LTerm=FUNTRM10
      Terminal=SC0TCP10 Region=0002 OrgUOWID=IDDG/CC1477DDDE2AF104 IMSRel=131
      RecToken=IDDG/0000000600000000 CPU=11.512388 InputQ=0.000354
      Process=18.105197 OutputQ=0.000039 TotalTm=18.105590 RegTyp=MPP
————————————————————————————————————————————————————————————————————————————
Subject-matter expert: Exception candidate investigation
21
Display filtered to show IMS x'CA01' Exception index records with excessive processing times. Enter TX to show records related to a transaction.
BROWSE FUW000.QADATA.FBOSP007.IMS.D131008.INDEX      Record 00000201 More: < >
Command ===>                                         Scroll ===> CSR
Navigate < 00.00.01.000000 >  Date/Time 2016-10-04 17.10.09.284086
/ ———— Tracking ————                                 Tuesday 2016-10-04
                                                     Time (Elapsed)
E CA01 IMS Transaction TranCode=FBOIAT41 Region=0002        0.000000
  01   Input Message TranCode=FBOIAT41                      0.000000
  35   Input Message Enqueue TranCode=FBOIAT41              0.000023
  08   Application Start TranCode=FBOIAT41 Region=0002      0.000256
  5607 Start of UOR Program=FBOIAP41 Region=0002            0.000000
  31   DLI GU TranCode=FBOIAT41 Region=0002                 0.000022
  5616 Start of protected UOW Region=0002                   0.000189
  5600 Sign-on to ESAF Region=0002                          0.005896
  5600 Thread created for ESAF                              0.000012
  112  Thread allocate FBOIAP41 DBA6                        0.000572
  073  Create thread end DBA6                               0.000068
  177  Package allocation FBOIAP41 DBA6                     0.000227
  233  SP entry FBOSP007 STMT=001031 DBA6                   0.000234
  380  SP entry FBOSP007 STMT=001031 DBA6                   0.000023
  177  Package allocation FBOSP007 DBA6                     0.000184
  061  SQL UPDATE STMT=000001 DBA6                          0.000141
  0020 Begin UR                                             0.001034
  0600 Savepoint                                            0.000000
  0600 Update in-place in a data page                       0.000000
  058  SQL UPDATE SQLCODE=0 STMT=000001 DBA6                0.000338
  065  SQL OPEN C1 STMT=000001 DBA6                         0.000090
  058  SQL OPEN SQLCODE=0 STMT=000001 DBA6                  0.000021
  499  SP statement execution detail DBA6                   0.000039
  233  SP exit FBOSP007 SQLCODE=0 STMT=001031 DBA6          0.000016
  380  SP exit FBOSP007 SQLCODE=0 STMT=001031 DBA6          0.000012
  053  SQL request SQLCODE=466 STMT=001031 DBA6             0.000083
  053  SQL request SQLCODE=0 STMT=001082 DBA6               0.000824
  053  SQL request SQLCODE=0 STMT=001085 DBA6               0.000119
  059  SQL FETCH C1 STMT=001090 DBA6                        0.000107
  0600 Savepoint                                            1.437546
  0600 Savepoint                                            0.257680
  0600 Savepoint                                            1.059456
IMS-DB2 transaction lifecycle investigation
22
1. Start tracking a transaction
(here, an IMS transaction).
2. See the transaction lifecycle
events from the related logs
(here, an IMS index and
log, SMF file, and a DB2
log), merged together with
no preparation required.
3. Notice the jump in elapsed
time.
4. In this case, the problem
was caused by an inefficient
table scan initiated by a
DB2 stored procedure;
drilling down into the DB2
trace confirmed this.
Alternate view of
IMS-DB2
transaction
lifecycle
investigation
23
Displays a tracking result set in a web browser, showing nesting of events (such as SQL calls by stored procedures) and a bar chart of elapsed time between events
Detailed DB2 event data view using forms view
24
+029C Code... 058 SQL FETCH SQLCODE=0 STMT=001090 DBA6
+02A8 STCK... CC1476FBAF617906 LSN.... 0000000000000049
      Date... 2016-10-04 Tuesday Time... 17.11.21.890327.563
+0000 SM102LEN... 03A6 SM102FLG... 1E SM102RTY... 66
+0006 SM102TME... 005E6C9D SM102DTE... 0113281F SM102SID... 'FTS3'
+0012 SM102SSI... 'DBA6' SM102STF... 0000
+0034 QW0058..... IFCID data Package
+0034 Location... 'DB2ALOC' Collection ID.... 'FUNBOX'
+0056 Package name... 'FBOSP007'
+0068 Consistency token.... 19718A5F136E9A24
+0072 SQLCA...... SQL communication area (SQLCA)
+0072 SQLCAID.... 'SQLCA ' SQLCABC.... +136 SQLCODE.... +0
+0082 SQLERRML... +0 SQLERRM.... ' '
+00CA SQLERRP.... 'DSN ' SQLERRD1... +0 SQLERRD2... +0
+00DA SQLERRD3... +0 SQLERRD4... +4294967295
+00E2 SQLERRD5... +0 SQLERRD6... +0 SQLWARN0... ' '
+00EB SQLWARN1... ' ' SQLWARN2... ' ' SQLWARN3... ' '
+00EE SQLWARN4... ' ' SQLWARN5... ' ' SQLWARN6... ' '
+00F1 SQLWARN7... ' ' SQLWARN8... ' ' SQLWARN9... ' '
+00F4 SQLWARNA... ' ' SQLSTATE... '00000'
+00FC Statement number... +1090
+0106 Query command ID... 00000000
+010E Query instance ID.... 00000000
+0116 Type of SQL request.... 01
+0118 QW0058ID... Scan information
+0118 Scan type.... 'INDX' Rows processed... +1280799
+0128 Rows examined.... +1595
+0130 Rows qualified after stage 1... +1275908
+0138 Rows qualified after stage 2... +1275908
+0140 Rows inserted.... +0
Program statement number 1090 caused an index scan that processed 1,280,799 rows in the table
·———————————————————— Field Zoom ————————————————————·
| BROWSE FUW000.QADATA.FBOSP007.SMF.D131008.FULL Line 00000000 |
| Command ===> Scroll ===> CSR |
| ******************* Top of data ******************* |
| +016C QWACFLGS... 003F Accounting class flags |
| On  QWACCLS2... --01 Accounting class 2 data is non-zero |
| On  QWACCLS3... --02 Accounting class 3 data is non-zero |
| On  QWACCL2O... --04 Accounting class 2 is being collected |
| On  QWACCL3O... --08 Accounting class 3 is being collected |
| On  QWACCL7O... --10 Accounting class 7 is being collected |
| On  QWACCL8O... --20 Accounting class 8 is being collected |
| Off QWACPARR... --40 Rollup data for parallel child tasks or DDF/RRSAF threads - see QWACPCNT for rollup count |
| Off QWACBADE... 04-- Error detected in WLM enclave - some accounting data may be incorrect |
| ******************* End of data ******************* |
·—————————————————————————————————————————————————————·
+016C QWAC3...... Optional classes
+016C QWACFLGS... 003F QWACPKGN... +2
“Field zoom”
Use the cursor to select any field to get more detailed information or help.
25
Identifying events for review by other SMEs
26
Enter G to “tag” (bookmark) this DB2 record to quickly return to it.
BROWSE FUW000.QADATA.FBOSP007.IMS.D131008.INDEX      Record 00001016 More: < >
Command ===>                                         Scroll ===> CSR
Navigate < 00.00.01.000000 >  Date/Time 2016-10-04 16.52.07.645253
/ ———— Tracking ————                                 Tuesday 2016-10-04
                                                     Time (Relative)
   CA01 IMS Transaction                              17.10.09.284086
      UTC=17.10.09.284078 TranCode=FBOIAT41 Program=FBOIAP41 Userid=FUNTRM10
      LTerm=FUNTRM10 Terminal=SC0TCP10 Region=0002
      OrgUOWID=IDDG/CC1476B6713CB884 SSID=IDDG IMSRel=131
      RecToken=IDDG/0000000400000000 CPU=45.699549 InputQ=0.000309
      Process=72.612278 OutputQ=0.000356 TotalTm=72.612943 RegTyp=MPP
————————————————————————————————————————————————————————————————————————————
   TAG DB2 transaction with long response time       +0.008889
————————————————————————————————————————————————————————————————————————————
G  0020 Begin UR                                     +0.008889
      Program=FBOIAP41 Userid=FUNTRM10 Region=0002 SSID=IDDG URID=00002A4010EA
      LUWID=FTS3/DBA6LU/CC1476B672D1/0001
————————————————————————————————————————————————————————————————————————————
   0600 Update in-place in a data page               +0.008889
      DBID=306 PSID=98 PAGE=66 URID=00002A4010EA
A DB2 expert can now use the DB2 Log Analysis Tool to investigate the associated DB2 table updates, based on the transaction’s URID
DB2 Expert Help using DB2 Log Analysis Tool
27
RECORD IDENTIFIER: 1
ACTION DATE       TIME     TABLE OWNER  TABLE NAME                URID
------ ---------- -------- ------------ ------------------------- ------------
INSERT 2012-06-24 16.33.34 JOHN         HR                        00002A4010EA

DATABASE TABLESPACE DBID  PSID  OBID  AUTHID   PLAN     CONNTYPE LRSN
-------- ---------- ----- ----- ----- -------- -------- -------- ------------
HR_DB    HR_SPACE   00456 00002 00003 FUNTRM10 HR_PLAN  IMS      C62D2CB46CB3

MEMID CORRID       CONNID   LUW=NETID/LUNAME/UNIQUE/COMMIT      PAGE/RID
----- ------------ -------- ----------------------------------- -----------
00000 0004MQATPGM  IMS      FTS3 /DB3ALU /C62D2CB46A5A/0001     00000002/02

ROW STATUS  EMP_ID EMP_NAME   EMP_PHONE       EMP_YEAR   EMP_SALARY
----------- ------ ---------- --------------- ---------- -----------
CURRENT     +330   JIM MARTIN 475-712-9508    2009-06-24 +0041000.00
POST-CHANGE +330   JIM MARTIN 475-712-9508    2009-06-24 +0042000.00
URID field correlated from Workbench allows experts to leverage their subsystem-specific tools for deep-dive investigation
History                                              Row 1 to 1 of 1
Command ===>                                         Scroll ===> PAGE
Select a history item or use the NEW command to create a new note.
/ Type Description
  JOB  IMS transaction and system analysis report
  TAG  Personal savepoint for JM3
  TAG  DB2 transaction with long response time
  JOB  DB2 log analysis report
******************************* Bottom of data ********************************
Investigative history
Problem analysis history:
Keep analysis notes for yourself or to share with other SMEs
Use tags to quickly resume analysis sessions, or to identify points in transaction lifecycle
that might require further investigation
28
Problem resolution: end of scenario
The cause of the IMS transaction problem has been narrowed down to a
slowdown in DB2
Sufficient information about the DB2 update activity has been collected and can
be passed on to the DB2 DBA for further investigation
Automatically locates log files for the problem time range
– SMF
– IMS log
– DB2 log
A common problem analysis approach:
– First responders collect data and perform initial analysis
– SMEs in different areas see the big picture and work towards problem
resolution
29
Example: CICS DBCTL
31
CICS ↔ IMS (DBCTL)
2016-10-21 Friday                CICS-DBCTL Summary                     Page 1
Tran APPLID   CMF Count Response CPU Time IMS Reqs IMS Wait ABEND    Rate/Sec
---- -------- --------- -------- -------- -------- -------- -------- --------
BANK CICSP1          60 11.12982 0.008967       35 4.256977       10        0

      08 Count  Elapsed  CPU Time StaDelay Schedule IC Wait  PS Wait
     ---------  -------- -------- -------- -------- -------- --------
            42  10.94999 0.004092 0.011668 0.000183        0        0

      07 Count  DB call  DB Gets  DB Upds  IO Count IO Time  LockWait
     ---------  -------- -------- -------- -------- -------- --------
            41        33       13       19        4 0.003438 3.980170

      FP Count  FP call  FP Gets  FP Upds  FP Wait  FP Fail
     ---------  -------- -------- -------- -------- --------
            41        19        7       11        0        7

      Synctime  Phase 1  Phase 2  FP PH2   OTHREAD
      --------  -------- -------- -------- --------
      0.011938  0.006555 0.005383 0.002232 0.017659
• Both CMF (CICSPA) and
IMS 56FA/07 (IMSPA)
highlight the long
response time
• The CICS trace coupled
with the IMS log provides
the answer
• The AP events are CICS
application events
• The type 50 and 59
records are IMS FF and
FP database updates
• The DLI ISRT call has
caused the delay
32
BROWSE CICS.DBCTL.MERGED.EXTRACT                     Record 00042653 More: < >
Command ===>                                         Scroll ===> CSR
Navigate < 00.00.01.000000 >  Date/Time 2016-10-21 17.02.52.449102
/ ———— Tracking ————                                 Friday 2016-10-21
                                                     Time (Elapsed)
   CICS Transaction                                  17.02.52.449102
      TranCode=DB2U Program=TWM$UPD Userid=TWM LTerm=SC0TCP07 Terminal=CP07
      Response=1.536399 CPU=0.009465 File=4 DB2=7 IMS=12 PSB=DFHTWM04
———————————————————————————————————————————————————————————————————————————
   AP 0328 DLI ENTRY FUNCTION_CODE(PCB)                     0.000047
   08   Application Start TranCode=DB2U Program=DFHTWM04    0.000488
   5607 Start of UOR Program=DFHTWM04                       0.000000
   AP 0329 DLI EXIT STATUS_CODE(' ')                        0.000228
   AP 0328 DLI ENTRY FUNCTION_CODE(GHU)                     0.000037
   AP 0329 DLI EXIT STATUS_CODE(' ')                        0.000323
   AP 0328 DLI ENTRY FUNCTION_CODE(REPL)                    0.000020
   5050 Database REPL Database=DI21PART                     0.000163
   AP 0329 DLI EXIT STATUS_CODE(' ')                        0.000105
   AP 0328 DLI ENTRY FUNCTION_CODE(ISRT)                    0.000019
   5052 Database insert into KSDS Database=DI21PART         0.000116
   5050 Database ISRT Database=DI21PART                     1.283935
   AP 0329 DLI EXIT STATUS_CODE(' ')                        0.000112
   5610 Syncpoint Start of Phase 1                          0.000637
   5950 FP Database Update Database=IVPDB3                  0.000011
   5611 Syncpoint End of Phase 1                            0.003809
   3730 Syncpoint Start of Phase 2                          0.001935
   5937 FP Syncpoint Program=DFHTWM04                       0.000735
   56FA Transaction Statistics                              0.003526
      Userid=TWM SSID=ICB1 RecToken=FUWTCIC/CA9DD24FF0DD0204 CPU=0.005962
      UORtime=1.498370 DLI=10 VSAM R/W=0/1 OSAM R/W=0/0
———————————————————————————————————————————————————————————————————————————
   07   Application Terminate                               0.000717
   5612 Syncpoint End of Phase 2 Program=DFHTWM04           0.001279
Callouts on the display mark the CICS CMF record (the CICS DBCTL transaction), the CICS trace, the IMS log, the ISRT DLI call, and the IMS syncpoint. The CICS transaction response time of 1.5 seconds is too high. Is IMS the problem?
Analysis off z/OS: exposing logs to big data platforms
Analytics for z/OS operations
Big data tooling provides an opportunity to take analysis to the next level
– Perform analyses that were previously infeasible
– Valuable new insights into system performance and security
Standardized and unified approach to all operational analysis
Combining z/OS operational data with data from other platforms
Reduced cost of analysis and storage, making long-term historical trend analysis cost-effective
Off host
– Splunk
– Elastic (ELK stack – Elasticsearch, Logstash, Kibana)
– Hadoop
– Spreadsheets, for example MS Excel
On platform
– DB2 and IDAA
34
Workbench analytics feature
35
The Workbench's primary function is classic problem determination using logs and traces; it is now
complemented with an ETL (Extract, Transform, and Load) feature
Prepares all supported data sources for external analysis such as analytics, including the
IMS log and SMF
ISPF dialog to assist in creating end-to-end ETL JCL for both off-host and on-host analytics
The dialog covers a subset of common SMF record types; many more are supported – let us know if you need additional record types
JSON or CSV
DB2 process fully automated:
– DDL to create tables
– DB2 load statements
DB2 – this example is for CICS; the same applies for IMS
36
//DB2LOAD JOB ,NOTIFY=&SYSUID
//S1       EXEC PGM=FUWBATCH
//SYSPRINT DD SYSOUT=*
//SMFIN    DD DISP=SHR,DSN=PROD.DAILY.SMF(-1)
//SYSIN    DD *
 CSV CODE(CMF) NOLABELS TOKENS(CICS) +
     SCHEMA(SCHEMA) TABLE(CICSPROD) DB2LOAD(DB2LOAD)
 FIELDS(
   SMFSID:LPAR APPLID Tran Program Userid
   UserCPU:CPU QRCPU L8CPU Response Dispatch Suspend
   TaskNo:Task RMIelap:RMIelapsed RMISusp:RMIsuspend
   FCTOTAL DB2REQS:DB2calls RMIDB2:DB2elapsed ABOR:ABEND
 )
/*
The generated JCL performs all the tasks
Input
1. Record type(s) you want to analyze
e.g. CICS CMF or IMS transaction index
2. List of fields in the record that you want to
extract; or extract them all
– Assign an alias to the field names
3. Input file e.g. SMF or IMS log
Output
1. Schema DDL to create the table
2. DB2 load utility control statements
3. CSV
4. Second step uses DSNUTILB to load into
DB2
Flexible – you decide what fields to include
Easy to use – controlled by one piece of repeatable JCL that can be job-scheduled for auto ops
Lots of options, e.g. ALLNUMBERS to include all numerical fields from the record in the CSV
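The FIELDS(...) list renames record fields to output column aliases (for example, SMFSID becomes LPAR and UserCPU becomes CPU). A rough Python sketch of that aliasing-to-CSV step, with an invented input record (the real utility reads binary SMF records on z/OS):

```python
import csv
import io

# FIELDS(...)-style spec: "field" or "field:alias".
spec = ["SMFSID:LPAR", "APPLID", "Tran", "UserCPU:CPU", "Response"]

# Hypothetical, already-decoded CMF record.
record = {"SMFSID": "PRD1", "APPLID": "CICSP1", "Tran": "BANK",
          "UserCPU": 0.008967, "Response": 11.12982}

def columns(spec):
    # (source field, output column name) pairs
    return [tuple(s.split(":")) if ":" in s else (s, s) for s in spec]

buf = io.StringIO()
writer = csv.writer(buf)
writer.writerow(name for _, name in columns(spec))            # header row
writer.writerow(record[field] for field, _ in columns(spec))  # data row
```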
DB2 schemas generated automatically
37
CREATE TABLE CICSPROD(
  "TIME"       TIMESTAMP,
  "LPAR"       CHAR(0004),
  "APPLID"     CHAR(0008),
  "TRAN"       CHAR(0004),
  "USRDISPT"   DECIMAL(15,6),
  "CPU"        DECIMAL(15,6),
  "SUSPTIME"   DECIMAL(15,6),
  "TASK"       INTEGER,
  "RMIELAPSED" DECIMAL(15,6),
  "RMISUSPEND" DECIMAL(15,6),
  "QRCPUT"     DECIMAL(15,6),
  "L8CPUT"     DECIMAL(15,6),
  "RESPONSE"   DECIMAL(15,6),
  "USERID"     CHAR(0008),
  "FCTOTCT"    INTEGER,
  "DB2CALLS"   INTEGER,
  "DB2ELAPSED" DECIMAL(15,6),
  "PGMNAME"    CHAR(0008),
  "ABEND"      CHAR(0004),
  "ACCTOKEN"   CHAR(0030));
LOAD DATA INDDN CSV LOG NO RESUME NO REPLACE
  CCSID(1047)
  FORMAT DELIMITED COLDEL X'6B' CHARDEL X'7F' DECPT X'4B'
  INTO TABLE CICSPROD
  ("TIME"       TIMESTAMP EXTERNAL(26),
   "LPAR"       CHAR,
   "APPLID"     CHAR,
   "TRAN"       CHAR,
   "USRDISPT"   DECIMAL,
   "CPU"        DECIMAL,
   "SUSPTIME"   DECIMAL,
   "TASK"       INTEGER,
   "RMIELAPSED" DECIMAL,
   "RMISUSPEND" DECIMAL,
   "QRCPUT"     DECIMAL,
   "L8CPUT"     DECIMAL,
   "RESPONSE"   DECIMAL,
   "USERID"     CHAR,
   "FCTOTCT"    INTEGER,
   "DB2CALLS"   INTEGER,
   "DB2ELAPSED" DECIMAL,
   "PGMNAME"    CHAR,
   "ABEND"      CHAR,
   "ACCTOKEN"   CHAR)
DDL to create the table, and DB2 load utility statements
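Conceptually, the automatic schema generation maps each extracted field to a DB2 column type, as in this toy sketch. The field list and type keys here are invented for illustration; the real utility derives the types from its record dictionaries:

```python
# Toy mapping from a field's kind to a DB2 column type, mirroring the
# generated DDL above: CHAR for identifiers, DECIMAL(15,6) for timings,
# INTEGER for counts.
TYPE_MAP = {"char4": "CHAR(0004)", "char8": "CHAR(0008)",
            "time": "DECIMAL(15,6)", "count": "INTEGER"}

# Hypothetical field list: (column name, kind).
fields = [("LPAR", "char4"), ("APPLID", "char8"),
          ("CPU", "time"), ("TASK", "count")]

def create_table(name, fields):
    cols = ", ".join(f'"{n}" {TYPE_MAP[k]}' for n, k in fields)
    return f"CREATE TABLE {name}({cols});"

ddl = create_table("CICSPROD", fields)
```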
Open and scalable log analysis
End-to-end workflow
38
The Transaction Analysis Workbench batch job on z/OS:
– Builds a composite log index: captures and synthesizes detailed transaction indices
– Produces CSV/JSON for analysis on Linux, Windows, or z/OS
– Provides metadata (HCatalog and Logstash configurations) to assist analysis and improve access to the data
Streaming JSON Lines over TCP to Splunk
39
Logs are streamed as JSON Lines over a TCP network connection to a Splunk TCP data input, directly from the batch job with no staging files. Both unsecure and secure (SSL/TLS) connections are supported.
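A minimal sketch of the streaming pattern: a Splunk TCP data input simply reads newline-delimited events from a socket, so a local stand-in server plays Splunk's role here. Record contents are invented, and the SSL/TLS variant (wrapping the socket) is not shown:

```python
import json
import socket
import threading

# Stand-in for a Splunk TCP data input: accept one connection and
# collect the newline-delimited JSON events it receives.
server = socket.socket()
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
received = []

def accept():
    conn, _ = server.accept()
    with conn, conn.makefile() as f:
        received.extend(json.loads(line) for line in f)

t = threading.Thread(target=accept)
t.start()

# The "batch job" side: stream each record as one JSON line over TCP,
# with no intermediate staging file.
with socket.create_connection(("127.0.0.1", port)) as sock:
    for rec in [{"tran": "BANK", "resp": 11.13}, {"tran": "PAY1", "resp": 0.04}]:
        sock.sendall((json.dumps(rec) + "\n").encode())

t.join()
server.close()
```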
Elasticsearch, Logstash, Kibana (ELK stack)
40
We use Logstash to feed data into Elasticsearch and view in Kibana (all open source)
Quickly search your logs
41
Kibana makes your log data searchable using both unstructured and structured searches
– Create custom visualizations: build your own visualization aggregations (instead of 3270 reports)
– Combine visualizations into dashboards: bring problems to the foreground and know exactly where your transactions are spending the most time
– Sift through millions of transactions: go from millions of transactions to a list of problem transactions
– Provide URLs with complete transaction details (from the Workbench index)
Splunk: IMS and Connect combined dashboard
45
Splunk: CICS dashboard – CPU and response time breakdown
46
Every transaction, every metric can be investigated
47
Combined CICS and DB2 dashboard – where is the problem?
48
Advantages of the solution
Minimal barrier to entry for proof-of-concept implementations. All that is needed:
– Existing logging on z/OS (no agents to configure)
– A Hadoop implementation on the network
Or:
– Supported Logstash output (particularly Elasticsearch/Kibana)
– Generally, the dialog-configured JCL will work out of the box
Comprehensive: covers most transactional information sources
Flexible: Direct offload with Co:Z or sftp or any preferred file transfer mechanism
Scalable: rely on the inherent capabilities of big data platforms to grow your historical
database and identify trends and exceptions
Summary
Broad, deep coverage of z/OS-based instrumentation data
Consistent interface for all log types
For many different types of user: “first responders”, application developers, system
programmers, subsystem subject-matter experts
Transaction tracking: view events for a transaction across all available subsystem logs
Automated file selection: IMS logs, DB2 logs, SMF files, OMEGAMON NTH (DB2 trace)
Sessions promote collaboration between subject matter experts
Analysis off z/OS in big data applications
50
More information: go to the IBM website at http://www-03.ibm.com/software/products/en/tawforzos
Session feedback
• Please submit your feedback at
http://conferences.gse.org.uk/2016/feedback/hg
• Session is HG
This is the last
slide in the deck