

Ironstream v2.1.2 Configuration and User’s Guide
Version 2.1.2 - PTF SDF21H1
© 2014, 2021 Syncsort Incorporated. All rights reserved.
This document contains unpublished, confidential, and proprietary information of Syncsort Incorporated. No disclosure or use of any portion of the contents of this document may be made without the express written consent of Syncsort Incorporated.
Version 2.1.2 - PTF SDF21H1 Last Update: June 7, 2021
Contents

List of Figures .......... ix
List of Panels ................................................................................................................... xi
Chapter 3: Setting Up Splunk for Ironstream ..................................................3-1
Overview of Setting Up Splunk .......... 3-1
Setting Up a Non-SSL Port .......... 3-2
Setting Up an SSL Port .......... 3-2
Setting Up a Splunk Index .......... 3-3
Next Steps .......... 3-3
Chapter 4: Setting Up Elastic for Ironstream
Overview of Elastic Support in Ironstream .......... 4-1
Forwarding Data to Logstash .......... 4-2
Receiving Ironstream Data in Elasticsearch .......... 4-4
Displaying Ironstream Data in Kibana .......... 4-4
Field Mappings and Elastic Defaults .......... 4-4
Next Steps .......... 4-5
Chapter 5: Setting Up Kafka for Ironstream
Overview of Apache Kafka Support in Ironstream .......... 5-2
Downloading and Installing Kafka to z/OS OMVS Systems .......... 5-3
Applying the Kafka Function to Ironstream .......... 5-4
Providing APF Authorization for Ironstream Programs .......... 5-5
Configuring Ironstream to Use the Kafka Producer .......... 5-6
Kafka Delivery Guarantees Recommended by Ironstream .......... 5-8
Confirming Kafka Activity Status in Ironstream .......... 5-12
Dynamically Set Topic and Key When Using Ironstream API and KAFKA Feature .......... 5-13
Next Steps .......... 5-13
Section III, Configuring and Running Ironstream
Chapter 6: Configuring Ironstream Components ...........................................6-1
Overview .......... 6-2
Manually Configuring Ironstream .......... 6-4
About the Ironstream Configurator Utility .......... 6-7
Running the Ironstream Configurator Utility .......... 6-11
Post Configurator: Additional Actions .......... 6-23
Chapter 7: Manually Setting Ironstream Parameters
Overview of the Configuration File .......... 7-2
General Syntax Rules .......... 7-3
Configuration Parameters .......... 7-6
Typical Ironstream Parameters .......... 7-30
Configuration File Examples .......... 7-32
Chapter 8: Controlling Ironstream Components
Controlling Ironstream Forwarders .......... 8-1
Controlling the Ironstream Desktop (IDT) .......... 8-3
Controlling the Data Collection Extension (DCE) .......... 8-4
Controlling the Network Monitoring Components (NMC) .......... 8-7
Chapter 9: Dynamic Reconfiguration
Overview .......... 9-1
Dynamic Reconfiguration Limitations .......... 9-2
How Ironstream Performs a Dynamic Change in the Current Configuration .......... 9-2
Dynamic Reconfiguration Commands .......... 9-3
Dynamic Reconfiguration Procedure .......... 9-3
Messages Issued by Dynamic Reconfiguration .......... 9-4
Chapter 10: Configuring Data Loss Prevention
Overview .......... 10-1
Ironstream System Requirements for Using DLP .......... 10-2
Configuring Ironstream DLP Parameters .......... 10-4
Configuring Splunk Parameters .......... 10-5
Best Practices When Using DLP .......... 10-5
Messages Issued by DLP .......... 10-5
Chapter 11: Syslog Message Filtering ................................................................ 11-1
Overview of Filter Modules .......... 11-1
Syslog Message Filtering .......... 11-2
Ironstream SYSLOG Continuous Offload Reporter (ISCOR) .......... 11-3
Chapter 12: SMF Record Filtering
Overview of SMF Record Filtering .......... 12-2
Gathering SMF Data .......... 12-3
Supported SMF Record Types .......... 12-4
Manually Defining SMF Filtering Configurations .......... 12-8
Defining Custom SMF Numbers for ISV Products .......... 12-8
Limiting SMF Record Selection with WHERE Search Conditions .......... 12-9
Using the SMF Filter Configuration Builder in IDT .......... 12-15
Using the READ Command to Share SMF Filter Configurations .......... 12-24
Implementing a Custom CICS Monitor Dictionary in Ironstream .......... 12-25
Sample SMF Filter Configurations .......... 12-33
Chapter 13: SYSOUT Forwarding ....................................................................... 13-1
Using the SYSOUT Forwarding Function .......... 13-1
SYSOUT Selection and Filtering .......... 13-4
Using the Advanced PRINT Data Block Parameters .......... 13-13
Preserving SYSOUT State Across Restarts .......... 13-16
SYSOUT Forwarding Parameter Examples and Sample Output .......... 13-19
Chapter 14: Alerts and SyslogD Forwarding .................................................. 14-1
Overview of Syncsort Network Management Components .......... 14-2
Configuring ZEN for Ironstream .......... 14-2
ZEN Component Alert Generation .......... 14-3
The OSA MONITOR (ZOM) .......... 14-3
The LINUX MONITOR (ZLM) .......... 14-7
FTP CONTROL (ZFC) .......... 14-8
The EE MONITOR (ZEM) .......... 14-10
The IP MONITOR (ZIM) .......... 14-13
Routing SyslogD Messages to Ironstream .......... 14-19
More Information about Alerts and SyslogD Forwarding .......... 14-20
Chapter 15: DB2 Data Forwarding ..................................................................... 15-1
Overview of DB2 Tables .......... 15-1
Configuration for DB2 Table Data .......... 15-1
Enabling Data Loss Prevention in Splunk for DB2 Forwarding .......... 15-3
Chapter 16: Sequential File Forwarding ........................................................... 16-1
Capturing Sequential Data .......... 16-1
Sequential File Forwarding Example .......... 16-2
Batching FILELOAD Data .......... 16-4
Chapter 17: System State Forwarding
Overview .......... 17-1
Configuring Ironstream for System State Forwarding .......... 17-2
System-level Data Fields Forwarded to Destinations .......... 17-2
Chapter 18: Configuring and Using the Ironstream API ........................... 18-1
Overview of the Ironstream API .......... 18-2
Defining the IRONSTREAM_API Data Type .......... 18-3
Using the Single-send API .......... 18-5
Using the Multi-send API .......... 18-12
Troubleshooting the Ironstream API .......... 18-17
Ironstream API Coding Examples .......... 18-21
Ironstream API and KAFKA - Dynamic Topic and Key Support Feature Details .......... 18-51
Chapter 19: Setting Up Log4j ............................................................................... 19-1
Overview of Log4j .......... 19-1
Defining the Log4j Parameters .......... 19-2
Sample Log4j Configurations .......... 19-2
How to Use PATTERN in the Log4j Reader Facility .......... 19-4
Chapter 20: IMS Log Record Forwarding ....................................................... 20-1
Overview of IMS Log Record Forwarding .......... 20-2
Synchronous IMS Log Gathering .......... 20-3
Asynchronous IMS Log Gathering .......... 20-4
IMS Log Record Extraction Process .......... 20-5
IMS Log Record Processing .......... 20-7
IMS Log Record Field Descriptions .......... 20-8
Messages Issued by IMS Log Records .......... 20-85
Section V, Setting Up the Data Collection Extension Data Types
Chapter 21: Configuring the DCE Parameters ............................................... 21-1
Overview of DCE .......... 21-1
DCE Configuration Files .......... 21-2
Syntax for DCE Configuration Parameters .......... 21-5
Ironstream Forwarder Configuration Files .......... 21-5
Chapter 22: Setting Up USS File Collection ................................................... 22-1
Overview of USS File Collection .......... 22-1
Summary of the DCE USS Offload Functions .......... 22-5
Configuring DCE for USS File Offload .......... 22-5
Duplicate USS File Detection .......... 22-14
USS File Tailing Process .......... 22-15
Tracking Rolled USS Log Files .......... 22-16
Detecting and Controlling Multi-line Log Records .......... 22-19
Dynamically Modifying USS Processing .......... 22-21
Chapter 23: Setting Up the RMF Data Forwarder ...................................... 23-1
Overview of the RMF Data Forwarder .......... 23-1
Configuring the RMF Data Forwarder .......... 23-2
Configuring DCE RMF Parameters .......... 23-2
Defining Security Settings .......... 23-5
Setting the RMF Filters in IDT .......... 23-6
Sample Scenario for Setting RMF Filters .......... 23-9
Section VI, Integration with Splunk Premium Applications
Chapter 24: Splunk Enterprise Security and Ironstream .......................... 24-1
About Splunk Enterprise Security and Ironstream .......... 24-1
Intrusion Detection .......... 24-2
TSO Log-on Activity .......... 24-3
TSO Account Activity .......... 24-3
FTP Sessions .......... 24-4
FTP Change Analysis .......... 24-5
IP Traffic Analysis .......... 24-5
Network Management/User-Defined Notification .......... 24-6
Overview .......... 25-1
Management Commands .......... 25-1
MODIFY Commands .......... 25-2
Auxiliary Commands .......... 25-5
SMF Real-time INMEM Commands .......... 25-5
Message Flood Automation and Syslog Message Collection .......... 26-1
Network Contention .......... 26-1
Data Store Filling or Full Condition .......... 26-2
Chapter 27: Ironstream Messages ..................................................................... 27-1
Overview of Ironstream Messages .......... 27-1
Ironstream Messages .......... 27-2
Data Collection Extension Messages .......... 27-81
Ironstream SYSLOG Continuous Offload Reporter (ISCOR) Messages .......... 27-96
Chapter 28: Diagnostics and Contacting Precisely Support ................... 28-1
Before Calling Precisely Support .......... 28-1
Contacting Precisely Support .......... 28-2
Chapter 29: Using the Ironstream Data Usage Reporter ......................... 29-1
Overview .......... 29-1
Configuring the Report Parameters .......... 29-2
Using the Report TRACE Facility .......... 29-5
CSV File Report Format .......... 29-5
Overriding the Default SMF Record Number .......... 29-6
System Messages for the Data Usage Reporter .......... 29-7
Section IX, Appendices
Syslog Format .......... A-2
FILELOAD Format .......... A-3
SYSOUT Format .......... A-3
Log4j Format .......... A-4
Alert Format .......... A-4
SyslogD Format .......... A-5
Overview of SSDFCPR .......... B-1
Executing SSDFCPR .......... B-1
List of Figures
List of Panels
Initial Configuration .......... 6-11
Ironstream Component Selection .......... 6-12
List NMC Configuration Members .......... 6-13
Ironstream Forwarder Tasks .......... 6-14
DCE Configuration .......... 6-15
IDT Configuration .......... 6-17
Setting Authority Level within IDT .......... 6-18
Log4j Forwarder Parameters .......... 6-19
EE Monitor Configuration .......... 6-20
FTP Control Configuration .......... 6-21
IP Monitor Configuration .......... 6-21
OSA Monitor Configuration .......... 6-22
UNIX Server Configuration .......... 6-22
Ironstream Desktop Login Panel .......... 8-3
SMF Filter Configuration - DEFAULT only .......... 12-17
Add Configuration .......... 12-19
SMF Filter Configurations .......... 12-19
Configuration BOBDYLAN - Collapsed Record Types .......... 12-20
Configuration BOBDYLAN - Expanded Record Types .......... 12-22
Viewing SMF 110 Subtypes 1 and 2 .......... 12-22
Viewing Configuration BOBDYLAN .......... 12-23
JSON-formatted SYSOUT data records in Splunk .......... 13-20
Synchronous IMS Log Capture .......... 20-2
Asynchronous IMS Log Capture .......... 20-3
Ironstream Desktop Menu Bar .......... 22-21
Accessing the USS File Status .......... 22-21
USS File Status .......... 22-22
USS Defaults .......... 22-23
USS Directories .......... 22-23
Directory Scan Attributes .......... 22-24
USS Filters .......... 22-25
Update RMF Collection Settings .......... 23-5
RMF Filters Panel .......... 23-7
RMF Filters - No Metrics Selected .......... 23-9
RMF Elements for VOL .......... 23-10
RMF Filters Panel - With 3 Volumes Selected .......... 23-10
RMF Filters Panel - With 3 Volumes Selected .......... 23-11
RMF Metrics for Volume Panel - With 3 Volumes Selected .......... 23-11
RMF Metrics for Volume Panel - With 25 Metrics Selected .......... 23-12
RMF Metrics for Volume Panel - With 75 Metrics Selected .......... 23-12
Selected RMF Metrics Panel .......... 23-13
RMF Attributes for ENCLAVE .......... 23-14
RMF Filters Panel - With Workloads Metrics Selected .......... 23-14
List of Tables
Sections and Chapters in this Guide .......... 1-2
Text Conventions .......... 1-5
Network Monitoring Components .......... 2-4
Ironstream Mainframe Data Sources .......... 2-5
Ironstream Data Sources .......... 6-5
Generated NMC Member Names for Manual Tailoring .......... 6-23
EBCDIC to ASCII Translate Modules .......... 7-18
Unconverted EBCDIC Hex Values .......... 7-19
SOURCE Data Types .......... 7-20
DATA_FORMAT Parameter Supported SMF Type Details .......... 7-25
DCE Start Types .......... 8-5
Syslog Record Types for Filtering .......... 11-2
Supported SMF Records .......... 12-4
Supported SMF ISV Products .......... 12-8
WHERE Statement Parameters .......... 12-10
SMF Filter Configuration Panel Controls .......... 12-17
Configuration CONFNAME Panel Controls .......... 12-21
SELECT_JOB_IF_ Keywords .......... 13-7
SELECT_DATA_SET_IN_JOB_WITH Keywords .......... 13-8
FILTER_JOB_ON Keywords .......... 13-8
JES2 PHASE Keyword Values .......... 13-11
JES3 PHASE Keyword Values .......... 13-12
MIB Performance Threshold Alerts .......... 14-4
MIB Status Change Alerts .......... 14-4
OSA MONITOR-Generated Alerts .......... 14-5
MIB Overall Counter Increase Alerts .......... 14-5
FTP Control Alerts .......... 14-8
EE MONITOR Alerts .......... 14-10
IP MONITOR Alerts .......... 14-13
FILELOAD Batching Example 1 .......... 16-5
FILELOAD Batching Example 2 .......... 16-6
System State Data Fields .......... 17-2
Required SSDFAPI Parameters for the Single-send API .......... 18-6
Ironstream API Environment .......... 18-8
Ironstream API Input Registration .......... 18-8
Ironstream API Output Registration .......... 18-8
Ironstream API Parameter List .......... 18-9
Required SSDFPAPI Parameters for the Multi-send API .......... 18-13
INIT Request Parameter List .......... 18-15
SEND Request Parameter List .......... 18-16
TERM Request Parameter List .......... 18-16
List of Transient API Parameters .......... 18-52
List of Persistent API Parameters .......... 18-53
RMF Filters Panel Controls .......... 23-8
Ironstream Messages .......... 27-2
Data Collection Extension Messages .......... 27-81
Ironstream SYSLOG Continuous Offload Reporter (ISCOR) Messages .......... 27-96
Section I About Ironstream
This section provides introductory and overview information about Ironstream and how it works.
• “Introduction”
Chapter 1 Introduction
The Ironstream Configuration and User’s Guide provides instructions for configuring and running Ironstream instances. It also describes how to configure the supported data source parameters to optimize data collection and forwarding efficiency for your environment.
Topics:
• “What’s in this Guide” on page 1-2
• “Audience” on page 1-4
• “Conventions” on page 1-5
What’s in this Guide

The Ironstream Configuration and User’s Guide is divided into categorical sections that contain related chapters.
Table 1-1: Sections and Chapters in this Guide
Title - Description

SECTION I: About Ironstream

Introduction - This chapter.
Understanding Ironstream - Provides a detailed overview of Ironstream and its components.
SECTION II: Configuring Ironstream Target Destinations
Setting Up Splunk for Ironstream - Describes how to set up Splunk indexes and TCP ports for forwarding data to Splunk.
Setting Up Elastic for Ironstream - Describes how to set up Elastic products to ingest data forwarded by Ironstream.
Setting Up Kafka for Ironstream - Describes how to set up Ironstream’s internal Kafka producer to publish z/OS data to Kafka brokers.
SECTION III: Configuring and Running Ironstream
Configuring Ironstream Components
Manually Setting Ironstream Parameters
Explains how to configure the Ironstream configuration file parameters to define data sources and destinations. It also includes descriptions of typical Ironstream parameters and example configuration files.
Controlling Ironstream Components
Instructions for starting and stopping the Ironstream forwarders and their components.
Configuring Data Loss Prevention Describes how to configure Ironstream to minimize loss of forwarded Splunk data due to extended network or Splunk server outages.
SECTION IV: Setting Up Ironstream Data Sources
Syslog Message Filtering Describes how to configure syslog message filtering.
SMF Record Filtering Describes how to configure field-level filtering for SMF record types, either by manually creating control records in the Ironstream configuration file or using the GUI-based “SMF Filter Configuration Builder” in the Ironstream Desktop.
SYSOUT Forwarding Describes how to configure the SYSOUT data forwarder to select and forward data sets.
Alerts and SyslogD Forwarding Describes the optional Network Monitoring components that enable alert monitoring and SyslogD forwarding.
DB2 Data Forwarding Describes how to configure DB2 table data for forwarding.
Sequential File Forwarding Describes how to capture data written and stored in sequential datasets.
System State Forwarding Describes how to capture z/OS LPAR system performance metrics for forwarding.
Configuring and Using the Ironstream API
Describes how to configure the Ironstream API to capture application information for analysis and visualization.
Setting Up Log4j Describes how to configure log4j configuration files to collect log4j records.
IMS Log Record Forwarding Describes how to configure Ironstream to gather IMS log records.
SECTION V: Setting Up Data Collection Extension Data Types
Configuring the DCE Parameters Describes how to configure the Data Collection Extension (DCE), an Ironstream component that provides extensions for collection of data from a variety of data types.
Setting Up USS File Collection Describes how to configure DCE to offload Unix System Services (USS) to Ironstream.
Setting Up the RMF Data Forwarder
Describes how to configure the RMF Data Forwarder to collect Resource Measurement Facility III (RMF III) system performance and utilization data.
SECTION VI: Troubleshooting Ironstream
Operational Considerations Describes some operational considerations when using Ironstream, such as message flood automation, network contention, and data store conditions.
Ironstream Messages Contains the system messages generated by core Ironstream and DCE.
Audience This document is intended for system administrators who are configuring and running Ironstream.
Related Resources For more information about Ironstream functionality and enhancements, refer to the Precisely Knowledge Base and the additional Ironstream manuals described below.
Knowledge Base Ironstream is continually enhanced to add more functionality. Enhancements are documented in Precisely Knowledge Base articles. The Precisely Knowledge Base contains information about enhancements, topics of interest and product defects. The Knowledge Base is available to licensed users and is accessible through the Precisely Support Portal: support.precisely.com. Search on the Ironstream product by release to find all relevant information about Ironstream.
To improve the search process for Ironstream V2.1.2 enhancements, all enhancement-related Knowledge Base articles are indicated with the inclusion of “Enhancement” in the article title. When querying the Knowledge Base, you can also enter “enhancement” in the Search for field to list all enhancement-related articles for Ironstream V2.1.2.
Diagnostics and Contacting Precisely Support
Contains basic diagnostic information and contact information for Precisely Support.
SECTION VII: Ironstream Audit Reporting
Using the Ironstream Data Usage Reporter
Provides information about how to use Ironstream auditing reports.
SECTION VIII: Integration with Splunk Premium Applications
Splunk Enterprise Security and Ironstream
Describes how the Ironstream Technology Add-on can be configured to work with the Splunk Enterprise Security application.
SECTION IX: Appendices
Forwarded Data Formats Contains the data formats that are forwarded to Splunk.
The SSDFCPR Utility Describes how to use the SSDFCPR utility.
Additional Documentation The following documentation is available for download in the Ironstream section of the Precisely Support portal: support.precisely.com.
• Ironstream New Features and Functions – A detailed compendium of all Ironstream V2.1.2 enhancements.
• Ironstream Program Directory – For system programmers responsible for program installation and maintenance. It contains information concerning the materials and procedures associated with the installation of Ironstream V2.1.2.
• Ironstream SMF Record Field Reference – An HTML-based reference that describes all of the SMF record fields that are supported by Ironstream. This reference is available on the “Ironstream Documentation” page on the Precisely Support portal.
• Syncsort Network Management Component (NMC) Manuals – Full details of all of the configuration options outlined in “Alerts and SyslogD Forwarding” on page 14-1 are available in the appropriate Configuration and Reference or Administration manual for the ZEN component concerned and/or in the ZEN Help system.
Conventions The following text conventions are used in this document:
Table 1-2: Text Conventions
Convention Meaning
boldface Boldface type indicates graphical user interface elements associated with an action, or terms defined in text or the glossary.
italic Italic type indicates book titles, emphasis, or placeholder variables for which you supply particular values.
monospace Monospace type indicates commands within a paragraph, URLs, code in examples, text that appears on the screen, or text that you enter.
Chapter 2 Understanding Ironstream
This chapter provides an overview of Ironstream and its components.
Topics: • “What Is Ironstream?” on page 2-1
• “Ironstream Components” on page 2-2
• “Supported Data Sources” on page 2-5
• “Ironstream Design Roadmap” on page 2-7
• “Ironstream Starter Edition” on page 2-8
• “Integration with Splunk Premium Applications” on page 2-9
What Is Ironstream? Ironstream enables you to access real-time, mainframe operational insights through a number of supported data destination platforms, such as Splunk, Elastic Logstash, and Apache Kafka. Ironstream captures many different types of data from your z/OS mainframe and generates JSON formatted data that is forwarded via TCP/IP connections directly into a target destination repository.
Note: While most examples in this guide are shown using Splunk, there is no restriction on which supported destination repository can be used, unless specifically noted in that example.
What Data Does Ironstream Collect from z/OS? Ironstream can collect, transform and securely forward SMF records, IMS logs, JES SYSOUT data, syslogs, log4j messages, flat files, and system state performance metrics into a destination platform. Ironstream can also forward entire USS files and file changes, as well as RMF Monitor III metrics from the RMF distributed data server (DDS). There is also
an API for users of COBOL, C, PL/I, REXX, and Assembler applications for forwarding user-defined data to a target destination.
What Happens to z/OS Data in a Destination? Once the z/OS data is in a supported destination, it can be easily searched, analyzed, and visualized to gain valuable operational information. Ironstream thus enables you to have a real-time, 360-degree view of your IT infrastructure, including your mainframes.
With all your machine data coming into your chosen destination, you can correlate data from multiple mainframe sources, along with log data from distributed platforms, providing insights that were not previously possible.
What Happens If Ironstream Is Unable to Deliver Data to a Destination? There may be operational situations when Ironstream cannot deliver data to a destination. There are two options:
• For critical data sources being forwarded to Splunk, the Data Loss Prevention (DLP) feature stores the data in a coupling facility log stream. When the connection is restored, Ironstream automatically forwards the data to Splunk.
• For non-critical data sources that do not require DLP, there is an offline forwarder batch utility.
When using DLP, you must also make minor configuration changes to your Splunk server. For more information about DLP, see Chapter 10, “Configuring Data Loss Prevention”.
Note: Ironstream DLP uses the Splunk Indexer Acknowledgment function and therefore cannot be configured to work with Elastic destinations.
Fast and Efficient Performance Ironstream is fast and efficient while having very limited impact on z/OS system resources. Processing is offloaded to zIIP engines as appropriate to reduce CPU time consumption.
Ironstream Components Ironstream Forwarders Ironstream delivers data to configured destinations using what is referred to as a forwarder; each forwarder is configured for a specific data source. For some data sources, the forwarder is also a data gatherer that filters and formats data before it is forwarded to destinations. To illustrate this, consider SMF and syslog data sources:
• SMF data is filtered by record type, in some cases by subtype. Many of the fields across all SMF records contain numeric data, often in binary format. Ironstream performs significant formatting that is required to forward these fields in a way that makes them useful and usable in destinations.
• Syslog data is filtered by product/subsystem or message prefix, but the data is entirely EBCDIC, so Ironstream converts it to ASCII and delivers it to destinations.
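The EBCDIC-to-ASCII conversion described above can be sketched in a few lines of Python. This is an illustration only — Ironstream performs the conversion natively on z/OS — and the syslog message text is a hypothetical example:

```python
# Illustrative only: convert an EBCDIC (code page 037) byte string, the
# encoding used for z/OS syslog data, into an ASCII/Unicode string.
ebcdic_bytes = "IEF403I MYJOB - STARTED".encode("cp037")  # hypothetical syslog message

# The raw bytes are unreadable on ASCII platforms until decoded.
ascii_text = ebcdic_bytes.decode("cp037")
print(ascii_text)
```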
Other data sources, such as the USS file monitor, use a combination of the Data Collection Extension (DCE) and the Ironstream Desktop (IDT) for filtering and gathering, but still rely on a forwarder to deliver the data to destinations. Ultimately, an Ironstream forwarder delivers a specific data source to destinations, but the processing required to prepare the data differs by data source.
For more information about how Ironstream forwarders interact with other Ironstream components, see “DCE, IDT, and XCF Configuration Considerations” on page 6-8.
Data Collection Extension (DCE) The DCE component provides extensions for collection of data from a variety of sources. DCE currently provides support for:
• Monitoring UNIX System Services (USS) directories and files, which can be offloaded through Ironstream to a specified destination, such as a Splunk index.
• Collecting Resource Measurement Facility (RMF) III system performance and utilization data in real time to help prevent potential bottlenecks and performance delays.
Other DCE data types may be added in the future. For more information, see Chapter 21, “Configuring the DCE Parameters”.
The Ironstream Desktop (IDT) The IDT component provides GUI-based administration capabilities for some of Ironstream’s key features. IDT administration includes defining the criteria required to control data collected by Ironstream and forwarded to destinations, such as:
• For USS file processing, IDT can dynamically make changes to the USS defaults, filters, and directories, as well as monitor the status of USS file offloads in real-time. See Chapter 22, “Setting Up USS File Collection”.
• For RMF III metric collection, IDT can be used to configure and monitor the RMF Data Forwarder, and to control the filtering of RMF fields. For more information, see Chapter 23, “Setting Up the RMF Data Forwarder”.
• For SMF record filtering, IDT provides an “SMF Filter Configuration Builder” that presents all supported SMF types in an easy-to-navigate expandable/collapsible tree format for creating a variety of SMF filtering configurations that can filter specific combinations of selected record types, subtypes, and fields. For more information, see Chapter 12, “SMF Record Filtering”.
Network Monitoring Components Alerts Optionally, you can install the set of network monitoring tools collectively known as “Network Monitoring Components” or NMC. These enable you to generate network-related alerts and capture SyslogD messages that Ironstream can forward to destinations.
SyslogD support can collect SyslogD messages from the z/OS Syslog Daemon, remote z/OS Syslog Daemons, or any other network device. Messages can be filtered based on Origin, Facility, and/or Priority.
Table 2-1 describes how each component can generate alerts for a variety of events that can occur in the z/OS system. For more information, see Chapter 14, “Alerts and SyslogD Forwarding”.
Table 2-1: Network Monitoring Components
Component Description ZEN ZEN is the core of the Network Monitoring component set and
provides many common functions for all other ZEN components: a browser-based online interface; software routing; a centralized alerting function (including user-defined alerts); enhanced system log; SyslogD support; IP tools; CSM, ECSA and USS panels; common reporting; REXX support with ZEN Function Pack; user-definable menus and panel support; timer, message and alert-driven automation.
ZEN IP MONITOR Real-time IP stack monitor providing full insight into all IP network data via both 3270 VTAM and ZEN-based panels. Many IP monitors are provided such as: IP stack (services, interfaces, activity, connections, gateways), TN3270 response time, OSPF, EE, X.25, MIBs, OMVS thresholds. Many alerts available in several categories such as Availability, Performance, Capacity and Message. Several utilities available for network analysis and monitoring. Handles single or multiple stacks on single or multiple LPARs and across as many CPUs and Sysplex configurations as required.
ZEN EE MONITOR Sophisticated monitor for EE and APPN/HPR data providing: insight into EE activity and performance; dynamic alerting and incident reporting; online and offline historical data recording; comprehensive diagnostic toolset; Netview/REXX interface for automated network monitoring via ZEN; optional collection and display of SNA session awareness data.
ZEN OSA MONITOR Provides a convenient single point from which you can monitor all OSAs accessible to the LPAR on which ZEN is running. Provides both event and threshold monitoring and panels for OSA information, channels and ports, interfaces, LPARs, VTAM and interface information. Particularly useful is the Health Check function which provides a way of dynamically checking whether there are any problems or potential problems with your OSAs.
Supported Data Sources The sources of mainframe data that can be captured and forwarded by Ironstream are provided in Table 2-2:
ZEN LINUX MONITOR Enables you to monitor all of your Linux systems using a set of clear panels. Also enables you to set thresholds for key Linux resources such as the System and User CPU, memory and swap utilization, TCP connections and retransmissions and number of active processes, which are monitored and an alert raised should the usage exceed the threshold set.
ZEN FTP CONTROL Monitors activity in all z/OS FTP servers and clients. History file available for browsing online that records all FTP activity. Provides many monitoring panels both as 3270 VTAM and ZEN-based panels. Every FTP action such as RACF logon failures, invalid userid/password, unrecognized command, complete or incomplete transfers both inbound and outbound, cause an alert message to be issued.
Table 2-2: Ironstream Mainframe Data Sources
Source Description API User applications can create and send user data and forward it to
a destination. Ironstream supports two user API functions:
• Single-send for use where a persistent environment is not avail- able, such as in a CICS transaction.
• Multi-send for use where the environment is consistent from call-to-call, such as in a batch job.
See “Configuring and Using the Ironstream API,” on page 18-1.
DB2 Data inserted into DB2 tables. See “DB2 Data Forwarding,” on page 15-1.
FILELOAD Any file containing records that have displayable EBCDIC data. See “Sequential File Forwarding,” on page 16-1.
IMS Many IMS log records can be captured by Ironstream. For a list of all supported IMS record types, see “IMS Log Record Forwarding,” on page 20-1.
Log4j Application user log data. See “Setting Up Log4j,” on page 19-1.
Network Alerts Alerts generated by the ZEN suite of Network Monitoring components. See “Alerts and SyslogD Forwarding,” on page 14-1.
Understanding Ironstream
For descriptions of all the fields in the supported SMF record types, refer to the HTML-based Ironstream SMF Record Field Reference available on the “Ironstream Documentation” page on the Precisely Support Portal: support.precisely.com.
For syslog and FILELOAD data, refer to “Forwarded Data Formats” on page A-1, which provides a list of the fields and an example of the field formatting in a target destination.
Ironstream forwards one type of data in each Ironstream instance. As an example, this means that to collect syslog, SMF and log4j data requires three address spaces to be started. Each instance of Ironstream requires a separate configuration file member that includes a NAME parameter in the SYSTEM section. This parameter defines the unique ‘instance name’ of that specific instance of Ironstream running in the LPAR.
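For example, a configuration member for a dedicated SMF instance would include a SYSTEM section along the lines of the sketch below. The SYSTEM section and NAME parameter are described in the text; the instance name shown is a hypothetical example — see Chapter 7, “Manually Setting Ironstream Parameters” for the authoritative syntax:

```
"SYSTEM"
    "NAME":"SMFFWD1"
```

A separate member, each with its own unique NAME, would be created for the syslog and log4j instances.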
RMF DCE data type that collects user-selected RMF III system performance and utilization data. See “Setting Up the RMF Data Forwarder,” on page 23-1.
SMF Many SMF record types can be captured by Ironstream. For a list of all supported SMF record types, see “SMF Record Filtering,” on page 12-1.
Syslog All messages can be captured, or selected messages from ACF2, CICS, DB2, IMS, RACF, Top Secret, USS, WebSphere Application Server and WebSphere MQ and z/OS IEF messages. You may also specify three and four-character message prefixes to capture. See “Syslog Message Filtering,” on page 11-1.
SyslogD Any messages written to SyslogD and captured by the ZEN Network Monitoring core component. See “Alerts and SyslogD Forwarding,” on page 14-1.
SYSOUT Spool data sets for both active and completed jobs as well as input data sets and JES system data sets: JESJCLIN, JESMSGLG, JESJCL and JESYSMSG. See “SYSOUT Forwarding,” on page 13-1.
SYSTEMSTATE Generates z/OS system-level metrics data to forward to a destination. See “System State Forwarding,” on page 17-1.
USS DCE data type that monitors USS directories and files at specified intervals for offloading to Ironstream according to user-defined filters. See “Setting Up USS File Collection,” on page 22-1.
Ironstream Design Roadmap This section provides some design decision points to consider when initially installing Ironstream in your z/OS environment.
Which destination is your data being forwarded to? • Splunk – Requires an authorized Splunk resource to open one or more ports to Splunk
server(s), define TCP data inputs (SSL or non-SSL), and set up Splunk indexes to store data.
• Elastic – Requires an authorized Elastic resource to open one or more ports to Logstash server(s), define TCP/HTTP input plug-ins (SSL or non-SSL), and set up Elasticsearch indexes to store data.
• Kafka – The Ironstream Kafka modules and the Java libraries must be APF-authorized.
What type of z/OS environment? • IBM Cross System Coupling Facility (XCF). Required for these forwarders:
DCE USS
DCE RMF
IRONSTREAM API
ZEN Network Monitoring Component (NMC)
• IBM coupling facility log stream. Required for these functions:
Data Loss Prevention (DLP)
SMF Real-time InMemory (INMEM)
• IBM System Authorization Facility (SAF). Optional for DLP
• VTAM application major node. Required for:
Ironstream Desktop (IDT)
ZEN Network Management Component (NMC)
• RMF Distributed Data Server (DDS) for DCE RMF
What type of security?
System (default)
RACF
ACF2
Top Secret
What data types are going to be forwarded? • SMF data capture method:
SMF System Exits – Must be defined in all SYS and SUBSYS EXIT parameter lists in the SMFPRMnn member of SYS1.PARMLIB or one of its related data sets
SMF Real-time (INMEM) interface – Requires IBM coupling facility log stream or DASD-only log stream
• Data Collection Extension for USS monitoring and RMF III metric collection:
Requires IBM Cross System Coupling Facility (XCF)
Requires IDT for dynamic monitoring and configuration changes
DCE USS requires file permissions in HFS
DCE RMF requires a RACF user ID to access the RMF Distributed Data Server (DDS)
• Syslog – No access restrictions to OPERPARM segment of a RACF user ID:
If the z/OS system uses the Message Processing Facility, or any similar product, ensure that all messages to be forwarded by SDFLOG are set to AUTO(YES) in the MPF configuration.
• SYSOUT – Requires READ access to any JESSPOOL data set to be forwarded
• ZEN NMC – Requires a VTAM application major node
Does your environment require minimal data loss? • Data Loss Prevention:
Splunk only, via the “Splunk Indexer Acknowledgment” function
HTTP or HTTPS
Does not support DCE USS
Should I manually configure Ironstream or use the Configurator Tool? Precisely strongly recommends initially using the ISPF-based Ironstream Configurator tool, which generates the started task JCL and configuration files for the following data sources:
• SMF
• Syslog
• SYSOUT
• IDT
These resulting configuration files provide a good starting point from which additional customizations can be added.
Ironstream Starter Edition The Ironstream Starter Edition is a free version of Ironstream, with limited product support, for forwarding z/OS syslog data and SMF 205 type records to Splunk. Syslog data and SMF record type 205 are the only supported data sources in this version.
If you want to forward other sources of data besides z/OS syslog or SMF record type 205, or require technical support, you will need to license the fully functional and supported version of Ironstream. Email us at [email protected] for additional information.
Integration with Splunk Premium Applications Splunk offers a premium application called “Enterprise Security” (ES), a large, multi-faceted application that provides comprehensive security surveillance of an organization’s computing infrastructure. To leverage the Splunk ES application, Ironstream has created an “Ironstream® Technical Add-on for Splunk Enterprise Security” (or Ironstream TA-ES), which collects, formats, and manages the data from several z/OS mainframe sources powered by Ironstream. For more information, see Chapter 24, “Splunk Enterprise Security and Ironstream”.
Other Ironstream TAs may be added in the future for Splunk premium applications.
Section II Configuring Ironstream Target Destinations
This section provides instructions for configuring Ironstream target destinations.
• “Setting Up Splunk for Ironstream”
• “Setting Up Elastic for Ironstream”
• “Setting Up Kafka for Ironstream”
Chapter 3 Setting Up Splunk for Ironstream
This chapter describes how a Splunk administrator needs to set up one or more indexes for forwarding records to Splunk, as well as a custom TCP port for the mainframe data forwarded by Ironstream.
Topics: • “Overview of Setting Up Splunk” on page 3-1
• “Setting Up a Non-SSL Port” on page 3-2
• “Setting Up an SSL Port” on page 3-2
• “Setting Up a Splunk Index” on page 3-3
• “Next Steps” on page 3-3
Overview of Setting Up Splunk As a Splunk administrator, you will need to either define an index or identify an existing index for forwarding records to Splunk. Additionally, you will need to define a custom port for the mainframe data.
For example, if you attempt to send Ironstream data to Splunk using an existing default Splunk port, that port expects Splunk’s proprietary SPLUNKTCP protocol. Ironstream will send data to the default port and Splunk will accept it. However, when Splunk then attempts to communicate back with Ironstream (part of the SPLUNKTCP protocol), there will be no response and Splunk will terminate the TCP connection. Therefore, it is necessary to define a TCP port specifically for Ironstream to transmit its data to Splunk.
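The raw-TCP transmission described above can be sketched with a short, self-contained Python example. A local listener stands in for the custom Splunk TCP input, and the sender emits one single-line JSON event, the form in which Ironstream transmits data. The port is chosen dynamically and the event contents are hypothetical:

```python
import json
import socket
import threading

# Minimal sketch: a raw TCP listener standing in for the custom Splunk TCP
# input, and a sender emitting one single-line JSON event. Ironstream itself
# runs on z/OS; this only illustrates the wire format.
received = []

def listener(server_sock):
    conn, _ = server_sock.accept()
    with conn:
        buf = b""
        while True:
            chunk = conn.recv(4096)
            if not chunk:
                break
            buf += chunk
        received.append(buf)

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))          # ephemeral port instead of a fixed one
server.listen(1)
port = server.getsockname()[1]
t = threading.Thread(target=listener, args=(server,))
t.start()

event = {"MFSOURCETYPE": "SYSLOG", "message": "IEF403I MYJOB - STARTED"}
with socket.create_connection(("127.0.0.1", port)) as client:
    client.sendall((json.dumps(event) + "\n").encode("utf-8"))

t.join()
server.close()
print(json.loads(received[0].decode("utf-8")))
```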
For reliability and performance reasons, we recommend defining and using multiple ports on one or more Splunk Indexers. Refer to the Splunk documentation for more information.
The following sections provide some samples for setting up a port and an index.
Setting Up a Non-SSL Port Follow these steps to set up Splunk ports.
1. In the Splunk web launcher page, in the Data box, click Add Data.
2. In the Add Data to Splunk box, under Or choose a Data Source, click From a TCP port link.
3. Enter the port number in the TCP port box.
4. Set Manual and json for sourcetype options.
5. Click Save.
6. Repeat as needed.
Setting Up an SSL Port Setting up an SSL port requires action on both the Splunk and mainframe platforms.
Splunk Platform Following is a sample procedure to set up a Splunk SSL port. In this example, cacert.pem is used for the default Splunk SSL CA certificate, 9998 is used for the SSL port number, and server.pem is used for the Splunk server’s certificate.
1. Edit $SPLUNK_HOME/etc/system/local/inputs.conf:

   [tcp://9998]
   connection_host = dns
   sourcetype = json

   [tcp-ssl:9998]
   compressed = false

   [SSL]
   password = password
   rootCA = $SPLUNK_HOME/etc/auth/cacert.pem
   serverCert = $SPLUNK_HOME/etc/auth/server.pem
2. Restart Splunk services using: $SPLUNK_HOME/bin/splunk restart.
3. Send a copy of cacert.pem to the client on the mainframe.
Mainframe Platform The Splunk Server x.509 CA certificate must be available for Ironstream to validate the authenticity of the SSL connection. This can be done with a key database or by using an SAF product, such as RACF.
To set up a key database follow these steps:
1. Create a key database in the OMVS file system using the gskkyman program or use an existing key database.
2. Import the Splunk CA x.509 certificate (cacert.pem) sent from the Splunk platform into the key database.
3. Assign a label to this certificate in the key database.
Note: If a key token is used, specify *TOKEN*/key token name. For example, *TOKEN*/SYSTOKEN, where *TOKEN* indicates that a key token is used and SYSTOKEN is the key token name.
Setting Up a Splunk Index Follow these steps to set up a Splunk index.
1. In the Splunk web launcher page, in the upper right corner, click Settings.
2. Click Indexes under Data.
3. Click New.
5. Click Save.
Next Steps These chapters have information on configuring Ironstream for Splunk destinations:
• Chapter 6, “Configuring Ironstream Components” – Describes how to configure the Ironstream forwarder components for newly installed Ironstream instances using the Ironstream Configurator utility.
• Chapter 7, “Manually Setting Ironstream Parameters” – How to configure the Ironstream configuration file parameters to define data sources and target destinations.
• Chapter 10, “Configuring Data Loss Prevention” – How to configure Ironstream to minimize any loss of forwarded Splunk data due to extended network or Splunk outages.
Chapter 4 Setting Up Elastic for Ironstream
This chapter describes how an Elastic administrator needs to set up the Logstash, Elasticsearch, and Kibana products to ingest the mainframe data forwarded by Ironstream.
Topics: • “Overview of Elastic Support in Ironstream” on page 4-1
• “Forwarding Data to Logstash” on page 4-2
• “Receiving Ironstream Data in Elasticsearch” on page 4-4
• “Displaying Ironstream Data in Kibana” on page 4-4
• “Field Mappings and Elastic Defaults” on page 4-4
• “Next Steps” on page 4-5
Overview of Elastic Support in Ironstream Data from Ironstream is sent to Elastic in single-line JSON format and is readily consumed. Standard processing across Logstash, Elasticsearch, and Kibana is applied after some typical configuration has been carried out by an Elastic administrator. It is assumed that you are familiar with the general mechanics and configuration of Elastic.
The following sections cover the requirements for Logstash, Elasticsearch, and Kibana.
Elastic Limitations with Ironstream In the current release, some Ironstream features are not compatible with Elastic.
• Data Loss Prevention (DLP) – The DLP feature uses the Splunk Indexer Acknowledgment function and therefore cannot be configured to work with Elastic.
• DCE RMF – Due to the large volume of fields that can be generated for RMF, we recommend using the RMF field filtering capabilities provided in the Ironstream Desktop. For more information, see Chapter 23, “Setting Up the RMF Data Forwarder”.
• DCE USS – Data can be delivered to Elastic, but due to current limitations the source (file name) will not be supplied. For more information, see Chapter 22, “Setting Up USS File Collection”.
Forwarding Data to Logstash To get data flowing into your Elastic target environment, Ironstream will access the port that is configured as the input source in Logstash, using either TCP or HTTP.
Sending Data to Logstash For all connection methods, the DESTINATION section of the Ironstream configuration file must be configured as follows:
"DESTINATION" "TARGET":"ELASTIC" "TYPE":"TCP|HTTP|HTTPS" "IPADDRESS":"123.456.78.90" "PORT":"1000"
"TARGET": "ELASTIC" - This required parameter must be set to “ELASTIC” as the data target for this Ironstream instance. If a target is not specified, the default value for TARGET is “SPLUNK”.
"TYPE":"TCP | HTTP | HTTPS" - The required transmission protocol method.
• TCP – The default, signifies that standard TCP protocol will be used. When TCP is specified, all Ironstream connections must specify "SSL":"YES".
• HTTP – Signifies that the HTTP protocol will be used. When HTTP is specified, all Ironstream connections must specify "SSL":"NO".
• HTTPS – Signifies that the HTTP with SSL protocol will be used. When HTTPS is specified, all Ironstream connections must specify "SSL":"YES".
"IPADDRESS":"ipaddress" - This required parameter specifies the IP address (up to 255 characters) of a Logstash TCP listener. This can be either a numerical address as in “123.456.78.90” or an URL, such as “my_logstash_tcp_listener”.
"PORT": "port" - This required parameter specifies the port number on which the Logstash listening.
For more information, refer to the “DESTINATION Section” in Chapter 7, “Manually Setting Ironstream Parameters”.
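The DESTINATION rules above can be summarized in a small validation sketch. The parameter names, defaults, and SSL requirements are taken from the text; the validation function itself is hypothetical and not part of Ironstream:

```python
# Sketch of the DESTINATION rules described above, per the stated defaults:
# TARGET defaults to SPLUNK, TYPE defaults to TCP, and each TYPE mandates a
# specific "SSL" setting on all Ironstream connections.
def check_destination(dest):
    target = dest.get("TARGET", "SPLUNK")      # default when TARGET is omitted
    if target != "ELASTIC":
        raise ValueError("TARGET must be ELASTIC for a Logstash destination")
    conn_type = dest.get("TYPE", "TCP")        # TCP is the default protocol
    ssl_required = {"TCP": "YES", "HTTP": "NO", "HTTPS": "YES"}
    if conn_type not in ssl_required:
        raise ValueError("TYPE must be TCP, HTTP, or HTTPS")
    for required in ("IPADDRESS", "PORT"):
        if required not in dest:
            raise ValueError(f"{required} is required")
    return ssl_required[conn_type]             # the SSL value connections must use

print(check_destination({"TARGET": "ELASTIC", "TYPE": "TCP",
                         "IPADDRESS": "123.456.78.90", "PORT": "1000"}))
```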
Logstash Configuration Logstash is an open source data collection engine with real-time pipelining capabilities. Data arriving from Ironstream is processed in the usual way by Logstash.
Here is an example Logstash pipeline configuration file (for example, logstash.conf) that shows the “input”, “filter”, and “output” sections, where different types of data are received (input) over different ports, processed (filter), and sent (output) to different indexes:
input {
  # Port 4300 receives SMF data
  tcp {
    port => 4300
    type => "smf"
    codec => "json"
  }
  # Port 4310 receives SYSLOG data
  tcp {
    port => 4310
    type => "syslog"
    codec => "json"
  }
  # Port 4315 receives SYSTEMSTATE data
  tcp {
    port => 4315
    type => "systemstate"
    codec => "json"
  }
}
filter {
  #
  # Use grok, mutate, ruby etc. to manipulate data to requirements.
}
# Send the result to Elasticsearch
output {
  if [type] == "smf" and [MFSOURCETYPE] == "SMF110" {
    elasticsearch {
      hosts => "<elastic-server>:<port-number>"
      user => "<username>"
      password => "<password>"
      index => "ironstream-example-smf110"
    }
  } else if [type] == "smf" and [MFSOURCETYPE] == "SMF080" {
    elasticsearch {
      hosts => "<elastic-server>:<port-number>"
      user => "<username>"
      password => "<password>"
      index => "ironstream-example-smf80"
    }
  } else if [type] == "syslog" {
    elasticsearch {
      hosts => "<elastic-server>:<port-number>"
      user => "<username>"
      password => "<password>"
      index => "ironstream-example-syslog"
    }
  } else if [type] == "systemstate" {
    elasticsearch {
      hosts => "<elastic-server>:<port-number>"
      user => "<username>"
      password => "<password>"
      index => "ironstream-example-systemstate"
    }
  }
}
Notes

• In the “input” section of the above Logstash configuration, each “tcp” section has:
codec => "json"
or:

codec => "json_lines"
Even though Ironstream sends JSON formatted data, it arrives as a single line of data for each event/record. Therefore, either codec can be used.
• To send data via HTTP (non-SSL) or secure TCP, refer to the Elastic Logstash documentation. Standard Logstash configuration and processing is supported.
Receiving Ironstream Data in Elasticsearch

Elasticsearch requires at least one index to receive data from Ironstream. Use standard Elastic processing and commands to create the index according to your requirements.
Displaying Ironstream Data in Kibana

Once the data from Ironstream is available in Elasticsearch, it can be displayed on Kibana visualizations and dashboards. Again, this only requires standard processing. See “Field Mappings and Elastic Defaults” below for more information.
Field Mappings and Elastic Defaults

Elastic has a default setting for a maximum of 1000 mapped fields per index. Some mainframe data sources can produce documents (JSON structures) that exceed this limit.
To mitigate this situation, there are two options:
1. Increase the number of mapped fields for a given index by setting the following value:

index.mapping.total_fields.limit
Refer to the Elastic documentation for details on how to update this setting.
2. Use Ironstream filtering to decrease the number of fields sent to Elastic and stay under the default limit. Refer to the chapters in the Ironstream documentation that describe message or record filtering.
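For option 1, the limit can be raised with a standard Elasticsearch index-settings request. The sketch below assumes the "ironstream-example-smf110" index from the Logstash example and an illustrative limit of 2000; adjust both for your environment:

```
PUT /ironstream-example-smf110/_settings
{
  "index.mapping.total_fields.limit": 2000
}
```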
Next Steps

These chapters have information on configuring Ironstream for Elastic destinations:
• Chapter 6, “Configuring Ironstream Components” – Describes how to configure the Ironstream forwarder components for newly installed Ironstream instances using the Ironstream Configurator utility.
• Chapter 7, “Manually Setting Ironstream Parameters” – Describes how to manually configure the Ironstream configuration file parameters to define data sources and target destinations.
Chapter 5: Setting Up Kafka for Ironstream
This chapter describes how to configure the internal Kafka producer to publish z/OS data to Kafka brokers when using Ironstream.
Topics:

• “Overview of Apache Kafka Support in Ironstream” on page 5-2
• “Downloading and Installing Kafka to z/OS OMVS Systems” on page 5-3
• “Applying the Kafka Function to Ironstream” on page 5-4
• “Providing APF Authorization for Ironstream Programs” on page 5-5
• “Configuring Ironstream to Use the Kafka Producer” on page 5-6
• “Kafka Delivery Guarantees Recommended by Ironstream” on page 5-8
• “Confirming Kafka Activity Status in Ironstream” on page 5-12
• “Dynamically set Topic and Key when using Ironstream API and KAFKA feature” on page 5-13
• “Next Steps” on page 5-13
Overview of Apache Kafka Support in Ironstream

Many organizations are challenged with accessing mainframe data for processing by data analytics platforms. Ironstream can capture mainframe data and publish it to Apache Kafka clusters. Through Kafka, mainframe data can be accessed by Hadoop, Spark, and other open systems. If you are new to Kafka, refer to the Apache Kafka documentation for more information: https://kafka.apache.org/documentation/
How Does Kafka Work with Ironstream?

Kafka is a distributed, partitioned, replicated commit log service that functions like a high-capacity publish-subscribe messaging system. Ironstream has implemented an internal Kafka producer that publishes data from z/OS to Kafka brokers via Ironstream topics, which are configured as a target DESTINATION in the configuration file. Ironstream customers can write their own Kafka consumer code to capture the data from Kafka brokers and make it accessible for data analytics platforms.
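As a quick way to confirm that data published by Ironstream is reaching a broker before writing custom consumer code, Kafka's standard console consumer can read a topic from the command line. In this sketch, the topic name and broker address are illustrative assumptions:

```
bin/kafka-console-consumer.sh --bootstrap-server 192.168.61.3:9092 \
  --topic ironstream_syslog --from-beginning
```

Each Ironstream event then appears as one line of JSON on the console.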
Kafka Requirements and Limitations with Ironstream

In the current Ironstream release, running Kafka requires:
• Receive and Apply the Kafka Function and prerequisite PTF.
• Kafka version 0.9.0.1 or higher.
• Java 8.0 or higher, either 31-bit or 64-bit.
In the current release, the following Ironstream features are not compatible with Kafka:
• Data Loss Prevention
• The offline log4j reader
• Forwarding IMS log records
• SMF data is only forwarded to Kafka at the block level, which could contain multiple records
• Batching sequential FILELOAD data using these BATCH parameters: “BATCH_AND_BREAK_ON_CONDITION" and "BATCH_RECORDS":"YES"
Instructions for Downloading and Configuring Kafka on Ironstream

The steps to configure Kafka on Ironstream must be followed in this order:
1. “Downloading and Installing Kafka to z/OS OMVS Systems”
2. “Applying the Kafka Function to Ironstream”
3. “Providing APF Authorization for Ironstream Programs”
4. “Configuring Ironstream to Use the Kafka Producer”
5. “Configuring Ironstream to Use Other Kafka Producer Configurations”
Downloading and Installing Kafka to z/OS OMVS Systems

Follow these steps to download Kafka to your mainframe OMVS system:
1. Download the appropriate Kafka binary from http://kafka.apache.org/downloads.html to a directory on your PC or Unix/Linux system.
2. From your PC or Unix/Linux system, unzip the downloaded kafka_version.tgz file to extract the kafka_version.tar file.
3. On your OMVS system, create a new directory for use by Kafka. For example, you can create a directory named /usr/lpp/kafka.
4. FTP the kafka_version.tar file to the OMVS Kafka directory using binary mode.
5. From the OMVS Kafka directory, type tar -xvf kafka_version.tar to extract the file to the current directory.
This creates a kafka_version directory under your OMVS Kafka directory.
For example: /usr/lpp/kafka/kafka_2.11-0.10.0.0

6. Under the new kafka_version directory, note that there is also a /libs directory. It contains all the JAR files that Apache Kafka provides, and Ironstream will use the producer APIs in these JARs to access data for Kafka brokers.

For example: "KAFKHOME":"/usr/lpp/kafka/kafka_2.11-0.10.0.0/libs"

Important! The Kafka /libs directory will be used by Ironstream as the KAFKHOME in the Ironstream configuration file.
Converting Binary Kafka Files from ASCII to EBCDIC (Optional)

Since you must FTP the Kafka package in binary mode, all files under the /bin or /config directories are still in ASCII encoding; therefore, you cannot run any Kafka shell commands on OMVS. Ironstream calls the Kafka producer APIs directly from the /libs directory, so starting a Kafka Server on OMVS or issuing any Kafka shell commands is not required for Ironstream.
If your Kafka server is not running on OMVS, you can skip this step. If you want to run Kafka as a server on OMVS and use Kafka support shell commands, then you must first convert all files under /bin or /config from ASCII to EBCDIC.
To perform the ASCII to EBCDIC conversion, use this command:
iconv -f ISO8859-1 -t IBM-1047 file1 > file2
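To convert every file under the /bin and /config directories in one pass, a small shell loop can be used. This is a sketch; the Kafka installation path below is an illustrative assumption and should be adjusted for your system:

```shell
# Convert all Kafka shell scripts and config files from ASCII to EBCDIC in place.
# KAFKA_DIR is an assumed example path; change it for your installation.
KAFKA_DIR=/usr/lpp/kafka/kafka_2.11-0.10.0.0
for f in "$KAFKA_DIR"/bin/* "$KAFKA_DIR"/config/*; do
  [ -f "$f" ] || continue   # skip subdirectories and unmatched globs
  iconv -f ISO8859-1 -t IBM-1047 "$f" > "$f.tmp" && mv "$f.tmp" "$f"
done
```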
Applying the Kafka Function to Ironstream

The IronstreamV21.Kafka.zip file provided by Ironstream contains the Kafka LSDK210 function. LSDK210 contains a binary image of a TSO XMIT’d data set in flat 80*3120 FB format.
To get this data set onto your system and apply LSDK210 to Ironstream:
1. Create a ZFS subdirectory in the directory defined by DDDEF SSDFHFS. For example:
<SSDFHFS>/kafka, where <SSDFHFS> is /user/ironstream_home_directory/

2. BINary FTP the LSDK210.FNCT.bin XMIT file to your system as an FB 80*3120 file.
3. Use the TSO RECEIVE command to receive it into <your.hlq>.SDFPTF.PTFLIB library.
4. SMP/E RECEIVE FUNCTION LSDK210. If RC0, APPLY with the CHECK option. If the APPLY CHECK is successful and yields an RC0, then APPLY without the CHECK option and ensure an RC0.
Sample JCL:

//RECEIVE  EXEC PGM=GIMSMP,REGION=0M,PARM='CSI=<your.hlq>.SMP.CSI'
//SMPPTFIN DD DISP=SHR,DSN=<your.hlq>.SDFPTF
//SMPCNTL  DD *
  SET BDY(GLOBAL).
  RECEIVE SYSMODS LIST.
//APPLY    EXEC PGM=GIMSMP,REGION=0M,PARM='CSI=<your.hlq>.SMP.CSI'
//SMPCNTL  DD *
  SET BDY(TgtZone).
  APPLY CHECK C(ALL) GROUP SELECT(LSDK210) BYPASS(HOLDSYSTEM).
If the above APPLY CHECK was successful, resubmit with the following change:

  APPLY C(ALL) GROUP SELECT(LSDK210) BYPASS(HOLDSYSTEM).
If you encounter any problems, contact Precisely Support via https://www.precisely.com/support.
Providing APF Authorization for Ironstream Programs

Ironstream must run in APF-authorized mode; therefore, all programs that Ironstream invokes must also be APF-authorized, including the Ironstream Kafka modules and the Java libraries.
First, ensure that the SYS1.SIEALNKE data set is in the link list. The system automatically places this data set at the beginning of the link list, unless it is overridden by a SYSLIB statement in PROGxx. The default IEASYSxx value LNKAUTH=LNKLST must be in effect, or SYS1.SIEALNKE must be APF-authorized.
After applying FUNCTION LSDK210 on your OMVS system (as described in “Applying the Kafka Function to Ironstream” on page 5-4), verify that there is sufficient authorization for the modules libSDFKaf64JNI.so and libSDFKafkaJNI.so under the Ironstream Kafka library, as well as for the Java libraries.
Authorizing the Ironstream Kafka Modules

The Ironstream Kafka libraries are in the /ironstream_home_directory/kafka directory. You can check the authorization by entering the following command at a Unix shell prompt:
extattr /usr/lpp/ironstream/kafka/*
The output should show the APF-authorized extended attribute for these modules:

• libSDFKaf64JNI.so
• libSDFKafkaJNI.so

If the attribute is not set, enter the following commands from that directory:

extattr +a libSDFKaf64JNI.so

extattr +a libSDFKafkaJNI.so
Authorizing the Java Libraries

The Java library is in the /usr/lpp/java/ directory and contains different Java versions.
Note: Java 8.0 or higher is required when running Kafka with Ironstream.
You must ensure that the Java library you are running for Ironstream is APF-authorized. You can APF-authorize all modules under the Java library by entering this command:
find /usr/lpp/java/J8.0/ -name "*.so" | xargs extattr +a
The Unix “find” command searches all the subdirectories for the *.so file names and pipes the results into xargs, which converts them into the appropriate input format for extattr.
Configuring Ironstream to Use the Kafka Producer

For Kafka configuration, only the DESTINATION section of the Ironstream configuration file is different from the standard TCP/IP parameters for Splunk or Elastic as an Ironstream target destination. For detailed explanations for all Ironstream configuration file parameters, refer to Chapter 7, “Manually Setting Ironstream Parameters”.
Setting the Ironstream DESTINATION

The DESTINATION section describes where the data is to be sent and how it is to be forwarded. Multiple Kafka broker addresses and ports can be specified by repeating the IPADDRESS/PORT for each topic.
"DESTINATION" - The required section heading.
"TARGET": "KAFKA" - This required parameter must be set to “KAFKA” as the data target for this Ironstream instance. If a target is not specified, the default value for TARGET is SPLUNK.
"TOPIC":"topic_name" - This required parameter defines the name of the Kafka topic for this Ironstream instance.
"JAVAHOME":"Java_home address" - This required parameter specifies which JRE (Java Runtime Environment) to use for execution and must correspond with the JAVA64BIT parameter. This is an OMVS directory path pointing to a 31-bit or 64-bit version of Java. For example: "/usr/lpp/java/J8.0" Note: JRE 8.0 or higher is required when running Kafka with Ironstream.
"JAVA64BIT":"NO" |"YES" - This required parameter specifies whether to use the 64-bit or 31-bit version of the JVM. This value is based on which directory the JAVAHOME parameter points to.
• NO – 31-bit version of Java is used
• YES – 64-bit version of Java is used
"SSDFHOME":"ssdf address in OMVS" - This required parameter specifies the address where the Ironstream Kafka producer class is available.
Example: /usr/lpp/ironstream/kafka
"KAFKHOME":"KAFKA API library in OMVS" - This required parameter specifies the address of the Kafka API library in OMVS.
For example: /usr/lpp/kafka/kafka_2.11-0.9.0.1/libs
"IPADDRESS":"ipaddress/hostname" - This required parameter specifies the IP address (up to 255 characters) or the host name of a Kafka broker. This can be either a numerical address as in “192.168.61.3” or an URL, such as “my_local_kafka_broker”.
"PORT": "port" - This required parameter specifies the port number on which the Kafka broker is listening.
Note: Any number of IPADDRESS and PORT combinations can be specified. Together they form the broker list for the Kafka cluster, which the Kafka producer uses solely for load balancing.
Example Ironstream JCL for Sending Data to Kafka Brokers

//KAFKA    EXEC PGM=SSDFMAIN
//STEPLIB  DD DISP=SHR,DSN=&HLQ..SSDFAUTH
//SYSPRINT DD SYSOUT=*
//STDOUT   DD SYSOUT=*
//STDERR   DD SYSOUT=*
//SYSABEND DD SYSOUT=*
//CEEOPTS  DD DISP=SHR,DSN=&HLQ..SSDFSAMP(CEEOPTS)
//SSDFCONF DD *
"KEYS" "KEY_WARN_DAYS":"30" "KEY":"NNNNNNNNNNNNNNNN"
"SOURCE" "DATATYPE":"SYSLOG" "FILTER":"SSDFFLOG"
Configuring Ironstream to Use Other Kafka Producer Configurations

Ironstream’s internal Kafka producer needs Topic, IPADDRESS, and PORT information for publishing data to Kafka brokers. The internal producer uses Kafka’s default producer settings. However, if you want to use other producer configurations, Ironstream also supports all Kafka producer key-value pairs in the producer properties file format. The producer.properties configuration file under SSDFHOME is parsed and the values inside are honored by Ironstream.
Ironstream provides a template file named producer.properties.template under SSDFHOME that you can modify and save as producer.properties with your preferred configuration.
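For example, a minimal producer.properties overriding a few defaults might look like the following sketch (the values shown are illustrative; any standard Kafka producer key-value pair is accepted):

```
acks=1
retries=3
compression.type=gzip
```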
Using the TLS Protocol with Kafka Brokers

Ironstream provides the ability to publish z/OS data to Kafka brokers using the TLS protocol.
The producer.properties.template under SSDFHOME contains the following SSL fields:
security.protocol=SSL
ssl.truststore.location=/u/kafka/kafka-onestop/kafka.client.truststore.jks
ssl.truststore.password=testssl
ssl.keystore.location=/u/kafka/kafka-onestop/kafka.client.keystore.jks
ssl.keystore.password=testssl
ssl.key.password=testssl
Keystore Requirements in producer.properties for SSL client.auth Type

The three types of SSL authentication are required, requested, and none:
• When the ssl.client.auth=required in the server.properties file of the Kafka broker server, then the following keystore fields must be set in the producer.properties file.
ssl.keystore.location=/u/kafka/kafka-onestop/kafka.client.keystore.jks
ssl.keystore.password=testssl
ssl.key.password=testssl
• When the ssl.client.auth is not specified, or is specified as requested or none in the server.properties of the Kafka broker server, then the keystore fields are not necessary in the producer.properties file.
It is important to note that these are only example settings, so you must update the SSL properties for your environment and rename the template file to “producer.properties”.
Kafka Delivery Guarantees Recommended by Ironstream

There are three message delivery guarantees that Kafka provides between a producer and consumer: at-most-once, at-least-once, and exactly-once. When data is consumed from Kafka by a consumer group or a standalone consumer, Ironstream recommends using either the at-most-once or the at-least-once guarantee. This section discusses the pros and cons of using these Kafka delivery guarantees and explains how to implement them in Ironstream.
At-most-once Delivery Guarantee
Expected Kafka Producer Behavior

When the Kafka acknowledgment property is set to zero ("acks = 0") in the producer.properties file, Kafka producers provide the at-most-once guarantee. The producer sends the record to the broker and does not wait for a response. With this setting, once messages are sent, the producer will not try to re-send them. In other words, the producer uses a “send-and-forget” approach when “acks = 0”.
In this mode, the chance of data loss occurring is higher, since the producer does not confirm the message was received by the broker. In some cases, messages may simply not reach the broker, and a broker failure soon after message delivery could result in data loss.
At-most-once is the default producer setting when the producer.properties file is not being used.
Expected Kafka Consumer Behavior

When using the at-most-once guarantee, a message should be delivered at most once. This means that it is acceptable to lose a message rather than deliver a message twice. As a result, applications adopting the at-most-once semantic can achieve higher throughput and lower latency.
By default, Kafka consumers are set to use the at-most-once guarantee because the enable.auto.commit property defaults to true even when it is not explicitly set in the consumer configuration.
Note that commits are asynchronous when enable.auto.commit is true.
In cases where a consumer fails after messages are committed as read, but before they were processed, the unprocessed messages are lost and will not be read again.
At-least-once Delivery Guarantee
Expected Kafka Producer Behavior

When the Kafka acknowledgment property is set to one ("acks = 1") in the producer.properties file, Kafka producers provide the at-least-once guarantee. A producer sends the record to the broker and waits for a response from the broker. If no acknowledgment is received for the sent message, then the producer will retry sending the messages based on a retry configuration value. By default, the retries property is set to 0; therefore, make sure this is set to the appropriate value for your environment.
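A producer.properties fragment enabling at-least-once delivery could therefore look like the following sketch (the retry count is an illustrative value):

```
acks=1
retries=5
```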
Unlike the passive at-most-once semantic, you must add the following properties to the producer.properties file to activate the at-least-once semantic, as shown in the following table:
Kafka consumer property     Default value
enable.auto.commit          true
auto.commit.interval.ms     5000ms
When using at-least-once, the chances of data loss are moderate since the producer confirms that the message was received by the broker (leader partition). Because the replication of the follower partition happens after the acknowledgment, this could still result in data loss.
Expected Kafka Consumer Behavior

With the at-least-once guarantee, it is acceptable to