
  • INFORMATICA

    PREPARED BY:

    Ammar Hasan

  • CONTENTS

    CHAPTER 1: TOOL KNOWLEDGE

    1.1 Informatica PowerCenter
    1.2 Product Overview
        1.2.1 PowerCenter Domain
        1.2.2 Administration Console
        1.2.3 PowerCenter Repository
        1.2.4 PowerCenter Client
        1.2.5 Repository Service
        1.2.6 Integration Service
        1.2.7 Web Services Hub
        1.2.8 Data Analyzer
        1.2.9 Metadata Manager

    CHAPTER 2: REPOSITORY MANAGER

    2.1 Adding a Repository to the Navigator
    2.2 Configuring a Domain Connection
    2.3 Connecting to a Repository
    2.4 Viewing Object Dependencies
    2.5 Validating Multiple Objects
    2.6 Comparing Repository Objects
    2.7 Truncating Workflow and Session Log Entries
    2.8 Managing User Connections and Locks
    2.9 Managing Users and Groups
    2.10 Working with Folders

    CHAPTER 3: DESIGNER

    3.1 Source Analyzer
        3.1.1 Working with Relational Sources
        3.1.2 Working with Flat Files
    3.2 Target Designer
    3.3 Mappings
    3.4 Transformations
        3.4.1 Working with Ports
        3.4.2 Using Default Values for Ports
        3.4.3 User-Defined Default Values
    3.5 Tracing Levels
    3.6 Basic First Mapping
    3.7 Expression Transformation
    3.8 Filter Transformation
    3.9 Router Transformation
    3.10 Union Transformation
    3.11 Sorter Transformation
    3.12 Rank Transformation
    3.13 Aggregator Transformation
    3.14 Joiner Transformation
    3.15 Source Qualifier
    3.16 Lookup Transformation
        3.16.1 Lookup Types
        3.16.2 Lookup Transformation Components
        3.16.3 Connected Lookup Transformation
        3.16.4 Unconnected Lookup Transformation
        3.16.5 Lookup Cache Types: Dynamic, Static, Persistent, Shared
    3.17 Update Strategy
    3.18 Dynamic Lookup Cache Use
    3.19 Lookup Query
    3.20 Lookup and Update Strategy Examples
        Example to Insert and Update without a Primary Key
        Example to Insert and Delete based on a condition
    3.21 Stored Procedure Transformation
        3.21.1 Connected Stored Procedure Transformation
        3.21.2 Unconnected Stored Procedure Transformation
    3.22 Sequence Generator Transformation
    3.23 Mapplets: Mapplet Input and Mapplet Output Transformations
    3.24 Normalizer Transformation
    3.25 XML Sources: Import and Usage
    3.26 Mapping Wizards
        3.26.1 Getting Started
        3.26.2 Slowly Changing Dimensions
    3.27 Mapping Parameters and Variables
    3.28 Parameter File
    3.29 Indirect Flat File Loading

    CHAPTER 4: WORKFLOW MANAGER

    4.1 Informatica Architecture
        4.1.1 Integration Service Process
        4.1.2 Load Balancer
        4.1.3 DTM Process
        4.1.4 Processing Threads
        4.1.5 Code Pages and Data Movement
        4.1.6 Output Files and Caches
    4.2 Working with Workflows
        4.2.1 Assigning an Integration Service
        4.2.2 Working with Links
        4.2.3 Workflow Variables
        4.2.4 Session Parameters
    4.3 Working with Tasks
        4.3.1 Session Task
        4.3.2 Email Task
        4.3.3 Command Task
        4.3.4 Working with Event Tasks
        4.3.5 Timer Task
        4.3.6 Decision Task
        4.3.7 Control Task
        4.3.8 Assignment Task
    4.4 Schedulers
    4.5 Worklets
    4.6 Partitioning
        4.6.1 Partitioning Attributes
        4.6.2 Partitioning Types
        4.6.3 Some Points
    4.7 Session Properties
    4.8 Workflow Properties

  • Chapter 1

    Informatica

    PowerCenter

  • CHAPTER 1: TOOL KNOWLEDGE

    1.1 INFORMATICA POWERCENTER

    Informatica PowerCenter is a powerful ETL tool from Informatica Corporation. Informatica Corporation products are:

    Informatica PowerCenter
    Informatica On Demand
    Informatica B2B Data Exchange
    Informatica Data Quality
    Informatica Data Explorer

    Informatica PowerCenter is a single, unified enterprise data integration platform for accessing, discovering, and integrating data from virtually any business system, in any format, and delivering that data throughout the enterprise at any speed.

    Informatica PowerCenter Editions

    Because every data integration project is different and includes many variables, such as data volumes, latency requirements, IT infrastructure, and methodologies, Informatica offers three PowerCenter Editions and a suite of PowerCenter Options to meet your project's and organization's specific needs:

    Standard Edition
    Real Time Edition
    Advanced Edition

    Informatica PowerCenter Standard Edition

    PowerCenter Standard Edition is a single, unified enterprise data integration platform for discovering, accessing, and integrating data from virtually any business system, in any format, and delivering that data throughout the enterprise to improve operational efficiency. Key features include:

    A high-performance data integration server
    A global metadata infrastructure
    Visual tools for development and centralized administration
    Productivity tools to facilitate collaboration among architects, analysts, and developers

  • Informatica PowerCenter Real Time Edition

    Packaged for simplicity and flexibility, PowerCenter Real Time Edition extends PowerCenter Standard Edition with additional capabilities for integrating and provisioning transactional or operational data in real time. PowerCenter Real Time Edition provides the ideal platform for developing sophisticated data services and delivering timely information as a service, to support all business needs. It provides the perfect real-time data integration complement to service-oriented architectures and application integration approaches such as enterprise application integration (EAI), enterprise service buses (ESB), and business process management (BPM). Key features include:

    Change data capture for relational data sources
    Integration with messaging systems
    Built-in support for Web services
    Dynamic partitioning with data smart parallelism
    Process orchestration and human workflow capabilities

    Informatica PowerCenter Advanced Edition

    PowerCenter Advanced Edition addresses requirements for organizations that are standardizing data integration at an enterprise level, across a number of projects and departments. It combines all the capabilities of PowerCenter Standard Edition and features additional capabilities that are ideal for data governance and Integration Competency Centers. Key features include:

    Dynamic partitioning with data smart parallelism
    Powerful metadata analysis capabilities
    Web-based data profiling and reporting capabilities

  • Informatica PowerCenter Options

    A range of options is available to extend PowerCenter's core data integration capabilities.

    Data Cleanse and Match Option: Features powerful, integrated cleansing and matching capabilities to correct and remove duplicate customer data.
    Data Federation Option: Enables a combination of traditional physical and virtual data integration in a single platform.
    Data Masking Option: Protects sensitive, private information by masking it in flight to produce realistic-looking data, reducing the risk of security and compliance breaches.
    Enterprise Grid Option: Enhances scalability and delivers optimal performance while reducing the administrative overhead of supporting grid computing environments.
    High Availability Option: Minimizes service interruptions during hardware and/or software outages and reduces costs associated with data downtime.
    Metadata Exchange Options: Coordinate technical and business metadata from data modeling tools, business intelligence tools, source and target database catalogs, and PowerCenter repositories.
    Partitioning Option: Helps IT organizations maximize their technology investments by enabling hardware and software to jointly scale to handle large volumes of data and users.
    Pushdown Optimization Option: Enables data transformation processing, where appropriate, to be pushed down into any relational database to make the best use of existing database assets.
    Team-Based Development Option: Facilitates collaboration among development, quality assurance, and production administration teams and across geographically disparate teams.
    Unstructured Data Option: Expands PowerCenter's data access capabilities to include unstructured data formats, providing virtually unlimited access to all enterprise data formats.

  • 1.2 PRODUCT OVERVIEW

    PowerCenter provides an environment that allows you to load data into a centralized location, such as a data warehouse or operational data store (ODS). You can extract data from multiple sources, transform the data according to business logic you build in the client application, and load the transformed data into file and relational targets. PowerCenter also provides the ability to view and analyze business information and browse and analyze metadata from disparate metadata repositories. PowerCenter includes the following components:

    PowerCenter domain
    Administration Console
    PowerCenter repository
    PowerCenter Client
    Repository Service
    Integration Service
    Web Services Hub
    SAP BW Service
    Data Analyzer
    Metadata Manager
    PowerCenter Repository Reports

    1.2.1 POWERCENTER DOMAIN:

    PowerCenter has a service-oriented architecture that provides the ability to scale services and share resources across multiple machines. PowerCenter provides the PowerCenter domain to support the administration of the PowerCenter services. A domain is the primary unit for management and administration of services in PowerCenter. A domain contains the following components:

    One or more nodes
    Service Manager
    Application services

    One or more nodes: A node is the logical representation of a machine in a domain. A domain may contain more than one node. The node that hosts the domain is the master gateway for the domain. You can add other machines as nodes in the domain and configure the nodes to run application services, such as the Integration Service or Repository Service. All service requests from other nodes in the domain go through the master gateway.

    Service Manager: The Service Manager is built in to the domain to support the domain and the application services. The Service Manager runs on each node in the domain. The Service Manager starts and runs the application services on a machine.

  • The Service Manager performs the following functions:

    Alerts: Provides notifications about domain and service events.
    Authentication: Authenticates user requests.
    Authorization: Authorizes user requests for services.
    Domain configuration: Manages domain configuration metadata.
    Node configuration: Manages node configuration metadata.
    Licensing: Registers and verifies license information.
    Logging: Provides accumulated log events from each service in the domain.

    Application services: A group of services that represent PowerCenter server-based functionality.

    Repository Service: Manages connections to the PowerCenter repository.
    Integration Service: Runs sessions and workflows.
    Web Services Hub: Exposes PowerCenter functionality to external clients through web services.
    SAP BW Service: Listens for RFC requests from SAP NetWeaver BW and initiates workflows to extract from or load to SAP BW.

  • 1.2.2 ADMINISTRATION CONSOLE

    The Administration Console is a web application that we use to manage a PowerCenter domain. If we have a user login to the domain, we can access the Administration Console. Domain objects include services, nodes, and licenses. Use the Administration Console to perform the following tasks in the domain:

    Manage application services: Manage all application services in the domain, such as the Integration Service and Repository Service.
    Configure nodes: Configure node properties, such as the backup directory and resources. We can also shut down and restart nodes.
    Manage domain objects: Create and manage objects such as services, nodes, licenses, and folders. Folders allow us to organize domain objects and to manage security by setting permissions for domain objects.
    View and edit domain object properties: View and edit properties for all objects in the domain, including the domain object.
    View log events: Use the Log Viewer to view domain, Integration Service, SAP BW Service, Web Services Hub, and Repository Service log events.

    Other domain management tasks include applying licenses, managing grids and resources, and configuring security.

  • 1.2.3 POWERCENTER REPOSITORY

    The PowerCenter repository resides in a relational database. The repository database tables contain the instructions required to extract, transform, and load data and store administrative information such as user names, passwords, permissions, and privileges. PowerCenter applications access the repository through the Repository Service.

    We administer the repository using the Repository Manager Client tool, the PowerCenter Administration Console, and command line programs.

    Global repository: The global repository is the hub of the repository domain. Use the global repository to store common objects that multiple developers can use through shortcuts. These objects may include operational or Application source definitions, reusable transformations, mapplets, and mappings.

    Local repositories: A local repository is any repository within the domain that is not the global repository. Use local repositories for development. From a local repository, you can create shortcuts to objects in shared folders in the global repository. These objects include source definitions, common dimensions and lookups, and enterprise standard transformations. You can also create copies of objects in non-shared folders. PowerCenter supports versioned repositories. A versioned repository can store multiple versions of an object. PowerCenter version control allows you to efficiently develop, test, and deploy metadata into production.

  • 1.2.4 POWERCENTER CLIENT

    The PowerCenter Client consists of the following applications that we use to manage the repository, design mappings and mapplets, and create sessions to load the data:

    Designer
    Data Stencil
    Repository Manager
    Workflow Manager
    Workflow Monitor

    Designer:

    Use the Designer to create mappings that contain transformation instructions for the Integration Service. The Designer has the following tools that you use to analyze sources, design target schemas, and build source-to-target mappings:

    Source Analyzer: Import or create source definitions.
    Target Designer: Import or create target definitions.
    Transformation Developer: Develop transformations to use in mappings. You can also develop user-defined functions to use in expressions.
    Mapplet Designer: Create sets of transformations to use in mappings.
    Mapping Designer: Create mappings that the Integration Service uses to extract, transform, and load data.

  • Data Stencil

    Use the Data Stencil to create mapping templates that can be used to generate multiple mappings. Data Stencil uses the Microsoft Office Visio interface to create mapping templates. It is not usually used by developers.

    Repository Manager

    Use the Repository Manager to administer repositories. You can navigate through multiple folders and repositories, and complete the following tasks:

    Manage users and groups: Create, edit, and delete repository users and user groups. We can assign and revoke repository privileges and folder permissions.

    Perform folder functions: Create, edit, copy, and delete folders. Work we perform in the Designer and Workflow Manager is stored in folders. If we want to share metadata, you can configure a folder to be shared.

    View metadata: Analyze sources, targets, mappings, and shortcut dependencies, search by keyword, and view the properties of repository objects.

    We create repository objects using the Designer and Workflow Manager Client tools. We can view the following objects in the Navigator window of the Repository Manager:

    Source definitions: Definitions of database objects (tables, views, synonyms) or files that provide source data.
    Target definitions: Definitions of database objects or files that contain the target data.
    Mappings: A set of source and target definitions along with transformations containing business logic that you build into the transformation. These are the instructions that the Integration Service uses to transform and move data.
    Reusable transformations: Transformations that we use in multiple mappings.
    Mapplets: A set of transformations that you use in multiple mappings.
    Sessions and workflows: Sessions and workflows store information about how and when the Integration Service moves data. A workflow is a set of instructions that describes how and when to run tasks related to extracting, transforming, and loading data. A session is a type of task that you can put in a workflow. Each session corresponds to a single mapping.

  • Workflow Manager

    Use the Workflow Manager to create, schedule, and run workflows. A workflow is a set of instructions that describes how and when to run tasks related to extracting, transforming, and loading data. The Workflow Manager has the following tools to help us develop a workflow:

    Task Developer: Create tasks we want to accomplish in the workflow.
    Worklet Designer: Create a worklet in the Worklet Designer. A worklet is an object that groups a set of tasks. A worklet is similar to a workflow, but without scheduling information. We can nest worklets inside a workflow.
    Workflow Designer: Create a workflow by connecting tasks with links in the Workflow Designer. You can also create tasks in the Workflow Designer as you develop the workflow.

    When we create a workflow in the Workflow Designer, we add tasks to the workflow. The Workflow Manager includes tasks, such as the Session task, the Command task, and the Email task, so you can design a workflow. The Session task is based on a mapping we build in the Designer. We then connect tasks with links to specify the order of execution for the tasks we created. Use conditional links and workflow variables to create branches in the workflow.

    Workflow Monitor

    Use the Workflow Monitor to monitor scheduled and running workflows for each Integration Service. We can view details about a workflow or task in Gantt Chart view or Task view. We can run, stop, abort, and resume workflows from the Workflow Monitor. We can view sessions and workflow log events in the Workflow Monitor Log Viewer. The Workflow Monitor displays workflows that have run at least once. The Workflow Monitor continuously receives information from the Integration Service and Repository Service. It also fetches information from the repository to display historic information.

  • 1.2.5 REPOSITORY SERVICE

    All repository client applications access the repository database tables through the Repository Service. The Repository Service protects metadata in the repository by managing repository connections and using object-locking to ensure object consistency. The Repository Service also notifies us when another user modifies or deletes repository objects we are using.

    Each Repository Service manages a single repository database. We can configure a Repository Service to run on multiple machines, or nodes, in the domain. Each instance running on a node is called a Repository Service process. This process accesses the database tables and performs most repository-related tasks. The Repository Service uses native drivers to communicate with the repository database.

    A repository domain is a group of repositories that you can connect to simultaneously in the PowerCenter Client. They share metadata through a special type of repository called a global repository.

    The Repository Service is a separate, multi-threaded process that retrieves, inserts, and updates metadata in the repository database tables. The Repository Service ensures the consistency of metadata in the repository. A Repository Service process is an instance of the Repository Service that runs on a particular machine, or node. The Repository Service accepts connection requests from the following applications:

    PowerCenter Client: Use the Designer and Workflow Manager to create and store mapping metadata and connection object information in the repository. Use the Workflow Monitor to retrieve workflow run status information and session logs written by the Integration Service. Use the Repository Manager to organize and secure metadata by creating folders, users, and groups.

    Command line programs pmrep and infacmd: Use pmrep to perform repository metadata administration tasks, such as listing repository objects or creating and editing users and groups. Use infacmd to perform service-related functions, such as creating or removing a Repository Service.

    Integration Service (IS): When we start the IS, it connects to the repository to schedule workflows. When we run a workflow, the IS retrieves workflow task and mapping metadata from the repository. The IS writes workflow status to the repository.

    Web Services Hub: When we start the Web Services Hub, it connects to the repository to access web-enabled workflows. The Web Services Hub retrieves workflow task and mapping metadata from the repository and writes workflow status to the repository.

    SAP BW Service: Listens for RFC requests from SAP NetWeaver BW and initiates workflows to extract from or load to SAP BW.

  • We install the Repository Service when we install PowerCenter Services. After we install the PowerCenter Services, we can use the Administration Console to manage the Repository Service.

    Repository Connectivity: PowerCenter applications such as the PowerCenter Client, the Integration Service, pmrep, and infacmd connect to the repository through the Repository Service.

    The following process describes how a repository client application connects to the repository database:

    1) The repository client application sends a repository connection request to the master gateway node, which is the entry point to the domain (node B in the diagram).

    2) The Service Manager sends back the host name and port number of the node running the Repository Service (node A in the diagram). If you have the high availability option, you can configure the Repository Service to run on a backup node.

    3) The repository client application establishes a link with the Repository Service process on node A. This communication occurs over TCP/IP.

    4) The Repository Service process communicates with the repository database and performs repository metadata transactions for the client application.

  • Understanding Metadata

    The repository stores metadata that describes how to extract, transform, and load source and target data. PowerCenter metadata describes several different kinds of repository objects. We use different PowerCenter Client tools to develop each kind of object. If we enable version control, we can store multiple versions of metadata objects in the repository. We can also extend the metadata stored in the repository by associating information with repository objects. For example, when someone in our organization creates a source definition, we may want to store the name of that person with the source definition. We associate information with repository metadata using metadata extensions.

    Administering Repositories

    We use the PowerCenter Administration Console, the Repository Manager, and the pmrep and infacmd command line programs to administer repositories.

    Back up the repository to a binary file
    Restore the repository from a binary file
    Copy repository database tables
    Delete repository database tables
    Create a Repository Service
    Remove a Repository Service
    Create folders to organize metadata
    Add repository users and groups
    Configure repository security
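    Some of these tasks can also be scripted with the pmrep program mentioned above. The commands below are only a rough sketch: the repository, domain, user, password, and folder names are invented for illustration, and the exact option flags should be checked against the PowerCenter Command Reference for your version.

    pmrep connect -r MyRepository -d MyDomain -n Administrator -x AdminPassword
    pmrep backup -o repository_backup.rep
    pmrep createfolder -n DEV_FOLDER
    pmrep listobjects -o mapping -f DEV_FOLDER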

  • 1.2.6 INTEGRATION SERVICE

    The Integration Service reads workflow information from the repository. The Integration Service connects to the repository through the Repository Service to fetch metadata from the repository. A workflow is a set of instructions that describes how and when to run tasks related to extracting, transforming, and loading data. The Integration Service runs workflow tasks. A session is a type of workflow task. A session is a set of instructions that describes how to move data from sources to targets using a mapping.

    The Integration Service extracts data from the mapping sources and stores the data in memory while it applies the transformation rules that you configure in the mapping. The Integration Service loads the transformed data into the mapping targets. The Integration Service can combine data from different platforms and source types. For example, you can join data from a flat file and an Oracle source. The Integration Service can also load data to different platforms and target types.

    1.2.7 WEB SERVICES HUB

    The Web Services Hub is a web service gateway for external clients. It processes SOAP requests from web service clients that want to access PowerCenter functionality through web services. Web service clients access the Integration Service and Repository Service through the Web Services Hub. When we install PowerCenter Services, the PowerCenter installer installs the Web Services Hub. The Web Services Hub hosts the following web services:

    Batch web services: Run and monitor web-enabled workflows.
    Real-time web services: Create service workflows that allow you to read and write messages to a web service client through the Web Services Hub.

    The Web Services Hub is not normally used by an Informatica developer and is not in the scope of our training.

  • 1.2.8 DATA ANALYZER PowerCenter Data Analyzer provides a framework to perform business analytics on corporate data. With Data Analyzer, we can extract, filter, format, and analyze corporate information from data stored in a data warehouse, operational data store, or other data storage models. Data Analyzer uses a web browser interface to view and analyze business information at any level. Data Analyzer extracts, filters, and presents information in easy-to-understand reports. We can use Data Analyzer to design, develop, and deploy reports and set up dashboards and alerts to provide the latest information to users at the time and in the manner most useful to them. Data Analyzer has a repository that stores metadata to track information about enterprise metrics, reports, and report delivery. Once an administrator installs Data Analyzer, users can connect to it from any computer that has a web browser and access to the Data Analyzer host. This is a different tool and is out of scope for our training.

    1.2.9 METADATA MANAGER PowerCenter Metadata Manager is a metadata management tool that you can use to browse and analyze metadata from disparate metadata repositories. Metadata Manager helps us understand and manage how information and processes are derived, the fundamental relationships between them, and how they are used. Metadata Manager uses Data Analyzer functionality. We can use the embedded Data Analyzer features to design, develop, and deploy metadata reports and dashboards. Metadata Manager uses PowerCenter workflows to extract metadata from source repositories and load it into a centralized metadata warehouse called the Metadata Manager Warehouse. This is a different tool and is out of scope for our training.

  • Chapter 2

    Repository

    Manager

  • CHAPTER 2: REPOSITORY MANAGER

    We can navigate through multiple folders and repositories and perform basic repository tasks with the Repository Manager. It is an administration tool, used mainly by the Informatica Administrator.

    Repository Manager Tasks:

    Add domain connection information
    Add and connect to a repository
    Work with PowerCenter domain and repository connections
    Search for repository objects or keywords
    View object dependencies
    Compare repository objects
    Truncate session and workflow log entries
    View user connections
    Release locks
    Exchange metadata with other business intelligence tools

    Add a repository to the Navigator, and then configure the domain connection information when we connect to the repository.

    2.1 Adding a Repository to the Navigator

    1. In any of the PowerCenter Client tools, click Repository > Add.

    2. Enter the name of the repository and a valid repository user name.
    3. Click OK.

    Before we can connect to the repository for the first time, we must configure the connection information for the domain that the repository belongs to.

  • 2.2 Configuring a Domain Connection

    1. In a PowerCenter Client tool, select the Repositories node in the Navigator.
    2. Click Repository > Configure Domains to open the Configure Domains dialog box.
    3. Click the Add button. The Add Domain dialog box appears.
    4. Enter the domain name, gateway host name, and gateway port number.
    5. Click OK to add the domain connection.

    2.3 Connecting to a Repository

    1. Launch a PowerCenter Client tool.
    2. Select the repository in the Navigator and click Repository > Connect, or double-click the repository.
    3. Enter a valid repository user name and password.
    4. Click Connect.

    Click the More button to add, change, or view domain information.

  • 2.4 Viewing Object Dependencies

    Before we change or delete repository objects, we can view dependencies to see the impact on other objects. For example, before we remove a session, we can find out which workflows use the session. We can view dependencies for repository objects in the Repository Manager, Workflow Manager, and Designer tools. Steps:

    1. Connect to the repository.
    2. Select the object in the Navigator.
    3. Click Analyze and select the dependency we want to view.

    2.5 Validating Multiple Objects

    We can validate multiple objects in the repository without fetching them into the workspace. We can save and optionally check in objects that change from invalid to valid status as a result of the validation. We can validate sessions, mappings, mapplets, workflows, and worklets. Steps:

    1. Select the objects you want to validate.
    2. Click Analyze and select Validate.
    3. Select validation options from the Validate Objects dialog box.
    4. Click Validate.
    5. Click a link to view the objects in the results group.

    2.6 Comparing Repository Objects

    We can compare two repository objects of the same type to identify differences between the objects. For example, we can compare two sessions to check for differences. When we compare two objects, the Repository Manager displays their attributes. Steps:

    1. In the Repository Manager, connect to the repository.
    2. In the Navigator, select the object you want to compare.
    3. Click Edit > Compare Objects.
    4. Click Compare in the dialog box displayed.

  • 2.7 Truncating Workflow and Session Log Entries

    When we configure a session or workflow to archive session logs or workflow logs, the Integration Service saves those logs in local directories. The repository also creates an entry for each saved workflow log and session log. If we move or delete a session log or workflow log from the workflow log directory or session log directory, we can remove the entries from the repository. Steps:

    1. In the Repository Manager, select the workflow in the Navigator window or in the Main window.
    2. Choose Edit > Truncate Log. The Truncate Workflow Log dialog box appears.
    3. Choose to delete all workflow and session log entries or to delete all workflow and session log entries with an end time before a particular date.
    4. If you want to delete all entries older than a certain date, enter the date and time.
    5. Click OK.

    2.8 Managing User Connections and Locks

    In the Repository Manager, we can view and manage the following items:

    Repository object locks: The repository locks repository objects and folders by user. The repository creates different types of locks depending on the task. The Repository Service locks and unlocks all objects in the repository.
    User connections: Use the Repository Manager to monitor user connections to the repository. We can end connections when necessary.

    Types of locks created:

    1. In-use lock: Placed on objects we want to view.
    2. Write-intent lock: Placed on objects we want to modify.
    3. Execute lock: Placed on objects we want to run, such as workflows and sessions.

    Steps:

    1. Launch the Repository Manager and connect to the repository.
    2. Click Edit > Show User Connections or Show Locks.
    3. The locks or user connections will be displayed in a window.
    4. We can then manage the connections or locks as needed.

  • 2.9 Managing Users and Groups

    1. In the Repository Manager, connect to a repository.
    2. Click Security > Manage Users and Privileges.
    3. Click the Groups tab to create groups, or click the Users tab to create users.
    4. Click the Privileges tab to give permissions to groups and users.
    5. Select the options available to add, edit, and remove users and groups.

    There are two default repository user groups:

    Administrators: This group initially contains two users that are created by default: Administrator and the database user that created the repository. We cannot delete these users from the repository or remove them from the Administrators group.
    Public: The Repository Manager does not create any default users in the Public group.

    2.10 Working with Folders

    We can create, edit, or delete folders as per our need.

    1. In the Repository Manager, connect to a repository.
    2. Click Folder > Create and enter the required folder information.
    3. Click OK.

  • Chapter 3

    Designer

  • CHAPTER 3: DESIGNER

    The Designer has tools to help us build mappings and mapplets so we can specify how to move and transform data between sources and targets. The Designer helps us create source definitions, target definitions, and transformations to build the mappings. The Designer lets us work with multiple tools, folders, and repositories at the same time. It also includes windows so we can view folders, repository objects, and tasks.

    Designer Tools:

    Source Analyzer: Use to import or create source definitions for flat file, XML, COBOL, Application, and relational sources.

    Target Designer: Use to import or create target definitions.
    Transformation Developer: Use to create reusable transformations.
    Mapplet Designer: Use to create mapplets.
    Mapping Designer: Use to create mappings.

    Designer Windows:

    Navigator: Use to connect to and work in multiple repositories and folders.
    Workspace: Use to view or edit sources, targets, mapplets, transformations, and mappings.
    Status bar: Displays the status of the operation we perform.
    Output: Provides details when we perform certain tasks, such as saving work or validating a mapping.
    Overview: An optional window to simplify viewing workbooks containing large mappings or a large number of objects.
    Instance Data: View transformation data while you run the Debugger to debug a mapping.
    Target Data: View target data while you run the Debugger to debug a mapping.


  • Designer Tasks:

    Add a repository.
    Print the workspace.
    View the date and time an object was last saved.
    Open and close a folder.
    Create shortcuts.
    Check out and check in repository objects.
    Search for repository objects.
    Enter descriptions for repository objects.
    View older versions of objects in the workspace.
    Revert to a previously saved object version.
    Copy objects.
    Export and import repository objects.
    Work with multiple objects, ports, or columns.
    Rename ports.
    Use shortcut keys.

  • 3.1 SOURCE ANALYZER

    In Source Analyzer, we define the source definitions that we will use in a mapping. We can either import a source definition or manually create the definition. We can import or create the following types of source definitions in the Source Analyzer:

    Relational tables, views, and synonyms
    Fixed-width and delimited flat files that do not contain binary data
    COBOL files
    XML files
    Data models using certain data modeling tools through Metadata Exchange for Data Models

    3.1.1 Working with Relational Sources

    Special Character Handling:

    We can import, create, or edit source definitions with table and column names containing special characters, such as the slash (/) character through the Designer. When we use the Source Analyzer to import a source definition, the Designer retains special characters in table and field names. However, when we add a source definition with special characters to a mapping, the Designer either retains or replaces the special character. Also, when we generate the default SQL statement in a Source Qualifier transformation for a relational source, the Designer uses quotation marks around some special characters. The Designer handles special characters differently for relational and non-relational sources.
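    As a purely illustrative, hypothetical example (the table and column names here are invented): if a relational source ORDER_ITEMS had a column named ITEM#, the default query generated in the Source Qualifier might quote the special-character name along these lines:

    SELECT ORDER_ITEMS.ORDER_ID, ORDER_ITEMS."ITEM#"
    FROM ORDER_ITEMS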

    Importing a Relational Source Definition

    1. Connect to the repository.
    2. Right-click the folder where you want to import the source definition and click Open. The connected folder becomes bold. We can work in only one folder at a time.
    3. In the Source Analyzer, click Sources > Import from Database.
    4. Select the ODBC data source used to connect to the source database. If you need to create or modify an ODBC data source, click the Browse button to open the ODBC Administrator. Create the data source, and click OK. Select the new ODBC data source.
    5. Enter a database user name and password to connect to the database.
    6. Click Connect. Table names will appear.
    7. Select the relational object or objects you want to import.
    8. Click OK.
    9. Click Repository > Save.

    Updating a Relational Source Definition

    We can update a source definition to add business names or to reflect new column names, datatypes, or other changes. We can update a source definition in the following ways:

    Edit the definition: Manually edit the source definition if we need to configure properties that we cannot import or if we want to make minor changes to the source definition.
    Reimport the definition: If the source changes are significant, we may need to reimport the source definition. This overwrites or renames the existing source definition. We can retain existing primary key-foreign key relationships and descriptions in the source definition being replaced.

    Editing Relational Source Definitions

    1) Select Tools > Source Analyzer.
    2) Drag the table you want to edit into the workspace.
    3) In the Source Analyzer, double-click the title bar of the source definition, or right-click the table and click Edit.
    4) On the Table tab, we can rename the table, add the owner name or a business description, or edit the database type.
    5) Click the Columns tab. Edit column names, datatypes, and restrictions. Click OK.

  • 3.1.2 Working with Flat Files To use flat files as sources, targets, and lookups in a mapping we must import or create the definitions in the repository. We can import or create flat file source definitions in the Source Analyzer. We can import fixed-width and delimited flat file definitions that do not contain binary data. When importing the definition, the file must be in a directory local to the client machine. In addition, the Integration Service must be able to access all source files during the session.

    Special Character Handling: When we import a flat file in the Designer, the Flat File Wizard uses the file name as the name of the flat file definition by default. We can import a flat file with any valid file name through the Flat File Wizard. However, the Designer does not recognize some special characters in flat file source and target names. When we import a flat file, the Flat File Wizard changes invalid characters and spaces into underscores ( _ ). For example, you have the source file "sample prices+items.dat". When we import this flat file in the Designer, the Flat File Wizard names the file definition sample_prices_items by default.

    To import a fixed-width flat file definition:

    1. Open the Source Analyzer and click Sources > Import from File. The Open Flat File dialog box appears.

    2. Browse to and select the file you want to use.
    3. Select a code page.
    4. Click OK.
    5. Edit the import settings as needed.
    6. Click Next. Follow the directions in the wizard to manipulate the column breaks in the file preview window. Move existing column breaks by dragging them. Double-click a column break to delete it.
    7. Click Next and enter column information for each column in the file.
    8. Click Finish.
    9. Click Repository > Save.

    To import a delimited flat file definition: Delimited flat files are always character-oriented and line sequential. The column precision is always measured in characters for string columns and in significant digits for numeric columns. Each row ends with a newline character. We can import a delimited file that does not contain binary data or multibyte character data greater than two bytes per character. Steps:

    1) Repeat steps 1-5 as for a fixed-width file.
    2) Click Next.
    3) Enter the following settings:

       Delimiters (Required): Character used to separate columns of data. Use the Other field to enter a different delimiter.
       Treat Consecutive Delimiters as One (Optional): If selected, the Flat File Wizard reads one or more consecutive column delimiters as one.
       Escape Character (Optional): Character immediately preceding a column delimiter character embedded in an unquoted string, or immediately preceding the quote character in a quoted string.
       Remove Escape Character From Data (Optional): Clear this option to include the escape character in the output string.
       Use Default Text Length (Optional): If selected, the Flat File Wizard uses the entered default text length for all string datatypes.
       Text Qualifier (Required): Quote character that defines the boundaries of text strings. Choose No Quote, Single Quote, or Double Quotes.

    4) Enter column information for each column in the file.
    5) Click Finish.
    6) Click Repository > Save.

  • Editing Flat File Definitions

    1) Select Tools > Source Analyzer.
    2) Drag the file you want to edit into the workspace.
    3) In the Source Analyzer, double-click the title bar of the source definition.

    We can edit source or target flat file definitions using the following definition tabs:

    Table tab: Edit properties such as table name, business name, and flat file properties.

    Columns tab: Edit column information such as column names, datatypes, precision, and formats.

    Properties tab: View the default numeric and datetime format properties in the Source Analyzer and the Target Designer. You can edit these properties for each source and target instance in a mapping in the Mapping Designer.

    Metadata Extensions tab: Extend the metadata stored in the repository by associating information with repository objects, such as flat file definitions.

    4) Click the Advanced button to edit the flat file properties. A different dialog box appears for fixed-width and delimited files.

    5) Make the changes as needed.
    6) Click OK.
    7) Click Repository > Save.

    Target flat file definitions are handled in the same way as described in the sections above. Just make sure to select Tools > Target Designer instead of the Source Analyzer; the rest is the same.

  • 3.2 TARGET DESIGNER

    Before we create a mapping, we must define targets in the repository. Use the Target Designer to import and design target definitions. Target definitions include properties such as column names and data types.

    Types of target definitions:

    Relational: Create a relational target for a particular database platform.
    Flat file: Create fixed-width and delimited flat file target definitions.
    XML file: Create an XML target definition to output data to an XML file.

    Ways of creating target definitions:

    1. Import the definition for an existing target: Import the target definition from a relational target or a flat file. The Target Designer uses a Flat File Wizard to import flat files.

    2. Create a target definition based on a source definition: Drag a source definition into the Target Designer to make a target definition, and edit it to make any necessary changes.

    3. Create a target definition based on a transformation or mapplet: Drag a transformation into the Target Designer to make a target definition.

    4. Manually create a target definition: Create a target definition in the Target Designer.

    5. Design several related targets: Create several related target definitions at the same time. You can create the overall relationship, called a schema, and the target definitions, through wizards in the Designer.

    After we create a relational target table definition, we also need to create the table in the database. Steps:

    1. In the Target Designer, select the relational target definition you want to create in the database. If you want to create multiple tables, select all relevant table definitions.

    2. Click Targets > Generate/Execute SQL.
    3. Click Connect and select the database where the target table should be created. Click OK to make the connection.
    4. Click Generate SQL File if you want to create the SQL script, or Generate and Execute if you want to create the file and then immediately run it.
    5. Click Close.

  • 3.3 MAPPINGS

    A mapping is a set of source and target definitions linked by transformation objects that define the rules for data transformation. Mappings represent the data flow between sources and targets. When the Integration Service runs a session, it uses the instructions configured in the mapping to read, transform, and write data.

    Mapping Components:

    Source definition: Describes the characteristics of a source table or file.
    Transformation: Modifies data before writing it to targets. Use different transformation objects to perform different functions.
    Target definition: Defines the target table or file.
    Links: Connect sources, targets, and transformations so the Integration Service can move the data as it transforms it.

    The work of an Informatica developer is to build mappings as per client requirements. We drag the source and target definitions into the workspace and create various transformations to modify the data as needed.

    We then run the mappings by creating sessions and workflows, and we unit test the mappings.

    Steps to create a mapping:

    1. Open the Mapping Designer.
    2. Click Mappings > Create, or drag a repository object into the workspace.
    3. Enter a name for the new mapping and click OK.

  • 3.4 TRANSFORMATIONS

    A transformation is a repository object that generates, modifies, or passes data. You configure logic in a transformation that the Integration Service uses to transform data. The Designer provides a set of transformations that perform specific functions. For example, an Aggregator transformation performs calculations on groups of data. Transformations in a mapping represent the operations the Integration Service performs on the data. Data passes through transformation ports that we link in a mapping or mapplet.

    Types of Transformations: Active: An active transformation can change the number of rows that pass through it, such as a Filter transformation that removes rows that do not meet the filter condition.

    Passive: A passive transformation does not change the number of rows that pass through it, such as an Expression transformation that performs a calculation on data and passes all rows through the transformation.

    Connected: A connected transformation is connected to other transformations in the mapping.

    Unconnected: An unconnected transformation is not connected to other transformations in the mapping. An unconnected transformation is called within another transformation, and returns a value to that transformation.

    Reusable: Reusable transformations can be used in multiple mappings. They are created in the Transformation Developer tool, or by promoting a non-reusable transformation from the Mapping Designer.

    We can create most transformations as either non-reusable or reusable. The External Procedure transformation can be created as a reusable transformation only. The Source Qualifier is not reusable.

    Non-reusable: Non-reusable transformations exist within a single mapping. They are created in the Mapping Designer tool.

    Single-Group Transformation: Transformations that have one input and one output group.

    Multi-Group Transformations: Transformations that have multiple input groups, multiple output groups, or both. A group is the representation of a row of data entering or leaving a transformation. Example: Union, Router, Joiner, HTTP etc.

  • 3.4.1 Working with Ports

    After we create a transformation, we need to add and configure ports using the Ports tab. Ports are equivalent to columns in Informatica.

    Creating Ports: We can create a new port in the following ways:

    Drag a port from another transformation. When we drag a port from another transformation the Designer creates a port with the same properties, and it links the two ports. Click Layout > Copy Columns to enable copying ports.

    Click the Add button on the Ports tab. The Designer creates an empty port you can configure.

    3.4.2 Using Default Values for Ports All transformations use default values that determine how the Integration Service handles input null values and output transformation errors.

    Input port: The system default value for null input ports is NULL. It displays as a blank in the transformation. If an input value is NULL, the Integration Service leaves it as NULL.

    Output port: The system default value for output transformation errors is ERROR. The default value appears in the transformation as ERROR('transformation error'). If a transformation error occurs, the Integration Service skips the row. The Integration Service notes all input rows skipped by the ERROR function in the session log file.

    Input/output port: The system default value for null input is the same as input ports, NULL. The system default value appears as a blank in the transformation. The default value for output transformation errors is the same as output ports.

    Note: Variable ports do not support default values. The Integration Service initializes variable ports according to the datatype. Note: The Integration Service ignores user-defined default values for unconnected transformations.

  • 3.4.3 User-Defined Default Values

    Constant value: Use any constant (numeric or text), including NULL. Example: 0, 9999, 'Unknown Value', NULL

    Constant expression: We can include a transformation function with constant parameters. Example: 500 * 1.75, TO_DATE('January 1, 1998, 12:05 AM'), ERROR ('Null not allowed')

    ERROR: Generate a transformation error. Write the row and a message in the session log or row error log. The Integration Service writes the row to session log or row error log based on session configuration. Use the ERROR function as the default value when we do not want null values to pass into a transformation. For example, we might want to skip a row when the input value of DEPT_NAME is NULL. You could use the following expression as the default value: ERROR('Error. DEPT is NULL')

    ABORT: Abort the session. The session aborts when the Integration Service encounters a null input value. The Integration Service does not increase the error count or write rows to the reject file. Example: ABORT('DEPT is NULL')

  • 3.5 TRACING LEVELS

    When we configure a transformation, we can set the amount of detail the Integration Service writes in the session log.

    We set the tracing level on the Properties tab of a transformation.

    Normal: The Integration Service logs initialization and status information, errors encountered, and skipped rows due to transformation row errors. It summarizes session results, but not at the level of individual rows.

    Terse: The Integration Service logs initialization information, error messages, and notification of rejected data.

    Verbose Initialization: In addition to normal tracing, the Integration Service logs additional initialization details, names of index and data files used, and detailed transformation statistics.

    Verbose Data: In addition to verbose initialization tracing, the Integration Service logs each row that passes into the mapping. It allows the Integration Service to write errors to both the session log and error log when you enable row error logging, and the Integration Service writes row data for all rows in a block when it processes a transformation.

    Change the tracing level to a Verbose setting only when we need to debug a transformation that is not behaving as expected.

    To add a slight performance boost, we can also set the tracing level to Terse.

  • 3.6 BASIC FIRST MAPPING

    First make sure that we have created a shared folder and a developer folder, along with a user, as described in the Installation Guide. We will transfer data from the EMP table in the source to the EMP_Tgt table in the target. Also create ODBC connections for the source and target databases.

    Importing Source Definition:

    1. Select the Shared folder. Right-click it and select Open.
    2. The Shared folder will become bold, which means we are now connected to it.
    3. Click Tools > Source Analyzer.
    4. Now we will import the source table definitions into the Shared folder.
    5. Click Sources > Import from Database.
    6. In the dialog box displayed, give the connection information for the source database.
    7. Click Connect. The tables in the source database will be displayed.
    8. Select the tables you need and click OK.
    9. The table definitions will be displayed. We can edit them as needed.

    Note: We can edit a source definition only by dragging the table into the Source Analyzer.

    Creating Target Table EMP_Tgt in Target database

    1. Connect to the Shared folder and click Tools > Target Designer.
    2. Drag the EMP table definition from the left-side pane into the Target Designer.
    3. We will see the EMP table definition in the Target Designer.
    4. Right-click EMP > Edit, click Rename, and give the name EMP_Tgt.
    5. Click Apply > OK.
    6. Now we will create this table in the target database.
    7. Click Targets > Generate/Execute SQL.
    8. Click Connect and give the login information for the target database.
    9. Select the table generation options.
    10. Click the Generate/Execute button.
    11. Click Repository > Save.

    We are doing this for practice only. In a project, all the source and target tables are created by the DBA; we just import the table definitions.
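    For reference, the script produced by Generate SQL File for EMP_Tgt would look roughly like the following. This is only a sketch, assuming the classic Oracle EMP column layout; the actual script depends on the imported definition and the target database type.

    CREATE TABLE EMP_Tgt (
        EMPNO    NUMBER(4) NOT NULL,
        ENAME    VARCHAR2(10),
        JOB      VARCHAR2(9),
        MGR      NUMBER(4),
        HIREDATE DATE,
        SAL      NUMBER(7,2),
        COMM     NUMBER(7,2),
        DEPTNO   NUMBER(2)
    );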

    Now we have all the tables we need in the Shared folder. We now need to create shortcuts to them in our own folder.

    1. Right-click the Shared folder and select Disconnect.
    2. Select the folder where we want to create the mapping.
    3. Right-click the folder and click Open. The folder will become bold.
    4. We will now create shortcuts to the tables we need in our work folder.
    5. Click the + sign on the Shared folder, expand Sources, and select the EMP table.
    6. Click Edit > Copy.
    7. Select our folder (the one that is bold).
    8. Click Edit > Paste Shortcut.
    9. Do the same for all source and target tables.
    10. Rename all the shortcuts to remove the Shortcut_to_ prefix.
    11. Click Repository > Save.

    Shortcut use:

    If we select the Paste option instead, a copy of the EMP table definition is created.

    Suppose we are 10 people: 5 use a shortcut to EMP and 5 copy its definition. Now suppose the definition of EMP changes in the database, so we reimport the EMP definition and the old definition is replaced. Developers who were using shortcuts will see the changes reflected in their mappings automatically; developers using copies will have to reimport manually.

    So, for maintenance and ease, we use shortcuts to source and target definitions in our folder, and shortcuts to other reusable transformations and mapplets.

    Creating Mapping:

    1. Open the folder where we want to create the mapping.
    2. Click Tools > Mapping Designer.
    3. Click Mappings > Create and give the mapping a name, e.g., m_basic_mapping.
    4. Drag EMP from the sources and EMP_Tgt from the targets into the mapping.
    5. Link the ports from SQ_EMP to EMP_Tgt.
    6. Click Mappings > Validate.
    7. Click Repository > Save.

  • Creating Session: Now we will create session in workflow manager.

    1. Open the Workflow Manager and connect to the repository.
    2. Open the folder with the same name in which we created the mapping.
    3. Make sure the folder is bold.
    4. Click Tools > Task Developer.
    5. Click Task > Create, select the Session task, and give it a name, e.g., s_m_basic_mapping.
    6. Select the correct mapping from the list displayed.
    7. Click Create, and then Done.
    8. Right-click the session and click Edit.
    9. Select the Mapping tab.
    10. Go to SQ_EMP under the sources and give the correct relational connection for it.
    11. Do the same for EMP_Tgt.
    12. For the target table, set the Load Type option to Normal and also select the Truncate Target Table option.
    13. Click Task > Validate.

    Creating Workflow:

1. Click Tools -> Workflow Designer.
2. Workflows -> Create -> give a name like wf_basic_mapping.
3. Click OK.
4. The START task will be displayed. It is the starting point for the Informatica server.
5. Drag the session into the workflow.
6. Click Tasks -> Link Task. Connect START to the session.
7. Click Workflows -> Validate.
8. Repository -> Save.

    Now open Workflow Monitor first.

1. Go back to the Workflow Manager. Right-click the workflow wf_basic_mapping.

2. Select Start Workflow. You can view the status in the Workflow Monitor. Check the data in the target table. Command: select * from table_name;

  • 3.7 EXPRESSION TRANSFORMATION

Passive and connected transformation.
Use the Expression transformation to calculate values in a single row before we write to the target. For example, we might need to adjust employee salaries, concatenate first and last names, or convert strings to numbers.
Use the Expression transformation to perform any non-aggregate calculations. Examples: addition, subtraction, multiplication, division, CONCAT, uppercase conversion, lowercase conversion, etc.
We can also use the Expression transformation to test conditional statements before we output the results to target tables or other transformations. Examples: IIF, DECODE.
There are 3 types of ports in an Expression Transformation:

Input
Output
Variable: Used to store any temporary calculation.

    Calculating Values To use the Expression transformation to calculate values for a single row, we must include the following ports:

    Input or input/output ports for each value used in the calculation: For example: To calculate Total Salary, we need salary and commission.

    Output port for the expression: We enter one expression for each output port. The return value for the output port needs to match the return value of the expression.

    We can enter multiple expressions in a single Expression transformation. We can create any number of output ports in the transformation. Example: Calculating Total Salary of an Employee

Import the source table EMP in the Shared folder. If it is already there, then don't import it again.

    In shared folder, create the target table Emp_Total_SAL. Keep all ports as in EMP table except Sal and Comm in target table. Add Total_SAL port to store the calculation.

    Create the necessary shortcuts in the folder.

  • Creating Mapping:

1. Open the folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping -> Create -> give the mapping name. Ex: m_totalsal
4. Drag EMP from the sources into the mapping.
5. Click Transformation -> Create -> select Expression from the list. Give a name and click Create. Now click Done.
6. Link the ports from SQ_EMP to the Expression Transformation.
7. Edit the Expression Transformation. As we do not want SAL and COMM in the target, remove the check from the output port for both columns.
8. Create a new port out_Total_SAL. Make it an output port only.
9. Click the small button that appears in the Expression section of the dialog box and enter the expression in the Expression Editor.
10. Enter the expression SAL + COMM. You can select SAL and COMM from the Ports tab in the Expression Editor.
11. Check the expression syntax by clicking Validate.
12. Click OK -> Apply -> OK.
13. Connect the ports from the Expression to the target table.
14. Click Mapping -> Validate.
15. Repository -> Save.

    Create Session and Workflow as described earlier. Run the workflow and see the data in target table.

• As COMM is NULL for many employees, Total_SAL will be NULL in most cases. Open your mapping and the Expression transformation, select the COMM port and give 0 as its Default Value. Apply the changes, validate the mapping and save. Refresh the session, validate the workflow again, run it and see the result. Now use the ERROR function in the Default Value of COMM to skip rows where COMM is NULL. Syntax: ERROR('Any message here')

Similarly, we can use the ABORT function to abort the session if COMM is NULL. Syntax: ABORT('Any message here')
Make sure to double-click the session after making any change in the mapping. It will prompt that the mapping has changed; click OK to refresh the mapping. Run the workflow after validating and saving it.
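As a rough point of comparison only (the Integration Service does this row by row in the pipeline, it does not run SQL), the mapping with a default value of 0 for COMM behaves like the query below. NVL is assumed because the examples use the Oracle EMP sample table; use COALESCE on other databases.

    -- Total salary with a NULL commission treated as 0
    SELECT EMPNO, ENAME, SAL + NVL(COMM, 0) AS TOTAL_SAL
    FROM   EMP;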

  • 3.8 FILTER TRANSFORMATION

    Active and connected transformation. We can filter rows in a mapping with the Filter transformation. We pass all the rows from a source transformation through the Filter transformation, and then enter a filter condition for the transformation. All ports in a Filter transformation are input/output and only rows that meet the condition pass through the Filter transformation.

    Example: to filter records where SAL>2000

Import the source table EMP in the Shared folder. If it is already there, then don't import it again.

    In shared folder, create the target table Filter_Example. Keep all fields as in EMP table.

    Create the necessary shortcuts in the folder.

    Creating Mapping:

1. Open the folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping -> Create -> give the mapping name. Ex: m_filter_example
4. Drag EMP from the sources into the mapping.
5. Click Transformation -> Create -> select Filter from the list. Give a name and click Create. Now click Done.
6. Pass the ports from SQ_EMP to the Filter Transformation.
7. Edit the Filter Transformation. Go to the Properties tab.
8. Click the Value section of the filter condition, and then click the Open button.
9. The Expression Editor appears.
10. Enter the filter condition you want to apply. Ex: SAL > 2000
11. Click Validate to check the syntax of the condition you entered.
12. Click OK -> Apply -> OK.
13. Connect the ports from the Filter to the target table.
14. Click Mapping -> Validate.
15. Repository -> Save.

• Create the session and workflow as described earlier. Run the workflow and see the data in the target table.

How to filter out rows with null values?
To filter out rows containing null values or spaces, use the ISNULL and IS_SPACES functions to test the value of the port. For example, if we want to filter out rows that contain NULLs in the FIRST_NAME port, use the following condition:
IIF(ISNULL(FIRST_NAME), FALSE, TRUE)
This condition states that if the FIRST_NAME port is NULL, the return value is FALSE and the row is discarded. Otherwise, the row passes through to the next transformation.
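Purely as an illustration of the logic (the Filter transformation evaluates the condition for each row inside the pipeline; it does not run SQL), the SAL > 2000 example corresponds roughly to:

    -- Only rows meeting the filter condition reach the target
    SELECT * FROM EMP WHERE SAL > 2000;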

  • 3.9 ROUTER TRANSFORMATION

    Active and connected transformation. A Router transformation is similar to a Filter transformation because both transformations allow you to use a condition to test data. A Filter transformation tests data for one condition and drops the rows of data that do not meet the condition. However, a Router transformation tests data for one or more conditions and gives you the option to route rows of data that do not meet any of the conditions to a default output group. Example: If we want to keep employees of France, India, US in 3 different tables, then we can use 3 Filter transformations or 1 Router transformation.

    Mapping A uses three Filter transformations while Mapping B produces the same result with one Router transformation. A Router transformation consists of input and output groups, input and output ports, group filter conditions, and properties that we configure in the Designer.

• Working with Groups
A Router transformation has the following types of groups:

Input: the group that gets the input ports.
Output: user-defined groups and the default group. We cannot modify or delete output ports or their properties.

User-Defined Groups: We create a user-defined group to test a condition based on incoming data. A user-defined group consists of output ports and a group filter condition. We can create and edit user-defined groups on the Groups tab in the Designer. Create one user-defined group for each condition that we want to specify.

The Default Group: The Designer creates the default group after we create one user-defined group. The Designer does not allow us to edit or delete the default group, and it has no group filter condition associated with it. If all of the conditions evaluate to FALSE, the Integration Service passes the row to the default group.

Example: Filtering employees of Department 10 to EMP_10, Department 20 to EMP_20 and the rest to EMP_REST.

    Source is EMP Table. Create 3 target tables EMP_10, EMP_20 and EMP_REST in shared folder.

    Structure should be same as EMP table. Create the shortcuts in your folder.

    Creating Mapping:

1. Open the folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping -> Create -> give the mapping name. Ex: m_router_example
4. Drag EMP from the sources into the mapping.
5. Click Transformation -> Create -> select Router from the list. Give a name and click Create. Now click Done.
6. Pass the ports from SQ_EMP to the Router Transformation.
7. Edit the Router Transformation and go to the Groups tab.
8. Click the Add button to create a user-defined group. The default group is created automatically.
9. Click the Group Filter Condition field to open the Expression Editor.
10. Enter a group filter condition. Ex: DEPTNO=10
11. Click Validate to check the syntax of the condition you entered.
12. Create another group for EMP_20. Condition: DEPTNO=20
13. The records not matching the above two conditions will be passed to the DEFAULT group. See the sample mapping.
14. Click OK -> Apply -> OK.
15. Connect the ports from the Router groups to the target tables.
16. Click Mapping -> Validate.
17. Repository -> Save.

Create the session and workflow as described earlier. Run the workflow and see the data in the target tables. Make sure to give connection information for all 3 target tables.

    Sample Mapping:

    Difference between Router and Filter

    1> We cannot pass rejected data forward in filter but we can pass it in router. Rejected data is in Default Group of router.

    2> Filter has no default group.
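As a rough SQL sketch of the routing example above (illustration only; the Router evaluates the group conditions once per row inside the mapping, and a row goes to every group whose condition it meets):

    -- User-defined groups
    INSERT INTO EMP_10   SELECT * FROM EMP WHERE DEPTNO = 10;
    INSERT INTO EMP_20   SELECT * FROM EMP WHERE DEPTNO = 20;
    -- Default group: rows that satisfy none of the user-defined conditions
    INSERT INTO EMP_REST SELECT * FROM EMP WHERE DEPTNO NOT IN (10, 20) OR DEPTNO IS NULL;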

  • 3.10 UNION TRANSFORMATION

    Active and Connected transformation. The Union transformation is a multiple input group transformation that you can use to merge data from multiple pipelines or pipeline branches into one pipeline branch. It merges data from multiple sources similar to the UNION ALL SQL statement to combine the results from two or more SQL statements.

    Union Transformation Rules and Guidelines

We can create multiple input groups, but only one output group.
We can connect heterogeneous sources to a Union transformation.
All input groups and the output group must have matching ports. The precision, datatype, and scale must be identical across all groups.
The Union transformation does not remove duplicate rows. To remove duplicate rows, we must add another transformation such as a Router or Filter transformation.

    We cannot use a Sequence Generator or Update Strategy transformation upstream from a Union transformation.
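For comparison only (the Union transformation merges the pipelines in memory; it does not issue SQL), its behaviour matches UNION ALL, so the example that follows is roughly:

    -- UNION ALL keeps duplicates, just like the Union transformation
    SELECT * FROM EMP_10
    UNION ALL
    SELECT * FROM EMP_20
    UNION ALL
    SELECT * FROM EMP_REST;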

Union Transformation Components
When we configure a Union transformation, define the following components:
Transformation tab: We can rename the transformation and add a description.
Properties tab: We can specify the tracing level.
Groups tab: We can create and delete input groups. The Designer displays groups we create on the Ports tab.
Group Ports tab: We can create and delete ports for the input groups. The Designer displays ports we create on the Ports tab.
We cannot modify the Ports, Initialization Properties, Metadata Extensions, or Port Attribute Definitions tabs in a Union transformation. Create input groups on the Groups tab, and create ports on the Group Ports tab. We can create one or more input groups on the Groups tab. The Designer creates one output group by default; we cannot edit or delete it.

Example: to combine data of tables EMP_10, EMP_20 and EMP_REST

Import the tables EMP_10, EMP_20 and EMP_REST into the Shared folder as sources. Create a target table EMP_UNION_EXAMPLE in the Target Designer; its structure should be the same as the EMP table. Create the shortcuts in your folder.

  • Creating Mapping:

1. Open the folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping -> Create -> give the mapping name. Ex: m_union_example
4. Drag EMP_10, EMP_20 and EMP_REST from the sources into the mapping.
5. Click Transformation -> Create -> select Union from the list. Give a name and click Create. Now click Done.
6. Pass the ports from SQ_EMP_10 to the Union Transformation.
7. Edit the Union Transformation and go to the Groups tab.
8. One group will already be there because we dragged ports from SQ_EMP_10 to the Union Transformation.
9. As we have 3 source tables, we need 3 input groups. Click the Add button to add 2 more groups. See the sample mapping.
10. We can also modify ports on the Ports tab.
11. Click Apply -> OK.
12. Drag in the target table now.
13. Connect the output ports from the Union to the target table.
14. Click Mapping -> Validate.
15. Repository -> Save.

Create the session and workflow as described earlier. Run the workflow and see the data in the target table. Make sure to give connection information for all 3 source tables.

    Sample mapping picture

  • 3.11 SORTER TRANSFORMATION

Connected and Active Transformation.
The Sorter transformation allows us to sort data. We can sort data in ascending or descending order according to a specified sort key. We can also configure the Sorter transformation for case-sensitive sorting, and specify whether the output rows should be distinct.
When we create a Sorter transformation in a mapping, we specify one or more ports as the sort key and configure each sort key port to sort in ascending or descending order. We also configure the sort criteria the PowerCenter Server applies to all sort key ports and the system resources it allocates to perform the sort operation.
The Sorter transformation contains only input/output ports. All data passing through the Sorter transformation is sorted according to the sort key, which is one or more ports that we want to use as the sort criteria.

    Sorter Transformation Properties

    1. Sorter Cache Size: The PowerCenter Server uses the Sorter Cache Size property to determine the maximum amount of memory it can allocate to perform the sort operation. The PowerCenter Server passes all incoming data into the Sorter transformation before it performs the sort operation.

    We can specify any amount between 1 MB and 4 GB for the Sorter cache size.

    If it cannot allocate enough memory, the PowerCenter Server fails the session.

    For best performance, configure Sorter cache size with a value less than or equal to the amount of available physical RAM on the PowerCenter Server machine.

    Informatica recommends allocating at least 8 MB (8,388,608 bytes) of physical memory to sort data using the Sorter transformation.

    2. Case Sensitive: The Case Sensitive property determines whether the PowerCenter Server considers case when sorting data. When we enable the Case Sensitive property, the PowerCenter Server sorts uppercase characters higher than lowercase characters.

3. Work Directory: The directory the PowerCenter Server uses to create temporary files while it sorts data.

    4. Distinct:

    Check this option if we want to remove duplicates. Sorter will sort data according to all the ports when it is selected.
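As a rough SQL analogue (the Sorter sorts rows in memory and cache inside the session; it does not push an ORDER BY to the database), sorting EMP by ENAME, and the effect of the Distinct property, look like:

    -- Sort by the sort key
    SELECT * FROM EMP ORDER BY ENAME;
    -- With the Distinct property checked, duplicates are removed across all ports
    SELECT DISTINCT * FROM EMP ORDER BY ENAME;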

  • Example: Sorting data of EMP by ENAME

Source is the EMP table. Create a target table EMP_SORTER_EXAMPLE in the Target Designer with the same structure as the EMP table. Create the shortcuts in your folder.

    Creating Mapping:

1. Open the folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping -> Create -> give the mapping name. Ex: m_sorter_example
4. Drag EMP from the sources into the mapping.
5. Click Transformation -> Create -> select Sorter from the list. Give a name and click Create. Now click Done.
6. Pass the ports from SQ_EMP to the Sorter Transformation.
7. Edit the Sorter Transformation and go to the Ports tab.
8. Select ENAME as the sort key by putting a check mark in the Key column in front of ENAME.
9. Click the Properties tab and set the properties as needed.
10. Click Apply -> OK.
11. Drag in the target table now.
12. Connect the output ports from the Sorter to the target table.
13. Click Mapping -> Validate.
14. Repository -> Save.

Create the session and workflow as described earlier. Run the workflow and see the data in the target table. Make sure to give connection information for all tables.

    Sample Sorter Mapping

  • 3.12 RANK TRANSFORMATION

Active and connected transformation.
The Rank transformation allows us to select only the top or bottom rank of data. It allows us to select a group of top or bottom values, not just one value. During the session, the PowerCenter Server caches input data until it can perform the rank calculations.

    Rank Transformation Properties

Cache Directory: where the cache will be made.
Top/Bottom: top or bottom rank, as per need.
Number of Ranks: e.g. 1, 2 or any number.
Case Sensitive Comparison: can be checked if needed.
Rank Data Cache Size: can be set.
Rank Index Cache Size: can be set.

    Ports in a Rank Transformation

Ports (Number Required - Description):

I: 1 minimum. Port to receive data from another transformation.

O: 1 minimum. Port we want to pass to another transformation.

V: not needed. Can be used to store values or calculations to use in an expression.

R: only 1. Rank port; the rank is calculated according to it. The Rank port is an input/output port, and we must link it to another transformation. Example: Total Salary.

    Rank Index The Designer automatically creates a RANKINDEX port for each Rank transformation. The PowerCenter Server uses the Rank Index port to store the ranking position for each row in a group. For example, if we create a Rank transformation that ranks the top five salaried employees, the rank index numbers the employees from 1 to 5.

The RANKINDEX is an output port only. We can pass the rank index to another transformation in the mapping or directly to a target. We cannot delete or edit it.

  • Defining Groups Rank transformation allows us to group information. For example: If we want to select the top 3 salaried employees of each Department, we can define a group for department.

    By defining groups, we create one set of ranked rows for each group. We define a group in Ports tab. Click the Group By for needed port. We cannot Group By on port which is also Rank Port.

    1> Example: Finding Top 5 Salaried Employees

EMP will be the source table. Create a target table EMP_RANK_EXAMPLE in the Target Designer with the same structure as the EMP table, plus one more port Rank_Index to store the RANKINDEX.

    Create the shortcuts in your folder.

    Creating Mapping:

1. Open the folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping -> Create -> give the mapping name. Ex: m_rank_example
4. Drag EMP from the sources into the mapping.
5. Create an Expression transformation to calculate TOTAL_SAL.
6. Click Transformation -> Create -> select Rank from the list. Give a name and click Create. Now click Done.
7. Pass the ports from the Expression to the Rank Transformation.
8. Edit the Rank Transformation and go to the Ports tab.
9. Select TOTAL_SAL as the rank port by checking R in front of TOTAL_SAL.
10. Click the Properties tab and set the properties as needed:
11. Top in Top/Bottom and Number of Ranks as 5.
12. Click Apply -> OK.
13. Drag in the target table now.
14. Connect the output ports from the Rank to the target table.
15. Click Mapping -> Validate.
16. Repository -> Save.

Create the session and workflow as described earlier. Run the workflow and see the data in the target table. Make sure to give connection information for all tables.

    2> Example: Finding Top 2 Salaried Employees for every DEPARTMENT

    Open the mapping made above. Edit Rank Transformation. Go to Ports Tab. Select Group By for DEPTNO. Go to Properties tab. Set Number of Ranks as 2. Click Apply -> Ok. Mapping -> Validate and Repository Save.

Refresh the session by double-clicking it. Save the changes and run the workflow to see the new result.
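For comparison only, the two rank examples correspond roughly to the analytic SQL below. This is just a sketch: it assumes TOTAL_SAL = SAL + COMM (with NULL treated as 0), and RANK() handles ties slightly differently from the RANKINDEX port.

    -- Example 1: top 5 salaried employees overall
    SELECT * FROM (
        SELECT E.*, RANK() OVER (ORDER BY SAL + NVL(COMM, 0) DESC) AS RANK_INDEX
        FROM   EMP E
    ) WHERE RANK_INDEX <= 5;

    -- Example 2: top 2 salaried employees per department (Group By on DEPTNO)
    SELECT * FROM (
        SELECT E.*, RANK() OVER (PARTITION BY DEPTNO ORDER BY SAL + NVL(COMM, 0) DESC) AS RANK_INDEX
        FROM   EMP E
    ) WHERE RANK_INDEX <= 2;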

  • Sample Rank Mapping

RANK CACHE
When the PowerCenter Server runs a session with a Rank transformation, it compares an input row with rows in the data cache. If the input row out-ranks a stored row, the PowerCenter Server replaces the stored row with the input row. Example: PowerCenter caches the first 5 rows if we are finding the top 5 salaried employees. When the 6th row is read, it is compared with the 5 rows in the cache and placed in the cache if needed.

    1> RANK INDEX CACHE: The index cache holds group information from the group by ports. If we are using Group By on DEPTNO, then this cache stores values 10, 20, 30 etc.

    All Group By Columns are in RANK INDEX CACHE. Ex. DEPTNO

    2> RANK DATA CACHE: It holds row data until the PowerCenter Server completes the ranking and is generally larger than the index cache. To reduce the data cache size, connect only the necessary input/output ports to subsequent transformations.

All variable ports (if any), the Rank port, and all ports going out from the Rank transformation are stored in the RANK DATA CACHE.

Example: all ports except DEPTNO in our mapping example.

• 3.13 AGGREGATOR TRANSFORMATION

Connected and Active Transformation.
The Aggregator transformation allows us to perform aggregate calculations, such as averages and sums. The Aggregator transformation allows us to perform calculations on groups.

    Components of the Aggregator Transformation

1> Aggregate expression
2> Group by port
3> Sorted Input
4> Aggregate cache

    1> Aggregate Expressions

    Entered in an output port. Can include non-aggregate expressions and conditional clauses.

The transformation language includes the following aggregate functions: AVG, COUNT, MAX, MIN, SUM, FIRST, LAST, MEDIAN, PERCENTILE, STDDEV, VARIANCE.

Single-level aggregate function: MAX(SAL)
Nested aggregate function: MAX( COUNT( ITEM ))

    Nested Aggregate Functions

    In Aggregator transformation, there can be multiple single level functions or multiple nested functions.

An Aggregator transformation cannot have both types of functions together. It can include one aggregate function nested within another aggregate function, but only one level of nesting: MAX( COUNT( ITEM )) is correct, while MIN( MAX( COUNT( ITEM ))) is not.

    Conditional Clauses

    We can use conditional clauses in the aggregate expression to reduce the number of rows used in the aggregation. The conditional clause can be any clause that evaluates to TRUE or FALSE.

    SUM( COMMISSION, COMMISSION > QUOTA )

    Non-Aggregate Functions

We can also use non-aggregate functions in the aggregate expression. Example: IIF( MAX( QUANTITY ) > 0, MAX( QUANTITY ), 0 )
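As a rough SQL analogue of the conditional clause SUM( COMMISSION, COMMISSION > QUOTA ) shown above (illustration only; the SALES table and its columns are hypothetical):

    -- Only rows where COMMISSION > QUOTA contribute to the sum
    SELECT SUM(CASE WHEN COMMISSION > QUOTA THEN COMMISSION END) AS SUM_COMMISSION
    FROM   SALES;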

  • 2> Group By Ports

Indicates how to create groups. When grouping data, the Aggregator transformation outputs the last row of each group unless otherwise specified.
The Aggregator transformation allows us to define groups for aggregations, rather than performing the aggregation across all input data. For example, we can find the maximum salary for every department.

    In Aggregator Transformation, Open Ports tab and select Group By as needed.

    3> Using Sorted Input

Used to improve session performance. To use sorted input, we must pass data to the Aggregator transformation sorted by the group by port, in ascending or descending order.
When we use this option, we tell the Aggregator that the data coming to it is already sorted. We check the Sorted Input option on the Properties tab of the transformation.
If the option is checked but we are not passing sorted data to the transformation, then the session fails.

    4> Aggregator Caches

    The PowerCenter Server stores data in the aggregate cache until it completes aggregate calculations.

    It stores group values in an index cache and row data in the data cache. If the PowerCenter Server requires more space, it stores overflow values in cache files.

    Note: The PowerCenter Server uses memory to process an Aggregator transformation with sorted ports. It does not use cache memory. We do not need to configure cache memory for Aggregator transformations that use sorted ports.

    1> Aggregator Index Cache: The index cache holds group information from the group by ports. If we are using Group By on DEPTNO, then this cache stores values 10, 20, 30 etc.

    All Group By Columns are in AGGREGATOR INDEX CACHE. Ex. DEPTNO

    2> Aggregator Data Cache: DATA CACHE is generally larger than the AGGREGATOR INDEX CACHE. Columns in Data Cache:

Variable ports, if any.
Non-group-by input/output ports.
Non-group-by input ports used in non-aggregate output expressions.
Ports containing aggregate functions.

    Example: All ports except DEPTNO in our mapping example.

  • 1> Example: To calculate MAX, MIN, AVG and SUM of salary of EMP table.

EMP will be the source table. Create a target table EMP_AGG_EXAMPLE in the Target Designer. The table should contain DEPTNO, MAX_SAL, MIN_SAL, AVG_SAL and SUM_SAL. Create the shortcuts in your folder.

    Creating Mapping:

1. Open the folder where we want to create the mapping.
2. Click Tools -> Mapping Designer.
3. Click Mapping -> Create -> give the mapping name. Ex: m_agg_example
4. Drag EMP from the sources into the mapping.
5. Click Transformation -> Create -> select Aggregator from the list. Give a name and click Create. Now click Done.
6. Pass only SAL and DEPTNO from SQ_EMP to the Aggregator Transformation.
7. Edit the Aggregator Transformation and go to the Ports tab.
8. Create 4 output ports: OUT_MAX_SAL, OUT_MIN_SAL, OUT_AVG_SAL, OUT_SUM_SAL.
9. Open the Expression Editor for each output port and give the calculations. Ex: MAX(SAL), MIN(SAL), AVG(SAL), SUM(SAL)
10. Click Apply -> OK.
11. Drag in the target table now.
12. Connect the output ports from the Aggregator to the target table.
13. Click Mapping -> Validate.
14. Repository -> Save.

Create the session and workflow as described earlier. Run the workflow and see the data in the target table. Make sure to give connection information for all tables.

  • 2> Example: To calculate MAX, MIN, AVG and SUM of salary of EMP table for every DEPARTMENT

Open the mapping made above. Edit the Aggregator Transformation. Go to the Ports tab. Select Group By for DEPTNO. Click Apply -> OK. Mapping -> Validate and Repository -> Save.

Refresh the session by double-clicking it. Save the changes and run the workflow to see the new result.

Scene 1: What will be the output of the picture below?
Here we are not doing any calculation or group by. In this case, the DEPTNO and SAL of the last record of the EMP table will be passed to the target.
Scene 2: What will be the output of the above picture if Group By is done on DEPTNO?
Here we are not doing any calculation but Group By is there on DEPTNO. In this case, the last record of every DEPTNO from the EMP table will be passed to the target.
Scene 3: What will be the output of Example 1?
In Example 1, we are calculating MAX, MIN, AVG and SUM but we are not doing any Group By. In this case, the DEPTNO of the last record of the EMP table will be passed. The calculations, however, will be correct.
Scene 4: What will be the output of Example 2?
In Example 2, we are calculating MAX, MIN, AVG and SUM for every DEPTNO. In this case, the DEPTNO and the correct calculations for every DEPTNO will be passed to the target.
Scene 5: Use Sorted Input in the Properties tab and check the output.
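For comparison only (a sketch, not what the session executes), the grouped aggregation of Example 2 corresponds roughly to the SQL below; Example 1 is the same aggregation without the GROUP BY, except that the Aggregator also passes through the DEPTNO of the last row it reads.

    -- Example 2: aggregate salary figures per department
    SELECT DEPTNO,
           MAX(SAL) AS MAX_SAL,
           MIN(SAL) AS MIN_SAL,
           AVG(SAL) AS AVG_SAL,
           SUM(SAL) AS SUM_SAL
    FROM   EMP
    GROUP BY DEPTNO;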

  • 3.14 JOINER TRANSFORMATION

Connected and Active Transformation.
Used to join source data from two related heterogeneous sources residing in different locations or file systems. We can also join data from the same source. If we need to join 3 tables, then we need 2 Joiner Transformations.
The Joiner transformation joins two sources with at least one matching port.

    The Joiner transformation uses a condition that matches one or more pairs of ports between the two sources.

    Example: To join EMP and DEPT tables.

EMP and DEPT will be the source tables. Create a target table JOINER_EXAMPLE in the Target Designer. The table should contain all ports of the EMP table plus DNAME and LOC, as shown below. Create the shortcuts in your folder.

    Creating Mapping:

1> Open the folder where we want to create the mapping.
2> Click Tools -> Mapping Designer.
3> Click Mapping -> Create -> give the mapping name. Ex: m_joiner_example
4> Drag in EMP, DEPT and the target. Create the Joiner Transformation and link as shown below.
5> Specify the join condition in the Condition tab. See the steps on the next page.
6> Set the Master in the Ports tab. See the steps on the next page.
7> Mapping -> Validate.
8> Repository -> Save.

    Create Session and Workflow as described earlier. Run the workflow and see the data in target table.

    Make sure to give connection information for all tables.

  • JOIN CONDITION:

The join condition contains ports from both input sources that must match for the PowerCenter Server to join two rows. Example: DEPTNO=DEPTNO1 in the mapping above.

1. Edit Joiner Transformation -> Condition tab.
2. Add the condition.

    We can add as many conditions as needed. Only = operator is allowed.

If we join Char and Varchar datatypes, the PowerCenter Server counts any spaces that pad Char values as part of the string. So if we try to join Char(40) = "abcd" with Varchar(40) = "abcd", the Char value is "abcd" padded with 36 blank spaces, and the PowerCenter Server does not join the two fields because the Char field contains trailing spaces.
Note: The Joiner transformation does not match null values.
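For comparison only (a sketch of the logic; the Joiner builds a cache from the master source rather than issuing SQL), the normal join of EMP and DEPT on DEPTNO corresponds roughly to:

    -- Equi-join of EMP and DEPT; like the Joiner, it does not match NULL DEPTNO values
    SELECT E.*, D.DNAME, D.LOC
    FROM   EMP  E
    JOIN   DEPT D ON E.DEPTNO = D.DEPTNO;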

    MASTER and DETAIL TABLES

In a Joiner, one table is called the MASTER and the other the DETAIL. The MASTER table is always cached, and we can make either table the MASTER. Edit Joiner Transformation -> Ports tab -> select M for the master table.

The table with fewer rows should be made the MASTER to improve performance. Reason:

When the PowerCenter Server processes a Joiner transformation, it reads rows from both sources concurrently and builds the index and data cache based on the master rows. The table with fewer rows is read quickly, so the cache can be built while the table with more rows is still being read.

    The fewer unique rows in the master, the fewer iterations of the join comparison occur, which speeds the join process.

    JOINER TRANSFORMATION PROPERTIES TAB

    Case-Sensitive String Comparison