
Informatica Developer (Version 9.1.0)

User Guide


Informatica Developer User Guide

Version 9.1.0
March 2011

Copyright (c) 1998-2011 Informatica. All rights reserved.

This software and documentation contain proprietary information of Informatica Corporation and are provided under a license agreement containing restrictions on use and disclosure and are also protected by copyright law. Reverse engineering of the software is prohibited. No part of this document may be reproduced or transmitted in any form, by any means (electronic, photocopying, recording or otherwise) without prior consent of Informatica Corporation. This Software may be protected by U.S. and/or international Patents and other Patents Pending.

Use, duplication, or disclosure of the Software by the U.S. Government is subject to the restrictions set forth in the applicable software license agreement and as provided in DFARS 227.7202-1(a) and 227.7702-3(a) (1995), DFARS 252.227-7013(c)(1)(ii) (OCT 1988), FAR 12.212(a) (1995), FAR 52.227-19, or FAR 52.227-14 (ALT III), as applicable.

The information in this product or documentation is subject to change without notice. If you find any problems in this product or documentation, please report them to us in writing.

Informatica, Informatica Platform, Informatica Data Services, PowerCenter, PowerCenterRT, PowerCenter Connect, PowerCenter Data Analyzer, PowerExchange, PowerMart, Metadata Manager, Informatica Data Quality, Informatica Data Explorer, Informatica B2B Data Transformation, Informatica B2B Data Exchange, Informatica On Demand, Informatica Identity Resolution, Informatica Application Information Lifecycle Management, Informatica Complex Event Processing, Ultra Messaging and Informatica Master Data Management are trademarks or registered trademarks of Informatica Corporation in the United States and in jurisdictions throughout the world. All other company and product names may be trade names or trademarks of their respective owners.

Portions of this software and/or documentation are subject to copyright held by third parties, including without limitation: Copyright DataDirect Technologies. All rights reserved. Copyright © Sun Microsystems. All rights reserved. Copyright © RSA Security Inc. All Rights Reserved. Copyright © Ordinal Technology Corp. All rights reserved. Copyright © Aandacht c.v. All rights reserved. Copyright Genivia, Inc. All rights reserved. Copyright 2007 Isomorphic Software. All rights reserved. Copyright © Meta Integration Technology, Inc. All rights reserved. Copyright © Oracle. All rights reserved. Copyright © Adobe Systems Incorporated. All rights reserved. Copyright © DataArt, Inc. All rights reserved. Copyright © ComponentSource. All rights reserved. Copyright © Microsoft Corporation. All rights reserved. Copyright © Rogue Wave Software, Inc. All rights reserved. Copyright © Teradata Corporation. All rights reserved. Copyright © Yahoo! Inc. All rights reserved. Copyright © Glyph & Cog, LLC. All rights reserved. Copyright © Thinkmap, Inc. All rights reserved. Copyright © Clearpace Software Limited. All rights reserved. Copyright © Information Builders, Inc. All rights reserved. Copyright © OSS Nokalva, Inc. All rights reserved. Copyright Edifecs, Inc. All rights reserved.

This product includes software developed by the Apache Software Foundation (http://www.apache.org/), and other software which is licensed under the Apache License, Version 2.0 (the "License"). You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

This product includes software which was developed by Mozilla (http://www.mozilla.org/), software copyright The JBoss Group, LLC, all rights reserved; software copyright © 1999-2006 by Bruno Lowagie and Paulo Soares and other software which is licensed under the GNU Lesser General Public License Agreement, which may be found at http://www.gnu.org/licenses/lgpl.html. The materials are provided free of charge by Informatica, "as-is", without warranty of any kind, either express or implied, including but not limited to the implied warranties of merchantability and fitness for a particular purpose.

The product includes ACE(TM) and TAO(TM) software copyrighted by Douglas C. Schmidt and his research group at Washington University, University of California, Irvine, and Vanderbilt University, Copyright (©) 1993-2006, all rights reserved.

This product includes software developed by the OpenSSL Project for use in the OpenSSL Toolkit (copyright The OpenSSL Project. All Rights Reserved) and redistribution of this software is subject to terms available at http://www.openssl.org.

This product includes Curl software which is Copyright 1996-2007, Daniel Stenberg, <[email protected]>. All Rights Reserved. Permissions and limitations regarding this software are subject to terms available at http://curl.haxx.se/docs/copyright.html. Permission to use, copy, modify, and distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies.

The product includes software copyright 2001-2005 (©) MetaStuff, Ltd. All Rights Reserved. Permissions and limitations regarding this software are subject to terms available at http://www.dom4j.org/license.html.

The product includes software copyright © 2004-2007, The Dojo Foundation. All Rights Reserved. Permissions and limitations regarding this software are subject to terms available at http://svn.dojotoolkit.org/dojo/trunk/LICENSE.

This product includes ICU software which is copyright International Business Machines Corporation and others. All rights reserved. Permissions and limitations regarding this software are subject to terms available at http://source.icu-project.org/repos/icu/icu/trunk/license.html.

This product includes software copyright © 1996-2006 Per Bothner. All rights reserved. Your right to use such materials is set forth in the license which may be found at http://www.gnu.org/software/kawa/Software-License.html.

This product includes OSSP UUID software which is Copyright © 2002 Ralf S. Engelschall, Copyright © 2002 The OSSP Project, Copyright © 2002 Cable & Wireless Deutschland. Permissions and limitations regarding this software are subject to terms available at http://www.opensource.org/licenses/mit-license.php.

This product includes software developed by Boost (http://www.boost.org/) or under the Boost software license. Permissions and limitations regarding this software are subject to terms available at http://www.boost.org/LICENSE_1_0.txt.

This product includes software copyright © 1997-2007 University of Cambridge. Permissions and limitations regarding this software are subject to terms available at http://www.pcre.org/license.txt.

This product includes software copyright © 2007 The Eclipse Foundation. All Rights Reserved. Permissions and limitations regarding this software are subject to terms available at http://www.eclipse.org/org/documents/epl-v10.php.

This product includes software licensed under the terms at http://www.tcl.tk/software/tcltk/license.html, http://www.bosrup.com/web/overlib/?License, http://www.stlport.org/doc/license.html, http://www.asm.ow2.org/license.html, http://www.cryptix.org/LICENSE.TXT, http://hsqldb.org/web/hsqlLicense.html, http://httpunit.sourceforge.net/doc/license.html, http://jung.sourceforge.net/license.txt, http://www.gzip.org/zlib/zlib_license.html, http://www.openldap.org/software/release/license.html, http://www.libssh2.org, http://slf4j.org/license.html, http://www.sente.ch/software/OpenSourceLicense.html, http://fusesource.com/downloads/license-agreements/fuse-message-broker-v-5-3-license-agreement, http://antlr.org/license.html, http://aopalliance.sourceforge.net/, http://www.bouncycastle.org/licence.html, http://www.jgraph.com/jgraphdownload.html, http://www.jcraft.com/jsch/LICENSE.txt and http://jotm.objectweb.org/bsd_license.html.

This product includes software licensed under the Academic Free License (http://www.opensource.org/licenses/afl-3.0.php), the Common Development and Distribution License (http://www.opensource.org/licenses/cddl1.php), the Common Public License (http://www.opensource.org/licenses/cpl1.0.php) and the BSD License (http://www.opensource.org/licenses/bsd-license.php).

This product includes software copyright © 2003-2006 Joe Walnes, 2006-2007 XStream Committers. All rights reserved. Permissions and limitations regarding this software are subject to terms available at http://xstream.codehaus.org/license.html. This product includes software developed by the Indiana University Extreme! Lab. For further information please visit http://www.extreme.indiana.edu/.


This Software is protected by U.S. Patent Numbers 5,794,246; 6,014,670; 6,016,501; 6,029,178; 6,032,158; 6,035,307; 6,044,374; 6,092,086; 6,208,990; 6,339,775; 6,640,226; 6,789,096; 6,820,077; 6,823,373; 6,850,947; 6,895,471; 7,117,215; 7,162,643; 7,254,590; 7,281,001; 7,421,458; 7,496,588; 7,523,121; 7,584,422; 7,720,842; 7,721,270; and 7,774,791, international Patents and other Patents Pending.

DISCLAIMER: Informatica Corporation provides this documentation "as is" without warranty of any kind, either express or implied, including, but not limited to, the implied warranties of non-infringement, merchantability, or use for a particular purpose. Informatica Corporation does not warrant that this software or documentation is error free. The information provided in this software or documentation may include technical inaccuracies or typographical errors. The information in this software and documentation is subject to change at any time without notice.

NOTICES

This Informatica product (the “Software”) includes certain drivers (the “DataDirect Drivers”) from DataDirect Technologies, an operating company of Progress Software Corporation (“DataDirect”) which are subject to the following terms and conditions:

1. THE DATADIRECT DRIVERS ARE PROVIDED “AS IS” WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NON-INFRINGEMENT.

2. IN NO EVENT WILL DATADIRECT OR ITS THIRD PARTY SUPPLIERS BE LIABLE TO THE END-USER CUSTOMER FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, CONSEQUENTIAL OR OTHER DAMAGES ARISING OUT OF THE USE OF THE ODBC DRIVERS, WHETHER OR NOT INFORMED OF THE POSSIBILITIES OF DAMAGES IN ADVANCE. THESE LIMITATIONS APPLY TO ALL CAUSES OF ACTION, INCLUDING, WITHOUT LIMITATION, BREACH OF CONTRACT, BREACH OF WARRANTY, NEGLIGENCE, STRICT LIABILITY, MISREPRESENTATION AND OTHER TORTS.

Part Number: IN-DUG-91000-0001


Table of Contents

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii
Informatica Resources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii

Informatica Customer Portal. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii

Informatica Documentation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii

Informatica Web Site. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii

Informatica How-To Library. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . viii

Informatica Knowledge Base. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix

Informatica Multimedia Knowledge Base. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix

Informatica Global Customer Support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ix

Part I: Informatica Developer Concepts. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1

Chapter 1: Working with Informatica Developer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
Working with Informatica Developer Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2

Informatica Data Quality and Informatica Data Explorer. . . . . . . . . . . . . . . . . . . . . . . . . . . 2

Informatica Data Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3

Informatica Developer Interface. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4

Informatica Developer Welcome Page. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

Cheat Sheets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

Setting Up Informatica Developer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

Domains. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5

Adding a Domain. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

The Model Repository. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

Objects in Informatica Developer. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6

Adding a Model Repository. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

Connecting to a Model Repository. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

Projects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

Creating a Project. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8

Assigning Permissions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

Folders. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

Creating a Folder. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

Search. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

Searching for Objects and Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10

Configuring Validation Preferences. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

Copy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11

Copying an Object. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

Copying an Object as a Link. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12


Chapter 2: Connections. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
Connections Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

Adabas Connection Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14

DB2 for i5/OS Connection Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

DB2 for z/OS Connection Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17

IBM DB2 Connection Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

IMS Connection Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

Microsoft SQL Server Connection Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21

ODBC Connection Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22

Oracle Connection Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

SAP Connection Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

Sequential Connection Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

VSAM Connection Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

Web Services Connection Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28

Connection Explorer View. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29

Creating a Connection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

Creating a Web Services Connection. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30

Chapter 3: Physical Data Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
Physical Data Objects Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32

Relational Data Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

Key Relationships. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33

Creating a Read Transformation from Relational Data Objects. . . . . . . . . . . . . . . . . . . . . . 34

Importing a Relational Data Object. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

Customized Data Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35

Default Query. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36

Key Relationships. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37

Select Distinct. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

Filter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

Sorted Ports. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

User-Defined Joins. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

Custom Queries. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

Outer Join Support. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42

Informatica Join Syntax. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

Pre- and Post-Mapping SQL Commands. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

Customized Data Objects Write Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

Creating a Customized Data Object. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

Adding Relational Resources to a Customized Data Object. . . . . . . . . . . . . . . . . . . . . . . . 48

Adding Relational Data Objects to a Customized Data Object. . . . . . . . . . . . . . . . . . . . . . . 48

Nonrelational Data Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

Importing a Nonrelational Data Object. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

Creating a Read Transformation from Nonrelational Data Operations. . . . . . . . . . . . . . . . . . 49


Flat File Data Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

Flat File Data Object Overview Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

Flat File Data Object Read Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

Flat File Data Object Write Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55

Flat File Data Object Advanced Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57

Creating a Flat File Data Object. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

Importing a Fixed-Width Flat File Data Object. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58

Importing a Delimited Flat File Data Object. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

SAP Data Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

Importing an SAP Data Object. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

Synchronization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

Troubleshooting Physical Data Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

Chapter 4: Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
Mappings Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

Object Dependency in a Mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

Developing a Mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

Creating a Mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64

Mapping Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

Adding Objects to a Mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

One to One Links. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

One to Many Links. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

Linking Ports. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

Manually Linking Ports. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

Automatically Linking Ports. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67

Rules and Guidelines for Linking Ports. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67

Propagating Port Attributes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

Dependency Types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

Link Path Dependencies. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

Implicit Dependencies. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

Propagated Port Attributes by Transformation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

Mapping Validation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

Connection Validation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

Expression Validation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

Object Validation. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

Validating a Mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

Running a Mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

Segments. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

Copying a Segment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73

Chapter 5: Performance Tuning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
Performance Tuning Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

Optimization Methods. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75


Early Projection Optimization Method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75

Early Selection Optimization Method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76

Predicate Optimization Method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77

Cost-Based Optimization Method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

Semi-Join Optimization Method. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78

Setting the Optimizer Level for a Developer Tool Mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79

Setting the Optimizer Level for a Deployed Mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

Chapter 6: Pushdown Optimization. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
Pushdown Optimization Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81

Pushdown Optimization to Sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82

Pushdown Optimization to Native Sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82

Pushdown Optimization to PowerExchange Nonrelational Sources. . . . . . . . . . . . . . . . . . . 82

Pushdown Optimization to ODBC Sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

Pushdown Optimization to SAP Sources. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83

Pushdown Optimization Expressions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84

Functions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84

Operators. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

Comparing the Output of the Data Integration Service and Sources. . . . . . . . . . . . . . . . . . . . . . . . . 88

Chapter 7: Mapplets. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
Mapplets Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90

Mapplet Types. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90

Mapplets and Rules. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91

Mapplet Input and Output. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91

Mapplet Input. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91

Mapplet Output. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92

Creating a Mapplet. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92

Validating a Mapplet. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92

Segments. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92

Copying a Segment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93

Chapter 8: Object Import and Export. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
Object Import and Export Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94

Import and Export Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95

Reference Table Import and Export. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

Object Export. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

Exporting Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96

Object Import. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97

Importing Projects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97

Importing Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97

Importing Application Archives. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98


Chapter 9: Export to PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
Export to PowerCenter Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

PowerCenter Release Compatibility. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100

Setting the Compatibility Level. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100

Mapplet Export. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100

Export to PowerCenter Options. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101

Exporting an Object to PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102

Export Restrictions. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103

Rules and Guidelines for Exporting to PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104

Troubleshooting Exporting to PowerCenter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

Chapter 10: Deployment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
Deployment Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106

Creating an Application. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

Deploying an Object to a Data Integration Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107

Deploying an Object to a File. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108

Updating an Application. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109

Mapping Deployment Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109

Application Redeployment. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110

Redeploying an Application. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110

Chapter 11: Parameters and Parameter Files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
Parameters and Parameter Files Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

Parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

Where to Create Parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113

Where to Assign Parameters. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114

Creating a Parameter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114

Assigning a Parameter. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114

Parameter Files. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115

Parameter File Structure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115

Parameter File Schema Definition. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117

Creating a Parameter File. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119

Chapter 12: Tags. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
Tags Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120

Creating a Tag. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120

Assigning a Tag. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121

Viewing Tags. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121

Chapter 13: Viewing Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
Viewing Data Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

Selecting a Default Data Integration Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122


Configurations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123

Data Viewer Configurations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123

Mapping Configurations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124

Updating the Default Configuration Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124

Configuration Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125

Troubleshooting Configurations. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127

Exporting Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127

Logs. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128

Monitoring Jobs from the Developer Tool. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128

Part II: Informatica Data Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

Chapter 14: Data Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
Data Services Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130

Logical Data Object Model Example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131

SQL Data Service Example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131

Web Services Example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132

Chapter 15: Logical View of Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
Logical View of Data Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

Developing a Logical View of Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

Logical Data Object Models. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134

Creating a Logical Data Object Model. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134

Importing a Logical Data Object Model from a Modeling Tool. . . . . . . . . . . . . . . . . . . . . . 135

Logical Data Object Model Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

CA ERwin Data Modeler Import Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135

IBM Cognos Business Intelligence Reporting - Framework Manager Import Properties. . . . . . 136

SAP BusinessObjects Designer Import Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137

Sybase PowerDesigner CDM Import Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138

Sybase PowerDesigner OOM 9.x to 15.x Import Properties. . . . . . . . . . . . . . . . . . . . . . . 139

Sybase PowerDesigner PDM Import Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140

XSD Import Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140

Logical Data Objects. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141

Logical Data Object Properties. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141

Attribute Relationships. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142

Creating a Logical Data Object. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142

Logical Data Object Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143

Logical Data Object Read Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143

Logical Data Object Write Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143

Creating a Logical Data Object Mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143

Chapter 16: Virtual Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
Virtual Data Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145


SQL Data Services. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146

Defining an SQL Data Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146

Creating an SQL Data Service. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146

Virtual Tables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147

Data Access Methods. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147

Creating a Virtual Table from a Data Object. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148

Creating a Virtual Table Manually. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148

Defining Relationships between Virtual Tables. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149

Running an SQL Query to Preview Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149

Virtual Table Mappings. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149

Defining a Virtual Table Mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150

Creating a Virtual Table Mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150

Validating a Virtual Table Mapping. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151

Previewing Virtual Table Mapping Output. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151

Virtual Stored Procedures. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151

Defining a Virtual Stored Procedure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

Creating a Virtual Stored Procedure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

Validating a Virtual Stored Procedure. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153

Previewing Virtual Stored Procedure Output. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153

SQL Query Plans. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153

SQL Query Plan Example. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154

Viewing an SQL Query Plan. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154

Appendix A: Datatype Reference. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
Datatype Reference Overview. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156

DB2 for i5/OS, DB2 for z/OS, and Transformation Datatypes. . . . . . . . . . . . . . . . . . . . . . . . . . . . 157

Unsupported DB2 for i5/OS and DB2 for z/OS Datatypes. . . . . . . . . . . . . . . . . . . . . . . . . . . . 157

Flat File and Transformation Datatypes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158

IBM DB2 and Transformation Datatypes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158

Unsupported IBM DB2 Datatypes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159

Microsoft SQL Server and Transformation Datatypes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159

Unsupported Microsoft SQL Server Datatypes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160

Nonrelational and Transformation Datatypes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161

ODBC and Transformation Datatypes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163

Oracle and Transformation Datatypes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164

Number(P,S) Datatype. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165

Char, Varchar, Clob Datatypes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165

Unsupported Oracle Datatypes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165

XML and Transformation Datatypes. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166

Converting Data. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167

Port-to-Port Data Conversion. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168

Index. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169


Preface

The Informatica Developer User Guide is written for data services and data quality developers. This guide assumes that you have an understanding of flat file and relational database concepts, the database engines in your environment, and data quality concepts.

Informatica Resources

Informatica Customer Portal

As an Informatica customer, you can access the Informatica Customer Portal site at http://mysupport.informatica.com. The site contains product information, user group information, newsletters, access to the Informatica customer support case management system (ATLAS), the Informatica How-To Library, the Informatica Knowledge Base, the Informatica Multimedia Knowledge Base, Informatica Product Documentation, and access to the Informatica user community.

Informatica Documentation

The Informatica Documentation team takes every effort to create accurate, usable documentation. If you have questions, comments, or ideas about this documentation, contact the Informatica Documentation team through email at [email protected]. We will use your feedback to improve our documentation. Let us know if we can contact you regarding your comments.

The Documentation team updates documentation as needed. To get the latest documentation for your product, navigate to Product Documentation from http://mysupport.informatica.com.

Informatica Web Site

You can access the Informatica corporate web site at http://www.informatica.com. The site contains information about Informatica, its background, upcoming events, and sales offices. You will also find product and partner information. The services area of the site includes important information about technical support, training and education, and implementation services.

Informatica How-To Library

As an Informatica customer, you can access the Informatica How-To Library at http://mysupport.informatica.com. The How-To Library is a collection of resources to help you learn more about Informatica products and features. It includes articles and interactive demonstrations that provide solutions to common problems, compare features and behaviors, and guide you through performing specific real-world tasks.


Informatica Knowledge Base

As an Informatica customer, you can access the Informatica Knowledge Base at http://mysupport.informatica.com. Use the Knowledge Base to search for documented solutions to known technical issues about Informatica products. You can also find answers to frequently asked questions, technical white papers, and technical tips. If you have questions, comments, or ideas about the Knowledge Base, contact the Informatica Knowledge Base team through email at [email protected].

Informatica Multimedia Knowledge Base

As an Informatica customer, you can access the Informatica Multimedia Knowledge Base at http://mysupport.informatica.com. The Multimedia Knowledge Base is a collection of instructional multimedia files that help you learn about common concepts and guide you through performing specific tasks. If you have questions, comments, or ideas about the Multimedia Knowledge Base, contact the Informatica Knowledge Base team through email at [email protected].

Informatica Global Customer Support

You can contact a Customer Support Center by telephone or through the Online Support. Online Support requires a user name and password. You can request a user name and password at http://mysupport.informatica.com.

Use the following telephone numbers to contact Informatica Global Customer Support:

North America / South America

Toll Free
Brazil: 0800 891 0202
Mexico: 001 888 209 8853
North America: +1 877 463 2435

Standard Rate
North America: +1 650 653 6332

Europe / Middle East / Africa

Toll Free
France: 00800 4632 4357
Germany: 00800 4632 4357
Israel: 00800 4632 4357
Italy: 800 915 985
Netherlands: 00800 4632 4357
Portugal: 800 208 360
Spain: 900 813 166
Switzerland: 00800 4632 4357 or 0800 463200
United Kingdom: 00800 4632 4357 or 0800 023 4632

Standard Rate
France: 0805 804632
Germany: 01805 702702
Netherlands: 030 6022 797

Asia / Australia

Toll Free
Australia: 1 800 151 830
New Zealand: 1 800 151 830
Singapore: 001 800 4632 4357

Standard Rate
India: +91 80 4112 5738


Part I: Informatica Developer Concepts

This part contains the following chapters:

- Working with Informatica Developer, 2

- Connections, 13

- Physical Data Objects, 32

- Mappings, 63

- Performance Tuning, 74

- Pushdown Optimization, 81

- Mapplets, 90

- Object Import and Export, 94

- Export to PowerCenter, 99

- Deployment, 106

- Parameters and Parameter Files, 112

- Tags, 120

- Viewing Data, 122


Chapter 1: Working with Informatica Developer

This chapter includes the following topics:

- Working with Informatica Developer Overview, 2

- Informatica Developer Interface, 4

- Setting Up Informatica Developer, 5

- Domains, 5

- The Model Repository, 6

- Projects, 8

- Folders, 9

- Search, 10

- Configuring Validation Preferences, 11

- Copy, 11

Working with Informatica Developer Overview

The Developer tool is an application that you use to design and implement data quality and data services solutions. Use Informatica Data Quality and Informatica Data Explorer for data quality solutions. Use Informatica Data Services for data services solutions. You can also use the Profiling option with Informatica Data Services to profile data.

Informatica Data Quality and Informatica Data Explorer

Use the data quality capabilities in the Developer tool to analyze the content and structure of your data and enhance the data in ways that meet your business needs.

Use the Developer tool to design and run processes to complete the following tasks:

- Profile data. Profiling reveals the content and structure of data. Profiling is a key step in any data project, as it can identify strengths and weaknesses in data and help you define a project plan.

- Create scorecards to review data quality. A scorecard is a graphical representation of the quality measurements in a profile.

- Standardize data values. Standardize data to remove errors and inconsistencies that you find when you run a profile. You can standardize variations in punctuation, formatting, and spelling. For example, you can ensure that the city, state, and ZIP code values are consistent (a small illustrative sketch follows this list).


- Parse data. Parsing reads a field composed of multiple values and creates a field for each value according to the type of information it contains. Parsing can also add information to records. For example, you can define a parsing operation to add units of measurement to product data.

- Validate postal addresses. Address validation evaluates and enhances the accuracy and deliverability of postal address data. Address validation corrects errors in addresses and completes partial addresses by comparing address records against address reference data from national postal carriers. Address validation can also add postal information that speeds mail delivery and reduces mail costs.

- Find duplicate records. Duplicate analysis calculates the degrees of similarity between records by comparing data from one or more fields in each record. You select the fields to be analyzed, and you select the comparison strategies to apply to the data. The Developer tool enables two types of duplicate analysis: field matching, which identifies similar or duplicate records, and identity matching, which identifies similar or duplicate identities in record data.

- Create reference data tables. Informatica provides reference data that can enhance several types of data quality process, including standardization and parsing. You can create reference tables using data from profile results.

- Create and run data quality rules. Informatica provides rules that you can run or edit to meet your project objectives. You can create mapplets and validate them as rules in the Developer tool.

- Collaborate with Informatica users. The Model Repository stores reference data and rules, and this repository is available to users of the Developer tool and Analyst tool. Users can collaborate on projects, and different users can take ownership of objects at different stages of a project.

- Export mappings to PowerCenter. You can export mappings to PowerCenter to reuse the metadata for physical data integration or to create web services.
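
To make the standardization task concrete, the following Java method is a minimal, tool-agnostic sketch of the kind of cleanup a standardization rule performs on city, state, and ZIP code values. The Developer tool implements this with transformations and reference tables rather than hand-written code; the class, method, and sample values below are hypothetical illustrations and are not part of any Informatica API.

```java
import java.util.Arrays;
import java.util.Locale;
import java.util.regex.Matcher;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

public class AddressStandardizer {

    // Matches nine-digit ZIP codes written without the usual hyphen, such as "941051234".
    private static final Pattern ZIP9 = Pattern.compile("^(\\d{5})(\\d{4})$");

    // Illustrative only: normalize city, state, and ZIP values so that
    // repeated variations of the same value compare as equal.
    public static String standardize(String city, String state, String zip) {
        // Collapse whitespace and title-case each word of the city name.
        String cleanCity = Arrays.stream(city.trim().toLowerCase(Locale.US).split("\\s+"))
                .map(w -> Character.toUpperCase(w.charAt(0)) + w.substring(1))
                .collect(Collectors.joining(" "));

        // Keep state codes as two uppercase letters.
        String cleanState = state.trim().toUpperCase(Locale.US);

        // Strip punctuation from the ZIP code and reformat nine digits as ZIP+4.
        String cleanZip = zip.trim().replaceAll("[^0-9]", "");
        Matcher m = ZIP9.matcher(cleanZip);
        if (m.matches()) {
            cleanZip = m.group(1) + "-" + m.group(2);
        }

        return cleanCity + ", " + cleanState + " " + cleanZip;
    }

    public static void main(String[] args) {
        // Prints "San Francisco, CA 94105-1234".
        System.out.println(standardize("  san   francisco ", "ca", "941051234"));
    }
}
```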

Informatica Data Services

Data services are a collection of reusable operations that you can run to access and transform data.

Use the data services capabilities in the Developer tool to complete the following tasks:

- Define logical views of data. A logical view of data describes the structure and use of data in an enterprise. You can create a logical data object model that shows the types of data your enterprise uses and how that data is structured.

- Map logical models to data sources or targets. Create a mapping that links objects in a logical model to data sources or targets. You can link data from multiple, disparate sources to create a single view of the data. You can also load data that conforms to a model to multiple, disparate targets.

- Create virtual views of data. You can deploy a virtual federated database to a Data Integration Service. End users can run SQL queries against the virtual data without affecting the actual source data (see the JDBC sketch after this list).

- Provide access to data integration functionality through a web service interface. You can deploy a web service to a Data Integration Service. End users send requests to the web service and receive responses through SOAP messages.

- Export mappings to PowerCenter. You can export mappings to PowerCenter to reuse the metadata for physical data integration or to create web services.

- Create and deploy mappings that domain users can run from the command line.

- Profile data. If you use the Profiling option, profile data to reveal the content and structure of data. Profiling is a key step in any data project, as it can identify strengths and weaknesses in data and help you define a project plan.
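
End users typically reach a deployed SQL data service through a JDBC or ODBC client and query its virtual tables with standard SQL. The Java sketch below shows the general pattern only; the driver class name, connection URL format, credentials, and the virtual schema and table names are hypothetical placeholders rather than values documented in this guide.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class SqlDataServiceQuery {
    public static void main(String[] args) throws Exception {
        // Hypothetical driver class and URL; the real values come from your
        // Data Integration Service and SQL data service configuration.
        Class.forName("com.informatica.ds.sql.jdbcdrv.INFADriver");
        String url = "jdbc:informatica:sqlds://dis-host:6005;sqlservice=Customer_SQL_Data_Service";

        try (Connection conn = DriverManager.getConnection(url, "dev_user", "dev_password");
             Statement stmt = conn.createStatement();
             // The query runs against a virtual table, so the underlying source data is untouched.
             ResultSet rs = stmt.executeQuery(
                     "SELECT customer_id, customer_name FROM CUSTOMERS WHERE country = 'US'")) {
            while (rs.next()) {
                System.out.println(rs.getInt("customer_id") + "\t" + rs.getString("customer_name"));
            }
        }
    }
}
```

Within the Developer tool itself, you can run similar queries interactively to preview virtual table data; see "Running an SQL Query to Preview Data" in the Virtual Data chapter.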


Informatica Developer Interface

The Developer tool lets you design and implement data quality and data services solutions.

You can work on multiple tasks in the Developer tool at the same time. You can also work in multiple folders and projects at the same time. To work in the Developer tool, you access the Developer tool workbench.

Figure: The Developer tool workbench

The Developer tool workbench includes an editor and views. You edit objects, such as mappings, in the editor. The Developer tool displays views, such as the default view, based on which object is open in the editor. The Developer tool also includes the following views that appear independently of the objects in the editor:

- Cheat Sheets. Shows cheat sheets.

- Connection Explorer. Shows connections to relational databases.

- Data Viewer. Shows the results of a mapping, data preview, or an SQL query.

- Object Explorer. Shows projects, folders, and the objects they contain.

- Outline. Shows dependent objects in an object.

- Progress. Shows the progress of operations in the Developer tool, such as a mapping run.

- Properties. Shows object properties.

- Search. Shows search options.

- Validation Log. Shows object validation errors.

You can hide views and move views to another location in the Developer tool workbench. You can also display other views. Click Window > Show View to select the views you want to display.


Informatica Developer Welcome Page

The first time you open the Developer tool, the Welcome page appears. Use the Welcome page to learn more about the Developer tool, set up the Developer tool, and to start working in the Developer tool.

The Welcome page displays the following options:

¨ Overview. Click the Overview button to get an overview of data quality and data services solutions.

¨ First Steps. Click the First Steps button to learn more about setting up the Developer tool and accessing Informatica Data Quality and Informatica Data Services lessons.

¨ Tutorials. Click the Tutorials button to see cheat sheets for the Developer tool and for data quality and data services solutions.

¨ Web Resources. Click the Web Resources button for a link to mysupport.informatica.com. You can access the Informatica How-To Library. The Informatica How-To Library contains articles about Informatica Data Quality, Informatica Data Services, and other Informatica products.

¨ Workbench. Click the Workbench button to start working in the Developer tool.

Cheat Sheets

The Developer tool includes cheat sheets as part of the online help. A cheat sheet is a step-by-step guide that helps you complete one or more tasks in the Developer tool.

When you follow a cheat sheet, you complete the tasks and see the results. For example, you can complete a cheat sheet to import and preview a relational data object.

To access cheat sheets, click Help > Cheat Sheets.

Setting Up Informatica Developer

To set up the Developer tool, you add a domain. You create a connection to a Model repository, and you create a project and folder to store your work. You also select a default Data Integration Service.

To set up the Developer tool, complete the following tasks:

1. Add a domain.

2. Connect to a Model repository.

3. Create a project.

4. Optionally, create a folder.

5. Select a default Data Integration Service.

Domains

The Informatica domain is a collection of nodes and services that define the Informatica environment.

You add a domain in the Developer tool. You can also edit the domain information or remove a domain. You manage domain information in the Developer tool preferences.


Adding a Domain

Add a domain in the Developer tool to access a Model repository.

Before you add a domain, verify that you have a domain name, host name, and port number to connect to a domain. You can get this information from an administrator.

1. Click Window > Preferences.

The Preferences dialog box appears.

2. Select Informatica > Domains.

3. Click Add.

The New Domain dialog box appears.

4. Enter the domain name, host name, and port number.

5. Click Finish.

6. Click OK.

The Model Repository

The Model repository is a relational database that stores the metadata for projects and folders.

When you set up the Developer tool, you need to add a Model repository. Each time you open the Developer tool, you connect to the Model repository to access projects and folders.

Objects in Informatica Developer

You can create, manage, or view certain objects in a project or folder in the Developer tool.

The following table lists the objects in a project or folder and the operations you can perform:

Object Description

Application Create, edit, and delete applications.

Connection Create, edit, and delete connections.

Data service Create, edit, and delete data services.

Folder Create, edit, and delete folders.

Logical data object Create, edit, and delete logical data objects in a logical data object model.

Logical data object mapping Create, edit, and delete logical data object mappings for a logical data object.

Logical data object model Create, edit, and delete logical data object models.

Mapping Create, edit, and delete mappings.

Mapplet Create, edit, and delete mapplets.

Operation mapping Create, edit, and delete operation mappings in a web service.

Physical data object Create, edit, and delete physical data objects. Physical data objects can be flat file, non-relational, relational, SAP, or WSDL.

Profile Create, edit, and delete profiles.

Reference table View and delete reference tables.

Rule Create, edit, and delete rules.

Scorecard Create, edit, and delete scorecards.

Transformation Create, edit, and delete transformations.

Virtual schema Create, edit, and delete virtual schemas in an SQL data service.

Virtual stored procedure Create, edit, and delete virtual stored procedures in a virtual schema.

Virtual table Create, edit, and delete virtual tables in a virtual schema.

Virtual table mapping Create, edit, and delete virtual table mappings for a virtual table.

Adding a Model Repository

Add a Model repository to access projects and folders.

Before you add a Model repository, verify the following prerequisites:

¨ An administrator has configured a Model Repository Service in the Administrator tool.

¨ You have a user name and password to access the Model Repository Service. You can get this information from an administrator.

1. Click File > Connect to Repository.

The Connect to Repository dialog box appears.

2. Click Browse to select a Model Repository Service.

3. Click OK.

4. Click Next.

5. Enter your user name and password.

6. Click Finish.

The Model Repository appears in the Object Explorer view.

Connecting to a Model Repository

Each time you open the Developer tool, you connect to a Model repository to access projects and folders. When you connect to a Model repository, you enter connection information to access the domain that includes the Model Repository Service that manages the Model repository.

1. In the Object Explorer view, right-click a Model repository and click Connect.

The Connect to Repository dialog box appears.

2. Enter the domain user name and password.


3. Click OK.

The Developer tool connects to the Model repository. The Developer tool displays the projects in the repository.

Projects

A project is the top-level container that you use to store folders and objects in the Developer tool. Use projects to organize and manage the objects that you want to use for data services and data quality solutions.

You manage and view projects in the Object Explorer view. When you create a project, the Developer tool stores the project in the Model repository. Each project that you create also appears in the Analyst tool.

The following table describes the tasks that you can perform on a project:

Task Description

Manage projects Manage project contents. You can create, duplicate, rename, and delete a project. You can view project contents.

Manage folders Organize project contents in folders. You can create, duplicate, rename, move, and delete folders within projects.

Manage objects You can view object contents, duplicate, rename, move, and delete objects in a project or in a folder within a project.

Search projects You can search for folders or objects in projects. You can view search results and select an object from the results to view its contents.

Assign permissions You can add users to a project. You can assign the read, write, and grant permissions to users on a project to restrict or provide access to objects within the project.

Share projects Share project contents to collaborate with other users on the project. The contents of a shared project are available for other users to use. For example, when you create a profile in the project Customers_West, you can add a physical data object from the shared folder Customers_East to the profile.

Creating a Project

Create a project to store objects and folders.

1. Select a Model Repository Service in the Object Explorer view.

2. Click File > New > Project.

The New Project dialog box appears.

3. Enter a name for the project.

4. Click Shared if you want to use objects in this project in other projects.

5. Click Finish.

The project appears under the Model Repository Service in the Object Explorer view.


Assigning Permissions

You can add users to a project and assign permissions for the user. Assign permissions to determine the tasks that users can complete on a project and objects in the project.

1. Select a project in the Object Explorer view.

2. Click File > Permissions.

The Permissions dialog box appears.

3. Click Add to add a user and assign permissions for the user.

The Domain Users dialog box appears. The dialog box shows a list of users.

4. To filter the list of users, enter a name or string.

Optionally, use the wildcard characters in the filter.

5. To filter by security domain, click the Filter by Security Domain button.

6. Select Native to show users in the native security domain. Or, select All to show all users.

7. Select a user and click OK.

The user appears with the list of users in the Permissions dialog box.

8. Select Allow or Deny for each permission for the user.

9. Click OK.

Folders

Use folders to organize objects in a project. Create folders to group objects based on business needs. For example, you can create a folder to group objects for a particular task in a project. You can create a folder in a project or in another folder.

Folders appear within projects in the Object Explorer view. A folder can contain other folders, data objects, and object types.

You can perform the following tasks on a folder:

¨ Create a folder.

¨ View a folder.

¨ Rename a folder.

¨ Duplicate a folder.

¨ Move a folder.

¨ Delete a folder.

Creating a Folder

Create a folder to store related objects in a project. You must create the folder in a project or another folder.

1. In the Object Explorer view, select the project or folder where you want to create a folder.

2. Click File > New > Folder.

The New Folder dialog box appears.

3. Enter a name for the folder.


4. Click Finish.

The folder appears under the project or parent folder.

Search

You can search for objects and object properties in the Developer tool.

You can create a search query and then filter the search results. You can view search results and select an object from the results to view its contents. Search results appear on the Search view.

You can use the following search options:

Search Option Description

Containing text Object or property that you want to search for. Enter an exact string or use a wildcard. Not case sensitive.

Name patterns One or more objects that contain the name pattern. Enter an exact string or use a wildcard. Not case sensitive.


Search for One or more object types to search for.

Scope Search the workspace or an object that you selected.
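
For example, the following search terms illustrate how a wildcard search might look. This is a sketch only: it assumes the common * (any characters) and ? (single character) wildcards, and the object names are hypothetical.

Containing text: cust*
Name patterns: *_src, dim_?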

The Model Repository Service uses a search engine to index the metadata in the Model repository. To correctly index the metadata, the search engine uses a search analyzer appropriate for the language of the metadata that you are indexing. The Developer tool uses the search engine to perform searches on objects contained in projects in the Model repository. You must save an object before you can search on it.

You can search in different languages. To search in a different language, an administrator must change the search analyzer and configure the Model repository to use the search analyzer.

Searching for Objects and Properties

Search for objects and properties in the Model repository.

1. Click Search > Search.

The Search dialog box appears.

2. Enter the object or property you want to search for. Optionally, include wildcard characters.

3. If you want to search for a property in an object, optionally enter one or more name patterns separated by a comma.

4. Optionally, choose the object types you want to search for.

5. Choose to search the workspace or the object you selected.

6. Click Search.

The search results appear in the Search view.

7. In the Search view, double-click an object to open it in the editor.


Configuring Validation Preferences

Configure validation preferences to set error limits and limit the number of visible items per group.

1. Click Window > Preferences.

The Preferences dialog box appears.

2. Select Informatica > Validation.

3. Optionally, select Use Error Limits.

4. Enter a value for Limit visible items per group to.

Default is 100.

5. To restore the default values, click Restore Defaults.

6. Click Apply.

7. Click OK.

Copy

You can copy objects within a project or to a different project. You can also copy objects to folders in the same project or to folders in a different project.

You can also copy an object as a link to view the object in the Analyst tool or to provide a link to the object in another medium, such as an email message.

You can copy the following objects to another project or folder or as a link:

¨ Application

¨ Data service

¨ Logical data object model

¨ Mapping

¨ Mapplet

¨ Physical data object

¨ Profile

¨ Reference table

¨ Reusable transformation

¨ Rule

¨ Scorecard

¨ Virtual stored procedure

Use the following guidelines when you copy objects:

¨ You can copy segments of mappings, mapplets, rules, and virtual stored procedures.

¨ You can copy a folder to another project.

¨ You can copy a logical data object as a link.

¨ You can paste an object multiple times after you copy it.

¨ If the project or folder contains an object with the same name, you can rename or replace the object.


Copying an Object

Copy an object to make it available in another project or folder.

1. Select an object in a project or folder.

2. Click Edit > Copy.

3. Select the project or folder that you want to copy the object to.

4. Click Edit > Paste.

Copying an Object as a Link

Copy an object as a link to view the object in the Analyst tool.

You can paste the link into a web browser or in another medium, such as a document or an email message. When you click the link, it opens the Analyst tool in the default web browser configured for the machine. You must log in to the Analyst tool to access the object.

1. Right-click an object in a project or folder.

2. Click Copy as Link.

3. Paste the link into another application, such as Microsoft Internet Explorer or an email message.


C H A P T E R 2

Connections

This chapter includes the following topics:

¨ Connections Overview, 13

¨ Adabas Connection Properties, 14

¨ DB2 for i5/OS Connection Properties, 15

¨ DB2 for z/OS Connection Properties, 17

¨ IBM DB2 Connection Properties, 19

¨ IMS Connection Properties, 20

¨ Microsoft SQL Server Connection Properties, 21

¨ ODBC Connection Properties, 22

¨ Oracle Connection Properties, 23

¨ SAP Connection Properties, 24

¨ Sequential Connection Properties, 25

¨ VSAM Connection Properties, 27

¨ Web Services Connection Properties, 28

¨ Connection Explorer View, 29

¨ Creating a Connection, 30

¨ Creating a Web Services Connection, 30

Connections Overview

A connection is a repository object that defines a connection in the domain configuration repository.

Create a connection to import relational or nonrelational data objects, preview data, profile data, and run mappings. Create a connection to a web service.

The Developer tool uses the connection when you import a data object. The Data Integration Service uses the connection when you preview data, run mappings, or consume web services.

The Developer tool stores connections in the Model repository. Any connection that you create in the Developer tool is available in the Analyst tool or the Administrator tool.

Create and manage connections in the Preferences dialog box. You can also create and manage relational connections in the Connection Explorer view.


You can create the following types of connections:

¨ Adabas

¨ DB2/I5OS

¨ DB2/ZOS

¨ IBM DB2

¨ IMS

¨ Microsoft SQL Server

¨ ODBC

¨ Oracle

¨ SAP

¨ Sequential

¨ VSAM

¨ Web service

Adabas Connection Properties

Use an Adabas connection to access an Adabas database. The Data Integration Service connects to Adabas through PowerExchange.

The following table describes the Adabas connection properties:

Option Description

Location Location of the PowerExchange Listener node that can connect to the data source. The location is defined in the first parameter of the NODE statement in the PowerExchange dbmover.cfg configuration file.

User Name Database user name.

Password Password for the database user name.

Code Page Required. Code to read from or write to the database. Use the ISO code page name, such as ISO-8859-6. The code page name is not case sensitive.

Encryption Type Type of encryption that the Data Integration Service uses. Select one of the following values:
- None
- RC2
- DES
Default is None.

Encryption Level Level of encryption that the Data Integration Service uses. If you select RC2 or DES for Encryption Type, select one of the following values to indicate the encryption level:
- 1. Uses a 56-bit encryption key for DES and RC2.
- 2. Uses 168-bit triple encryption key for DES. Uses a 64-bit encryption key for RC2.
- 3. Uses 168-bit triple encryption key for DES. Uses a 128-bit encryption key for RC2.
Ignored if you do not select an encryption type.
Default is 1.

Pacing Size Amount of data the source system can pass to the PowerExchange Listener. Configure the pacing size if an external application, database, or the Data Integration Service node is a bottleneck. The lower the value, the faster the performance. Minimum value is 0. Enter 0 for maximum performance. Default is 0.

Interpret as Rows Interprets the pacing size as rows or kilobytes. Select to represent the pacing size in number of rows. If you clear this option, the pacing size represents kilobytes. Default is Disabled.

Compression Optional. Compresses the data to decrease the amount of data Informatica applications write over the network. True or false. Default is false.

OffLoad Processing Optional. Moves bulk data processing from the data source to the Data Integration Service machine. Enter one of the following values:
- Auto. The Data Integration Service determines whether to use offload processing.
- Yes. Use offload processing.
- No. Do not use offload processing.
Default is Auto.

Worker Threads Number of threads that the Data Integration Service uses to process bulk data when offload processing is enabled. For optimal performance, this value should not exceed the number of available processors on the Data Integration Service machine. Valid values are 1 through 64. Default is 0, which disables multithreading.

Array Size Determines the number of records in the storage array for the threads when the worker threads value is greater than 0. Valid values are from 1 through 100000. Default is 25.

Write Mode Mode in which Data Integration Service sends data to the PowerExchange Listener. Configure one of the following write modes:
- CONFIRMWRITEON. Sends data to the PowerExchange Listener and waits for a response before sending more data. Select if error recovery is a priority. This option might decrease performance.
- CONFIRMWRITEOFF. Sends data to the PowerExchange Listener without waiting for a response. Use this option when you can reload the target table if an error occurs.
- ASYNCHRONOUSWITHFAULTTOLERANCE. Sends data to the PowerExchange Listener without waiting for a response. This option also provides the ability to detect errors. This provides the speed of confirm write off with the data integrity of confirm write on.
Default is CONFIRMWRITEON.

DB2 for i5/OS Connection Properties

Use a DB2 for i5/OS connection to access tables in DB2 for i5/OS. The Data Integration Service connects to DB2 for i5/OS through PowerExchange.

The following table describes the DB2 for i5/OS connection properties:

Property Description

Database Name Name of the database instance.

Location Location of the PowerExchange Listener node that can connect to DB2. The location is defined in the first parameter of the NODE statement in the PowerExchange dbmover.cfg configuration file.

Username Database user name.

Password Password for the user name.

Environment SQL SQL commands to set the database environment when you connect to the database. The Data Integration Service executes the connection environment SQL each time it connects to the database.

Database File Overrides Specifies the i5/OS database file override. The format is: from_file/to_library/to_file/to_member
Where:
- from_file is the file to be overridden
- to_library is the new library to use
- to_file is the file in the new library to use
- to_member is optional and is the member in the new library and file to use. *FIRST is used if nothing is specified.
You can specify up to 8 unique file overrides on a single connection. A single override applies to a single source or target. When you specify more than one file override, enclose the string of file overrides in double quotes and include a space between each file override.
Note: If you specify both Library List and Database File Overrides and a table exists in both, the Database File Overrides takes precedence.

Library List List of libraries that PowerExchange searches to qualify the table name for Select, Insert, Delete, or Update statements. PowerExchange searches the list if the table name is unqualified. Separate libraries with semicolons.
Note: If you specify both Library List and Database File Overrides and a table exists in both, Database File Overrides takes precedence.

Code Page Database code page.

SQL identifier character The type of character used for the Support Mixed-Case Identifiers property. Select the character based on the database in the connection.

Support mixed-case identifiers Enables the Developer tool and Analyst tool to place quotes around table, view, schema, synonym, and column names when generating and executing SQL against these objects in the connection. Use if the objects have mixed-case or lowercase names. Also, use if the object names contain SQL keywords, such as WHERE.

Isolation Level Commit scope of the transaction. Select one of the following values:
- None
- CS. Cursor stability.
- RR. Repeatable Read.
- CHG. Change.
- ALL
Default is CS.

Encryption Type Type of encryption that the Data Integration Service uses. Select one of the following values:
- None
- RC2
- DES
Default is None.

Level Level of encryption that the Data Integration Service uses. If you select RC2 or DES for Encryption Type, select one of the following values to indicate the encryption level:
- 1 - Uses a 56-bit encryption key for DES and RC2.
- 2 - Uses 168-bit triple encryption key for DES. Uses a 64-bit encryption key for RC2.
- 3 - Uses 168-bit triple encryption key for DES. Uses a 128-bit encryption key for RC2.
Ignored if you do not select an encryption type.
Default is 1.

Pacing Size Amount of data the source system can pass to the PowerExchange Listener. Configure the pacing size if an external application, database, or the Data Integration Service node is a bottleneck. The lower the value, the faster the performance. Minimum value is 0. Enter 0 for maximum performance. Default is 0.

Interpret as Rows Interprets the pacing size as rows or kilobytes. Select to represent the pacing size in number of rows. If you clear this option, the pacing size represents kilobytes. Default is Disabled.

Compression Select to compress source data when reading from the database.

Array Size Number of records of the storage array size for each thread. Use if the number of worker threads is greater than 0. Default is 25.

Write Mode Mode in which Data Integration Service sends data to the PowerExchange Listener. Configure one of the following write modes:
- CONFIRMWRITEON. Sends data to the PowerExchange Listener and waits for a response before sending more data. Select if error recovery is a priority. This option might decrease performance.
- CONFIRMWRITEOFF. Sends data to the PowerExchange Listener without waiting for a response. Use this option when you can reload the target table if an error occurs.
- ASYNCHRONOUSWITHFAULTTOLERANCE. Sends data to the PowerExchange Listener without waiting for a response. This option also provides the ability to detect errors. This provides the speed of confirm write off with the data integrity of confirm write on.
Default is CONFIRMWRITEON.

Async Reject File Overrides the default prefix of PWXR for the reject file. PowerExchange creates the reject file on the target machine when the write mode is asynchronous with fault tolerance. Specifying PWXDISABLE prevents the creation of the reject files.
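
For example, the following values illustrate the Database File Overrides and Library List formats described above. This is a sketch only; the file and library names are hypothetical placeholders, not values from your environment.

Database File Overrides: "ORDERS/TESTLIB/ORDERS/*FIRST CUSTOMER/TESTLIB/CUSTOMER"
Library List: TESTLIB;PRODLIB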

DB2 for z/OS Connection Properties

Use a DB2 for z/OS connection to access tables in DB2 for z/OS. The Data Integration Service connects to DB2 for z/OS through PowerExchange.


The following table describes the DB2 for z/OS connection properties:

Property Description

DB2 Subsystem ID Name of the DB2 subsystem.

Location Location of the PowerExchange Listener node that can connect to DB2. The location is defined in the first parameter of the NODE statement in the PowerExchange dbmover.cfg configuration file.

Username Database user name.

Password Password for the user name.

Environment SQL SQL commands to set the database environment when you connect to the database. The Data Integration Service executes the connection environment SQL each time it connects to the database.

Correlation ID Value to be concatenated to prefix PWX to form the DB2 correlation ID for DB2 requests.

Code Page Database code page.

SQL identifier character The type of character used for the Support Mixed-Case Identifiers property. Select the character based on the database in the connection.

Support mixed-case identifiers Enables the Developer tool and Analyst tool to place quotes around table, view, schema, synonym, and column names when generating and executing SQL against these objects in the connection. Use if the objects have mixed-case or lowercase names. Also, use if the object names contain SQL keywords, such as WHERE.

Encryption Type Type of encryption that the Data Integration Service uses. Select one of the following values:
- None
- RC2
- DES
Default is None.

Level Level of encryption that the Data Integration Service uses. If you select RC2 or DES for Encryption Type, select one of the following values to indicate the encryption level:
- 1 - Uses a 56-bit encryption key for DES and RC2.
- 2 - Uses 168-bit triple encryption key for DES. Uses a 64-bit encryption key for RC2.
- 3 - Uses 168-bit triple encryption key for DES. Uses a 128-bit encryption key for RC2.
Ignored if you do not select an encryption type.
Default is 1.

Pacing Size Amount of data the source system can pass to the PowerExchange Listener. Configure the pacing size if an external application, database, or the Data Integration Service node is a bottleneck. The lower the value, the faster the performance. Minimum value is 0. Enter 0 for maximum performance. Default is 0.

Interpret as Rows Interprets the pacing size as rows or kilobytes. Select to represent the pacing size in number of rows. If you clear this option, the pacing size represents kilobytes. Default is Disabled.

Compression Select to compress source data when reading from the database.

Offload Processing Moves data processing for bulk data from the source system to the Data Integration Service machine. Default is No.

Worker Threads Number of threads that the Data Integration Service uses on the Data Integration Service machine to process data. For optimal performance, do not exceed the number of installed or available processors on the Data Integration Service machine. Default is 0.

Array Size Number of records of the storage array size for each thread. Use if the number of worker threads is greater than 0. Default is 25.

Write Mode Configure one of the following write modes:
- CONFIRMWRITEON. Sends data to the PowerExchange Listener and waits for a response before sending more data. Select if error recovery is a priority. This option might decrease performance.
- CONFIRMWRITEOFF. Sends data to the PowerExchange Listener without waiting for a response. Use this option when you can reload the target table if an error occurs.
- ASYNCHRONOUSWITHFAULTTOLERANCE. Sends data to the PowerExchange Listener without waiting for a response. This option also provides the ability to detect errors. This provides the speed of confirm write off with the data integrity of confirm write on.
Default is CONFIRMWRITEON.

Async Reject File Overrides the default prefix of PWXR for the reject file. PowerExchange creates the reject file on the target machine when the write mode is asynchronous with fault tolerance. Specifying PWXDISABLE prevents the creation of the reject files.

IBM DB2 Connection Properties

Use an IBM DB2 connection to access tables in an IBM DB2 database.

The following table describes the IBM DB2 connection properties:

Property Description

User name Database user name.

Password Password for the user name.

Connection String for metadata access Connection string to import physical data objects. Use the following connection string: jdbc:informatica:db2://<host>:50000;databaseName=<dbname>

Connection String for data access Connection string to preview data and run mappings. Enter dbname from the alias configured in the DB2 client.

Code Page Database code page.

Environment SQL Optional. Enter SQL commands to set the database environment when you connect to the database. The Data Integration Service executes the connection environment SQL each time it connects to the database.

Transaction SQL Optional. Enter SQL commands to set the database environment when you connect to the database. The Data Integration Service executes the transaction environment SQL at the beginning of each transaction.

Retry Period Number of seconds the Data Integration Service attempts to reconnect to the database if the connection fails. If the Data Integration Service cannot connect to the database in the retry period, the session fails. Default is 0.

Tablespace Tablespace name of the IBM DB2 database.

SQL identifier character The type of character used for the Support Mixed-Case Identifiers property. Select the character based on the database in the connection.

Support mixed-case identifiers Enables the Developer tool and Analyst tool to place quotes around table, view, schema, synonym, and column names when generating and executing SQL against these objects in the connection. Use if the objects have mixed-case or lowercase names. Also, use if the object names contain SQL keywords, such as WHERE.
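
For example, with a hypothetical host dbhost and database SALESDB, the IBM DB2 connection strings might look like the following. This is an illustration only; the data access value depends on the alias defined in your DB2 client configuration.

Connection String for metadata access: jdbc:informatica:db2://dbhost:50000;databaseName=SALESDB
Connection String for data access: SALESDB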

IMS Connection Properties

Use an IMS connection to access an IMS database. The Data Integration Service connects to IMS through PowerExchange.

The following table describes the IMS connection properties:

Option Description

Location Location of the PowerExchange Listener node that can connect to the data source. The location is defined in the first parameter of the NODE statement in the PowerExchange dbmover.cfg configuration file.

User Name Database user name.

Password Password for the database user name.

Code Page Required. Code to read from or write to the database. Use the ISO code page name, such as ISO-8859-6. The code page name is not case sensitive.

Encryption Type Type of encryption that the Data Integration Service uses. Select one of the following values:
- None
- RC2
- DES
Default is None.

Encryption Level Level of encryption that the Data Integration Service uses. If you select RC2 or DES for Encryption Type, select one of the following values to indicate the encryption level:
- 1. Uses a 56-bit encryption key for DES and RC2.
- 2. Uses 168-bit triple encryption key for DES. Uses a 64-bit encryption key for RC2.
- 3. Uses 168-bit triple encryption key for DES. Uses a 128-bit encryption key for RC2.
Ignored if you do not select an encryption type.
Default is 1.

Pacing Size Amount of data the source system can pass to the PowerExchange Listener. Configure the pacing size if an external application, database, or the Data Integration Service node is a bottleneck. The lower the value, the faster the performance. Minimum value is 0. Enter 0 for maximum performance. Default is 0.

Interpret as Rows Interprets the pacing size as rows or kilobytes. Select to represent the pacing size in number of rows. If you clear this option, the pacing size represents kilobytes. Default is Disabled.

Compression Optional. Compresses the data to decrease the amount of data Informatica applications write over the network. True or false. Default is false.

OffLoad Processing Optional. Moves bulk data processing from the data source to the Data Integration Service machine. Enter one of the following values:
- Auto. The Data Integration Service determines whether to use offload processing.
- Yes. Use offload processing.
- No. Do not use offload processing.
Default is Auto.

Worker Threads Number of threads that the Data Integration Service uses to process bulk data when offload processing is enabled. For optimal performance, this value should not exceed the number of available processors on the Data Integration Service machine. Valid values are 1 through 64. Default is 0, which disables multithreading.

Array Size Determines the number of records in the storage array for the threads when the worker threads value is greater than 0. Valid values are from 1 through 100000. Default is 25.

Write Mode Mode in which Data Integration Service sends data to the PowerExchange Listener. Configure one of the following write modes:
- CONFIRMWRITEON. Sends data to the PowerExchange Listener and waits for a response before sending more data. Select if error recovery is a priority. This option might decrease performance.
- CONFIRMWRITEOFF. Sends data to the PowerExchange Listener without waiting for a response. Use this option when you can reload the target table if an error occurs.
- ASYNCHRONOUSWITHFAULTTOLERANCE. Sends data to the PowerExchange Listener without waiting for a response. This option also provides the ability to detect errors. This provides the speed of confirm write off with the data integrity of confirm write on.
Default is CONFIRMWRITEON.

Microsoft SQL Server Connection Properties

Use a Microsoft SQL Server connection to access tables in a Microsoft SQL Server database.


The following table describes the Microsoft SQL Server connection properties:

Property Description

User name Database user name.

Password Password for the user name.

Use Trusted Connection Optional. When enabled, the Data Integration Service uses Windows authentication to access the Microsoft SQL Server database. The user name that starts the Data Integration Service must be a valid Windows user with access to the Microsoft SQL Server database.

Connection String for metadata access Connection string to import physical data objects. Use the following connection string: jdbc:informatica:sqlserver://<host>:<port>;databaseName=<dbname>

Connection String for data access Connection string to preview data and run mappings. Enter <ServerName>@<DBName>

Domain Name Optional. Name of the domain where Microsoft SQL Server is running.

Packet Size Required. Optimize the ODBC connection to Microsoft SQL Server. Increase the packet size to increase performance. Default is 0.

Code Page Database code page.

Environment SQL Optional. Enter SQL commands to set the database environment when you connect to the database. The Data Integration Service executes the connection environment SQL each time it connects to the database.

Transaction SQL Optional. Enter SQL commands to set the database environment when you connect to the database. The Data Integration Service executes the transaction environment SQL at the beginning of each transaction.

Retry Period Number of seconds the Data Integration Service attempts to reconnect to the database if the connection fails. If the Data Integration Service cannot connect to the database in the retry period, the session fails. Default is 0.

SQL identifier character The type of character used for the Support Mixed-Case Identifiers property. Select the character based on the database in the connection.

Support mixed-case identifiers Enables the Developer tool and Analyst tool to place quotes around table, view, schema, synonym, and column names when generating and executing SQL against these objects in the connection. Use if the objects have mixed-case or lowercase names. Also, use if the object names contain SQL keywords, such as WHERE.
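
For example, with a hypothetical server sqlhost listening on port 1433 and a database SALESDB, the Microsoft SQL Server connection strings might look like the following. The names and port are placeholders for illustration only.

Connection String for metadata access: jdbc:informatica:sqlserver://sqlhost:1433;databaseName=SALESDB
Connection String for data access: sqlhost@SALESDB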

ODBC Connection Properties

Use an ODBC connection to access tables in a database through ODBC.


The following table describes the ODBC connection properties:

Property Description

User name Database user name.

Password Password for the user name.

Connection String Connection string to connect to the database.

Code Page Database code page.

Environment SQL Optional. Enter SQL commands to set the database environment when you connect to the database. The Data Integration Service executes the connection environment SQL each time it connects to the database.

Transaction SQL Optional. Enter SQL commands to set the database environment when you connect to the database. The Data Integration Service executes the transaction environment SQL at the beginning of each transaction.

Retry Period Number of seconds the Data Integration Service attempts to reconnect to the database if the connection fails. If the Data Integration Service cannot connect to the database in the retry period, the session fails. Default is 0.

SQL identifier character Type of character used for the Support mixed-case identifiers property. Select the character based on the database in the connection.

Support mixed-case identifiers Enables the Developer tool and the Analyst tool to place quotes around table, view, schema, synonym, and column names when generating and executing SQL against these objects in the connection. Use if the objects have mixed-case or lowercase names. Also, use if the object names contain SQL keywords, such as WHERE.

ODBC Provider Type of database that ODBC connects to. For pushdown optimization, specify the database type to enable the Data Integration Service to generate native database SQL. Default is Other.

Oracle Connection Properties

Use an Oracle connection to access tables in an Oracle database.

The following table describes the Oracle connection properties:

Property Description

User name Database user name.

Password Password for the user name.

Connection String for metadata access Connection string to import physical data objects. Use the following connection string: jdbc:informatica:oracle://<host>:1521;SID=<sid>

Connection String for data access Connection string to preview data and run mappings. Enter dbname.world from the TNSNAMES entry.

Code Page Database code page.

Environment SQL Optional. Enter SQL commands to set the database environment when you connect to the database. The Data Integration Service executes the connection environment SQL each time it connects to the database.

Transaction SQL Optional. Enter SQL commands to set the database environment when you connect to the database. The Data Integration Service executes the transaction environment SQL at the beginning of each transaction.

Retry Period Number of seconds the Data Integration Service attempts to reconnect to the database if the connection fails. If the Data Integration Service cannot connect to the database in the retry period, the session fails. Default is 0.

Parallel Mode Optional. Enables parallel processing when loading data into a table in bulk mode. Default is disabled.

SQL identifier character The type of character used for the Support Mixed-Case Identifiers property. Select the character based on the database in the connection.

Support mixed-case identifiers Enables the Developer tool and Analyst tool to place quotes around table, view, schema, synonym, and column names when generating and executing SQL against these objects in the connection. Use if the objects have mixed-case or lowercase names. Also, use if the object names contain SQL keywords, such as WHERE.
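
For example, with a hypothetical host orahost, SID ORCL, and a TNSNAMES entry named ORCL.world, the Oracle connection strings might look like the following. The Environment SQL line is likewise only an illustration of a session-level setting; any statement you enter must be valid SQL for your environment.

Connection String for metadata access: jdbc:informatica:oracle://orahost:1521;SID=ORCL
Connection String for data access: ORCL.world
Environment SQL: ALTER SESSION SET NLS_DATE_FORMAT = 'YYYY-MM-DD'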

SAP Connection Properties

The following table describes the SAP connection properties:

Property Description

User name SAP source system connection user name.

Password Password for the user name.

Trace Select this option to track the RFC calls that the SAP system makes. SAP stores the information about the RFC calls in a trace file. You can access the trace files from the server/bin directory on the Informatica server machine and the client/bin directory on the client machine.

Connection type Select Type A to connect to one SAP system. Select Type B when you want to use SAP load balancing.

Host name Host name or IP address of the SAP server. Informatica uses this entry to connect to the SAP server.

R3 name Name of the SAP system.

Group Group name of the SAP application server.

System number SAP system number.

Client number SAP client number.

Language Language that you want for the mapping. Must be compatible with the Developer tool code page. If you leave this option blank, Informatica uses the default language of the SAP system.

Code page Code page compatible with the SAP server. Must also correspond to the language code.

Staging directory Path in the SAP system where the staging file will be created.

Source directory The Data Integration Service path containing the source file.

Use FTP Enables FTP access to SAP.

FTP user User name to connect to the FTP server.

FTP password Password for the FTP user.

FTP host Host name or IP address of the FTP server. Optionally, you can specify a port number from 1 through 65535, inclusive. Default for FTP is 21. Use the following syntax to specify the host name: hostname:port_number or IP address:port_number. When you specify a port number, enable that port number for FTP on the host machine. If you enable SFTP, specify a host name or port number for an SFTP server. Default for SFTP is 22.

Retry period Number of seconds that the Data Integration Service attempts to reconnect to the FTP host if the connection fails. If the Data Integration Service cannot reconnect to the FTP host in the retry period, the session fails. Default value is 0 and indicates an infinite retry period.

Use SFTP Enables SFTP access to SAP.

Public key file name Public key file path and file name. Required if the SFTP server uses public key authentication. Enabled for SFTP.

Private key file name Private key file path and file name. Required if the SFTP server uses public key authentication. Enabled for SFTP.

Private key file name password Private key file password used to decrypt the private key file. Required if the SFTP server uses public key authentication and the private key is encrypted. Enabled for SFTP.
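
For example, the following hypothetical FTP host values illustrate the host name and port syntax described above; the host name and IP address are placeholders only.

FTP host: ftp.example.com:2121
FTP host: 192.0.2.10:21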

Sequential Connection Properties

Use a sequential connection to access z/OS sequential data sets. The Data Integration Service connects to the data sets through PowerExchange.


The following table describes the sequential connection properties:

Option Description

Code Page Required. Code to read from or write to the sequential data set. Use the ISO code page name, such as ISO-8859-6. The code page name is not case sensitive.

Array Size Determines the number of records in the storage array for the threads when the worker threads value is greater than 0. Valid values are from 1 through 100000. Default is 25.

Compression Compresses the data to decrease the amount of data Informatica applications write over the network. True or false. Default is false.

Encryption Level Level of encryption that the Data Integration Service uses. If you select RC2 or DES for Encryption Type, select one of the following values to indicate the encryption level:
- 1 - Uses a 56-bit encryption key for DES and RC2.
- 2 - Uses 168-bit triple encryption key for DES. Uses a 64-bit encryption key for RC2.
- 3 - Uses 168-bit triple encryption key for DES. Uses a 128-bit encryption key for RC2.
Ignored if you do not select an encryption type.
Default is 1.

Encryption Type Type of encryption that the Data Integration Service uses. Select one of the following values:
- None
- RC2
- DES
Default is None.

Interpret as Rows Interprets the pacing size as rows or kilobytes. Select to represent the pacing size in number of rows. If you clear this option, the pacing size represents kilobytes. Default is Disabled.

Location Location of the PowerExchange Listener node that can connect to the data object. The location is defined in the first parameter of the NODE statement in the PowerExchange dbmover.cfg configuration file.

OffLoad Processing Moves bulk data processing from the source machine to the Data Integration Service machine. Enter one of the following values:
- Auto. The Data Integration Service determines whether to use offload processing.
- Yes. Use offload processing.
- No. Do not use offload processing.
Default is Auto.

Pacing Size Amount of data that the source system can pass to the PowerExchange Listener. Configure the pacing size if an external application, database, or the Data Integration Service node is a bottleneck. The lower the value, the faster the performance. Minimum value is 0. Enter 0 for maximum performance. Default is 0.

Worker Threads Number of threads that the Data Integration Service uses to process bulk data when offload processing is enabled. For optimal performance, this value should not exceed the number of available processors on the Data Integration Service machine. Valid values are 1 through 64. Default is 0, which disables multithreading.

Write Mode Mode in which Data Integration Service sends data to the PowerExchange Listener. Configure one of the following write modes:
- CONFIRMWRITEON. Sends data to the PowerExchange Listener and waits for a response before sending more data. Select if error recovery is a priority. This option might decrease performance.
- CONFIRMWRITEOFF. Sends data to the PowerExchange Listener without waiting for a response. Use this option when you can reload the target table if an error occurs.
- ASYNCHRONOUSWITHFAULTTOLERANCE. Sends data to the PowerExchange Listener without waiting for a response. This option also provides the ability to detect errors. This provides the speed of confirm write off with the data integrity of confirm write on.
Default is CONFIRMWRITEON.

VSAM Connection Properties

Use a VSAM connection to connect to a VSAM data set.

The following table describes the VSAM connection properties:

Option Description

Code Page Required. Code to read from or write to the VSAM file. Use the ISO code page name, such as ISO-8859-6. The code page name is not case sensitive.

Array Size Determines the number of records in the storage array for the threads when the worker threads value is greater than 0. Valid values are from 1 through 100000. Default is 25.

Compression Compresses the data to decrease the amount of data Informatica applications write over the network. True or false. Default is false.

Encryption Level Level of encryption that the Data Integration Service uses. If you select RC2 or DES for Encryption Type, select one of the following values to indicate the encryption level:
- 1 - Uses a 56-bit encryption key for DES and RC2.
- 2 - Uses 168-bit triple encryption key for DES. Uses a 64-bit encryption key for RC2.
- 3 - Uses 168-bit triple encryption key for DES. Uses a 128-bit encryption key for RC2.
Ignored if you do not select an encryption type.
Default is 1.

Encryption Type Enter one of the following values for the encryption type:
- None
- RC2
- DES
Default is None.

Interpret as Rows Interprets the pacing size as rows or kilobytes. Select to represent the pacing size in number of rows. If you clear this option, the pacing size represents kilobytes. Default is Disabled.

Location Location of the PowerExchange Listener node that can connect to the VSAM file. The location is defined in the first parameter of the NODE statement in the PowerExchange dbmover.cfg configuration file.

OffLoad Processing Moves bulk data processing from the VSAM source to the Data Integration Service machine. Enter one of the following values:
- Auto. The Data Integration Service determines whether to use offload processing.
- Yes. Use offload processing.
- No. Do not use offload processing.
Default is Auto.

Pacing Size Amount of data the source system can pass to the PowerExchange Listener. Configure the pacing size if an external application, database, or the Data Integration Service node is a bottleneck. The lower the value, the faster the performance. Minimum value is 0. Enter 0 for maximum performance. Default is 0.

Worker Threads Number of threads that the Data Integration Service uses to process bulk data when offload processing is enabled. For optimal performance, this value should not exceed the number of available processors on the Data Integration Service machine. Valid values are 1 through 64. Default is 0, which disables multithreading.

Write Mode Mode in which Data Integration Service sends data to the PowerExchange Listener. Configure one of the following write modes:
- CONFIRMWRITEON. Sends data to the PowerExchange Listener and waits for a response before sending more data. Select if error recovery is a priority. This option might decrease performance.
- CONFIRMWRITEOFF. Sends data to the PowerExchange Listener without waiting for a response. Use this option when you can reload the target table if an error occurs.
- ASYNCHRONOUSWITHFAULTTOLERANCE. Sends data to the PowerExchange Listener without waiting for a response. This option also provides the ability to detect errors. This provides the speed of confirm write off with the data integrity of confirm write on.
Default is CONFIRMWRITEON.

Web Services Connection Properties

Use a web services connection to connect to a web service.

The following table describes the web services connection properties:

Property Description

Username User name to connect to the web service.

Password Password for the user name.

End Point URL URL for the web service that you want to access.

Timeout Time in seconds that the Data Integration Service waits for a response from the web service provider.

HTTP Authentication Type Type of user authentication over HTTP. Select one of the following values:
- None. No authentication.
- Automatic. The Data Integration Service chooses the authentication type of the web service provider.
- Basic. Requires you to provide a user name and password for the domain of the web service provider. The Data Integration Service sends the user name and the password to the web service provider for authentication.
- Digest. Requires you to provide a user name and password for the domain of the web service provider. The Data Integration Service generates an encrypted message digest from the user name and password and sends it to the web service provider. The provider generates a temporary value for the user name and password and stores it in the Active Directory on the Domain Controller. It compares the value with the message digest. If they match, the web service provider authenticates you.
- NTLM. Requires you to provide a domain name, server name, or default user name and password. The web service provider authenticates you based on the domain you are connected to. It gets the user name and password from the Windows Domain Controller and compares it with the user name and password that you provide. If they match, the web service provider authenticates you. NTLM authentication does not store encrypted passwords in the Active Directory on the Domain Controller.

WS Security Type The WS-Security type that you want to use. Select PasswordText, PasswordDigest,or no WS-Security header.

Trust Certificates File Trust certificates file name.

Client Certificate File Name Client certificate file name.

Client Certificate Password Client certificate password.

Client Certificate Type The format of the client certificate file. Select one of the following values:
- PEM. Files with the .pem extension.
- DER. Files with the .cer or .der extension.

Private Key File Name File name for the private key used when the web service provider authenticates the consumer or when the consumer and provider exchange certificates.

Private Key Password Password for the private key file.

Private Key Type The private key type is DER.
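
For example, a web services connection might point to an endpoint such as the following. The host, port, and service path are hypothetical placeholders for illustration only.

End Point URL: http://ws.example.com:7333/services/CustomerLookup
Timeout: 60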

Connection Explorer View

Use the Connection Explorer view to view relational database connections and to create relational data objects.

You can complete the following tasks in the Connection Explorer view:

¨ Add a connection to the view. Click the Select Connection button to choose one or more connections to add to the Connection Explorer view.

¨ Connect to a relational database. Right-click a relational database and click Connect.

¨ Disconnect from a relational database. Right-click a relational database and click Disconnect.


¨ Create a relational data object. After you connect to a relational database, expand the database to view tables. Right-click a table and click Add to Project to open the New Relational Data Object dialog box.

¨ Refresh a connection. Right-click a connection and click Refresh.

¨ Show only the default schema. Right-click a connection and click Show Default Schema Only. Default is enabled.

¨ Delete a connection from the Connection Explorer view. The connection remains in the Model repository. Right-click a connection and click Delete.

Note: When you use a Microsoft SQL Server connection to access tables in a Microsoft SQL Server database, the Developer tool does not display the synonyms for the tables.

Creating a Connection

Create a database connection or nonrelational connection. Create the connection before you import physical data objects, preview data, profile data, or run mappings.

1. Click Window > Preferences.

2. Select Informatica > Connections.

3. Expand the domain in the Available Connections.

4. Select a connection type in Available Connections and click Add.

5. Enter a connection name.

6. Optionally, enter a connection description.

7. Click Next.

8. Configure the connection properties.

9. Click Test Connection to verify that you entered the connection properties correctly and that you can connect to the database.

10. Click Finish.

After you create a relational connection, you can add it to the Connection Explorer view.

Creating a Web Services Connection

Create a web services connection to configure web service security and a connection timeout period. You can associate a web services connection with a WSDL data object or a Web Service Consumer transformation.

1. Click Window > Preferences.

2. Select Informatica > Web Services > Connections.

3. Select the domain and click Add.

4. Enter a connection name.

5. Optionally, enter a connection description.

6. Click Next.

7. Configure the connection properties.


8. Click Test Connection to verify that you entered the connection properties correctly and that you can connect to the URI.

9. Click Finish.


C H A P T E R 3

Physical Data Objects

This chapter includes the following topics:

¨ Physical Data Objects Overview, 32

¨ Relational Data Objects, 33

¨ Customized Data Objects, 35

¨ Nonrelational Data Objects, 49

¨ Flat File Data Objects, 50

¨ SAP Data Objects, 60

¨ Synchronization, 61

¨ Troubleshooting Physical Data Objects, 62

Physical Data Objects Overview

A physical data object is the representation of data that is based on a flat file, relational database, nonrelational database, SAP, or WSDL resource. Create a physical data object to read data from resources, look up data from resources, or write data to resources.

A physical data object can be one of the following types:

Relational data object

A physical data object that uses a relational table, view, or synonym as a source. For example, you can create a relational data object from a DB2 i5/OS table or an Oracle view.

Customized data object

A physical data object that uses one or multiple related relational resources or relational data objects as sources. Relational resources include tables, views, and synonyms. For example, you can create a customized data object from two Microsoft SQL Server tables that have a primary key-foreign key relationship.

Create a customized data object if you want to perform operations such as joining data, filtering rows, sorting ports, or running custom queries when the Data Integration Service reads source data.

Nonrelational data object

A physical data object that uses a nonrelational database resource as a source. For example, you can create a nonrelational data object from a VSAM source.

Flat file data object

A physical data object that uses a flat file as a source. You can create a flat file data object from a delimited or fixed-width flat file.


SAP data object

A physical data object that uses an SAP source.

WSDL data object

A physical data object that uses a WSDL file as a source.

If the data object source changes, you can synchronize the physical data object. When you synchronize a physical data object, the Developer tool reimports the object metadata.

You can create any physical data object in a project or folder. Physical data objects in projects and folders are reusable objects. You can use them in any type of mapping, mapplet, or profile, but you cannot change the data object within the mapping, mapplet, or profile. To update the physical data object, you must edit the object within the project or folder.

You can include a physical data object in a mapping, mapplet, or profile. You can add a physical data object to a mapping or mapplet as a read, write, or lookup transformation. You can add a physical data object to a logical data object mapping to map logical data objects. You can also include a physical data object in a virtual table mapping when you define an SQL data service. You can include a physical data object in an operation mapping when you define a web service.

Relational Data Objects

Import a relational data object to include in a mapping, mapplet, or profile. A relational data object is a physical data object that uses a relational table, view, or synonym as a source.

You can create primary key-foreign key relationships between relational data objects. You can create key relationships between relational data objects whether or not the relationships exist in the source database.

You can include relational data objects in mappings and mapplets. You can add a relational data object to a mapping or mapplet as a read, write, or lookup transformation. You can add multiple relational data objects to a mapping or mapplet as sources. When you add multiple relational data objects at the same time, the Developer tool prompts you to add the objects in either of the following ways:

¨ As related data objects. The Developer tool creates one read transformation. The read transformation has the same capabilities as a customized data object.

¨ As independent data objects. The Developer tool creates one read transformation for each relational data object. The read transformations have the same capabilities as relational data objects.

You can import the following types of relational data object:

¨ DB2 for i5/OS

¨ DB2 for z/OS

¨ IBM DB2

¨ Microsoft SQL Server

¨ ODBC

¨ Oracle

Key Relationships

You can create key relationships between relational data objects. Key relationships allow you to join relational data objects when you use them as sources in a customized data object or as read transformations in a mapping or mapplet.


When you import relational data objects, the Developer tool retains the primary key information defined in the database. When you import related relational data objects at the same time, the Developer tool also retains foreign keys and key relationships. However, if you import related relational data objects separately, you must re-create the key relationships after you import the objects.

To create key relationships between relational data objects, first create a primary key in the referenced object. Then create the relationship in the relational data object that contains the foreign key.

The key relationships that you create exist in the relational data object metadata. You do not need to alter the source relational resources.

Creating Keys in a Relational Data Object

Create key columns to identify each row in a relational data object. You can create one primary key in each relational data object.

1. Open the relational data object.

2. Select the Keys view.

3. Click Add.

The New Key dialog box appears.

4. Enter a key name.

5. If the key is a primary key, select Primary Key.

6. Select the key columns.

7. Click OK.

8. Save the relational data object.

Creating Relationships between Relational Data Objects

You can create key relationships between relational data objects. You cannot create key relationships between a relational data object and a customized data object.

The relational data object that you reference must have a primary key.

1. Open the relational data object where you want to create a foreign key.

2. Select the Relationships view.

3. Click Add.

The New Relationship dialog box appears.

4. Enter a name for the foreign key.

5. Select a primary key from the referenced relational data object.

6. Click OK.

7. In the Relationships properties, select the foreign key columns.

8. Save the relational data object.

Creating a Read Transformation from Relational Data Objects

You can add a relational data object to a mapping or mapplet as a read transformation. When you add multiple relational data objects at the same time, you can add them as related or independent objects.

1. Open the mapping or mapplet in which you want to create a read transformation.

2. In the Object Explorer view, select one or more relational data objects.


3. Drag the relational data objects into the mapping editor.

The Add to Mapping dialog box appears.

4. Select the Read option.

5. If you add multiple data objects, select one of the following options:

Option Description

As related data objects The Developer tool creates one read transformation. The read transformation has the same capabilities as a customized data object.

As independent data objects The Developer tool creates one read transformation for each relational data object. Each read transformation has the same capabilities as a relational data object.

6. If the relational data objects use different connections, select the default connection.

7. Click OK.

The Developer tool creates one or multiple read transformations in the mapping or mapplet.

Importing a Relational Data Object

Import a relational data object to add to a mapping, mapplet, or profile.

Before you import a relational data object, you must configure a connection to the database.

1. Select a project or folder in the Object Explorer view.

2. Click File > New > Data Object.

The New dialog box appears.

3. Select Relational Data Object and click Next.

The New Relational Data Object dialog box appears.

4. Click Browse next to the Connection option and select a connection to the database.

5. Click Create data object from existing resource.

6. Click Browse next to the Resource option and select the table, view, or synonym that you want to import.

7. Enter a name for the physical data object.

8. Click Browse next to the Location option and select the project where you want to import the relational data object.

9. Click Finish.

The data object appears under Physical Data Objects in the project or folder in the Object Explorer view.

Customized Data Objects

Create a customized data object to include in a mapping, mapplet, or profile. Customized data objects are physical data objects that use relational resources as sources. Customized data objects allow you to perform tasks that you cannot perform with relational data objects, such as joining data from related resources and filtering rows.

When you create a customized data object, the Data Integration Service generates a default SQL query that it uses to read data from the source relational resources. The default query is a SELECT statement that selects each column that the Data Integration Service reads from the sources.


Create a customized data object to perform the following tasks:

¨ Join source data that originates from the same source database. You can join multiple tables with primary key-foreign key relationships whether or not the relationships exist in the database.

¨ Select distinct values from the source. If you choose Select Distinct, the Data Integration Service adds a SELECT DISTINCT statement to the default SQL query.

¨ Filter rows when the Data Integration Service reads source data. If you include a filter condition, the Data Integration Service adds a WHERE clause to the default query.

¨ Specify sorted ports. If you specify a number for sorted ports, the Data Integration Service adds an ORDER BY clause to the default SQL query.

¨ Specify an outer join instead of the default inner join. If you include a user-defined join, the Data Integration Service replaces the join information specified by the metadata in the SQL query.

¨ Create a custom query to issue a special SELECT statement for the Data Integration Service to read source data. The custom query replaces the default query that the Data Integration Service uses to read data from sources.

¨ Add pre- and post-mapping SQL commands. The Data Integration Service runs pre-mapping SQL commands against the source database before it reads the source. It runs post-mapping SQL commands against the source database after it writes to the target.

¨ Define parameters for the data object. You can define and assign parameters in a customized data object to represent connections. When you run a mapping that uses the customized data object, you can define different values for the connection parameters at runtime.

¨ Retain key relationships when you synchronize the object with the sources. If you create a customized data object that contains multiple tables, and you define key relationships that do not exist in the database, you can retain the relationships when you synchronize the data object.

You can create customized data objects in projects and folders. The customized data objects that you create in projects and folders are reusable. You can use them in multiple mappings, mapplets, and profiles. You cannot change them from within a mapping, mapplet, or profile. If you change a customized data object in a project or folder, the Developer tool updates the object in all mappings, mapplets, and profiles that use the object.

You can create customized data objects from the following types of connections and objects:

¨ DB2 i5/OS connections

¨ DB2 z/OS connections

¨ IBM DB2 connections

¨ Microsoft SQL Server connections

¨ ODBC connections

¨ Oracle connections

¨ Relational data objects

You can also add sources to a customized data object through a custom SQL query.

Default Query

When you create a customized data object, the Data Integration Service generates a default SQL query that it uses to read data from the source relational resources. The default query is a SELECT statement that selects each column that the Data Integration Service reads from the sources.

You can override the default query through the simple or advanced query. Use the simple query to select distinct values, enter a source filter, sort ports, or enter a user-defined join. Use the advanced query to create a custom SQL query for reading data from the sources. The custom query overrides the default and simple queries.
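For example, for a customized data object that reads the CUST_ID, FIRST_NAME, and LAST_NAME columns of the REG_CUSTOMER table used in the join examples later in this chapter, the default query might resemble the following statement:

SELECT REG_CUSTOMER.CUST_ID, REG_CUSTOMER.FIRST_NAME, REG_CUSTOMER.LAST_NAME FROM REG_CUSTOMER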


If any table name or column name contains a database reserved word, you can create and maintain a reserved words file, reswords.txt. Create the reswords.txt file on any machine the Data Integration Service can access.

When the Data Integration Service runs a mapping, it searches for the reswords.txt file. If the file exists, the Data Integration Service places quotes around matching reserved words when it executes SQL against the database. If you override the default query, you must enclose any database reserved words in quotes.

When the Data Integration Service generates the default query, it delimits table and field names containing the following characters with double quotes:

/ + - = ~ ` ! % ^ & * ( ) [ ] { } ' ; ? , < > \ | <space>
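For example, if a hypothetical source table is named ORDER-DETAILS and contains a column named ORDER ID, the generated query might delimit both names with double quotes:

SELECT "ORDER-DETAILS"."ORDER ID" FROM "ORDER-DETAILS"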

Creating a Reserved Words File

Create a reserved words file if any table name or column name in the customized data object contains a database reserved word.

You must have administrator privileges to configure the Data Integration Service to use the reserved words file.

1. Create a file called "reswords.txt."

2. Create a section for each database by entering the database name within square brackets, for example, [Oracle].

3. Add the reserved words to the file below the database name.

For example:
[Oracle]
OPTION
START
where
number
[SQL Server]
CURRENT
where
number

Entries are not case-sensitive.

4. Save the reswords.txt file.

5. In the Administration Console, select the Data Integration Service.

6. Edit the custom properties.

7. Add the following custom property:

Name Value

Reserved Words File <path>\reswords.txt

8. Restart the Data Integration Service.

Key Relationships

You can create key relationships between sources in a customized data object when the sources are relational resources. Key relationships allow you to join the sources within the customized data object.

Note: If a customized data object uses relational data objects as sources, you cannot create key relationships within the customized data object. You must create key relationships between the relational data objects instead.

When you import relational resources into a customized data object, the Developer tool retains the primary key information defined in the database. When you import related relational resources into a customized data object at the same time, the Developer tool also retains key relationship information. However, if you import related relational resources separately, you must re-create the key relationships after you import the objects into the customized data object.

When key relationships exist between sources in a customized data object, the Data Integration Service joins the sources based on the related keys in each source. The default join is an inner equijoin that uses the following syntax in the WHERE clause:

Source1.column_name = Source2.column_name

You can override the default join by entering a user-defined join or by creating a custom query.

To create key relationships in a customized data object, first create a primary key in the referenced source transformation. Then create the relationship in the source transformation that contains the foreign key.

The key relationships that you create exist in the customized data object metadata. You do not need to alter the source relational resources.

Creating Keys in a Customized Data Object

Create key columns to identify each row in a source transformation. You can create one primary key in each source transformation.

1. Open the customized data object.

2. Select the Read view.

3. Select the source transformation where you want to create a key.

The source must be a relational resource, not a relational data object. If the source is a relational data object, you must create keys in the relational data object.

4. Select the Keys properties.

5. Click Add.

The New Key dialog box appears.

6. Enter a key name.

7. If the key is a primary key, select Primary Key.

8. Select the key columns.

9. Click OK.

10. Save the customized data object.

Creating Relationships within a Customized Data Object

You can create key relationships between sources in a customized data object.

The source transformation that you reference must have a primary key.

1. Open the customized data object.

2. Select the Read view.

3. Select the source transformation where you want to create a foreign key.

The source must be a relational resource, not a relational data object. If the source is a relational data object, you must create relationships in the relational data object.

4. Select the Relationships properties.

5. Click Add.

The New Relationship dialog box appears.

6. Enter a name for the foreign key.


7. Select a primary key from the referenced source transformation.

8. Click OK.

9. In the Relationships properties, select the foreign key columns.

10. Save the customized data object.

Select Distinct

You can select unique values from sources in a customized data object through the select distinct option. When you use select distinct, the Data Integration Service adds a SELECT DISTINCT statement to the default SQL query.

Use the select distinct option in a customized data object to filter out unnecessary source data. For example, you might use the select distinct option to extract unique customer IDs from a table that lists total sales. When you use the customized data object in a mapping, the Data Integration Service filters out unnecessary data earlier in the data flow, which can increase performance.
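For example, if a hypothetical SALES table lists one row for each sale, enabling the Select Distinct option might change the generated query to a statement similar to the following, which returns each customer ID only once:

SELECT DISTINCT SALES.CUST_ID FROM SALES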

Using Select Distinct

You can configure a customized data object to select unique values from the source relational resource. The Data Integration Service filters out unnecessary data when you use the customized data object in a mapping.

1. Open the customized data object.

2. Select the Read view.

3. Select the Output transformation.

4. Select the Query properties.

5. Select the simple query.

6. Enable the Select Distinct option.

7. Save the customized data object.

Filter

You can enter a filter condition in a customized data object. The filter specifies the contents of the WHERE clause in the SELECT statement. Use a filter to reduce the number of rows that the Data Integration Service reads from the source relational resource. When you enter a source filter, the Developer tool adds a WHERE clause to the default query.
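For example, to read only rows for one region from a hypothetical SALES table, you might enter the following filter condition. Do not include the WHERE keyword:

SALES.REGION_CODE = 'WEST'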

Entering a Source Filter

Enter a source filter to reduce the number of rows the Data Integration Service reads from the source relational resource.

1. Open the customized data object.

2. Select the Read view.

3. Select the Output transformation.

4. Select the Query properties.

5. Select the simple query.

6. Click Edit next to the Filter field.


The SQL Query dialog box appears.

7. Enter the filter condition in the SQL Query field.

You can select column names from the Columns list.

8. Click OK.

9. Click Validate to validate the filter condition.

10. Save the customized data object.

Sorted Ports

You can use sorted ports in a customized data object to sort rows queried from the sources. The Data Integration Service adds the ports to the ORDER BY clause in the default query.

When you use sorted ports, the Data Integration Service creates the SQL query used to extract source data, including the ORDER BY clause. The database server performs the query and passes the resulting data to the Data Integration Service.
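For example, if you select the hypothetical columns REGION_CODE and CUST_ID of a SALES table as sorted ports, the generated query might end with an ORDER BY clause similar to the following:

SELECT SALES.REGION_CODE, SALES.CUST_ID, SALES.AMOUNT FROM SALES ORDER BY SALES.REGION_CODE, SALES.CUST_ID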

You might use sorted ports to increase performance when you include any of the following transformations in a mapping:

¨ Aggregator. When you configure an Aggregator transformation for sorted input, you can send sorted data by using sorted ports. The group by ports in the Aggregator transformation must match the order of the sorted ports in the customized data object.

¨ Joiner. When you configure a Joiner transformation for sorted input, you can send sorted data by using sorted ports. Configure the order of the sorted ports the same in each customized data object.

Note: You can also use the Sorter transformation to sort relational and flat file data before Aggregator and Joiner transformations.

Using Sorted Ports

Use sorted ports to sort column data in a customized data object. When you use the customized data object as a read transformation in a mapping or mapplet, you can send sorted data to transformations downstream from the read transformation.

1. Open the customized data object.

2. Select the Read view.

3. Select the Output transformation.

4. Select the Query properties.

5. Select the simple query.

6. Click Edit next to the Sort field.

The Sort dialog box appears.

7. To specify a column as a sorted port, click the New button.

8. Select the column and sort type, either ascending or descending.

9. Repeat steps 7 and 8 to select other columns to sort.

The Developer tool sorts the columns in the order in which they appear in the Sort dialog box.

10. Click OK.

In the Query properties, the Developer tool displays the sort columns in the Sort field.

11. Click Validate to validate the sort syntax.


12. Save the customized data object.

User-Defined Joins

You can enter a user-defined join in a customized data object. A user-defined join specifies the condition used to join data from multiple sources in the same customized data object.

You can use a customized data object with a user-defined join as a read transformation in a mapping. The source database performs the join before it passes data to the Data Integration Service. This can improve mapping performance when the source tables are indexed.

Enter a user-defined join in a customized data object to join data from related sources. The user-defined join overrides the default inner equijoin that the Data Integration Service creates based on the related keys in each source. When you enter a user-defined join, enter the contents of the WHERE clause that specifies the join condition. If the user-defined join performs an outer join, the Data Integration Service might insert the join syntax in the WHERE clause or the FROM clause, based on the database syntax.

You might need to enter a user-defined join in the following circumstances:

¨ Columns do not have a primary key-foreign key relationship.

¨ The datatypes of columns used for the join do not match.

¨ You want to specify a different type of join, such as an outer join.

Use the following guidelines when you enter a user-defined join in a customized data object:

¨ Do not include the WHERE keyword in the user-defined join.

¨ Enclose all database reserved words in quotes.

¨ If you use Informatica join syntax, and Enable quotes in SQL is enabled for the connection, you must enter quotes around the table names and the column names if you enter them manually. If you select tables and columns when you enter the user-defined join, the Developer tool places quotes around the table names and the column names.

User-defined joins join data from related resources in a database. To join heterogeneous sources, use a Joiner transformation in a mapping that reads data from the sources. To perform a self-join, you must enter a custom SQL query that includes the self-join.
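For example, to join the REG_CUSTOMER and PURCHASES tables described later in this chapter on the customer ID, you might enter the following condition as the user-defined join. Do not include the WHERE keyword:

REG_CUSTOMER.CUST_ID = PURCHASES.CUST_ID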

Entering a User-Defined Join

Enter a user-defined join in a customized data object to specify the join condition for the customized data object sources.

1. Open the customized data object.

2. Select the Read view.

3. Select the Output transformation.

4. Select the Query properties.

5. Select the simple query.

6. Click Edit next to the Join field.

The SQL Query dialog box appears.

7. Enter the user-defined join in the SQL Query field.

You can select column names from the Columns list.

8. Click OK.

9. Click Validate to validate the user-defined join.


10. Save the customized data object.

Custom Queries

You can create a custom SQL query in a customized data object. When you create a custom query, you issue a special SELECT statement that the Data Integration Service uses to read source data.

You can create a custom query to add sources to an empty customized data object. You can also use a custom query to override the default SQL query.

The custom query you enter overrides the default SQL query that the Data Integration Service uses to read data from the source relational resource. The custom query also overrides the simple query settings you specify when you enter a source filter, use sorted ports, enter a user-defined join, or select distinct ports.

You can use a customized data object with a custom query as a read transformation in a mapping. The source database executes the query before it passes data to the Data Integration Service.

Use the following guidelines when you create a custom query in a customized data object:

¨ In the SELECT statement, list the column names in the order in which they appear in the source transformation.

¨ Enclose all database reserved words in quotes.

If you use a customized data object to perform a self-join, you must enter a custom SQL query that includes the self-join.
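For example, the following hypothetical custom query reads only customers who have a last name, and it lists the columns in the same order as the source transformation:

SELECT REG_CUSTOMER.CUST_ID, REG_CUSTOMER.FIRST_NAME, REG_CUSTOMER.LAST_NAME FROM REG_CUSTOMER WHERE REG_CUSTOMER.LAST_NAME IS NOT NULL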

Creating a Custom Query

Create a custom query in a customized data object to issue a special SELECT statement for reading data from the sources. The custom query overrides the default query that the Data Integration Service issues to read source data.

1. Open the customized data object.

2. Select the Read view.

3. Select the Output transformation.

4. Select the Query properties.

5. Select the advanced query.

6. Select Use custom query.

The Data Integration Service displays the query it issues to read source data.

7. Change the query or replace it with a custom query.

8. Save the customized data object.

Outer Join Support

You can use a customized data object to perform an outer join of two sources in the same database. When the Data Integration Service performs an outer join, it returns all rows from one source resource and rows from the second source resource that match the join condition.

Use an outer join when you want to join two resources and return all rows from one of the resources. For example, you might perform an outer join when you want to join a table of registered customers with a monthly purchases table to determine registered customer activity. You can join the registered customer table with the monthly purchases table and return all rows in the registered customer table, including customers who did not make purchases in the last month. If you perform a normal join, the Data Integration Service returns only registered customers who made purchases during the month, and only purchases made by registered customers.


With an outer join, you can generate the same results as a master outer or detail outer join in the Joiner transformation. However, when you use an outer join, you reduce the number of rows in the data flow, which can increase performance.

You can enter two kinds of outer joins:

¨ Left. The Data Integration Service returns all rows for the resource to the left of the join syntax and the rows from both resources that meet the join condition.

¨ Right. The Data Integration Service returns all rows for the resource to the right of the join syntax and the rows from both resources that meet the join condition.

Note: Use outer joins in nested query statements when you override the default query.

You can enter an outer join in a user-defined join or in a custom SQL query.

Informatica Join Syntax

When you enter join syntax, use the Informatica or database-specific join syntax. When you use the Informatica join syntax, the Data Integration Service translates the syntax and passes it to the source database during a mapping run.

Note: Always use database-specific syntax for join conditions.

When you use Informatica join syntax, enclose the entire join statement in braces ({Informatica syntax}). When you use database syntax, enter syntax supported by the source database without braces.

When you use Informatica join syntax, use table names to prefix column names. For example, if you have a column named FIRST_NAME in the REG_CUSTOMER table, enter “REG_CUSTOMER.FIRST_NAME” in the join syntax. Also, when you use an alias for a table name, use the alias within the Informatica join syntax to ensure the Data Integration Service recognizes the alias.

You can combine left outer or right outer joins with normal joins in a single customized data object. You cannot combine left and right outer joins. Use multiple normal joins and multiple left outer joins. Some databases limit you to using one right outer join.

When you combine joins, enter the normal joins first.

Normal Join Syntax

You can create a normal join using the join condition in a customized data object. However, if you create an outer join, you must override the default join. As a result, you must include the normal join in the join override. When you include a normal join in the join override, list the normal join before outer joins. You can enter multiple normal joins in the join override.

To create a normal join, use the following syntax:

{ source1 INNER JOIN source2 on join_condition }


The following table displays the syntax for normal joins in a join override:

Syntax Description

source1 Source resource name. The Data Integration Service returns rows from this resource that match the join condition.

source2 Source resource name. The Data Integration Service returns rows from this resource that match the join condition.

join_condition Condition for the join. Use syntax supported by the source database. You can combine multiple join conditions with the AND operator.

For example, you have a REG_CUSTOMER table with data for registered customers:

CUST_ID FIRST_NAME LAST_NAME
00001 Marvin Chi
00002 Dinah Jones
00003 John Bowden
00004 J. Marks

The PURCHASES table, refreshed monthly, contains the following data:

TRANSACTION_NO CUST_ID DATE AMOUNT
06-2000-0001 00002 6/3/2000 55.79
06-2000-0002 00002 6/10/2000 104.45
06-2000-0003 00001 6/10/2000 255.56
06-2000-0004 00004 6/15/2000 534.95
06-2000-0005 00002 6/21/2000 98.65
06-2000-0006 NULL 6/23/2000 155.65
06-2000-0007 NULL 6/24/2000 325.45

To return rows displaying customer names for each transaction in the month of June, use the following syntax:

{ REG_CUSTOMER INNER JOIN PURCHASES on REG_CUSTOMER.CUST_ID = PURCHASES.CUST_ID }

The Data Integration Service returns the following data:

CUST_ID DATE AMOUNT FIRST_NAME LAST_NAME
00002 6/3/2000 55.79 Dinah Jones
00002 6/10/2000 104.45 Dinah Jones
00001 6/10/2000 255.56 Marvin Chi
00004 6/15/2000 534.95 J. Marks
00002 6/21/2000 98.65 Dinah Jones

The Data Integration Service returns rows with matching customer IDs. It does not include customers who made no purchases in June. It also does not include purchases made by non-registered customers.

Left Outer Join Syntax

You can create a left outer join with a join override. You can enter multiple left outer joins in a single join override. When using left outer joins with other joins, list all left outer joins together, after any normal joins in the statement.

To create a left outer join, use the following syntax:

{ source1 LEFT OUTER JOIN source2 on join_condition }


The following table displays the syntax for left outer joins in a join override:

Syntax Description

source1 Source resource name. With a left outer join, the Data Integration Service returns all rows in this resource.

source2 Source resource name. The Data Integration Service returns rows from this resource that match the join condition.

join_condition Condition for the join. Use syntax supported by the source database. You can combine multiple join conditions with the AND operator.

For example, using the same REG_CUSTOMER and PURCHASES tables described in “Normal Join Syntax” on page 43, you can determine how many customers bought something in June with the following join override:

{ REG_CUSTOMER LEFT OUTER JOIN PURCHASES on REG_CUSTOMER.CUST_ID = PURCHASES.CUST_ID }

The Data Integration Service returns the following data:

CUST_ID FIRST_NAME LAST_NAME DATE AMOUNT
00001 Marvin Chi 6/10/2000 255.56
00002 Dinah Jones 6/3/2000 55.79
00003 John Bowden NULL NULL
00004 J. Marks 6/15/2000 534.95
00002 Dinah Jones 6/10/2000 104.45
00002 Dinah Jones 6/21/2000 98.65

The Data Integration Service returns all registered customers in the REG_CUSTOMER table, using null values for the customer who made no purchases in June. It does not include purchases made by non-registered customers.

Use multiple join conditions to determine how many registered customers spent more than $100.00 in a single purchase in June:

{REG_CUSTOMER LEFT OUTER JOIN PURCHASES on (REG_CUSTOMER.CUST_ID = PURCHASES.CUST_ID AND PURCHASES.AMOUNT > 100.00) }

The Data Integration Service returns the following data:

CUST_ID FIRST_NAME LAST_NAME DATE AMOUNT
00001 Marvin Chi 6/10/2000 255.56
00002 Dinah Jones 6/10/2000 104.45
00003 John Bowden NULL NULL
00004 J. Marks 6/15/2000 534.95

You might use multiple left outer joins if you want to incorporate information about returns during the same time period. For example, the RETURNS table contains the following data:

CUST_ID RET_DATE RETURN
00002 6/10/2000 55.79
00002 6/21/2000 104.45

To determine how many customers made purchases and returns for the month of June, use two left outer joins:

{ REG_CUSTOMER LEFT OUTER JOIN PURCHASES on REG_CUSTOMER.CUST_ID = PURCHASES.CUST_ID LEFT OUTER JOIN RETURNS on REG_CUSTOMER.CUST_ID = RETURNS.CUST_ID }

The Data Integration Service returns the following data:

CUST_ID FIRST_NAME LAST_NAME DATE AMOUNT RET_DATE RETURN
00001 Marvin Chi 6/10/2000 255.56 NULL NULL
00002 Dinah Jones 6/3/2000 55.79 NULL NULL
00003 John Bowden NULL NULL NULL NULL
00004 J. Marks 6/15/2000 534.95 NULL NULL
00002 Dinah Jones 6/10/2000 104.45 NULL NULL
00002 Dinah Jones 6/21/2000 98.65 NULL NULL
00002 Dinah Jones NULL NULL 6/10/2000 55.79
00002 Dinah Jones NULL NULL 6/21/2000 104.45


The Data Integration Service uses NULLs for missing values.

Right Outer Join Syntax

You can create a right outer join with a join override. The right outer join returns the same results as a left outer join if you reverse the order of the resources in the join syntax. Use only one right outer join in a join override. If you want to create more than one right outer join, try reversing the order of the source resources and changing the join types to left outer joins.

When you use a right outer join with other joins, enter the right outer join at the end of the join override.

To create a right outer join, use the following syntax:

{ source1 RIGHT OUTER JOIN source2 on join_condition }

The following table displays syntax for a right outer join in a join override:

Syntax Description

source1 Source resource name. The Data Integration Service returns rows from this resource that match the join condition.

source2 Source resource name. With a right outer join, the Data Integration Service returns all rows in this resource.

join_condition Condition for the join. Use syntax supported by the source database. You can combine multiple join conditions with the AND operator.
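For example, the following join override returns all rows from the REG_CUSTOMER table described earlier in this chapter because REG_CUSTOMER appears to the right of the join syntax. The result is equivalent to the first left outer join example in “Left Outer Join Syntax”:

{ PURCHASES RIGHT OUTER JOIN REG_CUSTOMER on REG_CUSTOMER.CUST_ID = PURCHASES.CUST_ID }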

Pre- and Post-Mapping SQL Commands

You can create SQL commands in a customized data object that the Data Integration Service runs against the source relational resource. When you use the customized data object in a mapping, the Data Integration Service runs pre-mapping SQL commands against the source database before it reads the source. It runs post-mapping SQL commands against the source database after it writes to the target.

Use the following guidelines when you enter pre- and post-mapping SQL commands in a customized data object:

¨ Use any command that is valid for the database type. The Data Integration Service does not allow nested comments, even though the database might.

¨ Use a semicolon (;) to separate multiple statements. The Data Integration Service issues a commit after each statement.

¨ The Data Integration Service ignores semicolons within /* ... */.

¨ If you need to use a semicolon outside comments, you can escape it with a backslash (\), as shown in the example after this list. When you escape the semicolon, the Data Integration Service ignores the backslash, and it does not use the semicolon as a statement separator.

¨ The Developer tool does not validate the SQL.
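For example, the following hypothetical pre-mapping SQL commands clear a staging table and write an audit message before the Data Integration Service reads the source. The unescaped semicolon separates the two statements, and the backslash escapes the semicolon inside the string literal so that it is treated as data:

DELETE FROM STAGE_ORDERS; INSERT INTO AUDIT_LOG (MESSAGE) VALUES ('pre-mapping refresh\; started')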

Adding Pre- and Post-Mapping SQL Commands

You can add pre- and post-mapping SQL commands to a customized data object. The Data Integration Service runs the SQL commands when you use the customized data object in a mapping.

1. Open the customized data object.

2. Select the Read view.

3. Select the Output transformation.


4. Select the Advanced properties.

5. Enter a pre-mapping SQL command in the PreSQL field.

6. Enter a post-mapping SQL command in the PostSQL field.

7. Save the customized data object.

Customized Data Objects Write Properties

The Data Integration Service uses write properties when it writes data to relational resources. To edit write properties, select the Input transformation in the Write view, and then select the Advanced properties.

The following table describes the write properties that you configure for customized data objects:

Property Description

Load type Type of target loading. Select Normal or Bulk.
If you select Normal, the Data Integration Service loads targets normally. You can choose Bulk when you load to DB2, Sybase, Oracle, or Microsoft SQL Server. If you specify Bulk for other database types, the Data Integration Service reverts to a normal load. Bulk loading can increase mapping performance, but it limits the ability to recover because no database logging occurs.
Choose Normal mode if the mapping contains an Update Strategy transformation. If you choose Normal and the Microsoft SQL Server target name includes spaces, configure the following environment SQL in the connection object:
SET QUOTED_IDENTIFIER ON

Update override Overrides the default UPDATE statement for the target.

Delete Deletes all rows flagged for delete. Default is enabled.

Insert Inserts all rows flagged for insert. Default is enabled.

Truncate target table Truncates the target before it loads data. Default is disabled.

Update strategy Update strategy for existing rows. You can select one of the following strategies:
- Update as update. The Data Integration Service updates all rows flagged for update.
- Update as insert. The Data Integration Service inserts all rows flagged for update. You must also select the Insert target option.
- Update else insert. The Data Integration Service updates rows flagged for update if they exist in the target and then inserts any remaining rows marked for insert. You must also select the Insert target option.

PreSQL SQL command the Data Integration Service runs against the target database before it reads the source. The Developer tool does not validate the SQL.

PostSQL SQL command the Data Integration Service runs against the target database after it writes to the target. The Developer tool does not validate the SQL.

Creating a Customized Data Object

Create a customized data object to add to a mapping, mapplet, or profile. After you create a customized data object, add sources to it.

1. Select a project or folder in the Object Explorer view.


2. Click File > New > Data Object.

The New dialog box appears.

3. Select Relational Data Object and click Next.

The New Relational Data Object dialog box appears.

4. Click Browse next to the Connection option and select a connection to the database.

5. Click Create customized data object.

6. Enter a name for the customized data object.

7. Click Browse next to the Location option and select the project where you want to create the customized data object.

8. Click Finish.

The customized data object appears under Physical Data Objects in the project or folder in the Object Explorer view.

Add sources to the customized data object. You can add relational resources or relational data objects as sources. You can also use a custom SQL query to add sources.

Adding Relational Resources to a Customized Data Object

After you create a customized data object, add sources to it. You can use relational resources as sources.

Before you add relational resources to a customized data object, you must configure a connection to the database.

1. In the Connection Explorer view, select one or more relational resources in the same relational connection.

2. Right-click in the Connection Explorer view and select Add to project.

The Add to Project dialog box appears.

3. Select Add as related resource(s) to existing customized data object and click OK.

The Add to Data Object dialog box appears.

4. Select the customized data object and click OK.

5. If you add multiple resources to the customized data object, the Developer tool prompts you to select the resource to write to. Select the resource and click OK.

If you use the customized data object in a mapping as a write transformation, the Developer tool writes data to this resource.

The Developer tool adds the resources to the customized data object.

Adding Relational Data Objects to a Customized Data Object

After you create a customized data object, add sources to it. You can use relational data objects as sources.

1. Open the customized data object.

2. Select the Read view.

3. In the Object Explorer view, select one or more relational data objects in the same relational connection.

4. Drag the objects from the Object Explorer view to the customized data object Read view.

5. If you add multiple relational data objects to the customized data object, the Developer tool prompts you to select the object to write to. Select the object and click OK.

If you use the customized data object in a mapping as a write transformation, the Developer tool writes data to this relational data object.

The Developer tool adds the relational data objects to the customized data object.


Nonrelational Data Objects

Import a nonrelational data object to use in a mapping, mapplet, or profile. A nonrelational data object is a physical data object that uses a nonrelational data source.

You can import nonrelational data objects for the following connection types:

¨ Adabas

¨ IMS

¨ Sequential

¨ VSAM

When you import a nonrelational data object, the Developer tool reads the metadata for the object from its PowerExchange data map. A data map associates nonrelational records with relational tables so that the product can use the SQL language to access the data. To create a data map, use the PowerExchange Navigator.

After you import the object, you can include its nonrelational operations as read transformations in mappings and mapplets. Each nonrelational operation corresponds to a relational table that the data map defines. To view the mapping of fields in one or more nonrelational records to columns in the relational table, double-click the nonrelational operation in the Object Explorer view.

For more information about data maps, see the PowerExchange Navigator Guide.

Note: Before you work with nonrelational data objects that were created with Informatica 9.0.1, you must upgrade them. To upgrade nonrelational data objects, issue the infacmd pwx UpgradeModels command.

Importing a Nonrelational Data Object

Import a nonrelational data object to use in a mapping, mapplet, or profile.

Before you import a nonrelational data object, you need to configure a connection to the database or data set. You also need to create a data map for the object.

1. Select a project or folder in the Object Explorer view.

2. Click File > New > Data Object.

3. Select Non-relational Data Object and click Next.

The New Non-relational Data Object dialog box appears.

4. Enter a name for the physical data object.

5. Click Browse next to the Connection option, and select a connection.

6. Click Browse next to the Data Map option, and select the data map that you want to import.

The Resources area displays the list of relational tables that the data map defines.

7. Optionally, add or remove tables to or from the Resources area.

8. Click Finish.

The nonrelational data object and its nonrelational operations appear under Physical Data Objects in the project or folder in the Object Explorer view.

Creating a Read Transformation from Nonrelational Data Operations

You can add a nonrelational data operation to a mapping or mapplet as a read transformation.

1. Open the mapping or mapplet in which you want to create a read transformation.

2. In the Object Explorer view, select one or more nonrelational data operations.


3. Drag the nonrelational data operations into the mapping editor.

The Add to Mapping dialog box appears.

4. Select the Read option.

As independent data object(s) is automatically selected.

5. Click OK.

The Developer tool creates a read transformation for each nonrelational data operation in the mapping or mapplet.

Flat File Data Objects

Create or import a flat file data object to include in a mapping, mapplet, or profile. You can use flat file data objects as sources, targets, and lookups in mappings and mapplets. You can create profiles on flat file data objects.

A flat file physical data object can be delimited or fixed-width. You can import fixed-width and delimited flat files that do not contain binary data.

After you import a flat file data object, you might need to create parameters or configure file properties. Create parameters through the Parameters view. Edit file properties through the Overview, Read, Write, and Advanced views.

The Overview view allows you to edit the flat file data object name and description. It also allows you to update column properties for the flat file data object.

The Read view controls the properties that the Data Integration Service uses when it reads data from the flat file. The Read view contains the following transformations:

¨ Source transformation. Defines the flat file that provides the source data. Select the source transformation to edit properties such as the name and description, column properties, and source file format properties.

¨ Output transformation. Represents the rows that the Data Integration Service reads when it runs a mapping. Select the Output transformation to edit the file run-time properties such as the source file name and directory.

The Write view controls the properties that the Data Integration Service uses when it writes data to the flat file. The Write view contains the following transformations:

¨ Input transformation. Represents the rows that the Data Integration Service writes when it runs a mapping. Select the Input transformation to edit the file run-time properties such as the target file name and directory.

¨ Target transformation. Defines the flat file that accepts the target data. Select the target transformation to edit the name and description and the target file format properties.

The Advanced view controls format properties that the Data Integration Service uses when it reads data from and writes data to the flat file.

When you create mappings that use file sources or file targets, you can view flat file properties in the Properties view. You cannot edit file properties within a mapping, except for the reject file name, reject file directory, and tracing level.

Flat File Data Object Overview Properties

The Data Integration Service uses overview properties when it reads data from or writes data to a flat file. Overview properties include general properties, which apply to the flat file data object. They also include column properties, which apply to the columns in the flat file data object. The Developer tool displays overview properties for flat files in the Overview view.


The following table describes the general properties that you configure for flat files:

Property Description

Name Name of the flat file data object.

Description Description of the flat file data object.

The following table describes the column properties that you configure for flat files:

Property Description

Name Name of the column.

Native type Native datatype of the column.

Bytes to process (fixed-width flat files) Number of bytes that the Data Integration Service reads or writes for the column.

Precision Maximum number of significant digits for numeric datatypes, or maximum number of characters for string datatypes. For numeric datatypes, precision includes scale.

Scale Maximum number of digits after the decimal point for numeric values.

Format Column format for numeric and datetime datatypes.
For numeric datatypes, the format defines the thousand separator and decimal separator. Default is no thousand separator and a period (.) for the decimal separator.
For datetime datatypes, the format defines the display format for year, month, day, and time. It also defines the field width. Default is "A 19 YYYY-MM-DD HH24:MI:SS."

Visibility Determines whether the Data Integration Service can read data from or write data to the column.
For example, when the visibility is Read, the Data Integration Service can read data from the column. It cannot write data to the column.
For flat file data objects, this property is read-only. The visibility is always Read and Write.

Description Description of the column.

Flat File Data Object Read Properties

The Data Integration Service uses read properties when it reads data from a flat file. Select the source transformation to edit general, column, and format properties. Select the Output transformation to edit run-time properties.

General Properties

The Developer tool displays general properties for flat file sources in the source transformation in the Read view.

The following table describes the general properties that you configure for flat file sources:

Property Description

Name Name of the flat file.


This property is read-only. You can edit the name in the Overview view. When you use the flat file as a source in a mapping, you can edit the name within the mapping.

Description Description of the flat file.

Columns Properties

The Developer tool displays column properties for flat file sources in the source transformation in the Read view.

The following table describes the column properties that you configure for flat file sources:

Property Description

Name Name of the column.

Native type Native datatype of the column.

Bytes to process (fixed-width flat files) Number of bytes that the Data Integration Service reads for the column.

Precision Maximum number of significant digits for numeric datatypes, or maximum number of characters for string datatypes. For numeric datatypes, precision includes scale.

Scale Maximum number of digits after the decimal point for numeric values.

Format Column format for numeric and datetime datatypes.
For numeric datatypes, the format defines the thousand separator and decimal separator. Default is no thousand separator and a period (.) for the decimal separator.
For datetime datatypes, the format defines the display format for year, month, day, and time. It also defines the field width. Default is "A 19 YYYY-MM-DD HH24:MI:SS."

Shift key (fixed-width flat files) Allows the user to define a shift-in or shift-out statefulness for the column in the fixed-width flat file.

Description Description of the column.

Format Properties

The Developer tool displays format properties for flat file sources in the source transformation in the Read view.

The following table describes the format properties that you configure for delimited flat file sources:

Property Description

Start import at line Row at which the Data Integration Service starts importing data. Use this option to skip header rows. Default is 1.

Row delimiter Octal code for the character that separates rows of data. Default is line feed, \012 LF (\n).


Escape character Character used to escape a delimiter character in an unquoted string if the delimiter is the next character after the escape character. If you specify an escape character, the Data Integration Service reads the delimiter character as a regular character embedded in the string.
Note: You can improve mapping performance slightly if the source file does not contain quotes or escape characters.

Retain escape character in data Includes the escape character in the output string. Default is disabled.

Treat consecutive delimiters as one Causes the Data Integration Service to treat one or more consecutive column delimiters as one. Otherwise, the Data Integration Service reads two consecutive delimiters as a null value. Default is disabled.

The following table describes the format properties that you configure for fixed-width flat file sources:

Property Description

Start import at line Row at which the Data Integration Service starts importing data. Use this option to skip header rows. Default is 1.

Number of bytes to skip between records Number of bytes between the last column of one row and the first column of the next. The Data Integration Service skips the entered number of bytes at the end of each row to avoid reading carriage return characters or line feed characters. Enter 1 for UNIX files and 2 for DOS files. Default is 2.

Line sequential Causes the Data Integration Service to read a line feed character or carriage return character in the last column as the end of the column. Select this option if the file uses line feeds or carriage returns to shorten the last column of each row. Default is disabled.

Strip trailing blanks Strips trailing blanks from string values. Default is disabled.

User defined shift state Allows you to select the shift state for source columns in the Columns properties.
Select this option when the source file contains both multibyte and single-byte data, but does not contain shift-in and shift-out keys. If a multibyte file source does not contain shift keys, you must select a shift key for each column in the flat file data object. Select the shift key for each column to enable the Data Integration Service to read each character correctly.
Default is disabled.


Run-time Properties

The Developer tool displays run-time properties for flat file sources in the Output transformation in the Read view.

The following table describes the run-time properties that you configure for flat file sources:

Property Description

Input type Type of source input. You can choose the following types of source input:
- File. For flat file sources.
- Command. For source data or a file list generated by a shell command.

Source type Indicates whether the source file contains the source data or a list of files with the same file properties. You can choose the following source file types:
- Direct. For source files that contain the source data.
- Indirect. For source files that contain a list of files. The Data Integration Service finds the file list and reads each listed file when it runs the mapping.

Source file name File name of the flat file source.

Source file directory Directory where the flat file source exists. The machine that hosts Informatica Services must be able to access this directory.

Command Command used to generate the source file data.
Use a command to generate or transform flat file data and send the standard output of the command to the flat file reader when the mapping runs. The flat file reader reads the standard output as the flat file source data. Generating source data with a command eliminates the need to stage a flat file source. Use a command or script to send source data directly to the Data Integration Service instead of using a pre-mapping command to generate a flat file source. You can also use a command to generate a file list.
For example, to use a directory listing as a file list, use the following command:
cd MySourceFiles; ls sales-records-Sep-*-2005.dat

Truncate string null Strips the first null character and all characters after the first null character from string values. Enable this option for delimited flat files that contain null characters in strings. If you do not enable this option, the Data Integration Service generates a row error for any row that contains null characters in a string. Default is disabled.

Line sequential buffer length Number of bytes that the Data Integration Service reads for each line.This property, together with the total row size, determines whether the Data IntegrationService drops a row. If the row exceeds the larger of the line sequential buffer length or thetotal row size, the Data Integration Service drops the row and writes it to the mapping log file.To determine the total row size, add the column precision and the delimiters, and then multiplythe total by the maximum bytes for each character.Default is 1024.
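For example, assuming a delimited file with 10 columns, each with a precision of 20, 9 delimiters, and a code page in which a character can take up to 3 bytes, the total row size is (10 x 20 + 9) x 3 = 627 bytes, which fits within the default line sequential buffer length of 1024 bytes. For wider rows, increase the line sequential buffer length so that the Data Integration Service does not drop rows.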

Configuring Flat File Read Properties
Configure read properties to control how the Data Integration Service reads data from a flat file.

1. Open the flat file data object.

2. Select the Read view.

3. To edit general, column, or format properties, select the source transformation. To edit run-time properties, select the Output transformation.

4. In the Properties view, select the properties you want to edit.

For example, click Columns properties or Runtime properties.


5. Edit the properties.

6. Save the flat file data object.

Flat File Data Object Write Properties
The Data Integration Service uses write properties when it writes data to a flat file. Select the Input transformation to edit run-time properties. Select the target transformation to edit general and column properties.

Run-time Properties
The Developer tool displays run-time properties for flat file targets in the Input transformation in the Write view.

The following table describes the run-time properties that you configure for flat file targets:

Property Description

Append if exists Appends the output data to the target files and reject files. If you do not select this option, the Data Integration Service truncates the target file and reject file before writing data to them. If the files do not exist, the Data Integration Service creates them. Default is disabled.

Create directory if not exists Creates the target directory if it does not exist. Default is disabled.

Header options Creates a header row in the file target. You can choose the following options:
- No header. Does not create a header row in the flat file target.
- Output field names. Creates a header row in the file target with the output port names.
- Use header command output. Uses the command in the Header Command field to generate a header row. For example, you can use a command to add the date to a header row for the file target. See the example after this table.
Default is no header.

Header command Command used to generate the header row in the file target.

Footer command Command used to generate the footer row in the file target.

Output type Type of target for the mapping. Select File to write the target data to a flat file. Select Command to output data to a command.

Output file directory Output directory for the flat file target. The machine that hosts Informatica Services must be able to access this directory. Default is ".", which stands for the following directory:
<Informatica Services Installation Directory>\tomcat\bin

Output file name File name of the flat file target.

Command Command used to process the target data. On UNIX, use any valid UNIX command or shell script. On Windows, use any valid DOS command or batch file. The flat file writer sends the data to the command instead of a flat file target. You can improve mapping performance by pushing transformation tasks to the command instead of the Data Integration Service. You can also use a command to sort or to compress target data. For example, use the following command to generate a compressed file from the target data:
compress -c - > MyTargetFiles/MyCompressedFile.Z


Reject file directory Directory where the reject file exists. Note: This field appears when you edit a flat file target in a mapping.

Reject file name File name of the reject file. Note: This field appears when you edit a flat file target in a mapping.
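For example, on UNIX you might use a header command such as the following to add the run date to the header row of the file target. The command is only an illustration; the flat file writer uses whatever the command writes to standard output as the header row.
echo "Extract generated on $(date)"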

General Properties
The Developer tool displays general properties for flat file targets in the target transformation in the Write view.

The following table describes the general properties that you configure for flat file targets:

Property Description

Name Name of the flat file. This property is read-only. You can edit the name in the Overview view. When you use the flat file as a target in a mapping, you can edit the name within the mapping.

Description Description of the flat file.

Columns Properties
The Developer tool displays column properties for flat file targets in the target transformation in the Write view.

The following table describes the column properties that you configure for flat file targets:

Property Description

Name Name of the column.

Native type Native datatype of the column.

Bytes to process (fixed-width flat files) Number of bytes that the Data Integration Service writes for the column.

Precision Maximum number of significant digits for numeric datatypes, or maximum number of characters for string datatypes. For numeric datatypes, precision includes scale.

Scale Maximum number of digits after the decimal point for numeric values.

Format Column format for numeric and datetime datatypes. For numeric datatypes, the format defines the thousand separators and decimal separators. Default is no thousand separator and a period (.) for the decimal separator. For datetime datatypes, the format defines the display format for year, month, day, and time. It also defines the field width. Default is "A 19 YYYY-MM-DD HH24:MI:SS."

Description Description of the column.

Configuring Flat File Write Properties
Configure write properties to control how the Data Integration Service writes data to a flat file.


1. Open the flat file data object.

2. Select the Write view.

3. To edit run-time properties, select the Input transformation. To edit general or column properties, select the target transformation.

4. In the Properties view, select the properties you want to edit.

For example, click Runtime properties or Columns properties.

5. Edit the properties.

6. Save the flat file data object.

Flat File Data Object Advanced Properties
The Data Integration Service uses advanced properties when it reads data from or writes data to a flat file. The Developer tool displays advanced properties for flat files in the Advanced view.

The following table describes the advanced properties that you configure for flat files:

Property Description

Code page Code page of the flat file data object. For source files, use a source code page that is a subset of the target code page. For lookup files, use a code page that is a superset of the source code page and a subset of the target code page. For target files, use a code page that is a superset of the source code page. Default is "MS Windows Latin 1 (ANSI), superset of Latin 1."

Format Format for the flat file, either delimited or fixed-width.

Delimiters (delimited flat files) Character used to separate columns of data.

Null character type (fixed-width flat files) Null character type, either text or binary.

Null character (fixed-width flat files) Character used to represent a null value. The null character can be any valid character in the file code page or any binary value from 0 to 255.

Repeat null character (fixed-width flat files) For source files, causes the Data Integration Service to read repeat null characters in a single field as one null value. For target files, causes the Data Integration Service to write as many null characters as possible into the target field. If you do not enable this option, the Data Integration Service enters one null character at the beginning of the field to represent a null value. Default is disabled.

Datetime format Defines the display format and the field width for datetime values. Default is "A 19 YYYY-MM-DD HH24:MI:SS."

Thousand separator Thousand separator for numeric values. Default is None.

Decimal separator Decimal separator for numeric values. Default is a period (.).

Tracing level Controls the amount of detail in the mapping log file. Note: This field appears when you edit a flat file source or target in a mapping.
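For example, with the default datetime format, which specifies a 19-character field formatted as YYYY-MM-DD HH24:MI:SS, a datetime value appears as 2011-03-15 14:30:59. With a comma as the thousand separator and a period as the decimal separator, a numeric value appears as 1,234,567.89.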


Creating a Flat File Data Object
Create a flat file data object to define the data object columns and rows.

1. Select a project or folder in the Object Explorer view.

2. Click File > New > Data Object.

3. Select Physical Data Objects > Flat File Data Object and click Next.

The New Flat File Data Object dialog box appears.

4. Select Create as Empty.

5. Enter a name for the data object.

6. Optionally, click Browse to select a project or folder for the data object.

7. Click Next.

8. Select a code page that matches the code page of the data in the file.

9. Select Delimited or Fixed-width.

10. If you selected Fixed-width, click Finish. If you selected Delimited, click Next.

11. Configure the following properties:

Property Description

Delimiters Character used to separate columns of data. Use the Other field to enter a different delimiter. Delimiters must be printable characters and must be different from the configured escape character and the quote character. You cannot select unprintable multibyte characters as delimiters.

Text Qualifier Quote character that defines the boundaries of text strings. If you select a quote character, the Developer tool ignores delimiters within a pair of quotes.

12. Click Finish.

The data object appears under Data Object in the project or folder in the Object Explorer view.

Importing a Fixed-Width Flat File Data Object
Import a fixed-width flat file data object when you have a fixed-width flat file that defines the metadata you want to include in a mapping, mapplet, or profile.

1. Click File > New > Data Object.

The New dialog box appears.

2. Select Physical Data Objects > Flat File Data Object and click Next.

The New Flat File Data Object dialog box appears.

3. Enter a name for the data object.

4. Click Browse and navigate to the directory that contains the file.

5. Click Open.

The wizard names the data object the same name as the file you selected.

6. Optionally, edit the data object name.

7. Click Next.


8. Select a code page that matches the code page of the data in the file.

9. Select Fixed-Width.

10. Optionally, edit the maximum number of rows to preview.

11. Click Next.

12. Configure the following properties:

Property Description

Import Field Names From First Line If selected, the Developer tool uses data in the first row for column names. Select this option if column names appear in the first row.

Start Import At Row Row number at which the Data Integration Service starts reading when it imports the file. For example, if you specify to start at the second row, the Developer tool skips the first row before reading.

13. Click Edit Breaks to edit column breaks. Or, follow the directions in the wizard to manipulate the column breaks in the file preview window.

You can move column breaks by dragging them. Or, double-click a column break to delete it.

14. Click Next to preview the physical data object.

15. Click Finish.

The data object appears under Data Object in the project or folder in the Object Explorer view.

Importing a Delimited Flat File Data Object
Import a delimited flat file data object when you have a delimited flat file that defines the metadata you want to include in a mapping, mapplet, or profile.

1. Select a project or folder in the Object Explorer view.

2. Click File > New > Data Object.

The New dialog box appears.

3. Select Physical Data Objects > Flat File Data Object and click Next.

The New Flat File Data Object dialog box appears.

4. Enter a name for the data object.

5. Click Browse and navigate to the directory that contains the file.

6. Click Open.

The wizard names the data object the same name as the file you selected.

7. Optionally, edit the data object name.

8. Click Next.

9. Select a code page that matches the code page of the data in the file.

10. Select Delimited.

11. Optionally, edit the maximum number of rows to preview.

12. Click Next.


13. Configure the following properties:

Property Description

Delimiters Character used to separate columns of data. Use the Other field to enter a different delimiter. Delimiters must be printable characters and must be different from the configured escape character and the quote character. You cannot select nonprinting multibyte characters as delimiters.

Text Qualifier Quote character that defines the boundaries of text strings. If you select a quote character, the Developer tool ignores delimiters within pairs of quotes.

Import Field Names From First Line If selected, the Developer tool uses data in the first row for column names. Select this option if column names appear in the first row. The Developer tool prefixes "FIELD_" to field names that are not valid.

Row Delimiter Specify a line break character. Select from the list or enter a character. Preface an octal code with a backslash (\). To use a single character, enter the character. The Data Integration Service uses only the first character when the entry is not preceded by a backslash. The character must be a single-byte character, and no other character in the code page can contain that byte. Default is line-feed, \012 LF (\n).

Escape Character Character immediately preceding a column delimiter character embedded in an unquoted string, or immediately preceding the quote character in a quoted string. When you specify an escape character, the Data Integration Service reads the delimiter character as a regular character. See the example after this table.

Start Import At Row Row number at which the Data Integration Service starts reading when it imports the file. For example, if you specify to start at the second row, the Developer tool skips the first row before reading.

Treat Consecutive Delimiters as One If selected, the Data Integration Service reads one or more consecutive column delimiters as one. Otherwise, the Data Integration Service reads two consecutive delimiters as a null value.

Remove Escape Character From Data Removes the escape character in the output string.
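For example, assume a comma delimiter and a backslash (\) escape character. The Data Integration Service reads the following row as three columns, Smith, John followed by 42 and NY, because the backslash causes the first comma to be read as a regular character:
Smith\, John,42,NY
If you select Remove Escape Character From Data, the backslash does not appear in the output string.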

14. Click Next to preview the data object.

15. Click Finish.

The data object appears under Data Object in the project or folder in the Object Explorer view.

SAP Data Objects
Import an SAP data object to include in a mapping, mapplet, or profile. SAP data objects are physical data objects that use SAP as the source.

Importing an SAP Data Object
Import an SAP data object to add to a mapping, mapplet, or profile.


Before you import an SAP data object, you need to configure a connection to the enterprise application.

1. Select a project or folder in the Object Explorer view.

2. Click File > New > Data Object.

3. Select SAP Data Object and click Next.

The New SAP Data Object dialog box appears.

4. Click Browse next to the Location option and select the target project or folder.

5. Click Browse next to the Connection option and select an SAP connection from which you want to import the SAP table metadata.

6. To add a table to the SAP Data Object, click Add next to the Resource option.

The Add sources to the data object dialog box appears.

7. Enter the table names or select them to add to the data object:

¨ Navigate to the SAP table or tables that you want to import and click OK.

¨ Enter the table name or the description of the table you want to import in the Resource field.

When you enter a table name, you can include wildcard characters and separate multiple table names with a comma.

8. Select the Show hierarchy option to display the hierarchy of the SAP table.

9. Select the table and click OK.

10. If required, add additional tables to the SAP data object.

11. Optionally, enter a name for the SAP data object.

12. Click Finish.

The data object appears under Data Object in the project or folder in the Object Explorer view.

You can also add tables to an SAP data object after you create it.

Synchronization
You can synchronize physical data objects when their sources change. When you synchronize a physical data object, the Developer tool reimports the object metadata from the source you select.

You can synchronize all physical data objects. When you synchronize relational data objects or customized data objects, you can retain or overwrite the key relationships you define in the Developer tool.

You can configure a customized data object to be synchronized when its sources change. For example, a customized data object uses a relational data object as a source, and you add a column to the relational data object. The Developer tool adds the column to the customized data object. To synchronize a customized data object when its sources change, select the Synchronize input and output option in the Overview properties of the customized data object.

To synchronize any physical data object, right-click the object in the Object Explorer view, and select Synchronize.


Troubleshooting Physical Data Objects

I am trying to preview a relational data object or a customized data object source transformation and the preview fails.
Verify that the resource owner name is correct.

When you import a relational resource, the Developer tool imports the owner name when the user name and schema from which the table is imported do not match. If the user name and schema from which the table is imported match, but the database default schema has a different name, preview fails because the Data Integration Service executes the preview query against the database default schema, where the table does not exist.

Update the relational data object or the source transformation and enter the correct resource owner name. The owner name appears in the relational data object or the source transformation Advanced properties.

I am trying to preview a flat file data object and the preview fails. I get an error saying that the system cannot find the path specified.
Verify that the machine that hosts Informatica Services can access the source file directory.

For example, you create a flat file data object by importing the following file on your local machine, MyClient:

C:\MySourceFiles\MyFile.csv

In the Read view, select the Runtime properties in the Output transformation. The source file directory is "C:\MySourceFiles."

When you preview the file, the Data Integration Service tries to locate the file in the "C:\MySourceFiles" directory on the machine that hosts Informatica Services. If the directory does not exist on the machine that hosts Informatica Services, the Data Integration Service returns an error when you preview the file.

To work around this issue, use the network path as the source file directory. For example, change the source file directory from "C:\MySourceFiles" to "\\MyClient\MySourceFiles." Share the "MySourceFiles" directory so that the machine that hosts Informatica Services can access it.


CHAPTER 4
Mappings
This chapter includes the following topics:

¨ Mappings Overview
¨ Developing a Mapping
¨ Creating a Mapping
¨ Mapping Objects
¨ Linking Ports
¨ Propagating Port Attributes
¨ Mapping Validation
¨ Running a Mapping
¨ Segments

Mappings Overview
A mapping is a set of inputs and outputs that represent the data flow between sources and targets. The inputs and outputs can be linked by transformation objects that define the rules for data transformation. The Data Integration Service uses the instructions configured in the mapping to read, transform, and write data.

The type of input and output you include in a mapping determines the type of mapping. You can create the following types of mapping in the Developer tool:

¨ Mapping with physical data objects as the input and output

¨ Logical data object mapping with a logical data object as the mapping input or output

¨ Operation mapping with an operation as the mapping input, output, or both

¨ Virtual table mapping with a virtual table as the mapping output

Object Dependency in a Mapping
A mapping is dependent on some objects that are stored as independent objects in the repository.

When object metadata changes, the Developer tool tracks the effects of these changes on mappings. Mappings might become invalid even though you do not edit the mapping. When a mapping becomes invalid, the Data Integration Service cannot run it.

The following objects are stored as independent objects in the repository:

¨ Logical data objects


¨ Physical data objects

¨ Reusable transformations

¨ Mapplets

A mapping is dependent on these objects.

The following objects in a mapping are stored as dependent repository objects:

¨ Virtual tables. Virtual tables are stored as part of an SQL data service.

¨ Non-reusable transformations that you build within the mapping. Non-reusable transformations are stored within the mapping only.

Developing a Mapping
Develop a mapping to read, transform, and write data according to your business needs.

1. Determine the type of mapping you want to create: logical data object, virtual table, or a mapping with physical data objects as input and output.

2. Create input, output, and reusable objects that you want to use in the mapping. Create physical data objects, logical data objects, or virtual tables to use as mapping input or output. Create reusable transformations that you want to use. If you want to use mapplets, you must create them also.

3. Create the mapping.

4. Add objects to the mapping. You must add input and output objects to the mapping. Optionally, add transformations and mapplets.

5. Link ports between mapping objects to create a flow of data from sources to targets, through mapplets and transformations that add, remove, or modify data along this flow.

6. Validate the mapping to identify errors.

7. Save the mapping to the Model repository.

After you develop the mapping, run it to see mapping output.

Creating a Mapping
Create a mapping to move data between flat file or relational sources and targets and transform the data.

1. Select a project or folder in the Object Explorer view.

2. Click File > New > Mapping.

3. Optionally, enter a mapping name.

4. Click Finish.

An empty mapping appears in the editor.


Mapping Objects
Mapping objects determine the data flow between sources and targets.

Every mapping must contain the following objects:

¨ Input. Describes the characteristics of the mapping source.

¨ Output. Describes the characteristics of the mapping target.

A mapping can also contain the following components:

¨ Transformation. Modifies data before writing it to targets. Use different transformation objects to perform different functions.

¨ Mapplet. A reusable object containing a set of transformations that you can use in multiple mappings.

When you add an object to a mapping, you configure the properties according to how you want the Data Integration Service to transform the data. You also connect the mapping objects according to the way you want the Data Integration Service to move the data. You connect the objects through ports.

The editor displays objects in the following ways:

¨ Iconized. Shows an icon of the object with the object name.

¨ Normal. Shows the columns and the input and output port indicators. You can connect objects that are in the normal view.

Adding Objects to a Mapping
Add objects to a mapping to determine the data flow between sources and targets.

1. Open the mapping.

2. Drag a physical data object to the editor and select Read to add the data object as a source.

3. Drag a physical data object to the editor and select Write to add the data object as a target.

4. To add a Lookup transformation, drag a physical data object from the Data Sources folder in the Object Explorer view to the editor and select Lookup.

5. To add a reusable transformation, drag the transformation from the Transformation folder in the Object Explorer view to the editor.

Repeat this step for each reusable transformation you want to add.

6. To add a non-reusable transformation, select the transformation on the Transformation palette and drag it to the editor.

Repeat this step for each non-reusable transformation you want to add.

7. Configure ports and properties for each non-reusable transformation.

8. Optionally, drag a mapplet to the editor.

One to One Links
Link one port in an input object or transformation to one port in an output object or transformation.


One to Many Links
When you want to use the same data for different purposes, you can link the port providing that data to multiple ports in the mapping.

You can create a one to many link in the following ways:

¨ Link one port to multiple transformations or output objects.

¨ Link multiple ports in one transformation to multiple transformations or output objects.

For example, you want to use salary information to calculate the average salary in a bank branch through the Aggregator transformation. You can use the same information in an Expression transformation configured to calculate the monthly pay of each employee.
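In this example, the Expression transformation might use an expression such as SALARY / 12 to calculate the monthly pay, where SALARY is a hypothetical port name that receives the same salary data that feeds the Aggregator transformation.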

Linking Ports
After you add and configure input, output, transformation, and mapplet objects in a mapping, complete the mapping by linking ports between mapping objects.

Data passes into and out of a transformation through the following ports:

¨ Input ports. Receive data.

¨ Output ports. Pass data.

¨ Input/output ports. Receive data and pass it unchanged.

Every input object, output object, mapplet, and transformation contains a collection of ports. Each port represents a column of data:

¨ Input objects provide data, so they contain only output ports.

¨ Output objects receive data, so they contain only input ports.

¨ Mapplets contain only input ports and output ports.

¨ Transformations contain a mix of input, output, and input/output ports, depending on the transformation and its application.

To connect ports, you create a link between ports in different mapping objects. The Developer tool creates the connection only when the connection meets link validation and concatenation requirements.

You can leave ports unconnected. The Data Integration Service ignores unconnected ports.

When you link ports between input objects, transformations, mapplets, and output objects, you can create the following types of link:

¨ One to one

¨ One to many

You can manually link ports or link ports automatically.

Manually Linking Ports
You can manually link one port or multiple ports.

Drag a port from an input object or transformation to the port of an output object or transformation.

Use the Ctrl or Shift key to select multiple ports to link to another transformation or output object. The Developer tool links the ports, beginning with the top pair. It links all ports that meet the validation requirements.

When you drag a port into an empty port, the Developer tool copies the port and creates a connection.


Automatically Linking Ports
When you link ports automatically, you can link by position or by name.

When you link ports automatically by name, you can specify a prefix or suffix by which to link the ports. Use prefixes or suffixes to indicate where ports occur in a mapping.

Linking Ports by Name
When you link ports by name, the Developer tool adds links between input and output ports that have the same name. Link by name when you use the same port names across transformations.

You can link ports based on prefixes and suffixes that you define. Use prefixes or suffixes to indicate where ports occur in a mapping. Link by name and prefix or suffix when you use prefixes or suffixes in port names to distinguish where they occur in the mapping or mapplet.

Linking by name is not case sensitive.

1. Click Mapping > Auto Link.

The Auto Link dialog box appears.

2. Select an object in the From window to link from.

3. Select an object in the To window to link to.

4. Select Name.

5. Optionally, click Advanced to link ports based on prefixes or suffixes.

6. Click OK.

Linking Ports by Position
When you link by position, the Developer tool links the first output port to the first input port, the second output port to the second input port, and so forth. Link by position when you create transformations with related ports in the same order.

1. Click Mapping > Auto Link.

The Auto Link dialog box appears.

2. Select an object in the From window to link from.

3. Select an object in the To window to link to.

4. Select Position and click OK.

The Developer tool links the first output port to the first input port, the second output port to the second input port, and so forth.

Rules and Guidelines for Linking Ports
Certain rules and guidelines apply when you link ports.

Use the following rules and guidelines when you connect mapping objects:

¨ If the Developer tool detects an error when you try to link ports between two mapping objects, it displays a symbol indicating that you cannot link the ports.

¨ Follow the logic of data flow in the mapping. You can link the following types of port:

- The receiving port must be an input or input/output port.

- The originating port must be an output or input/output port.


- You cannot link input ports to input ports or output ports to output ports.

¨ You must link at least one port of an input group to an upstream transformation.

¨ You must link at least one port of an output group to a downstream transformation.

¨ You can link ports from one active transformation or one output group of an active transformation to an input group of another transformation.

¨ You cannot connect an active transformation and a passive transformation to the same downstream transformation or transformation input group.

¨ You cannot connect more than one active transformation to the same downstream transformation or transformation input group.

¨ You can connect any number of passive transformations to the same downstream transformation, transformation input group, or target.

¨ You can link ports from two output groups in the same transformation to one Joiner transformation configured for sorted data if the data from both output groups is sorted.

¨ You can only link ports with compatible datatypes. The Developer tool verifies that it can map between the two datatypes before linking them. The Data Integration Service cannot transform data between ports with incompatible datatypes.

¨ The Developer tool marks some mappings invalid if the mapping violates data flow validation.

Propagating Port Attributes
Propagate port attributes to pass changed attributes to a port throughout a mapping.

1. In the mapping canvas, select a port in a transformation.

2. Click Mapping > Propagate Attributes.

The Propagate Attributes dialog box appears.

3. Select a direction to propagate attributes.

4. Select the attributes you want to propagate.

5. Optionally, preview the results.

6. Click Apply.

The Developer tool propagates the port attributes.

Dependency Types
When you propagate port attributes, the Developer tool updates dependencies.

The Developer tool can update the following dependencies:

¨ Link path dependencies

¨ Implicit dependencies

Link Path Dependencies
A link path dependency is a dependency between a propagated port and the ports in its link path.


When you propagate dependencies in a link path, the Developer tool updates all the input and input/output ports in its forward link path and all the output and input/output ports in its backward link path. The Developer tool performs the following updates:

¨ Updates the port name, datatype, precision, scale, and description for all ports in the link path of the propagated port.

¨ Updates all expressions or conditions that reference the propagated port with the changed port name.

¨ Updates the associated port property in a dynamic Lookup transformation if the associated port name changes.

Implicit Dependencies
An implicit dependency is a dependency within a transformation between two ports based on an expression or condition.

You can propagate datatype, precision, scale, and description to ports with implicit dependencies. You can also parse conditions and expressions to identify the implicit dependencies of the propagated port. All ports with implicit dependencies are output or input/output ports.

When you include conditions, the Developer tool updates the following dependencies:

¨ Link path dependencies

¨ Output ports used in the same lookup condition as the propagated port

¨ Associated ports in dynamic Lookup transformations that are associated with the propagated port

¨ Master ports used in the same join condition as the detail port

When you include expressions, the Developer tool updates the following dependencies:

¨ Link path dependencies

¨ Output ports containing an expression that uses the propagated port

The Developer tool does not propagate to implicit dependencies within the same transformation. You must propagate the changed attributes from another transformation. For example, when you change the datatype of a port that is used in a lookup condition and propagate that change from the Lookup transformation, the Developer tool does not propagate the change to the other port dependent on the condition in the same Lookup transformation.

Propagated Port Attributes by Transformation
The Developer tool propagates dependencies and attributes for each transformation.

The following list describes the dependencies and the attributes the Developer tool propagates for each transformation:

Address Validator
- None. This transformation has predefined port names and datatypes.

Aggregator
- Ports in link path: port name, datatype, precision, scale, description
- Expression: port name
- Implicit dependencies: datatype, precision, scale

Association
- Ports in link path: port name, datatype, precision, scale, description

Case Converter
- Ports in link path: port name, datatype, precision, scale, description

Comparison
- Ports in link path: port name, datatype, precision, scale, description

Consolidator
- None. This transformation has predefined port names and datatypes.

Expression
- Ports in link path: port name, datatype, precision, scale, description
- Expression: port name
- Implicit dependencies: datatype, precision, scale

Filter
- Ports in link path: port name, datatype, precision, scale, description
- Condition: port name

Joiner
- Ports in link path: port name, datatype, precision, scale, description
- Condition: port name
- Implicit dependencies: datatype, precision, scale

Key Generator
- Ports in link path: port name, datatype, precision, scale, description

Labeler
- Ports in link path: port name, datatype, precision, scale, description

Lookup
- Ports in link path: port name, datatype, precision, scale, description
- Condition: port name
- Associated ports (dynamic lookup): port name
- Implicit dependencies: datatype, precision, scale

Match
- Ports in link path: port name, datatype, precision, scale, description

Merge
- Ports in link path: port name, datatype, precision, scale, description

Parser
- Ports in link path: port name, datatype, precision, scale, description

Rank
- Ports in link path: port name, datatype, precision, scale, description
- Expression: port name
- Implicit dependencies: datatype, precision, scale

Router
- Ports in link path: port name, datatype, precision, scale, description
- Condition: port name

Sorter
- Ports in link path: port name, datatype, precision, scale, description

SQL
- Ports in link path: port name, datatype, precision, scale, description

Standardizer
- Ports in link path: port name, datatype, precision, scale, description

Union
- Ports in link path: port name, datatype, precision, scale, description
- Implicit dependencies: datatype, precision, scale

Update Strategy
- Ports in link path: port name, datatype, precision, scale, description
- Expression: port name
- Implicit dependencies: datatype, precision, scale

Weighted Average
- Ports in link path: port name, datatype, precision, scale, description

Mapping Validation
When you develop a mapping, you must configure it so the Data Integration Service can read and process the entire mapping. The Developer tool marks a mapping invalid when it detects errors that will prevent the Data Integration Service from running sessions associated with the mapping.

The Developer tool considers the following types of validation:

¨ Connection

¨ Expression

¨ Object

¨ Data flow

Connection Validation
The Developer tool performs connection validation each time you connect ports in a mapping and each time you validate a mapping.

When you connect ports, the Developer tool verifies that you make valid connections. When you validate a mapping, the Developer tool verifies that the connections are valid and that all required ports are connected. The Developer tool makes the following connection validations:

¨ At least one input object and one output object are connected.

¨ At least one mapplet input port and output port is connected to the mapping.

¨ Datatypes between ports are compatible. If you change a port datatype to one that is incompatible with the port it is connected to, the Developer tool generates an error and invalidates the mapping. You can, however, change the datatype if it remains compatible with the connected ports, such as Char and Varchar.


Expression Validation
You can validate an expression in a transformation while you are developing a mapping. If you did not correct the errors, error messages appear in the Validation Log view when you validate the mapping.

If you delete input ports used in an expression, the Developer tool marks the mapping as invalid.

Object Validation
When you validate a mapping, the Developer tool verifies that the definitions of the independent objects, such as Input transformations or mapplets, match the instance in the mapping.

If any object changes while you configure the mapping, the mapping might contain errors. If any object changes while you are not configuring the mapping, the Developer tool tracks the effects of these changes on the mappings.

Validating a Mapping
Validate a mapping to ensure that the Data Integration Service can read and process the entire mapping.

1. Click Edit > Validate.

Errors appear in the Validation Log view.

2. Fix errors and validate the mapping again.

Running a Mapping
Run a mapping to move output from sources to targets and transform data.

Before you can run a mapping, you need to configure a Data Integration Service in the Administrator tool. You also need to select a default Data Integration Service. If you have not selected a default Data Integration Service, the Developer tool prompts you to select one.

¨ Right-click an empty area in the editor and click Run Mapping.

The Data Integration Service runs the mapping and writes the output to the target.

Segments
A segment consists of one or more objects in a mapping, mapplet, rule, or virtual stored procedure. A segment can include a source, target, transformation, or mapplet.

You can copy segments. Consider the following rules and guidelines when you copy a segment:

¨ You can copy segments across folders or projects.

¨ The Developer tool reuses dependencies when possible. Otherwise, it copies dependencies.

¨ The Developer tool reuses objects that you copy from a shared project.

¨ If a mapping, mapplet, rule, or virtual stored procedure includes parameters and you copy a transformation that refers to the parameter, the transformation in the target object uses a default value for the parameter.

¨ You cannot copy input transformations and output transformations.


¨ After you paste a segment, you cannot undo previous actions.

Copying a Segment
You can copy a segment when you want to reuse a portion of the mapping logic in another mapping, mapplet, rule, or virtual stored procedure.

1. Open the object that contains the segment you want to copy.

2. Select a segment by highlighting each object you want to copy.

Hold down the Ctrl key to select multiple objects. You can also select segments by dragging the pointer in a rectangle around objects in the editor.

3. Click Edit > Copy to copy the segment to the clipboard.

4. Open a target mapping, mapplet, rule, or virtual stored procedure.

5. Click Edit > Paste.


CHAPTER 5
Performance Tuning
This chapter includes the following topics:

¨ Performance Tuning Overview
¨ Optimization Methods
¨ Setting the Optimizer Level for a Developer Tool Mapping
¨ Setting the Optimizer Level for a Deployed Mapping

Performance Tuning Overview
The Developer tool contains features that allow you to tune the performance of mappings. You might be able to improve mapping performance by updating the mapping optimizer level through the mapping configuration or mapping deployment properties.

If you notice that a mapping takes an excessive amount of time to run, you might want to change the optimizer level for the mapping. The optimizer level determines which optimization methods the Data Integration Service applies to the mapping at run-time.

You can choose one of the following optimizer levels:

None

The Data Integration Service does not optimize the mapping. It runs the mapping exactly as you designed it.

Minimal

The Data Integration Service applies the early projection optimization method to the mapping.

Normal

The Data Integration Service applies the early projection, early selection, and predicate optimization methods to the mapping. The Data Integration Service also applies pushdown optimization to the mapping. This is the default optimizer level.

Full

The Data Integration Service applies the early projection, early selection, predicate optimization, cost-based, and semi-join optimization methods to the mapping. The Data Integration Service also applies pushdown optimization to the mapping.

You set the optimizer level for a mapping in the mapping configuration or mapping deployment properties. The Data Integration Service applies different optimizer levels to the mapping depending on how you run the mapping.


You can run a mapping in the following ways:

¨ From the Run menu or mapping editor. The Data Integration Service uses the normal optimizer level.

¨ From the Run dialog box. The Data Integration Service uses the optimizer level in the selected mapping configuration.

¨ From the command line. The Data Integration Service uses the optimizer level in the application's mapping deployment properties.

You can also apply an optimizer level when you preview output in the Data Viewer view. You can preview output for mappings and for SQL queries you run against virtual tables. When you preview output in the Data Viewer view, the Developer tool uses the optimizer level in the selected data viewer configuration.

RELATED TOPICS:
¨ "Pushdown Optimization Overview" on page 81

Optimization Methods
To increase mapping performance, select an optimizer level for the mapping. The optimizer level controls the optimization methods that the Data Integration Service applies to a mapping.

The Data Integration Service can apply the following optimization methods:

¨ Early projection. The Data Integration Service attempts to reduce the amount of data that passes through a mapping by identifying unused ports and removing the links between those ports. The Data Integration Service applies this optimization method when you select the minimal, normal, or full optimizer level.

¨ Early selection. The Data Integration Service attempts to reduce the amount of data that passes through a mapping by applying the filters as early as possible. The Data Integration Service applies this optimization method when you select the normal or full optimizer level.

¨ Predicate optimization. The Data Integration Service attempts to improve mapping performance by inferring new predicate expressions and by simplifying and rewriting the predicate expressions generated by a mapping or the transformations within the mapping. The Data Integration Service applies this optimization method when you select the normal or full optimizer level.

¨ Cost-based. The Data Integration Service evaluates a mapping, generates alternate mappings, and runs the mapping with the best performance. The Data Integration Service applies this optimization method when you select the full optimizer level.

¨ Semi-join. The Data Integration Service attempts to reduce the amount of data extracted from the source by decreasing the size of one of the join operand data sets. The Data Integration Service applies this optimization method when you select the full optimizer level.

The Data Integration Service can apply multiple optimization methods to a mapping at the same time. For example, it applies the early projection, early selection, and predicate optimization methods when you select the normal optimizer level.

Early Projection Optimization Method
The early projection optimization method causes the Data Integration Service to identify unused ports and remove the links between those ports.

Identifying and removing links between unused ports improves performance by reducing the amount of data the Data Integration Service moves across transformations. When the Data Integration Service processes a mapping, it moves the data from all connected ports in a mapping from one transformation to another. In large, complex mappings, or in mappings that use nested mapplets, some ports might not ultimately supply data to the target. The early projection method causes the Data Integration Service to identify the ports that do not supply data to the target. After the Data Integration Service identifies unused ports, it removes the links between all unused ports from the mapping.

The Data Integration Service does not remove all links. For example, it does not remove the following links:

¨ Links connected to a Custom transformation

¨ Links connected to transformations that call an ABORT() or ERROR() function, send email, or call a stored procedure

If the Data Integration Service determines that all ports in a transformation are unused, it removes all transformation links except the link to the port with the least data. The Data Integration Service does not remove the unused transformation from the mapping.

The Developer tool enables this optimization method by default.

Early Selection Optimization Method
The early selection optimization method applies the filters in a mapping as early as possible.

Filtering data early increases performance by reducing the number of rows that pass through the mapping. In the early selection method, the Data Integration Service splits, moves, splits and moves, or removes the Filter transformations in a mapping.

The Data Integration Service might split a Filter transformation if the filter condition is a conjunction. For example, the Data Integration Service might split the filter condition "A>100 AND B<50" into two simpler conditions, "A>100" and "B<50."

When the Data Integration Service can split a filter, it attempts to move the simplified filters up the mapping pipeline, closer to the mapping source. Splitting the filter allows the Data Integration Service to move the simplified filters up the pipeline separately. Moving the filter conditions closer to the source reduces the number of rows that pass through the mapping.

The Data Integration Service might also remove Filter transformations from a mapping. It removes a Filter transformation when it can apply the filter condition to the transformation logic of the transformation immediately upstream of the original Filter transformation.

The Data Integration Service cannot always move a Filter transformation. For example, it cannot move a Filter transformation upstream of the following transformations:

¨ Custom transformations

¨ Transformations that call an ABORT() or ERROR() function, send email, or call a stored procedure

¨ Transformations that maintain count through a variable port, for example, COUNT=COUNT+1

¨ Transformations that create branches in the mapping. For example, the Data Integration Service cannot move a Filter transformation upstream if it is immediately downstream of a Router transformation with two output groups.

The Data Integration Service does not move a Filter transformation upstream in the mapping if doing so changes the mapping results.

The Developer tool enables this optimization method by default.

You might want to disable this method if it does not increase performance. For example, a mapping contains source ports "P1" and "P2." "P1" is connected to an Expression transformation that evaluates "P2=f(P1)." "P2" is connected to a Filter transformation with the condition "P2>1." The filter drops very few rows. If the Data Integration Service moves the Filter transformation upstream of the Expression transformation, the Filter transformation must evaluate "f(P1)>1" for every row in source port "P1." The Expression transformation also evaluates "P2=f(P1)" for every row. If the function is resource intensive, moving the Filter transformation upstream nearly doubles the number of times it is called, which might degrade performance.

Predicate Optimization Method
The predicate optimization method causes the Data Integration Service to examine the predicate expressions generated by a mapping or the transformations within a mapping to determine whether the expressions can be simplified or rewritten to increase performance of the mapping.

When the Data Integration Service runs a mapping, it generates queries against the mapping sources and performs operations on the query results based on the mapping logic and the transformations within the mapping. The generated queries and operations often involve predicate expressions. Predicate expressions represent the conditions that the data must satisfy. The filter and join conditions in Filter and Joiner transformations are examples of predicate expressions.

This optimization method causes the Data Integration Service to examine the predicate expressions generated by a mapping or the transformations within a mapping to determine whether the expressions can be simplified or rewritten to increase performance of the mapping. The Data Integration Service also attempts to apply predicate expressions as early as possible to improve mapping performance.

This method also causes the Data Integration Service to infer relationships implied by existing predicate expressions and create new predicate expressions based on the inferences. For example, a mapping contains a Joiner transformation with the join condition "A=B" and a Filter transformation with the filter condition "A>5." The Data Integration Service might be able to add the inference "B>5" to the join condition.

The Data Integration Service uses the predicate optimization method with the early selection optimization method when it can apply both methods to a mapping. For example, when the Data Integration Service creates new filter conditions through the predicate optimization method, it also attempts to move them upstream in the mapping through the early selection method. Applying both optimization methods improves mapping performance when compared to applying either method alone.

The Data Integration Service applies this optimization method when it can run the mapping more quickly. It does not apply this method when doing so changes mapping results or worsens mapping performance.

When the Data Integration Service rewrites a predicate expression, it applies mathematical logic to the expression to optimize it. For example, the Data Integration Service might perform any or all of the following actions:

¨ Identify equivalent variables across predicate expressions in the mapping and generate simplified expressions based on the equivalencies.

¨ Identify redundant predicates across predicate expressions in the mapping and remove them.

¨ Extract subexpressions from disjunctive clauses and generate multiple, simplified expressions based on the subexpressions.

¨ Normalize a predicate expression.

¨ Apply predicate expressions as early as possible in the mapping.

The Data Integration Service might not apply predicate optimization to a mapping when the mapping contains transformations with a datatype mismatch between connected ports.

The Data Integration Service might not apply predicate optimization to a transformation when any of the following conditions are true:

¨ The transformation contains explicit default values for connected ports.

¨ The transformation calls an ABORT() or ERROR() function, sends email, or calls a stored procedure.

¨ The transformation does not allow predicates to be moved. For example, a developer might create a Custom transformation that has this restriction.

The Developer tool enables this optimization method by default.


Cost-Based Optimization Method
The cost-based optimization method causes the Data Integration Service to evaluate a mapping, generate semantically equivalent mappings, and run the mapping with the best performance. This method is most effective for mappings that contain multiple Joiner transformations. It reduces run time for mappings that perform adjacent, unsorted, inner-join operations.

Semantically equivalent mappings are mappings that perform identical functions and produce the same results. To generate semantically equivalent mappings, the Data Integration Service divides the original mapping into fragments. The Data Integration Service then determines which mapping fragments it can optimize.

Generally, the Data Integration Service can optimize a fragment if the fragment meets the following requirements:

¨ The Data Integration Service can optimize every transformation within the fragment. The Data Integration Service can optimize a transformation if it can determine the number of rows that pass through the transformation. The Data Integration Service cannot optimize certain active transformations, such as some Custom transformations, because it cannot determine the number of rows that pass through the transformation.

¨ The fragment has one target transformation.

¨ No transformation in the fragment has multiple output groups.

¨ No two linked ports within a fragment perform an implicit datatype conversion. Therefore, the datatype, precision, and scale for each output port must match the datatype, precision, and scale of the linked input port.

The Data Integration Service optimizes each fragment that it can optimize. During optimization, the Data Integration Service might add, remove, or reorder transformations within a fragment. The Data Integration Service verifies that the optimized fragments produce the same results as the original fragments and forms alternate mappings that use the optimized fragments.

The Data Integration Service generates all or almost all of the mappings that are semantically equivalent to the original mapping. It computes data statistics for the original mapping and each alternate mapping. The Data Integration Service compares the statistics to identify the mapping that runs most quickly. The Data Integration Service performs a validation check on the best alternate mapping to ensure that it is valid and produces the same results as the original mapping.

The Data Integration Service caches the best alternate mapping in memory. When you run a mapping, the Data Integration Service retrieves the alternate mapping and runs it instead of the original mapping.

Semi-Join Optimization Method

The semi-join optimization method attempts to reduce the amount of data extracted from the source by modifying join operations in the mapping.

The Data Integration Service applies this method to a Joiner transformation when one input group has many more rows than the other and when the larger group has many rows with no match in the smaller group based on the join condition. The Data Integration Service attempts to decrease the size of the data set of one join operand by reading the rows from the smaller group, finding the matching rows in the larger group, and then performing the join operation. Decreasing the size of the data set improves mapping performance because the Data Integration Service no longer reads unnecessary rows from the larger group source. The Data Integration Service moves the join condition to the larger group source and reads only the rows that match the smaller group.

Before applying this optimization method, the Data Integration Service performs analyses to determine whether semi-join optimization is possible and likely to be worthwhile. If the analyses determine that this method is likely to increase performance, the Data Integration Service applies it to the mapping. The Data Integration Service then reanalyzes the mapping to determine whether there are additional opportunities for semi-join optimization. It performs additional optimizations if appropriate. The Data Integration Service does not apply semi-join optimization unless the analyses determine that there is a high probability for improved performance.
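The following SQL sketch illustrates the general idea. The table and column names are hypothetical, and the actual query that the Data Integration Service issues can differ. Instead of reading every row from a large detail source, the service reads only the detail rows that have a match in the smaller master group:

SELECT o.order_id, o.customer_id, o.order_total
FROM ORDERS o
WHERE o.customer_id IN
    (SELECT c.customer_id
     FROM PREFERRED_CUSTOMERS c);

Rows from ORDERS that cannot join to PREFERRED_CUSTOMERS are never extracted, which reduces the amount of data read from the larger source.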


For the Data Integration Service to apply the semi-join optimization method to a join operation, the Joiner transformation must meet the following requirements:

¨ The join type must be normal, master outer, or detail outer. The Joiner transformation cannot perform a full outer join.

¨ The detail pipeline must originate from a relational source.

¨ The join condition must be a valid sort-merge-join condition. That is, each clause must be an equality of one master port and one detail port. If there are multiple clauses, they must be joined by AND.

¨ If the mapping does not use target-based commits, the Joiner transformation scope must be All Input.

¨ The master and detail pipelines cannot share any transformation.

¨ The mapping cannot contain a branch between the detail source and the Joiner transformation.

The semi-join optimization method might not be beneficial in the following circumstances:

¨ The Joiner transformation master source does not contain significantly fewer rows than the detail source.

¨ The detail source is not large enough to justify the optimization. Applying the semi-join optimization method adds some overhead time to mapping processing. If the detail source is small, the time required to apply the semi-join method might exceed the time required to process all rows in the detail source.

¨ The Data Integration Service cannot get enough source row count statistics for a Joiner transformation to accurately compare the time requirements of the regular join operation against the semi-join operation.

The Developer tool does not enable this method by default.

Setting the Optimizer Level for a Developer Tool Mapping

When you run a mapping through the Run menu or mapping editor, the Developer tool runs the mapping with the normal optimizer level. To run the mapping with a different optimizer level, run the mapping through the Run dialog box.

1. Open the mapping.

2. Select Run > Open Run Dialog.

The Run dialog box appears.

3. Select a mapping configuration that contains the optimizer level you want to apply or create a mapping configuration.

4. Click the Advanced tab.

5. Change the optimizer level, if necessary.

6. Click Apply.

7. Click Run to run the mapping.

The Developer tool runs the mapping with the optimizer level in the selected mapping configuration.


Setting the Optimizer Level for a Deployed Mapping

Set the optimizer level for a mapping you run from the command line by changing the mapping deployment properties in the application.

The mapping must be in an application.

1. Open the application that contains the mapping.

2. Click the Advanced tab.

3. Select the optimizer level.

4. Save the application.

After you change the optimizer level, you must redeploy the application.


C H A P T E R 6

Pushdown Optimization

This chapter includes the following topics:

¨ Pushdown Optimization Overview, 81

¨ Pushdown Optimization to Sources, 82

¨ Pushdown Optimization Expressions, 84

¨ Comparing the Output of the Data Integration Service and Sources, 88

Pushdown Optimization Overview

Pushdown optimization causes the Data Integration Service to push transformation logic to the source database. The Data Integration Service translates the transformation logic into SQL queries and sends the SQL queries to the database. The source database executes the SQL queries to process the transformations.

Pushdown optimization improves the performance of mappings when the source database can process transformation logic faster than the Data Integration Service. The Data Integration Service also reads less data from the source.

The amount of transformation logic that the Data Integration Service pushes to the source database depends on the database, the transformation logic, and the mapping configuration. The Data Integration Service processes all transformation logic that it cannot push to a database.

The Data Integration Service can push the following transformation logic to the source database:

¨ Expression transformation logic

¨ Filter transformation logic

¨ Joiner transformation logic. The sources must be in the same database management system and must use identical connections.

The Data Integration Service cannot push transformation logic after a source in the following circumstances:

¨ The Data Integration Service cannot push any transformation logic if the source is a customized data object that contains a custom SQL query.

¨ The Data Integration Service cannot push any transformation logic if the source contains a column with a binary datatype.

¨ The Data Integration Service cannot push Expression or Joiner transformation logic if the source is a customized data object that contains a filter condition or user-defined join.

The Data Integration Service applies pushdown optimization to a mapping when you select the normal or full optimizer level. When you select the normal optimizer level, the Data Integration Service applies pushdown optimization after it applies all other optimization methods. If you select the full optimizer level, the Data Integration Service applies pushdown optimization before semi-join optimization, but after all of the other optimization methods.

When you apply pushdown optimization, the Data Integration Service analyzes the optimized mapping from the source to the target or until it reaches a downstream transformation that it cannot push to the source database. The Data Integration Service generates and executes a SELECT statement based on the transformation logic for each transformation that it can push to the database. Then, it reads the results of this SQL query and processes the remaining transformations in the mapping.
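For example, suppose a mapping reads a relational CUSTOMERS table, an Expression transformation converts the customer name to uppercase, and a Filter transformation keeps only United States customers. The table and column names here are hypothetical, and the exact statement that the Data Integration Service generates depends on the database, but the pushed-down logic might resemble the following query:

SELECT CUST_ID,
       UPPER(CUST_NAME) AS CUST_NAME,
       COUNTRY
FROM   CUSTOMERS
WHERE  COUNTRY = 'US';

The Data Integration Service reads the result set of this query and processes any remaining transformations that it could not push to the database.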

RELATED TOPICS:
¨ “Performance Tuning Overview” on page 74

Pushdown Optimization to Sources

The Data Integration Service can push transformation logic to different sources. The type of logic that the Data Integration Service pushes depends on the source type. The Data Integration Service can push Expression, Filter, and Joiner transformation logic to some sources. It can push Filter transformation logic to other sources.

The Data Integration Service can push transformation logic to the following types of sources:

¨ Sources that use native database drivers

¨ PowerExchange nonrelational sources

¨ Sources that use ODBC drivers

¨ SAP sources

Pushdown Optimization to Native Sources

When the Data Integration Service pushes transformation logic to relational sources using the native drivers, the Data Integration Service generates SQL statements that use the native database SQL.

The Data Integration Service can push Expression, Filter, and Joiner transformation logic to the following native sources:

¨ IBM DB2 for Linux, UNIX, and Windows ("DB2 for LUW")

¨ Microsoft SQL Server. The Data Integration Service can use a native connection to Microsoft SQL Server when the Data Integration Service runs on Windows.

¨ Oracle

The Data Integration Service can push Filter transformation logic to the following native sources:

¨ IBM DB2 for i5/OS

¨ IBM DB2 for z/OS

Pushdown Optimization to PowerExchange Nonrelational Sources

For PowerExchange nonrelational data sources on z/OS systems, the Data Integration Service pushes Filter transformation logic to PowerExchange. PowerExchange translates the logic into a query that the source can process.


The Data Integration Service can push Filter transformation logic to the following types of nonrelational sources:

¨ IBM IMS

¨ Sequential data sets

¨ VSAM

Pushdown Optimization to ODBC Sources

The Data Integration Service can push Expression, Filter, and Joiner transformation logic to databases that use ODBC drivers.

When you use ODBC to connect to a source, the Data Integration Service can generate SQL statements using ANSI SQL or native database SQL. The Data Integration Service can push more transformation logic to the source when it generates SQL statements using the native database SQL. The source can process native database SQL faster than it can process ANSI SQL.

You can specify the ODBC provider in the ODBC connection object. When the ODBC provider is database specific, the Data Integration Service generates SQL statements using native database SQL. When the ODBC provider is Other, the Data Integration Service generates SQL statements using ANSI SQL.
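As a hypothetical illustration of the difference (the table and column names are invented, and the statements that the Data Integration Service actually generates vary by database and driver), the same substring logic can be expressed in ANSI SQL and in native Oracle SQL:

-- ANSI SQL
SELECT PRODUCT_CODE
FROM   PRODUCTS
WHERE  SUBSTRING(PRODUCT_CODE FROM 1 FOR 3) = 'USA';

-- Native Oracle SQL
SELECT PRODUCT_CODE
FROM   PRODUCTS
WHERE  SUBSTR(PRODUCT_CODE, 1, 3) = 'USA';

When the ODBC provider identifies the database, the Data Integration Service can generate the native form, which the database can usually process more efficiently.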

You can configure a specific ODBC provider for the following ODBC connection types:

¨ Sybase ASE

¨ Microsoft SQL Server. Use an ODBC connection to connect to Microsoft SQL Server when the Data Integration Service runs on UNIX or Linux. Use a native connection to Microsoft SQL Server when the Data Integration Service runs on Windows.

Pushdown Optimization to SAP Sources

The Data Integration Service can push Filter transformation logic to SAP sources for expressions that contain a column name, an operator, and a literal string. When the Data Integration Service pushes transformation logic to SAP, the Data Integration Service converts the literal string in the expressions to an SAP datatype.

The Data Integration Service can push Filter transformation logic that contains the TO_DATE function when TO_DATE converts a DATS, TIMS, or ACCP datatype character string to one of the following date formats:

¨ 'MM/DD/YYYY'

¨ 'YYYY/MM/DD'

¨ 'YYYY-MM-DD HH24:MI:SS'

¨ 'YYYY/MM/DD HH24:MI:SS'

¨ 'MM/DD/YYYY HH24:MI:SS'

The Data Integration Service processes the transformation logic if you apply the TO_DATE function to a datatype other than DATS, TIMS, or ACCP or if TO_DATE converts a character string to a format that the Data Integration Service cannot push to SAP. The Data Integration Service processes transformation logic that contains other Informatica functions.

Filter transformation expressions can include multiple conditions separated by AND or OR. If conditions apply to multiple SAP tables, the Data Integration Service can push transformation logic to SAP when the SAP data object uses the Open SQL ABAP join syntax. Configure the Select syntax mode in the read operation of the SAP data object.
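For example, a filter condition such as the following one, in which DOC_DATE is a hypothetical DATS column, uses a supported date format and can be pushed to SAP:

DOC_DATE = TO_DATE('2011/03/31', 'YYYY/MM/DD')

A condition such as DOC_DATE = TO_DATE('31-MAR-2011', 'DD-MON-YYYY') uses a format that is not in the list above, so the Data Integration Service processes that condition instead of pushing it to SAP.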


SAP Datatype Exceptions

The Data Integration Service processes Filter transformation logic when the source cannot process the transformation logic. The Data Integration Service processes Filter transformation logic for an SAP source when the transformation expression includes the following datatypes:

¨ RAW

¨ LRAW

¨ LCHR

Pushdown Optimization Expressions

The Data Integration Service can push transformation logic to the source database when the transformation contains operators and functions that the source supports. The Data Integration Service translates the transformation expression into a query by determining equivalent operators and functions in the database. If there is no equivalent operator or function, the Data Integration Service processes the transformation logic.

If the source uses an ODBC connection and you configure a database-specific ODBC provider in the ODBC connection object, then the Data Integration Service considers the source to be the native source type.

Functions

The following table summarizes the availability of Informatica functions for pushdown optimization. In each column, an X indicates that the Data Integration Service can push the function to the source.

Note: These functions are not available for nonrelational sources on z/OS.

Function | DB2 for i5/OS (1) | DB2 for LUW | DB2 for z/OS (1) | Microsoft SQL Server | ODBC | Oracle | SAP (1) | Sybase ASE

ABS() X X X X X

ADD_TO_DATE() X X X X X X

ASCII() X X X X X X

CEIL() X X X X X X

CHR() X X X X

CONCAT() X X X X X X

COS() X X X X X X X

COSH() X X X X X X

DATE_COMPARE() X X X X X X X

DECODE() X X X X X

EXP() X X X X


FLOOR() X X X

GET_DATE_PART() X X X X X X

IIF() X X X X

IN() X X X

INITCAP() X

INSTR() X X X X X X

ISNULL() X X X X X X X

LAST_DAY() X

LENGTH() X X X X X X

LN() X X X X X

LOG() X X X X X X

LOOKUP() X

LOWER() X X X X X X X

LPAD() X

LTRIM() X X X X X X

MOD() X X X X X X

POWER() X X X X X X

ROUND(DATE) X X

ROUND(NUMBER) X X X X X X

RPAD() X

RTRIM() X X X X X X

SIGN() X X X X X X

SIN() X X X X X X X

SINH() X X X X X X

SOUNDEX() X1 X X X

SQRT() X X X X X

SUBSTR() X X X X X X


SYSDATE() X X X X X X

SYSTIMESTAMP() X X X X X X

TAN() X X X X X X X

TANH() X X X X X X

TO_BIGINT X X X X X X

TO_CHAR(DATE) X X X X X X

TO_CHAR(NUMBER) X X2 X X X X

TO_DATE() X X X X X X X

TO_DECIMAL() X X3 X X X X

TO_FLOAT() X X X X X X

TO_INTEGER() X X X X X X

TRUNC(DATE) X

TRUNC(NUMBER) X X X X X X

UPPER() X X X X X X X

(1) The Data Integration Service can push these functions to the source only when they are included in Filter transformation logic.

(2) When this function takes a decimal or float argument, the Data Integration Service can push the function only when it is included in Filter transformation logic.

(3) When this function takes a string argument, the Data Integration Service can push the function only when it is included in Filter transformation logic.

IBM DB2 Function Exceptions

The Data Integration Service cannot push supported functions to IBM DB2 for i5/OS, DB2 for LUW, and DB2 for z/OS sources under certain conditions.

The Data Integration Service processes transformation logic for IBM DB2 sources when expressions contain supported functions with the following logic:

¨ ADD_TO_DATE or GET_DATE_PART returns results with millisecond or nanosecond precision.

¨ LTRIM includes more than one argument.

¨ RTRIM includes more than one argument.

¨ TO_BIGINT converts a string to a bigint value on a DB2 for LUW source.

¨ TO_CHAR converts a date to a character string and specifies a format that is not supported by DB2.

¨ TO_DATE converts a character string to a date and specifies a format that is not supported by DB2.

¨ TO_DECIMAL converts a string to a decimal value without the scale argument.

¨ TO_FLOAT converts a string to a double-precision floating point number.

¨ TO_INTEGER converts a string to an integer value on a DB2 for LUW source.
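For example, assuming a hypothetical string column named ACCOUNT_ID, the first of the following expressions can be pushed to an IBM DB2 source, while the second, which passes more than one argument to LTRIM, is processed by the Data Integration Service:

LTRIM(ACCOUNT_ID)
LTRIM(ACCOUNT_ID, '0')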


Microsoft SQL Server Function Exceptions

The Data Integration Service cannot push supported functions to Microsoft SQL Server sources under certain conditions.

The Data Integration Service processes transformation logic for Microsoft SQL Server sources when expressions contain supported functions with the following logic:

¨ IN includes the CaseFlag argument.

¨ INSTR includes more than three arguments.

¨ LTRIM includes more than one argument.

¨ RTRIM includes more than one argument.

¨ TO_BIGINT includes more than one argument.

¨ TO_INTEGER includes more than one argument.

Oracle Function Exceptions

The Data Integration Service cannot push supported functions to Oracle sources under certain conditions.

The Data Integration Service processes transformation logic for Oracle sources when expressions contain supported functions with the following logic:

¨ ADD_TO_DATE or GET_DATE_PART returns results with subsecond precision.

¨ ROUND rounds values to seconds or subseconds.

¨ SYSTIMESTAMP returns the date and time with microsecond precision.

¨ TRUNC truncates seconds or subseconds.

ODBC Function Exception

The Data Integration Service processes transformation logic for ODBC when the CaseFlag argument for the IN function is a number other than zero.

Note: When the ODBC connection object properties include a database-specific ODBC provider, the Data Integration Service considers the source to be the native source type.

Sybase ASE Function Exceptions

The Data Integration Service cannot push supported functions to Sybase ASE sources under certain conditions.

The Data Integration Service processes transformation logic for Sybase ASE sources when expressions contain supported functions with the following logic:

¨ IN includes the CaseFlag argument.

¨ INSTR includes more than two arguments.

¨ LTRIM includes more than one argument.

¨ RTRIM includes more than one argument.

¨ TO_BIGINT includes more than one argument.

¨ TO_INTEGER includes more than one argument.

¨ TRUNC(Numbers) includes more than one argument.


Operators

The following table summarizes the availability of Informatica operators by source type. In each column, an X indicates that the Data Integration Service can push the operator to the source.

Note: Nonrelational sources are IMS, VSAM, and sequential data sets on z/OS.

Operator | DB2 for LUW | DB2 for i5/OS or z/OS (*) | Microsoft SQL Server | Nonrelational (*) | ODBC | Oracle | SAP (*) | Sybase ASE

+, -, *  X X X X X X X

/ X X X X X X

% X X X X X

|| X X X X X

=, >, <, >=, <=  X X X X X X X X

<> X X X X X X X

!= X X X X X X X X

^= X X X X X X X

AND, OR  X X X X X X X X

NOT X X X X X X

(*) The Data Integration Service can push these operators to the source only when they are included in Filter transformation logic.

Comparing the Output of the Data Integration Service and Sources

The Data Integration Service and sources can produce different results when processing the same transformation logic. When the Data Integration Service pushes transformation logic to the source, the output of the transformation logic can be different.

Case sensitivity

The Data Integration Service and a database can treat case sensitivity differently. For example, the Data Integration Service uses case-sensitive queries and the database does not. A Filter transformation uses the following filter condition: IIF(col_varchar2 = ‘CA’, TRUE, FALSE). You need the database to return rows that match ‘CA.’ However, if you push this transformation logic to a database that is not case sensitive, it returns rows that match the values ‘Ca,’ ‘ca,’ ‘cA,’ and ‘CA.’
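A sketch of the query that might be pushed for this condition (the table name is hypothetical, and the exact SQL varies by database) is:

SELECT *
FROM   CUSTOMER_ADDR
WHERE  col_varchar2 = 'CA';

On a database that uses a case-insensitive collation, this WHERE clause matches 'Ca', 'ca', 'cA', and 'CA', so the pushed-down result set can differ from the result set that the Data Integration Service would produce on its own.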


Numeric values converted to character values

The Data Integration Service and a database can convert the same numeric value to a character value in different formats. The database might convert numeric values to an unacceptable character format. For example, a table contains the number 1234567890. When the Data Integration Service converts the number to a character value, it inserts the characters ‘1234567890.’ However, a database might convert the number to ‘1.2E9.’ The two sets of characters represent the same value.

Date formats for TO_CHAR and TO_DATE functions

The Data Integration Service uses the date format in the TO_CHAR or TO_DATE function when the Data Integration Service pushes the function to the database. Use the TO_DATE functions to compare date or time values. When you use TO_CHAR to compare date or time values, the database can add a space or leading zero to values such as a single-digit month, single-digit day, or single-digit hour. The database comparison results can be different from the results of the Data Integration Service when the database adds a space or a leading zero.

Precision

The Data Integration Service and a database can have different precision for particular datatypes. Transformation datatypes use a default numeric precision that can vary from the native datatypes. The results can vary if the database uses a different precision than the Data Integration Service.

SYSDATE or SYSTIMESTAMP function

When you use the SYSDATE or SYSTIMESTAMP function, the Data Integration Service returns the current date and time for the node that runs the service process. However, when you push the transformation logic to the database, the database returns the current date and time for the machine that hosts the database. If the time zone of the machine that hosts the database is not the same as the time zone of the machine that runs the Data Integration Service process, the results can vary.

If you push SYSTIMESTAMP to an IBM DB2 or a Sybase ASE database, and you specify the format for SYSTIMESTAMP, the database ignores the format and returns the complete time stamp.

LTRIM, RTRIM, or SOUNDEX function

When you push LTRIM, RTRIM, or SOUNDEX to a database, the database treats the argument (' ') as NULL, but the Data Integration Service treats the argument (' ') as spaces.

LAST_DAY function on Oracle source

When you push LAST_DAY to Oracle, Oracle returns the date up to the second. If the input date contains subseconds, Oracle trims the date to the second.


C H A P T E R 7

Mapplets

This chapter includes the following topics:

¨ Mapplets Overview, 90

¨ Mapplet Types, 90

¨ Mapplets and Rules, 91

¨ Mapplet Input and Output, 91

¨ Creating a Mapplet, 92

¨ Validating a Mapplet, 92

¨ Segments, 92

Mapplets Overview

A mapplet is a reusable object containing a set of transformations that you can use in multiple mappings. Use a mapplet in a mapping, or validate the mapplet as a rule.

Transformations in a mapplet can be reusable or non-reusable.

When you use a mapplet in a mapping, you use an instance of the mapplet. Any change made to the mapplet is inherited by all instances of the mapplet.

Mapplets can contain other mapplets. You can also use a mapplet more than once in a mapping or mapplet. You cannot have circular nesting of mapplets. For example, if mapplet A contains mapplet B, mapplet B cannot contain mapplet A.

Mapplet Types

The mapplet type is determined by the mapplet input and output.

You can create the following types of mapplet:

¨ Source. The mapplet contains a data source as input and an Output transformation as output.

¨ Target. The mapplet contains an Input transformation as input and a data source as output.

¨ Midstream. The mapplet contains an Input transformation and an Output transformation. It does not contain a data source for input or output.


Mapplets and Rules

A rule is business logic that defines conditions applied to source data when you run a profile. It is a midstream mapplet that you use in a profile.

A rule must meet the following requirements:

¨ It must contain an Input and Output transformation. You cannot use data sources in a rule.

¨ It can contain Expression transformations, Lookup transformations, and passive data quality transformations. It cannot contain any other type of transformation. For example, a rule cannot contain a Match transformation, as it is an active transformation.

¨ It does not specify cardinality between input groups.

Note: Rule functionality is not limited to profiling. You can add any mapplet that you validate as a rule to a profile in the Analyst tool. For example, you can evaluate postal address data quality by selecting a rule configured to validate postal addresses and adding it to a profile.

Mapplet Input and Output

To use a mapplet in a mapping, you must configure it for input and output.

A mapplet has the following input and output components:

¨ Mapplet input. You can pass data into a mapplet from data sources or Input transformations or both. If you validate the mapplet as a rule, you must pass data into the mapplet through an Input transformation. When you use an Input transformation, you connect it to a source or upstream transformation in the mapping.

¨ Mapplet output. You can pass data out of a mapplet from data sources or Output transformations or both. If you validate the mapplet as a rule, you must pass data from the mapplet through an Output transformation. When you use an Output transformation, you connect it to a target or downstream transformation in the mapping.

¨ Mapplet ports. You can see mapplet ports in the mapping canvas. Mapplet input ports and output ports originate from Input transformations and Output transformations. They do not originate from data sources.

Mapplet Input

Mapplet input can originate from a data source or from an Input transformation.

You can create multiple pipelines in a mapplet. Use multiple data sources or Input transformations. You can also use a combination of data sources and Input transformations.

Use one or more data sources to provide source data in the mapplet. When you use the mapplet in a mapping, it is the first object in the mapping pipeline and contains no input ports.

Use an Input transformation to receive input from the mapping. The Input transformation provides input ports so you can pass data through the mapplet. Each port in the Input transformation connected to another transformation in the mapplet becomes a mapplet input port. Input transformations can receive data from a single active source. Unconnected ports do not appear in the mapping canvas.

You can connect an Input transformation to multiple transformations in a mapplet. You can also connect one port in an Input transformation to multiple transformations in the mapplet.


Mapplet Output

Use a data source as output when you want to create a target mapplet. Use an Output transformation in a mapplet to pass data through the mapplet into a mapping.

Use one or more data sources to provide target data in the mapplet. When you use the mapplet in a mapping, it is the last object in the mapping pipeline and contains no output ports.

Use an Output transformation to pass output to a downstream transformation or target in a mapping. Each connected port in an Output transformation appears as a mapplet output port in a mapping. Each Output transformation in a mapplet appears as an output group. An output group can pass data to multiple pipelines in a mapping.

Creating a Mapplet

Create a mapplet to define a reusable object containing a set of transformations that you can use in multiple mappings.

1. Select a project or folder in the Object Explorer view.

2. Click File > New > Mapplet.

3. Enter a mapplet name.

4. Click Finish.

An empty mapplet appears in the editor.

5. Add mapplet inputs, outputs, and transformations.

Validating a Mapplet

Validate a mapplet before you add it to a mapping. You can also validate a mapplet as a rule to include it in a profile.

1. Right-click the mapplet canvas.

2. Select Validate As > Mapplet or Validate As > Rule.

The Validation Log displays mapplet error messages.

Segments

A segment consists of one or more objects in a mapping, mapplet, rule, or virtual stored procedure. A segment can include a source, target, transformation, or mapplet.

You can copy segments. Consider the following rules and guidelines when you copy a segment:

¨ You can copy segments across folders or projects.

¨ The Developer tool reuses dependencies when possible. Otherwise, it copies dependencies.


¨ The Developer tool reuses objects that you copy from a shared project.

¨ If a mapping, mapplet, rule, or virtual stored procedure includes parameters and you copy a transformation that refers to the parameter, the transformation in the target object uses a default value for the parameter.

¨ You cannot copy Input transformations and Output transformations.

¨ After you paste a segment, you cannot undo previous actions.

Copying a Segment

You can copy a segment when you want to reuse a portion of the mapping logic in another mapping, mapplet, rule, or virtual stored procedure.

1. Open the object that contains the segment you want to copy.

2. Select a segment by highlighting each object you want to copy.

Hold down the Ctrl key to select multiple objects. You can also select segments by dragging the pointer in a rectangle around objects in the editor.

3. Click Edit > Copy to copy the segment to the clipboard.

4. Open a target mapping, mapplet, rule, or virtual stored procedure.

5. Click Edit > Paste.


C H A P T E R 8

Object Import and Export

This chapter includes the following topics:

¨ Object Import and Export Overview, 94

¨ Import and Export Objects, 95

¨ Reference Table Import and Export, 96

¨ Object Export, 96

¨ Object Import, 97

Object Import and Export Overview

You can export multiple objects from a project to one XML file. When you import objects, you can choose individual objects in the XML file or all the objects in the XML file.

You can export objects to an XML file and then import objects from the XML file. When you export objects, the Developer tool creates an XML file that contains the metadata of the exported objects. Use the XML file to import the objects into a project or folder. You can import application archives into a repository. Application archive files contain deployed applications. You can also import and export objects through the infacmd command line program.

Export and import objects to accomplish the following tasks:

¨ Deploy metadata into production. After you test a mapping in a development repository, you can export it to an XML file and then import it from the XML file into a production repository.

¨ Archive metadata. You can export objects that you no longer need to an XML file before you remove them from the repository.

¨ Share metadata. You can share metadata with a third party. For example, you can send a mapping to someone else for testing or analysis.

¨ Copy metadata between repositories. You can copy objects between repositories that you cannot connect to from the same client. Export the object and transfer the XML file to the target machine. Then import the object from the XML file into the target repository. You can export and import objects between repositories with the same version. If the objects contain tags, the Developer tool automatically imports the tags into the repository.

You can use infacmd to generate a readable XML file from an export file. You can also edit the object names in the readable XML and update the export XML before you import the objects into a repository.


Import and Export Objects

You can import and export projects and objects in a project. You can also import and export application archive files in a repository.

When you export an object, the Developer tool also exports the dependent objects. A dependent object is an object that is used by another object. For example, a physical data object used as a mapping input is a dependent object of that mapping. When you import an object, the Developer tool imports all the dependent objects.

When you export or import objects in a project or folder, the Model Repository Service preserves the object hierarchy.

The following table lists objects and dependent objects that you can export:

Object Dependency

Application - SQL data services and their dependent objects

Project - Projects contain other objects, but they do not have dependent objects

Folder - Folders contain other objects, but they do not have dependent objects

Reference Tables - Reference tables do not have dependent objects

Physical data object model - Physical data object models do not have dependent objects

Logical data object model - Logical data objects; physical data objects; reusable transformations and their dependent objects; mapplets and their dependent objects

Transformation - Physical data objects; reference tables

Mapplet - Logical data objects; physical data objects; reusable transformations and their dependent objects

Mapping - Logical data objects; physical data objects; reusable transformations and their dependent objects; mapplets and their dependent objects

SQL data service - Logical data objects; physical data objects; reusable transformations and their dependent objects; mapplets and their dependent objects

Profile - Logical data objects; physical data objects

Scorecard - Profiles and their dependent objects

Web service - Operation mappings


Reference Table Import and Export

You can import and export reference tables through the Developer tool. The Model repository stores reference table metadata, and the staging database that is used by the Analyst Service stores the table data.

Before you import or export reference tables, verify the following prerequisites:

¨ A database client application for the staging database is installed on the Developer tool machine.

¨ The database connection settings on the client application are consistent with the staging database configured for the Analyst Service in the Administrator tool.

Object Export

When you export an object, the Developer tool creates an XML file that contains the metadata of the objects.

You can choose the objects to export. You must also choose to export all dependent objects. The Developer tool exports the objects and the dependent objects. The Developer tool exports the last saved version of the object. The Developer tool includes Cyclic Redundancy Checking Value (CRCVALUE) codes in the elements in the XML file. If you modify attributes in an element that contains a CRCVALUE code, you cannot import the object. If you want to modify the attributes, use the infacmd xrf command.

You can also export objects with the infacmd oie ExportObjects command.

Exporting Objects

You can export objects to an XML file to use in another project or folder.

1. Click File > Export.

2. Select Informatica > Export Object Metadata File.

3. Click Next.

4. Click Browse to select a project from which to export objects.

If you are exporting reference table data, complete the following fields:

Option Description

Reference data location Location where you want to save reference table data. Enter a path that the Data Integration Service can write to. The Developer tool saves the reference table data as one or more dictionary .dic files.

Code page Code page of the destination repository for the reference table data.

5. Click Next.

6. Select the objects to export.

7. Enter the export file name and location.

8. To view the dependent objects that the Export wizard exports with the objects you selected, click Next.

The Export wizard displays the dependent objects.

9. Click Finish.

The Developer tool exports the objects to the XML file.


Object Import

You can import a project or objects within a project from an export file. You can also import a project from an application archive file. You can import the objects and any dependent objects into a project or folder.

When you import objects, you can import a project or individual objects. Import a project when you want to reuse all objects in the project. Import individual objects when you want to reuse objects across projects. You cannot import objects from an export file that you created in a previous version.

When you import an object, the Developer tool lists all the dependent objects. You must add each dependent object to the target before you can import the object.

When you import objects, an object in the export file might have the same name as an object in the target project or folder. You can choose how you want to resolve naming conflicts.

You can also import objects with the infacmd oie ImportObjects command.

Importing Projects

You can import a project from an XML file into a folder. You can also import the contents of the project into a project in the target repository.

1. Click File > Import.

2. Select Informatica > Object Import File.

3. Click Next.

4. Click Browse and select the export file that you want to import.

5. Select the project or "<project name> Project Content" in the Source pane.

¨ If you select the project in the Source pane, select the folder in the Target pane where you want to import the project.

¨ If you select the project content in the Source pane, select the project to which you want to import the project contents in the Target pane.

6. Click Add to Target to add the project to the target folder.

Tip: You can also drag the project from the Source pane into the required folder in the Target pane.

7. Click Resolution to specify how to handle duplicate objects.

You can rename the imported object, replace the existing object with the imported object, or reuse the existing object. The Developer tool renames all the duplicate objects by default.

8. Click Next.

The Developer tool summarizes the objects to be imported. You can also specify the additional import settings in the Additional Import Settings pane.

9. Click Finish.

If you chose to rename the duplicate project, the Model Repository Service appends a number to the object name. You can rename the project after you import it.

Importing Objects

You can import objects from an XML file or application archive file. You import the objects and any dependent objects into a project.

1. Click File > Import.

2. Select Informatica > Object Import File (Advanced).


3. Click Next.

4. Click Browse to select the export file that you want to import.

5. Click Open.

6. Select the object in the Source pane that you want to import.

7. Select the project in the Target pane to which you want to import the object.

8. Click Add to Target to add the object to the target.

If you click Auto Match to Target, the Developer tool tries to match the descendants of the current source selection individually by name, type, and parent hierarchy in the target selection and adds the objects that match. If you want to import all the objects under a folder or a project, select the target folder or project and click Add Content to Target.

Tip: You can also drag the object from the Source pane into the required project in the Target pane. Press the Ctrl key while you drag to maintain the object hierarchy in source and target.

9. Click Resolution to specify how to handle duplicate objects.

You can rename the imported object, replace the existing object with the imported object, or reuse the existing object. The Developer tool renames all the duplicate objects by default.

10. Click Next.

The Developer tool summarizes the objects to be imported.

11. Map the domain connections to the connections from the import file in the Additional Import Settings pane. You can also select whether to overwrite existing tags on the objects.

12. Click Finish.

If you choose to rename the duplicate project, the Import wizard names the imported project as "<Original Name>_<number of the copy>." You can rename the project after you import it.

Importing Application Archives

You can import objects from an application archive file. You import the application and dependent objects into the repository.

1. Click File > Import.

The Import wizard appears.

2. Select Informatica > Application Archive.

3. Click Next.

4. Click Browse to select the application archive file.

The Developer tool lists the application archive file contents.

5. Select the repository into which you want to import the application.

6. Click Finish.

The Developer tool imports the application into the repository. If the Developer tool finds duplicate objects, it renames the imported objects.


C H A P T E R 9

Export to PowerCenter

This chapter includes the following topics:

¨ Export to PowerCenter Overview, 99

¨ PowerCenter Release Compatibility, 100

¨ Mapplet Export, 100

¨ Export to PowerCenter Options, 101

¨ Exporting an Object to PowerCenter, 102

¨ Export Restrictions, 103

¨ Rules and Guidelines for Exporting to PowerCenter, 104

¨ Troubleshooting Exporting to PowerCenter, 105

Export to PowerCenter Overview

You can export objects from the Developer tool to use in PowerCenter.

You can export the following objects:

¨ Mappings. Export mappings to PowerCenter mappings or mapplets.

¨ Mapplets. Export mapplets to PowerCenter mapplets.

¨ Logical data object read mappings. Export the logical data object read mappings within a logical data object model to PowerCenter mapplets. The export process ignores logical data object write mappings.

You export objects to a PowerCenter repository or to an XML file. Export objects to PowerCenter to take advantage of capabilities that are exclusive to PowerCenter such as partitioning, web services, and high availability.

When you export objects, you specify export options such as the PowerCenter release, how to convert mappings and mapplets, and whether to export reference tables. If you export objects to an XML file, PowerCenter users can import the file into the PowerCenter repository.

Example

A supermarket chain that uses PowerCenter 9.0 wants to create a product management tool to accomplish the following business requirements:

¨ Create a model of product data so that each store in the chain uses the same attributes to define the data.

¨ Standardize product data and remove invalid and duplicate entries.

¨ Generate a unique SKU for each product.


¨ Migrate the cleansed data to another platform.

¨ Ensure high performance of the migration process by performing data extraction, transformation, and loading in parallel processes.

¨ Ensure continuous operation if a hardware failure occurs.

The developers at the supermarket chain use the Developer tool to create mappings that standardize data, generate product SKUs, and define the flow of data between the existing and new platforms. They export the mappings to XML files. During export, they specify that the mappings be compatible with PowerCenter 9.0.

Developers import the mappings into PowerCenter and create the associated sessions and workflows. They set partition points at various transformations in the sessions to improve performance. They also configure the sessions for high availability to provide failover capability if a temporary network, hardware, or service failure occurs.

PowerCenter Release Compatibility

To verify that objects are compatible with a certain PowerCenter release, set the PowerCenter release compatibility level. The compatibility level applies to all mappings, mapplets, and logical data object models you can view in the Developer tool.

You can configure the Developer tool to validate against a particular release of PowerCenter, or you can configure it to skip validation for release compatibility. By default, the Developer tool does not validate objects against any release of PowerCenter.

Set the compatibility level to a PowerCenter release before you export objects to PowerCenter. If you set the compatibility level, the Developer tool performs two validation checks when you validate a mapping, mapplet, or logical data object model. The Developer tool first verifies that the object is valid in the Developer tool. If the object is valid, the Developer tool then verifies that the object is valid for export to the selected release of PowerCenter. You can view compatibility errors in the Validation Log view.

Setting the Compatibility Level

Set the compatibility level to validate mappings, mapplets, and logical data object models against a PowerCenter release. If you select none, the Developer tool skips release compatibility validation when you validate an object.

1. Click Edit > Compatibility Level.

2. Select the compatibility level.

The Developer tool places a dot next to the selected compatibility level in the menu. The compatibility level applies to all mappings, mapplets, and logical data object models you can view in the Developer tool.

Mapplet Export

When you export a mapplet or you export a mapping as a mapplet, the export process creates objects in the mapplet. The export process also renames some mapplet objects.

The export process might create the following mapplet objects in the export XML file:


Expression transformations

The export process creates an Expression transformation immediately downstream from each Input transformation and immediately upstream from each Output transformation in a mapplet. The export process names the Expression transformations as follows:

Expr_<InputOrOutputTransformationName>

The Expression transformations contain pass-through ports.

Output transformations

If you export a mapplet and convert targets to Output transformations, the export process creates an Output transformation for each target. The export process names the Output transformations as follows:

<MappletInstanceName>_<TargetName>

The export process renames the following mapplet objects in the export XML file:

Mapplet Input and Output transformations

The export process names mapplet Input and Output transformations as follows:

<TransformationName>_<InputOrOutputGroupName>

Mapplet ports

The export process renames mapplet ports as follows:

<PortName>_<GroupName>

Export to PowerCenter Options

When you export an object for use in PowerCenter, you must specify the export options.

The following table describes the export options:

Option Description

Project Project in the model repository from which to export objects.

Target release PowerCenter release number.

Export selected objects to file Exports objects to a PowerCenter XML file. If you select this option, specify the export XML file name and location.

Export selected objects to PowerCenter repository Exports objects to a PowerCenter repository. If you select this option, you must specify the following information for the PowerCenter repository:
- Host name. PowerCenter domain gateway host name.
- Port number. PowerCenter domain gateway HTTP port number.
- User name. Repository user name.
- Password. Password for repository user name.
- Security domain. LDAP security domain name, if one exists. Otherwise, enter "Native."
- Repository name. PowerCenter repository name.

Send to repository folder Exports objects to the specified folder in the PowerCenter repository.

Use control file Exports objects to the PowerCenter repository using the specified pmrep control file.


Convert exported mappings to PowerCenter mapplets Converts Developer tool mappings to PowerCenter mapplets. The Developer tool converts sources and targets in the mappings to Input and Output transformations in a PowerCenter mapplet.

Convert target mapplets Converts targets in mapplets to Output transformations in the PowerCenter mapplet. PowerCenter mapplets cannot contain targets. If you export mapplets that contain targets and you do not select this option, the export process fails.

Export reference data Exports any reference table data used by a transformation in an object you export.

Reference data location Location where you want to save reference table data. Enter a path that the Data Integration Service can write to. The Developer tool saves the reference table data as one or more dictionary .dic files.

Data service Data Integration Service on which the reference table staging database runs.

Code page Code page of the PowerCenter repository.

Exporting an Object to PowerCenter

When you export mappings, mapplets, or logical data object read mappings to PowerCenter, you can export the objects to a file or to the PowerCenter repository.

Before you export an object, set the compatibility level to the appropriate PowerCenter release. Validate the object to verify that it is compatible with the PowerCenter release.

1. Click File > Export.

The Export dialog box appears.

2. Select Informatica > PowerCenter.

3. Click Next.

The Export to PowerCenter dialog box appears.

4. Select the project.

5. Select the PowerCenter release.

6. Choose the export location, a PowerCenter import XML file or a PowerCenter repository.

7. If you export to a PowerCenter repository, select the folder in the PowerCenter repository or the pmrep control file that defines how to import objects into PowerCenter.

8. Specify the export options.

9. Click Next.

The Developer tool prompts you to select the objects to export.

10. Select the objects to export and click Finish.

The Developer tool exports the objects to the location you selected.

If you exported objects to a file, you can import objects from the XML file into the PowerCenter repository.

If you export reference data, copy the reference table files to the PowerCenter dictionary directory on the machine that hosts Informatica Services:


<PowerCenter Installation Directory>\services\<Informatica Developer Project Name>\<Informatica Developer Folder Name>

Export Restrictions

Some Developer tool objects are not valid in PowerCenter.

The following objects are not valid in PowerCenter:

Objects with long names

PowerCenter users cannot import a mapping, mapplet, or object within a mapping or mapplet if the object name exceeds 80 characters.

Mappings or mapplets that contain a Custom Data transformation

You cannot export mappings or mapplets that contain Custom Data transformations.

Mappings or mapplets that contain a Joiner transformation with certain join conditions

The Developer tool does not allow you to export mappings and mapplets that contain a Joiner transformation with a join condition that is not valid in PowerCenter. In PowerCenter, a user defines join conditions based on equality between the specified master and detail sources. In the Developer tool, you can define other join conditions. For example, you can define a join condition based on equality or inequality between the master and detail sources. You can define a join condition that contains transformation expressions. You can also define a join condition, such as 1 = 1, that causes the Joiner transformation to perform a cross-join.

These types of join conditions are not valid in PowerCenter. Therefore, you cannot export mappings or mapplets that contain Joiner transformations with these types of join conditions to PowerCenter.

Mappings or mapplets that contain a Lookup transformation with renamed ports

The PowerCenter Integration Service queries the lookup source based on the lookup ports in the transformation and a lookup condition. Therefore, the port names in the Lookup transformation must match the column names in the lookup source.

Mappings or mapplets that contain a Lookup transformation that returns all rows

The export process might fail if you export a mapping or mapplet with a Lookup transformation that returns all rows that match the lookup condition. The export process fails when you export the mapping or mapplet to PowerCenter 8.x. The Return all rows option was added to the Lookup transformation in PowerCenter 9.0. Therefore, the option is not valid in earlier versions of PowerCenter.

Mappings or mapplets that contain PowerExchange data objects

If you export a mapping that includes a PowerExchange data object, the Developer tool does not export the PowerExchange data object.

Mapplets that concatenate ports

The export process fails if you export a mapplet that contains a multigroup Input transformation and the ports in different input groups are connected to the same downstream transformation or transformation output group.

Nested mapplets with unconnected Lookup transformations

The export process fails if you export any type of mapping or mapplet that contains another mapplet with an unconnected Lookup transformation.


Nested mapplets with Update Strategy transformations when the mapplets are upstream from a Joiner transformation

Mappings and mapplets that contain an Update Strategy transformation upstream from a Joiner transformation are not valid in the Developer tool or in PowerCenter. Verify that mappings or mapplets to export do not contain an Update Strategy transformation in a nested mapplet upstream from a Joiner transformation.

Mappings with an SAP source

When you export a mapping with an SAP source, the Developer tool exports the mapping without the SAP source. When you import the mapping into the PowerCenter repository, the PowerCenter Client imports the mapping without the source. The output window displays a message indicating the mapping is not valid. You must manually create the SAP source in PowerCenter and add it to the mapping.

Rules and Guidelines for Exporting to PowerCenter

Due to differences between the Developer tool and PowerCenter, some Developer tool objects might not be compatible with PowerCenter.

Use the following rules and guidelines when you export objects to PowerCenter:

Verify the PowerCenter release.

When you export to PowerCenter 9.0.1, the Developer tool and PowerCenter must be running the same HotFix version. You cannot export mappings and mapplets to PowerCenter version 9.0.

Verify that object names are unique.

If you export an object to a PowerCenter repository, the export process replaces the PowerCenter object if it has the same name as an exported object.

Verify that the code pages are compatible.

The export process fails if the Developer tool and PowerCenter use code pages that are not compatible.

Verify precision mode.

By default, the Developer tool runs mappings and mapplets with high precision enabled and PowerCenter runs sessions with high precision disabled. If you run Developer tool mappings and PowerCenter sessions in different precision modes, they can produce different results. To avoid differences in results, run the objects in the same precision mode.

Copy reference data.

When you export mappings or mapplets with transformations that use reference tables, you must copy the reference tables to a directory where the PowerCenter Integration Service can access them. Copy the reference tables to the directory defined in the INFA_CONTENT environment variable. If INFA_CONTENT is not set, copy the reference tables to the following PowerCenter services directory:

$INFA_HOME\services\<Developer Tool Project Name>\<Developer Tool Folder Name>


Troubleshooting Exporting to PowerCenter

The export process fails when I export a mapplet that contains objects with long names.

When you export a mapplet or you export a mapping as a mapplet, the export process creates or renames some objects in the mapplet. The export process might create Expression or Output transformations in the export XML file. The export process also renames Input and Output transformations and mapplet ports.

To generate names for Expression transformations, the export process appends characters to Input and Output transformation names. If you export a mapplet and convert targets to Output transformations, the export process combines the mapplet instance name and target name to generate the Output transformation name. When the export process renames Input transformations, Output transformations, and mapplet ports, it appends group names to the object names.

If an existing object has a long name, the exported object name might exceed the 80 character object name limit in the export XML file or in the PowerCenter repository. When an object name exceeds 80 characters, the export process fails with an internal error.

If you export a mapplet, and the export process returns an internal error, check the names of the Input transformations, Output transformations, targets, and ports. If the names are long, shorten them.


C H A P T E R 1 0

Deployment

This chapter includes the following topics:

¨ Deployment Overview, 106

¨ Creating an Application, 107

¨ Deploying an Object to a Data Integration Service, 107

¨ Deploying an Object to a File, 108

¨ Updating an Application, 109

¨ Mapping Deployment Properties, 109

¨ Application Redeployment, 110

Deployment Overview

Deploy objects to make them accessible to end users. You can deploy physical data objects, logical data objects, data services, mappings, and applications.

Deploy objects to allow users to query the objects through a third-party client tool or run mappings at the command line. When you deploy an object, you isolate the object from changes in data structures. If you make changes to an object in the Developer tool after you deploy it, you must redeploy the application that contains the object for the changes to take effect.

You can deploy objects to a Data Integration Service or a network file system. When you deploy an application to a Data Integration Service, end users can connect to the application. Depending on the types of objects in the application, end users can then run queries against the objects, access web services, or run mappings. The end users must have the appropriate permissions in the Administrator tool to perform these tasks.

When you deploy an object to a network file system, the Developer tool creates an application archive file. Deploy an object to a network file system if you want to check the application into a version control system. You can also deploy an object to a file if your organization requires that administrators deploy objects to Data Integration Services. An administrator can deploy application archive files to Data Integration Services through the Administrator tool.

To deploy an object, perform one of the following actions:

Deploy an object directly.

Deploy an object directly when you want to make the object available to end users without modifying it. You can deploy a physical or logical data object, a data service, or a mapping directly. The Developer tool prompts you to create an application. The Developer tool adds the object to the application. If you redeploy the object to a Data Integration Service, you cannot update the application. The Developer tool creates an application with a different name. When you deploy a data object, the Developer tool also prompts you to create an SQL data service based on the data object.

Note: You cannot deploy a WSDL data object directly.

Deploy an application that contains the object.

Create an application when you want to deploy multiple objects at the same time. When you create an application, you select the objects to include in the application. If you redeploy an application to a Data Integration Service, you can update or replace the application.

Creating an Application

Create an application when you want to deploy multiple objects at the same time or if you want to be able to update or replace the application when it resides on the Data Integration Service. When you create an application, you select the objects to include in the application.

1. Select a project or folder in the Object Explorer view.

2. Click File > New > Application.

The New Application dialog box appears.

3. Enter a name for the application.

4. Click Browse to select the application location.

You must create the application in a project or a folder.

5. Click Next.

The Developer tool prompts you for the objects to include in the application.

6. Click Add.

The Add Objects dialog box appears.

7. Select one or more data services, mappings, or reference tables and click OK.

The Developer tool lists the objects you select in the New Application dialog box.

8. If the application contains mappings, choose whether to override the default mapping configuration when you deploy the application. If you select this option, choose a mapping configuration.

The Developer tool sets the mapping deployment properties for the application to the same values as the settings in the mapping configuration.

9. Click Finish.

The Developer tool adds the application to the project or folder.

After you create an application, you must deploy the application so end users can query objects, access web services, or run mappings.

Deploying an Object to a Data Integration Service

Deploy an object to a Data Integration Service so end users can query the object through a JDBC or ODBC client tool, access web services, or run mappings from the command line.


1. Right-click an object in the Object Explorer view and click Deploy.

The Deploy dialog box appears.

2. Select Deploy to Service.

3. Click Browse to select the domain.

The Choose Domain dialog box appears.

4. Select a domain and click OK.

The Developer tool lists the Data Integration Services associated with the domain in the Available Services section of the Deploy Application dialog box.

5. Select the Data Integration Services to which you want to deploy the application. Click Next.

6. Enter an application name.

7. If you are deploying a data object, click Next and enter an SQL data service name. If you are deploying a data service, click Finish.

8. Click Next and add virtual tables to the SQL data service.

By default, the Developer tool creates one virtual table based on the data object you deploy.

9. Click Finish.

The Developer tool deploys the application to the Data Integration Services.

Deploying an Object to a File

Deploy an object to an application archive file if you want to check the application into version control or if your organization requires that administrators deploy objects to Data Integration Services.

1. Right-click the application in the Object Explorer view and click Deploy.

The Deploy dialog box appears.

2. Select Deploy to File System.

3. Click Browse to select the directory.

The Choose a Directory dialog box appears.

4. Select the directory and click OK. Then, click Next.

5. Enter an application name.

6. If you are deploying a data object, click Next and enter an SQL data service name. If you are deploying a data service, click Finish.

7. Click Next and add virtual tables to the SQL data service.

By default, the Developer tool creates one virtual table based on the data object you deploy.

8. Click Finish.

The Developer tool deploys the application to an application archive file.

Before end users can access the application, you must deploy the application to a Data Integration Service. Or, an administrator must deploy the application to a Data Integration Service through the Administrator tool.


Updating an Application

Update an application when you want to add objects to an application, remove objects from an application, or update mapping deployment properties.

1. Open the application you want to update.

2. To add or remove objects, click the Overview view.

3. To add objects to the application, click Add.

The Developer tool prompts you to choose the data services, mappings, or reference tables to add to the application.

4. To remove an object from the application, select the object, and click Remove.

5. To update mapping deployment properties, click the Advanced view and change the properties.

6. Save the application.

Redeploy the application if you want end users to be able to access the updated application.

Mapping Deployment Properties

When you update an application that contains a mapping, you can set the deployment properties that the Data Integration Service uses when end users run the mapping.

Set mapping deployment properties on the Advanced view of the application.

You can set the following properties:

Property Description

Default date time format: Date/time format the Data Integration Service uses when the mapping converts strings to dates. Default is MM/DD/YYYY HH24:MI:SS.

Override tracing level: Overrides the tracing level for each transformation in the mapping. The tracing level determines the amount of information the Data Integration Service sends to the mapping log files. Choose one of the following tracing levels:
- None. The Data Integration Service does not override the tracing level that you set for each transformation.
- Terse. The Data Integration Service logs initialization information, error messages, and notification of rejected data.
- Normal. The Data Integration Service logs initialization and status information, errors encountered, and skipped rows due to transformation row errors. It summarizes mapping results, but not at the level of individual rows.
- Verbose Initialization. In addition to normal tracing, the Data Integration Service logs additional initialization details, names of index and data files used, and detailed transformation statistics.
- Verbose Data. In addition to verbose initialization tracing, the Data Integration Service logs each row that passes into the mapping. The Data Integration Service also notes where it truncates string data to fit the precision of a column and provides detailed transformation statistics. The Data Integration Service writes row data for all rows in a block when it processes a transformation.
Default is None.


Property Description

Sort order: Order in which the Data Integration Service sorts character data in the mapping. Default is Binary.

Optimizer level: Controls the optimization methods that the Data Integration Service applies to a mapping as follows:
- None. The Data Integration Service does not optimize the mapping.
- Minimal. The Data Integration Service applies the early projection optimization method to the mapping.
- Normal. The Data Integration Service applies the early projection, early selection, and predicate optimization methods to the mapping.
- Full. The Data Integration Service applies the early projection, early selection, predicate optimization, and semi-join optimization methods to the mapping.
Default is Normal.

High precision: Runs the mapping with high precision. High precision data values have greater accuracy. Enable high precision if the mapping produces large numeric values, for example, values with precision of more than 15 digits, and you require accurate values. Enabling high precision prevents precision loss in large numeric values. Default is enabled.

Application Redeployment

When you change an application or change an object in the application and you want end users to access the latest version of the application, you must deploy the application again.

When you change an application or its contents and you deploy the application to the same Data Integration Service, the Data Integration Service replaces the objects and the mapping deployment properties in the application. You can also preserve or reset the SQL data service and virtual table properties for the application in the Administrator tool.

The Developer tool gives you the following choices:

¨ Update. If the application contains an SQL data service and an administrator changed the SQL data service, virtual table, or mapping deployment properties in the Administrator tool, the Data Integration Service preserves the properties in the Administrator tool.

¨ Replace. If the application contains an SQL data service and an administrator changed the SQL data service, virtual table, or mapping deployment properties in the Administrator tool, the Data Integration Service resets the properties in the Administrator tool to the default values.

When you change an application and deploy it to a network file system, the Developer tool allows you to replace the application archive file or cancel the deployment. If you replace the application archive file, the Developer tool replaces the objects in the application and the mapping deployment properties.

Redeploying an Application

Redeploy an application to a Data Integration Service when you want to update or replace the application.

1. Right-click an application in the Object Explorer view and click Deploy.

The Deploy dialog box appears.


2. Select Deploy to Service.

3. Click Browse to select the domain.

The Choose Domain dialog box appears.

4. Select a domain and click OK.

The Developer tool lists the Data Integration Services associated with the domain in the Available Services section of the Deploy Application dialog box.

5. Select the Data Integration Services to which you want to deploy the application.

6. If the Data Integration Service already contains the deployed application, select whether to update or replace the application in the Action column.

7. Click Finish.


C H A P T E R 1 1

Parameters and Parameter Files

This chapter includes the following topics:

¨ Parameters and Parameter Files Overview, 112

¨ Parameters, 112

¨ Parameter Files, 115

Parameters and Parameter Files Overview

Parameters and parameter files allow you to define mapping values and update those values each time you run a mapping. The Data Integration Service applies parameter values when you run a mapping from the command line and specify a parameter file.

Create parameters so you can rerun a mapping with different relational connection, flat file, or reference table values. You define the parameter values in a parameter file. When you run the mapping from the command line and specify a parameter file, the Data Integration Service uses the parameter values defined in the parameter file.

To run mappings with different parameter values, perform the following tasks:

1. Create a parameter and assign it a default value.

2. Apply the parameter to a data object or to a transformation in the mapping.

3. Add the mapping to an application and deploy the application.

4. Create a parameter file that contains the parameter value.

5. Run the mapping from the command line with the parameter file.

For example, you create a mapping that processes customer orders. The mapping reads customer information from a relational table that contains customer data for one country. You want to use the mapping for customers in the United States, Canada, and Mexico. Create a parameter that represents the connection to the customers table. Create three parameter files that set the connection name to the U.S. customers table, the Canadian customers table, and the Mexican customers table. Run the mapping from the command line, using a different parameter file for each mapping run.
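
A minimal sketch of one such parameter file follows. The project, mapping, parameter, and connection names are hypothetical placeholders; the element hierarchy is described in the Parameter Files section of this chapter:

<?xml version="1.0"?>
<root xmlns="http://www.informatica.com/Parameterization/1.0">
    <project name="Orders_Project">
        <mapping name="m_ProcessOrders">
            <!-- Resolve the connection parameter to the U.S. customers table for this run. -->
            <parameter name="Customer_Connection">US_Customers_Conn</parameter>
        </mapping>
    </project>
</root>

The parameter files for the Canadian and Mexican customers tables would differ only in the connection name assigned to the parameter.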

Parameters

Parameters represent values that change between mapping runs. You can create parameters that represent relational connections, flat file names, flat file directories, reference table names, and reference table directories.


Create parameters so you can rerun a mapping with different values. For example, create a mapping parameter that represents a reference table name if you want to run a mapping with different reference tables. All parameters in the Developer tool are user-defined.

You can create the following types of parameters:

¨ Connection. Represents a relational connection. You cannot create connection parameters for nonrelational database or SAP physical data objects.

¨ String. Represents a flat file name, flat file directory, reference table name, or reference table directory.

When you create a parameter, you enter the parameter name and optional description, select the parameter type, and enter the default value. Each parameter must have a default value. When you run a mapping from the command line with a parameter file, the Data Integration Service resolves all parameters to the values set in the parameter file.

The Data Integration Service resolves parameters to the default values in the following circumstances:

¨ You run a mapping or preview mapping results within the Developer tool.

¨ You query an SQL data service that uses a data source that contains parameters.

¨ You run a mapping from the command line without a parameter file.

¨ You copy a mapping fragment from a mapping that has parameters defined and some of the transformations in the mapping use the parameters. The Developer tool does not copy the parameters to the target mapping.

¨ You export a mapping or mapplet for use in PowerCenter.

Where to Create Parameters

Create parameters to define values that change between mapping runs. Create connection parameters to define connections. Create string parameters to define flat file and reference table names and file paths.

The following table lists the objects in which you can create parameters:

Object - Parameter Type - Scope

Flat file data objects - String - You can use the parameter in the data object.

Customized data objects (reusable) - Connection - You can use the parameter in the customized data object.

Mappings - Connection, String - You can use the parameter in any nonreusable data object or transformation in the mapping that accepts parameters.

Mapplets - Connection, String - You can use the parameter in any nonreusable data object or transformation in the mapplet that accepts parameters.

Case Converter transformation (reusable) - String - You can use the parameter in the transformation.

Labeler transformation (reusable) - String - You can use the parameter in the transformation.

Parser transformation (reusable) - String - You can use the parameter in the transformation.

Standardizer transformation (reusable) - String - You can use the parameter in the transformation.


Where to Assign Parameters

Assign a parameter to a field when you want the Data Integration Service to replace the parameter with the value defined in the parameter file.

The following table lists the objects and fields where you can assign parameters:

Object - Field

Flat file data objects - Source file name, Source file directory, Output file name, Output file directory

Customized data objects - Connection

Read transformation created from related relational data objects - Connection

Case Converter transformation (reusable and nonreusable) - Reference table

Labeler transformation (reusable and nonreusable) - Reference table

Lookup transformation (nonreusable) - Connection

Parser transformation (reusable and nonreusable) - Reference table

Standardizer transformation (reusable and nonreusable) - Reference table

Creating a Parameter

Create a parameter to represent a value that changes between mapping runs.

1. Open the physical data object, mapping, or transformation where you want to create a parameter.

2. Click the Parameters view.

3. Click Add.

The Add Parameter dialog box appears.

4. Enter the parameter name.

5. Optionally, enter a parameter description.

6. Select the parameter type.

Select Connection to create a connection parameter. Select String to create a file name, file path, reference table name, or reference table path parameter.

7. Enter a default value for the parameter.

For connection parameters, select a connection. For string parameters, enter a file name or file path.

8. Click OK.

The Developer tool adds the parameter to the list of parameters.

Assigning a Parameter

Assign a parameter to a field so that when you run a mapping from the command line, the Data Integration Service replaces the parameter with the value defined in the parameter file.


1. Open the field in which you want to assign a parameter.

2. Click Assign Parameter.

The Assign Parameter dialog box appears.

3. Select the parameter.

4. Click OK.

Parameter Files

A parameter file is an XML file that lists parameters and their assigned values. The parameter values define properties for a data object, transformation, mapping, or mapplet. The Data Integration Service applies these values when you run a mapping from the command line and specify a parameter file.

Parameter files provide you with the flexibility to change parameter values each time you run a mapping. You can define parameters for multiple mappings in a single parameter file. You can also create multiple parameter files and use a different file each time you run a mapping. The Data Integration Service reads the parameter file at the start of the mapping run to resolve the parameters.

To run a mapping with a parameter file, use the infacmd ms RunMapping command. The -pf argument specifies the parameter file name.
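
For example, the following command sketch runs a deployed mapping with a parameter file. The domain, service, user, application, mapping, and file names are hypothetical placeholders; the connection options mirror the ListMappingParams example shown later in this chapter:

infacmd ms RunMapping -dn MyDomain -sn MyDataIntSvs -un MyUser -pd MyPassword -a MyApplication -m MyMapping -pf "MyParamFile.xml"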

The machine from which you run the mapping must have access to the parameter file. The Data Integration Service fails the mapping if you run it with a parameter file and any of the following circumstances are true:

¨ The Data Integration Service cannot access the parameter file.

¨ The parameter file is not valid or does not exist.

¨ Objects of the same type exist in the same project or folder, have the same name, and use parameters. For example, a folder contains Labeler transformation "T1" and Standardizer transformation "T1." If both transformations use parameters, the Data Integration Service fails the mapping when you run it with a parameter file. If the objects are in different folders, or if one object does not use parameters, the Data Integration Service does not fail the mapping.

Parameter File Structure

A parameter file is an XML file that contains at least one parameter and its assigned value.

The Data Integration Service uses the hierarchy defined in the parameter file to identify parameters and their defined values. The hierarchy identifies the physical data object or the transformation that uses the parameter.

You define parameter values within the following top-level elements:

¨ Application/mapping/project elements. When you define a parameter within the application/mapping/project elements, the Data Integration Service applies the parameter value when you run the specified mapping in the application. For example, you want the Data Integration Service to apply a parameter value when you run mapping "MyMapping" in deployed application "MyApp." You do not want to use the parameter value when you run a mapping in any other application or when you run another mapping in "MyApp." Define the parameter within the following elements:

<application name="MyApp">
    <mapping name="MyMapping">
        <project name="MyProject">
            <!-- Define the parameter here. -->
        </project>
    </mapping>
</application>

¨ Project element. When you omit the application/mapping/project element and define a parameter within a project top-level element, the Data Integration Service applies the parameter value when you run any mapping that has no application/mapping/project element defined in the parameter file.

The Data Integration Service searches for parameter values in the following order:

1. The value specified within an application element.

2. The value specified within a project element.

3. The parameter default value.

Use the infacmd ms ListMappingParams command to list the parameters used in a mapping with the default values. You can use the output of this command as a parameter file template.

Observe the following rules when you create a parameter file:

¨ Parameter values cannot be empty. For example, the Data Integration Service fails the mapping run if the parameter file contains the following entry:

<parameter name="Param1"> </parameter>

¨ Within an element, artifact names are not case-sensitive. Therefore, the Data Integration Service interprets <application name="App1"> and <application name="APP1"> as the same application.

The following example shows a sample parameter file:

<?xml version="1.0"?>
<root description="Sample Parameter File"
      xmlns="http://www.informatica.com/Parameterization/1.0"
      xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
    <!-- The Data Integration Service uses this section only when you run mapping "Map1" or "Map2"
         in deployed application "App1." This section assigns values to parameters created in
         mappings "Map1" and "Map2." -->
    <application name="App1">
        <mapping name="Map1">
            <project name="Project1">
                <mapping name="Map1">
                    <parameter name="MAP1_PARAM1">MAP1_PARAM1_VAL</parameter>
                    <parameter name="MAP1_PARAM2">MAP1_PARAM2_VAL</parameter>
                </mapping>
            </project>
        </mapping>
        <mapping name="Map2">
            <project name="Project1">
                <mapping name="Map2">
                    <parameter name="MAP2_PARAM1">MAP2_PARAM1_VAL</parameter>
                    <parameter name="MAP2_PARAM2">MAP2_PARAM2_VAL</parameter>
                </mapping>
            </project>
        </mapping>
    </application>
    <!-- The Data Integration Service uses this section only when you run mapping "Map1" in deployed
         application "App2." This section assigns values to parameters created in the following objects:
         * Data source "DS1" in mapping "Map1"
         * Mapping "Map1" -->
    <application name="App2">
        <mapping name="Map1">
            <project name="Project1">
                <dataSource name="DS1">
                    <parameter name="PROJ1_DS1">PROJ1_DS1_APP2_MAP1_VAL</parameter>
                    <parameter name="PROJ1_DS1">PROJ1_DS1_APP2_MAP1_VAL</parameter>
                </dataSource>
                <mapping name="Map1">
                    <parameter name="MAP1_PARAM2">MAP1_PARAM2_VAL</parameter>
                </mapping>
            </project>
        </mapping>
    </application>
    <!-- The Data Integration Service uses this section when you run any mapping other than "Map1"
         in application "App1," "Map2" in application "App1," or "Map1" in application "App2."
         This section assigns values to parameters created in the following objects:
         * Reusable data source "DS1"
         * Mapplet "DS1" -->
    <project name="Project1">
        <dataSource name="DS1">
            <parameter name="PROJ1_DS1">PROJ1_DS1_VAL</parameter>
            <parameter name="PROJ1_DS1_PARAM1">PROJ1_DS1_PARAM1_VAL</parameter>
        </dataSource>
        <mapplet name="DS1">
            <parameter name="PROJ1_DS1">PROJ1_DS1_VAL</parameter>
            <parameter name="PROJ1_DS1_PARAM1">PROJ1_DS1_PARAM1_VAL</parameter>
        </mapplet>
    </project>
    <!-- The Data Integration Service uses this section when you run any mapping other than "Map1"
         in application "App1," "Map2" in application "App1," or "Map1" in application "App2."
         This section assigns values to parameters created in the following objects:
         * Reusable transformation "TX2"
         * Mapplet "MPLT1" in folder "Folder2"
         * Mapplet "RULE1" in nested folder "Folder2_1_1" -->
    <project name="Project2">
        <transformation name="TX2">
            <parameter name="RTM_PATH">Project1\Folder1\RTM1</parameter>
        </transformation>
        <folder name="Folder2">
            <mapplet name="MPLT1">
                <parameter name="PROJ2_FOLD2_MPLT1">PROJ2_FOLD2_MPLT1_VAL</parameter>
            </mapplet>
            <folder name="Folder2_1">
                <folder name="Folder2_1_1">
                    <mapplet name="RULE1">
                        <parameter name="PROJ2_RULE1">PROJ2_RULE1_VAL</parameter>
                    </mapplet>
                </folder>
            </folder>
        </folder>
    </project>
</root>

Parameter File Schema Definition

A parameter file must conform to the structure of the parameter file XML schema definition (XSD). If the parameter file does not conform to the schema definition, the Data Integration Service fails the mapping run.

The parameter file XML schema definition appears in the following directories:

¨ On the machine that hosts the Developer tool: <Informatica Installation Directory>\clients\DeveloperClient\infacmd\plugins\ms\parameter_file_schema_1_0.xsd

¨ On the machine that hosts Informatica Services: <Informatica Installation Directory>\isp\bin\plugins\ms\parameter_file_schema_1_0.xsd
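
If you want to check a hand-edited parameter file against the schema before you run the mapping, you can use a generic XML validator. The following sketch assumes that the xmllint utility is available on the machine and that the file names are hypothetical placeholders:

xmllint --noout --schema parameter_file_schema_1_0.xsd MyParamFile.xml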

The following example shows the parameter file XML schema definition:

<?xml version="1.0"?>
<schema xmlns="http://www.w3.org/2001/XMLSchema"
        targetNamespace="http://www.informatica.com/Parameterization/1.0"
        xmlns:pf="http://www.informatica.com/Parameterization/1.0"
        elementFormDefault="qualified">

    <simpleType name="nameType">
        <restriction base="string">
            <minLength value="1"/>
        </restriction>
    </simpleType>

    <complexType name="parameterType">
        <simpleContent>
            <extension base="string">
                <attribute name="name" type="pf:nameType" use="required"/>
            </extension>
        </simpleContent>
    </complexType>

    <complexType name="designObjectType" abstract="true">
        <sequence>
            <element name="parameter" type="pf:parameterType" minOccurs="1" maxOccurs="unbounded"/>
        </sequence>
        <attribute name="name" type="pf:nameType" use="required"/>
    </complexType>

    <complexType name="dataSourceType">
        <complexContent>
            <extension base="pf:designObjectType"/>
        </complexContent>
    </complexType>

    <complexType name="mappletType">
        <complexContent>
            <extension base="pf:designObjectType"/>
        </complexContent>
    </complexType>

    <complexType name="transformationType">
        <complexContent>
            <extension base="pf:designObjectType"/>
        </complexContent>
    </complexType>

    <complexType name="mappingType">
        <complexContent>
            <extension base="pf:designObjectType"/>
        </complexContent>
    </complexType>

    <complexType name="deployedObjectType" abstract="true">
        <sequence>
            <element name="project" type="pf:designContainerType" minOccurs="1" maxOccurs="unbounded"/>
        </sequence>
        <attribute name="name" type="pf:nameType" use="required"/>
    </complexType>

    <complexType name="deployedMappingType">
        <complexContent>
            <extension base="pf:deployedObjectType"/>
        </complexContent>
    </complexType>

    <complexType name="containerType" abstract="true">
        <attribute name="name" type="pf:nameType" use="required"/>
    </complexType>

    <complexType name="designContainerType">
        <complexContent>
            <extension base="pf:containerType">
                <choice minOccurs="1" maxOccurs="unbounded">
                    <element name="dataSource" type="pf:dataSourceType"/>
                    <element name="mapplet" type="pf:mappletType"/>
                    <element name="transformation" type="pf:transformationType"/>
                    <element name="mapping" type="pf:mappingType"/>
                    <element name="folder" type="pf:designContainerType"/>
                </choice>
            </extension>
        </complexContent>
    </complexType>

    <complexType name="applicationContainerType">
        <complexContent>
            <extension base="pf:containerType">
                <sequence>
                    <element name="mapping" type="pf:deployedMappingType" minOccurs="1" maxOccurs="unbounded"/>
                </sequence>
            </extension>
        </complexContent>
    </complexType>

    <element name="root">
        <complexType>
            <choice minOccurs="1" maxOccurs="unbounded">
                <element name="application" type="pf:applicationContainerType"/>
                <element name="project" type="pf:designContainerType"/>
            </choice>
            <attribute name="description" type="string" use="optional"/>
        </complexType>
    </element>
</schema>

Creating a Parameter File

The infacmd ms ListMappingParams command lists the parameters used in a mapping and the default value for each parameter. Use the output of this command to create a parameter file.

1. Run the infacmd ms ListMappingParams command to list all parameters used in a mapping and the default value for each parameter.

The -o argument sends command output to an XML file.

For example, the following command lists the parameters in mapping MyMapping and writes them to the file "MyOutputFile.xml":

infacmd ms ListMappingParams -dn MyDomain -sn MyDataIntSvs -un MyUser -pd MyPassword -a MyApplication -m MyMapping -o "MyOutputFile.xml"

The Data Integration Service lists all parameters in the mapping with their default values.

2. If you did not specify the -o argument, copy the command output to an XML file and save the file.

3. Edit the XML file and replace the parameter default values with the values you want to use when you run the mapping, as shown in the sketch after these steps.

4. Save the XML file.
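
As an illustration of step 3, a single parameter element in the generated template might change as follows. The parameter name and connection values are hypothetical placeholders:

<!-- Template generated by ListMappingParams: the assigned value is the parameter default. -->
<parameter name="Customer_Connection">Default_Conn</parameter>

<!-- After editing for a specific run: -->
<parameter name="Customer_Connection">US_Customers_Conn</parameter>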

Run the mapping with the infacmd ms RunMapping command. Use the -pf argument to specify the parameter file name.


C H A P T E R 1 2

Tags

This chapter includes the following topics:

¨ Tags Overview, 120

¨ Creating a Tag, 120

¨ Assigning a Tag, 121

¨ Viewing Tags, 121

Tags Overview

A tag is metadata that defines an object in the Model repository based on business usage. Create tags to group objects according to their business usage.

After you create a tag, you can associate the tag with an object. You can associate a tag with multiple objects. You can use a tag to search for objects associated with the tag in the Model repository. The Developer tool displays a glossary of all tags.

Note: Tags associated with an object in the Developer tool appear as tags for the same objects in the Analyst tool.

Creating a Tag

Create a tag to add metadata that defines an object based on business usage. Assign the tag to an object to associate the object with this metadata definition.

1. To add tags at the Model Repository Service level, click Window > Preferences.

The Preferences dialog box appears.

2. Click Tags on the left pane.

The current tags appear on the right pane.

3. Select the Model Repository Service icon, and click Add.

4. Enter a name. Optionally, add a description.

5. Click OK.

6. To add a tag to an object, browse the Object Explorer and find the object you need to tag, such as a profile or a physical data object.

7. On the right pane, click the Tags tab.


The current tags attached to the object appear below the tab.

8. Click Edit.

The Assign Tags for Object dialog box appears.

9. Click the New icon.

10. Enter a name and an optional description.

11. Click OK.

Assigning a Tag

After you have created a tag, assign it to an object so that you can organize objects into logical groups based on their business usage.

1. Select an object in the Object Explorer.

2. On the right pane, find the Tags tab, and click Edit.

The Assign Tags for Object dialog box appears, listing the current tags and all the columns associated with the object.

3. In the Available Tags section, select a tag.

4. In the Assign Tags section, select a field you want to associate the tag with.

5. Click Assign.

The tag name associated with the field appears under the Tags column in the Assign Tags section.

6. To remove the tag from an object, select the tag in the Available Tags section and its object in the Assign Tags section.

7. Click Remove.

Viewing Tags

The Developer tool displays all tags associated with an object under the Tags tab.

1. Select an object in the Object Explorer.

2. Select Window > Show View > Tags to display the Tags tab.

All fields and their associated tags appear on the Tags tab.

3. To view all the tags defined in the Model Repository, click Window > Preferences.

The Preferences dialog box appears.

4. On the left pane, click Informatica > Tags.

All tags in the Model Repository appear on the right pane.


C H A P T E R 1 3

Viewing Data

This chapter includes the following topics:

¨ Viewing Data Overview, 122

¨ Selecting a Default Data Integration Service, 122

¨ Configurations, 123

¨ Exporting Data, 127

¨ Logs, 128

¨ Monitoring Jobs from the Developer Tool, 128

Viewing Data Overview

You can run a mapping, run a profile, preview data, run an SQL query, or generate a SOAP request.

You can run mappings from the command line, from the Run dialog box, or from the Data Viewer view. You can run a profile, preview data, run an SQL query, and generate a SOAP request from the Data Viewer view.

Before you can view data, you need to select a default Data Integration Service. You can also add other Data Integration Services to use when you view data.

You can create configurations to control settings that the Developer tool applies when you run a mapping or preview data.

When you view data in the Data Viewer view, you can export the data to a file. You can also access logs that show log events.

You can monitor applications and jobs from the Progress view.

Selecting a Default Data Integration Service

The Data Integration Service performs data integration tasks in the Developer tool. You can select any Data Integration Service that is available in the domain. Select a default Data Integration Service. You can override the default Data Integration Service when you run a mapping or preview data.

Add a domain before you select a Data Integration Service.

1. Click Window > Preferences.

The Preferences dialog box appears.


2. Select Informatica > Data Integration Services.

3. Expand the domain.

4. Select a Data Integration Service.

5. Click Set as Default.

6. Click OK.

Configurations

A configuration is a group of settings that the Developer tool applies when you run a mapping or preview output.

A configuration controls settings such as the default Data Integration Service, number of rows to read from a source, default date/time format, and optimizer level. The configurations that you create apply to your installation of the Developer tool.

You can create the following configurations:

¨ Data viewer configurations. Control the settings the Developer tool applies when you preview output in the Data Viewer view.

¨ Mapping configurations. Control the settings the Developer tool applies when you run mappings through the Run dialog box or from the command line.

Data Viewer Configurations

Data viewer configurations control the settings that the Developer tool applies when you preview output in the Data Viewer view.

You can select a data viewer configuration when you preview output for the following objects:

¨ Custom data objects

¨ Logical data objects

¨ Logical data object read mappings

¨ Logical data object write mappings

¨ Mappings

¨ Mapplets

¨ Operation mappings

¨ Physical data objects

¨ Virtual stored procedures

¨ Virtual tables

¨ Virtual table mappings

Creating a Data Viewer Configuration

Create a data viewer configuration to control the settings the Developer tool applies when you preview output in the Data Viewer view.

1. Click Run > Open Run Dialog.

The Run dialog box appears.


2. Click Data Viewer Configuration.

3. Click the New button.

4. Enter a name for the data viewer configuration.

5. Configure the data viewer configuration properties.

6. Click Apply.

7. Click Close.

The Developer tool creates the data viewer configuration.

Mapping Configurations

Mapping configurations control the mapping deployment properties that the Developer tool uses when you run a mapping through the Run dialog box or from the command line.

To apply a mapping configuration to a mapping that you run through the Developer tool, you must run the mapping through the Run dialog box. If you run the mapping through the Run menu or mapping editor, the Developer tool runs the mapping with the default mapping deployment properties.

To apply mapping deployment properties to a mapping that you run from the command line, select the mapping configuration when you add the mapping to an application. The mapping configuration that you select applies to all mappings in the application.

You can change the mapping deployment properties when you edit the application. An administrator can also change the mapping deployment properties through the Administrator tool. You must redeploy the application for the changes to take effect.

Creating a Mapping Configuration

Create a mapping configuration to control the mapping deployment properties that the Developer tool uses when you run mappings through the Run dialog box or from the command line.

1. Click Run > Open Run Dialog.

The Run dialog box appears.

2. Click Mapping Configuration.

3. Click the New button.

4. Enter a name for the mapping configuration.

5. Configure the mapping configuration properties.

6. Click Apply.

7. Click Close.

The Developer tool creates the mapping configuration.

Updating the Default Configuration Properties

You can update the default data viewer and mapping configuration properties.

1. Click Window > Preferences.

The Preferences dialog box appears.

2. Click Informatica > Run Configurations.

3. Select the Data Viewer or Mapping configuration.


4. Configure the data viewer or mapping configuration properties.

5. Click Apply.

6. Click OK.

The Developer tool updates the default configuration properties.

Configuration Properties

The Developer tool applies configuration properties when you preview output or you run mappings. Set configuration properties for the Data Viewer view or mappings in the Run dialog box.

Data Integration Service Properties

The Developer tool displays the Data Integration Service tab for data viewer and mapping configurations.

The following table displays the properties that you configure for the Data Integration Service:

Property Description

Use default Data Integration Service: Uses the default Data Integration Service to run the mapping. Default is enabled.

Data Integration Service: Specifies the Data Integration Service that runs the mapping if you do not use the default Data Integration Service.

Source Properties

The Developer tool displays the Source tab for data viewer and mapping configurations.

The following table displays the properties that you configure for sources:

Property Description

Read all rows: Reads all rows from the source. Default is enabled.

Read up to how many rows: Specifies the maximum number of rows to read from the source if you do not read all rows. Note: If you enable this option for a mapping that writes to a customized data object, the Data Integration Service does not truncate the target table before it writes to the target. Default is 1000.

Read all characters: Reads all characters in a column. Default is disabled.

Read up to how many characters: Specifies the maximum number of characters to read in each column if you do not read all characters. The Data Integration Service ignores this property for SAP sources. Default is 4000.


Results Properties

The Developer tool displays the Results tab for data viewer configurations.

The following table displays the properties that you configure for results in the Data Viewer view:

Property Description

Show all rows: Displays all rows in the Data Viewer view. Default is disabled.

Show up to how many rows: Specifies the maximum number of rows to display if you do not display all rows. Default is 1000.

Show all characters: Displays all characters in a column. Default is disabled.

Show up to how many characters: Specifies the maximum number of characters to display in each column if you do not display all characters. Default is 4000.

Advanced Properties

The Developer tool displays the Advanced tab for data viewer and mapping configurations.

The following table displays the advanced properties:

Property Description

Default date time format: Date/time format the Data Integration Service uses when the mapping converts strings to dates. Default is MM/DD/YYYY HH24:MI:SS.

Override tracing level: Overrides the tracing level for each transformation in the mapping. The tracing level determines the amount of information that the Data Integration Service sends to the mapping log files. Choose one of the following tracing levels:
- None. The Data Integration Service uses the tracing levels set in the mapping.
- Terse. The Data Integration Service logs initialization information, error messages, and notification of rejected data.
- Normal. The Data Integration Service logs initialization and status information, errors encountered, and skipped rows due to transformation row errors. Summarizes mapping results, but not at the level of individual rows.
- Verbose initialization. In addition to normal tracing, the Data Integration Service logs additional initialization details, names of index and data files used, and detailed transformation statistics.
- Verbose data. In addition to verbose initialization tracing, the Data Integration Service logs each row that passes into the mapping. Also notes where the Data Integration Service truncates string data to fit the precision of a column and provides detailed transformation statistics.
Default is None.

Sort order: Order in which the Data Integration Service sorts character data in the mapping. Default is Binary.


Property Description

Optimizer level: Controls the optimization methods that the Data Integration Service applies to a mapping as follows:
- None. The Data Integration Service does not optimize the mapping.
- Minimal. The Data Integration Service applies the early projection optimization method to the mapping.
- Normal. The Data Integration Service applies the early projection, early selection, and predicate optimization methods to the mapping.
- Full. The Data Integration Service applies the early projection, early selection, predicate optimization, and semi-join optimization methods to the mapping.
Default is Normal.

High precision: Runs the mapping with high precision. High precision data values have greater accuracy. Enable high precision if the mapping produces large numeric values, for example, values with precision of more than 15 digits, and you require accurate values. Enabling high precision prevents precision loss in large numeric values. Default is enabled.

Send log to client: Allows you to view log files in the Developer tool. If you disable this option, you must view log files through the Administrator tool. Default is enabled.

Troubleshooting Configurations

I created two configurations with the same name but with different cases. When I close and reopen the Developer tool, one configuration is missing.

Data viewer and mapping configuration names are not case sensitive. If you create multiple configurations with the same name but different cases, the Developer tool deletes one of the configurations when you exit. The Developer tool does not consider the configuration names unique.

I tried to create a configuration with a long name, but the Developer tool displays an error message that says it cannot write the file.

The Developer tool stores data viewer and mapping configurations in files on the machine that runs the Developer tool. If you create a configuration with a long name, for example, more than 100 characters, the Developer tool might not be able to save the file to the hard drive.

To work around this issue, shorten the configuration name.

Exporting Data

You can export the data that displays in the Data Viewer view to a tab-delimited flat file, such as a TXT or CSV file. Export data when you want to create a local copy of the data.

1. In the Data Viewer view, right-click the results and select Export Data.

2. Enter a file name and extension.

3. Select the location where you want to save the file.

4. Click OK.


Logs

The Data Integration Service generates log events when you run a mapping, run a profile, preview data, or run an SQL query. Log events include information about the tasks performed by the Data Integration Service, errors, and load summary and transformation statistics.

When you run a profile, preview data, or run an SQL query, you can view log events in the editor. To view log events, click the Show Log button in the Data Viewer view.

Monitoring Jobs from the Developer Tool

You can access the Monitoring tool from the Developer tool to monitor the status of applications and jobs, such as profile jobs. As an administrator, you can also monitor applications and jobs in the Administrator tool.

Monitor applications and jobs to view properties, run-time statistics, and run-time reports about the integration objects. For example, you can see the general properties and the status of a profiling job. You can also see who initiated the job and how long it took the job to complete.

To monitor applications and jobs from the Developer tool, click the Menu button in the Progress view and select Monitor Jobs. Select the Data Integration Service that runs the applications and jobs and click OK. The Monitoring tool opens.


Part II: Informatica Data Services

This part contains the following chapters:

¨ Data Services, 130

¨ Logical View of Data, 133

¨ Virtual Data, 145


C H A P T E R 1 4

Data Services

This chapter includes the following topics:

¨ Data Services Overview, 130

¨ Logical Data Object Model Example, 131

¨ SQL Data Service Example, 131

¨ Web Services Example, 132

Data Services Overview

A data service is a collection of reusable operations that you can run to access and transform data. Use a data service to create a unified model of data and allow end users to run SQL queries against the data or access the data through a web service.

Use the data services capabilities in the Developer tool to create the following objects:

Logical data object models

A logical data object model describes the structure and use of data in an enterprise. The model contains logical data objects and defines relationships between them. A logical data object describes a logical entity in the enterprise. Create a logical data object model to study data, describe data attributes, and define the relationships among attributes.

Create a logical data object model in the Developer tool. End users cannot access the logical data objects within a logical data object model unless you include them in an SQL data service or web service. To allow end users to run SQL queries against a logical data object, include it in an SQL data service. Make the logical data object the source for a virtual table. To allow end users to access a logical data object over the Web, include it in a web service. Make the logical data object the source for an operation.

SQL data services

An SQL data service is a virtual database that end users can query. It contains virtual schemas and the virtual tables or stored procedures that define the database structure. Create an SQL data service so that end users can run SQL queries against the virtual tables through a third-party client tool. End users can query the virtual tables as if they were physical tables. End users can also use a third-party client tool to run virtual stored procedures.

Create an SQL data service in the Developer tool. To make it available to end users, include it in an application, and deploy the application to a Data Integration Service. When the application is running, end users can connect to the SQL data service from a third-party client tool by supplying a connect string. After they connect to the SQL data service, end users can run SQL queries through the client tool.


Web services

A web service provides access to data integration functionality. A web service client can connect to a web service to access, transform, or deliver data. A web service is a collection of operations. A web service operation defines the functions that the web service supports. For example, you might create a web service operation to retrieve customer information by customer ID.

Create a web service in the Developer tool. To make it available to end users, include it in an application and deploy the application to a Data Integration Service. When the application is running, end users can connect to the web service through the WSDL URL. End users send requests to the web service and receive responses through SOAP messages.

For more information about web services, see the Informatica Data Services Web Services Guide.

Logical Data Object Model Example

Create a logical data object model to describe the representation of logical entities in an enterprise. For example, create a logical data object model to present account data from disparate sources in a single view.

American Bank acquires California Bank. After the acquisition, American Bank has the following goals:

¨ Present data from both banks in a business intelligence report, such as a report on the top 10 customers.

¨ Consolidate data from both banks into a central data warehouse.

Traditionally, American Bank would consolidate the data into a central data warehouse in a development environment, verify the data, and move the data warehouse to a production environment. This process might take several months or longer. The bank could then run business intelligence reports on the data warehouse in the production environment.

A developer at American Bank can use the Developer tool to create a model of customer, account, branch, and other data in the enterprise. The developer can link the relational sources of American Bank and California Bank to a single view of the customer. The developer can then make the data available for business intelligence reports before creating a central data warehouse.

SQL Data Service Example

Create an SQL data service to make a virtual database available for end users to query. Create a virtual database to define uniform views of data and to isolate the data from changes in structure. For example, create an SQL data service to define a uniform view of customer data and to allow end users to run SQL queries against the data.

Two companies that store customer data in multiple, heterogeneous data sources merge. A developer at the merged company needs to make a single view of customer data available to other users at the company. The other users need to make SQL queries against the data to retrieve information such as the number of customers in a region or a list of customers whose purchases exceed a certain dollar amount.

To accomplish this goal, the developer creates an SQL data service that contains virtual schemas and virtual tables that define a unified view of a customer. The developer creates virtual table mappings to link the virtual tables of the customer with the sources and to standardize the data. To make the virtual data accessible by end users, the developer includes the SQL data service in an application and deploys the application.

After the developer deploys the application, end users can make SQL queries against the standardized view of the customer through a JDBC or ODBC client tool.


Web Services Example

Hypostores customer service representatives want to access customer data from the Los Angeles and Boston offices over a network. The customer service representatives want to view customer details based on the customer name or the customer ID. The corporate policy requires that data accessed over a network must be secure.

The developer and administrator complete the following steps to provide access to the data required by customer service:

1. In the Developer tool, the developer creates a web service with the following operations:

¨ getCustomerDetailsByName. The operation input includes an element for the customer name. The operation output includes elements for the customer details based on the customer name.

¨ getCustomerDetailsById. The operation input includes an element for the customer ID. The operation output includes elements for customer details based on the customer ID.

2. The developer configures an operation mapping for each operation with the following components:

¨ An Input transformation and an Output transformation.

¨ A Lookup transformation that performs a lookup on a logical data object that defines a single view of customer data from the Los Angeles and Boston offices.

3. The developer deploys the web service to a Data Integration Service.

4. In the Administrator tool, the administrator configures the web service to use transport layer security and message layer security so that it can receive authorized requests using an HTTPS URL.

5. The administrator sends the WSDL file to customer service so that they can connect to the web service.


C H A P T E R 1 5

Logical View of Data

This chapter includes the following topics:

¨ Logical View of Data Overview, 133

¨ Developing a Logical View of Data, 133

¨ Logical Data Object Models, 134

¨ Logical Data Object Model Properties, 135

¨ Logical Data Objects, 141

¨ Logical Data Object Mappings, 143

Logical View of Data Overview

A logical view of data is a representation of data that resides in an enterprise. A logical view of data includes a logical data model, logical data objects, and logical data object mappings.

With a logical view of data, you can achieve the following goals:

¨ Use common data models across an enterprise so that you do not have to redefine data to meet different business needs. It also means if there is a change in data attributes, you can apply this change one time and use one mapping to make this change to all databases that use this data.

¨ Find relevant sources of data and present the data in a single view. Data resides in various places in an enterprise, such as relational databases and flat files. You can access all data sources and present the data in one view.

¨ Expose logical data as relational tables to promote reuse.

Developing a Logical View of Data

Develop a logical view of data to represent how an enterprise accesses and uses data. After you develop a logical view of data, you can add it to a data service to make virtual data available for end users.

Before you develop a logical view of data, you can define the physical data objects that you want to use in a logical data object mapping. You can also profile the physical data sources to analyze data quality.

1. Create or import a logical data model.

2. Optionally, add logical data objects to the logical data object model and define relationships between objects.


3. Create a logical data object mapping to read data from a logical data object or write data to a logical data object. A logical data object mapping can contain transformation logic to transform the data. The transformations can include data quality transformations to validate and cleanse the data.

4. View the output of the logical data object mapping.

Logical Data Object Models

A logical data object model describes the structure and use of data in an enterprise. The model contains logical data objects and defines relationships between them.

Define a logical data object model to create a unified model of data in an enterprise. The data in an enterprise might reside in multiple disparate source systems such as relational databases and flat files. A logical data object model represents the data from the perspective of the business regardless of the source systems.

For example, customer account data from American Bank resides in an Oracle database, and customer account data from California Banks resides in an IBM DB2 database. You want to create a unified model of customer accounts that defines the relationship between customers and accounts. Create a logical data object model to define the relationship.

You can import a logical data object model from a modeling tool. You can also import a logical data object model from an XSD file that you created in a modeling tool. Or, you can manually create a logical data object model in the Developer tool.

You add a logical data object model to a project or folder and store it in the Model repository.

Creating a Logical Data Object Model

Create a logical data object model to define the structure and use of data in an enterprise. When you create a logical data object model, you can add logical data objects. You associate a physical data object with each logical data object. The Developer tool creates a logical data object read mapping for each logical data object in the model.

1. Select a project or folder in the Object Explorer view.

2. Click File > New > Logical Data Object Model.

The New dialog box appears.

3. Select Logical Data Object Model and click Next.

The New Logical Data Object Model dialog box appears.

4. Enter a name for the logical data object model.

5. To create logical data objects, click Next. To create an empty logical data object model, click Finish.

If you click Next, the Developer tool prompts you to add logical data objects to the model.

6. To create a logical data object, click the New button.

The Developer tool adds a logical data object to the list.

7. Enter a name in the Name column.

8. Optionally, click the Open button in the Data Object column to associate a physical data object with the logical data object.

The Select a Data Object dialog box appears.

9. Select a physical data object and click OK.


10. Repeat steps 6 through 9 to add logical data objects.

11. Click Finish.

The logical data object model opens in the editor.

Importing a Logical Data Object Model from a Modeling Tool

You can import a logical data object model from a modeling tool or an XSD file. Import a logical data object model to use an existing model of the structure and use of data in an enterprise.

1. Select the project or folder to which you want to import the logical data object model.

2. Click File > New > Logical Data Object Model.

The New Logical Data Object Model dialog box appears.

3. Select Logical Data Object Model from Data Model.

4. Click Next.

5. In the Model Type field, select the modeling tool from which you want to import the logical data object model.

6. Enter a name for the logical data object model.

7. Click Next.

8. Browse to the file that you want to import, select the file, and click Open.

9. Configure the import properties.

10. Click Next.

11. Add logical data objects to the logical data object model.

12. Click Finish.

The logical data objects appear in the editor.

Logical Data Object Model Properties

When you import a logical data object model from a modeling tool, provide the properties associated with the tool.

CA ERwin Data Modeler Import Properties

Configure the import properties when you import a logical data object model from CA ERwin Data Modeler.

The following table describes the properties to configure when you import a model from CA ERwin Data Modeler:

Import UDPs. Specifies how to import user-defined properties. Select one of the following options:
- As metadata. Import an explicit value as the property value object. Explicit values are not exported.
- As metadata, migrate default values. Import explicit and implicit values as property value objects.
- In description, migrate default values. Append the property name and value, even if implicit, to the object description property.
- Both, migrate default values. Import the UDP value, even if implicit, both as metadata and in the object's description.
Default is As metadata.

Import relationship name. Specifies how to import the relationship names from ERwin. Select one of the following options:
- From relationship name
- From relationship description
Default is From relationship name.

Import IDs. Specifies whether to set the unique ID of the object as the NativeId property.

Import subject areas. Specifies how to import the subject areas from ERwin. Select one of the following options:
- As diagrams
- As packages and diagrams
- As packages and diagrams, assuming one subject area for each entity
- Do not import subject areas
Default is As diagrams.

Import column order from. Specifies how to import the position of columns in tables. Select one of the following options:
- Column order. Order of the columns displayed in the ERwin physical view.
- Physical order. Order of the columns in the database, as generated in the SQL DDL.
Default is Physical order.

Import owner schemas. Specifies whether to import owner schemas.

IBM Cognos Business Intelligence Reporting - Framework Manager Import Properties

Configure the import properties when you import a logical data object model from IBM Cognos Business Intelligence Reporting - Framework Manager.

The following table describes the properties to configure when you import a model from IBM Cognos Business Intelligence Reporting - Framework Manager:

Folder Representation. Specifies how to represent folders from the Framework Manager. Select one of the following options:
- Ignore. Ignore folders.
- Flat. Represent folders as diagrams but do not preserve hierarchy.
- Hierarchical. Represent folders as diagrams and preserve hierarchy.
Default is Ignore.

Package Representation. Specifies how to represent packages from Cognos Framework Manager. Select one of the following options:
- Ignore. Ignore subject areas.
- Subject Areas. Represent packages as subject areas.
- Model. Represent the package as the model.
Default is Ignore.

Reverse engineer relationships. Specifies whether the Developer tool computes the relationship between two dbQueries as referential integrity constraints.

Tables design level. Specifies how to control the design level of the imported tables. Select one of the following options:
- Logical and physical. The tables appear in both the logical view and in the physical view of the model.
- Physical. The tables appear only in the physical view of the model.
Default is Physical.

Ignore usage property. Specifies whether the usage property of a queryItem should be used.

SAP BusinessObjects Designer Import Properties

Configure the import properties when you import a logical data object model from SAP BusinessObjects Designer.

The following table describes the properties to configure when you import a model from SAP BusinessObjects Designer:

System. Name of the BusinessObjects repository. For BusinessObjects versions 11.x and 12.x (XI), enter the name of the Central Management Server. For BusinessObjects versions 5.x and 6.x, enter the name of the repository defined by the Supervisor application.

Authentication mode. Login authentication mode. This parameter is applicable to SAP BusinessObjects Designer 11.0 and later. Select one of the following authentication modes:
- Enterprise. Business Objects Enterprise login.
- LDAP. LDAP server authentication.
- Windows AD. Windows Active Directory server authentication.
- Windows NT. Windows NT domain server authentication.
- Standalone. Standalone authentication.
Default is Enterprise.

User name. User name in the BusinessObjects server. For versions 11.x and 12.x (XI), you need to be a member of BusinessObjects groups.

Password. Password for the BusinessObjects server.

Silent execution. Specifies whether to execute in interactive or silent mode. Default is Silent.

Close after execution. Specifies whether to close BusinessObjects after the Developer tool completes the model import.

Table design level. Specifies the design level of the imported tables. Select one of the following options:
- Logical and physical. The tables appear both in the logical view and in the physical view of the model.
- Physical. The tables appear only in the physical view of the model.
Default is Physical.

Transform Joins to Foreign Keys. Transforms simple SQL joins in the model into foreign key relationships. Select the parameter if you want to export the model to a tool that only supports structural relational metadata, such as a database design tool.

Class representation. Specifies how to import the tree structure of classes and sub-classes. The Developer tool imports each class as a dimension as defined by the CWM OLAP standard. The Developer tool also imports classes and sub-classes as a tree of packages as defined by the CWM and UML standards. Select one of the following options:
- As a flat structure. The Developer tool does not create packages.
- As a simplified tree structure. The Developer tool creates a package for each class with sub-classes.
- As a full tree structure. The Developer tool creates a package for each class.
Default is As a flat structure.

Include List of Values. Controls how the Developer tool imports the list of values associated with objects.

Dimensional properties transformation. Specifies how to transfer the dimension name, description, and role to the underlying table and the attribute name, description, and datatype to the underlying column. Select one of the following options:
- Disabled. No property transfer occurs.
- Enabled. Property transfer occurs where there are direct matches between the dimensional objects and the relational objects. The Developer tool migrates the dimension names to the relational names.
- Enabled (preserve names). Property transfer occurs where there are direct matches between the dimensional objects and the relational objects. The Developer tool preserves the relational names.
Default is Disabled.

Sybase PowerDesigner CDM Import Properties

Configure the import properties when you import a logical data object model from Sybase PowerDesigner CDM.

The following table describes the properties to configure when you import a model from Sybase PowerDesigner CDM:

Import UDPs. Specifies how to import user-defined properties. Select one of the following options:
- As metadata. Import an explicit value as the property value object. Explicit values are not exported.
- As metadata, migrate default values. Import explicit and implicit values as property value objects.
- In description, migrate default values. Append the property name and value, even if implicit, to the object description property.
- Both, migrate default values. Import the UDP value, even if implicit, both as metadata and in the object's description.
Default is As metadata.

Import Association Classes. Specifies whether the Developer tool should import association classes.

Import IDs. Specifies whether to set the unique ID of the object as the NativeId property.

Append volumetric information to the description field. Import and append the number of occurrences information to the description property.

Remove text formatting. Specifies whether to remove or keep rich text formatting. Select this option if the model was generated by PowerDesigner 7.0 or 7.5. Clear this option if the model was generated by PowerDesigner 8.0 or greater.

Sybase PowerDesigner OOM 9.x to 15.x Import Properties

Configure the import properties when you import a logical data object model from Sybase PowerDesigner OOM 9.x to 15.x.

The following table describes the properties to configure when you import a model from PowerDesigner OOM:

Target Tool. Specifies which tool generated the model you want to import. Select one of the following options:
- Auto Detect. The Developer tool auto-detects which tool generated the file.
- OMG XMI. The file conforms to the OMG XMI 1.0 standard DTDs.
- Argo/UML 0.7. The file was generated by Argo/UML 0.7.0 or earlier.
- Argo/UML 0.8. The file was generated by Argo/UML 0.7.1 or later.
- XMI Toolkit. The file was generated by IBM XMI Toolkit.
- XMI Interchange. The file was generated by Unisys Rose XMI Interchange.
- Rose UML. The file was generated by Unisys Rose UML.
- Visio UML. The file was generated by Microsoft Visio Professional 2002 and Visio for Enterprise Architects using UML to XMI Export.
- PowerDesigner UML. The file was generated by Sybase PowerDesigner using XMI Export.
- Component Modeler. The file was generated by CA AllFusion Component Modeler using XMI Export.
- Netbeans XMI Writer. The file was generated by one of the applications that use Netbeans XMI Writer, such as Poseidon.
- Embarcadero Describe. The file was generated by Embarcadero Describe.
Default is Auto Detect.

Auto Correct. Specifies whether the Developer tool should attempt to correct a slightly incomplete or incorrect model in the XML file.

Model Filter. Model to import if the XML file contains more than one model. Use a comma to separate multiple models.

Top Package. The top-level package in the model.

Import UUIDs. Specifies whether to import the UUIDs as NativeId.


Sybase PowerDesigner PDM Import Properties

Configure the import properties when you import a logical data object model from Sybase PowerDesigner PDM.

The following table describes the properties to configure when you import a model from Sybase PowerDesigner PDM:

Import UDPs. Specifies how to import user-defined properties. Select one of the following options:
- As metadata. Import an explicit value as the property value object. Explicit values are not exported.
- As metadata, migrate default values. Import explicit and implicit values as property value objects.
- In description, migrate default values. Append the property name and value, even if implicit, to the object description property.
- Both, migrate default values. Import the UDP value, even if implicit, both as metadata and in the object's description.
Default is As metadata.

Import IDs. Specifies whether to set the unique ID of the object as the NativeId property.

Append volumetric information to the description field. Import and append the number of occurrences information to the description property.

Remove text formatting. Specifies whether to remove or keep rich text formatting. Select this option if the model was generated by PowerDesigner 7.0 or 7.5. Clear this option if the model was generated by PowerDesigner 8.0 or greater.

XSD Import Properties

You can import logical data object models from an XSD file exported by a modeling tool.

The following table describes the properties to configure when you import a model from an XSD file:

Elements content name. Attribute to hold the textual content, like #PCDATA, in the XSD file. Default is As metadata.

Collapse Level. Specifies when to collapse a class. The value you select determines whether the Developer tool imports all or some of the elements and attributes in the XSD file. Select one of the following options:
- None. Every XSD element becomes a class and every XSD attribute becomes an attribute.
- Empty. Only empty classes collapse into the parent classes.
- Single Attribute. Only XSD elements with a single attribute and no children collapse into the parent class.
- No Children. Any XSD element that has no child element collapses into the parent class.
- All. All collapsible XSD elements collapse into the parent class.
Default is All.

Collapse Star. Specifies whether the Developer tool should collapse XML elements with an incoming xlink into the parent class.

Class Type. Specifies whether the Developer tool should create a class type when an element collapses into the parent element.

Any. Specifies whether to create a class or entity for the 'xs:any' pseudo-element.

Generate IDs. Specifies whether to generate additional attributes to create primary and foreign keys. By default, the Developer tool does not generate additional attributes.

Import substitutionGroup as. Specifies how to represent inheritance. Select one of the following options:
- Generalization. Represents inheritance as generalization.
- Roll down. Duplicate inherited attributes in the subclass.
Default is Roll down.

Include Path. Path to the directory that contains the included schema files, if any.

UDP namespace. Namespace that contains attributes to be imported as user-defined properties.

Logical Data Objects

A logical data object is an object in a logical data object model that describes a logical entity in an enterprise. It has attributes and keys, and it describes relationships between attributes.

You include logical data objects that relate to each other in a data object model. For example, the logical data objects Customer and Account appear in a logical data object model for a national bank. The logical data object model describes the relationship between customers and accounts.

In the model, the logical data object Account includes the attribute Account_Number. Account_Number is a primary key, because it uniquely identifies an account. Account has a relationship with the logical data object Customer, because the Customer data object needs to reference the account for each customer.
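
If you expose this model as relational tables, the structure might resemble the following sketch. The DDL, column names, and datatypes are illustrative assumptions and are not definitions taken from this guide:

CREATE TABLE Account (
    Account_Number INTEGER NOT NULL PRIMARY KEY,  -- uniquely identifies an account
    Account_Type   VARCHAR(20)
);

CREATE TABLE Customer (
    Customer_ID    INTEGER NOT NULL PRIMARY KEY,  -- uniquely identifies a customer
    Last_Name      VARCHAR(60),
    Account_Number INTEGER REFERENCES Account (Account_Number)  -- each customer references an account
);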

You can drag a physical data object into the logical data object model editor to create a logical data object. Or, you can create a logical data object and define the attributes and keys.

Logical Data Object Properties

A logical data object contains properties that define the data object and its relationship to other logical data objects in a logical data object model.

A logical data object contains the following properties:

General. Name and description of the logical data object.

Attributes. Comprise the structure of data in a logical data object.

Keys. One or more attributes in a logical data object can be primary keys or unique keys.

Relationships. Associations between logical data objects.

Access. Type of access for a logical data object and each attribute of the data object.

Mappings. Logical data object mappings associated with a logical data object.

Attribute Relationships

A relationship is an association between primary or foreign key attributes of one or more logical data objects.

You can define the following types of relationship between attributes:

Identifying

A relationship between two attributes where an attribute is identified through its association with another attribute.

For example, the relationship between the Branch_ID attribute of the logical data object Branch and the Branch_Location attribute of the logical data object Customer is identifying. This is because a branch ID is unique to a branch location.

Non-Identifying

A relationship between two attributes that identifies an attribute independently of the other attribute.

For example, the relationship between the Account_Type attribute of the Account logical data object and the Account_Number attribute of the Customer logical data object is non-identifying. This is because you can identify an account type without having to associate it with an account number.

When you define relationships, the logical data object model indicates an identifying relationship as a solid line between attributes. It indicates a non-identifying relationship as a dotted line between attributes.

Creating a Logical Data Object

You can create a logical data object in a logical data object model to define a logical entity in an enterprise.

1. Click File > New > Other.

2. Select Informatica > Data Objects > Data Object and click Next.

3. Enter a data object name.

4. Select the data object model for the data object and click Finish.

The data object appears in the data object model canvas.

5. Select the data object and click the Properties tab.

6. On the General tab, optionally edit the logical data object name and description.

7. On the Attributes tab, create attributes and specify their datatype and precision.

8. On the Keys tab, optionally specify primary and unique keys for the data object.

9. On the Relationships tab, optionally create relationships between logical data objects.

10. On the Access tab, optionally edit the type of access for the logical data object and each attribute in the data object.

Default is read only.


11. On the Mappings tab, optionally create a logical data object mapping.

Logical Data Object Mappings

A logical data object mapping is a mapping that links a logical data object to one or more physical data objects. It can include transformation logic.

A logical data object mapping can be of the following types:

¨ Read

¨ Write

You can associate each logical data object with one logical data object read mapping or one logical data object write mapping.

Logical Data Object Read Mappings

A logical data object read mapping contains one or more physical data objects as input and one logical data object as output. The mapping can contain transformation logic to transform the data.

It provides a way to access data without accessing the underlying data source. It also provides a way to have a single view of data coming from more than one source.

For example, American Bank has a logical data object model for customer accounts. The logical data object model contains a Customers logical data object.

American Bank wants to view customer data from two relational databases in the Customers logical data object. You can use a logical data object read mapping to perform this task and view the output in the Data Viewer view.
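
Conceptually, the read mapping presents a single view over the two sources, much as the following SQL sketch does. The schema, table, and column names stand in for the two relational databases and are hypothetical:

select CUSTOMER_ID, FIRST_NAME, LAST_NAME from bank_a.CUSTOMERS
union
select CUSTOMER_ID, FIRST_NAME, LAST_NAME from bank_b.CUSTOMERS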

Logical Data Object Write Mappings

A logical data object write mapping contains a logical data object as input. It provides a way to write to targets from a logical data object.

The mapping can contain transformation logic to transform the data.

Creating a Logical Data Object Mapping

You can create a logical data object mapping to link data from a physical data object to a logical data object and transform the data.

1. In the Data Object Explorer view, select the logical data object model that you want to add the mapping to.

2. Click File > New > Other.

3. Select Informatica > Data Objects > Data Object Mapping and click Next.

4. Select the logical data object you want to include in the mapping.

5. Select the mapping type.

6. Optionally, edit the mapping name.

7. Click Finish.

The editor displays the logical data object as the mapping input or output, based on whether the mapping is a read or write mapping.


8. Drag one or more physical data objects to the mapping as read or write objects, based on whether the mapping is a read or write mapping.

9. Optionally, add transformations to the mapping.

10. Link ports in the mapping.

11. Right-click the mapping canvas and click Validate to validate the mapping.

Validation errors appear on the Validation Log view.

12. Fix validation errors and validate the mapping again.

13. Optionally, click the Data Viewer view and run the mapping.

Results appear in the Output section.


C H A P T E R 1 6

Virtual Data

This chapter includes the following topics:

¨ Virtual Data Overview

¨ SQL Data Services

¨ Virtual Tables

¨ Virtual Table Mappings

¨ Virtual Stored Procedures

¨ SQL Query Plans

Virtual Data Overview

Create a virtual database to define uniform views of data and make the data available for end users to query. End users can run SQL queries against the virtual tables as if they were physical database tables.

Create a virtual database to accomplish the following tasks:

¨ Define a uniform view of data that you can expose to end users.

¨ Define the virtual flow of data between the sources and the virtual tables. Transform and standardize the data.

¨ Provide end users with access to the data. End users can use a JDBC or ODBC client tool to run SQL queries against the virtual tables as if they were actual, physical database tables.

¨ Isolate the data from changes in data structures. You can add the virtual database to a self-contained application. If you make changes to the virtual database in the Developer tool, the virtual database in the application does not change until you redeploy it.

To create a virtual database, you must create an SQL data service. An SQL data service contains the virtual schemas and the virtual tables or stored procedures that define the database structure. If the virtual schema contains virtual tables, the SQL data service also contains virtual table mappings that define the flow of data between the sources and the virtual tables.

After you create an SQL data service, you add it to an application and deploy the application to make the SQL data service accessible by end users.

End users can query the virtual tables or run the stored procedures in the SQL data service by entering an SQL query in a third-party client tool. When the user enters the query, the Data Integration Service retrieves virtual data from the sources or from cache tables, if an administrator specifies that any of the virtual tables should be cached.


SQL Data Services

An SQL data service is a virtual database that end users can query. It contains a schema and other objects that represent underlying physical data.

An SQL data service can contain the following objects:

¨ Virtual schemas. Schemas that define the virtual database structure.

¨ Virtual tables. The virtual tables in the database. You can create virtual tables from physical or logical data objects, or you can create virtual tables manually.

¨ Virtual table mappings. Mappings that link a virtual table to source data and define the data flow between the sources and the virtual table. If you create a virtual table from a data object, you can create a virtual table mapping to define data flow rules between the data object and the virtual table. If you create a virtual table manually, you must create a virtual table mapping to link the virtual table with source data and define data flow.

¨ Virtual stored procedures. Sets of data flow instructions that allow end users to perform calculations or retrieve data.

Defining an SQL Data Service

To define an SQL data service, create an SQL data service and add objects to it.

1. Create an SQL data service.

You can create virtual tables and virtual table mappings during this step.

2. Create virtual tables in the SQL data service.

You can create a virtual table from a data object, or you can create a virtual table manually.

3. Define relationships between virtual tables.

4. Create or update virtual table mappings to define the data flow between data objects and the virtual tables.

5. Optionally, create virtual stored procedures.

6. Optionally, preview virtual table data.

Creating an SQL Data Service

Create an SQL data service to define a virtual database that end users can query. When you create an SQL data service, you can create virtual schemas, virtual tables, and virtual table mappings that link virtual tables with source data.

1. Select a project or folder in the Object Explorer view.

2. Click File > New > Data Service.

The New dialog box appears.

3. Select SQL Data Service.

4. Click Next.

5. Enter a name for the SQL data service.

6. To create virtual tables in the SQL data service, click Next. To create an SQL data service without virtual tables, click Finish.

If you click Next, the New SQL Data Service dialog box appears.

7. To create a virtual table, click the New button.

The Developer tool adds a virtual table to the list of virtual tables.


8. Enter a virtual table name in the Name column.

9. Click the Open button in the Data Object column.

The Select a Data Object dialog box appears.

10. Select a physical or logical data object and click OK.

11. Enter the virtual schema name in the Virtual Schema column.

12. Select Read in the Data Access column to link the virtual table with the data object. Select None if you do not want to link the virtual table with the data object.

13. Repeat steps 7 through 12 to add more virtual tables.

14. Click Finish.

The Developer tool creates the SQL data service.

Virtual Tables

A virtual table is a table in a virtual database. Create a virtual table to define the structure of the data.

Create one or more virtual tables within a schema. If a schema contains multiple virtual tables, you can define primary key-foreign key relationships between tables.

You can create virtual tables manually or from physical or logical data objects. Each virtual table has a data access method. The data access method defines how the Data Integration Service retrieves data. When you manually create a virtual table, the Developer tool creates an empty virtual table and sets the data access method to none.

When you create a virtual table from a data object, the Developer tool creates a virtual table with the same columns and properties as the data object. The Developer tool sets the data access method to read. If you change columns in the data object, the Developer tool updates the virtual table with the same changes. The Developer tool does not update the virtual table if you change the data object name or description.

To define data transformation rules for the virtual table, set the data access method to custom. The Developer tool prompts you to create a virtual table mapping.

You can preview virtual table data when the data access method is read or custom.

Data Access Methods

The data access method for a virtual table defines how the Data Integration Service retrieves data.

When you create a virtual table, you must choose a data access method. The following table describes the data access methods:

None. The virtual table is not linked to source data. If you change the data access method to none, the Developer tool removes the link between the data object and the virtual table. If the virtual table has a virtual table mapping, the Developer tool deletes the virtual table mapping. The Data Integration Service cannot retrieve data for the table.

Read. The virtual table is linked to a physical or logical data object without data transformation. If you add, remove, or change a column in the data object, the Developer tool makes the same change to the virtual table. However, if you change primary key-foreign key relationships, change the name of the data object, or change the data object description, the Developer tool does not update the virtual table. If you change the data access method to read, the Developer tool prompts you to choose a data object. If the virtual table has a virtual table mapping, the Developer tool deletes the virtual table mapping. When an end user queries the virtual table, the Data Integration Service retrieves data from the data object.

Custom. The virtual table is linked to a physical or logical data object through a virtual table mapping. If you update the data object, the Developer tool does not update the virtual table. If you change the data access method to custom, the Developer tool prompts you to create a virtual table mapping. When an end user queries the virtual table, the Data Integration Service applies any transformation rule defined in the virtual table mapping to the source data. It returns the transformed data to the end user.

Creating a Virtual Table from a Data Object

Create a virtual table from a physical or logical data object when the virtual table structure matches the structure of the data object. The Developer tool creates a virtual table mapping to read data from the data object.

1. Open an SQL data service.

2. Click the Schema view.

3. Drag a physical or logical data object from the Object Explorer view to the editor.

The Add Data Objects to SQL Data Service dialog box appears. The Developer tool lists the data object in the Data Object column.

4. Enter the virtual schema name in the Virtual Schema column.

5. Click Finish.

The Developer tool places the virtual table in the editor and sets the data access method to read.

Creating a Virtual Table Manually

Create a virtual table manually when the virtual table structure does not match the structure of an existing data object. The Developer tool sets the data access method for the virtual table to none, which indicates the virtual table is not linked to a source.

1. Open an SQL data service.

2. In the Overview view Tables section, click the New button.

The New Virtual Table dialog box appears.

3. Enter a name for the virtual table.

4. Enter a virtual schema name or select a virtual schema.

5. Click Finish.

The virtual table appears in the Schema view.

6. To add a column to the virtual table, right-click Columns and click New.

7. To make a column a primary key, click the blank space to the left of the column name.


Defining Relationships between Virtual Tables

You can define primary key-foreign key relationships between virtual tables in an SQL data service to show associations between columns in the virtual tables.

1. Open an SQL data service.

2. Click the Schema view.

3. Click the column you want to assign as a foreign key in one table. Drag the pointer from the foreign key column to the primary key column in another table.

The Developer tool uses an arrow to indicate a relationship between the tables. The arrow points to the primary key table.

Running an SQL Query to Preview Data

Run an SQL query against a virtual table to preview the data.

For the query to return results, the virtual table must be linked to source data. Therefore, the virtual table must be created from a data object or it must be linked to source data in a virtual table mapping.

1. Open an SQL data service.

2. Click the Schema view.

3. Select the virtual table in the Outline view.

The virtual table appears in the Schema view.

4. Click the Data Viewer view.

5. Enter an SQL statement in the Input window.

For example:

select * from <schema>.<table>

6. Click Run.

The query results appear in the Output window.

Virtual Table Mappings

A virtual table mapping defines the virtual data flow between sources and a virtual table in an SQL data service. Use a virtual table mapping to transform the data.

Create a virtual table mapping to link a virtual table in an SQL data service with source data and to define the rules for data transformation. When an end user queries the virtual table, the Data Integration Service applies the transformation rules defined in the virtual table mapping to the source data. It returns the transformed data to the end user.

If you do not want to transform the data, you do not have to create a virtual table mapping. When an end user queries the virtual table, the Data Integration Service retrieves data directly from the data object.

You can create one virtual table mapping for each virtual table in an SQL data service. You can preview virtual table data as you create and update the mapping.

A virtual table mapping contains the following components:

¨ Sources. Physical or logical data objects that describe the characteristics of source tables or files. A virtual table mapping must contain at least one source.


¨ Transformations. Objects that define the rules for data transformation. Use different transformation objects to perform different functions. Transformations are optional in a virtual table mapping.

¨ Virtual table. A virtual table in an SQL data service.

¨ Links. Connections between columns that define virtual data flow between sources, transformations, and the virtual table.

Example

You want to make order information available to one of your customers.

The orders information is stored in a relational database table that contains information for several customers. The customer is not authorized to view the orders information for other customers.

Create an SQL data service to retrieve the orders information. Create a virtual table from the orders table and set the data access method to custom. Add a Filter transformation to the virtual table mapping to remove orders data for the other customers.

After you create and deploy an application that contains the SQL data service, the customer can query the virtual table that contains his orders information.
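
For example, if the Filter transformation uses a condition such as CUSTOMER_ID = 'C1001' (a hypothetical column and value), a query that the customer runs against the virtual table returns only his own rows:

select ORDER_ID, ORDER_DATE, ORDER_TOTAL
from orders_schema.CUSTOMER_ORDERS
where ORDER_DATE >= '2011-01-01'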

Defining a Virtual Table Mapping

To define a virtual table mapping, create a virtual table mapping, add sources and transformations, and validate the mapping.

1. Create a mapping from a virtual table in an SQL data service.

2. Add sources and transformations to the mapping and link columns.

3. Validate the mapping.

4. Optionally, preview the mapping data.

Creating a Virtual Table Mapping

Create a virtual table mapping to define the virtual data flow between source data and a virtual table in an SQL data service. You can create one virtual table mapping for each virtual table.

1. Open the SQL data service that contains the virtual table for which you want to create a virtual table mapping.

2. Click the Overview view.

3. In the Tables section, change the data access method for the virtual table to Custom.

The New Virtual Table Mapping dialog box appears.

4. Enter a name for the virtual table mapping.

5. Click Finish.

The Developer tool creates a view for the virtual table mapping and places the virtual table in the editor. If you created the virtual table from a data object, the Developer tool adds the data object to the mapping as a source.

6. To add sources to the mapping, drag data objects from the Object Explorer view into the editor.

You can add logical or physical data objects as sources.

7. Optionally, add transformations to the mapping by dragging them from the Object Explorer view or the Transformation palette into the editor.


8. Link columns by selecting a column in a source or transformation and dragging it to a column in another transformation or the virtual table.

The Developer tool uses an arrow to indicate the columns are linked.

Validating a Virtual Table Mapping

Validate a virtual table mapping to verify that the Data Integration Service can read and process the entire virtual table mapping.

1. Open an SQL data service.

2. Select the virtual table mapping view.

3. Select Edit > Validate.

The Validation Log view opens. If no errors appear in the view, the virtual table mapping is valid.

4. If the Validation Log view lists errors, correct the errors and revalidate the virtual table mapping.

Previewing Virtual Table Mapping Output

As you develop a virtual table mapping, preview the output to verify the virtual table mapping produces the results you want.

The virtual table must be linked to source data.

1. Open the SQL data service that contains the virtual table mapping.

2. Click the virtual table mapping view.

3. Select the object for which you want to preview output. You can select a transformation or the virtual table.

4. Click the Data Viewer view.

5. Click Run.

The Developer tool displays results in the Output section.

Virtual Stored Procedures

A virtual stored procedure is a set of procedural or data flow instructions in an SQL data service. When you deploy an application that contains an SQL data service, end users can access and run the virtual stored procedures in the SQL data service through a JDBC client tool.

Create a virtual stored procedure to allow end users to perform calculations, retrieve data, or write data to a data object. End users can send data to and receive data from the virtual stored procedure through input and output parameters.

Create a virtual stored procedure within a virtual schema in an SQL data service. You can create multiple stored procedures within a virtual schema.

A virtual stored procedure contains the following components:

¨ Inputs. Objects that pass data into the virtual stored procedure. Inputs can be input parameters, Read transformations, or physical or logical data objects. Input parameters pass data to the stored procedure. Read transformations extract data from logical data objects. A virtual stored procedure must contain at least one input.


¨ Transformations. Objects that define the rules for data transformation. Use different transformation objects to perform different functions. Transformations are optional in a virtual stored procedure.

¨ Outputs. Objects that pass data out of a virtual stored procedure. Outputs can be output parameters, Write transformations, or physical or logical data objects. Output parameters receive data from the stored procedure. Write transformations write data to logical data objects. A virtual stored procedure must contain at least one output.

¨ Links. Connections between ports that define virtual data flow between inputs, transformations, and outputs.

Example

An end user needs to update customer email addresses for customer records stored in multiple relational databases.

To allow the end user to update the email addresses, first create a logical data object model to define a unified view of the customer. Create a logical data object that represents a union of the relational tables. Create a logical data object write mapping to write to the relational tables. Add a Router transformation to determine which relational table contains the customer record the end user needs to update.

Next, create an SQL data service. In the SQL data service, create a virtual stored procedure that contains input parameters for the customer ID and email address. Create a Write transformation based on the logical data object and add it to the virtual stored procedure as output.

Finally, deploy the SQL data service. The end user can call the virtual stored procedure through a third-party client tool. The end user passes the customer ID and updated email address to the virtual stored procedure. The virtual stored procedure uses the Write transformation to update the logical data object. The logical data object write mapping determines which relational table to update based on the customer ID and updates the customer email address in the correct table.
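
From an SQL-capable client, a call to this virtual stored procedure might look something like the following sketch. The schema name, procedure name, and parameter values are hypothetical, and the exact call syntax depends on the client tool:

call customer_schema.Update_Customer_Email(100245, 'new.address@example.com')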

Defining a Virtual Stored Procedure

To define a virtual stored procedure, create a virtual stored procedure, add inputs, transformations, and outputs, and validate the stored procedure.

1. Create a virtual stored procedure in an SQL data service.

2. Add inputs, transformations, and outputs to the virtual stored procedure, and link the ports.

3. Validate the virtual stored procedure.

4. Optionally, preview the virtual stored procedure output.

Creating a Virtual Stored Procedure

Create a virtual stored procedure to allow an end user to access the business logic within the procedure through a JDBC or ODBC client tool. You must create a virtual stored procedure within a virtual schema.

1. In the Object Explorer view or Outline view, right-click an SQL data service and select New > Virtual Stored Procedure.

The New Virtual Stored Procedure dialog box appears.

2. Enter a name for the virtual stored procedure.

3. Enter a virtual schema name or select a virtual schema.

4. If the virtual stored procedure has input parameters or output parameters, select the appropriate option.

5. Click Finish.


The Developer tool creates an editor for the virtual stored procedure. If you select input parameters or output parameters, the Developer tool adds an Input Parameter transformation or an Output Parameter transformation, or both, in the editor.

6. Add input parameters or sources to the virtual stored procedure.

7. Add output parameters or targets to the virtual stored procedure.

8. Optionally, add transformations to the virtual stored procedure by dragging them from the Object Explorer view or the Transformation palette into the editor.

9. Link ports by selecting a port in a source or transformation and dragging it to a port in another transformation or target.

The Developer tool uses an arrow to indicate the ports are linked.

Validating a Virtual Stored Procedure

Validate a virtual stored procedure to verify that the Data Integration Service can read and process the virtual stored procedure.

1. Open a virtual stored procedure.

2. Select Edit > Validate.

The Validation Log view opens. If no errors appear in the view, the virtual stored procedure is valid.

3. If the Validation Log view lists errors, correct the errors and revalidate the virtual stored procedure.

Previewing Virtual Stored Procedure Output

Preview the output of a virtual stored procedure to verify that it produces the results you want.

The virtual stored procedure must contain at least one input parameter or source and one output parameter or target.

1. Open a virtual stored procedure.

2. Select the Data Viewer view.

3. If the virtual stored procedure contains input parameters, enter them in the Input section.

4. Click Run.

The Developer tool displays results in the Output section.

SQL Query Plans

An SQL query plan enables you to view a mapping-like representation of the SQL query you enter when you preview virtual table data.

When you view the SQL query plan for a query, the Developer tool displays a graphical representation of the query that looks like a mapping. The graphical representation has a source, transformations, links, and a target.

The Developer tool allows you to view the graphical representation of your original query and the graphical representation of the optimized query. The optimized query view might contain different transformations, or transformations that appear in a different order, than the original query. The optimized query produces the same results as the original query, but usually runs more quickly.


View the query plan to troubleshoot queries that end users run against a deployed SQL data service. You can also use the query plan to help you troubleshoot your own queries and understand the log messages.

The Developer tool uses optimizer levels to produce the optimized query. Different optimizer levels might produce different optimized queries, based on the complexity of the query. For example, if you enter a simple SELECT statement, such as "SELECT * FROM <schema.table>," against a virtual table in an SQL data service without a user-generated virtual table mapping, the Developer tool might produce the same optimized query for each optimizer level. However, if you enter a query with many clauses and subqueries, or if the virtual table mapping is complex, the Developer tool produces a different optimized query for each optimizer level.

SQL Query Plan Example

When you view the SQL query plan for a query you enter in the Data Viewer view, you can view the original query and the optimized query. The optimized query displays the query as the Data Integration Service executes it.

For example, you want to query the CUSTOMERS virtual table in an SQL data service. The SQL data service does not contain a user-generated virtual table mapping. In the Data Viewer view, you choose the default data viewer configuration settings, which set the optimizer level for the query to normal.

You enter the following query in the Data Viewer view:

select * from CUSTOMERS where CUSTOMER_ID > 150000 order by LAST_NAME

When you view the SQL query plan, the Developer tool displays the following graphical representation of the query:

The non-optimized view displays the query as you enter it. The Developer tool displays the WHERE clause as a Filter transformation and the ORDER BY clause as a Sorter transformation. The Developer tool uses the pass-through Expression transformation to rename ports.

When you view the optimized query, the Developer tool displays the following graphical representation of the query:

The optimized view displays the query as the Data Integration Service executes it. Because the optimizer level is normal, the Data Integration Service pushes the filter condition to the source data object. Pushing the filter condition improves query performance because it reduces the number of rows that the Data Integration Service reads from the source data object.

As in the non-optimized query, the Developer tool displays the ORDER BY clause as a Sorter transformation. It uses pass-through Expression transformations to enforce the data types you specify in the logical transformations.
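
With the filter pushed down, the statement that the Data Integration Service issues against the source data object resembles the following sketch, and the Sorter transformation then applies the ORDER BY to the returned rows. The column list is illustrative:

select CUSTOMER_ID, FIRST_NAME, LAST_NAME
from CUSTOMERS
where CUSTOMER_ID > 150000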

Viewing an SQL Query Plan

Display the SQL query plan to view a mapping-like representation of the SQL query you enter when you preview virtual table data.

1. Open an SQL data service that contains at least one virtual table.


2. Click the Data Viewer view.

3. Enter an SQL query in the Input window.

4. Optionally, select a data viewer configuration that contains the optimizer level you want to apply to the query.

5. Click Show Query Plan.

The Developer tool displays the SQL query plan for the query as you entered it on the Non-Optimized tab.

6. To view the optimized query, click the Optimized tab.

The Developer tool displays the optimized SQL query plan.


A P P E N D I X A

Datatype Reference

This appendix includes the following topics:

¨ Datatype Reference Overview

¨ DB2 for i5/OS, DB2 for z/OS, and Transformation Datatypes

¨ Flat File and Transformation Datatypes

¨ IBM DB2 and Transformation Datatypes

¨ Microsoft SQL Server and Transformation Datatypes

¨ Nonrelational and Transformation Datatypes

¨ ODBC and Transformation Datatypes

¨ Oracle and Transformation Datatypes

¨ XML and Transformation Datatypes

¨ Converting Data

Datatype Reference Overview

When you create a mapping, you create a set of instructions for the Data Integration Service to read data from a source, transform it, and write it to a target. The Data Integration Service transforms data based on dataflow in the mapping, starting at the first transformation in the mapping, and the datatype assigned to each port in a mapping.

The Developer tool displays two types of datatypes:

¨ Native datatypes. Specific to the relational table or flat file used as a physical data object. Native datatypes appear in the physical data object column properties.

¨ Transformation datatypes. Set of datatypes that appear in the transformations. They are internal datatypes based on ANSI SQL-92 generic datatypes, which the Data Integration Service uses to move data across platforms. The transformation datatypes appear in all transformations in a mapping.

When the Data Integration Service reads source data, it converts the native datatypes to the comparable transformation datatypes before transforming the data. When the Data Integration Service writes to a target, it converts the transformation datatypes to the comparable native datatypes.

When you specify a multibyte character set, the datatypes allocate additional space in the database to store characters of up to three bytes.


DB2 for i5/OS, DB2 for z/OS, and Transformation Datatypes

DB2 for i5/OS and DB2 for z/OS datatypes map to transformation datatypes in the same way that IBM DB2 datatypes map to transformation datatypes. The Data Integration Service uses transformation datatypes to move data across platforms.

The following table compares DB2 for i5/OS and DB2 for z/OS datatypes with transformation datatypes:

- Bigint (-9,223,372,036,854,775,808 to 9,223,372,036,854,775,807) maps to Bigint (-9,223,372,036,854,775,808 to 9,223,372,036,854,775,807; precision 19, scale 0).

- Char (1 to 254 characters) maps to String (1 to 104,857,600 characters).

- Char for bit data (1 to 254 bytes) maps to Binary (1 to 104,857,600 bytes).

- Date (0001 to 9999 A.D.; precision 19, scale 0; precision to the day) maps to Date/Time (Jan 1, 0001 A.D. to Dec 31, 9999 A.D.; precision to the nanosecond).

- Decimal (precision 1 to 31, scale 0 to 31) maps to Decimal (precision 1 to 28, scale 0 to 28).

- Float (precision 1 to 15) maps to Double (precision 15).

- Integer (-2,147,483,648 to 2,147,483,647) maps to Integer (-2,147,483,648 to 2,147,483,647; precision 10, scale 0).

- Smallint (-32,768 to 32,767) maps to Integer (-2,147,483,648 to 2,147,483,647; precision 10, scale 0).

- Time (24-hour time period; precision 19, scale 0; precision to the second) maps to Date/Time (Jan 1, 0001 A.D. to Dec 31, 9999 A.D.; precision to the nanosecond).

- Timestamp (26 bytes; precision 26, scale 6; precision to the microsecond) maps to Date/Time (Jan 1, 0001 A.D. to Dec 31, 9999 A.D.; precision to the nanosecond). See the note on extended-precision timestamps below.

- Varchar (up to 4,000 characters) maps to String (1 to 104,857,600 characters).

- Varchar for bit data (up to 4,000 bytes) maps to Binary (1 to 104,857,600 bytes).

Note: DB2 for z/OS Version 10 extended-precision timestamps map to transformation datatypes as follows:
- If scale=6, then precision=26 and the transformation datatype is date/time.
- If scale=0, then precision=19 and the transformation datatype is string.
- If scale=1-5 or 7-12, then precision=20+scale and the transformation datatype is string.

Unsupported DB2 for i5/OS and DB2 for z/OS Datatypes

The Developer tool does not support certain DB2 for i5/OS and DB2 for z/OS datatypes.

The Developer tool does not support DB2 for i5/OS and DB2 for z/OS large object (LOB) datatypes. LOB columns appear as unsupported in the relational table object, with a native type of varchar and a precision and scale of 0. The columns are not projected to customized data objects or outputs in a mapping.


Flat File and Transformation Datatypes

Flat file datatypes map to transformation datatypes that the Data Integration Service uses to move data across platforms.

The following table compares flat file datatypes to transformation datatypes:

- Bigint maps to Bigint (precision of 19 digits, scale of 0).

- Datetime maps to Date/Time (Jan 1, 0001 A.D. to Dec 31, 9999 A.D.; precision to the nanosecond).

- Double maps to Double (precision of 15 digits).

- Int maps to Integer (-2,147,483,648 to 2,147,483,647).

- Nstring maps to String (1 to 104,857,600 characters).

- Number maps to Decimal (precision 1 to 28, scale 0 to 28).

- String maps to String (1 to 104,857,600 characters).

When the Data Integration Service reads non-numeric data in a numeric column from a flat file, it drops the row and writes a message in the log. Also, when the Data Integration Service reads non-datetime data in a datetime column from a flat file, it drops the row and writes a message in the log.
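A minimal Python sketch of the row-handling rule described above; it is illustrative only, and the function and logger names are invented for this example rather than Data Integration Service APIs:

```python
# Illustrative sketch: rows whose value cannot be parsed for a numeric or
# datetime column are dropped and a message is written to the log.
import logging
from datetime import datetime

log = logging.getLogger("flat_file_reader")

def parse_row(row: dict, numeric_cols: list[str], datetime_cols: list[str]):
    """Return the row if every typed column parses; otherwise drop it and log."""
    try:
        for col in numeric_cols:
            float(row[col])                              # non-numeric data raises ValueError
        for col in datetime_cols:
            datetime.strptime(row[col], "%Y-%m-%d %H:%M:%S")
    except ValueError:
        log.warning("Dropping row with unparsable value: %s", row)
        return None                                      # row is dropped, not written to the target
    return row
```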

IBM DB2 and Transformation Datatypes

IBM DB2 datatypes map to transformation datatypes that the Data Integration Service uses to move data across platforms.

The following table compares IBM DB2 datatypes and transformation datatypes:

Datatype | Range | Transformation | Range
Bigint | -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807 | Bigint | -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807; precision 19, scale 0
Blob | 1 to 2,147,483,647 bytes | Binary | 1 to 104,857,600 bytes
Char | 1 to 254 characters | String | 1 to 104,857,600 characters
Char for bit data | 1 to 254 bytes | Binary | 1 to 104,857,600 bytes
Clob | 1 to 2,147,483,647 bytes | Text | 1 to 104,857,600 characters
Date | 0001 to 9999 A.D.; precision 19, scale 0 (precision to the day) | Date/Time | Jan 1, 0001 A.D. to Dec 31, 9999 A.D. (precision to the nanosecond)
Decimal | Precision 1 to 31, scale 0 to 31 | Decimal | Precision 1 to 28, scale 0 to 28
Float | Precision 1 to 15 | Double | Precision 15
Integer | -2,147,483,648 to 2,147,483,647 | Integer | -2,147,483,648 to 2,147,483,647; precision 10, scale 0
Smallint | -32,768 to 32,767 | Integer | -2,147,483,648 to 2,147,483,647; precision 10, scale 0
Time | 24-hour time period; precision 19, scale 0 (precision to the second) | Date/Time | Jan 1, 0001 A.D. to Dec 31, 9999 A.D. (precision to the nanosecond)
Timestamp | 26 bytes; precision 26, scale 6 (precision to the microsecond) | Date/Time | Jan 1, 0001 A.D. to Dec 31, 9999 A.D. (precision to the nanosecond)
Varchar | Up to 4,000 characters | String | 1 to 104,857,600 characters
Varchar for bit data | Up to 4,000 bytes | Binary | 1 to 104,857,600 bytes

Unsupported IBM DB2 Datatypes

The Developer tool does not support certain IBM DB2 datatypes.

The Developer tool does not support the following IBM DB2 datatypes:

- Dbclob
- Graphic
- Long Varchar
- Long Vargraphic
- Numeric
- Vargraphic

Microsoft SQL Server and Transformation Datatypes

Microsoft SQL Server datatypes map to transformation datatypes that the Data Integration Service uses to move data across platforms.

The following table compares Microsoft SQL Server datatypes and transformation datatypes:

Microsoft SQL Server | Range | Transformation | Range
Binary | 1 to 8,000 bytes | Binary | 1 to 104,857,600 bytes
Bit | 1 bit | String | 1 to 104,857,600 characters
Char | 1 to 8,000 characters | String | 1 to 104,857,600 characters
Datetime | Jan 1, 1753 A.D. to Dec 31, 9999 A.D.; precision 23, scale 3 (precision to 3.33 milliseconds) | Date/Time | Jan 1, 0001 A.D. to Dec 31, 9999 A.D. (precision to the nanosecond)
Decimal | Precision 1 to 38, scale 0 to 38 | Decimal | Precision 1 to 28, scale 0 to 28
Float | -1.79E+308 to 1.79E+308 | Double | Precision 15
Image | 1 to 2,147,483,647 bytes | Binary | 1 to 104,857,600 bytes
Int | -2,147,483,648 to 2,147,483,647 | Integer | -2,147,483,648 to 2,147,483,647; precision 10, scale 0
Money | -922,337,203,685,477.5808 to 922,337,203,685,477.5807 | Decimal | Precision 1 to 28, scale 0 to 28
Numeric | Precision 1 to 38, scale 0 to 38 | Decimal | Precision 1 to 28, scale 0 to 28
Real | -3.40E+38 to 3.40E+38 | Double | Precision 15
Smalldatetime | Jan 1, 1900 to June 6, 2079; precision 19, scale 0 (precision to the minute) | Date/Time | Jan 1, 0001 A.D. to Dec 31, 9999 A.D. (precision to the nanosecond)
Smallint | -32,768 to 32,767 | Integer | -2,147,483,648 to 2,147,483,647; precision 10, scale 0
Smallmoney | -214,748.3648 to 214,748.3647 | Decimal | Precision 1 to 28, scale 0 to 28
Sysname | 1 to 128 characters | String | 1 to 104,857,600 characters
Text | 1 to 2,147,483,647 characters | Text | 1 to 104,857,600 characters
Timestamp | 8 bytes | Binary | 1 to 104,857,600 bytes
Tinyint | 0 to 255 | Small Integer | Precision 5, scale 0
Varbinary | 1 to 8,000 bytes | Binary | 1 to 104,857,600 bytes
Varchar | 1 to 8,000 characters | String | 1 to 104,857,600 characters

Unsupported Microsoft SQL Server Datatypes

The Developer tool does not support certain Microsoft SQL Server datatypes.

The Developer tool does not support the following Microsoft SQL Server datatypes:

- Bigint
- Nchar
- Ntext
- Numeric Identity
- Nvarchar
- Sql_variant

Nonrelational and Transformation Datatypes

Nonrelational datatypes map to transformation datatypes that the Data Integration Service uses to move data across platforms.

Nonrelational datatypes apply to the following types of connections:

- Adabas
- IMS
- Sequential
- VSAM

The following table compares nonrelational datatypes and transformation datatypes:

Nonrelational | Precision | Transformation | Range
BIN | 10 | Binary | 1 to 104,857,600 bytes. You can pass binary data from a source to a target, but you cannot perform transformations on binary data. Binary data for COBOL or flat file sources is not supported.
CHAR | 10 | String | 1 to 104,857,600 characters. Fixed-length or varying-length string.
DATE | 10 | Date/Time | Jan 1, 0001 A.D. to Dec 31, 9999 A.D. Combined date/time value, with precision to the nanosecond.
DOUBLE | 18 | Double | Precision of 15 digits. Double-precision floating-point numeric value.
FLOAT | 7 | Double | Precision of 15 digits. Double-precision floating-point numeric value.
NUM8 | 3 | Small Integer | Precision of 5 and scale of 0. Integer value.
NUM8U | 3 | Small Integer | Precision of 5 and scale of 0. Integer value.
NUM16 | 5 | Small Integer | Precision of 5 and scale of 0. Integer value.
NUM16U | 5 | Integer | Precision of 10 and scale of 0. Integer value.
NUM32 | 10 | Integer | Precision of 10 and scale of 0. Integer value.
NUM32U | 10 | Double | Precision of 15 digits. Double-precision floating-point numeric value.
NUM64 | 19 | Decimal | Precision 1 to 28 digits, scale 0 to 28. Decimal value with declared precision and scale. Scale must be less than or equal to precision. If you pass a value with negative scale or declared precision greater than 28, the Data Integration Service converts it to a double.
NUM64U | 19 | Decimal | Precision 1 to 28 digits, scale 0 to 28. Decimal value with declared precision and scale. Scale must be less than or equal to precision. If you pass a value with negative scale or declared precision greater than 28, the Data Integration Service converts it to a double.
NUMCHAR | - | String | 1 to 104,857,600 characters. Fixed-length or varying-length string.
PACKED | 15 | Decimal | Precision 1 to 28 digits, scale 0 to 28. Decimal value with declared precision and scale. Scale must be less than or equal to precision. If you pass a value with negative scale or declared precision greater than 28, the Data Integration Service converts it to a double.
TIME | 5 | Date/Time | Jan 1, 0001 A.D. to Dec 31, 9999 A.D. Combined date/time value, with precision to the nanosecond.
TIMESTAMP | 5 | Date/Time | Jan 1, 0001 A.D. to Dec 31, 9999 A.D. Combined date/time value, with precision to the nanosecond.
UPACKED | 15 | Decimal | Precision 1 to 28 digits, scale 0 to 28. Decimal value with declared precision and scale. Scale must be less than or equal to precision. If you pass a value with negative scale or declared precision greater than 28, the Data Integration Service converts it to a double.
UZONED | 15 | Decimal | Precision 1 to 28 digits, scale 0 to 28. Decimal value with declared precision and scale. Scale must be less than or equal to precision. If you pass a value with negative scale or declared precision greater than 28, the Data Integration Service converts it to a double.
VARBIN | 10 | Binary | 1 to 104,857,600 bytes. You can pass binary data from a source to a target, but you cannot perform transformations on binary data. Binary data for COBOL or flat file sources is not supported.
VARCHAR | 10 | String | 1 to 104,857,600 characters. Fixed-length or varying-length string.
ZONED | 15 | Decimal | Precision 1 to 28 digits, scale 0 to 28. Decimal value with declared precision and scale. Scale must be less than or equal to precision. If you pass a value with negative scale or declared precision greater than 28, the Data Integration Service converts it to a double.


ODBC and Transformation Datatypes

ODBC datatypes map to transformation datatypes that the Data Integration Service uses to move data across platforms.

The following table compares ODBC datatypes, such as Microsoft Access or Excel, to transformation datatypes:

Datatype | Transformation | Range
Bigint | Bigint | -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807; precision 19, scale 0
Binary | Binary | 1 to 104,857,600 bytes
Bit | String | 1 to 104,857,600 characters
Char | String | 1 to 104,857,600 characters
Date | Date/Time | Jan 1, 0001 A.D. to Dec 31, 9999 A.D. (precision to the nanosecond)
Decimal | Decimal | Precision 1 to 28, scale 0 to 28
Double | Double | Precision 15
Float | Double | Precision 15
Integer | Integer | -2,147,483,648 to 2,147,483,647; precision 10, scale 0
Long Varbinary | Binary | 1 to 104,857,600 bytes
Nchar | String | 1 to 104,857,600 characters
Nvarchar | String | 1 to 104,857,600 characters
Ntext | Text | 1 to 104,857,600 characters
Numeric | Decimal | Precision 1 to 28, scale 0 to 28
Real | Double | Precision 15
Smallint | Integer | -2,147,483,648 to 2,147,483,647; precision 10, scale 0
Text | Text | 1 to 104,857,600 characters
Time | Date/Time | Jan 1, 0001 A.D. to Dec 31, 9999 A.D. (precision to the nanosecond)
Timestamp | Date/Time | Jan 1, 0001 A.D. to Dec 31, 9999 A.D. (precision to the nanosecond)
Tinyint | Integer | -2,147,483,648 to 2,147,483,647; precision 10, scale 0
Varbinary | Binary | 1 to 104,857,600 bytes
Varchar | String | 1 to 104,857,600 characters

Oracle and Transformation Datatypes

Oracle datatypes map to transformation datatypes that the Data Integration Service uses to move data across platforms.

The following table compares Oracle datatypes and transformation datatypes:

Oracle | Range | Transformation | Range
Blob | Up to 4 GB | Binary | 1 to 104,857,600 bytes
Char(L) | 1 to 2,000 bytes | String | 1 to 104,857,600 characters
Clob | Up to 4 GB | Text | 1 to 104,857,600 characters
Date | Jan. 1, 4712 B.C. to Dec. 31, 9999 A.D.; precision 19, scale 0 | Date/Time | Jan 1, 0001 A.D. to Dec 31, 9999 A.D. (precision to the nanosecond)
Long | Up to 2 GB | Text | 1 to 104,857,600 characters. If you include Long data in a mapping, the Data Integration Service converts it to the transformation String datatype and truncates it to 104,857,600 characters.
Long Raw | Up to 2 GB | Binary | 1 to 104,857,600 bytes
Nchar | 1 to 2,000 bytes | String | 1 to 104,857,600 characters
Nclob | Up to 4 GB | Text | 1 to 104,857,600 characters
Number | Precision of 1 to 38 | Double | Precision of 15
Number(P,S) | Precision of 1 to 38, scale of 0 to 38 | Decimal | Precision of 1 to 28, scale of 0 to 28
Nvarchar2 | 1 to 4,000 bytes | String | 1 to 104,857,600 characters
Raw | 1 to 2,000 bytes | Binary | 1 to 104,857,600 bytes
Timestamp | Jan. 1, 4712 B.C. to Dec. 31, 9999 A.D.; precision 19 to 29, scale 0 to 9 (precision to the nanosecond) | Date/Time | Jan 1, 0001 A.D. to Dec 31, 9999 A.D. (precision to the nanosecond)
Varchar | 1 to 4,000 bytes | String | 1 to 104,857,600 characters
Varchar2 | 1 to 4,000 bytes | String | 1 to 104,857,600 characters
XMLType | Up to 4 GB | Text | 1 to 104,857,600 characters

Number(P,S) Datatype

The Developer tool supports Oracle Number(P,S) values with negative scale. However, it does not support Number(P,S) values with scale greater than precision 28 or a negative precision.

If you import a table with an Oracle Number with a negative scale, the Developer tool displays it as a Decimal datatype. However, the Data Integration Service converts it to a double.
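A small Python sketch of the decimal handling rule stated here and in the nonrelational table; it is illustrative only, not product code, and the function name is invented for this example:

```python
# Illustrative sketch: a declared precision above 28 or a negative scale causes
# the value to be processed as a double rather than a decimal.

def effective_transformation_type(precision: int, scale: int) -> str:
    if precision > 28 or scale < 0:
        return "double"     # converted to double-precision floating point
    return "decimal"        # kept as decimal with the declared precision and scale

assert effective_transformation_type(28, 2) == "decimal"
assert effective_transformation_type(38, 2) == "double"   # precision too large
assert effective_transformation_type(10, -3) == "double"  # negative scale
```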

Char, Varchar, Clob Datatypes

When the Data Integration Service uses the Unicode data movement mode, it reads the precision of Char, Varchar, and Clob columns based on the length semantics that you set for columns in the Oracle database.

If you use the byte semantics to determine column length, the Data Integration Service reads the precision as the number of bytes. If you use the char semantics, the Data Integration Service reads the precision as the number of characters.
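The difference only matters for multibyte data. A brief Python sketch, illustrative only, with an example value chosen for this illustration:

```python
# Illustrative sketch: char semantics counts characters, byte semantics counts
# encoded bytes, and the two differ for multibyte characters.
value = "Müller"                                      # 6 characters
char_precision_used = len(value)                      # 6 characters consumed
byte_precision_used = len(value.encode("utf-8"))      # 7 bytes consumed ("ü" takes 2 bytes)
print(char_precision_used, byte_precision_used)       # 6 7
```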

Unsupported Oracle Datatypes

The Developer tool does not support certain Oracle datatypes.

The Developer tool does not support the following Oracle datatypes:

- Bfile
- Interval Day to Second
- Interval Year to Month
- Mslabel
- Raw Mslabel
- Rowid
- Timestamp with Local Time Zone
- Timestamp with Time Zone


XML and Transformation Datatypes

XML datatypes map to transformation datatypes that the Data Integration Service uses to move data across platforms.

The Data Integration Service supports all XML datatypes specified in the W3C May 2, 2001 Recommendation. For more information about XML datatypes, see the W3C specifications for XML datatypes at http://www.w3.org/TR/xmlschema-2.

The following table compares XML datatypes to transformation datatypes:

Datatype | Transformation | Range
anyURI | String | 1 to 104,857,600 characters
base64Binary | Binary | 1 to 104,857,600 bytes
boolean | String | 1 to 104,857,600 characters
byte | Integer | -2,147,483,648 to 2,147,483,647
date | Date/Time | Jan 1, 0001 A.D. to Dec 31, 9999 A.D. (precision to the nanosecond)
dateTime | Date/Time | Jan 1, 0001 A.D. to Dec 31, 9999 A.D. (precision to the nanosecond)
decimal | Decimal | Precision 1 to 28, scale 0 to 28
double | Double | Precision of 15 digits
duration | String | 1 to 104,857,600 characters
ENTITIES | String | 1 to 104,857,600 characters
ENTITY | String | 1 to 104,857,600 characters
float | Double | Precision of 15 digits
gDay | String | 1 to 104,857,600 characters
gMonth | String | 1 to 104,857,600 characters
gMonthDay | String | 1 to 104,857,600 characters
gYear | String | 1 to 104,857,600 characters
gYearMonth | String | 1 to 104,857,600 characters
hexBinary | Binary | 1 to 104,857,600 bytes
ID | String | 1 to 104,857,600 characters
IDREF | String | 1 to 104,857,600 characters
IDREFS | String | 1 to 104,857,600 characters
int | Integer | -2,147,483,648 to 2,147,483,647
integer | Integer | -2,147,483,648 to 2,147,483,647
language | String | 1 to 104,857,600 characters
long | Bigint | Precision of 19 digits, scale of 0
Name | String | 1 to 104,857,600 characters
NCName | String | 1 to 104,857,600 characters
negativeInteger | Integer | -2,147,483,648 to 2,147,483,647
NMTOKEN | String | 1 to 104,857,600 characters
NMTOKENS | String | 1 to 104,857,600 characters
nonNegativeInteger | Integer | -2,147,483,648 to 2,147,483,647
nonPositiveInteger | Integer | -2,147,483,648 to 2,147,483,647
normalizedString | String | 1 to 104,857,600 characters
NOTATION | String | 1 to 104,857,600 characters
positiveInteger | Integer | -2,147,483,648 to 2,147,483,647
QName | String | 1 to 104,857,600 characters
short | Integer | -2,147,483,648 to 2,147,483,647
string | String | 1 to 104,857,600 characters
time | Date/Time | Jan 1, 0001 A.D. to Dec 31, 9999 A.D. (precision to the nanosecond)
token | String | 1 to 104,857,600 characters
unsignedByte | Integer | -2,147,483,648 to 2,147,483,647
unsignedInt | Integer | -2,147,483,648 to 2,147,483,647
unsignedLong | Bigint | Precision of 19 digits, scale of 0
unsignedShort | Integer | -2,147,483,648 to 2,147,483,647

Converting Data

You can convert data from one datatype to another.

To convert data from one datatype to another, use one of the following methods:

- Pass data between ports with different datatypes (port-to-port conversion).
- Use transformation functions to convert data.
- Use transformation arithmetic operators to convert data.

Port-to-Port Data Conversion

The Data Integration Service converts data based on the datatype of the port. Each time data passes through a port, the Data Integration Service looks at the datatype assigned to the port and converts the data if necessary.

When you pass data between ports of the same numeric datatype and the data is transferred between transformations, the Data Integration Service does not convert the data to the scale and precision of the port that the data is passed to. For example, you transfer data between two transformations in a mapping. If you pass data from a decimal port with a precision of 5 to a decimal port with a precision of 4, the Data Integration Service stores the value internally and does not truncate the data.

You can convert data by passing data between ports with different datatypes. For example, you can convert a string to a number by passing it to an Integer port.

The Data Integration Service performs port-to-port conversions between transformations and between the last transformation in a dataflow and a target.

The following table describes the port-to-port conversions that the Data Integration Service performs:

Datatype | Bigint | Integer | Decimal | Double | String, Text | Date/Time | Binary
Bigint | No | Yes | Yes | Yes | Yes | No | No
Integer | Yes | No | Yes | Yes | Yes | No | No
Decimal | Yes | Yes | No | Yes | Yes | No | No
Double | Yes | Yes | Yes | No | Yes | No | No
String, Text | Yes | Yes | Yes | Yes | Yes | Yes | No
Date/Time | No | No | No | No | Yes | Yes | No
Binary | No | No | No | No | No | No | Yes
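The Yes/No cells in the table can be read as a lookup from the source port datatype to the target port datatype. The following Python sketch is illustrative only, not product code; the dictionary and the can_convert helper are names invented for this example, and "string" stands for the String and Text datatypes:

```python
# Illustrative sketch: encode the port-to-port conversion matrix above and
# answer whether a given source/target port pair is a supported conversion.

CONVERSIONS = {
    "bigint":    {"bigint": False, "integer": True,  "decimal": True,  "double": True,  "string": True,  "date/time": False, "binary": False},
    "integer":   {"bigint": True,  "integer": False, "decimal": True,  "double": True,  "string": True,  "date/time": False, "binary": False},
    "decimal":   {"bigint": True,  "integer": True,  "decimal": False, "double": True,  "string": True,  "date/time": False, "binary": False},
    "double":    {"bigint": True,  "integer": True,  "decimal": True,  "double": False, "string": True,  "date/time": False, "binary": False},
    "string":    {"bigint": True,  "integer": True,  "decimal": True,  "double": True,  "string": True,  "date/time": True,  "binary": False},
    "date/time": {"bigint": False, "integer": False, "decimal": False, "double": False, "string": True,  "date/time": True,  "binary": False},
    "binary":    {"bigint": False, "integer": False, "decimal": False, "double": False, "string": False, "date/time": False, "binary": True},
}

def can_convert(from_type: str, to_type: str) -> bool:
    """Return True if the matrix marks the from/to pair as a supported conversion."""
    return CONVERSIONS[from_type][to_type]

# Example: a String port can feed an Integer port, but a Date/Time port cannot.
print(can_convert("string", "integer"))     # True
print(can_convert("date/time", "integer"))  # False
```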

