
COGNOS Development Standards and Guidelines

Version : 1.2

TCS Confidential


Table of Contents

1 Naming Conventions
1.1 Impromptu Catalog
1.2 Impromptu Reports
1.3 PowerPlay Models and Cubes
1.4 Visualizer
1.5 KPI Business Pack
1.6 Upfront
1.7 Access Manager
1.8 Environment
1.8.1 Developer Workstation Directory Structure
1.8.2 Server Directory Structure
1.9 Visual consistency
1.10 Formatting
2 Impromptu
2.1 Catalog
2.1.1 Dimension Folders
2.1.2 Fact Folders
2.2 Drill Through Reports
2.2.1 Report Components
3 PowerPlay
3.1 Data Sources
3.1.1 Dimensions
3.1.2 Facts
3.2 Models
3.2.1 Level Properties
3.2.2 Number of Dimensions
3.2.3 Number of Levels/Categories
3.2.4 Number of Measures
3.2.5 Incremental Build
3.2.6 Compressed Cubes
3.2.7 Cross-Tab Caching
3.2.8 Cube Groups (Detail & Summary)
3.2.9 Data Source Properties
4 Visualizer
5 KPI Business Pack
6 Upfront
7 Access Manager
8 Design Guidelines
8.1 Common Dimensions
8.1.1 Required Dimensions
8.1.2 Optional Dimensions
8.2 Performance
9 Miscellaneous Tips & Techniques
9.1 Nth Percentile Calculation
9.1.1 Quartile in MS-Excel
9.1.2 Implementation in Impromptu
9.2 V% / Vprime Calculation
9.3 Cumulative Trends
9.4 Alerts
9.5 Partitioning
9.6 Oracle
9.6.1 Fetch Settings
9.6.2 Numeric Formats
9.7 General
9.7.1 Currency Conversion
9.7.2 Fiscal Calendar
9.7.3 Dimension Hierarchy
9.7.4 Modality
9.7.5 Company


1 Naming Conventions

To leverage the features of the Cognos suite, a strict naming convention must be adhered to. This allows for optimal drill-across and drill-through functionality. All file names should be in lowercase (with _ as a separator) to simplify UNIX case-sensitivity issues. All other names (i.e. Impromptu Folders, Dimension Levels) should be Mixed Case with a space separator.

1.1 Impromptu Catalog

Below are the naming conventions used for existing catalogs. The general convention is to apply the project prefix to all objects. Genesis projects should use the following names for the catalogs:

genap - Accounts Payable
genar - Accounts Receivable
genbom - Bills of Material
gengl - General Ledger
geninv - Inventory
genmfg - Manufacturing/Finance
genom - Order Management
genpo - Purchase Orders
genwip - Work in Progress

Any additional catalogs will follow the same convention: the project prefix, followed by 2-3 characters that describe the functional area the catalog is designed for.

1.2 Impromptu Reports

Report names should use the following convention:

genap_aaaa_dim: where "dim" signifies a dimension or structural query for eventual use in building a PowerPlay model. The "aaaa" is a description corresponding to the dimension; examples could be Modality or Product.

genap_aaaa_fact: where "fact" signifies a transactional query for eventual use in building a PowerPlay model. The "aaaa" is a description corresponding to the transaction data; examples could be Orders or Sales.

The two naming conventions above are used for the IMR files that generate .iqd files.

genap_aaaa_drill: where "drill" signifies a drill-through report. The "aaaa" is a description corresponding to the function of the report.

Impromptu Query Definition files should always be created from an IMR and use the same naming conventions. Under no circumstances are .iqd files to be edited independently from the Impromptu .imr source file (or Framework Manager query subject).
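As an informal illustration only (not an official validation tool), report file names could be checked against this convention with a short script. The pattern, the sample names and the counterexample below are constructed from the examples above and are assumptions, not mandated values:

import re

# Loose check for the <catalog prefix>_<description>_<dim|fact|drill>.imr convention described above.
NAME_PATTERN = re.compile(r"^[a-z]{3,}_[a-z0-9_]+_(dim|fact|drill)\.imr$")

for name in ["genap_modality_dim.imr", "genap_orders_fact.imr", "GenAP_Orders.imr"]:
    status = "ok" if NAME_PATTERN.match(name) else "does not follow the convention"
    print(f"{name}: {status}")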


For all new projects, Impromptu Query Definition files are created from the ReportNet Framework Manager model. If Framework Manager cannot support a requirement that was supported in Impromptu, the BI Operations team will investigate and take it up with Cognos Support.

1.3 PowerPlay Models and Cubes

Models will follow this naming convention: use the module name as above (project name), followed by an alpha name that briefly describes the intended audience.

For example: edw_ar_aging, or GENAP_BAL, which would be for Genesis Accounts Payable, Balances Owed.

In the instances when there are multiple cubes in one area, such as AP, use the remainder of the naming convention to clarify its usage. Note that the naming convention should be followed through to the cube name as well. This will facilitate finding the “stream of evidence” from the model to the cube name to the published cube name in PPES.

Model Name: edwargenar_aging.mdl and edwargenar_aging.pyi (make sure to save both versions)
Cube Name: edwargenar_aging.mdc (give this name in the model for the cube)
Published Cube Name: edwargenar_aging (be sure to add a description)

1.4 Visualizer
TBD

1.5 KPI Business Pack
TBD

1.6 Upfront
TBD

1.7 Access Manager
TBD

1.8 Environment

1.8.1 Developer Workstation Directory Structure

All developers should use the following directory structure to minimize the impact in a team development environment.

\cognoswork\project n (name of project)

\architect (contains Architect model files, used for Cognos Query - .cem, .cemx)
\catalog (contains the Impromptu catalog files - .cat)
\csv (contains any flat files used for Transformer - .csv, .asc, .xls, .txt)
\images (contains any images that are used within Impromptu Web Reports or PowerPlay portable reports - .bmp, .gif, .jpg)


\imr (contains the Impromptu reports - .imr)
\iqd (contains the Impromptu query definitions used to build the PowerPlay cubes - .iqd)
\model (contains the Transformer models for building cubes - .mdl, .pyi)
\viz (contains Cognos Visualizations - .vis)

1.8.2 Server Directory Structure

All projects must have the following directory structure on the server. Your project Architect (promoter) will then be able to place the project objects in the appropriate directory for testing and migration to PRD. This structure is created with the UNIX script create_struct.sh; an illustrative sketch of the same layout appears after the listing below.

\cognoswork\project n (name of project)

\architect (contains Architect model files, used for Cognos Query - .cem, .cemx)
\backup (contains the backups of the cubes)
\build (contains the cube file as it is being built)
\catalog (contains the Impromptu catalog files - .cat)
\csv (contains any flat files used for Transformer - .csv, .asc, .xls, .txt)
\cubes (contains the PowerPlay cubes - .mdc)
\flag (contains flag files noting database population and mirroring completion)
\images (contains any images that are used within Impromptu Web Reports or PowerPlay portable reports - .bmp, .gif, .jpg)
\imr (contains the Impromptu reports - .imr)
\iqd (contains the Impromptu query definitions used to build the PowerPlay cubes - .iqd)
\iwr (contains the Impromptu reports and catalog that will be published to Upfront - .cat, .imr)
\logs (contains the log files generated when Transformer builds a cube - .log)
\model (contains the Transformer models for building cubes - .mdl, .pyi)
\ppx (contains PowerPlay reports against the cubes in Enterprise Server - .ppr, .ppx)
\reportstore (the location where IWR publishes the reports to during the publishing process)
\script (contains shell scripts for automation)
\vis (contains Cognos Visualizations - .vis)
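The create_struct.sh script remains the script of record for creating this layout. Purely as an illustration of the same structure, an equivalent Python sketch might look like the following; the root path and project name are example values, not mandated ones:

import os

SUBDIRS = [
    "architect", "backup", "build", "catalog", "csv", "cubes", "flag",
    "images", "imr", "iqd", "iwr", "logs", "model", "ppx",
    "reportstore", "script", "vis",
]

def create_project_structure(root, project):
    # Create cognoswork/<project> plus the standard server subdirectories listed above.
    base = os.path.join(root, "cognoswork", project)
    for sub in SUBDIRS:
        os.makedirs(os.path.join(base, sub), exist_ok=True)

create_project_structure("/data", "genap")  # example root and project name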

1.9 Visual consistency

1.10 Formatting

Font
  Font: Arial
  Font style: Regular (Data); Bold (Summation, Report Heading, Column Heading)
  Size: 8 (Data); 8 (Column Heading, Summation); 12 (Report Heading)
  Effects: None
  Color: Black (Data, Report Heading); White (Column Heading)

Border
  Border: Default
  Border Line Style: ½ pt
  Color: Black

Alignment
  Left (Data, Column Heading); Right (Amount & Summation); Center (Report Heading)

Patterns
  Pattern: Clear (2nd pattern)
  Foreground: White (Data, Report Heading); Navy (Column Heading); Silver (Total)
  Background: White

Detail Numeric Fields (Fractional)
  Normal: 0.0000; Positive: 0.0000; Negative: (0.0000); Zero: 0.0000; Missing: 0

Data & Summation
  Normal: 0.00; Positive: 0.00; Negative: (0.00); Zero: 0.00; Missing: 0

Header
  Report-oriented main information to be displayed. (Ex: Context to be selected: "Aging_f")

Footer
  Enter the page number.

Currency Display
  Transactional Amount is displayed according to the prior format of Data & Summation.
  Functional Amount is displayed according to the prior format of Data & Summation, with the "$" (dollar) sign on the left side, e.g. $1000.00.


2 Impromptu

2.1 Catalog

Catalogs that are created for building PowerCubes and for IWR reports must contain a separate folder for users, holding the relevant columns from the fact table and the joined dimension tables.

For example:

If the above structure were included in a catalog, the appropriate joins would be made on the tables. All common dimension tables should be included in the same folder as the relevant fact table. Most of the tables have geglb as their prefix; remove this part of the name from the folder name in the catalog, since users prefer to see names that are meaningful rather than cryptic. Note that even if you remove the prefix from the Folder view, the fully qualified names will still be used in the .icr report.

2.1.1 Dimension Folders

A folder must be created that maintains all the Dimension/Structural tables. All the data sources for the dimensions should be created only using these folders (unless a dimension is required to be created with manual categories). This will ensure that all column names used for building dimensions and levels in PowerPlay have the same names across the functional catalogs.

1. Add all the Dimension Tables / Views to the Tables list in the Catalog-Tables window.
2. Do not make any joins to these tables in the Catalog-Joins window.
3. Create a new folder with the name "ALL Dimensions".
4. Move all the folders that were created in step 1 to this new folder.


2.1.2 Fact Folders

Next, developers need to create a separate folder that represents the hub of the star schema, as depicted above. Then include the related, joined dimension tables underneath the fact table folder as alias folders and label them accordingly. Keep adding the other tables in the star schema to create a structure that groups all the dimension tables within the fact table folder.

This structure gives clarity on which dimension tables can be used with which fact tables. Whenever additional attributes from the dimensions are needed for a given fact table, they should be taken from the corresponding alias dimension folder. This ensures that no loops are formed in the queries in the event there is a need to join multiple fact tables. However, an effort should be made to minimize the need to join multiple fact tables, as this can cause performance issues. With the combination of Dimension and Fact Folders, developers should be able to create the main fact query and the dimensional queries. The key columns will be used at the bottom of the dimensions, with any hierarchy used to create multiple levels in the dimension. If the fact table is used with no joined columns from the dimension tables, the cube builds will be very quick; in most cases, however, some joins will be needed.

1. Add one of the required fact tables to the Tables list in the Catalog-Tables window.

2. Add all the dimension tables required to be used with the fact table added in step 1, as aliases, using the following naming convention:
<Dimension table name> for <Fact group>

3. Create joins between the new alias dimension tables (step 2) and the fact table (step 1).

4. Create a new folder for the functional group of the fact table. Ex. Revenue, Orders, Quotes etc.

5. Move all the folders that were created as a result of Steps 1 & 2 into this new folder.


6. If there are calculations needed, such as currency conversion, then there will need to be alias versions of some dimension tables added to the catalog. Please use the convention: DIMENSION NAME’ALIAS FOR fact table’. This will help identify the source.

7. Join this version of the dimension table to the fact table by the known key value(s). Create a separate folder, ALIAS DIMENSIONS, to hold these folders, and move the alias dimensions into the ALIAS DIMENSIONS folder. Use the appropriate columns from the fact table and from the alias version of the dimension table to compute the values needed.

8. Since the users will likely want to see the dimension column name instead of the unique key value, alias tables for the dimensions needed should be created. Again, use the naming convention DIMENSION NAME 'ALIAS FOR fact table'. Store these in the ALIAS DIMENSIONS folder. Make the necessary joins between the fact tables and the alias dimension tables. Use the alias dimension table as the source for the description column for the key column from the fact table.

2.2 Drill Through Reports

To create a drill-through report, start with the fact table Impromptu report. This will have all the same column names that are needed to join the cube to the drill-through IMR. Note: if name changes are needed for the column names in the data source queries in the Transformer model, make the changes in the Impromptu Query window. The names will then be identical in the Impromptu .iqd, in the Transformer data source query, and in the drill-through report. Before submitting a drill-through report, ensure that its catalog is not a "Distributed Catalog".


2.2.1 Report Components

The use of DECODE statements is discouraged, as it has a negative impact on performance. All translations of this nature should be pushed to the ETL (Extract, Transform, Load) layer. Make certain that appropriate indexes exist on the tables and are being utilized by the reports. Reports that will be used for building a PowerCube should not use SORT or GROUP BY operators, as this is performed by the Transformer cube build process.


3 PowerPlay

3.1 Data Sources

There are two main classes of queries in Transformer: dimension queries and fact queries. Dimension queries are composed of columns whose values build category structures within a Transformer model. The columns of dimension queries are associated with the dimensions and levels of the model, and provide data that is used to generate categories within these dimensions. The most important point to remember about dimension queries is that they do not contain columns that represent measure values. Instead, they establish category structures, provide labels, sort values, descriptions, and so forth. A dimension query is associated with a particular dimension, and provides all of the columns necessary for "populating" it with categories.

3.1.1 Dimensions

Generally, dimension queries may not change as frequently over time as fact queries. For this reason, they may not have to be re-executed each time a cube is generated. For example, a structure query representing a geographic dimension may only need to be executed once if the geography for a model does not change.

3.1.2 Facts

Fact queries provide measure values for a cube. The columns in a fact query are associated with measures, and with unique levels in the model. Unlike dimension queries, fact queries will change frequently, representing the latest data to be added to cubes. These queries will be the main drivers of cube generation, and are designed to minimize the data source processing time during cube production. Consequently, these queries should be designed to have small, concise records, with the absolute minimum amount of information required to add new data to the PowerCubes.

3.2 Models

Within the Transformer model, all dimensions must contain a unique category at the lowest level. This unique category (“Account Key” for example from Dimension Hierarchy Queries above) should exist both in the dimension hierarchy query AND all fact queries.

1. Using the Category Source Panel, populate the boxes as follows:


b. The Source value contains the code for the applicable level (i.e. "Account n" from the Dimension Hierarchy Queries example above).
c. The Label value contains the description for the applicable level (i.e. "Account n Desc" from the Dimension Hierarchy Queries example above).
d. The Description value is an optional field and contains the description for this category label that will be displayed in the help window within PowerPlay Web.
e. The Short Name is an optional field and will only be applicable in PowerPlay Client reports.
f. If left empty, the Category Code will inherit the value from the Source. This should remain empty under most circumstances.
NOTE: at the lowest level of the dimension, the Unique checkbox must be checked.

2. Using the Category Order By Panel, populate the box as follows:


a. The Order By value will generally contain the Label value from step 1 above.
b. The Sort Order radio button should be selected as applicable (either Ascending or Descending).
c. The Sort As radio button should be selected as applicable (either Alphabetic or Numeric).

3. The model name will be used as the name of the cube file (.mdc), and therefore the model name should be populated as follows:
a. Within the model, select File and click on Model Properties.
b. In the Model Name textbox, enter a valid name:
i. The model name should NOT contain any spaces (use _ to substitute).

The model name will be the name of the .mdc file (which also appears in the Upfront title bar within the PowerPlay cube), so give it a meaningful name.

3.2.1 Level Properties

Specify a "Category Code" whenever possible. By default this is the same as the "Source Value". It is used across the cube as a unique reference indicator and is required for manual categories. With the separation of structural and fact queries, PowerPlay must be able to identify each category in the level by its source alone. Therefore, levels should be unique, especially when there is a convergence level. If this is not implemented, invalid results will occur when Transformer attempts to associate values with these categories.

3.2.2 Number of Dimensions

Kimball's "Dimensional Scorecard" implies that 10 dimensions are practical in a single analysis module and that drill-across functionality should be deployed to optimize the business solution. Whenever possible, single-level dimensions should be avoided.


3.2.3 Number of Levels/Categories

Drill-down performance depends on the number of categories that must be displayed and simultaneously rolled up. As such, each dimension should be limited to no more than 5 levels, and the number of categories in a hierarchy should ideally be distributed with a 1 to 64 ratio and should not exceed 1000.

3.2.4 Number of Measures

The total number of measures requested in a cross-tab display will impact performance. Each measure can add 10-15% to the overall cube size, which is also known to relate directly to cube performance. The total number of measures should be limited to 10.

3.2.5 Incremental Build

Incremental update is an effective way to optimize cube production by adding only the newest data to an existing PowerCube without reprocessing all previous data. Updates are much smaller than an entire rebuild of the cube and usually complete much more quickly. Incremental update can only be employed for cubes that have a static structure and where incremental transaction data is available.

This is not applicable to the GENESIS EDW.

3.2.6 Compressed Cubes

This feature is only available for 32-bit Windows platforms.

This is not applicable to the GENESIS EDW.

3.2.7 Cross-Tab Caching

This feature caches the initial cross-tab of a PowerCube view in order to improve performance when a PowerPlay user first opens the PowerCube.

3.2.8 Cube Groups (Detail & Summary)

3.2.9 Data Source Properties

When Enable Multi-Processing is set, Transformer will use up to two processors to load-balance the PowerCube generation process. Turning on this feature is highly recommended for queries that have calculated columns defined in Transformer. However, this has not proven to be advantageous on the GEMS infrastructure.


4 Visualizer

TBD


5 KPI Business Pack

TBD


6 Upfront

The standard NewsBox structure must be followed for placing Cognos objects in Upfront.

The NewsIndex will contain individual project NewsBoxes and Administration NewsBoxes. Each project NewsBox can contain further newsboxes and other objects as required.

The Administration folder is secured so that only the Architects who publish Cognos objects to Upfront have access.

Upfront publishing standards

PowerPlay: All cubes must be published to "//NewsIndex/Administration/PublishedCubes/<Project Folder>/<modelname>/". If a link to the base cube is required in your project folder, copy a shortcut into your project folder. The same applies to PPXs that are published to PowerPlay Enterprise Server. All other references to this cube will be "saved views" and may be placed directly into your project folder structure.

IWR: All drill-through reportsets must be published to "//NewsIndex/Administration/IWRDTR/<Project Folder>". All source objects that reference this reportset can then use this common location. All standalone reportsets must be published to "//NewsIndex/Administration/PublishedReports/<Project Folder>/<reportsetname>". Place a shortcut to these reports into your project folder.

VIZ: All visualizations must be published to "//NewsIndex/Administration/PublishedViz/<Project Folder>". A copy (not a shortcut, as is required for PowerPlay and IWR) of these links can then be placed into your project folder.


7 Access Manager

Implementation of security has been limited to associating the DBMS logon with the environment. There are no requirements for UserClass security at this point. In the near term, UserClass views may be defined to clarify the content presented to users. This implementation should be defined such that only one UserClass needs to be assigned to a user.

Two types of security can be implemented for Series 7 objects:

1. Dynamic Security
2. Static Security

Dynamic security is achieved by generating and applying security scripts dynamically after reading the database. In the case of cubes, security-related statements can be added to the MDL script, which is executed before the cube build to generate the model. The resulting model will have dimensional views for each UserClass. The security script is regenerated on each scheduled run by reading the stored parameters from the database.

Static security is achieved by assigning a UserClass to a cube, NewsItem, NewsBox, or report.

Users and UserClasses for dynamic security are created through batch maintenance scripts. For static security objects, UserClasses are created manually.


8 Design Guidelines

8.1 Common Dimensions

To provide the benefit of drill-across and drill-through while passing along the filters, strict naming conventions need to be adhered to. The following are the common dimensions required for GENESIS. Additional dimension definitions are included but are optional for the cubes.

8.1.1 Required Dimensions

TIME

Fiscal Year

Fiscal Quarter

Fiscal Month

Fiscal Week

Calendar Day

NOTE: For the Time Dimension, always use the "Tim Csm Gems Fiscal D V" folder in the catalog as the source. The Column Name on the transaction data source that pertains to the date needs to be labeled "Calendar Date" so that the dimension ties out. Rename the Dimension relative to the date referenced. Include a "Current Period" data source in the model that includes one row of data from the same source.

Geography

Pole

Region

Zone

LCT

*** This is being redefined as a Cost Center Hier

PRODUCT

Product Segment

PSI Group

PSI Code

Organization

Business Group

Set of Books

Legal Entity

Operating Unit

Modality

Parent Modality

Submodality

Modality

Material Class

8.1.2 Optional Dimensions

The following are common dimension definitions that are not required in all models.

Sales Rep
Operating Unit
Sales Rep

Customer
Customer Type
Customer Category
Customer Class

Rev Rec Terms
Rev Rec Terms

As a starting point for all projects, a baseline set of CATs, IMRs, IQDs and MDLs with these dimensions prebuilt can be found in QuickPlace.

8.2 Performance

Dependency factors include:
Speed of the network
Size of the PPDS cache
Operations performed in the report, such as calculations, ranking, exception highlighting and zero suppression

Always use auto-partitioning.


9 Miscellaneous Tips & Techniques

9.1 Nth Percentile Calculation

The Nth percentile is a statistical calculation available in MS-Excel. The following section (excerpted from the Microsoft Corporation web site) describes the algorithm for the Quartile function:

9.1.1 Quartile in MS-Excel

In Microsoft Excel, the QUARTILE() function returns a specified quartile in an array of numeric values. QUARTILE() accepts two arguments: Array and Quart. Array is the range of values for which you want to find the quartile value. Quart indicates the value you want to return, where:
0 - Minimum value (same as MIN())
1 - 1st quartile - 25th percentile
2 - 2nd quartile - 50th percentile (same as MEDIAN())
3 - 3rd quartile - 75th percentile
4 - 4th quartile - 100th percentile (same as MAX())

NOTE: In Microsoft Excel versions 5.0 and later, you can use the Function Wizard to insert the QUARTILE() function, by clicking Function on the Insert menu. The Function Wizard gives you information about the function, as well as required and optional arguments.

Following is the algorithm used to calculate QUARTILE():

1. Find the kth smallest member in the array of values, where:
k = ((quart/4)*(n-1)) + 1
If k is not an integer, truncate it but store the fractional portion (f) for use in step 3.
quart = value between 0 and 4, depending on which quartile you want to find
n = number of values in the array

2. Find the smallest data point in the array of values that is greater than the kth smallest, the (k+1)th smallest member.

3. Interpolate between the kth smallest and the (k+1)th smallest values:
Output = a[k] + (f * (a[k+1] - a[k]))
where a[k] = the kth smallest value and a[k+1] = the (k+1)th smallest value.

Example: To find the 3rd quartile in the array of values 0, 2, 3, 5, 6, 8, 9, follow these steps:

1. Find k and f:

k=TRUNC((3/4*(7-1))+1)=5


f=(3/4*(7-1))-TRUNC(3/4*(7-1))=.5

2. The 5th (kth) smallest value is 6, and the (5+1)th smallest value is 8.

3. Interpolate: 6+(.5*(8-6))=7
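For reference, the algorithm above can be expressed as a short Python sketch (illustrative only, outside the Cognos toolset); it reproduces the worked example:

def excel_quartile(values, quart):
    # QUARTILE() algorithm as described above: quart is a value from 0 to 4.
    a = sorted(values)
    n = len(a)
    pos = (quart / 4) * (n - 1) + 1   # 1-based position of the kth smallest member
    k = int(pos)                      # truncate to an integer
    f = pos - k                       # fractional portion kept for interpolation
    if f == 0 or k >= n:
        return a[k - 1]
    return a[k - 1] + f * (a[k] - a[k - 1])   # interpolate between the kth and (k+1)th values

print(excel_quartile([0, 2, 3, 5, 6, 8, 9], 3))   # 7.0, matching the example above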

9.1.2 Implementation in Impromptu

As PowerPlay does not have a built-in function for calculating the Nth percentile, it must be calculated in Impromptu and fed as an externally rolled-up measure to the PowerPlay model. The following example gives the steps involved in calculating P01 (for a column called TAT) in Impromptu:

1. Create a regular report that includes the column on which the Nth percentile needs to be calculated.

2. Add the additional columns shown below:

Fig. 1 - Additional columns for the Nth percentile calculation:

Data Element Name | Description | Calculation
TAT for Pxx | Source data column for which the Nth percentile needs to be calculated | NA
Running Count of TAT for Pxx | Common for all Nth percentile calculations | running-count(TAT for Pxx)
Count of TAT for Pxx | Total count of data column entries, less 1 | Count(TAT for Pxx) - 1

3. Create the additional column P01 Data using the formula shown in Fig. 2.

Fig. 2 - This data element captures the Kth and (K+1)th elements per the algorithm from the previous discussion:
k = TRUNC((N/100 * (Total Count - 1)) + 1)

4. Create the final column for P01 as shown below:

Fig. 3 - This data element implements the following equation:
Kth Element + (f * ((K+1)th Element - Kth Element))
where f = (N/100 * (Total Count - 1)) - TRUNC(N/100 * (Total Count - 1))
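As a cross-check of the Impromptu logic, here is a small self-contained Python sketch of the generalized Nth percentile formula from Figs. 2 and 3. It is illustrative only and not part of the report; the comments map the variables to the report columns above:

def nth_percentile(values, n):
    # Mirrors the report logic: "Total Count - 1" comes from Count(TAT for Pxx) - 1,
    # k = TRUNC((N/100 * (Total Count - 1)) + 1), and f is the fractional remainder.
    a = sorted(values)                       # the running count orders the TAT values
    total_count = len(a)
    pos = (n / 100) * (total_count - 1) + 1
    k = int(pos)                             # TRUNC(...)
    f = pos - k
    kth = a[k - 1]                           # the row whose running count equals k
    k1th = a[k] if k < total_count else kth  # the (k+1)th element, if it exists
    return kth + f * (k1th - kth)

print(nth_percentile([0, 2, 3, 5, 6, 8, 9], 75))   # 7.0, consistent with the quartile example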


9.2 V% / Vprime Calculation

9.3 Cumulative Trends

We have seen requests for different rolling time periods in our cubes. To be consistent across cubes, we should establish the following three rolling categories in our time dimensions in Transformer:

Rolling 4 Weeks
Rolling 13 Weeks
Rolling 26 Weeks

9.4 Alerts

New calculated measures are being added, and these measures will be color-coded per alert specifications. This is because you cannot color a column based on an evaluation of other columns.

Whenever possible, employ custom exception highlighting in PowerPlay Web Explorer cube views rather than in PDFs. This is because views are interactive, and PDFs are static. If alerts are employed in views, then the alerts will be retained when drilling and slicing/dicing.

9.5 Partitioning

Always use auto-partitioning!

If a critical level within the dimensions can be identified, performance can be improved significantly by assigning it to similar partition numbers.

Partitioning is a process by which Transformer divides a large PowerCube into a set of nested "sub-cubes" called partitions. Partitioning optimizes run-time performance in PowerPlay by reducing the number of data records searched to satisfy each information request. For Large PowerCubes consisting of millions of rows, you should set up partitions to speed up cube access for PowerPlay users. Partitioning pre-summarizes the data in a PowerCube and groups it into several subordinate partitions so that it can be retrieved significantly faster than if the cube were not partitioned. Creating a very large cube without using partitions can result in poor run-time performance for PowerPlay users. While partitioning does significantly improve run-time access for large PowerCubes, there are some associated processing costs at PowerCube creation time. These are:

1. PowerCube size. Partitioned cubes are larger than non-partitioned cubes.
2. PowerCube creation time. Partitioned cubes take longer to create than non-partitioned cubes.

Consider the following diagram:


This represents the trade-off with respect to the effect of partitioning. On the left end of the spectrum, we have the time it takes to build cubes, on the other end we have the time it takes to navigate the cube for the end user. If partitioning is not employed, the build performance will be optimal, however, this comes at the potential cost of query performance for end users as they are navigating the cube. As the number of levels of partitioning increases, the time it takes to build the cube increases proportionally. However, this yields performance gains for the end users.

Transformer's support for auto-partitioning during cube building has greatly simplified the partitioning process, as Transformer determines the best partitioning strategy while it creates the cube. In addition, the partitioning strategy is specific to each cube created by a model. Unlike manual partitioning, the user does not need a strong understanding of partitioning to be able to partition PowerCubes effectively. The auto-partition feature is controlled through an optimization on the 'Processing' tab of the PowerCube property sheet. When the auto-partition optimization is used, it enables the 'Auto-Partition' tab seen below.

Figure 3. Auto-Partition Tab of the PowerCube Property Sheet.


The Auto-Partition tab provides the following controls:

Estimated Number of Consolidated Rows

Default: 10,000,000 rows

This control provides an estimate of the number of rows the cube will contain after consolidation. The default value is set to the published maximum number of consolidated rows a cube can have in PowerPlay 5.21. This value can be changed and is used to scale the Desired Partition Size controls.

Desired Partition Size

Default: 500,000 rows or 5% of the Estimated Number of Consolidated Rows control.

These controls are used to set the desired size for each partition in the cube. The slider control conveys the trade-off between optimizing for cube build performance and end user query performance in PowerPlay clients.

The slider control is gauged at 1% increments with the maximum being 100% of the Estimated Number of Consolidated Rows control. The maximum desired partition size (100%) will be set with the slider at the cube build performance side, and the minimum (1%) at the end user query performance side of the control.

The Desired Partition Size edit control reflects the desired partition size as a number of rows, corresponding to the position of the slider control. The desired partition size can also be set by typing a value in the edit control; the slider will then reflect this setting as a percentage of the Estimated Number of Consolidated Rows.

Maximum Number of Levels of Partitioning

Default: 5

The value of this control is used as a safeguard in the case in which the Desired Partition Size is too small compared to the number of rows added to the Cube.

This value also safeguards against "shallow" models that lack depth in terms of dimension levels.

Each level of partitioning represents additional passes of a portion of the original source data. As the number of levels of partitioning increases, a subsequent increase in the number of passes of the data will occur.

Use Multiple Processes

Default: Unchecked

If a cube has this option set, the cube generation phase will exploit the use of multiple processes when possible.


When specifying the auto-partitioning strategy using these controls, consider the following:

The estimated number of rows is merely an estimate used to allow you to scale the desired partition size controls. This setting does not have to be accurate.

When setting the desired partition size, don't hesitate to set it larger than you would have set the "Maximum number of transactions per Commit" setting in 5.21. The auto-partition algorithm does a very good job of creating near-equivalent-sized partitions, so query-time performance is on average much better than for PowerCubes partitioned manually in 5.21.

The maximum number of levels of partitioning should only be modified when it is clear that the auto-partitioning algorithm is performing extra passes that are not improving the performance of the PowerCube.

When a cube is built using auto-partitioning, Transformer will employ a dynamic weighting algorithm to choose each candidate dimension for partitioning. This algorithm tends to favor dimensions with more levels that have category parent-child ratios that are consistent throughout the dimension. Next, the auto-partitioning algorithm will dynamically assign a partitioning strategy to the categories in the dimension, producing equivalent-sized partitions as close to the desired partition size as possible. As Transformer processes each pass of partitioning, the number of rows and categories left in the summary partition decreases. This is evident both through the animation and in the log file.

A number of features are not supported by the auto-partition algorithm, including externally rolled-up measures, before-rollup calculated measures, and other cube optimizations. Before-rollup calculated measures can be replaced by creating calculated columns. If the model includes settings that are not supported by the auto-partitioning algorithm, Check Model will indicate this with a warning. Incremental update only supports auto-partitioning during the initial load of the PowerCube; incremental updates to the PowerCube will have performance comparable to that of 5.21. Refer to Section 2.2.3 for using incremental update and auto-partitioning.

Certain dimensions are not good candidates for auto-partitioning; they mainly consist of few levels and large numbers of categories. It is possible to exclude these dimensions from the auto-partitioning algorithm by checking the check box found on the General tab of the dimension property sheet.

9.6 Oracle

This section provides information that can be used to optimize the read performance for data sources stored using Oracle.

9.6.1 Fetch Settings

When reading Oracle data sources, two settings control the size of the buffer and the number of rows retrieved per fetch operation. These settings can be seen in the cogdmor.ini file located in the Transformer executable directory:

Fetch Number of Rows. Determines the number of rows to fetch each time a fetch operation is performed. The current limit for this setting is 32767. The default value is currently 10; increasing it may yield a performance increase. In one experiment, changing the value from 10 to 100 yielded roughly a three-fold increase in performance. It should be noted, however, that increasing this number arbitrarily might cause performance degradation.

Fetch Buffer Size. Controls the size of the buffer used to fetch data. This setting may also yield a performance increase depending on the situation. Fetch Number of Rows takes precedence if both settings are set. By default, this setting is disabled.

These settings can yield differing performance benefits depending on the system, although noticeable benefits may be realized through experimentation.
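For illustration only, the corresponding cogdmor.ini entries might look like the following. Verify the exact key names and any section headers against the cogdmor.ini shipped with your Transformer installation before changing anything; the values shown are examples, not recommended settings:

Fetch Number of Rows=100
Fetch Buffer Size=32768

Remember that Fetch Number of Rows takes precedence when both are set, and that larger values are not always faster.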

9.6.2 Numeric Formats

When source data is read from Oracle, integer values greater than 10 digits undergo a conversion to the double representation that Transformer uses internally. If possible, consider storing the numeric values using a double equivalent in Oracle. This eliminates the overhead of performing the conversions while Transformer is processing the data, and may be accomplished by using a temporary table to store the data records destined for Transformer. If storing the values using a double equivalent is not possible, you can force Oracle to perform the conversions by simply multiplying each numeric column by 1.0 in the SQL.

9.7 General

9.7.1 Currency Conversion

Join the fact table to the alias for OM_MOR_OP_RATES_V using:
Invoice Date = Conversion Date and Currency Key = From Currency and To Currency = 'USD'

The AGING fact table already includes a conversion rate, which converts from transactional to functional currency. To make that conversion, developers need only multiply the values by the conversion rate.

To convert to USD, a change is required: first, there must be a rate for every date that has the USD-to-USD conversion rate of 1.0. Developers can then compute the USD equivalent for the numeric values in the fact table.

9.7.2 Fiscal Calendar

GE Medical Systems operates on a modified 5-4-4 calendar. As such, the time wizard feature offered in Cognos PowerPlay will not be useful in designing the time dimension. As an alternative, the following technique should be applied:

1. Create a Time Hierarchy query from the applicable data source using Impromptu.

a. Include Fiscal Year, Fiscal Quarter, Fiscal Period, Fiscal Week (if applicable) and Day (if applicable)


b. Include a unique key for the time period that can be used by the fact query (or queries); a small sketch of this key construction appears after this list.

i. If the cube will drill down to the Day, then the Day can be used as the unique key

ii. If the cube drills down to the Fiscal Week, then a concatenation of the Fiscal Week and Fiscal Year should be used as the unique key

iii. If the cube drills down to the Fiscal Period, then a concatenation of the Fiscal Period and Fiscal Year should be used as the unique key

2. Create a Current Period query that will determine the unique key from 1b that can be used for the current period

3. Include the unique key identified in 1b in all applicable fact queries.

4. Within the cube, manually define the Time dimension as follows:

a. Drag the applicable attributes from your Time Hierarchy query to the Dimension Map.

b. Rename the dimension “Time” and double click on the Time label

c. Change the dimension type from “Regular” to “Time”

5. Ensure that the Current Period query is the ONLY query used to set the current period

a. Right click on the Current Period query and select the “General” tab

b. Ensure that the “Sets the current period” checkbox is checked

c. Right click on the other queries in the model and uncheck the “Sets the current period” checkbox.

6. Manually define the necessary relative time dimensions

7. Place the Time Dimension as the first dimension in your Dimension Map.
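Purely as an illustration of step 1b (not a Cognos feature), the unique time key could be derived as in the sketch below. The function name, the year-then-period ordering, and the zero padding are assumptions to be adapted to the actual data:

def time_unique_key(fiscal_year, fiscal_week=None, fiscal_period=None, day=None):
    # Pick the key for the lowest level the cube drills down to (see step 1b above).
    if day is not None:
        return str(day)                                  # drill to Day: the day itself is unique
    if fiscal_week is not None:
        return f"{fiscal_year}{int(fiscal_week):02d}"    # Fiscal Week + Fiscal Year concatenation
    return f"{fiscal_year}{int(fiscal_period):02d}"      # Fiscal Period + Fiscal Year concatenation

print(time_unique_key(2005, fiscal_week=13))   # e.g. "200513"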

9.7.3 Dimension Hierarchy

Many of the dimension tables defined within GE Medical Systems use a pattern of “Level n” for the level code and “Level n Desc” for the level description. As a best practice, both of these attributes should be used when constructing the dimension queries.

1. Category text labels must be renamed within the Impromptu Report (.imr) and associated Impromptu Query Definition (.iqd) in order to maintain unique category names within the Transformer Model.


2. Using the Account dimension table (“GEGL5_ACCOUNT_HIER_D” in DP…), the following convention should be used.

i. "Level n" should be renamed "Account Level n"
ii. "Level n Desc" should be renamed "Account Level n Desc"
iii. This naming convention should be used for all attributes (i.e. Level 1, Level 2, etc.) within the dimension table.

3. The lowest level of the hierarchy within the Impromptu Report (.imr) and associated Impromptu Query Definition (.iqd) must be unique.
i. Using the Account dimension table ("GEGL5_ACCOUNT_HIER_D" in DP…), the following key and associated description attributes meet this condition:
(1) "Account Key"
(2) "Account Desc"

4. Only group and sort when necessary.
i. Because Transformer inherently groups category values during the cube build process, it is unnecessary to group within the Impromptu query in most situations.

All sorting of dimension categories is done during the cube build process, and therefore it is unnecessary to sort within the Impromptu query. Note that sorting is done automatically when the grouping feature is used.

9.7.4 Modality

This is used to categorize the product types for GE Medical Systems. There are four levels of hierarchy, including the lowest level of product key. The first level in the structure is a single value and does not provide any additional analysis capability, so it will be omitted from cubes. The dimension table used is geglb_modality_hierarchy_d in the Genesis system and should be a similarly named table in other systems.

1. Create a Modality Hierarchy query from the applicable source, using Impromptu.
a. Include Level 1, Level 2, Level 3, Level 1 Desc, Level 2 Desc, Level 3 Desc, Modality Desc, and Modality Key.
b. Rename in the Impromptu Query window as follows:
i. Level 1 to Modality Level 1
ii. Level 2 to Modality Level 2
iii. Level 3 to Modality Level 3
iv. Level 1 Desc to Modality Level 1 Desc
v. Level 2 Desc to Modality Level 2 Desc
vi. Level 3 Desc to Modality Level 3 Desc
c. Save the report and query files to their appropriate directories.
d. Note that Modality Key is unique and will be at the lowest level of the dimension.

2. In the Transformer model, include the Modality Hierarchy query.


Create a dimension using the Modality Level 1, 2 and 3 columns in that order, from highest hierarchy to lowest. Use the corresponding columns Modality Level 1 Desc, 2 and 3 as the Label. Order the categories by the Modality Level x Desc columns. Insert Modality Key at the bottom of the dimension. Declare it Unique in the Properties window and use Modality Desc as the Label. Order it using Modality Desc.

9.7.5 Company

This is used to categorize the hierarchy of the company. There are five levels of hierarchy, including the lowest level of company. The first level in the structure is a single value and does not provide any additional analysis, so it will be omitted from any cubes. The dimension table used in Genesis is geglb_company_hier_d, and should be similarly named in the other systems.

1. Create a Company Hierarchy query from the applicable source, using Impromptu.
a. Include Level 2, Level 3, Level 4, Level 5, Level 1 Desc, Level 2 Desc, Level 3 Desc, Level 4 Desc, Level 5 Desc, Company Key and Company Desc.
b. Rename in the Impromptu Query window as follows:
i. Level 2 to Company Level 2
ii. Level 3 to Company Level 3
iii. Level 4 to Company Level 4
iv. Level 5 to Company Level 5
v. Level 2 Desc to Company Level 2 Desc
vi. Level 3 Desc to Company Level 3 Desc
vii. Level 4 Desc to Company Level 4 Desc
viii. Level 5 Desc to Company Level 5 Desc
c. Save the report and query files to their appropriate directories.
d. Note that Company Key is unique and will be at the lowest level of the dimension.

2. In the Transformer model, include the Company Hierarchy query.

a. Create a dimension using the Company Level 2, 3, 4 and 5 columns in that order, from highest hierarchy to lowest. Use the corresponding columns Company Level 2 Desc, 3, 4, and 5 as the Label. Order the categories by the Company Level x Desc columns. Insert Company Key at the bottom of the dimension. Declare it Unique in the Properties window and use Company Desc as the Label. Order it using Company Desc.
