
Version 11.70 Overview


Page 1: Version 11.70 Overview

Version 11.70 Overview

John F. Miller III, IBM


Page 2: Version 11.70 Overview

Talk Outline

• 11.70 Overview

• Storage Enhancements
  – Storage Provisioning
  – Storage Optimization
  – Compression

• Index Improvements
  – Forest of Trees Indexes
  – Create Index extent sizes
  – Constraint without an index

• Miscellaneous
  – Network Performance
  – Pre-Load C-UDRs

• Fragmentation / Partitioning
  – Interval Fragmentation
  – Add and Drop Fragments Online
  – Fragment Level Statistics

• Data Warehouse
  – Multi Index Scans
  – Star and Snowflake joins


Page 3: Version 11.70 Overview


Storage Provisioning

Page 4: Version 11.70 Overview

What is Storage Provisioning

• The ability to proactively or reactively add storage to eliminate out-of-space errors

  – Monitoring spaces and automatically growing a container when its free space falls below a specified amount

  – Stalling an SQL statement that is about to fail because of insufficient space until space is allocated to the depleted container

• The ability to tell Informix about disk space that can be used to solve storage issues in the future

– Raw Devices

– Cooked Files

– Directories


Page 5: Version 11.70 Overview

Benefits of Storage Provisioning

• "Out-of-space" errors are virtually eliminated

• Manual expansion and creation of storage spaces without having to worry about where the space will come from

• Automatic expansion of dbspaces, temporary dbspaces, sbspaces, temporary sbspaces, and blobspaces.

• The feature is fully integrated into the OpenAdmin Tool (OAT)


Page 6: Version 11.70 Overview

Storage Provisioning: The Power of 2

• Two available modes:

– Manual

– Automatic

• Two available space expansion methods:

– Chunk extension

– Chunk creation

• Two available interfaces:

– sysadmin task()/admin() functions (SQL interface)

– OAT (Graphical interface)


Page 7: Version 11.70 Overview

Storage Pool Facts

• What is the storage pool?

  – How the DBA tells Informix about space it can use to solve future space issues

  – A file, device, or directory in the pool is called an entry

• There is one storage pool per IDS instance.

• You can add, modify, delete and purge storage pool entries.


-- Arguments: path, offset, total size, chunk size, priority
EXECUTE FUNCTION task("storagepool add", "/work/dbspaces/dbs1", "0", "1GB", "100MB", "1");

Page 8: Version 11.70 Overview

OAT’s View of the Storagepool


[Screenshot callouts: summary of space left in the storage pool; automatic policies]

Page 9: Version 11.70 Overview

Extendable Chunks

• The ability to expand an existing chunk

• By default, newly created chunks are not extendable

• Example of enabling the extendable property of chunk 13

• A Chunk can be extended automatically or manually

• Example of manually extending chunk 27 by 2 GB

• Extending a chunk does NOT consume space from the storage pool


EXECUTE FUNCTION task("modify chunk extendable on", "13");

EXECUTE FUNCTION task("modify chunk extend", "27", "2GB");

Page 10: Version 11.70 Overview

OAT’s View of the Chunk Pod


Fragmentation map of selected chunk

Chunk Actions
• Extend a chunk
• Add a new chunk
• Modify chunk settings
• Drop a chunk

Page 11: Version 11.70 Overview

Expanding a Storage Container

• Keep the addition of space to a storage container simple

– The creator of a storage container specifies how a space should grow

– Manual allocations of space, Just say do it

• Use the predefined container provisioning policies to allocate new space to a container:

1. Determine whether any chunk in the storage container is extendable

2. If no chunk can successfully expand, add a new chunk
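As a hedged sketch, these policies can also be tuned through the SQL admin API; this assumes the 11.70 "modify space sp_sizes" argument, and the space name and sizes below are illustrative:

-- Assumption: arguments are the space name, a new create size, and a new
-- extend size (percentages of the space or absolute KB, per the admin API)
EXECUTE FUNCTION task("modify space sp_sizes", "dbspace1", "20", "10000");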


Page 12: Version 11.70 Overview

Expanding a Space in OAT


Page 13: Version 11.70 Overview

Creating or Dropping a Space with the Storagepool

• You can create a new storage container utilizing the space from the storage pool

• Example of creating a 100 MB dbspace called orders_dbs

• You can drop an existing storage container and return the space to the storage pool

• Example of dropping a dbspace called dbs1


EXECUTE FUNCTION ADMIN ('create dbspace FROM STORAGEPOOL', 'orders_dbs', '100M')

EXECUTE FUNCTION ADMIN ('drop dbspace to storagepool', 'dbs1');

Page 14: Version 11.70 Overview


Save your Company

Page 15: Version 11.70 Overview

Prevent Accidental Disk Initialization

• Save companies from potential disasters

• Accidental disk re-initialization (i.e. oninit -i)

• New ONCONFIG parameter: FULL_DISK_INIT


Value  Behavior
0      Only allow system initialization if page zero of the rootdbs is not recognized
1      Always allow system initialization; after initialization, the value is automatically reset to 0
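In the ONCONFIG file this is a single line; the protected setting, which the server also restores automatically after each initialization, is:

FULL_DISK_INIT 0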

Page 16: Version 11.70 Overview


Storage Optimization


Page 17: Version 11.70 Overview

Optimizing Tables

• As a DBA I need to …

– Reduce the number of extents a table contains

– Move all rows to the beginning of a table

– Return unused space at the end of a table to the system

– Shrink a partially used extent at the end of a table


All this while accessing and modifying the table!!!

AND

While you are watching your favorite TV show

Page 18: Version 11.70 Overview

Storage Management Overview

• Defragment Extents

– Combines adjacent extents to reduce the number of extents in a table

• Data Compression

– Reduces the amount of storage taken by a single row

• Table Compaction

– Reduce the number of pages utilized by a table

• Index Compaction

– Ensure the index pages are kept full

• Automate the optimization of table storage

– Applies policies to optimize tables


Page 19: Version 11.70 Overview

Optimizing Table Extents – Defragment

• The number of extents a table/partition can have has increased

• Defragment Extents

– Moves extents to be adjacent

– Merges the extents into a single extent

Example:

EXECUTE FUNCTION ADMIN ('DEFRAGMENT', 'db1:customer');

[Figure: extents of the customer table scattered across dbspace1 (among orders, items, and products extents) are moved adjacent and merged; the number of extents for the customer table drops from 5 to 4 to 3]

Page 20: Version 11.70 Overview

Optimizing Tables and Indexes


Page 21: Version 11.70 Overview

Defragment Table Extents OnLine


Page 22: Version 11.70 Overview

Data Compression

• Reduce the space occupied by the row

• Compressing a table can be done online

• Compress either a table or fragment

• Custom dictionary built for each fragment to ensure highest levels of compression

• Tables with compressed rows are ALWAYS variable length rows

• Many benefits:

  – Smaller archives

  – More data in the buffer pool

  – Fewer long/forwarded rows

  – Fewer I/Os for the same amount of data read/written

execute function task("compress table", "tab1", "db");


Page 23: Version 11.70 Overview

REPACK Command

• Moves all rows in a table/fragment to the beginning, leaving all the free space at the end of the table

• Online operation, users can be modifying the table

[Figure: rows of the customer table (Tim, Frank, Chris, …) scattered through the table are moved to the beginning, leaving all the free space at the end]

execute function task("table repack", "customer", "db");


Page 24: Version 11.70 Overview

SHRINK Command

• Frees the space at the end of a table so other tables can utilize it

  – Entire extents are freed

– The last extent in a table can be partially freed

– Will not shrink a table smaller than the first extent size

• New command to modify first extent size

• “ALTER TABLE MODIFY EXTENT SIZE”

• Online operation
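A minimal sketch of the command mentioned above for changing the first extent size before shrinking (table name and size are illustrative):

-- lower the first extent size so the shrink can release more space
ALTER TABLE customer MODIFY EXTENT SIZE 64;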

[Figure: after a repack of the customer table, the trailing free space is released back to the dbspace by the shrink operation]

execute function task("table shrink", "customer", "db");


Page 25: Version 11.70 Overview

Automatically Optimize Data Storage


Page 26: Version 11.70 Overview


Index Optimization


Page 27: Version 11.70 Overview

Create Index with a Specific Extent Size

• The CREATE INDEX syntax has been enhanced to support specifying an extent size for indexes

• Better sizing and utilization

• Default index extent size = (index key size / data row size) × data extent size


CREATE INDEX index_1 ON tab_1(col_1) EXTENT SIZE 32 NEXT SIZE 32;

Page 28: Version 11.70 Overview

Create Index Extent Sizes

• Ability to specify the extent size when creating an index

• Allow for optimal space allocation

• Utilities such as dbschema report index extent sizes

create index cust_ix1 on customer (cust_num) in rootdbs extent size 80 next size 40;


Page 29: Version 11.70 Overview

Creating Constraints without an Index

• Saves the overhead of the index for small child tables

CREATE TABLE parent(c1 INT PRIMARY KEY CONSTRAINT parent_c1, c2 INT, c3 INT);

CREATE TABLE child(x1 INT, x2 INT, x3 VARCHAR(32));

ALTER TABLE child ADD CONSTRAINT (FOREIGN KEY (x1) REFERENCES parent(c1) CONSTRAINT cons_child_x1 INDEX DISABLED);


Page 30: Version 11.70 Overview


B-Tree Index

[Figure: a B-tree with one root node, a level of twig pages, and leaf pages]

The figure above shows a single root node; access to the underlying twig and leaf pages goes through this page, which is where mutex contention can occur.


Page 31: Version 11.70 Overview


New Index Type “Forest Of Trees”

• Traditional B-tree indexes suffer from performance issues when many concurrent users access the index

  – Root node contention can occur when many sessions are reading the same index at the same time

  – Large B-tree indexes have more levels, which results in more buffer reads per traversal

• Forest of Trees (FOT) reduces some of these B-tree index issues:

  – The index is larger but often not as deep
    • Reduces the time for index traversals to leaf nodes

  – The index has multiple subtrees (root nodes) called buckets
    • Reduces root node contention by enabling more concurrent users to access the index


Page 32: Version 11.70 Overview

Forest of Trees Index (FOT Index)

• Reduces contention on an index's root node

• Several root nodes

• Some B-tree functionality is NOT supported

– max() and min()


[Figure: a hash on the key value selects one of three buckets; each bucket has its own root node and leaf pages, e.g. key 1 maps to bucket 2, key 47 to bucket 3, key 221 to bucket 1]

create index index_2 on TAB1( C1, C2 ) hash on ( C1 ) with 3 buckets;

Page 33: Version 11.70 Overview


FOT - Determining Use - ONCHECK & SYSMASTER

• Check oncheck -pT information

• Check the sysmaster database:

select nhashcols, nbuckets from sysindices

Level    Total    Average No. Keys    Average Free Bytes
-----    -----    ----------------    ------------------
1        100      27                  21350
2        655      15                  4535
-----    -----    ----------------    ------------------
Total    755      42                  25885

There are 100 Level 1 buckets (root nodes)


Page 34: Version 11.70 Overview


Network & UDR Performance


Page 35: Version 11.70 Overview

Network Performance Improvements

• Caching network services

• Multiple listener threads for a single server name

• Multiple file descriptor servers

• Previous network improvements

– Dynamic start and stop of listener threads

– Pre-allocation of user sessions


Page 36: Version 11.70 Overview

Network Performance - Caching Network Services

• Database caching of Host, Services, Users and Groups

• Avoids going to the operating system for each network call

• Administrator defined timeout value set for network caches

• ONCONFIG example:
NS_CACHE host=900,service=900,user=900,group=900

• Each cache is dynamically configurable

• onstat -g cache prints the effectiveness of the caches


Page 37: Version 11.70 Overview

Network Performance – Multiple Listeners

• Able to define multiple listener threads for a single DBSERVERNAME and/or DBSERVERALIAS

• Add the number of listeners to the end of the alias

• EXAMPLE

– To start three listener threads for the idsserver

– Modify the ONCONFIG as follows:
DBSERVERNAME idsserver-3


Page 38: Version 11.70 Overview

Network Performance Results

• My simple network performance tests

– 200 users connecting and disconnecting

• Connection throughput on an AIX server improved by 480%

• Connection throughput on a Linux server improved by 720%


Computer Type   Without Improvements   Utilizing Improvements
AIX 64          2m 5s                  27s
Linux 64        10m 11s                1m 20s


Page 39: Version 11.70 Overview


Improve Throughput of C User Defined Routines (C-UDR)

• Preloading a C-UDR shared library allows Informix threads to migrate from one VP to another during the execution of the C-UDR

– Increase in performance

– Balance workloads

• Without this feature:

  – The C-UDR shared libraries are loaded when the UDRs are first used

  – The thread executing the UDR is bound to the VP for the duration of the C-UDR execution

PRELOAD_DLL_FILE $INFORMIXDIR/extend/test.udr

PRELOAD_DLL_FILE /var/tmp/my_lib.so


Page 40: Version 11.70 Overview


Verifying the C-UDR shared library is preloaded

• online.log during server startup

14:23:41 Loading Module </var/tmp/test.udr>

14:23:41 The C Language Module </var/tmp/test.udr> loaded

• onstat -g dll new flags

– ‘P’ represents preloaded

– ‘M’ represents thread can migrate

Datablades:
addr        slot  vp  baseaddr      flags  filename
0x4b247310  15    1   0x2a985e3000  PM     /var/tmp/test.udr
0x4c2bc310  15    2   0x2a985e3000  PM
0x4c2e5310  15    3   0x2a985e3000  PM

Page 41: Version 11.70 Overview


Update Statistics


Page 42: Version 11.70 Overview


• New Brand Name and Editions

• Simplified Installation, Migration, and Portability

• Flexible Grid with Improved Availability

• Easy Embeddability

• Expanded Warehouse Infrastructure

• Empower Applications Development

• Enhanced Security Management

• Increased Performance

• Other Features

Agenda

Page 43: Version 11.70 Overview


Seamless installation and Smarter configuration

• Can migrate from Informix Version 11.50, 11.10, 10.0, 9.40, or 7.31 directly to Informix Version 11.70

• New installation application, using the new ids_install command, makes it easier to install and configure Informix products and features

– A typical installation now has improved default settings to quickly install all of the products and features in the software bundle, with preconfigured settings

– The custom installation is also smarter than before and allows you to control what is installed

• Both types of installations allow you to create an instance that is initialized and ready to use after installation

• Must use a custom installation setup if you want to configure the instance for your business needs

Page 44: Version 11.70 Overview


Changes to Installation Commands

• Some installation commands changed
  – To take advantage of new and changed functionality
  – To improve consistency across products and operating systems

• Deprecated commands
  – installserver
  – installclientsdk
  – installconn

• Must use ids_install to install Informix with or without bundled software

• New uninstallids command
  – Removes the server, any bundled software, or both
  – To remove specific products:
    • uninstall/uninstall_server/uninstallserver
    • uninstall/uninstall_clientsdk/uninstallclientsdk
    • uninstall/uninstall_connect/uninstallconnect (formerly uninstallconn)
    • uninstall/uninstall_jdbc/uninstalljdbc.exe or java -jar uninstall/uninstall_jdbc/uninstaller.jar (depending on how you installed the JDBC driver)

Page 45: Version 11.70 Overview


Auto-Registration and Auto VP Creation

• Database extensions (formerly known as built-in DataBlade modules) are automatically registered

• Prerequisite tasks, such as registering the extensions or creating specialized virtual processors, no longer required

• The BTS, WFSVP, and MQ virtual processors are created automatically

• The idsxmlvp virtual processor is created automatically when an XML function is first used

• An sbspace is created automatically for basic text searches and spatial extensions, if a default sbspace does not exist

• Basic Text Search, Web Feature Service, Node, Spatial, Binary, Large Object Locator, TimeSeries, and MQ Messaging can now be used without first registering them in your database

Page 46: Version 11.70 Overview


dbschema and dbexport Enhancements

• The dbschema and dbexport utilities are enhanced to omit the specification of an owner

  – Use the new -nw option to generate the SQL for creating an object without specifying an owner

Page 47: Version 11.70 Overview


Generating Storage Spaces and Logs with dbschema

• Can now generate the schema of storage spaces, chunks, and physical and logical logs with the dbschema utility

• Choose to generate:

  – SQL administration API commands:
    dbschema -c dbschema1.out

  – onspaces and onparams utility commands:
    dbschema -c -ns dbschema2.out

• For migrations, generate the schema before unloading data using the dbexport and dbimport utilities

SQL administration API format:

# Dbspace 1 -- Chunk 1
EXECUTE FUNCTION TASK ('create dbspace', 'rootdbs', '/export/home/informix/data/rootdbs', '200000', '0', '2', '500', '100')

# Dbspace 2 -- Chunk 2
EXECUTE FUNCTION TASK ('create dbspace', 'datadbs1', '/export/home/informix/data/datadbs', '5000000', '0', '2', '100', '100')

# Physical Log
EXECUTE FUNCTION TASK ('alter plog', 'rootdbs', '60000')

# Logical Log 1
EXECUTE FUNCTION TASK ('add log', 'rootdbs', '10000')

onspaces/onparams format:

# Dbspace 1 -- Chunk 1
onspaces -c -d rootdbs -k 2 -p /export/home/informix/data/rootdbs -o 0 -s 200000 -en 500 -ef 100

# Dbspace 2 -- Chunk 2
onspaces -c -d datadbs1 -k 2 -p /export/home/informix/data/usrdbs -o 0 -s 5000000 -en 100 -ef 100

# Logical Log 1
onparams -a -d rootdbs -s 10000

Page 48: Version 11.70 Overview


Support for the IF EXISTS and IF NOT EXISTS keywords

• Now you can include the IF NOT EXISTS keywords in SQL statements that create a database object (or a database)

• You can also include the IF EXISTS keywords in SQL statements that destroy a database object (or a database)

– If the condition is false, the CREATE or DROP operation has no effect, but no error is returned to the application

• Simplifies the migration to Informix of SQL applications that were originally developed for other database servers that support this syntax
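A minimal sketch of the syntax (object names are illustrative):

CREATE DATABASE IF NOT EXISTS salesdb;
CREATE TABLE IF NOT EXISTS customer (id INT PRIMARY KEY, name VARCHAR(64));
DROP TABLE IF EXISTS customer;
DROP DATABASE IF EXISTS salesdb;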

Page 49: Version 11.70 Overview


Simplified SQL syntax for Defining Database Tables

• No more restrictions on the order in which column attributes can be defined in DDL statements

– Simplifies the syntax rules for column definitions in the CREATE TABLE and ALTER TABLE statements

• The specifications for default values can precede or follow any constraint definitions

• List of constraint definitions can also be followed (or preceded) by the default value, if a default is defined on the column

• The NULL or NOT NULL constraint does not need to be listed first if additional constraints are defined

• Simplifies the migration to Informix of SQL applications that were originally developed for other database servers that support this syntax
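A minimal sketch of a column definition that 11.70 now accepts (names are illustrative); the NOT NULL constraint is not listed first and the DEFAULT clause follows the constraints, orderings that earlier releases rejected:

CREATE TABLE orders (
    order_id INT PRIMARY KEY,
    status CHAR(1) CHECK (status IN ('N', 'S')) NOT NULL DEFAULT 'N'
);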

Page 50: Version 11.70 Overview


Stored Procedure Debugging (SPD)

• Need for application developers to debug SPL procedures in Informix when necessary

– Should be able to execute the SPL routine line by line, stepping into nested routines, analyzing the values of the local, global and loop variables

– Should be able to trace the execution of SPL procedures• Trace output should show the values of variables, arguments,

return values, SQL and ISAM error codes

• Pre-requisites– Informix 11.70 or above– Integration with the Optim Data Studio procedure debugger– Integration with Microsoft Visual Studio debugger

• DRDA must be enabled

Page 51: Version 11.70 Overview


SPD - Supported Commands

• Breakpoints

• Run

• Step Over

• Step Into (for nested procedures)

• Step Return

• Get Variable value

• Set Variable value

Page 52: Version 11.70 Overview


Explicit PDQ vs Implicit PDQ

• Explicit PDQ

– User setting (SET PDQPRIORITY statement)

– All queries in current session use same setting

• Implicit PDQ

– IDS determines resource requirement based on optimizer's estimates

– Each query can have different PDQ setting

Page 53: Version 11.70 Overview


Implicit PDQ - Enable

• SET ENVIRONMENT IMPLICIT_PDQ ON/OFF

  – Enables/disables implicit PDQ for the current session
  – When enabled:
    • Informix automatically determines an appropriate PDQPRIORITY value for each query
    • Informix ignores the explicit PDQ setting unless BOUND_IMPL_PDQ is also set
  – When disabled:
    • Informix does not override the current PDQPRIORITY setting

• SET ENVIRONMENT BOUND_IMPL_PDQ ON/OFF

  – Uses the explicit PDQPRIORITY setting as an upper bound when calculating the implicit PDQ setting
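A minimal sketch of a session that lets the optimizer pick a PDQ value per query while capping it at the explicit setting (the value 60 is illustrative):

SET ENVIRONMENT IMPLICIT_PDQ ON;
SET ENVIRONMENT BOUND_IMPL_PDQ ON;
SET PDQPRIORITY 60;  -- acts as the upper bound while BOUND_IMPL_PDQ is on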

Page 54: Version 11.70 Overview


• New Brand Name and Editions

• Simplified Installation, Migration, and Portability

• Flexible Grid with Improved Availability

• Easy Embeddability

• Expanded Warehouse Infrastructure

• Empower Applications Development

• Enhanced Security Management

• Increased Performance

• Other Features

Agenda

Page 55: Version 11.70 Overview


Deployment Assistant (DA) – Self Configuring

• Enables users to easily package snapshots of Informix instances and/or their data, in preparation for deployment

– In past releases a snapshot had to be manually created

• Built-in intelligence to capture and configure an Informix snapshot more easily

– Allows for reduction of the packaged instances to the user's minimum desired configuration

• Graphical User Interface (GUI) developed in Java/Eclipse SWT

• ifxdeployassist command

– Starts the deployment assistant interface, which prompts for the required information to capture the instance

Page 56: Version 11.70 Overview


Deployment Assistant (DA) – Packages

• Produces packages that are ready for use by the Deployment Utility (DU)

• Build a package containing:
  – Informix
  – (Optional) pre-built database(s)
  – (Optional) applications

• Compresses the package (BZIP2, GZIP, TAR, and ZIP) without using 3rd-party compression tools

• Deploy, decompress, and install the package on multiple systems
• Good for media distribution such as CDs
• Supported on Windows and Linux
• No current support for data on RAW devices

Page 57: Version 11.70 Overview


Deployment Assistant (DA) – Usage

• To run the Deployment Assistant, run the following command in <INFORMIXDIR>/bin:

ifxdeployassist

• On Windows, executing this command with the INFORMIXSERVER environment variable set will trigger automatic detection of the instance specified

Page 58: Version 11.70 Overview


Deployment Utility (DU) – New Options

• IDS 11.50.xC6
  – Available on all platforms: ifxdeploy
  – Can deploy a pre-configured snapshot of an IDS instance on one or more machines by unzipping an archive, creating users, updating configuration files, setting file permissions, and so on
  – Can create new instances from existing ones or from onconfig.std, or uninstall instances
  – Chunks can be dynamically relocated to a new path at deployment time

• New command line options (11.70)
  – The -start option starts the Informix instance after deployment and waits for it to be initialized (equivalent to running oninit -w)
    • Optionally add a number of seconds to wait before the command returns
    • The Deployment Utility configuration file has a new option START
  – The -autorecommend option calculates optimal values for Informix configuration parameters based on planned usage for the instance and the host environment

Page 59: Version 11.70 Overview


Deployment Utility (DU) – Example

• To deploy a zipped tar file of an instance that:
  – Prints verbose messages
  – Sets the SERVERNUM to 2
  – Relocates the chunks to /work/chunks
  – Sets new TCP listening ports 9091 and 9092
  – Starts the instance after deployment

export INFORMIXDIR=/work/ixdir2; export INFORMIXSERVER=ixserver2
ifxdeploy -file /work/snapshots/ifxdir.tgz -verbose -servernum 2 -relocate /work/chunks -rootpath /work/chunks -sqliport 9091 -drdaport 9092 -start -y

• To create and start a new instance using an existing INFORMIXDIR:

export INFORMIXDIR=/work/ixdir; export INFORMIXSERVER=ixserver2
ifxdeploy -servernum 2 -sqliport 9091 -drdaport 9092 -start -y

Page 60: Version 11.70 Overview


Unique Event Alarms

• Informix uses the event alarm mechanism to notify the DBA about any major problems in the database server

• Default alarm program scripts

  – UNIX: $INFORMIXDIR/etc/alarmprogram.sh

  – Windows: %INFORMIXDIR%\etc\alarmprogram.bat

• ONCONFIG parameters

– ALARMPROGRAM

– SYSALARMPROGRAM

Page 61: Version 11.70 Overview


Unique Event Alarms - Overview

• Informix 11.70 has 79 event class IDs

• For each event alarm class, there can be multiple specific messages used by that class

• In previous releases it was not easy to differentiate one type of event alarm from another within the same event alarm class
  – Required the user to parse the specific message string that is passed to the alarm program as one of its parameters
  – Very inconvenient for applications which deeply embed IDS

• Panther provides unique numerical values for each specific message
  – Applications can interpret and take actions against each event alarm

Page 62: Version 11.70 Overview


Programmability Enhancements

• Consistent return codes for server initialization (oninit)

• Very helpful for applications which administer Informix in deeply embedded environments

• The application can take the appropriate action to bring the instance online successfully

• During server initialization in embedded environments, the application may have to take actions for:
  – Shared memory creation/initialization failed
  – Could not find libelf/libpam/…
  – Incorrect command line syntax
  – Error reading/updating onconfig
  – Error calculating defaults in onconfig
  – Incorrect serial number
  – Not DBSA
  – Incorrect SQLHOSTS entries

Sample shell script:

#!/bin/sh
# Execute the oninit program
oninit
# Get the return code from the oninit execution
RC=$?
# Validate the return code and take the necessary action
case $RC in
0)   echo "RC=0: The database server was initialized successfully." ;;
1)   echo "RC=1: Server initialization has failed." ;;
187) echo "RC=187: Check the entries in sqlhosts file." ;;
221) echo "RC=221: DUMPDIR missing. Creating DUMPDIR."
     mkdir $INFORMIXDIR/tmp
     chmod 770 $INFORMIXDIR/tmp ;;
*)   echo "Return Code=$RC !" ;;
esac

Page 63: Version 11.70 Overview


Embeddability – Other Features

• Automated DB Scheduler tasks added

– Automatic notification when IDS marks an index “bad”

– Automatic table storage optimization based on user settable parameters

• Informix Embeddability toolkit

– Tutorial for creating an end to end embeddability scenario

– Example scripts for using Deployment Assistant/Utility

• Install and deployment APIs

  – APIs to install and configure Informix from your application

Page 64: Version 11.70 Overview


Enhanced Security Management


Page 65: Version 11.70 Overview


Selective Row Level Auditing (SRLA)

• onaudit

– Manages audit masks and configuration

– Need to be DBSSO or AAO

– DBSSO can perform functions related to audit setup

– AAO can perform functions related to audit analysis

– Examples:
  • onaudit -l 1
  • onaudit -c
  • onaudit -a -u sqlqa -e +RDRW

• onshowaudit

– Lets AAO extract information from an audit trail

– Example: onshowaudit -n <servernumber>

Page 66: Version 11.70 Overview


Selective Row Level Auditing (SRLA) – What’s New?

• Previously, there was no way to enable auditing so that it excluded audit events on tables that you did not want to monitor with the onaudit utility

– Enabling can produce huge amounts of useless data

• The database system security officer (DBSSO) can now configure auditing so that row-level events are recorded for designated tables

– Versus for ALL tables used by the database server

• Ability to select only the tables that you want to audit on the row level

– Can improve database server performance, simplify audit trail records, and mine audit data more effectively

Page 67: Version 11.70 Overview


SRLA – Syntax

• New table-level property added (AUDIT)
  – CREATE TABLE {existing syntax} [WITH AUDIT];
  – ALTER TABLE {existing syntax} [ADD AUDIT] | [DROP AUDIT];

• ADTROWS – new parameter in the audit configuration file (adtcfg)
  – 0: No change in existing row-level auditing behavior (default)
  – 1: SRLA is enabled; only "audit"-enabled tables will generate row-level audit records
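A minimal sketch combining the pieces above (table name is illustrative; ADTROWS 1 must also be set in adtcfg for these tables to generate row-level audit records):

-- enable row-level auditing only for this table
CREATE TABLE payroll (emp_id INT, salary MONEY) WITH AUDIT;

-- or toggle the property on an existing table
ALTER TABLE payroll ADD AUDIT;
ALTER TABLE payroll DROP AUDIT;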

Page 68: Version 11.70 Overview


Trusted Context – Why use it?

• Trusted Context is a feature developed by DB2

• Allow connection reuse under a different userid with authentication to avoid the overhead of establishing a new connection

• Allow connection reuse under a different userid without authentication

– Accommodate application servers that need to connect on behalf of an end-user but do not have access to that end-user’s password to establish a new connection on their behalf

• Allow users to gain additional privileges when their connection satisfies certain conditions defined at the database server

Page 69: Version 11.70 Overview


Trusted Context – What is it?

• Database object created by the database security administrator (DBSECADM)

– Defines a set of properties for a connection that when met, allow that connection to be a “trusted connection” with special properties

• The connection must be established by a specific user

• The connection must come from a trusted client machine

• The port over which the connection is made must have the required encryption

• If the above criteria are met, the connection will allow changes in userid and privileges as defined in the trusted context

Page 70: Version 11.70 Overview


Trusted Context – Steps

• Step 1: Create Trusted Context Objects

– Created at database level

– Must be created by DBSECADM before Trusted Connections can be established

– Can use OS users or Mapped Users

• Step 2: Establish Trusted Connections

– Must satisfy criteria defined in Trusted Context

– Provision to Switch User

– Use transactions within switched user session

Page 71: Version 11.70 Overview


Trusted Context – Steps

CREATE TRUSTED CONTEXT CTX1
    BASED UPON CONNECTION USING SYSTEM AUTHID BOB
    DEFAULT ROLE MANAGER
    ENABLE
    ATTRIBUTES (ADDRESS '9.26.113.204')
    WITH USE FOR JOE, MARY WITHOUT AUTHENTICATION;

• Creates a Trusted Context object named CTX1

• Will allow connections from 9.26.113.204

• Can switch to user Joe or Mary once Trusted Connection established

Page 72: Version 11.70 Overview


Trusted Context – Switching Users

• Switch to any user defined in the Trusted Context Object scope

• Perform database operations

• Audit records will show the switched user as the originator of the operations

• If using transactions, commit or rollback before switching to a new user

Page 73: Version 11.70 Overview


Informix Mapped Users

• Can now configure Informix so that users no longer require operating system accounts to connect

– Allows users authenticated by an external authentication service (such as Kerberos or Microsoft Active Directory) to connect to Informix

• A DBSA turns on the USERMAPPING parameter of the onconfig file and maps externally authenticated users to user properties in tables of the SYSUSER database

• Onconfig variable:
  USERMAPPING OFF|ADMIN|BASIC

Page 74: Version 11.70 Overview


Informix Mapped Users – Example

• grant access to bob properties user fred;

– This means that when 'bob' connects to Informix, as far as the operating system access is concerned, Informix will use the UID, GID(s) and home directory for user 'fred' (which must be a user name known to the o/s)

• grant access to bob properties user fred, group (ifx_user), userauth (dbsa);

– This is similar to the previous entry. User 'bob' will use UID 3000 ('fred') and GIDs 3000 (users), 200 (staff) and the extra group 1000 (ifx_user)

– Additionally, assuming that USERMAPPING is set to ADMIN in the ONCONFIG file, then 'bob' will be treated as a DBSA

Page 75: Version 11.70 Overview


• New Brand Name and Editions

• Simplified Installation, Migration, and Portability

• Flexible Grid with Improved Availability

• Easy Embeddability

• Expanded Warehouse Infrastructure

• Empower Applications Development

• Enhanced Security Management

• Increased Performance

• Other Features

Agenda

Page 76: Version 11.70 Overview


What is a Flexible Grid?

• A named set of interconnected servers for propagating commands from an authorized server to the rest of the servers in the set

• Useful if you have multiple servers and you often need to perform the same tasks on every server

• The following types of tasks are easily run through a grid:
  – Administering servers
  – Updating the database schema and the data
  – Running or creating stored procedures or UDRs
  – Managing and maintaining replication

Page 77: Version 11.70 Overview


What are the features of the new Informix Flexible Grid?

• Nodes in a grid do not have to be identical
  – Different tables, different hardware, different OSs, different IDS versions

• Simplified creation and maintenance of a global grid
  – Create grid, attach to grid, detach from grid, add/drop node to/from grid
  – DDL/DML operations on any node propagated to all nodes in the grid
  – Management of the grid can be done from any node in the grid
  – Tables no longer require primary keys
  – Integration with OpenAdmin Tool (OAT)

Page 78: Version 11.70 Overview


Define/Enable/Disable the Grid

• The GRID is managed by using the cdr utility

• Define
  – Defines the nodes within the grid

cdr define grid <grid_name> --all
cdr define grid <grid_name> <node1 node2 ...>

• Enable
  – Defines the nodes within the grid which can be used to perform a grid-level operation
  – Also used to determine which users are allowed to perform grid operations

cdr enable grid --grid=<grid_name> --user=<user> --node=<node>

• Disable
  – Used to remove a node or user from being able to perform grid operations

cdr disable grid --grid=<grid_name> --node=<node_name>
cdr disable grid --grid=<grid_name> --user=<user_name>
cdr disable grid -g <grid_name> -n <node_name> -u <user_name>

OAT support enabled

Page 79: Version 11.70 Overview


Propagating database object changes

• Can make changes to database objects while connected to the grid and propagate the changes to all the servers in the grid

• Can propagate creating, altering, and dropping database objects to servers in the grid

• The grid must exist and the grid routines must be executed as an authorized user from an authorized server

• To propagate database object changes:

  – Connect to the grid by running the ifx_grid_connect() procedure

  – Run one or more SQL DDL statements

  – Disconnect from the grid by running the ifx_grid_disconnect() procedure

Page 80: Version 11.70 Overview


Example of DDL propagation

execute procedure ifx_grid_connect('grid1', 'tag1');

create database tstdb with log;

create table tab1 (col1 int primary key, col2 int, col3 char(20)) lock mode row;

create index idx1 on tab1 (col2);

create procedure loadtab1(maxnum int)
    define tnum int;
    for tnum = 1 to maxnum
        insert into tab1 values (tnum, tnum * tnum, 'mydata');
    end for;
end procedure;

execute procedure ifx_grid_disconnect();

Everything between the connect and disconnect calls is executed on all nodes within the 'grid1' grid.

Page 81: Version 11.70 Overview


Monitoring a Grid

• cdr list grid
  – View information about servers in the grid
  – View the commands that were run on servers in the grid
  – Without any options or a grid name, the output shows the list of grids

• Servers in the grid on which users are authorized to run grid commands are marked with an asterisk (*)

• When you add a server to the grid, any commands that were previously run through the grid have a status of PENDING for that server

• Options include:
  --source=<source_node>
  --summary
  --verbose
  --nacks
  --acks
  --pending

cdr list grid grid1

NEW: Monitor a cluster

onstat -g cluster

Page 82: Version 11.70 Overview


Informix Flexible Grid – Requirements

• Requirements

– Enterprise Replication must be running

– Servers must be on Panther (11.70.xC1)
  • Pre-Panther servers within the ER domain cannot be part of the grid

Page 83: Version 11.70 Overview


Informix Flexible Grid – Quickly Clone a Primary Server

• Previously, to clone the primary:
  – Create a level-0 backup
  – Transfer the backup to the new system
  – Restore the image
  – Initialize the instance

• ifxclone utility
  – Clones a primary server with minimal setup and configuration
  – Starts the backup and restore processes simultaneously
    • No need to read or write data to disk or tape
  – Can create a standalone server or a remote standalone secondary server
  – Add a server to a replication domain by cloning
  – Requires the DIRECT_IO configuration parameter to be set to 0 on both the source and target servers
  – Data is transferred from the source server to the target server over the network using encrypted SMX connections

Page 84: Version 11.70 Overview


Informix Flexible Grid – DDL on Secondary Servers

• Can now automate table management in high-availability clusters by running Data Definition Language (DDL) statements on all servers

• Can run most DDL statements such as CREATE, ALTER, and DROP on secondary servers

• In previous releases, only Data Manipulation Language (DML) statements could be run on secondary servers

Page 85: Version 11.70 Overview


Replicate tables without primary keys

• A primary key is no longer required for tables replicated by Enterprise Replication (ER)

• Use the WITH ERKEY keyword when defining tables
  – Creates shadow columns (ifx_erkey_1, ifx_erkey_2, and ifx_erkey_3)
  – Creates a new unique index and a unique constraint that ER uses as a primary key

• For most database operations, the ERKEY columns are hidden
  – Not visible to statements like SELECT * FROM tablename;
  – Seen in DB-Access table/column information
  – Included in the number of columns (ncols) in the systables system catalog

• To view the contents of the ERKEY columns:

SELECT ifx_erkey_1, ifx_erkey_2, ifx_erkey_3 FROM customer;

• Example:

CREATE TABLE customer (id INT) WITH ERKEY;

Page 86: Version 11.70 Overview


Transaction Survival during Cluster Failover

• Can now configure servers in a high-availability cluster environment to continue processing transactions after failover of the primary server
  – Transactions running on secondary servers are not affected
  – Transactions running on the secondary server that becomes the primary server are not affected
  – Transactions running on the failed primary server are terminated

• Benefits
  – Reduce application development complexity
    • Design applications to run on any node
  – Reduce application maintenance
    • Reduce the application downtime of cleaning up and restarting the application after a failover

Page 87: Version 11.70 Overview


Transaction Survival – Configuration

• FAILOVER_TX_TIMEOUT
  – Maximum number of seconds the server waits before rolling back transactions after failure of the primary server
  – 0: Disable transaction survival (default value)
  – > 0: Enable transaction survival; 60 seconds is a reasonable starting point
    • On the failover node, the maximum time to wait for secondary nodes to reconnect before rollback
    • On a surviving secondary node, the maximum time to wait before returning an error to the user (-1803/-7351)
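In the ONCONFIG file this is a single line; 60 seconds is the value suggested above as reasonable:

FAILOVER_TX_TIMEOUT 60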

Page 88: Version 11.70 Overview


Fragmentation


Page 89: Version 11.70 Overview

Two New Fragmentation Schemes

• List Fragmentation

– Fragments data based on a list of discrete values

– Helps in logical segregation of data

– Useful when a table has a finite set of values for the fragment key and queries on the table have an equality predicate on the fragment key

• Interval Fragmentation

– Fragments data based on an interval (numeric or time) value
– Tables have an initial set of fragments defined by a range expression
– When a row is inserted that does not fit in the initial range fragments, Informix automatically creates a fragment to hold the row


Page 90: Version 11.70 Overview


List Fragmentation

• Fragments data based on a list of discrete values

– e.g. states in the country or departments in an organization

• The table below is fragmented on the column "state", also known as the fragment key or partitioning key

CREATE TABLE customer
  (cust_id INTEGER, name VARCHAR(128), street VARCHAR(128),
   state CHAR(2), zipcode INTEGER, phone CHAR(12))
FRAGMENT BY LIST (state)
  PARTITION p0 VALUES ("WA", "OR", "AZ") IN rootdbs,
  PARTITION p1 VALUES ("CA") IN rootdbs,
  PARTITION p2 VALUES (NULL) IN rootdbs,
  PARTITION p4 REMAINDER IN rootdbs;

(Callouts: "state" is the fragment key; the quoted values in each PARTITION clause are the list values.)


Page 91: Version 11.70 Overview


Details of Interval Fragmentation

• Fragments data based on an interval (numeric or time) value

• A table's initial set of fragments is defined by a range expression

• When a row is inserted that does not fit in the initial range fragments

– Informix automatically creates a fragment to hold the row

– No exclusive access required for fragment addition

– No DBA intervention

• Purging a range can be done with a detach and drop

– No exclusive access is required

• If dbspace selected for the interval fragment is full or down, Informix will skip those dbspaces and select the next one in the list


Page 92: Version 11.70 Overview


Example of Interval Fragmentation with Integers


CREATE TABLE orders
  (order_id INTEGER, cust_id INTEGER, order_date DATE, order_desc LVARCHAR)
FRAGMENT BY RANGE (order_id)
INTERVAL (10000) STORE IN (dbs1, dbs2, dbs3)
PARTITION p0 VALUES < 100000 IN rootdbs;

(Callouts: order_id is the fragment key; 10000 is the interval expression; dbs1, dbs2, dbs3 is the list of dbspaces; 100000 is the initial value.)


Page 93: Version 11.70 Overview


Example of Interval Fragmentation with Dates

CREATE TABLE orders
  (order_id INTEGER, cust_id INTEGER, order_date DATE, order_desc LVARCHAR)
FRAGMENT BY RANGE (order_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH')) STORE IN (dbs1, dbs2, dbs3)
PARTITION p0 VALUES < DATE('01/01/2010') IN rootdbs;

(Callouts: order_date is the fragment key; NUMTOYMINTERVAL(1, 'MONTH') is the interval expression; dbs1, dbs2, dbs3 is the list of dbspaces; DATE('01/01/2010') is the initial value.)


Page 94: Version 11.70 Overview


Usage Example of Interval Fragmentation with Dates

CREATE TABLE orders
  (order_id INTEGER, cust_id INTEGER, order_date DATE, order_desc LVARCHAR)
FRAGMENT BY RANGE (order_date)
INTERVAL (NUMTOYMINTERVAL(1, 'MONTH')) STORE IN (dbs1, dbs2, dbs3)
PARTITION p0 VALUES < DATE('01/01/2010') IN rootdbs;

What happens when you insert a row with order_date '07/07/2010'?

A new fragment is automatically allocated for the month of July and will hold values 7/1/2010 through 7/31/2010.

What happens when you insert a row with order_date '10/10/2009'?

The value is inserted into the existing partition p0.


Page 95: Version 11.70 Overview

New Fragmentation Schemes Supported by OAT’s Schema Manager


Page 96: Version 11.70 Overview


Update Statistics


Page 97: Version 11.70 Overview

Update Statistics Improvements For Fragmented Tables

• New, finer granularity of statistics for fragmented tables
  – Statistics are calculated at the individual fragment level
  – Controlled by the new table property STATLEVEL

• UPDATE STATISTICS no longer has to scan the entire table after an ATTACH/DETACH of a fragment

– Using the new:

UPDATE STATISTICS FOR TABLE ... [AUTO | FORCE]

– For extremely large tables, substantial Informix resources can be conserved by updating only the subset of fragments with stale statistics

• Can also specify the criteria by which stale statistics are defined

– Using the new STATCHANGE property


Page 98: Version 11.70 Overview

Update Statistics Improvements

• Each table/fragment tracks the number of updates, deletes, and inserts

• New table properties: STATCHANGE and STATLEVEL

  – STATCHANGE
    • The percentage of a table/fragment that must change before statistics or distributions will be updated

  – STATLEVEL
    • Specifies the granularity of distributions and statistics
    • TABLE, FRAGMENT, AUTO

• Fragment-level statistics and distributions

  – Stored at the fragment level

  – Only fragments which exceed the STATCHANGE level are re-evaluated

  – Detaching or attaching a fragment can adjust the table statistics without having to re-evaluate the entire table


Page 99: Version 11.70 Overview

Improving Update Statistics

• Fragment Level Statistics

– When attaching a new fragment, only the new fragment needs to be scanned, not the entire table

– Only fragments which have expired statistics are scanned

• Defining statistics expiration policies at the table level

• Detailed tracking of modification to each table and fragment

• Automatically skipping tables or fragments whose statistics are not expired

• ANSI databases implicitly commit after each index/column statistic is updated


Page 100: Version 11.70 Overview

Table Level Optimizer Statistics Policies

• STATLEVEL

– Defines the granularity or level of statistics created for a table

• STATCHANGE

– The percentage of the table that must be modified before table/fragment statistics are considered expired


CREATE TABLE ...
  STATLEVEL [ TABLE | FRAGMENT | AUTO ]
  STATCHANGE 0...100
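A hedged sketch of the policies on a concrete table (names and values are illustrative, and the clause placement after the fragmentation scheme is an assumption to verify against the 11.70 syntax diagrams):

CREATE TABLE orders
  (order_id INTEGER, order_date DATE)
FRAGMENT BY RANGE (order_id)
INTERVAL (10000) STORE IN (dbs1, dbs2)
PARTITION p0 VALUES < 100000 IN rootdbs
STATLEVEL FRAGMENT
STATCHANGE 15;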


Page 101: Version 11.70 Overview


Fragment Level Statistics – STATLEVEL clause

• Defines the granularity of statistics created for a table

• TABLE

– Entire table is read and table level statistics stored in sysdistrib catalog

• FRAGMENT

– Each fragment has its own statistics which are stored in the new sysfragdist catalog

• AUTO (default option for all tables)

– System automatically determines the STATLEVEL

– FRAGMENT is chosen if:
  • the table is fragmented by EXPRESSION, INTERVAL, or LIST, and
  • the table has more than a million rows

– Otherwise, mapped to TABLE


Page 102: Version 11.70 Overview


Fragment Level Statistics – STATCHANGE property

• Threshold applied at a table or fragment level to determine if existing statistics are considered expired

• Valid values for STATCHANGE are integers between 0 and 100

• Can be set for:

– Entire server using new ONCONFIG parameter STATCHANGE (default 10%)

– Session level using SET ENVIRONMENT STATCHANGE value

– Table level by specifying STATCHANGE property in CREATE or ALTER TABLE statement

• Order of precedence for STATCHANGE

1. Table Property

2. Session setting

3. ONCONFIG setting

4. Default value (10%)
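A minimal sketch of setting the threshold at the session and table levels (the value 20 and the table name are illustrative; the quoting of the session value is an assumption):

SET ENVIRONMENT STATCHANGE '20';   -- session level
ALTER TABLE orders STATCHANGE 20;  -- table level, highest precedence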


Page 103: Version 11.70 Overview


New Syntax for Update Statistics

• AUTO

– Only tables or fragments having expired statistics are re-calculated

• FORCE

– All indexes and columns listed in the command will have their statistics re-calculated

• Default behavior is set by AUTO_STAT_MODE parameter

– Enabled by default (i.e. AUTO)

UPDATE STATISTICS FOR TABLE …. [ AUTO | FORCE ]
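For example (table name illustrative), refreshing only the fragments whose statistics have expired under the STATCHANGE threshold, versus forcing a full recalculation:

UPDATE STATISTICS FOR TABLE orders AUTO;
UPDATE STATISTICS FOR TABLE orders FORCE;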


Page 104: Version 11.70 Overview


Data Warehouse

Page 105: Version 11.70 Overview

Multi Index Scan

• Utilizes multiple indexes when accessing a table

• Example: the following indexes exist on a table
  – Index idx1 on tab1(c1)
  – Index idx2 on tab1(c2)

• The server does the following:
  – Index scans on idx1 and idx2
  – Combines the results
  – Looks up rows satisfying both index scans

• A new "skip scan" is used
  – Looks like a sequential scan, but only reads the required rows

SELECT * FROM tab1
WHERE c1 = 27 AND c2 BETWEEN 77 AND 88;


Page 106: Version 11.70 Overview

OAT’s View of Multi Index Scan


Page 107: Version 11.70 Overview


Star Join for Snowflake Queries

• Star join is a new query processing method

• Improves query performance for star-schema queries

• Requires multi-index scan and skip scan

• Snowflake schema is an extension of star schema with multiple levels of dimension tables

• Uses bitmap technology internally for efficient removal of unnecessary data rows

• Uses pushdown technology


Page 108: Version 11.70 Overview


What is a Push Down Hash Join – How it Works!

[Figure: iterator tree in which a push-down hash join (PDHJ) sits above a dimension table scan and, through an exchange (XCHG) iterator, a fact table scan using the primary key]

• Uses an alternate parent mechanism
• Requires PDQ (exchange iterator)
• Discards fact table rows early
• Reduces the number of fact table rows accessed based on predicates on dimension tables
• Takes advantage of multiple foreign-key indexes on the fact table, relying on multi-index scan
• Uses hashing when an index is absent

Page 109: Version 11.70 Overview

Snowflake and Star Joins

Page 110: Version 11.70 Overview


• New Brand Name and Editions

• Simplified Installation, Migration, and Portability

• Flexible Grid with Improved Availability

• Easy Embeddability

• Expanded Warehouse Infrastructure

• Empower Applications Development

• Enhanced Security Management

• Increased Performance

• Other Features

Agenda

Page 111: Version 11.70 Overview


What are Light Scans (Recap)?

• A sequential scan of large tables that reads pages in parallel from disk and stores them in private buffers in virtual memory

• Advantages of light scans for sequential scans:
  – Bypass the overhead of the buffer pool when many pages are read
  – Prevent frequently accessed pages from being forced out of the buffer pool
  – Transfer larger blocks of data in one I/O operation (64K/128K, platform dependent)

• Conditions to invoke light scans:
  – The optimizer chooses a sequential scan of the table
  – The number of table pages > the number of buffers in the buffer pool
  – The isolation level obtains no lock or a shared lock on the table

• RTO_SERVER_RESTART automatically enables light scans

• Monitor using onstat -g scn

Page 112: Version 11.70 Overview


Light Scan Support for All Data Types (11.50.xC6)

• Can now enable Informix to perform light scans on:
  – VARCHAR, LVARCHAR, NVARCHAR
  – Compressed tables
  – Any table with rows larger than a page

• Tables now only have to be greater than 1 MB in size
  – Versus greater than the size of the BUFFERPOOL

• Light scans for fixed-length rows were already enabled

• Enable via:
  – Environment: export IFX_BATCHEDREAD_TABLE=1
  – ONCONFIG file: BATCHEDREAD_TABLE 1
  – Session: SET ENVIRONMENT IFX_BATCHEDREAD_TABLE '1';

Page 113: Version 11.70 Overview


Light Scan Support for All Data Types (11.70.xC1)

• Automatic light scans on tables
  – Informix now automatically performs light scans when appropriate
  – No longer have to set configuration parameters to enable Informix to perform these scans

• New BATCHEDREAD_INDEX configuration parameter
  – Enables the optimizer to automatically fetch a set of keys from an index buffer
  – Reduces the number of times a buffer is read

Page 114: Version 11.70 Overview


Light Scan Support for All Data Types (11.70.xC1)

• onstat -g lsc
  – Displays information based on pages scanned of large data tables, when the BATCHEDREAD_TABLE configuration parameter or the IFX_BATCHEDREAD_TABLE environment option is not enabled
  – Note: this option is deprecated

• onstat -g scn
  – Displays the status of all light scans, starting in 11.50.FC6

RSAM batch sequential scan info

SesID  Thread  Partnum  Rowid  Rows Scan'd  Scan Type  Lock Mode   Notes
26     65      500002   111c   924          Buffpool   Slock+Test
26     66      600002   171c   1260         Buffpool   Slock+Test
26     67      700002   141c   1092         Buffpool   Slock+Test
26     68      800002   141c   1092         Buffpool   Slock+Test

Page 115: Version 11.70 Overview


Other Performance Enhancements

• Automated DB Scheduler tasks added to help with performance:
  – Time out users that have been idle for too long
  – Automatically allocate CPU VPs to match hardware/licensing when IDS starts
  – Alerts for tables that have outstanding in-place alters
  – Ability to configure the automatic compressing, shrinking, repacking, and defragmenting of tables and extents

• Large page support on Linux
  – Previously, only AIX and Solaris systems were supported
  – The use of large pages can provide performance benefits in large memory configurations

Page 116: Version 11.70 Overview


• New Brand Name and Editions

• Simplified Installation, Migration, and Portability

• Flexible Grid with Improved Availability

• Easy Embeddability

• Expanded Warehouse Infrastructure

• Empower Applications Development

• Enhanced Security Management

• Increased Performance

• Other Features

Agenda

Page 117: Version 11.70 Overview


Prevent accidental disk initialization of an instance

• FULL_DISK_INIT configuration parameter
  – Specifies whether or not the disk initialization command (oninit -i) can be executed in an Informix instance when a page zero exists at the root path location
  – Prevents accidental initialization of an instance, or of another instance, when the first page of the first chunk (page zero) exists at the root path location
  – Page zero, which is created when Informix is initialized, is the system page that contains general information about the server

• Values
  – 0: The oninit -i command runs only if there is not a page zero at the root path location
  – 1: The oninit -i command runs under all circumstances; also resets the FULL_DISK_INIT configuration parameter to 0 after the disk initialization

Page 118: Version 11.70 Overview


Tool for collecting data for specific problems

• New ifxcollect tool to collect diagnostic data when troubleshooting a specific problem
  – Such as an assertion failure

• Can also specify options for transmitting the collected data via the File Transfer Protocol (FTP)

• Located in the $INFORMIXDIR/bin directory

• Output files are located in the $INFORMIXDIR/isa/data directory

• Examples
  – To collect information for a general assertion failure:
    ifxcollect -c af -s general
  – To collect information for a performance problem related to CPU utilization:
    ifxcollect -c performance -s cpu
  – To include FTP information, specify the additional options:
    -f -e user_name@company_name.org -p 9999.999.999 -f -m machine -l /tmp -u user_name -w password

Page 119: Version 11.70 Overview


Backup to Cloud – Overview

• Support for backup of Informix data to Amazon Simple Storage Service (S3) cloud storage system and restore from it by using ontape backup and restore utility

• Benefits

– Simplifies the process of Informix data backup to an off-site S3 storage location, which can be accessed from anywhere on the web

– Scalable storage capacity to match the growth in Informix backup data (within backup object size limit imposed by S3)

– Reliable storage system through SLA provided by S3

– Pay-as-you-go model can provide cost-effective Informix backup solution

Page 120: Version 11.70 Overview


Backup to Cloud – How it Works

Backup:

1. ontape backs up the data to a file in a local directory

2. ontape starts the Cloud Client and waits for it to finish

3. The Cloud Client transfers the backup file from the local directory to S3

4. The Cloud Client returns its execution status to ontape, for ontape to finish running

Restore:

5. ontape starts the Cloud Client and waits for it to finish

6. The Cloud Client retrieves the backup file from S3 into the local directory

7. ontape restores the server from the local file

Page 121: Version 11.70 Overview


Websphere MQ

Page 122: Version 11.70 Overview

Prior to IDS Panther (11.50 and earlier)

[Figure: an Order Entry application and a Shipping application exchange messages through WebSphere MQ queues (shippingreq, creditque, recvdnotify) via Informix Dynamic Server, which supplies the MQ functions, transaction management, and 2-phase commit; credit processing and inventory sit behind the queues]

• Simplified interface: SQL-based programs get SQL-based MQ access

• Functions to: Send, Receive, Publish, Subscribe, Abstract

• Use a virtual table to map a queue to a table; send and receive via INSERT and SELECT

• Send strings and documents (CLOB/BLOB)

Page 123: Version 11.70 Overview

Enhancements in Informix 11.70 – WITH MQ

[Figure: Order Entry, Shipping, credit processing, and inventory applications on multiple servers (IDS, IDS, DB2, Oracle) share WebSphere MQ queues (shippingreq)]

• Simplified interface: SQL-based programs get SQL-based MQ access, with 2-phase commit

• Support for a distributed topology for IDS and WebSphere MQ
  – Server-based MQ messaging
  – Client-based MQ messaging

• Support for multiple queue managers within a single transaction

• New functions
