SAP Note 1013049 - FAQ: Oracle Data Pump

Note Language: English Version: 16 Validity: Valid Since 25.03.2008

Summary

Symptom

Important: Before you use the Data Pump, you must refer to HotNews Note 1039393.

1. What is the Oracle Data Pump?

2. What technical components does Data Pump comprise?

3. Are Data Pump and EXP/IMP compatible?

4. What are the advantages of Data Pump over EXP/IMP?

5. To what extent does SAP support the use of Data Pump?

6. How does a BRSPACE reorganization run when based on Data Pump?

7. What restrictions and problems are there in the Data Pump environment?

8. Which wait events exist in connection with Data Pump?

9. Which long database accesses may occur?

10. What views are available with Data Pump?

11. Does the Data Pump write to the Oracle alert log?

Other terms

FAQ datapump

Reason and Prerequisites

Solution

1. What is the Oracle Data Pump?

Oracle Data Pump is a new utility, introduced as of Oracle 10g, for exporting and importing data and structures. Data Pump can be described as an enhancement to the EXP and IMP tools.

2. What technical components does Data Pump comprise?

Data Pump uses the following components and processes:

o EXPDP/IMPDP

- Tools for exporting and importing data (similar to EXP/IMP).

o DBMS_DATAPUMP

- Package with functions specific to Data Pump

- EXPDP and IMPDP implicitly use functions from the DBMS_DATAPUMP package (see the sketch after this list).

o Data Pump Master process

- During EXPDP/IMPDP runtime, a Data Pump Master process is started that controls the process flow of the Data Pump activity.

o Data Pump Worker processes

- The Data Pump Worker processes are controlled by the Data Pump Master. These processes are responsible for executing the export/import.

o Streams

- Data Pump activities are technically based on Oracle Streams.

o Advanced Queueing

- These streams use functions from the Oracle Advanced Queuing option.

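As an illustration of how EXPDP and IMPDP drive the DBMS_DATAPUMP package listed above, the following is a minimal PL/SQL sketch of a single-table export. The directory object, dump file, job and table names (DATA_PUMP_DIR, demo_export.dmp, DEMO_EXPORT, T100) are placeholders only and are not taken from this note.

  DECLARE
    h         NUMBER;
    job_state VARCHAR2(30);
  BEGIN
    -- open a table-mode export job (placeholder job name)
    h := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'TABLE',
                            job_name  => 'DEMO_EXPORT');
    -- the dump file is written to an existing DIRECTORY object
    DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'demo_export.dmp',
                           directory => 'DATA_PUMP_DIR');
    -- restrict the job to a single table, comparable to the TABLES parameter
    DBMS_DATAPUMP.METADATA_FILTER(handle => h, name => 'NAME_EXPR',
                                  value => 'IN (''T100'')');
    DBMS_DATAPUMP.START_JOB(h);
    -- wait synchronously until the job has finished
    DBMS_DATAPUMP.WAIT_FOR_JOB(h, job_state);
  END;
  /

EXPDP and IMPDP essentially generate comparable DBMS_DATAPUMP calls from their command line parameters.
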
3. Are Data Pump and EXP/IMP compatible?

Although Data Pump and EXP/IMP are operated in a similar way, the technical basis of each utility is very different. They are therefore not compatible, which means that, for example, you cannot import an EXP dump file with IMPDP.

4. What are the advantages of Data Pump over EXP/IMP?

The main advantages of Data Pump are:

o Enhanced options for parallel processing (PARALLEL parameter)

o Restart option for export or import activities that were terminated, whether planned or unplanned (START_JOB parameter), and the option to subsequently change settings such as the degree of parallelism (PARALLEL parameter). However, these options are not supported within BRSPACE.

o Automatic adjustment of tablespace and data file names (REMAP_TABLESPACE and REMAP_DATAFILE parameters); see the sketch after this list

o Advance estimate of the size of an export dump file (ESTIMATE_ONLY parameter)

o Performance gains through the use of direct path operations and other technical improvements as standard

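As an illustration of the REMAP_TABLESPACE advantage mentioned above, the following sketch requests a tablespace remap through the DBMS_DATAPUMP API during an import. The directory object, dump file, job and tablespace names (BR_DUMPDIR, expdat.dmp, DEMO_REMAP_IMPORT, PSAPOLD, PSAPNEW) are placeholders only.

  DECLARE
    h NUMBER;
  BEGIN
    -- open a table-mode import job (placeholder job name)
    h := DBMS_DATAPUMP.OPEN(operation => 'IMPORT', job_mode => 'TABLE',
                            job_name  => 'DEMO_REMAP_IMPORT');
    -- read the dump file from an existing DIRECTORY object
    DBMS_DATAPUMP.ADD_FILE(handle => h, filename => 'expdat.dmp',
                           directory => 'BR_DUMPDIR');
    -- objects that were stored in PSAPOLD are created in PSAPNEW
    DBMS_DATAPUMP.METADATA_REMAP(handle => h, name => 'REMAP_TABLESPACE',
                                 old_value => 'PSAPOLD', value => 'PSAPNEW');
    DBMS_DATAPUMP.START_JOB(h);
    DBMS_DATAPUMP.DETACH(h);
  END;
  /

On the IMPDP command line, the same effect is achieved with REMAP_TABLESPACE=PSAPOLD:PSAPNEW.
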
5. To what extent does SAP support the use of Data Pump?

Data Pump is supported as of BRSPACE 7.00 (17). For more detailed information, see Note 976435.

While using EXPDP and IMPDP directly is allowed, SAP does not provide support if problems occur (see Note 105047).


6. How does a BRSPACE reorganization run when based on Data Pump?

A BRSPACE reorganization based on Data Pump generally consists of an export and an import phase. Parameter, log and dump files are saved by default to $SAPDATA_HOME/sapreorg. The main steps are:

o Stop the SAP system.

o Start the BRSPACE export using the "-l expdp" option and other control options, as described in Note 976435, for example:

brspace -u system/<password> -c force -f tbexport -l expdp -t <table_name>

o Create an export parameter file with BRSPACE (parfile.exp). Sample contents:

Parameter setting                       Meaning
-------------------------------------   ------------------------------
content=all                             Export of data and metadata
compression=none                        No compression
parallel=1                              No parallel processing
directory=br_dumpdir_sdugkrdq           Directory for export dump
dumpfile=expdat.dmp,expdat%U.dmp        Name of export dump(s)
filesize=20000M                         Size of export dump
tables='"SAP<sid>"."<table>"'           Table name
logfile=br_logdir_sdugkrdq:export.log   Directory and name of log file
job_name=br_export_sdugkrdq             Name of export job

o Create DIRECTORY references for the dump and log directory using BRSPACE, for example:

CREATE DIRECTORY br_dumpdir_sdugkrdq AS '<path>\sdugkrdq.edd';
CREATE DIRECTORY br_logdir_sdugkrdq AS '<path>\sdugkrdq';

o Call EXPDP with the export parameter file generated above using BRSPACE.

o Drop the DIRECTORY references created with BRSPACE, for example:

DROP DIRECTORY br_dumpdir_sdugkrdq;
DROP DIRECTORY br_logdir_sdugkrdq;

o Start the BRSPACE import on the basis of the generated export dump file, for example:

brspace -u system/<password> -c force -f tbimport -x <explog>.tbe -a replace


o As part of the import, BRSPACE performs the same steps as during the export (see above).

o Start the SAP system.

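While a BRSPACE export or import based on Data Pump is running, the Data Pump job created by BRSPACE can be observed from a separate session using the standard Data Pump views (see question 10 below). The following query is a simple sketch; the column selection is an example only.

  SELECT j.owner_name, j.job_name, j.operation, j.state, s.saddr
  FROM   dba_datapump_jobs j, dba_datapump_sessions s
  WHERE  j.owner_name = s.owner_name
  AND    j.job_name   = s.job_name;
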
7. What restrictions and problems are there in the Data Pump environment?

Note the following points in particular when using Data Pump:

o Due to possible corruptions, you should not use Data Pump for tables with LONG or LONG RAW columns in Oracle 10.2.0.2 unless you have implemented a fix for the Oracle bug described in Note 1039393.

o Reorganizations based on Data Pump may only be carried out offline. This means the SAP system must not be started during the entire reorganization phase.

o BRSPACE does not contain any fallback feature if the Data Pump import cannot be completed successfully. Since the source table(s) have already been dropped at this point, a restore is usually necessary in this case. To avoid this scenario, you can manually rename the affected source tables before starting the import (see the first sketch after this list).

o The Data Pump initialization phase takes a relatively long time (especially for the DBMS_DATAPUMP.OPEN and kupc$que_int.create_queues functions). As a result, the reorganization of smaller datasets may take longer with Data Pump than with EXP/IMP.

o Directories for log or dump files cannot be specified explicitly. Instead, they must be created using CREATE DIRECTORY (or the default directory $ORACLE_HOME/rdbms/log is used).

o Parameters such as TABLES in the export parameter file must not exceed 4,000 bytes. This severely restricts your ability to export a large number of tables. Nevertheless, you can still always export an entire tablespace, since the system does not access the TABLES parameter in this case. You may also resort to other alternatives, such as special exports (Note 778426) and wildcards ("<prefix>*") as of BRSPACE 7.00 (22).

o Oracle advises against starting Data Pump with "/ AS SYSDBA". You must therefore start BRSPACE with "-u system/<password>". There are other possible alternatives (such as "-u /" for the OPS$ user), but these cause "/ AS SYSDBA" to be used internally.

o The streams functionality used by Data Pump reduces the usable area of the Oracle buffer pool: if STREAMS_POOL_SIZE is set to 0, 10% of the shared pool size is reserved for the streams pool within the buffer pool (Note 789011). To avoid this situation, you can explicitly set STREAMS_POOL_SIZE to a value such as 10 MB (see the second sketch after this list).

o The dynamic allocation of the streams pool only works if you are using the dynamic SGA. If you are still using DB_BLOCK_BUFFERS instead, the dynamic creation of the streams pool fails with ORA-00832 (Note 1034126).
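
A hedged sketch of the manual rename mentioned in the fallback point above; the schema and table names are placeholders only:

  -- keep a copy of the source table under a different name so that it can
  -- be restored if the Data Pump import fails (placeholder names)
  ALTER TABLE "SAPSR3"."ZZTABLE" RENAME TO "ZZTABLE_ORIG";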

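A hedged sketch of explicitly setting the streams pool, as recommended above; 10M is an example value only. With SCOPE = SPFILE, the value takes effect after the next restart.

  -- reserve a small, explicit streams pool so that Data Pump does not take
  -- 10% of the shared pool size out of the buffer cache
  ALTER SYSTEM SET streams_pool_size = 10M SCOPE = SPFILE;
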
8. Which wait events exist in connection with Data Pump?

The most important wait events in the Data Pump environment are:

o Datapump dump file I/O

- Meaning: A Data Pump Worker process is waiting for I/O accesses to the export dump file.

- Optimization steps:

Increased average access times for "Datapump dump file I/O" may indicate I/O problems when the export dump file is accessed. If this happens, check whether there is an I/O bottleneck at the operating system or hardware level. Make sure that the dump files are located in high-speed disk areas and that no other frequently accessed files are in the same area. See also Note 793113.

o enq: TQ - DDL contention

- Meaning: Waiting for queue table accesses

- Optimization steps:

This wait situation occurs, for example, if EXPDP accesses queues as part of kupc$que_int.detach_queues and Data Pump Worker processes want to use the queues at the same time. Usually, this wait situation resolves itself after a few seconds.

o enq: UL - contention

- Meaning: Waiting for a user lock

- Optimization steps:

As part of Data Pump, UL enqueue wait situations may also occur when you start processes. However, these should only last for a very short period of time. For more information about enqueues, see Note 745639.

o Streams AQ: qmn coordinator waiting for slave to start

- Meaning: Waiting for the QMNC process to start an Advanced Queuing slave process.

- Optimization steps:

This wait situation usually only lasts a few seconds.

o kupp process wait

- Meaning: Waiting for a Data Pump process to start

- Optimization steps:


This wait situation occurs, for example, if EXPDP is initializing Data Pump and has to wait until the QMNC process has created a new slave process. It usually only lasts a few seconds.

Note 619188 provides more detailed information on wait event analysis. A sketch of how these waits can be observed for Data Pump sessions follows below.
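
The following query is a simple sketch of how the current wait events of Data Pump sessions can be checked; it joins the Data Pump session view with V$SESSION, and the column selection is an example only.

  SELECT s.sid, s.program, s.event, s.seconds_in_wait
  FROM   dba_datapump_sessions d, v$session s
  WHERE  d.saddr = s.saddr;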

9. Which long database accesses may occur?

o The following call performs the export activities and may therefore appear as a time-consuming command in the shared cursor cache (a query sketch for locating such statements follows below):

BEGIN SYS.KUPW$WORKER.MAIN('<tab>', '<user>'); END;

o Other typical commands are:

BEGIN sys.kupc$que_int.create_queues(:1, :2, :3, :4); END;
BEGIN :1 := sys.kupc$que_int.transceive_int(:2, :3, :4, :5, :6); END;

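The following query is a sketch of how such statements can be located in the shared cursor cache; note that the query text itself also matches the LIKE conditions.

  SELECT sql_id, executions, elapsed_time / 1000000 AS elapsed_seconds, sql_text
  FROM   v$sql
  WHERE  sql_text LIKE '%KUPW$WORKER%'
  OR     sql_text LIKE '%kupc$que_int%';
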
10. What views are available with Data Pump?

The following views contain Data Pump information:

o DBA_DATAPUMP_JOBS: Overview of Data Pump jobs

o DBA_DATAPUMP_SESSIONS: Overview of Data Pump-specific sessions

o DBA_DIRECTORIES: Overview of DIRECTORY objects

11. Does the Data Pump write to the Oracle alert log?

Alert log entries such as the following contain information about the Data Pump:

DM00 started with pid=24, OS id=32427, job SCOTT.SYS_EXPORT_SCHEMA_01
DW01 started with pid=26, OS id=32533, wid=1, job SCOTT.SYS_EXPORT_SCH
The value (30) of MAXTRANS parameter ignored.
kupprdp: master process DM00 started with pid=20, OS id=4028 to execute - SYS.KUPM$MCP.MAIN('BR_EXPORT_SDUTCDNC', 'SYS', 'KUPC$C_1_20070305124240', 'KUPC$S_1_20070305124240', 0);
kupprdp: worker process DW01 started with worker id=1, pid=21, OS id=3852 to execute - SYS.KUPW$WORKER.MAIN('BR_EXPORT_SDUTCDNC', 'SYS');

These are merely informational messages and can be ignored.

Header Data

Release Status: Released for Customer
Released on: 25.03.2008 10:24:28
Master Language: German
Priority: Recommendations/additional info
Category: FAQ


Primary Component: BC-DB-ORA Oracle

The Note is release-independent

Related Notes

Number Short Text

1039393 DATAPUMP creates corruptions in tables with LONG (RAW)

1034126 ORA-00832 when using Oracle Data Pump

976435 Support for Oracle Data Pump in BRSPACE

806554 FAQ: I/O-intensive database operations

793113 FAQ: Oracle I/O configuration

789011 FAQ: Oracle memory areas

778426 Special exports and imports with BRSPACE

619188 FAQ: Oracle wait events

105047 Support for Oracle functions in the SAP environment