Business Warehouse - Data Staging and Extraction


  • 5 - 4

    For Logistics, the data is pulled, based on the extract structures, into the central delta management (delta queue). In the case of a reconstruction, the data is written into setup tables, which can then be used as the starting point for a full data extraction to SAP BW.

  • 5 - 8

    Benefits and features of the queued delta:

    By writing to the extraction queue within the V1 update process, the serialization of documents is ensured by using the enqueue concept of the applications.

    Because data is collected in the extraction queue and processed regularly (preferably hourly, as recommended), this process is especially recommended for customers with a high volume of documents.

    The collective run uses the same reports as before (RMBWV3, ...).

    Report RSM13005 will no longer be provided.

    By collecting new document data during the delta-init request, the downtime of the initialization process can be reduced to the reconstruction run (filling of the setup tables).

    V1 is not measurably more burdened than when using V3.

    The collective run clearly performs better than the serialized V3 update. In particular, the critical aspect of multiple languages does not apply here.

    Event handling is possible: in contrast to the V3 collective run, a definite end of the collective run is measurable, and a subsequent process can be scheduled. After the collective run for an application has ended, an event (&MCEX_11, ...) is triggered automatically, which, if defined, can be used to start the subsequent job.

    Extraction is independent of the success of the V2 update.

    See SAP Note 380078 for an FAQ regarding the delta queue.

  • 5 - 10

    To avoid performance problems with the collective run, the update or extraction queue should not contain too many records. With the serialized V3 update, you may end up in a situation where it is no longer possible to process these records with the collective run.

    To avoid this problem in the scenario described in the slide above, step 8 can be separated into two tasks: a first InfoPackage for delta initialization without data transfer and, afterwards, a second InfoPackage that starts the full upload of the historical documents. In this case, the collective run can be rescheduled as soon as the InfoPackage for the delta initialization has finished.

    If you can guarantee, by limiting the posting date or the material document number in the statistical setup, that only those material documents are reorganized whose time value lies before the opening balance initialization (step 4), and if no changes to existing documents or any postings in the past are made, then users can be unlocked as early as after step 5. In that case, an InfoPackage for delta initialization without data transfer should be scheduled immediately after step 5 (to create the delta queue), and the collective run should be rescheduled as soon as this InfoPackage is done.

    Usually, it is not possible to guarantee that only those material documents are reorganized whose time value lies before the opening balance initialization (since the selection screen offers only the date, not the time).

    In that case, users can only be unlocked after the statistical setup has finished successfully, so that users do not create material documents that are included both in the statistical setup and in the first delta request. If this happened, the stocks would become inconsistent.

    The problem in this context is that this can result in a very long downtime for users and a negative impact on the operational business.

    Scenario B describes how to shorten the downtime.

  • 5 - 12

    To avoid performance problems with the collective run, the update or extraction queue should not contain too many records. With the serialized V3 update, you may end up in a situation where it is no longer possible to process these records with the collective run.

    In this scenario, step 8 transfers data for only two months (not for 12 months as in scenario A). Hence, the risk of running into performance problems with the collective run is reduced.

    However, if the data volume for two months is high, step 8 can be separated into two tasks to avoid performance problems with the collective run: a first InfoPackage for delta initialization without data transfer and, afterwards, a second InfoPackage that starts the full upload of the historical documents. In this case, the collective run can be rescheduled as soon as the InfoPackage for the delta initialization has finished.

    If you can guarantee, by limiting the posting date or the material document number in the statistical setup, that only those material documents are reorganized whose time value lies before the opening balance initialization (step 4), and if no changes to existing documents or any postings in the past are made, then users can be unlocked as early as after step 5. In that case, an InfoPackage for delta initialization without data transfer should be scheduled immediately after step 5 (to create the delta queue), and the collective run should be rescheduled as soon as this InfoPackage is done.

  • 5 - 14

SAP Note 135637

  • 5 - 16

    Three main ways to enhance data extraction:

    1. Enhance an SAP standard extractor

    Additional fields are added to the extraction structure and are filled using the User-Exit or a BAdI.

    2. Generic table-based extractor (define generic DataSources, transaction RSO2)

    A table or view is used as the basis for an extractor, with or without an additional User-Exit or BAdI.

    3. Generic function module extractor (transaction RSO2)

    A complete extractor is created with the help of a template.

  • 5 - 17

    You use customer exits / User-Exits to branch at predefined points from the standard SAP program flow into user-defined subprograms. This lets you add a few functions of your own to complement the standard functions, and gives the SAP software the highest possible degree of flexibility.

    The user-defined sections of code are managed in isolation as objects in the customer namespace, so there is no need to worry about complications with the next release or when correction packages are implemented.

    In the example above, the function modules of the User-Exit RSAP0001 are not all called one after another. Only the specific function module is called, depending on what is being extracted.

    The BAdI RSU5_SAPI_BADI, with its two methods, should be used to replace the RSAP0001 User-Exit. As shown above, the method DATA_TRANSFORM can be used instead of the function modules for transaction data, master data attributes, and texts. In addition, the method HIER_TRANSFORM can be used for the extraction of hierarchies.

    BAdIs exist in parallel to the User-Exits, which gives customers the possibility to replace existing User-Exits step by step.

    User-Exits will live on!

  • 5 - 18

    Prerequisites

    Read SAP Note 691154

    SAP_BASIS 6.20

    PI_BASIS 2004_1

    Advantages

    Object-oriented

    Technology is more up to date (future-oriented)

    The interface is nearly the same as that of the old function modules (obsolete parameters no longer exist)

    Code from the function modules can in most cases be copied directly into the BAdI, but it must be ABAP OO compliant

    One BAdI implementation per DataSource/project (good for transports and encapsulation)

    The CASE statement in the INCLUDE that calls the respective function module is no longer needed

    Disadvantages

    Now you need the CASE statement in the BAdI implementation!

    It will also be possible to implement a filter-based BAdI (new BAdI) later

  • 5 - 19

    As mentioned in the concept chapter above, the standard extractor can be enhanced using the User-Exit RSAP0001 or the BAdI RSU5_SAPI_BADI.

    However, the additional fields should first be appended before the User-Exit or BAdI implementation is created.

    Both variants are described in the following How-To papers:

    How To ... Enhance SAP Standard Extractor by using User-Exit RSAP0001

    How To ... Enhance SAP Standard Extractor by using BAdI RSU5_SAPI_BADI

  • 5 - 20

    If your requirements are not covered completely by the DataSources supplied with the Business Content, you can obtain additional information in a routine by developing your own program and integrating it in a function enhancement.

    Note the following four steps:

    Define the required fields in an append structure that is appended to the extract structure of your DataSource.

    Write your function exit to determine the relevant data for your DataSource.

    Replicate the DataSource in BW.

    Extract the data for your enhanced DataSource.

    Important!

    Check beforehand whether it might be less work to create a new generic DataSource instead of enhancing an existing one.

  • 5 - 21

    In the BW IMG (transaction SBIW), choose the path:

    Data Transfer to the SAP Business Information Warehouse → Postprocessing of DataSources → Edit DataSources and Application Component Hierarchy

    In the first step, you have to generate the customer append for the extract structure of your DataSource. The system proposes a name that begins with ZA followed by the name of the extract structure.

    Next, you maintain the fields with which you want to enhance the extract structure. The names of all of these fields should start with ZZ so that they can be identified easily.

    Finally, you have to generate your append structure; it must also be replicated in the BI system.

    NOTE: In some cases, for example with SD extractors, you only need to append the new fields, as they will be filled automatically with a MOVE-CORRESPONDING. However, to check this you will have to look at the code of the extractor!

  • 5 - 22

    In the second step, if you are going to use the User-Exit, you must define the function enhancement for filling the new fields in the extract structure.

    You have to create a project before you can use an enhancement like this. You can then integrate one or more enhancements into this project. For our particular task, SAP provides the enhancement RSAP0001. Remember that an enhancement cannot be used in two projects simultaneously.

    There are four enhancement components available for the RSAP0001 enhancement. The code for filling the extract structure is stored in their INCLUDEs:

    EXIT_SAPLRSAP_001: transaction data

    EXIT_SAPLRSAP_002: master data attributes

    EXIT_SAPLRSAP_003: texts

    EXIT_SAPLRSAP_004: hierarchies

    Documentation is available for every enhancement project; it also explains the individual parameters of the function modules.

    The enhancement must be activated before you can use it.

    Documentation for the whole enhancement is available under Help → Display Documentation in the project management transaction for SAP enhancements (CMOD).
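
    To make the mechanics concrete, here is a minimal sketch of what the INCLUDE for EXIT_SAPLRSAP_001 (transaction data) could look like. The DataSource 2LIS_11_VAITM, the appended field ZZREGION, and the lookup table ZSD_REGION are illustrative assumptions only; verify the generated parameter names (shown here as I_DATASOURCE and C_T_DATA) in your own system.

      * Include ZXRSAU01 - sketch only. ZZREGION and ZSD_REGION are
      * hypothetical examples of an appended field and a lookup table.
      DATA: l_s_vaitm TYPE mc11va0itm,   " extract structure incl. append
            l_tabix   TYPE sy-tabix.

      CASE i_datasource.
        WHEN '2LIS_11_VAITM'.
          LOOP AT c_t_data INTO l_s_vaitm.
            l_tabix = sy-tabix.          " remember the current line
            " Fill the appended ZZ field (single SELECT kept simple here;
            " buffer lookups in an internal table for mass data)
            SELECT SINGLE zzregion FROM zsd_region
              INTO l_s_vaitm-zzregion
              WHERE vkorg = l_s_vaitm-vkorg.
            MODIFY c_t_data FROM l_s_vaitm INDEX l_tabix.
          ENDLOOP.
      ENDCASE.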

  • 5 - 23

    Using the BAdI RSU5_SAPI_BADI is preferred over using User-Exits.

    A new implementation can be created in transaction SE18 (see also the Enhancement chapter in the ABAP Basics section and SAP Note 691154), and from there the desired code can be implemented.

    The source code from the User-Exit can usually be copied into the BAdI method without changes.

    In the screenshot, the parameter C_T_DATA is marked in red; the corresponding User-Exit parameter is called I_T_DATA.
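
    As a counterpart to the User-Exit sketch above, here is a minimal sketch of a BAdI implementation. The method DATA_TRANSFORM and the parameters I_DATASOURCE and C_T_DATA come from the slides; the interface name follows the usual IF_EX_<BAdI name> convention, and the DataSource, ZZREGION, and ZSD_REGION are the same illustrative assumptions as before.

      METHOD if_ex_rsu5_sapi_badi~data_transform.
        " One CASE per DataSource - this replaces the CASE in the
        " User-Exit INCLUDE (see slide 5 - 18)
        FIELD-SYMBOLS: <l_s_vaitm> TYPE mc11va0itm.

        CASE i_datasource.
          WHEN '2LIS_11_VAITM'.
            " ASSIGNING avoids the copy-back needed with INTO/MODIFY;
            " the line type is checked at runtime
            LOOP AT c_t_data ASSIGNING <l_s_vaitm>.
              SELECT SINGLE zzregion FROM zsd_region
                INTO <l_s_vaitm>-zzregion
                WHERE vkorg = <l_s_vaitm>-vkorg.
            ENDLOOP.
        ENDCASE.
      ENDMETHOD.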

  • 5 - 24

    SE30 ABAP Profiler (see Tips & Tricks): the ABAP runtime analysis enables you to analyze the elapsed runtime of individual objects such as transactions, programs, or function modules, or of subobjects such as modularization units or commands. An ABAP runtime analysis is especially useful for individual objects that are CPU-intensive.

    DB01 Lock Wait Situations: displays all active and requested database locks. Exclusive locks are used to prevent other users from accessing the locked entries. These locks can considerably reduce application system and database system performance.

    ST02 Buffer Statistics: the buffer monitor displays information about buffer and memory usage for the instance where the user is logged on. Statistics are compiled from the time the server is started. The Tune Summary screen is divided into four parts: buffers, SAP memory, cursor cache, and call statistics.

    ST04 DB Performance Analysis: with the SAP Database Monitor (here for SQL Server), you can display the important parameters and performance indicators of the database, such as database size, database buffer quality, and database configuration.

    ST05 SQL Trace: with this transaction you can switch different traces on or off, output trace logs in a list, display further information about a logged statement (such as an SQL statement), and create the execution plan for an SQL statement.

    All tools are described in the SAP course BC490 ABAP Performance and Tuning.

  • 5 - 31

    This setting is required to change a parameter in the InfoPackage (see next slide).

  • 5 - 32

    This flag is only visible if you are logged on as the debugging user (see previous slide).

    If this flag is unchecked, the request IDoc (message type RSRQST) is not processed immediately in the OLTP system but remains in status 64.

  • 5 - 33

    Call transaction SE38 and set a breakpoint in include LRSAPF01, at the line

    IF L_FLAG_DEBUGGING = SBIWA_C_FLAG_ON.

    Leave transaction SE38, but do NOT leave this mode.

  • 5 - 38

    SAP Note 583086

    Example of an error analysis: the output displays an analysis of the ARFCSSTATE status in the form

    STATUS READ 370 times
    LOW  08.11.2002 02:46:24 0A02401B 28B6 3DC9C5CD 0049 EBP-010
    HIGH 08.11.2002 02:46:24 0A024016 A1CB 3DCB031B 0023 EBP-010

    STATUS RECORDED 492 times
    LOW  15.01.2003 03:47:48 0A02401B 741F 3E24CBD4 02C3 EBP-010
    HIGH 15.01.2003 04:17:36 0A024017 1E5D 3E24D2CF 0313 EBP-010

    Using this analysis, you can see whether there are obvious inconsistencies in the delta queue. From the above output, you can see that there are 370 LUWs with the READ status (that is, they are already loaded) and 492 LUWs with the RECORDED status (that is, they still have to be loaded). In a consistent queue, however, there is only one status block per status, that is, 1 × READ and 1 × RECORDED. If there are several blocks for one status, the queue is not consistent. This can occur with the problem described in note 516251, for example.

    If a status other than READ or RECORDED is issued, this also points to an inconsistency. An error has probably already occurred while the data was being written into the queue. In this case, make sure you check the syslog for the time stamp of the incorrect LUW.

  • 5 - 40

    Note 916706 - Number of dialog processes for data transfer

    http://help.sap.com/saphelp_nw70ehp1/helpdata/en/13/083f420e09b26be10000000a155106/content.htm

  • 5 - 42

    1292059 - Consulting: More data packages than MAXDPAKS in delta/repeat

    1231922 - Selection of maximum number of data packages in delta repeat

    894771 - Perform problems w/ sales statistics/unbilled rev reporting

    892513 - Consulting: Performance: Loading data, no of pkg, req size

  • 5 - 43

    The most important parameters:

    Max. (kB) (= maximum packet size in kB):

    When you transfer data into BW, the individual data records are sent in packets of limited size. You can use this parameter to control how large such a data packet typically is. If no entry is maintained, the data is transferred with a default setting of 10,000 kB per data packet. The memory requirement depends not only on the data packet settings, but also on the size of the transfer structure and the memory requirement of the relevant extractor. The unit of this value is kB.

    Max. lines (= maximum packet size in number of records):

    With large data packets, the memory consumption strongly depends on the number of records transferred in one packet. The default value for Max. lines is 100,000 records per data packet. The maximum memory consumption per data packet is around 2 × 'Max. lines' × 1,000 bytes.

    Frequency:

    Frequency of Info IDocs. Default: 1. Recommended: 5 - 10.

    Max. proc. (= maximum number of dialog work processes allocated to this data load for processing the extracted data in BW(!)):

    An entry in this field is only relevant from release 3.1I onwards. Enter a number larger than 0. The maximum number of parallel processes is set to 2 by default. The ideal parameter selection depends on the configuration of the application server that you use for transferring data.

    Goals:

    Reduce the consumption of memory resources (especially extended memory).

    Spread data processing across different work processes in BW.
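
    A quick worked example of the rule of thumb above: with 'Max. lines' = 50,000, the maximum memory consumption per data packet is roughly 2 × 50,000 × 1,000 bytes ≈ 100 MB. Halving 'Max. lines' therefore halves the per-packet memory footprint, at the price of more packets per request.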

  • 5 - 47

    DTP:

    Loads data from one layer to another (except InfoSources)

    Separate delta mechanisms for different data targets

    Enhanced filtering in the data flow

    Improved transparency of staging processes across data warehouse layers (PSA, DWH layer, ODS layer, architected data marts)

    Improved performance: optimized parallelization

    Enhanced error handling for DataStore objects (error stack)

    Repair mode based on temporary data storage

    Once the transformation is defined, the data transfer process can be created.

  • 5 - 48

    Every path from a persistent source to a target is a data transfer process.

    An InfoPackage can only load data from the source system into the PSA, without any transformation.

    All of these InfoPackages and data transfer processes should be included in a process chain to automate the loading process.

  • 5 - 111

    Note 1096771 - Combined DTP extraction from active table and archive

    The correction contained in this note provides the following four selection options for full extraction from a DataStore object:

    active table (with archive)

    active table (without archive)

    archive (full extraction only)

    change log

    For delta extraction, the same selection options exist for the first delta request (delta init), with the exception of the option "archive (full extraction only)". Despite this setting, the data for subsequent requests is always read from the change log in the delta case.

    Regardless of whether a data archiving process already exists for the DataStore object, the selection "active table (with archive)" is the default for new DTPs. This ensures that the extraction delivers stable results if a data archiving process is created later and data is archived or reloaded.

  • 5 - 112

    Note 1095924 - Correction: Deletion/analysis report for error handling logs

    How To... Automate Error Stack Analysis (https://www.sdn.sap.com/irj/scn/index?rid=/library/uuid/e01c300f-e02f-2c10-46a8-b5ac929bc4ac&overridelayout=true)

  • 5 - 114

    See SAP Note 1370135 for additions.

    The default batch server group SAP_DEFAULT_BTC is used. The server group can be defined in transaction SM61.

  • 5 - 116

    Requires the ST-A/PI add-on

  • 5 - 118

    For details about the parallel upload of master data, see SAP Note 421419.

    Since release 6.10, the maximum size of the table buffer and the single-record buffer is 4 GB.

    For notes on the upload of transaction data into BW, see SAP Note 130253.

  • 5 - 119

    Buffer the number range objects of the characteristics and their navigational attributes to avoid expensive accesses to table NRIV on the database.

    Use the function module RSD_IOBJ_GET to determine the number range objects, and buffer them in transaction SNRO.

  • 5 - 120

    For details about changing the buffering options of BW tables, see SAP Note 550784.

    Sometimes it is more useful to define a view that is accessed in the update or transfer rules. This view can then be marked as fully buffered. With this technique, system resources are used more efficiently.

    Problem with single-record buffered tables when many processes read in parallel: if two processes determine at the same time that a record is not in the buffer and has to be loaded, both processes try to load the record. The first one loads it; the second one then finds that the record already exists. In this case, the buffer ends up in a faulty state and, as a consequence, is invalidated.

    Keep the kernel patch level up to date. Depending on the running process (reporting or initial data load), switch full buffering on or off entirely.

  • 5 - 122

    See SAP Note 536223 (Activating Master Data with Navigation Attributes) for more information.

    When characteristics are activated, all SID tables of the time-dependent or time-independent navigation attributes must be joined with the master data table of the characteristic to create the "X/Y tables". These tables contain the SIDs of all time-dependent or time-independent navigation attributes of a characteristic and are required for processing BW queries.

    If there is a large number of navigation attributes for a single characteristic, the SQL statement for creating these tables may become so complex that database optimizers are no longer able to find a suitable access plan.

    The RSADMIN value specifies the number of tables in an SQL statement above which the statement is divided into several less complex statements. If the new function results in errors, you can deactivate it by setting a very high threshold value (500 tables or more). This restores the previous system behavior.

  • 5 - 126

    To evaluate the number range objects of the dimension tables, call function module RSD_CUBE_GET (see screenshot).

    Fill the field I_INFOCUBE with the name of your InfoCube and choose object version A. You can only buffer the number ranges of basic cubes; it makes no sense to enter the name of a MultiCube here.

    To get the number range objects, look at the internal table E_T_DIME: take the content of field NOBJECT and set up the buffering of the number range in transaction SNRO. Choose the option "main memory".

    Repeat this step for all dimensions except the package, unit, and time dimensions, and except small dimensions (< 1,000 records).

    Meaningful values for buffering number ranges in main memory are between 500 and 1,000. Instead of having to retrieve a number from the number range table on the database for each row to be inserted, you can, for example, hold 500 numbers in memory for each dimension table. When a new record is inserted, the next number in the sequence is taken from memory, which is much faster than taking it from the database (table NRIV).

    It is ONLY the numbers themselves that are held in memory.

  • 5 - 127

    Start routines can be used to work on an entire data package:

    After the start routine in the transfer rules has been processed, the data package is processed further record by record.

    After the start routine in the update rules has been processed, the data package is processed further record by record for each key figure.

    Reduce database accesses:

    Avoid single database accesses in loops.

    Use internal ABAP tables to buffer data when possible (see the sketch after this list).

    Reduce ABAP processing time:

    Control the size of the internal tables carefully (don't forget to refresh them when necessary).

    Access internal ABAP tables using hash keys or via an index.

    Reuse existing function modules:

    If you reuse existing function modules or copy parts of them, ensure that the data buffering they use still works properly in your ABAP code.

    Transfer rules vs. update rules:

    Transformations in the transfer rules are cheaper, because transfer rules are not processed for each key figure, but update rules are.
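
    The following sketch shows the "buffer in internal tables" advice in a start-routine-like form. The data package name SOURCE_PACKAGE, its field MATNR, and the lookup table ZMAT_ATTR are assumptions for illustration; the real routine signature is generated by the system.

      " Buffer the lookup data ONCE per data package ...
      TYPES: BEGIN OF ty_attr,
               matnr TYPE matnr,
               mtart TYPE mtart,
             END OF ty_attr.
      DATA: lt_attr TYPE HASHED TABLE OF ty_attr WITH UNIQUE KEY matnr.
      FIELD-SYMBOLS: <ls_src>  LIKE LINE OF source_package,
                     <ls_attr> TYPE ty_attr.

      IF source_package IS NOT INITIAL.  " empty-table guard, see FOR ALL ENTRIES below
        SELECT matnr mtart FROM zmat_attr
          INTO TABLE lt_attr
          FOR ALL ENTRIES IN source_package
          WHERE matnr = source_package-matnr.
      ENDIF.

      " ... then do only cheap hashed reads inside the record loop
      LOOP AT source_package ASSIGNING <ls_src>.
        READ TABLE lt_attr ASSIGNING <ls_attr>
             WITH TABLE KEY matnr = <ls_src>-matnr.
        IF sy-subrc = 0.
          <ls_src>-mtart = <ls_attr>-mtart.
        ENDIF.
      ENDLOOP.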

  • 5 - 129

    In a SELECT statement, the optional addition FOR ALL ENTRIES compares the content of a column in the database with all lines of the corresponding component of a structured internal table.

    The whole statement is evaluated for each individual line of the internal table. The result set of the SELECT statement is the union of the result sets from the individual evaluations. Duplicate lines are automatically removed from the result set.

    If the internal table is empty, the whole WHERE clause is ignored and all lines in the database are put into the result set (see the example below).
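
    A short example of the pitfall mentioned above, using the flight model tables of the training system (an assumption; any table behaves the same way):

      DATA: BEGIN OF ls_key,
              carrid TYPE sflight-carrid,
              connid TYPE sflight-connid,
            END OF ls_key,
            lt_keys    LIKE STANDARD TABLE OF ls_key,
            lt_flights TYPE STANDARD TABLE OF sflight.

      " ... fill lt_keys from the data being processed ...

      " Guard against the empty-table trap: with an empty lt_keys the
      " WHERE clause would be ignored and ALL rows would be selected!
      IF lt_keys IS NOT INITIAL.
        SELECT * FROM sflight
          INTO TABLE lt_flights
          FOR ALL ENTRIES IN lt_keys
          WHERE carrid = lt_keys-carrid
            AND connid = lt_keys-connid.
      ENDIF.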

  • 5 - 130

    When you go to the definition of a database table in the ABAP Dictionary (SE11), you see information on all the technical attributes of the database table.

    The following information is useful for improving the performance of database accesses:

    Key fields: if the lines requested from the database are retrieved according to key fields, the database optimizer can perform the access using the primary index.

    Secondary index: if the lines requested from the database are selected according to the fields of a secondary index, the database optimizer can use it to find the entries. Secondary indexes are displayed in a dialog box whenever you select Indexes. You choose an index from the dialog box by double-clicking it; the system then displays a screen with additional information about that index.

    Note: the database optimizer is a database function that analyzes SQL statements and defines an access strategy. It is the database optimizer that determines whether one of the existing indexes will be used and, if so, which one.

    In the example, all key fields are supplied in the WHERE condition (the client is set automatically by the DBSS). The search string for the database is therefore filled from the left without gaps. As a result, the database has to expend relatively little effort to search the index tree and can return the result quickly.
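
    For example, a WHERE clause that supplies the key fields of SFLIGHT from the left without gaps (the client is added automatically) lets the optimizer use the primary index:

      DATA ls_flight TYPE sflight.

      " Key of SFLIGHT: MANDT, CARRID, CONNID, FLDATE - fully specified
      SELECT SINGLE * FROM sflight
        INTO ls_flight
        WHERE carrid = 'LH'
          AND connid = '0400'
          AND fldate = '20240101'.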

  • 5 - 131

    Whenever you want to read individual table lines by specifying the complete key, use the READ TABLE ... WITH TABLE KEY statement (the fastest single-record access by key). The runtime system supports this syntax variant especially for SORTED and HASHED tables.

    The same applies if you have copied the values of all key fields into a work area/structure and then use it in READ TABLE itab FROM wa.

    If the table is a STANDARD table, the runtime system performs a table scan; the same happens for the syntax variant READ TABLE ... WITH KEY (read an entry after applying an arbitrary condition).

    The only exception to this rule applies to SORTED tables: if you fill the first n key fields with "=" (no gaps), where n may be smaller than the number of key fields, a binary search is used.
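
    For example (flight model again):

      DATA: lt_flights TYPE SORTED TABLE OF sflight
                       WITH UNIQUE KEY carrid connid fldate,
            ls_flight  TYPE sflight.

      " Complete key specified: optimized key access (binary search
      " for SORTED tables, hash access for HASHED tables)
      READ TABLE lt_flights INTO ls_flight
           WITH TABLE KEY carrid = 'LH'
                          connid = '0400'
                          fldate = '20240101'.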

  • 5 - 132

    If you don't use the full key (e.g., you exclude CONNID), performance is a little slower.

    If you don't use the first key field at all, performance is at least about 5 times slower (depending on the table!) than when using even part of the key.

    Always try to use the first key field of the table.

  • 5 - 133

    Note: SELECT statements will be explained in the next chapter.

    Standard tables are subject to a linear search (table scan). If the addition BINARY SEARCH is specified, the search is binary instead of linear; TABLE KEY cannot be used in this case.

    A binary search reduces the search runtime for larger tables (from about 100 rows upwards). For the binary search, the table must be sorted by the specified search key in ascending order; otherwise the search will not find the correct row.

    Binary search works on the basis of divide and conquer: at runtime, the pointer jumps to the middle of the table and, depending on the values it finds there, moves up to find lower values and down to find larger values.

    However, as good as binary search may be, sorting a table can also cost a lot of performance. If you are only going to read the table a few times, it might be better to just do a table scan and not sort the table at all!
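
    A minimal example of the pattern described above:

      DATA: lt_flights TYPE STANDARD TABLE OF sflight,
            ls_flight  TYPE sflight.

      SELECT * FROM sflight INTO TABLE lt_flights.

      " The table MUST be sorted by the search key, otherwise
      " BINARY SEARCH returns wrong results
      SORT lt_flights BY carrid connid.

      READ TABLE lt_flights INTO ls_flight
           WITH KEY carrid = 'LH'
                    connid = '0400'
           BINARY SEARCH.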

  • 5 - 134

    SORTED tables are subject to a binary search only if the specified search key is, or includes, a starting field of the table key (not all key fields are needed; the first field is sufficient). Otherwise, the search is linear (table scan).

    If the specified search key is or includes a starting field of the table key, the addition BINARY SEARCH can be specified for sorted tables, but it has no effect, as a binary search is executed anyway.

    You can also define the table with a NON-UNIQUE key, and everything still works the same.

  • 5 - 135

    HASHED tables are managed by a hash algorithm. There is no table index, and the entries are not ordered in memory. The position of a row is calculated from the specified key using a hash function.

    The hash algorithm is used for both WITH KEY and WITH TABLE KEY, but only if the complete table key is specified.
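
    For example:

      DATA: lt_flights TYPE HASHED TABLE OF sflight
                       WITH UNIQUE KEY carrid connid fldate,
            ls_flight  TYPE sflight.

      " Hash access: only possible with the COMPLETE table key
      READ TABLE lt_flights INTO ls_flight
           WITH TABLE KEY carrid = 'LH'
                          connid = '0400'
                          fldate = '20240101'.

      " A partial key (WITH KEY) falls back to a linear scan
      READ TABLE lt_flights INTO ls_flight
           WITH KEY carrid = 'LH'.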

  • 5 - 136

    In the above example, a field symbol <itab> is typed with the table type of an internal table.

    In the first (outer) loop, we pass through the whole table line by line. In each pass, a new line is assigned to a field symbol <wa> with the structure of a table line.

    After a line has been assigned to <wa>, we run through the list of columns and assign each column to the field symbol <comp>.

    In the inner loop, we then have the following ways to access the value of a field of the selected table line:

    <itab>[2]-CARRID = 'LH'.

    <wa>-CARRID = 'LH'.

    <comp> = 'LH'.

    When the outer loop is in its second pass, all three statements change the value of the same field.
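
    A runnable version of this pattern; the table SFLIGHT and the field-symbol names are illustrative:

      DATA: lt_flights TYPE STANDARD TABLE OF sflight.
      FIELD-SYMBOLS: <ls_flight> TYPE sflight,
                     <lv_field>  TYPE any.

      SELECT * FROM sflight INTO TABLE lt_flights UP TO 10 ROWS.

      " Outer loop: each pass assigns (not copies) a line to <ls_flight>
      LOOP AT lt_flights ASSIGNING <ls_flight>.
        " Inner loop: walk the components of the current line
        DO.
          ASSIGN COMPONENT sy-index OF STRUCTURE <ls_flight> TO <lv_field>.
          IF sy-subrc <> 0.
            EXIT.   " no further components
          ENDIF.
          " <lv_field> points directly into the table line:
          " writing to it changes the internal table itself
        ENDDO.
      ENDLOOP.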

  • 5 - 137

    Instead of READ TABLE ... INTO, you can use the READ TABLE ... ASSIGNING variant. This offers better runtime performance for pure read accesses with a line width greater than or equal to 1000 bytes.

    If you then change the read line using MODIFY, READ ... ASSIGNING already improves the runtime with a line width of 100 bytes.

    The same applies to LOOP ... INTO in comparison with LOOP ... ASSIGNING. The LOOP ... ASSIGNING variant offers better runtime performance for any loop of five or more passes.

    Both field-symbol variants are much faster than the work area variants, in particular when you use nested internal tables. This is because, if you use work areas, the whole inner internal table is copied (unless you prevent this with a TRANSPORTING addition).

    Always assign a type to field symbols if you know their static type (again, for performance reasons).

    Note: if you use READ TABLE ... ASSIGNING, the field symbol points to the originally assigned table line, even after the internal table has been sorted.

    Note that when using field symbols, you cannot change key fields in SORTED or HASHED tables. Trying to do so causes a runtime error.

    The following restrictions apply to LOOP ... ASSIGNING:

    You cannot use the SUM statement in control level processing.

    You cannot reassign the field symbol within the loop; ASSIGN ... TO and UNASSIGN statements on it will cause runtime errors.

  • 5 - 139

    Use this functionality in combination with the ABAP trace (SE30) or the SQL trace (ST05) to identify expensive code.

  • 5 - 144

    See note 999296

  • 5 - 146

    SAP Note 514907: Processing of complex queries (DataMart, and so on)

  • 5 - 148

    Detailed information about COMP_DISJ can be found in note 375132.

    Automatic DB parallelism via MERGE/UPSERT (also implemented in DB6)

  • 5 - 149

    Further details can be found in SAP Note 1047462

  • 5 - 150

    Further details can be found in SAP note 619826

  • 5 - 151

    823951 - Consistency check for non-cumulative InfoCubes

    The program has the following input parameters:

    I_PROV: name of the InfoCube to be checked. If no InfoCube is given, all active InfoCubes with non-cumulative key figures are checked.

    I_W_AGGR: in repair mode, the program also repairs the corresponding aggregates of the InfoCube.

    I_REPAIR: if the indicator is not set, the system carries out the check only; if there are inconsistencies, it displays the number of missing records in a list. If you set the indicator, the system assigns a non-cumulative value of zero to the missing records.

  • 5 - 152

    SAP Note 792435

  • 5 - 161

    Extractor call from RSA3:

    Delta init via function module RSC1_INIT_BIW_GET

    Delta via function module RSC1_DELTA_BIW_GET

    Table I_T_SELECT contains the selections from the RSA3 entry screen (!)

  • 5 - 162

    New extractor interface parameter I_READ_ONLY.

    Purpose: test delta requests without changing the status of existing delta updates (e.g., by changing pointers, timestamps, ...).

    If I_READ_ONLY is added to the interface, the extractor must not update any status tables on the database when I_READ_ONLY = 'X'!

    If I_READ_ONLY exists in the extractor's interface, it is always flagged during test extraction, and delta-init, delta, and repeat requests are enabled. If not, delta-init, delta, and repeat requests are prohibited.
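
    A hedged sketch of how an extractor function module might honor the flag; the status table ZBW_DELTA_PTR and all field and variable names are hypothetical:

      " Fragment from the end of a delta-capable extractor
      IF i_read_only = 'X'.
        " Test request (e.g. from RSA3): deliver data, but leave delta
        " pointers and all status tables untouched
      ELSE.
        " Real request: persist the new delta pointer
        UPDATE zbw_delta_ptr
          SET last_ts = lv_new_ts
          WHERE dsname = lv_dsname.
      ENDIF.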

  • 5 - 164

    Master data deletion takes too long: there are a lot of unused dimensions in the InfoCubes that use the particular InfoObject whose data is being deleted, and the deletion generally deletes all these unused dimensions while deleting the master data.

    While deleting the master data, the where-used function module RSDDCVER_USAGE_MDATA_BY_SID is called with no value for the I_DEL_UNUSED_DIMS parameter. Hence, it takes the default value, which is RS_C_TRUE, and deletes all the unused dimensions. If the InfoObject is used in many InfoCubes and there are a lot of unused dimensions, the deletion time is very high.

    As a fix for this problem, an entry called DEL_UNUSED_DIMS was added to the RSADMIN table:

    If this entry is not present in the table, the old behavior is retained and all unused dimensions are deleted while deleting master data.

    If the entry DEL_UNUSED_DIMS is present in the RSADMIN table with the value 'X', the unused dimensions are likewise deleted while deleting the master data.

    If the entry DEL_UNUSED_DIMS is present in the RSADMIN table with the value ' ' (blank), the unused dimensions are not deleted while deleting the master data.
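
    As a rule, RSADMIN entries such as DEL_UNUSED_DIMS are not edited in the table directly but maintained with the standard report SAP_RSADMIN_MAINTAIN (object name and value on the selection screen, plus an insert/update/delete option); check the corresponding SAP Note for the exact procedure on your release.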
