Oracle Database 11g R2: SQL Tuning

Oracle Database 11g R2: SQL Tuning, by Sideris Courseware Corporation. Sideris, 2011 (577 pages). ISBN 9781936930029. A mandatory reference for any database administrator or SQL database application developer, this book covers the internals of SQL statement execution, how to monitor the performance of such execution, and how to influence that behavior to achieve performance gains.


Table of Contents

Oracle Database 11g R2: SQL Tuning

Introduction

Workshop Setup Guide

Section 1 - Tuning & the Oracle Database Advisory Framework

Workshop Section - Tuning & the Oracle Database Advisory Framework

Section 2 - Viewing & Monitoring the Execution Plan

Workshop Section - Viewing & Monitoring the Execution Plan

Section 3 - Understanding the Optimizer

Workshop Section - Understanding the Optimizer

Section 4 - Execution Plan Methods & Operations

Workshop Section - Execution Plan Methods & Operations

Section 5 - Managing Optimizer Statistics

Workshop Section - Managing Optimizer Statistics

Section 6 - Enhanced Optimizer Statistics

Workshop Section - Enhanced Optimizer Statistics

Section 7 - Histograms & Extended Statistics

Workshop Section - Histograms & MultiColumn Statistics

Section 8 - Application Tracing

Workshop Section - Application Tracing

Section 9 - ADDM & the SQL Tuning Advisor

Workshop Section - ADDM & the SQL Tuning Advisor

Section 10 - The SQL Access Advisor

Workshop Section - The SQL Access Advisor

Section 11 - Plan Management

Workshop Section - Plan Management

Section 12 - Managing Cursor Sharing

Section 13 - Optimizer Hints

Workshop Section - Optimizer Hints

List of Sidebars

Introduction

Objectives | About This Textbook | About the Sample Databases

About This Textbook

Objectives

This learning guide will equip database administrators and application developers to build efficient SQL statements and to tune database applications. When this effort is complemented by database server and PL/SQL application tuning, a highly efficient application execution environment results. One will learn about the internals of SQL statement execution, how to monitor the performance of such execution, and how one can influence the behavior of the database to achieve performance gains. This textbook is a mandatory reference for any database administrator or SQL database application developer. The following specific topics are among those considered:

- Consider the unique and differing tuning issues found in online database applications, enterprise resource and data warehouse environments, and the important metrics of SQL statement performance.
- Learn about the internal mechanisms used for SQL statement execution within a database instance and how these can affect performance for good or ill, including the Optimizer facilities known as the Transformation Engine, the Estimator and the Plan Generator.
- Use a variety of techniques to examine the details of SQL statement execution, spotting trouble areas and bottlenecks which require tuning.
- Learn about the Auto-Task framework and how to manage the automatic collection of Optimizer statistics and automatic SQL tuning using both the programmatic and Enterprise Manager interfaces.
- Learn how statistic deficiencies can dramatically degrade performance, and how these problems are resolved through customized Optimizer statistics collection procedures using the DBMS_STATS() package, system statistics, histograms, expression statistics and MultiColumn statistics.
- Influence the behavior of the Optimizer by setting database parameters and applying other SQL tuning techniques.
- Utilize the database advisory framework and the SQL Tuning and SQL Access advisors.
- Use plan management to achieve plan stability which is adaptive and even dynamic.
- Understand the self-tuning infrastructure and the automatic SQL tuning capabilities found within the database.
- Employ SQL hints embedded within the statement text to resolve unique tuning challenges.
- Learn to identify poorly performing SQL statements using real-time SQL monitoring and application tracing techniques such as DBMS_MONITOR(), trcsess and tkprof.

Target Audience

The primary target audiences for this course are:

- Senior application designers and database developers
- PL/SQL developers
- Database administrators
- Web server administrators
- System administrators
- Implementation specialists
- Data center support engineers

Course Prerequisites

Specific prerequisites for this course are the following Sideris titles, or equivalent experience:

- Oracle Database 11g: SQL Fundamentals Complete Library
- Oracle Database 11g: Architecture & Internals

Suggested Next Courses

You may wish to consider the following courses, which are also related to the subject of Oracle database performance tuning:

- Oracle Database 11g: Resource Manager & Scheduler
- Oracle Database 11g: Advanced PL/SQL Programming & Tuning
- Oracle Database 11g: Performance Tuning
- Oracle Database 11g: Implement Parallel SQL & Partitioning

Certification Examination

This course covers information relevant to the certification test Exam 1Z0-054: Oracle Database 11g: Performance Tuning.

Instructor-Led Training (ILT) / Live Virtual Training Schedule

When delivered as a standalone instructor-led training (ILT) or live virtual training session, the suggested length of this module is between 3 and 4 days. It is most often joined with the mandatory prerequisite course Oracle Database 11g: Architecture & Internals for a combined 5-day presentation.

Learning Resources

Online resources and e-learning modules for this training course are available from the Sideris portal at http://www.SiderisPress.com.

These resources from the Sideris portal may be used along with this textbook for any of the following training formats:

- Self-study: This textbook includes extensive examples and workshop solutions, allowing it to be used both on a self-study basis and as a long-term individual reference.
- Enhanced ILT: Integrate these online resources into the ILT presentations to improve the learning experience and extend beyond the traditional instructor-led training model.
- Distance learning: Use low-cost web-based conference and training services to deliver live virtual training sessions.

Resource Files

Each course offers electronic resources useful for instructors and students. Included with these resources is a package which contains workshop files for exercises in this course. You can download these and other resources from the appropriate course link within the Sideris portal.

This is a summary of the resource files package available for this course module.

EmployeeAdd1000.sql, EmployeeAdd1000Doe.sql, ProjectAdd1000.sql: Workshop files to quickly increase the size of tables within the sample COMPANY database.

CompanyConstraintsPK.sql: Workshop file to add primary key constraints and accompanying indexes to the sample COMPANY database.

EquitiesDefine.sql, EquitiesInsert.sql: Workshop files to create a data warehouse star schema sample data model known as the EQUITIES database. This will be used for selected demonstrations and workshops to supplement the standard ELECTRONICS and COMPANY data models.

BrokeragesAdd100000.sql: Workshop file to quickly increase the size of table(s) within the sample EQUITIES database.

SQLTuningSetup.sql, SQLTuningTest.sql: Workshop files which create a large SQL statement execution processing load on the database, based upon a huge expansion of the population of the sample EQUITIES database.

SQLTuningMonitoredSQL.sql, SQLTuningProjectAdd1000.sql: Workshop files for Section 2, Viewing & Monitoring the Execution Plan.

The following files provide solutions for the more complex workshop exercises.

Solutions02ViewPlan.sql: Solutions for selected exercises from Section 2, Viewing & Monitoring the Execution Plan.

Solutions03Optimizer.sql: Solutions for selected exercises from Section 3, Understanding the Optimizer.

Solutions05OptimizerStats.sql: Solutions for selected exercises from Section 5, Managing Optimizer Statistics.

Solutions06EnhancedOptimizerStats.sql: Solutions for selected exercises from Section 6, Enhanced Optimizer Statistics.

Solutions07ExtendedStats.sql: Solutions for selected exercises from Section 7, Histograms & Extended Statistics.

Solutions10SQLAccessAdvisor.sql: Solutions for selected exercises from Section 10, The SQL Access Advisor.

To access these files, visit the Sideris portal, click on the link which corresponds to this course and log in to the site.

Your login credentials for this course are username 978-1-936930-02-9 and password sqltun.ng500; enter these when prompted.

About the Sample Databases

This course uses the sample data models known as the COMPANY and ELECTRONICS databases. These are highly simplified transactional data models which adequately support the objectives of the workshops without introducing unnecessary complexity. A useful description of these data models is available online from the Sideris portal.

It is usually good practice to reinitialize these sample databases at the start of each new workshop, unless specifically instructed otherwise. The Workshop Setup Guide provides instructions on initializing the sample databases.

Workshop Setup Guide

About the Workshop Setup

Unless you have already done so, you must configure a workshop environment in order to complete the exercises within this course. Help is available online at the Sideris portal, under the link for this course. From there you can find an extensive Workshop Setup Guide along with other workshop setup resources. We encourage you to download the Guide and complete the setup before the course begins.

Section 1: Tuning & the Oracle Database Advisory Framework

Overview

Objectives:

- Understand the Challenges of Tuning
- Performance Metrics
- About the Management & Advisory Framework
- SQL Tuning Privileges

Section Overview

Tuning of database operations and application elements such as SQL statements is a fundamental task of database administrators and senior application developers. When tuning is to be performed within an Oracle database environment, there are some essential facts which one should understand before embarking on any tuning effort. Within this section we will explore these topics:

- Consider the basic principles of database and application tuning and the challenges associated with applying those principles.
- Review fundamental database processing metrics and how these are used in tuning measurements and analysis.
- Introduce the Management and Advisory framework provided within the Oracle database architecture.
- Configure and use the Enterprise Manager interface for application tuning.

Even the most functionally robust and well-built application would be considered a failure if its execution speed and overall performance did not meet acceptable service levels. Therefore, one of the prime considerations for administrators and developers is the tuning of application code, including SQL statements.

The Challenges of Tuning

Tuning of information systems is one of the most challenging tasks, and true improvement of performance is an elusive goal. This is likewise true within an Oracle database environment and for the application code which executes within that environment. Among the factors which complicate the tuning effort are these:

- Isolating the problem
- Improving response time
- Identifying external factors
- Addressing application and logical database design

Isolating the Problem

The first challenge is to isolate exactly where the performance problem lies. For example, in the case of a SQL statement which executes poorly, the performance of the statement may be only a symptom, not the actual problem. The underlying problem may lie within the overall database configuration or operation, or within the PL/SQL application in which the SQL statement is found. Conversely, one may begin with a focus on the database or a PL/SQL program unit only to find that an embedded SQL statement is the true source of the problem.

Furthermore, SQL statement performance cannot be separated from the performance of the database in general. This often means that a collaborative approach to performance tuning is required. Database, system and network administrators typically address performance issues which arise from database configuration, host system configuration and network design. Senior database developers generally address issues related to SQL statement construction and database-resident program design using PL/SQL and Java. Both groups must work together in the diagnosis effort.

In the comments which follow, we will discuss further how these various interrelated factors can affect database and application performance.

About Response Time

In the context of SQL statement execution, response time is the time to receive a response from the database for a SQL statement, as perceived by the user or the application. The ultimate goal of a tuning effort is primarily to improve response time.

Identifying External Factors

Other, more distant, external factors can also have a dramatic impact on performance. One of the most important is the hardware and software environment within the overall systems infrastructure. Just as a SQL statement is affected by the database environment in which it executes, so on an even larger scale the entire database installation is itself affected by its execution environment. Therefore, tuning, configuring and troubleshooting the overall systems architecture is also critical. Similarly, adding hardware resources may be a necessary step.

Database installations, in turn, exist within a storage area network (SAN), local area network (LAN) or wide area network (WAN). Network design and configuration is another important factor. This is especially true with modern 3-tier and n-tier application environments, in which SQL statements may be issued from application servers, clients and users may be connected through web and HTTP servers, and so on.

Application & Logical Database Design

One of the most fundamental problems which can contribute to poor systems performance is poor application design. This could be the design of the logic within the application modules or of the logical data model upon which the database application is based. It is nearly impossible to tune an application when its design is poor. Thus, the best database design and the most expensive hardware resources can all be squandered by poor application design.

Curriculum Note

For these reasons one should consider this course as only one in a series which address various aspects of Oracle database and application performance tuning. One should also consider the Sideris courses Relational Database Design & Data Modeling, Oracle Database 11g: Advanced PL/SQL Programming & Tuning, Oracle Database 11g: Resource Manager & Scheduler and Oracle Database 11g: Performance Tuning.

Performance Metrics

Performance is measured by means of timed statistics, or metrics. These are measurements which are taken for specific elements of processing. The sum total of these values amounts to what are called timings. Once these timings have been computed, one can determine whether a particular statistic indicates acceptable or unacceptable performance. It is especially important to track changes to these metrics: while one could debate whether a particular metric value is acceptable, comparing new and old metric values makes it very obvious whether performance is progressing or regressing. One must have a clear understanding of these metrics and their meaning in order to succeed at any tuning endeavor.

Timed Statistics

Consider some basic operational statistics which are captured to measure elements of database processing.

CPU Statistics

CPU statistics measure CPU usage. Measurements are available for particular functions of the database, such as parsing SQL statements prior to execution or the actual execution of SQL statements. Other CPU statistics may be aggregated to a higher level, such as total CPU consumption for a database session.

I/O Statistics

I/O statistics are measurements of input and output operations performed by the database instance against database files. These may also be known as read and write operations. In the context of an Oracle database, these are mostly I/O operations to the data files and the log files. I/O (read and write) operations may be either logical or physical. Consider a SQL operation which requires data from a table. The SQL statement execution will first attempt to find the required blocks within the database buffer cache. If the blocks are found within memory, a logical read is performed from the buffer cache. If not, a physical read is performed from the tablespaces stored in the data files. Once fetched by a physical read, the block is retained in the buffer cache for later reference by a logical read.

Obviously, a logical read from memory is preferable, as this is faster than a physical read from disk. Thus, in addition to reducing overall reads, another performance goal may be to reduce physical reads in favor of logical reads.

Wait Events

Wait events are actions performed by the database which were momentarily suspended while the instance was waiting for a resource or for an action to complete.

Note: Wait events are among the most important to monitor, and among the most serious performance problems to correct. While CPU and I/O statistics are subject to interpretation as to whether those values are appropriate for the task at hand, a wait event is clearly a symptom of a problem, since the database is forced to wait to perform an action that it is otherwise ready to perform.
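The split between logical and physical reads described above can be observed directly from the instance statistics. The following queries are a sketch against the dynamic performance view V$SYSSTAT; the statistic names shown are the standard ones, but SELECT access to the view (for example via the SELECT ANY DICTIONARY privilege) is assumed.

```sql
-- A sketch, assuming SELECT access to V$SYSSTAT.
-- Logical reads are the sum of 'db block gets' and 'consistent gets';
-- 'physical reads' counts blocks actually fetched from the data files.
SELECT name, value
FROM   v$sysstat
WHERE  name IN ('db block gets', 'consistent gets', 'physical reads');

-- One traditional derived metric: the buffer cache hit ratio, i.e. the
-- fraction of reads satisfied from memory rather than from disk.
SELECT ROUND(1 - phy.value / (db.value + con.value), 4) AS cache_hit_ratio
FROM   v$sysstat phy, v$sysstat db, v$sysstat con
WHERE  phy.name = 'physical reads'
AND    db.name  = 'db block gets'
AND    con.name = 'consistent gets';
```

A high ratio by itself proves little, but in keeping with the point above about tracking changes, comparing the ratio before and after a tuning action is a useful signal of progression or regression.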

Timings

Now consider how these statistics are combined to produce meaningful timings.

Response Time

The ultimate measurement of any database or system processing is response time. This, in turn, is comprised of the following subsidiary elements:

- Service time: the actual processing or computing time necessary to complete a task. If changes can be made which reduce the required service time, then clearly the response time will improve.
- Wait time: the delay during which a task must wait for necessary resources in order to start. Even if the service time were not reduced, reducing or eliminating the wait time would still have a significant impact on response time.

Thus, response time may be improved by improving either or both of the elements which comprise it. Of course, ideally one will seek to reduce both the service time and the wait time.

Database Time

Each phase of SQL statement or database-resident program unit execution is considered to be a call. For example, the parse, execution and fetch phases for a given SQL statement are each considered to be calls, as is each user login event. The service time of a database operation is known as database time, or db time. This is a fundamental measure of database performance. It represents the total amount of time the database instance spent processing database calls, and it is also measured on a per-call basis. Considering what you have learned, database time is the sum of these elements:

- CPU time
- Wait time

Elapsed Time

Elapsed time is also known as clock time. This is simply the duration, as measured in human terms, in which a monitored event occurs.
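Database time and its CPU component are exposed through the database time model. The query below is a sketch against the instance-wide view V$SYS_TIME_MODEL, which reports values in microseconds; SELECT access to the view is assumed, and a per-session equivalent exists as V$SESS_TIME_MODEL.

```sql
-- A sketch, assuming SELECT access to the time-model views.
-- 'DB time' is the total time spent processing database calls;
-- 'DB CPU' is its CPU component. The difference approximates wait time.
SELECT stat_name,
       ROUND(value / 1e6, 2) AS seconds   -- values are in microseconds
FROM   v$sys_time_model
WHERE  stat_name IN ('DB time', 'DB CPU');
```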

Management & Advisory Framework

A very robust framework devoted to database, application and SQL statement performance monitoring and tuning is built into the Oracle database architecture. This is part of a design known as the management and advisory framework. The interface to this framework is the Oracle Enterprise Manager (EM) tool. While primarily a management interface for database administrators, it can also be used by senior application developers for PL/SQL and Java database-resident application module tuning and for SQL statement execution monitoring and tuning. Some of its many duties relate to the collection, consolidation and interpretation of timed statistics and database timings for numerous operations.

How Does This Help?

Specific to SQL statement tuning, advisors exist within this framework which can perform the following tasks using sophisticated logic:

- Isolate tuning problems by identifying poorly performing program units and SQL statements as measured by timed statistics.
- Diagnose the underlying problem by identifying poorly constructed SQL statements which should be rewritten.
- Diagnose logical database design problems which affect SQL statement execution by suggesting logical objects which should be created, dropped or redesigned.

ADDM & AWR

The following are the major components within this framework that make this possible:

- Automatic Workload Repository (AWR): the MMON statistics collection process constantly collects execution statistics for the database and SQL statements and stores these in a self-monitoring performance data warehouse.
- Automatic Database Diagnostics Monitor (ADDM): a highly intelligent analysis tool which mines the data within AWR and produces performance tuning findings. Based upon the findings, recommendations may be produced for the tuning specialist to run selected advisors from the advisory framework.
- Advisors such as the SQL Tuning Advisor or the SQL Access Advisor will recommend specific configuration changes to the database, the application, or both.

While we will discuss more about the capabilities of the Advisory framework, it is important that tuning specialists are able to access these features from the EM interface.

SQL Tuning Privileges

A variety of system and object privileges are required in order to perform SQL statement tuning. Database administrators often have such privileges, with the default administrator account SYSTEM having all of them. But following established security principles, users should only be granted those privileges necessary to perform their duties. Therefore, the privileges required to authorize someone, such as a senior application developer, to use the EM interface and otherwise access the Advisory framework of the Oracle database must be explicitly granted.

Object Privileges

The EXECUTE object privilege is required for any system-supplied package to be used. The packages most often used in the SQL statement tuning effort, and for which the EXECUTE object privilege is required, are as follows:

- DBMS_ADVISOR()
- DBMS_APPLICATION_INFO()
- DBMS_MONITOR()
- DBMS_OUTLN()
- DBMS_OUTLN_EDIT()
- DBMS_SERVER_ALERT()
- DBMS_SQLTUNE()
- DBMS_STATS()
- DBMS_SYSTEM()
- DBMS_WORKLOAD_REPOSITORY()
- DBMS_XPLAN()

EM Database Privileges

The EM interface provides the easiest access to the Advisory framework. Many of the required system-supplied package object privileges can be obtained by authorizing EM access. The following privileges and roles should be considered when one wishes to provide access to the Advisory framework through the EM interface.

OEM_ADVISOR: This role combines the ADVISOR, ADMINISTER SQL TUNING SET and CREATE JOB privileges. Together with the SELECT ANY DICTIONARY system privilege, it provides most of the rights needed for an application developer to perform SQL statement tuning.

ADVISOR: This system privilege essentially provides access to the advisory framework, primarily through the DBMS_ADVISOR() package. However, certain related tasks, such as creating SQL Tuning Sets and SQL Profiles or scheduling SQL Tuning Advisor jobs, require the additional privileges shown below.

ADMINISTER SQL TUNING SET: This system privilege allows one to create and manage SQL Tuning Sets. ADMINISTER ANY SQL TUNING SET allows one to access SQL Tuning Sets in other schemas.

CREATE JOB: This system privilege allows one to create jobs, schedules or programs using the database scheduler. CREATE ANY JOB allows one to create these objects in another schema. Since this latter privilege essentially inherits the privileges of any other user for execution of jobs, it should be carefully guarded.

SELECT ANY DICTIONARY: This system privilege permits SELECT access to any object within the SYS schema, including the actual base tables as well as the more common data dictionary views. This is the most basic system privilege required to log on to EM and utilize its graphical interface.

SELECT_CATALOG_ROLE: This role is more restrictive than SELECT ANY DICTIONARY and can often be used in its place. It provides SELECT access to all data dictionary views but not the underlying base tables.

CREATE ANY SQL PROFILE: This system privilege, along with the related DROP ANY SQL PROFILE and ALTER ANY SQL PROFILE, allows one to create and manage SQL Profiles suggested by the advisory framework.

The following command might be used to authorize an application developer to perform SQL monitoring and tuning through the EM interface.

SQL> CONNECT / AS SYSDBA;
Connected.

SQL> GRANT oem_advisor, SELECT ANY DICTIONARY, CREATE ANY SQL PROFILE, DROP ANY SQL PROFILE, ALTER ANY SQL PROFILE TO student1;
Grant succeeded.

If the same account which is authorized in this way to use the EM interface is also the owner of the application schema or data model, then one will be fully equipped to perform application monitoring, tracing and tuning.
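Once such grants are in place, the developer account can confirm what it received from its own session. A sketch, using the standard SESSION_PRIVS and SESSION_ROLES data dictionary views:

```sql
-- System privileges currently in effect for this session,
-- whether granted directly or through an enabled role.
SELECT privilege FROM session_privs ORDER BY privilege;

-- Roles currently enabled for this session (e.g. OEM_ADVISOR).
SELECT role FROM session_roles ORDER BY role;
```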

Section Review

Remember These Points

- The goals of database application tuning include (1) isolating the real problem, (2) improving response time, (3) identifying any applicable external factors and (4) addressing application and logical database design issues.
- Response time is a combination of service time and wait time.
- Service time is known as database time when measuring the duration of database calls.
- Wait time is one of the most important metrics to measure and reduce.
- Elapsed time is the duration of an operation as measured in human terms.
- The Advisory framework within the Oracle database allows one to reach the goals of database application tuning as these relate to SQL statements.
- Specific privileges may be granted to allow one to access the Advisory framework, either programmatically or through the EM interface.

Workshop Section: Tuning & the Oracle Database Advisory Framework

Exercises

What You Will Do within This Workshop

In this workshop you will:

- Authorize an application developer account to use the Advisory framework and the EM interface.
- Verify the database installation and developer authorization by entering the EM interface.

Advisory Framework-1

Since you are likely using a sample database installation created for the purpose of this course, you could decide to use the default database administrator account SYSTEM and the super administrator account SYS for all the workshop exercises of this course. Alternatively, if you wish to authorize a separate account, you could do so by granting appropriate system privileges to the owner of the sample database schemas. We recommend this latter option.

Answers

SQL> CONNECT / AS SYSDBA;
Connected.

SQL> GRANT oem_advisor, SELECT ANY DICTIONARY, CREATE ANY SQL PROFILE, DROP ANY SQL PROFILE, ALTER ANY SQL PROFILE TO student1;
Grant succeeded.

Advisory Framework-2

If you have successfully completed the installation process, you should be ready to use the Database Control. Launch a browser and enter the URL presented during the installation. If you are able to connect successfully, familiarize yourself with the presentation by navigating to these locations:

- Home page sections
- Performance page
- Availability page
- Server page
- Schema page
- Data Movement page
- Software And Support page

Answers

To connect to the database using the EM Database Control:

1. Open a browser window.
2. In the browser address or URL field enter https://host:port/em, where host represents your database host system name and port represents the Database Control port number reported during the database software installation.
3. Type in the username and password for an administration account, perhaps using the default EM administrator account SYSMAN.
4. If you authorized another account to use the EM interface, then log in again using those credentials. You should be able to peruse the full range of presentation screens using this account as well.

You should see the home page similar to what is shown here, and you can briefly navigate through the interface if you are not already familiar with it.

Advisory Framework-3

In addition to the standard sample data models named COMPANY and ELECTRONICS which are used throughout our curriculum, in this course we will also make periodic use of a more complex data model typical of that found within a data warehouse environment. This warehouse model is named the EQUITIES database. The EQUITIES sample database is based upon a star schema model. Using the same database account in which you already created the other sample databases, execute the EquitiesDefine.sql and EquitiesInsert.sql scripts to also create this advanced model. This will allow us to produce examples and demonstrations during this course from whichever of these data models is best suited to the discussion at hand.

Answers

SQL> CONNECT student1/student1;
Connected.

SQL> @EquitiesDefine.sql
...

SQL> @EquitiesInsert.sql
...

Section 2: Viewing & Monitoring the Execution Plan

Overview

Objectives:

- Learn More About the Execution Plan
- Collecting Operational Statistics
- Viewing the Execution Plan
- Real-Time SQL Monitoring

Section Overview

As you will soon learn, the efficiency with which a SQL statement executes is largely determined by its execution plan. The execution plan, in turn, is generated by a database facility known as the Optimizer. Thus, a critical capability which one must obtain is the ability to view the execution plan selected for a given SQL statement. Within this section we will therefore consider:

- Understanding the basic structure and elements of an execution plan.
- Viewing an execution plan using SQL*Plus, the EM interface and other methods.
- Enabling the SQL*Plus AUTOTRACE facility.
- Isolating SQL statement candidates for tuning using EM features such as Top Activity, Top SQL and Top Sessions.
- Automatic real-time SQL statement execution monitoring.
- Enhancing the database performance statistics collected for evaluation of execution plan efficiency.

Once we consider how one finds and views the selected execution plan for a SQL statement, we will progressively consider exactly what the operations listed within the plan mean. Our ultimate goal will be to ensure that the operations within the overall plan are the best for the SQL statement in question, and if not, to learn to apply techniques by which a better plan can be produced.

About the Execution Plan

The Plan Generator within the Optimizer produces one or more execution plans during the parse phase of SQL statement execution. Rightly or wrongly, one of these plans is ultimately chosen for execution. The execution plan is a tree-structured series of steps which identifies the following elements of the plan:

- Sources: the database objects from which rows will be fetched during execution of the statement.
- Access method: the method by which the rows are fetched from each source.
- Join method: when tables are joined, there are a variety of join methods from which the Optimizer may choose.
- Data operations: operations which may be performed upon the data. For example, a filter operation is used to filter the rows accessed from a table based upon a SQL predicate. A sort operation would be applied when an ORDER BY clause is used. An aggregation operation might be found when a GROUP BY clause exists.

Sample Execution Plan

For example, the following SQL statement might produce the corresponding execution plan which is shown thereafter.

SQL> SELECT LName, DName, Salary
     FROM   employee INNER JOIN department ON dnumber = dno
     WHERE  LName = 'Smith'
     ORDER  BY DName;

LNAME      DNAME           SALARY
---------- --------------- ----------
Smith      Research        30000

Q_PLAN
-----------------------------------------
SELECT STATEMENT
  SORT ORDER BY
    HASH JOIN
      TABLE ACCESS FULL EMPLOYEE
      TABLE ACCESS FULL DEPARTMENT

One must be clear that the internal execution plan for a SQL statement is entirely different from the SQL statement syntax submitted by the user. For example, the following SQL statement is logically identical to the one just shown. However, in this case the statement is built using the older comma-separated join syntax which predates the ANSI SQL92 JOIN notation. Since these are logically identical, the Optimizer correctly selects the exact same execution plan, although the syntax of the two is quite different from each other.

SQL> SELECT LName, DName, Salary
     FROM   employee, department
     WHERE  dnumber = dno
     AND    LName = 'Smith'
     ORDER  BY DName;

LNAME      DNAME           SALARY
---------- --------------- ----------
Smith      Research        30000

Q_PLAN
-----------------------------------------
SELECT STATEMENT
  SORT ORDER BY
    HASH JOIN
      TABLE ACCESS FULL EMPLOYEE
      TABLE ACCESS FULL DEPARTMENT

Reading an Execution Plan

The operations are read from right-to-left and from top-to-bottom. For example, starting from the rightmost operation, and then the top, the operation TABLE ACCESS FULL EMPLOYEE is found. This is executed first. Starting from the right and the top once again, the next operation found is TABLE ACCESS FULL DEPARTMENT, which is executed next as the plan is invoked. Following the same logic, the results from these first two operations are input to a HASH JOIN operation. This is followed by a SORT induced by the ORDER BY clause. Lastly, the columns projected from the final result table are selected and output.

Using the Execution Plans for Tuning

Actions taken by the database administrator and by the application developer can affect the execution plans generated and selected by the Optimizer. Thus, one of the primary methods by which the developer will tune a SQL statement will be to examine the execution plan generated by the database for a given SQL statement. Similarly, the Advisors and other sophisticated tuning tools will likewise examine the operations within the plan. By working with the database administrator where necessary, and by modifying the SQL statement and other database objects, one hopes to induce the Optimizer to select a better execution plan.

Changes to the Oracle software from one version of the database to another might also affect the execution plan produced for a given SQL statement. Therefore, SQL statement tuning should be performed even for previously tuned applications whenever a new database version is installed, to ensure that the optimum execution plan available is being produced. However, a mechanism known as plan stability or stored outlines, discussed later within this course, provides a means to insulate applications from such database version changes.

Collecting Performance Statistics

The database instance, the user sessions operating the SQL statements, or both may be placed into hypersensitive modes in which a large number of database performance statistics are collected and made available for tuning analysis. Normally one will not want to utilize these modes, since they are cost prohibitive and will interfere with the operation of the database. However, while one is focused on the tuning effort, these modes provide a wealth of added operational information which is not normally available.

Database Performance Statistics

The STATISTICS_LEVEL database parameter may be set to either increase or decrease the set of performance statistics collected by the database instance from its default value. For the reasons explained, one will want to increase the statistics collection to its highest level only during focused database tuning and troubleshooting periods. Options available for this parameter are:

Value      Explanation

BASIC - This retains the least amount of performance statistics and essentially disables all of the proactive manageability features discussed in this section. This setting is not recommended.

TYPICAL - This is the default value and it captures all of the performance statistics generally needed for the Management and Advisory framework to maintain a healthy database.

ALL - This is a very costly option which should only be employed for a limited period of time and when necessary. Additional detailed statistics regarding the operating system and individual SQL statement execution are added to those normally collected.

One might modify this setting using the SQL interface and the ALTER SYSTEM command.

SQL> CONNECT dba1/dba1;
Connected.

SQL> ALTER SYSTEM SET statistics_level = ALL SCOPE = BOTH;
System altered.

SQL Tracing

In a similar manner, detailed performance statistics on the execution of individual SQL statements may be collected by enabling SQL tracing using the SQL_TRACE database parameter. This may be set at the instance level for all sessions by using the ALTER SYSTEM command and restarting the database instance. For reasons of performance, however, this is not recommended on a production database.

Another alternative is to enable SQL tracing at the session level, and only for as long as the SQL tuning effort is ongoing. This may be done using the ALTER SESSION command:

SQL> CONNECT student1/student1;
Connected.

SQL> ALTER SESSION SET SQL_TRACE = TRUE;
Session altered.

Note

While the SQL_TRACE parameter is still supported for the sake of backward compatibility, and we will make use of it initially within this course for the sake of simplicity, its use is deprecated. In a production environment it should be replaced with usage of the system-supplied packages DBMS_MONITOR() and DBMS_SESSION(). The use of these packages is discussed later within this course.
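As a preview of that recommended replacement, session tracing might be enabled along the following lines. This is a sketch only; both packages are covered in detail later within this course, and the SID and SERIAL# values shown are purely illustrative:

```sql
-- Enable tracing for the current session, including wait events
-- but not bind variable values:
EXECUTE DBMS_SESSION.SESSION_TRACE_ENABLE(waits => TRUE, binds => FALSE);

-- A DBA may instead trace another session, identified by its SID and
-- SERIAL# (the values 123 and 456 here are illustrative placeholders):
EXECUTE DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 123, serial_num => 456, waits => TRUE, binds => FALSE);

-- Disable tracing when the tuning effort is complete:
EXECUTE DBMS_SESSION.SESSION_TRACE_DISABLE;
```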

Viewing the Execution Plan

There are several utilities which allow one to view execution plans. In order to do so, within the application schema one must create a utility table named PLAN_TABLE. This table will be used to hold the execution plans. Any of the utilities we will discuss which show and explain the plan will be able to do so using this table.

Create PLAN_TABLE

The PLAN_TABLE table may be created in an application schema by executing the script utlxplan.sql. The exact name and location of this script is operating system and database installation dependent, but it is found in the standard location where database administrator scripts are located.

SQL> CONNECT student1/student1;
Connected.

SQL> @ C:\app\Administrator\product\11.2.0\dbhome_1\RDBMS\ADMIN\utlxplan.sql
Table created.

SQL> DESCRIBE plan_table
Name                                      Null?    Type
----------------------------------------- -------- -----------------
STATEMENT_ID                                       VARCHAR2(30)
PLAN_ID                                            NUMBER
TIMESTAMP                                          DATE
...

Populate & Examine PLAN_TABLE

There are several methods by which the plan of a given SQL statement may be stored and examined. The method you choose will be determined by the amount and format of the information you require, and the environment in which you are obtaining and examining the plan information.

The SQL Statement EXPLAIN PLAN

The most primitive method is to use the EXPLAIN PLAN command. This command must specify a unique STATEMENT_ID value for the SQL statement, which will then allow one plan within the PLAN_TABLE structure to be distinguished from another. Generally one will only find this method employed in SQL tuning utilities which may have been created in a legacy installation of the database.

Generating & Storing the Plan

This statement will generate an execution plan for a SQL statement and store it within the table PLAN_TABLE under a unique statement ID.

SQL> EXPLAIN PLAN
     SET statement_id = 'TEST' FOR
     SELECT LName, DName, Salary
     FROM   employee INNER JOIN department ON dnumber = dno
     WHERE  LName = 'Smith'
     ORDER  BY DName;
Explained.

Viewing the Plan

Referencing the same STATEMENT_ID, the execution plan for the statement may be fetched from PLAN_TABLE and formatted.

SQL> SELECT LPAD(' ',2*LEVEL) || operation || ' ' ||
            options || ' ' || object_name Q_PLAN
     FROM   plan_table
     WHERE  statement_id = 'TEST'
     CONNECT BY PRIOR id = parent_id AND statement_id = 'TEST'
     START WITH id = 0;

Q_PLAN
-----------------------------------------
SELECT STATEMENT
  SORT ORDER BY
    HASH JOIN
      TABLE ACCESS FULL EMPLOYEE
      TABLE ACCESS FULL DEPARTMENT

Deleting the Plan Details

Thereafter, the individual rows for the statement may be deleted from PLAN_TABLE if desired.

SQL> DELETE FROM plan_table WHERE statement_id = 'TEST';
5 rows deleted.

The utlxpl*.sql Scripts

Another method involves executing system-supplied scripts. These will output the most recent execution plan entered into the PLAN_TABLE:

- utlxpls.sql, which will examine the execution plan for serial SQL statements
- utlxplp.sql, which will examine the execution plan for parallelized SQL statements

It is still good practice to delete the old rows from the table PLAN_TABLE. And once again the exact name and location of these files are operating system and installation dependent:

SQL> EXPLAIN PLAN
     SET statement_id = 'TEST' FOR
     SELECT LName, DName, Salary
     FROM   employee INNER JOIN department ON dnumber = dno
     WHERE  LName = 'Smith'
     ORDER  BY DName;
Explained.

SQL> @ C:\app\Administrator\product\11.2.0\dbhome_1\RDBMS\ADMIN\utlxpls.sql
...

Viewing the Output

The output generated by this method clearly offers much more information than the previous one. While the same operations are presented in the same sequence, the execution operation is clearly distinguished from the database object name upon which the operation occurred. Important performance information is also shown, including the number of rows and bytes involved in each operation, the relative cost in terms of CPU time, and the elapsed time.

SQL> @ C:\app\Administrator\product\11.2.0\dbhome_1\RDBMS\ADMIN\utlxpls.sql
...

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------
Plan hash value: 1937138517

----------------------------------------------------------------------------------
| Id  | Operation           | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |            |     1 |    30 |     8  (25)| 00:00:01 |
|   1 |  SORT ORDER BY      |            |     1 |    30 |     8  (25)| 00:00:01 |
|*  2 |   HASH JOIN         |            |     1 |    30 |     7  (15)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL| EMPLOYEE   |     1 |    14 |     3   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| DEPARTMENT |     3 |    48 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("DNUMBER"="DNO")
   3 - filter("LNAME"='Smith')

Note
-----
   - dynamic sampling used for this statement (level=2)

Also of use in this output are predicates which are an integral part of certain operations. For instance, plan ID number 3 performs a full table scan but filters for those rows with an LNAME value of Smith.

At this point we can start to demonstrate how the execution plan might be analyzed by one performing SQL tuning. The EMPLOYEE table is filtered first and therefore becomes the driving table for the join with the DEPARTMENT table. This is exactly the behavior that we want, since it results in a very small join to the DEPARTMENT table using only those specific EMPLOYEE table rows which are needed.

The System-Supplied Package DBMS_XPLAN()

Yet another method, producing the same output, involves calling the DISPLAY() table function from the DBMS_XPLAN() system-supplied package. It effectively executes the same statement as that contained within the utlxpl*.sql scripts.

SQL> SELECT * FROM table(dbms_xplan.display);

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------
Plan hash value: 1937138517

----------------------------------------------------------------------------------
| Id  | Operation           | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |            |     1 |    30 |     8  (25)| 00:00:01 |
|   1 |  SORT ORDER BY      |            |     1 |    30 |     8  (25)| 00:00:01 |
|*  2 |   HASH JOIN         |            |     1 |    30 |     7  (15)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL| EMPLOYEE   |     1 |    14 |     3   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| DEPARTMENT |     3 |    48 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("DNUMBER"="DNO")
   3 - filter("LNAME"='Smith')

Note
-----
   - dynamic sampling used for this statement (level=2)

21 rows selected.

Controlling the Level of Detail

The reason one might employ the DBMS_XPLAN() call is that the DISPLAY() function can accept several parameter values to indicate the level of detail desired. In the example shown above, the default detail level TYPICAL was implied. However, in the next example, we revert to just the basic operations within the plan.

SQL> SELECT *
     FROM   table (dbms_xplan.display
                    (table_name   => NULL,
                     statement_id => NULL,
                     format       => 'BASIC') );

PLAN_TABLE_OUTPUT
---------------------------------------------------------------------
Plan hash value: 1937138517

-------------------------------------------
| Id  | Operation           | Name        |
-------------------------------------------
|   0 | SELECT STATEMENT    |             |
|   1 |  SORT ORDER BY      |             |
|   2 |   HASH JOIN         |             |
|   3 |    TABLE ACCESS FULL| EMPLOYEE    |
|   4 |    TABLE ACCESS FULL| DEPARTMENT  |
-------------------------------------------

11 rows selected.

The most detailed display is produced using the ALL parameter value, as is shown next.

SQL> SELECT *
     FROM   table (dbms_xplan.display
                    (table_name   => NULL,
                     statement_id => NULL,
                     format       => 'ALL') );

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------
Plan hash value: 1821410742

----------------------------------------------------------------------------------
| Id  | Operation           | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |            |     1 |    55 |     6  (34)| 00:00:01 |
|   1 |  SORT ORDER BY      |            |     1 |    55 |     6  (34)| 00:00:01 |
|*  2 |   HASH JOIN         |            |     1 |    55 |     5  (20)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL| EMPLOYEE   |     1 |    33 |     2   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| DEPARTMENT |     3 |    66 |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$58A6D7F6
   3 - SEL$58A6D7F6 / EMPLOYEE@SEL$1
   4 - SEL$58A6D7F6 / DEPARTMENT@SEL$1

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("DNUMBER"="DNO")
   3 - filter("EMPLOYEE"."LNAME"='Smith')

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - (#keys=1) "DEPARTMENT"."DNAME"[VARCHAR2,15],
       "EMPLOYEE"."LNAME"[VARCHAR2,10], "EMPLOYEE"."SALARY"[NUMBER,22]
   2 - (#keys=1) "EMPLOYEE"."LNAME"[VARCHAR2,10],
       "EMPLOYEE"."SALARY"[NUMBER,22], "DEPARTMENT"."DNAME"[VARCHAR2,15]
   3 - "EMPLOYEE"."LNAME"[VARCHAR2,10], "EMPLOYEE"."SALARY"[NUMBER,22],
       "DNO"[NUMBER,22]
   4 - "DEPARTMENT"."DNAME"[VARCHAR2,15], "DNUMBER"[NUMBER,22]

Note
-----
   - dynamic sampling used for this statement (level=2)

Later within this course you will learn how to utilize these additional elements of the execution plan report.

Note

The advantage of the methods for generating the execution plan shown thus far is that although the plan is generated, the SQL statement is not actually executed. While there may be cases in which you will want to actually execute the statement as well and measure the timed statistics generated, in many cases it will be advantageous to examine and tune the execution plan initially without executing the statement.

SQL*Plus Command AUTOTRACE

One can operate a SQL*Plus session in AUTOTRACE mode as a means of automatically producing the Optimizer execution plan and execution statistics immediately after every SQL statement is issued and executed. When developing or debugging application SQL statements this offers a convenient means of obtaining SQL execution summary information quickly and easily.

Enabling the Utility

First one must create the PLUSTRACE role within the database. This may be done by executing the SQL*Plus administration script plustrce.sql while connected as the user SYS. The exact location and name of this file is version and operating system dependent:

SQL> CONNECT / AS SYSDBA;
Connected.

SQL> @ $ORACLE_HOME\sqlplus\admin\plustrce.sql
SQL> drop role plustrace;
drop role plustrace
*
ERROR at line 1:
ORA-01919: role 'PLUSTRACE' does not exist

SQL> create role plustrace;
Role created.
SQL> grant select on v_$sesstat to plustrace;
Grant succeeded.
SQL> grant select on v_$statname to plustrace;
Grant succeeded.
SQL> grant select on v_$mystat to plustrace;
Grant succeeded.
SQL> grant plustrace to dba with admin option;
Grant succeeded.

Note

Note that the directory which contains the SQL*Plus administration scripts is different from the directory which contains the database administration scripts. You are more likely to have used the latter at some point in time, but the directory which contains the script in question here is different.

Next, one must grant the PLUSTRACE role to each user who will use the AUTOTRACE feature:

SQL> GRANT plustrace TO student1;
Grant succeeded.

Set AUTOTRACE Options

One can now enable the AUTOTRACE option within a database session by means of the SQL*Plus command SET AUTOTRACE. Note the available options:

Option                        Explanation

SET AUTOTRACE ON EXPLAIN - This option shows the execution plan, including predicate information and any applicable notes.

SET AUTOTRACE ON STATISTICS - This option shows only the execution Statistics section of the report.

SET AUTOTRACE TRACEONLY - This is similar to the default SET AUTOTRACE ON and SET AUTOTRACE ON EXPLAIN STATISTICS, with the exception that this option does not include the actual query result output. Only the full set of execution plan details are displayed.

SET AUTOTRACE OFF - Disables AUTOTRACE for the session.

If one simply issues the SET AUTOTRACE ON command, this is equivalent to SET AUTOTRACE ON EXPLAIN STATISTICS.

Using AUTOTRACE

Consider a sample database session which uses this feature:

SQL> SET AUTOTRACE ON;
SQL> SELECT LName, dependent_name
     FROM   employee INNER JOIN dependent ON ssn = essn
     ORDER  BY LName;

LNAME      DEPENDENT_
---------- ----------
Smith      Alice
Smith      Elizabeth
Smith      Michael
...

Execution Plan
----------------------------------------------------------
Plan hash value: 1434111244

---------------------------------------------------------------------------------
| Id  | Operation           | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
---------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |           |     7 |   252 |     8  (25)| 00:00:01 |
|   1 |  SORT ORDER BY      |           |     7 |   252 |     8  (25)| 00:00:01 |
|*  2 |   HASH JOIN         |           |     7 |   252 |     7  (15)| 00:00:01 |
|   3 |    TABLE ACCESS FULL| DEPENDENT |     7 |   126 |     3   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| EMPLOYEE  |     8 |   144 |     3   (0)| 00:00:01 |
---------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   2 - access("SSN"="ESSN")

Note
-----
   - dynamic sampling used for this statement

Statistics
----------------------------------------------------------
        327  recursive calls
          0  db block gets
         64  consistent gets
          0  physical reads
          0  redo size
        598  bytes sent via SQL*Net to client
        381  bytes received via SQL*Net from client
          2  SQL*Net roundtrips to/from client
         11  sorts (memory)
          0  sorts (disk)
          7  rows processed

V$SQL_PLAN View

One can directly examine the execution plan of any SQL statement which has been executed and is currently stored within the SQL cursor cache. This may be done using the SQL interface and accessing the view V$SQL_PLAN. This view is comparable to the structure of the table PLAN_TABLE, and as such it can be viewed using an alternate program unit within the DBMS_XPLAN() package. Notice how the SQL ID for the cursor can be used with a call to the DISPLAY_CURSOR() program unit, similar to the DISPLAY() example shown earlier.

SQL> SELECT *
     FROM   table (dbms_xplan.display_cursor
                    (sql_id          => '4fy39u3m71t03',
                     cursor_child_no => NULL,
                     format          => 'ALL') );

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------
Plan hash value: 1821410742

----------------------------------------------------------------------------------
| Id  | Operation           | Name       | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT    |            |     1 |    55 |     6  (34)| 00:00:01 |
|   1 |  SORT ORDER BY      |            |     1 |    55 |     6  (34)| 00:00:01 |
|*  2 |   HASH JOIN         |            |     1 |    55 |     5  (20)| 00:00:01 |
|*  3 |    TABLE ACCESS FULL| EMPLOYEE   |     1 |    33 |     2   (0)| 00:00:01 |
|   4 |    TABLE ACCESS FULL| DEPARTMENT |     3 |    66 |     2   (0)| 00:00:01 |
----------------------------------------------------------------------------------
...

Note

While one can obtain the SQL ID for statements currently being executed or currently stored in the SQL cursor cache by directly querying dynamic performance views, the easiest method is to use the EM interface, as is demonstrated within this section.
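For reference, such a dynamic performance view query might look as follows. This is a sketch only; the LIKE pattern is illustrative and assumes the example statement is still present in the cursor cache:

```sql
-- Find the SQL ID of a cached statement by matching its text in V$SQL
-- (requires SELECT privilege on the view):
SELECT sql_id, child_number, sql_text
FROM   v$sql
WHERE  sql_text LIKE 'SELECT LName, DName, Salary%';
```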

Using EM

The most comprehensive look at a SQL statement, its execution plan, and the context in which it is executing is available from EM. EM is an elaborate and extensive monitoring, advisory and administration interface to the database.

While there are almost a limitless number of paths through which one might navigate to obtain SQL statement execution statistics and execution plans, the following demonstration of one such path may be useful.

Home Page, Active Sessions

From the database home page one can click on one of the Active Sessions links to obtain information about database sessions currently running and active at this moment.

For example, sessions currently waiting for resources or performing I/O may be viewed by clicking on the Wait or User I/O links respectively. In this case, we want to view database sessions currently and actively consuming CPU resources, thus likely executing a SQL statement at this moment. Therefore, we will click on the Active Sessions CPU link.

Top Activity

The top database activity is displayed. One can see which SQL statements are consuming the most database resources from the Top SQL list. One can see which sessions are doing the same in the Top Sessions list.

The logic behind this selective display of sessions is that one should concentrate their tuning efforts on those SQL statements or sessions which will have the greatest impact on performance. Even a marginal improvement in a SQL statement which executes thousands of times an hour can matter more than a large improvement to a highly inefficient statement which only rarely executes. Therefore the Top Activity screen shows both the Top SQL statements and the Top Sessions issuing those statements.

Search Sessions

Suppose one wanted to track the activity of a particular database session which does not happen to appear in the list of top activity at the moment one is viewing these monitors. One could isolate any current database session by clicking on the Search Sessions link found within the Additional Monitoring Links section.

As you can see, this offers two methods to identify the sessions you wish to follow. The Specify Search Criteria option provides a point-and-click method of searching the V$SESSION view using a single search column. In our example, we use the attribute DB User and provide the search criteria "student%" to find all sessions with that database user name prefix.

The second method, Specify Search Criteria Using WHERE Clause, allows one to customize a SQL statement against this same view.

However one has defined the desired session list, in this screen you can see that the Results now include sessions for the user STUDENT1. Click on the appropriate SID link to drill into more details about this particular session.

Session Details (General)

There is a wealth of useful information presented for each session, segmented by the links on the top of the page:

- General
- Activity
- Statistics
- Open Cursors
- Blocking Tree
- Wait Event History
- Parallel SQL
- SQL Monitoring

Session Details (Activity)

The Activity link presents database activity for this session. The display is similar to the summary Top Activity information shown earlier, except that specific database measures and SQL statements are graphically plotted and pertain only to this session.

One also has the option to click on the SQL ID link to view an actual SQL statement currently active in the session. It is the actual SQL statements which we are most interested in, of course, in our discussion of SQL statement tuning.

Session Details (Statistics)

The Statistics link shows a host of detailed database performance statistics for the session. While these are largely of interest to database administrators, there are several statistics which will be of interest to developers and SQL tuners, which you will recognize in time.

Session Details (Blocking Tree)

Skipping past the Open Cursors link for the moment, you will see in the screen example shown here that the Blocking Tree link displays locks currently held by the database session. These can be very useful in understanding how locking and waits affect the performance of the application.

Session Details (Wait Event History)

The Wait Event History is even more directly suited to administrators. It indicates the impact that components throughout the database installation have in producing wait events for this session.

Session Details (Open Cursors)

But of primary interest to us is the information presented by means of the Open Cursors link. This displays the set of SQL statements currently stored within the SQL cache of the database. If you recall the Activity link, that page showed SQL statements which were currently executing. The Open Cursors link will show inactive statements that may not be executing at this moment, but which remain cached and ready for execution.

In any event, by clicking on the SQL ID of the particular SQL statement we want to examine, we are able to obtain the actual SQL details we are seeking in this demonstration. In fact, you may have noticed that in several places throughout our navigation sequence the SQL ID link was presented.

SQL Details (Statistics)

The SQL Details screen allows us to drill into the detailed database performance statistics consumed by a single SQL statement. For instance, in addition to seeing the SQL text, we can readily determine how many times this statement has executed (Total Parses) and the database resources being consumed (Activity By Waits, Activity By Time, Elapsed Time Breakdown, Other Statistics).You can see that this display also includes a series of links providing a large amount of information for the SQL statement. At this moment we are mostly interested in the execution plan selected by the database Optimizer for this SQL statement, as presented by the Plan link. However, the Plan Control and Tuning History links are also of interest to SQL tuners.

SQL Details (Plan)

From the Plan page the critical information we have been seeking is displayed: the execution plan generated by the Optimizer. Detailed cost information is available for each step within the plan.

One could investigate the reasons for the information presented using several paths available from this page. One could schedule a SQL Tuning Advisor task to obtain advice about modifications which might improve the execution plan. Or one could examine the database objects included in the plan by clicking on the appropriate Object link.

The execution plan is available in two formats. In the example shown above, it is listed in the traditional tabular format, similar to what is seen outside of the EM environment.

But one also has the option to view the execution plan in graphical form, which is fascinating. It shows the progression of steps in the execution plan, including the sources and the join methods for each step. The number of rows which are included within each step is also shown. A sophisticated navigation pane allows one to focus on selected elements within the plan.

Note

Not all browsers and client-side operating systems support the graphical view option. When this is the case one should select the traditional tabular view of the plan.

Real-Time SQL Monitoring

Based upon what has been discussed, consider the various ways in which SQL statements may be identified and monitored from the EM interface:

- Performance > Top Activity > SQL ID - this lists the top consuming SQL statements currently active in the instance.
- Performance > Search Sessions > SID > Open Cursors > SQL ID - this lists the top consuming SQL statements currently cached in the instance, although they may not at the moment be active.

About Real-Time Automatic SQL Monitoring

Yet another way exists in the EM interface whereby critical SQL statements may be identified. Anytime a single SQL statement meets any one of the following criteria, it is placed on the monitoring list represented by the V$SQL_MONITOR view:

- Executes in parallel
- Consumes 5 or more seconds of either CPU or I/O time in a single execution
- Is explicitly placed on the list by means of the MONITOR hint

Together with other related views such as V$SQL_PLAN_MONITOR, real-time monitoring will display the execution plan and timed statistics, updated on a real-time basis, as the statement executes. The database thus automatically selects and monitors execution of crucial SQL statements as each step of the execution plan is processed.

Note

This feature is available in Oracle 11g and later releases of the database.
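As an illustration of the last criterion, the MONITOR hint and the monitoring view might be used as follows. This is a sketch only; the column list shown is a small subset of those available in V$SQL_MONITOR:

```sql
-- Explicitly place one statement on the monitoring list:
SELECT /*+ MONITOR */ LName, DName, Salary
FROM   employee INNER JOIN department ON dnumber = dno;

-- Examine the monitoring list directly, outside of EM:
SELECT sql_id, status, elapsed_time, cpu_time, buffer_gets
FROM   v$sql_monitor;
```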

This list may be examined within EM by clicking on the Performance > SQL Monitoring link.

Using this display one can see how the theoretical elements of timed statistics and database timings are applied as actual measurements presented on this screen. Consider these elements visible from the Monitored SQL Executions screen:

- Status - this indicates whether or not the SQL statement is currently active or running. Even if idle, it will likely remain in the SQL statement cache for a period of time. It will also indicate if the SQL statement failed, and if so, the database error which was the result.
- Duration - this is the elapsed time in which the statement most recently executed. It is based upon the Start and Ended attributes.
- SQL ID - this is a unique identification number assigned to each SQL statement cursor within the SQL statement cache.
- Parallel - this reveals whether or not the statement execution has been parallelized, and if so, the degree of parallelism.
- Database Time - this is the all-important database time. It isolates the elements of CPU time, I/O wait time and other wait times.
- I/O Requests - these are physical reads and writes performed during SQL statement execution and, along with Database Time, represent a key performance metric. One may right mouse-click to toggle the statistic to I/O bytes, or the amount of physical data I/O.
- SQL Text - this is the actual SQL code submitted by the user or application for execution.

Notice in this alternate display of the Monitored SQL Executions screen some additional elements discussed:

- Status - you can see how some executions are underway, with the timed statistics still being accumulated.
- Parallel - one execution has a degree of parallelism of 2.

Clicking on the Status link will produce the Monitored SQL Execution Details page.

From the Monitored SQL Execution Details page, important execution statistics are presented, such as Time & Wait Statistics, I/O Statistics and CPU consumption. Once again, drawing from our discussion of database performance metrics, note these items:

- SQL ID - clicking on the SQL cursor ID provides another path to the SQL Details screen, discussed earlier.
- IO Statistics - IO Requests show physical read requests and bytes, physical write requests and bytes, and logical I/O from the database buffer cache in the form of Buffer Gets. Thus, one can not only observe various metrics of I/O throughput but also quickly compare logical versus physical reads.

Also important are the Details buttons Plan Statistics and Activity. As you can see in the example of the Plan Statistics shown, statistics consumed by each step of the plan are merged with the execution plan and then shown in a highly readable graphical format. You will learn more about the operations listed later in this course.

The Activity button will graphically plot Database Time elements over a timeline, placing these in the context of total available resources and other concurrent database sessions. Thus, one will see statistics for CPU usage together with all of the individual wait events which comprise the overall Database Time.

One can also obtain similar information in an elongated and consolidated report by clicking on the View Report button.

Session Details, SQL Details
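The consolidated report produced by the View Report button can also be generated from the command line with the DBMS_SQLTUNE.REPORT_SQL_MONITOR function. A sketch follows; the sql_id value shown is hypothetical and would be taken from V$SQL_MONITOR or the EM screen.

```sql
-- Sketch: produce the consolidated SQL monitoring report from SQL*Plus.
-- The sql_id literal here is hypothetical; substitute one from V$SQL_MONITOR.
SET LONG 1000000 LONGCHUNKSIZE 1000000 LINESIZE 200
SELECT DBMS_SQLTUNE.REPORT_SQL_MONITOR(
         sql_id       => 'an05rsj1b6b2g',
         type         => 'TEXT',
         report_level => 'ALL') AS report
  FROM dual;
```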

Both the Session Details and the SQL Details screens include a SQL Monitoring link. If statements for the session or the SQL cursor have been placed on the monitoring list, real-time monitoring information is repeated from these locations.

Section Review

Remember These Points
- The execution plan for a SQL statement describes the sources, access methods, join methods and data operations which are used to execute the statement.
- Enhanced database performance statistics collection may be enabled using the STATISTICS_LEVEL database parameter.
- Enhanced SQL statement execution statistics collection may be enabled using the SQL_TRACE database and session parameter.
- The execution plan for a statement may be fetched from the table PLAN_TABLE using the SQL command EXPLAIN PLAN.
- An execution plan may be displayed using the utlxpl*.sql administrator scripts.
- An execution plan may be displayed using the DBMS_XPLAN() system-supplied package.
- The execution plan of each SQL statement executed may be displayed immediately after the results using the SQL*Plus AUTOTRACE option.
- An execution plan may be presented on the SQL Details page of the EM interface.
- The EM interface will identify SQL statements which should be scrutinized for tuning by means of the Active Sessions, Top Activity, Top SQL, Top Sessions, Session Details and automatic SQL monitoring pages.
- The database maintains a monitoring list of SQL statements which should be examined closely as candidates for tuning. This is accessible from the Performance page of the EM interface and from the V$SQL_MONITOR view.
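The SQL*Plus AUTOTRACE option mentioned in the review points can be enabled as sketched below. AUTOTRACE requires a PLAN_TABLE and, for the statistics portion, the PLUSTRACE role; any query may be substituted for the one shown.

```sql
-- Sketch: display the plan and execution statistics after each result set.
SET AUTOTRACE ON EXPLAIN STATISTICS

SELECT COUNT(*) FROM employee;   -- plan and statistics follow the result

SET AUTOTRACE OFF                -- disable when finished
```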

Workshop Section: Viewing & Monitoring the Execution Plan

Exercises

What You Will Do within This Workshop
Within this workshop you will:
- Request both enhanced database performance statistics collection and SQL tracing for the database instance.
- Create a plan table within your database ID.
- Using the EXPLAIN PLAN command and other methods for a hypothetical SQL statement, generate a plan and store it within the plan table.
- Examine and interpret the execution plans.
- Use the EM interface to follow the execution of a SQL statement, first at the session level and later at the SQL cursor level.
- Monitor SQL statement execution statistics, hard parses and other internal mechanisms of the SQL statement as it executes.
- Examine the automatic SQL monitoring list and the elaborate set of database performance statistics and execution plan details which are produced.

Note: These workshops assume that each student is using their own dedicated database instance. If you are using a shared database installation, then you will need to make appropriate adjustments to some of the exercises, with only one student performing certain steps on behalf of the installation. Furthermore, it is assumed that the workshop database is not a production database, as we will deliberately induce serious performance bottlenecks for the purpose of the learning exercises.

View Plan-1 Configure the database instance for enhanced collection of both database statistics and SQL tracing.

Answers
One may enable enhanced collection of database statistics with the following parameter setting. The setting will not take effect until the instance is restarted. Therefore, at the end of this exercise, you will need to restart the instance.

SQL> CONNECT dba1/dba1;
Connected.

SQL> ALTER SYSTEM SET statistics_level = ALL SCOPE = BOTH;
System altered.

The following command issued from this same session will enable SQL tracing. As mentioned, later within this course we will explore use of the preferred alternatives DBMS_MONITOR() and DBMS_SESSION(), but for now we will utilize this feature, which remains supported.

SQL> ALTER SYSTEM SET SQL_TRACE = TRUE SCOPE = BOTH;
System altered.

As you can see, these commands must be issued from a database administrator account. Finally, restart the database instance.

SQL> CONNECT / AS SYSDBA;
Connected.

SQL> SHUTDOWN IMMEDIATE;
Database closed.
Database dismounted.
ORACLE instance shut down.

SQL> STARTUP;
ORACLE instance started.
Total System Global Area  849530880 bytes
Fixed Size                  1303216 bytes
Variable Size             587205968 bytes
Database Buffers          255852544 bytes
Redo Buffers                5169152 bytes
Database mounted.
Database opened.

SQL>
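The preferred DBMS_MONITOR() alternative mentioned in the answer above enables tracing for a single session rather than instance-wide. A minimal sketch follows; the SID and SERIAL# values are hypothetical and would be looked up in V$SESSION first.

```sql
-- Sketch: session-level tracing via DBMS_MONITOR (preferred over SQL_TRACE).
-- The session_id / serial_num values are hypothetical; query V$SESSION first.
BEGIN
  DBMS_MONITOR.SESSION_TRACE_ENABLE(
    session_id => 135,
    serial_num => 42,
    waits      => TRUE,    -- include wait events in the trace file
    binds      => FALSE);  -- omit bind variable values
END;
/

-- ... run the workload to be traced, then disable tracing:
BEGIN
  DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 135, serial_num => 42);
END;
/
```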

View Plan-2 Create a database session for the account in which the sample database schemas have been created. Create the PLAN_TABLE as follows:
- Locate the utlxplan.sql script on your database server.
- Execute the script to create the PLAN_TABLE table within your user ID.
- Describe the PLAN_TABLE to confirm that it has been created.

Answers
SQL> CONNECT student1/student1
Connected.

SQL> @ C:\app\Administrator\product\11.2.0\dbhome_1\RDBMS\ADMIN\utlxplan.sql
Table created.

SQL> DESCRIBE plan_table
 Name                                    Null?  Type
 --------------------------------------- ------ ---------------
 STATEMENT_ID                                   VARCHAR2(30)
 PLAN_ID                                        NUMBER
 TIMESTAMP                                      DATE
 ...

View Plan-3 Consider the following query, which fetches project information for employees. You will notice that a number of tables are referenced, and that the query limits the result to employees who earn more than $30,000 and who have at least one dependent.

Use the EXPLAIN PLAN command to create a plan for the following SQL statement. Specify a unique statement ID as part of the EXPLAIN PLAN command.

SQL> SELECT LName, project.PName
       FROM employee
       INNER JOIN works_on ON works_on.essn = employee.ssn
       INNER JOIN project ON project.pnumber = works_on.pno
      WHERE employee.Salary > 30000
        AND EXISTS (SELECT * FROM dependent WHERE essn = employee.ssn);

LNAME      PNAME
---------- ---------------
Wong       ProductY
Wong       ProductZ
Wong       Computerization
Wallace    Reorganization
Wong       Reorganization
Wallace    Newbenefits

Answers
SQL> EXPLAIN PLAN SET statement_id = 'TEST' FOR
       SELECT LName, project.PName
       FROM employee
       INNER JOIN works_on ON works_on.essn = employee.ssn
       INNER JOIN project ON project.pnumber = works_on.pno
       WHERE employee.Salary > 30000
       AND EXISTS (SELECT * FROM dependent WHERE essn = employee.ssn);
Explained.

View Plan-4 Using the SELECT statement shown in your lecture notes, manually select the rows from the PLAN_TABLE to view the execution plan. Thereafter be sure to delete these rows by referring to the appropriate statement ID.

Answers
Although this is probably the least usable option for examining an execution plan, try it for the sake of completeness. Such primitive methods are sometimes useful in understanding the underlying elements in use; in a later exercise you will employ more flexible methods.

SQL> SELECT LPAD(' ', 2*LEVEL) || operation || ' ' || options || ' ' || object_name q_plan
       FROM plan_table
      WHERE statement_id = 'TEST'
      CONNECT BY PRIOR id = parent_id AND statement_id = 'TEST'
      START WITH id = 0;

Q_PLAN
---------------------------------------------------
  SELECT STATEMENT
    HASH JOIN
      HASH JOIN
        HASH JOIN SEMI
          TABLE ACCESS FULL EMPLOYEE
          TABLE ACCESS FULL DEPENDENT
        TABLE ACCESS FULL WORKS_ON
      TABLE ACCESS FULL PROJECT

8 rows selected.

SQL> DELETE FROM plan_table WHERE statement_id = 'TEST';
8 rows deleted.

Note that the specific execution plans generated on your database installation may differ from those shown in our examples.

View Plan-5 Try to interpret the results of the plan shown in the PLAN_TABLE. Depending upon the current configuration of your database, the results may differ from those shown in the lecture notes, but to the degree that you can, try to understand them. Which operations are performed first, and upon which objects?

Answers
Carefully interpret the execution plan. The plan generated in our example may be interpreted as follows:
- Since no indexes have yet been created for any of our application tables, full table scan operations are performed on each database object as the access method.
- The EMPLOYEE and DEPENDENT tables are scanned first, with a hash semi-join combining these tables.
- The WORKS_ON table is accessed next and is included in the intermediate result table by means of a standard hash join.
- The last table scanned is PROJECT, which is also incorporated into the result table using a standard hash join.
- All selected rows are then included within the result table.

Q_PLAN
---------------------------------------------------
  SELECT STATEMENT
    HASH JOIN
      HASH JOIN
        HASH JOIN SEMI
          TABLE ACCESS FULL EMPLOYEE
          TABLE ACCESS FULL DEPENDENT
        TABLE ACCESS FULL WORKS_ON
      TABLE ACCESS FULL PROJECT

View Plan-6 Execute and explain the SQL statement again, followed by execution of the utlxpls.sql script. Manually delete the rows from the PLAN_TABLE. Carefully examine the execution plan now shown with its added details.

Answers
This method of viewing the execution plan is somewhat easier and also yields substantially more information.

SQL> SELECT LName, project.PName
       FROM employee
       INNER JOIN works_on ON works_on.essn = employee.ssn
       INNER JOIN project ON project.pnumber = works_on.pno
      WHERE employee.Salary > 30000
        AND EXISTS (SELECT * FROM dependent WHERE essn = employee.ssn);

LNAME      PNAME
---------- ---------------
Wong       ProductY
Wong       ProductZ
Wong       Computerization
Wallace    Reorganization
Wong       Reorganization
Wallace    Newbenefits

SQL> EXPLAIN PLAN SET statement_id = 'TEST' FOR
       SELECT LName, project.PName
       FROM employee
       INNER JOIN works_on ON works_on.essn = employee.ssn
       INNER JOIN project ON project.pnumber = works_on.pno
       WHERE employee.Salary > 30000
       AND EXISTS (SELECT * FROM dependent WHERE essn = employee.ssn);
Explained.

SQL> @ C:\app\Administrator\product\11.2.0\dbhome_1\RDBMS\ADMIN\utlxpls.sql

PLAN_TABLE_OUTPUT
-----------------------------------------------------------------------------
Plan hash value: 1272493515

----------------------------------------------------------------------------------
| Id  | Operation             | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |           |     6 |   354 |    14  (15)| 00:00:01 |
|*  1 |  HASH JOIN            |           |     6 |   354 |    14  (15)| 00:00:01 |
|*  2 |   HASH JOIN           |           |     6 |   264 |    10  (10)| 00:00:01 |
|*  3 |    HASH JOIN SEMI     |           |     3 |    93 |     7  (15)| 00:00:01 |
|*  4 |     TABLE ACCESS FULL | EMPLOYEE  |     7 |   147 |     3   (0)| 00:00:01 |
|   5 |     TABLE ACCESS FULL | DEPENDENT |     7 |    70 |     3   (0)| 00:00:01 |
|   6 |    TABLE ACCESS FULL  | WORKS_ON  |    16 |   208 |     3   (0)| 00:00:01 |
|   7 |   TABLE ACCESS FULL   | PROJECT   |     6 |    90 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("PROJECT"."PNUMBER"="WORKS_ON"."PNO")
   2 - access("WORKS_ON"."ESSN"="EMPLOYEE"."SSN")
   3 - access("ESSN"="EMPLOYEE"."SSN")
   4 - filter("EMPLOYEE"."SALARY">30000)

22 rows selected.

View Plan-7 Use the DBMS_XPLAN() package to display the most recent SQL statement execution plan with the greatest amount of detail available. As you review the added detail provided, notice the following items:
- How the query is decomposed into individual query blocks. Potentially these could operate in parallel, thus improving performance.
- Predicates used as input to various Optimizer operations.
- Detailed column projection information, both for columns which are ultimately included in the result table and for columns needed only within the lower-level operations of the SQL statement.

Answers
Of all the methods available for examining an execution plan without executing the statement, this method is preferred when one is using the command-line SQL interface. It is the easiest and provides the most detailed explanation of the execution plan.

SQL> SELECT *
       FROM TABLE(dbms_xplan.display(table_name   => NULL,
                                     statement_id => NULL,
                                     format       => 'ALL'));

PLAN_TABLE_OUTPUT
------------------------------------------------------------------------------------
Plan hash value: 1272493515

----------------------------------------------------------------------------------
| Id  | Operation             | Name      | Rows  | Bytes | Cost (%CPU)| Time     |
----------------------------------------------------------------------------------
|   0 | SELECT STATEMENT      |           |     6 |   354 |    14  (15)| 00:00:01 |
|*  1 |  HASH JOIN            |           |     6 |   354 |    14  (15)| 00:00:01 |
|*  2 |   HASH JOIN           |           |     6 |   264 |    10  (10)| 00:00:01 |
|*  3 |    HASH JOIN SEMI     |           |     3 |    93 |     7  (15)| 00:00:01 |
|*  4 |     TABLE ACCESS FULL | EMPLOYEE  |     7 |   147 |     3   (0)| 00:00:01 |
|   5 |     TABLE ACCESS FULL | DEPENDENT |     7 |    70 |     3   (0)| 00:00:01 |
|   6 |    TABLE ACCESS FULL  | WORKS_ON  |    16 |   208 |     3   (0)| 00:00:01 |
|   7 |   TABLE ACCESS FULL   | PROJECT   |     6 |    90 |     3   (0)| 00:00:01 |
----------------------------------------------------------------------------------

Query Block Name / Object Alias (identified by operation id):
-------------------------------------------------------------
   1 - SEL$2C0D7645
   4 - SEL$2C0D7645 / EMPLOYEE@SEL$1
   5 - SEL$2C0D7645 / DEPENDENT@SEL$4
   6 - SEL$2C0D7645 / WORKS_ON@SEL$1
   7 - SEL$2C0D7645 / PROJECT@SEL$2

Predicate Information (identified by operation id):
---------------------------------------------------
   1 - access("PROJECT"."PNUMBER"="WORKS_ON"."PNO")
   2 - access("WORKS_ON"."ESSN"="EMPLOYEE"."SSN")
   3 - access("ESSN"="EMPLOYEE"."SSN")
   4 - filter("EMPLOYEE"."SALARY">30000)

Column Projection Information (identified by operation id):
-----------------------------------------------------------
   1 - "EMPLOYEE"."LNAME"[VARCHAR2,10], "PROJECT"."PNAME"[VARCHAR2,15]
   2 - "EMPLOYEE"."LNAME"[VARCHAR2,10], "WORKS_ON"."PNO"[NUMBER,22]
   3 - "EMPLOYEE"."SSN"[CHARACTER,9], "EMPLOYEE"."LNAME"[VARCHAR2,10]
   4 - "EMPLOYEE"."LNAME"[VARCHAR2,10], "EMPLOYEE"."SSN"[CHARACTER,9]
   5 - "ESSN"[CHARACTER,9]
   6 - "WORKS_ON"."ESSN"[CHARACTER,9], "WORKS_ON"."PNO"[NUMBER,22]
   7 - "PROJECT"."PNAME"[VARCHAR2,15], "PROJECT"."PNUMBER"[NUMBER,22]
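As a related technique, DBMS_XPLAN can also display the plan actually used at runtime, rather than the EXPLAIN PLAN estimate, by reading the cursor from the SQL cache. A minimal sketch:

```sql
-- Sketch: show the runtime plan of the last statement executed in this session.
-- SERVEROUTPUT must be off; otherwise the "last" cursor is the DBMS_OUTPUT call.
SET SERVEROUTPUT OFF
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY_CURSOR(NULL, NULL, 'ALL'));
```

Passing NULL for the first two arguments selects the most recent cursor of the current session; a specific SQL_ID and child cursor number may be supplied instead.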

View Plan-8 Using the EM interface, locate the user SQL session. Examine the full set of details available for the session. In particular, take note of the execution plan for the SQL statement you have been executing.

To ensure that the application query statement is cached and available, you may need to re-issue the statement one or more times from the user session before interrogating the session with EM.

You will need to carefully review the lecture notes for reminders on how to locate session and individual cursor information with the EM interface. Once you have perused all the information discussed in the lecture notes for these objects, then consult the exercise solution, as there are additional steps we will encourage you to attempt.

Bear in mind that there may be some differences in your solution depending upon the exact release of the database which you are using. Also, feel free to explore the EM interface and navigate through different paths to examine session and SQL execution information as it is presented on your workshop database.

Answers
Issue this statement from the user session. If you are not able to complete this exercise as outlined below, then try to re-issue this statement a few additional times.

SQL> SELECT LName, project.PName
       FROM employee
       INNER JOIN works_on ON works_on.essn = employee.ssn
       INNER JOIN project ON project.pnumber = works_on.pno
      WHERE employee.Salary > 30000
        AND EXISTS (SELECT * FROM dependent WHERE essn = employee.ssn);

LNAME      PNAME
---------- ---------------
Wong       ProductY
Wong       ProductZ
Wong       Computerization
Wallace    Reorganization
Wong       Reorganization
Wallace    Newbenefits

Next, using the EM interface, perform these steps to obtain initial information about the user database session and the cursor containing the above SQL statement:
1. From the database home page, click on the Active Sessions CPU link.
2. From the Top Activity page, locate your user database session and click on the Session ID link. If you cannot locate your session here, try the Search Sessions link.
3. Once you have located your database session, from the Session Details page examine the information shown from the General, Activity, Blocking Tree and Wait Event History links.

Next, in particular notice the following information from the Statistics link. It provides useful insights into the internal mechanisms related to SQL statement execution:
- As you repeatedly re-execute the application SQL statement from the user session, click on the Refresh button within the EM Session Details screen and notice how, after a brief delay, the session statistics are updated.
- Each time you re-execute the SQL statement, the "db block gets" statistic remains static while "consistent gets from cache" increases. This indicates that the data has been cached within the database block buffers. Notice how "table scan rows gotten" continues to increase beyond the number of overall rows included in the result table.
- While the "execute count" increases each time, the "parse count (hard)" does not, indicating that the SQL statement itself has been cached within the SQL cache.
You will also note with each new execution that "physical reads" does not increase either. You can learn quite a bit about these detailed statistics by repeating this process several times and probing into the inner workings of SQL statement execution once the statement has been parsed, prepared and cached for reuse, and once the data blocks have been fetched and cached.

Next, we will drill down from the session details into the actual SQL statement cursor. From the Session Details screen, click on the Open Cursors link and try to locate the SQL statement which is the subject of this analysis. Click on the Statement ID link for the SQL statement you have been executing and thereby navigate to the SQL Details screen.

Notice the General, Execution Statistics, Activity By Waits, Activity By Time and Shared Cursors Statistics specific to this SQL statement. All of this information should help solidify the internal architectural and conceptual information discussed during the lecture section.

Notice the information available from the Plan link when requesting a Table view of the execution plan. As you might expect, the cost of the operations increases as we proceed through the steps of the execution plan, while the number of rows decreases. The overall cost of each step is calculated as part of the Optimizer effort to compute the cost of the entire plan and thereby select the plan which has the least cost.

There is yet another mini-exercise that you could do at this point to probe the execution environment of a SQL statement a bit further. First, create a new SQL user session connecting to the same schema containing the application database, but do so with a new, independent session while retaining the original session.

SQL> CONNECT student1/student1
Connected.

From this new user session, execute the script SQLTuningProjectAdd1000.sql. This will add 1000 redundant rows to the PROJECT table. While this denormalizes the data, we do not at present have any constraints defined which would prevent it, and it will serve a useful purpose in monitoring our original query.

SQL> @ SQLTuningProjectAdd1000.sql
PL/SQL procedure successfully completed.

Now, from the original user SQL session from which you have been issuing the query, re-execute the same SQL statement once more. You should see 1000 additional rows now included in the result table output.

SQL> SELECT LName, project.PName
       FROM employee
       INNER JOIN works_on ON works_on.essn = employee.ssn
       INNER JOIN project ON project.pnumber = works_on.pno
      WHERE employee.Salary > 30000
        AND EXISTS (SELECT * FROM dependent WHERE essn = employee.ssn);

LNAME      PNAME
---------- ---------------
Wong       ProductY
Wong       ProductZ
Wong       Computerization
Wallace    Reorganization
Wong       Reorganization
Wallace    Newbenefits
Wong       project 50
Wong       project 51
Wong       project 52
Wong       project 53
...

1008 rows selected.

Next, navigate back to the Statistics page for the SQL cursor and refresh the statistics. You will notice that the Rows metric within Execution Statistics shows these additional rows being processed. However, the P