CHAPTER 11: OPTIMIZING FOR SQL SERVER

Objectives

The objectives are:

• Distinguish between the two database options available in Microsoft Dynamics® NAV 2009.
• Comprehend the advantages of both the Classic Database Server and SQL Server for Microsoft Dynamics NAV 2009.
• Work with and store tables and indexes.
• Use collation options and descriptions.
• Understand the SQL Server Data Replication process of distributing data.
• Know the backup options available and best practices for backup.
• Be introduced to the SQL Server Query Optimizer.
• Understand the areas within Microsoft Dynamics NAV that are to be optimized.
• Understand how the Microsoft Dynamics NAV NDBCS driver allows the Microsoft Dynamics NAV clients to communicate with SQL Server.
• Understand the value of optimizing cursors to maximize performance.
• Understand the performance impact of locking, blocking, and deadlocks.
• Understand how SIFT data is stored in SQL Server.

Introduction

There are two server options to choose from when using Microsoft Dynamics NAV: the “classic” or “native” Database Server, and Microsoft SQL Server. Each has advantages and disadvantages. This chapter covers some of the differences between the two, and discusses Microsoft Dynamics NAV with Microsoft SQL Server in more detail. The Microsoft Dynamics NAV with Microsoft SQL Server option is required for installations that use the Microsoft Dynamics NAV RoleTailored client and/or the three-tier operating mode for Web services.


Classic Database Server for Microsoft Dynamics NAV

The Classic Database Server can be characterized as an Indexed Sequential Access Method (ISAM) server. The data is stored in B+ tree structures, with the primary key used to store the data physically on a disk and the secondary keys used to find a range of records and point to the data location based on the primary key. The Classic Database Server does not provide advanced data retrieval mechanisms: it is necessary to specify which index is to be used, or the data is retrieved by scanning the primary key. There is no read-ahead mechanism (apart from the backup system). Records are sent to the client one by one, while commands sent to the server are batched to reduce network traffic. Only a very limited number of commands are executed at the server level (MODIFYALL, for example), so virtually all data manipulation is done at the client level.

The Classic Database Server supports explicit and implicit record locking. Implicit locking takes place when a modification of a record is performed on a table, at which time the Microsoft Dynamics NAV Server issues a table-lock, which it holds until the transaction is complete. This sometimes compromises concurrency.

However, there are two important and advantageous features built into the Classic Database Server:

• SumIndexField Technology (SIFT)
• Data versioning

SIFT is designed to improve performance when carrying out activities like calculating customer balances. In traditional database systems this operation involves a series of database calls and calculations. SIFT makes calculating sums for numeric table columns extremely fast, even in tables that contain thousands of records, because the SIFT data is stored in the indexes. Therefore, the read operation is minimized.

Data versioning is designed to ensure that read operations can be performed in a consistent manner. For example, when a Balance Sheet is printed, it will balance without the need to issue locks on the G/L Entry table. Conceptually this works as though the server creates a snapshot of the data. During a data update, the modified data is kept private until a commit occurs, and then the new data is made public, with a new version number. Data versioning allows optimistic concurrency, which can improve performance.


SQL Server for Microsoft Dynamics NAV

SQL Server is a comprehensive database platform providing enterprise-class data management with integrated business intelligence (BI) tools. SQL Server can be characterized as a set-based engine. This means that SQL Server is very efficient when retrieving a set of records from a table, but less so when records are accessed one at a time. Access to SQL Server from Microsoft Dynamics NAV is accomplished with the Microsoft Dynamics NDBCS driver, which is discussed later in this chapter.

When SQL Server receives a query (in the form of a T-SQL statement), it uses the SQL Query Optimizer to decide how to execute it; the result of that decision is known as the execution plan. For example, the Query Optimizer decides which index to use, whether to use parallel execution, and so forth. The Query Optimizer assumes that the client generates queries according to its own logic, and that these queries are not optimized for SQL Server. The primary criterion that the Query Optimizer uses to decide which execution plan to use is the performance cost of executing the query.

SQL Server stores data in B+ tree structures. One index is used to store the data physically on a disk, and other indexes are used to find a range and point to the data in the main index. This main index is called the Clustered Index. On SQL Server, any index can be defined as the main (clustered) index, but on the Classic Database Server this is always the primary key of the table.

Unlike the Classic Database Server, SQL Server can do server-side processing, but – for the most part – Microsoft Dynamics NAV does not use this ability. Microsoft Dynamics NAV uses the same strategy for database access as it does with the Classic Database Server, so a limited number of commands are executed on a server-side basis (such as DELETEALL).

Both Microsoft Dynamics NAV clients retrieve data record by record. Because SQL Server is set-based, it does not provide a fast way to do this retrieval. SQL Server uses a mechanism called cursors for record-level access. Compared to retrieving sets of records, however, cursors are very expensive. How to reduce this overhead where possible is discussed later in this chapter.

SIFT was originally implemented on SQL Server by using extra summary tables called SIFT tables, which were maintained through table triggers directly in the table definitions on SQL Server. When an update was performed on a table containing SIFT indexes, a series of additional updates was necessary to update the associated SIFT tables. This imposed an extra performance penalty – one that grew as the number of SIFT indexes on a table increased. In Microsoft Dynamics NAV Version 5.0 Service Pack 1, Microsoft replaced SIFT tables with V-SIFT, which are indexed views. This chapter discusses both mechanisms, because Microsoft Dynamics NAV developers are likely to work with older versions, where they may encounter performance issues related to SIFT tables. It is therefore important to know how SIFT tables worked prior to version 5.0 SP1 and how to troubleshoot performance issues related to them.


The data versioning that is built into the Classic Database Server is implemented as follows on SQL Server: Microsoft Dynamics NAV Classic adds a timestamp column to every table and reads data using the timestamp and the READUNCOMMITTED isolation level. When a record is modified, the system reads the record again, this time with the UPDLOCK hint, and modifies the data with a filter on the original timestamp. This prevents data that is read as “dirty” from being stored incorrectly.

The downside is that this method allows users to view (and potentially modify) data processed by other users before their transaction is committed. To prevent this, a lock should be issued prior to the read. This ensures that the data is read with the READCOMMITTED isolation level and also guarantees that no other user can modify the set of data that was just read (and locked). The downside of locking is reduced concurrency, as some users may be blocked for the duration of the lock.

On the plus side, SQL Server provides much more sophisticated locking options than does the Classic Database Server. This includes record-level locking, which vastly improves the efficiency of parallel operations performed on the same set of tables.

Representation of Microsoft Dynamics NAV Tables and Indexes in SQL Server

By default, Microsoft Dynamics NAV keeps each company's data separate in its database. On the Classic Database Server, the company name is stored as an invisible column in each physical table. On SQL Server, each company in the database has its own copy of each table.


Each table in the Classic Database Server has a corresponding table in the SQL Server for every company in the database, with a name in the following format:

Table name format: <Company Name>$<Table Name>
Example: CRONUS International Ltd_$G_L Entry

It is possible, however, to share data across companies, by setting the DataPerCompany table property to FALSE. In Microsoft Dynamics NAV terms, this is called data common to all companies. When the DataPerCompany property is turned off, there is just one table in SQL Server that is accessed from every company in the database. The naming convention for these common tables on SQL Server is the same, but without the "<Company Name>$" portion.

Microsoft Dynamics NAV Classic uses naming conventions complying with SQL Server, such as not using special characters. Some special characters are allowed in the C/SIDE table designer, and they are translated to comply with the character set that is used on SQL Server.
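The per-company copies of a table can be listed directly from the SQL Server catalog views. The following is a minimal sketch, assuming SQL Server 2005 or later and the CRONUS demonstration data; the [_] pattern makes LIKE treat the underscore as a literal character:

-- List every company-specific copy of the G/L Entry table
SELECT name
FROM sys.tables
WHERE name LIKE '%$G[_]L Entry'
ORDER BY name
GO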


The table will have a number of indexes, representing the keys that are designed and enabled in the C/SIDE table designer. The indexes have generic names in the following format:

Index name format: $<Index Number>
Example: $1, $2, and so on

However, the primary key index uses the following name format:

Primary key name format: <Company Name>$<Table Name>$0
Example: CRONUS International Ltd_$G_L Entry$0

Microsoft Dynamics NAV clusters the primary key index by default. Also, by default, Microsoft Dynamics NAV adds the remainder of the primary key to every secondary index, making the indexes unique. This conforms to the “best practices” defined for SQL Server. Developers can make additional changes to the way indexes are defined on SQL Server using the MaintainSQLIndex, SQLIndex, and Cluster properties on the keys defined in the C/SIDE table designer.

To obtain a list of indexes and their definition in SQL Server, run the sp_helpindex stored procedure in a query window, as follows:

sp_helpindex "CRONUS International Ltd_$G_L Entry"
GO

The query outputs the index name, whether the index is clustered or unique, whether there is a primary key constraint, and the index keys defined in each index.

There are some differences between the Classic Database Server and SQL Server terminology, as the following list describes:

SQL Server Terminology    Classic Database Server Terminology
Primary key constraint    Primary key
Clustered index           No equivalent
Non-clustered index       Secondary key
Index key                 Field in a key definition

SQL Server allows non-unique indexes; however, because SQL Server makes such indexes unique internally by adding extra index keys, it is a good practice to make them unique from the start. Additionally, SQL Server allows a table to have no clustered index at all. Such a table is called a "heap" and can be used for archiving, because data is stored in the order it arrives. However, heaps are not recommended for tables that are read frequently, because reading from an unstructured source is too slow.
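If needed, any tables stored as heaps can be located through the catalog views. A minimal sketch, assuming SQL Server 2005 or later (an index_id of 0 denotes a heap):

-- List user tables stored as heaps (no clustered index)
SELECT OBJECT_NAME(object_id) AS table_name
FROM sys.indexes
WHERE index_id = 0
  AND OBJECTPROPERTY(object_id, 'IsUserTable') = 1
ORDER BY table_name
GO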


Collation Options

SQL Server supports several collations. A collation encodes the rules governing the proper use of characters for either a language, such as Macedonian or Polish, or an alphabet, such as Latin1_General (the Latin alphabet used by Western European languages). Each SQL Server collation specifies three properties:

• The sort order to use for Unicode data types (nchar, nvarchar, and ntext). A sort order defines the sequence in which characters are sorted, and the way characters are evaluated in comparison operations.

• The sort order to use for non-Unicode character data types (char, varchar, and text).

• The code page used to store non-Unicode character data.

SQL Server collations can be specified at any level. Each instance of SQL Server has a default collation defined, which is the default collation for all objects in that instance of SQL Server, unless otherwise specified. Each database can have its own collation, which can be different than the default collation. Separate collations can even be specified for each column, variable or parameter. In Microsoft Dynamics NAV Classic, however, the collation can only be specified at the database level.
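The effective collation at the server and database level can be checked with a short query. A brief sketch; the database name below is only an example:

-- Default collation of the SQL Server instance
SELECT SERVERPROPERTY('Collation') AS server_collation
-- Collation of a specific database (example name)
SELECT DATABASEPROPERTYEX('Demo Database NAV (6-0)', 'Collation') AS database_collation
GO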

It is a good practice to choose a collation that is as generic as possible while still supporting the language most common to the users. If all users speak the same language, set up SQL Server with a collation supporting that language. For example, if all users speak French, define a French collation on SQL Server. If the users speak multiple languages, define a collation that best supports the requirements of the various languages. For example, if the users generally speak western European languages, Latin1_General is a good collation to use.

Collation settings are defined in Microsoft Dynamics NAV Classic when a database is created, and can be changed afterward, with some limitations. The Windows Locale option can be used to match collation settings in instances of SQL Server 2000 and above. If different versions of SQL Server are used, collations should match settings that are compatible with the sort orders in earlier versions of SQL Server.

Windows Locale

The default settings for Windows Locale (that is Windows collation) should only be changed if users’ installation of SQL Server must match the collation settings used by another instance of SQL Server, or must match the Windows Locale of another computer.


Collation Description

Select the name of a specific Windows collation from the drop-down list. Note that when the Validate Code Page field is checked, only valid subsets of collations are available in the list, based on the Windows Locale. For example, the following are subsets for the Latin1_General (1252) locale:

• Afrikaans, Basque, Catalan, Dutch, English, Faeroese, German, Indonesian, Italian, Portuguese

• Danish, Norwegian

Sort Order

Select the Sort Order options to use with the selected collation: Binary, Case-sensitive, and Accent-sensitive. Binary is the fastest sorting order and is case-sensitive. If Binary is selected, the Case-sensitive and Accent-sensitive options are not available.

SQL Collations

The SQL Collations option is used for compatibility with earlier versions of Microsoft SQL Server. Select this option to match settings compatible with SQL Server version 7.0, SQL Server version 6.5, or earlier.

SQL Server Data Replication

Data replication is the process of distributing data from a source database to one or more destination databases. SQL Server provides data replication in a number of ways, with precise control over what data is replicated, and also when and how replication occurs. Some of the main reasons to use replication include:

• Load balancing: Replication enables users to disseminate data to a number of servers and then distribute the query load among those servers.

• Offline processing: Users can manipulate data from their database on a machine that is not always connected to the network.

• Redundancy: Replication enables developers to build a failover database server that is ready to pick up the processing load at a moment’s notice.

In any replication scenario, there are two main components:

• Publishers are database servers that make data available to other servers. Any given replication scheme has only one publisher.

• Subscribers are database servers that are the destination servers for replication. There can be one or more subscribers in a replication scenario.


Microsoft SQL Server supports the following types of database replication:

• Snapshot replication. Each snapshot replaces the schema and the data of the entire database. All subscribers have identical copies of the database, but snapshot replication generates high levels of network traffic. Another disadvantage is that it only runs periodically, so subscribers do not have current data.

• Transactional replication. Selected transactions are marked for replication, and sent to subscribers separately. A benefit of transactional replication is that individual transactions can be replicated rather than the entire database. Transactional replication can occur continuously or periodically.

• Merge replication. In this scenario, subscribers are allowed to make changes to the data, such as in offline copies of a database. Merge replication does not use distributed transactions, and therefore, transactional consistency cannot be guaranteed.

Data Replication for Microsoft Dynamics NAV

Although SQL Server provides very advanced options when it comes to replication, not all of those options are suitable for Microsoft Dynamics NAV databases. In general, SQL Server replication should only be used when subscribers are used for reporting purposes or for failovers, so snapshot or transactional replication could be used for Microsoft Dynamics NAV databases. Unless it is limited to staging tables for integration purposes, merge replication is not recommended for Microsoft Dynamics NAV databases, because transactional integrity cannot be guaranteed.

Another issue that affects replicated databases is that under some circumstances, the replication mechanism changes the TimeStamp column in the Microsoft Dynamics NAV tables to GUIDs on the replicated tables, making the database unusable by Microsoft Dynamics NAV. Additionally, developers must not use replication on SIFT tables, as these are maintained by table triggers. Therefore, developers should consider log shipping before committing to a replication strategy for resilience reasons. If developers do use replication, they should ensure that it is properly tested for functionality and performance impact. There is more overhead if developers run replication on their server, so the subsystems must be sized accordingly.

To summarize, replication should be used sparingly with Microsoft Dynamics NAV databases, and only if there is no other mechanism to distribute and/or share data, such as log shipping, using views, using DTS/SSIS (SQL Server Integration Services), dataports, and so on.


Backup Options

There are various backup options with Microsoft Dynamics NAV on SQL Server. While the Microsoft Dynamics NAV client-side backup option still exists (manually running a backup through Tools, Backup), there are more elegant tools in SQL Server. The Microsoft Dynamics NAV backup and restore functionality remains useful for migration tasks, such as moving data from a server with one collation to a server with a non-compatible collation, or spreading the data across multiple database files while restoring the Microsoft Dynamics NAV backup. Note that restoring a Microsoft Dynamics NAV backup is fully logged. Because the restore is treated as one single transaction, it requires a massive amount of space in the log file, at least two to three times the size of the combined data files.

SQL Server itself offers a number of options for disaster recovery. To start with, users can choose which recovery model the database will use: Simple or Full. Bulk-Logged is a third option, but it is rarely used and is similar to Full, except that bulk operations are only minimally logged. After choosing the recovery model, developers plan their backup strategy using full, differential, and log backups, or a combination of those.

Simple Recovery Model

The simple recovery model uses the log file only to record open transactions. After a transaction is committed to the database, the log space is reclaimed. The benefit is that the log file can stay quite small, which keeps logging simple and fast. The disadvantage is that in the case of a disaster, users can only recover transactions up to the end of the previous backup; all transactions after that point have to be redone. When using Simple Recovery, the backup interval should be long enough to keep the backup overhead from affecting production work, yet short enough to prevent the loss of significant amounts of data.

Full Recovery Model

The Full Recovery and Bulk-Logged Recovery models provide the greatest protection for data. These models rely on the transaction log to provide full recoverability and to prevent work loss in the broadest range of failure scenarios. The disadvantage is the size of the log file and amount of data logged; the advantage is that if users have a disaster, they can recover to any point of time, provided that the log itself is not damaged.
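The recovery model is a database-level setting and can be inspected or changed with T-SQL. A brief sketch; the database name is a placeholder:

-- Check the current recovery model
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'NAVDB'
GO
-- Switch the database to the full recovery model
ALTER DATABASE [NAVDB] SET RECOVERY FULL
GO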

Full Backup and Restore

A full database backup backs up the entire database, including the transaction log. When users restore this backup, they have a fully operational database from the time of the backup. This type of backup requires the most space.


Differential Backup and Restore

In general, a differential backup includes the data that has been modified since the most recent full database backup. Differential backups are used primarily in heavily used systems where a failed database must be brought back online quickly. Differential backups are smaller than full database backups; therefore, they have less of an effect on the system while they run.

Transaction Log Backup and Restore

A transaction log backup includes all transactions from the transaction log since the most recent transaction log backup. A log file backup by itself cannot be used to restore a database. A log file is used after a database restore to recover the database to the point of the original failure, or to any specific point in time. Transaction log backups are not available when using the Simple recovery model.

Reducing Recovery Time

Using full database backup, differential database backup, and transaction log backup together can reduce the amount of time it takes to restore a database back to any point in time after the database backup is created.

In a typical backup procedure, full database backups are created at longer intervals, differential database backups at medium intervals, and transaction log backups at shorter intervals. An example is to create full database backups weekly, differential database backups one or more times per day, and transaction log backups every ten minutes.
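Each of these backup types maps to a simple T-SQL statement, which is typically scheduled as a SQL Server Agent job. A minimal sketch, with placeholder database and file names:

-- Weekly full backup
BACKUP DATABASE [NAVDB]
TO DISK = 'E:\Backup\NAVDB_Full.bak' WITH INIT
-- Daily differential backup
BACKUP DATABASE [NAVDB]
TO DISK = 'E:\Backup\NAVDB_Diff.bak' WITH DIFFERENTIAL, INIT
-- Transaction log backup every ten minutes (not available in the Simple recovery model)
BACKUP LOG [NAVDB]
TO DISK = 'E:\Backup\NAVDB_Log.trn'
GO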

If a database needs to be recovered to the point of failure, such as due to a system failure, perform the following steps:

1. Back up the currently active transaction log. This operation fails if the transaction log is damaged.

2. Restore the most recent full database backup.
3. Restore the most recent differential backup that was created after the most recent full database backup.
4. Apply all transaction log backups, in sequence, created after the most recent differential backup, finishing with the transaction log backup created in Step 1.

Note that if the active transaction log cannot be backed up, it is possible to restore the database only to the point when the last transaction log backup is created. Changes made to the database since the last transaction log backup are lost and must be redone manually.
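A hedged T-SQL sketch of the restore sequence described in the preceding steps, again using placeholder names; WITH NORECOVERY keeps the database in the restoring state until the final log backup is applied:

-- 1. Back up the tail of the log (fails if the log itself is damaged)
BACKUP LOG [NAVDB] TO DISK = 'E:\Backup\NAVDB_Tail.trn' WITH NO_TRUNCATE
-- 2. Restore the most recent full backup
RESTORE DATABASE [NAVDB] FROM DISK = 'E:\Backup\NAVDB_Full.bak' WITH NORECOVERY, REPLACE
-- 3. Restore the most recent differential backup
RESTORE DATABASE [NAVDB] FROM DISK = 'E:\Backup\NAVDB_Diff.bak' WITH NORECOVERY
-- 4. Apply the log backups in sequence, finishing with the tail backup
RESTORE LOG [NAVDB] FROM DISK = 'E:\Backup\NAVDB_Log.trn' WITH NORECOVERY
RESTORE LOG [NAVDB] FROM DISK = 'E:\Backup\NAVDB_Tail.trn' WITH RECOVERY
GO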


Consider Customer Requirements

Which recovery model to select and how to set up the backup strategy is determined by the customer's preferences. It depends on the customer's attitude toward data loss, as well as the available system resources. If the customer wants to minimize the loss of recent data, it is necessary to select the full recovery model. In addition, it might be necessary to set up a database backup plan that includes full backups as well as frequent differential and transaction log backups. This comes at a performance price and also requires additional storage space.

If the customer is willing to accept the loss of an entire day's worth of transactions, as is the case in many Microsoft Dynamics NAV Classic implementations, it might be better to select the simple recovery model and schedule only less frequent full backups. However, because SQL Server makes it easy to set up a more comprehensive backup strategy, most customers will opt for the more comprehensive approach.

Practice Disaster Recovery

Assume that the customer has set up a backup strategy that includes full backups, differential backups, and transaction log backups. Unless the staff knows how to restore a database from those backups, and the infrastructure is ready to replace the production database, the backups are meaningless.

Whether the customer employs internal Information Technology (IT) staff, or has external consultants on call, it is important that both the staff and infrastructure are ready to react to disaster. Plan for a simulated system breakdown, and use the backup files to restore the production database, including client connectivity. These drills will ensure that the staff is well prepared to restore the production database when disasters happen for real.

SQL Server Query Optimizer

The Query Optimizer is the component at the heart of SQL Server that decides how to execute a query. SQL Server collects statistics about individual columns (single-column statistics) or sets of columns (multi-column statistics). Statistics are used by the Query Optimizer to estimate the selectivity of expressions, and thus the size of intermediate and final query results. Good statistics allow the optimizer to accurately assess the cost of different query plans and choose a high-quality plan. All information about a single statistics object is stored in several columns of a single row in the sysindexes table, and in a statistics binary large object (statblob) kept in an internal-only table.


SQL Server maintains some information at the table level. This information is not part of a statistics object, but SQL Server uses it in some cases during query cost estimation. The following data is stored at the table level:

• Number of rows in the table or index (rows column in sys.sysindexes)

• Number of pages occupied by the table or index (dpages column in sys.sysindexes)

SQL Server collects the following statistics about table columns and stores them in a statistics object (statblob):

• Time the statistics were collected
• Number of rows used to produce the histogram and density information (described hereafter)
• Average key length
• Single-column histogram, including the number of steps

A histogram is a set of up to 200 values of a given column. All or a sample of the values in a given column are sorted; the ordered sequence is divided into up to 199 intervals so that the most statistically significant information is captured. In general, these intervals are of nonequal size.

Users can view the statistical information when they run the DBCC SHOW_STATISTICS command. For example, they can run it for index $6 in the Cust. Ledger Entry table, as follows:

DBCC SHOW_STATISTICS ("CRONUS International Ltd_$Cust_ Ledger Entry","$6")
GO

The result set has three sections, similar to the following:

Updated              Rows    Rows Sampled   Steps   Density   Average key length
Jun 14 2009 2:51PM   26046   26046          3       0.0       26.237772

All density    Average Length   Columns
0.33333334     4.0              Document Type
1.0416667E-2   11.866083        Document Type, Customer No_
5.165289E-4    19.866083        Document Type, Customer No_, Posting Date


RANGE_HI_KEY   RANGE_ROWS   EQ_ROWS   DISTINCT_RANGE_ROWS   AVG_RANGE_ROWS
1              0.0          23689.0   0                     0.0
2              0.0          10.0      0                     0.0
3              0.0          2341.0    0                     0.0
4              0.0          6.0       0                     0.0

Imagine that a user is filtering on the "Document Type" column in the "Cust. Ledger Entry" table, for example looking for all "Credit Memo" type entries. The "Credit Memo" type entries have a value of "Document Type" equal to three. Microsoft Dynamics NAV issues a query similar to this:

SELECT * FROM "CRONUS International Ltd_$Cust_ Ledger Entry"
WHERE "Document Type" = 3
GO

The query optimizer analyzes the usefulness of every index in the table so that the query is executed at minimal cost, minimizing first the cost of data retrieval, followed by the cost of sorting, and so on. When analyzing this particular index (index $6) from the data retrieval aspect, the query optimizer makes the majority of its decisions based on the statistics in the following way.

"Document Type" is filtered, but there are only three distinct values in the index (refer to the "All density" column in the preceding table) indicating that 0.3334 of the table is within the filtered set. Additionally, since the histogram exists on the "Document Type," it will look up the information (RANGE_HI_KEY = 3) and see that there are about 2341 rows in the set, indicating that 0.0898 of the table is within the filtered set (2341 rows out of a total of 26046 rows).

Based on this, the query optimizer decides that there is no point in using this index for the operation, because it would need to load the index, scan the range, and then look up the data in the clustered index to return the results. Since there is no better way to read the data, it decides to do a Clustered Index Scan instead.

As a rule of thumb, if the selectivity is close to, or better than, one percent, the index is considered beneficial. Be careful with this simple rule, because operations such as SELECT TOP 1 (asking for the first record in a set) escalate the index benefit, and the index will probably be used.


Continuing this example, filtering on a more selective value makes the index worthwhile, such as:

SELECT * FROM "CRONUS International Ltd_$Cust_ Ledger Entry"
WHERE "Document Type" = 4
GO

The query optimizer knows, in this example, that only six out of 26046 records qualify (refer to the previous histogram, RANGE_HI_KEY = 4) and makes use of the index. It performs a seek on the non-clustered index, followed by a bookmark lookup into the clustered index, to get the relevant records.


If users further filter on the next index key, like this:

SELECT * FROM "CRONUS International Ltd_$Cust_ Ledger Entry"
WHERE "Document Type" = 3 AND "Customer No_" = '10000'
GO

The combined selectivity is used, and a plan is calculated, which may result in a decision to use this index.

However, if users do not filter on an index key, or use the <> (not equal) operator, or use the OR operator, there is no way for SQL Server to combine the subqueries, and this may result in bad behavior. For example:

SELECT * FROM "CRONUS International Ltd_$Cust_ Ledger Entry"
WHERE "Customer No_" = '10000'
AND (("Document Type" = 2) OR ("Document Type" = 4))
GO

In this example, the query optimizer decides to use the index and performs a non-clustered index seek, but it has to traverse the area where "Document Type" = 3. Because the set is read from beginning to end, the effect is similar to a table scan.

Similarly, if the user leaves one of the index keys unfiltered, all the subtree index entries have to be scanned. There is one simple rule with regard to performance: "Scan is bad. Seek is good." Developers need to avoid scans as much as they can, ensuring that indexes have good selectivity and that queries do not have scan-like behavior.
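Whether a statement results in a seek or a scan can be verified in a query window before any index change is made. A small sketch using the statements shown earlier:

-- Option 1: show the estimated execution plan as text instead of running the query
SET SHOWPLAN_TEXT ON
GO
SELECT * FROM "CRONUS International Ltd_$Cust_ Ledger Entry"
WHERE "Document Type" = 4
GO
SET SHOWPLAN_TEXT OFF
GO
-- Option 2: execute the query and report the logical reads per table
SET STATISTICS IO ON
GO
SELECT * FROM "CRONUS International Ltd_$Cust_ Ledger Entry"
WHERE "Document Type" = 4
GO
SET STATISTICS IO OFF
GO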


On the other side of the spectrum is an index that is overly complex and designed to fully match the entire query, for example with eight index keys. With only four index keys, SQL Server might have to scan a slightly larger set to return the few required records, but the fully matching composite index comes at the extreme cost of index maintenance, delaying every modification in the table because SQL Server has to update the index accordingly. In the majority of cases, transaction speed is what must be optimized. If users over-index the tables, they pay a price in terms of performance. It might be that a specific report is slower when fewer composite indexes are used, but it is worth it, since processing (such as posting inventory) will be quicker.

Additionally, if a relatively small set is to be ordered by a different column, it is not necessary for the index to fully support the sorting; SQL Server can sort small result sets quickly.

The above demonstrates that the way indexes are designed and used can severely impact SQL Server performance. Adhere to the following principles:

• Minimize the number of indexes for faster table updates.
• Design indexes with index keys of good selectivity.
• Put index keys with higher selectivity toward the beginning of an index.
• Put index keys that are more likely to be filtered toward the beginning of an index.
• If the filtered index keys point to approximately 50 records, there is no need to add extra index keys to support index selectivity or sorting; SQL Server returns the set sorted as desired.
• It is better to use a few more-composite indexes than a lot of simple indexes; in other words, it is better to "combine" index use rather than have many "specific" indexes.
• There is no point in indexing "empty" (unused) columns, since that just creates extra overhead for no benefit.
• Ensure that users filter on unique values in indexes; otherwise SQL Server performs similarly to a table scan.
• Do not make "over selective" index keys. If users index on DateTime fields, for example, they force the creation of a unique index leaf for each record in the table.
• Put date fields toward the end of the index, since they are not always filtered. If an index is always filtered on a unique value, it is a good index.

To determine whether an index is good, imagine a telephone book or list of personal details designed to find people by name and surname, or date of birth, or social security number. Compare to a phone book indexed by gender, for example. When it comes to designing indexes, choose those that support a high level of selectivity. Common sense applies.


Optimizing a Microsoft Dynamics NAV Application

There are a number of areas where users need to focus when optimizing Microsoft Dynamics NAV applications. These areas are, in order of importance (based on the processing costs):

• SIFT
• Indexes
• Cursors
• Locks
• Suboptimum code
• GUI

Optimizing SIFT Tables

SIFT tables are used in Microsoft Dynamics NAV version 5.0 and older to implement SIFT on SQL Server; they store aggregate values for the SumIndexFields of keys in the source tables. Starting with version 5.0 Service Pack 1, these SIFT tables are replaced by indexed views, and separate SIFT tables are no longer part of Microsoft Dynamics NAV on SQL Server. This discussion is included because Microsoft Dynamics NAV developers are likely to run into issues concerning SIFT tables in implementations of older versions of Microsoft Dynamics NAV.
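To illustrate the indexed-view approach conceptually, the following is a hand-written sketch of what a view summing Amount per Document Type and Document No. on the Sales Line table might look like. The view and index names here are invented for illustration; the actual definitions that Microsoft Dynamics NAV 5.0 SP1 and later generate differ:

-- Schema-bound view aggregating Amount; COUNT_BIG(*) is required in an indexed view
CREATE VIEW dbo."CRONUS International Ltd_$Sales Line$VSIFT$Sketch"
WITH SCHEMABINDING
AS
SELECT "Document Type", "Document No_",
       SUM("Amount") AS "SUM$Amount",
       COUNT_BIG(*) AS "Cnt"
FROM dbo."CRONUS International Ltd_$Sales Line"
GROUP BY "Document Type", "Document No_"
GO
-- The unique clustered index is what materializes the aggregated rows
CREATE UNIQUE CLUSTERED INDEX "VSIFTIDX"
ON dbo."CRONUS International Ltd_$Sales Line$VSIFT$Sketch" ("Document Type", "Document No_")
GO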

The overhead of the separate SIFT tables is massive, so activating them should be considered carefully. By default, Microsoft Dynamics NAV activates a SIFT table when users create a new index with SumIndexFields. Users should review all their existing SIFT indexes and decide whether they really need to keep them activated.

Users can de-activate the creation and maintenance of a SIFT table by using the MaintainSIFTIndex property in the Microsoft Dynamics NAV key designer. If they make the property false, and there is no other maintained SIFT index supporting the retrieval of the cumulative sum, Microsoft Dynamics NAV asks SQL Server to calculate the sum itself.

For example, if users have a Sales Line table and put Amount in the SumIndexFields for the primary key ("Document Type, Document No., Line No."), a new SIFT table "CRONUS International Ltd_$37$0" is created and maintained. When CALCSUMS is used to calculate a FlowField in Microsoft Dynamics NAV showing the sum of all Sales Lines for a specific Sales Header (Order ORD-980001), the resulting query looks like the following:

SELECT SUM(s29) FROM "CRONUS International Ltd_$37$0" WHERE "bucket" = 2 AND "f1" = 1 AND "f3" = 'ORD-980001'


If users disable the SIFT table by clearing the MaintainSIFTIndex checkbox, Microsoft Dynamics NAV still works, and the resulting query looks like the following:

SELECT SUM("Amount") FROM "CRONUS International Ltd_$Sales Line" WHERE "Document Type" = 1 AND "Document No_" = 'ORD-980001'

This query imposes a very light CPU load compared to the massive cost of maintaining the SIFT table.

SIFT tables are extremely beneficial when users need to sum up a larger number of records. With that in mind, users can check existing SIFT tables and see whether they need every level of detail. There is no need, for example, to store a cumulative sum over just a few records. Users can use the SIFTLevels property and disable specific levels by clearing the Maintain checkbox for a specific bucket, thus reducing the overall overhead of the SIFT table while still keeping it in place for summing larger numbers of records. Likewise, there is no need to keep cumulative sums on the top-level buckets if they are never used, such as a total of Quantity by "Location Code" in the Item Ledger Entry table, when users always filter on "Item No."

Optimizing Indexes

The second largest typical Microsoft Dynamics NAV overhead is the processing load to maintain indexes. The Microsoft Dynamics NAV Classic database is over-indexed, since customers require certain reports to be ordered in different ways, and the only way to sort is to create a key for each sequence. However, SQL Server can sort data directly and quite fast if the set is small, so there is no need to keep indexes for sorting purposes only. For example, in the Warehouse Activity Line table, there are a number of keys that begin with "Activity Type" and "No." fields, such as the following:

“Activity Type,No.,Sorting Sequence No.”
“Activity Type,No.,Shelf No.”
“Activity Type,No.,Action Type,Bin Code”
etc.

The issue here is that these indexes are not needed on SQL Server, because the Microsoft Dynamics NAV code always filters on "Activity Type" and "No." when using these keys. In SQL Server, the query optimizer looks at the filter and recognizes that the clustered index is "Activity Type,No_,Line No_". It also determines that the set is small, so there is no need to use a secondary index to retrieve the set and return it in that specific order; it uses only the clustered index for these operations.

If customers do not use the related functionality, for example if they never pick stock by "Sorting Sequence No.", there is no need to maintain the index.


Developers need to analyze the existing indexes with a focus on use and benefit compared to the processing overhead, and determine what action is needed. Choose between disabling the index entirely using the key property Enabled, the KeyGroups property, or the MaintainSQLIndex property. The structure of indexes that remain active can be changed using the SQLIndex property, and developers can also cluster the table by a different index.
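SQL Server 2005 and later can help with this analysis: the index usage statistics DMV shows how often each index has been used for reads versus how often it has had to be updated since the last service restart. A minimal sketch:

-- Compare read usage (seeks/scans/lookups) with write cost (updates) for each index
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.user_seeks, s.user_scans, s.user_lookups, s.user_updates
FROM sys.dm_db_index_usage_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id
WHERE s.database_id = DB_ID()
ORDER BY s.user_updates DESC
GO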

Enabled Property

The Enabled property turns a specific key on and off. If a key is not enabled and is referenced by C/AL code or a CALCSUMS function, a run-time error occurs.

KeyGroups Property

Make one or more keys a member of a predefined key group. This allows the key to be defined, but only enabled when it is going to be used.

Use the KeyGroups property to select the predefined key groups. Choose the KeyGroups option on the Database Information window (select File, click Database, click Information, and then click Tables). There are key groups already defined, such as Acc(Dim), Item(MFG), but more can be created and assigned to keys.

The purpose of key groups is to set up a group of special keys that are infrequently used (such as for a special report that is run once every year). Since adding many keys to a table eventually decreases performance, using key groups makes it possible to have the necessary keys defined, but only active when needed.

MaintainSQLIndex Property

This property determines whether an SQL Server index corresponding to the Microsoft Dynamics NAV key should be created (when set to Yes) or dropped (when set to No). A Microsoft Dynamics NAV key is created to sort data in a table by the required key fields. However, SQL Server can sort data without an index on the fields to be sorted. If an index exists, sorting by the fields matching the index is faster, but modifications to the table will be slower. The more indexes there are on a table, the slower the modifications become.

In situations where a key must be created to allow only occasional sorting (for example, when running infrequent reports) users can disable this property to prevent slow modifications to the table.


SQLIndex Property

This property allows users to define the fields that are used in the SQL index. The fields in the SQL index can be the following:

• Different than the fields defined in the key in Microsoft Dynamics NAV - there can be fewer fields or more fields.

• Arranged in a different order.

If the key in question is not the primary key and the SQLIndex property is used to define the index on SQL Server, the index that is created contains exactly the fields that users specify and is not necessarily a unique index. It will only be a unique index if it contains all the fields from the primary key.

If the SQL index is defined for the primary key, it must include all the fields defined in the Microsoft Dynamics NAV primary key. Extra fields can be added and these fields can be rearranged to suit individual needs.

Clustered Property

Use this property to determine which index is clustered. By default, the index corresponding to the Microsoft Dynamics NAV primary key is made clustered.

Implicit/Explicit Locking

There are further considerations to make when working with Microsoft Dynamics NAV on SQL Server. Microsoft Dynamics NAV is designed to read without locks and to lock only when needed, following optimistic concurrency recommendations. If records are going to be modified, that intent should be indicated to the driver (by using explicit locking), so that the data is read properly.

Implicit Locking

The following table demonstrates implicit locking. The C/AL pseudo-code on the left is mapped to the equivalent action on SQL Server:

Sample code               Result
TableX.FIND('-');         SELECT * FROM TableX WITH (READUNCOMMITTED)
                          (the retrieved record timestamp = TS1)
TableX.Field1 := Value;


TableX.MODIFY;            SELECT * FROM TableX WITH (UPDLOCK, REPEATABLEREAD)
                          (the retrieved record timestamp = TS2),
                          then performs the update:
                          UPDATE TableX SET Field1 = Value
                          WITH (REPEATABLEREAD) WHERE TimeStamp <= TS1

The reason for such a complex execution is that:

• The data is read with the READUNCOMMITTED isolation level, but because users are going to update, they need to ensure that they read committed data and issue an update lock on it to prevent other users from updating the same records.

• The data is originally read uncommitted; users need to lock the record and also ensure that they update the record with the original TimeStamp. If somebody else has changed the record in the meantime, the user receives an error: "Another user has modified the record since it was retrieved from the database…."

Explicit Locking

If users indicate to the database driver that their intention is to modify the record, by using explicit locking, they can eliminate the extra read entirely, as shown in the following pseudo-code:

Sample code               Result
TableX.LOCKTABLE;         Indicates the explicit lock to the driver
TableX.FIND('-');         SELECT * FROM TableX WITH (UPDLOCK)
                          (the retrieved record timestamp = TS1)
TableX.Field1 := Value;
TableX.MODIFY;            UPDATE TableX SET Field1 = Value WITH (REPEATABLEREAD)
                          (the retrieved record timestamp is guaranteed to be TS1)


Problems with NEXT

In some situations, the NEXT command causes the biggest performance problem in Microsoft Dynamics NAV, and users should pay particular attention to avoid these situations. The problem is that the database driver is using a cursor, and there is no way to change the isolation level in the middle of the data retrieval. Also, if users change the condition of the set, then the set has to be retrieved again.

There are a number of scenarios that cause problems with NEXT. Ideally, NEXT translates into a simple FETCH on the already open cursor, but if the set is no longer valid, a new SELECT statement has to be executed. This imposes a serious performance penalty on SQL Server and in some situations leads to very lengthy execution.

Performance problems occur if users perform a NEXT on a set, but they did one of the following:

• Changed the filter
• Changed the sorting
• Changed a key value
• Changed the isolation level
• Performed NEXT "in the middle of nowhere," on a record that they got through GET, for example

The following code examples demonstrate this problem:

Code                         Result
SETCURRENTKEY(FieldA);
SETRANGE(FieldA,Value);
FIND('-');                   Set or part of the set is retrieved, first record loaded
REPEAT
  FieldA := NewValue;        Record position is now outside the set
  MODIFY;                    Record is put outside the set
UNTIL NEXT = 0;              The system is asked to go to NEXT, but from a position outside the set


"Jumping" through data - NEXT "in the middle of nowhere"

Code                         Result
SETRANGE(FieldA,Value);
FIND('-');                   Set based on the filter on FieldA
REPEAT
  SETRANGE(FieldB,FieldB);   New set based on the filter on FieldA AND FieldB
  FIND('+');                 Record position is at the end of the new set
  SETRANGE(FieldB);          Filter removed
UNTIL NEXT = 0;              The system is asked to go to NEXT from an undefined position

Non-cursor to cursor

Code                         Result
SETRANGE(FieldA,Value);
FINDFIRST;                   Non-cursor fetch
REPEAT
UNTIL NEXT = 0;              The system is asked to go to NEXT on a non-existing set


Solutions

To eliminate performance problems with NEXT, consider these solutions:

• Use a separate looping variable.
• Restore the original key values, sorting, and filters before the NEXT statement.


• Read records to temporary tables, modify within, and write back afterwards.

• FINDSET(TRUE,TRUE).

Note also that FINDSET(TRUE,TRUE) is not a "real solution;" it is merely a reduction of the costs and should be used only as a last resort.

Suboptimum Coding and Other Performance Penalties

Taking performance into consideration often influences programming decisions. For example, if users do not use explicit locking, or if they program bad loops and provoke problems with NEXT, they often pay a big price in terms of performance. Users also need to review their code to see how many times it reads the same table, whether it uses COUNT just to check whether a record exists (IF COUNT = 0, IF COUNT = 1), or whether it uses MARKEDONLY instead of pushing records to a temporary table and reading them from there. Additionally, there are features in the application whose use should be avoided or minimized. For example, the advanced dimensions functionality is costly. Users should review the application setup from a performance perspective and take corrective action where possible.

The Classic Database Server setup can also make a big difference, such as object cache size, but a big overhead may come from the “Find As You Type” feature. When "Find As You Type" is enabled, the system is forced to do another query for each keystroke.

GUI overhead can slow the client down if, for example, a dialog is refreshed 1000 times in a loop. GUI overhead can also cause additional server trips. Using the default SourceTablePlacement = <Saved> on forms costs more than using Last or First. Users should review all forms showing data from large tables to look for performance problems.

Overview of NDBCS

NDBCS, the Microsoft Dynamics NAV Database Driver for Microsoft SQL Server, translates Microsoft Dynamics NAV data requests into Transact-SQL (T-SQL). This translation allows the Microsoft Dynamics NAV clients to communicate with SQL Server.

For example, consider the following C/AL statements:

GLEntry.SETCURRENTKEY("G/L Account No.","Posting Date");
GLEntry.SETRANGE("G/L Account No.",'7140');
IF GLEntry.FIND('-') THEN;


This code causes the following information to be sent from Microsoft Dynamics NAV to the NDBCS driver:

Function Name   Parameter       Data
FIND/NEXT       Table           G/L Entry
FIND/NEXT       Search Method   -
FIND/NEXT       Key             G/L Account No.,Posting Date,Entry No.
FIND/NEXT       Filter          G/L Account No.:7140

The NDBCS driver often uses cursors. Cursors are used in SQL Server when records are retrieved from a set record by record rather than by retrieving the entire set. There are different types of cursors with different capabilities. For example, the forward-only cursor allows fast reading from top to bottom, while the dynamic cursor allows retrieval in either direction. (There are two other types of cursors: static cursors and keyset-driven cursors.)

The NDBCS driver translates the preceding example into a set of T-SQL statements that uses a dynamic cursor, and looks like the following:

SELECT * FROM "CRONUS International Ltd_$G_L Entry" WITH (READUNCOMMITTED)
WHERE (("G_L Account No_"='7140'))
ORDER BY "G_L Account No_","Posting Date","Entry No_"
OPTION (FAST 5)

If the code explicitly indicates that the records will be modified, by calling GLEntry.LOCKTABLE before the C/AL code above, the following is sent to the driver:

Function Name   Parameter       Data
LOCKTABLE       Table           G/L Entry
FIND/NEXT       Table           G/L Entry
FIND/NEXT       Search Method   -
FIND/NEXT       Key             G/L Account No.,Posting Date,Entry No.
FIND/NEXT       Filter          G/L Account No.:7140

In this case the driver does the same, but it uses different locking hints, like the following:

WITH (UPDLOCK, ROWLOCK)

A loop can be added, like the following:

IF GLEntry.FIND('-') THEN
  REPEAT
  UNTIL GLEntry.NEXT = 0;


As a result, the following is sent to the driver when the first NEXT command is executed:

Function Name   Parameter        Data
FIND/NEXT       Table            G/L Entry
FIND/NEXT       Search Method    >
FIND/NEXT       Key              G/L Account No.='7140',Posting Date='01/01/00',Entry No.='77'

The second NEXT produces the following:

Function Name   Parameter        Data
FIND/NEXT       Table            G/L Entry
FIND/NEXT       Search Method    >
FIND/NEXT       Key              G/L Account No.='7140',Posting Date='01/02/00',Entry No.='253'

The important facts are that the NDBCS driver:

• Does not know what the code intends to do (whether it wants to browse forward or backward through the table, or just wants the first record).

• Builds dynamic cursors with many different optimizations, such as reading ahead with FETCH 5, FETCH 10, and so on, reading ahead through SELECT TOP 49 * FROM, or dropping the dynamic cursor and using a forward-only cursor instead.

Optimization of Cursors

By default, the way dynamic cursors are used is not very efficient. Because cursors have a big impact on performance, handling them differently can yield significant improvements. For example, there is no reason to create a cursor at all to retrieve a single record. When cursors are necessary, however, they can still be used efficiently. Cursor usage can be optimized with the following four Microsoft Dynamics NAV commands:

• ISEMPTY
• FINDFIRST
• FINDLAST
• FINDSET

ISEMPTY

By default, determining if a set is empty uses cursors. For example:

Customer.SETRANGE(Master, TRUE);
IF NOT Customer.FIND('-') THEN
  ERROR('Customer master record is not defined');

This code determines if a record exists, but causes the NDBCS driver to generate T-SQL that uses cursors. However, the ISEMPTY command has a different effect:

Customer.SETRANGE(Master, TRUE);
IF Customer.ISEMPTY THEN
  ERROR('Customer master record is not defined');

When executed, the code above results in this T-SQL command:

SELECT TOP 1 NULL FROM …

Note that the NDBCS driver uses NULL, which means that no record columns are retrieved from the database (as opposed to '*', which would get all columns). This makes it an extremely efficient command that causes only a few bytes to be sent over the network. It is a significant improvement, provided that the subsequent code does not need any field values from the record.

FINDFIRST

Retrieving the first record in a table can also be an unnecessarily expensive command. Consider this code:

Customer.SETRANGE(Master, TRUE);
IF NOT Customer.FIND('-') THEN
  ERROR('Customer master record is not defined');

The FINDFIRST command retrieves the first record in a set. Like ISEMPTY, FINDFIRST does not use cursors:

Customer.SETRANGE(Master, TRUE);
IF NOT Customer.FINDFIRST THEN
  ERROR('Customer master record is not defined');

When executed, this T-SQL is generated:

SELECT TOP 1 * FROM ... ORDER BY ...


Warning: Do not use this command to start a REPEAT/UNTIL NEXT loop, because the NEXT will have to create a cursor to fetch the subsequent records.
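
As a minimal sketch of the pattern this warning refers to (reusing the Customer filter from the example above; FINDSET is described later in this chapter):

// Not recommended: NEXT after FINDFIRST still forces a cursor for the remaining records
Customer.SETRANGE(Master, TRUE);
IF Customer.FINDFIRST THEN
  REPEAT
    // process the record
  UNTIL Customer.NEXT = 0;

// Recommended: start the loop with FINDSET instead
Customer.SETRANGE(Master, TRUE);
IF Customer.FINDSET THEN
  REPEAT
    // process the record
  UNTIL Customer.NEXT = 0;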

FINDLAST

Retrieving the last record in a table can also be an unnecessarily expensive command. Consider this code:

Message.SETCURRENTKEY(Date);
IF Message.FIND('+') THEN
  MESSAGE('Last message is dated ' + FORMAT(Message.Date));

The FINDLAST command retrieves the last record in a set. Like FINDFIRST, FINDLAST does not use cursors:

Message.SETCURRENTKEY(Date);
IF Message.FINDLAST THEN
  MESSAGE('Last message is dated ' + FORMAT(Message.Date));

When executed, this T-SQL is generated:

SELECT TOP 1 * FROM ... ORDER BY ... DESC

Warning: Do not use this command to start a REPEAT/UNTIL NEXT(-1) loop, because the NEXT(-1) will have to create a cursor to fetch the subsequent records.
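
For a descending loop, the 'old' FIND('+') therefore remains the appropriate choice. A minimal sketch, reusing the Message table from the example above:

// FINDLAST followed by NEXT(-1) would force a cursor anyway, so keep FIND('+') here
Message.SETCURRENTKEY(Date);
IF Message.FIND('+') THEN
  REPEAT
    // process the record, newest first
  UNTIL Message.NEXT(-1) = 0;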

FINDSET

The FINDSET command allows browsing through a set of records. In previous versions, retrieving a set in a loop could only be done this way:

IF FIND('-') THEN
  REPEAT
  UNTIL NEXT = 0;

FINDSET can be used without any arguments (the FINDSET arguments are explained later in this chapter), like this:

IF FINDSET THEN
  REPEAT
  UNTIL NEXT = 0;

Unlike the FIND('-') command, FINDSET does not use cursors. When executed, the T-SQL result looks like this:

SELECT TOP 500 * FROM ...


The REPEAT/UNTIL NEXT loop then browses through the records locally on the client machine. This is the recommended way to retrieve sets quickly, without any cursor overhead.

Maximum Record Set Size

There is a parameter in Microsoft Dynamics NAV that sets the maximum number of records retrieved from the database in one read (File, Database, Alter, Advanced tab, Caching, Record Set = 500). If the set is bigger than this maximum, Microsoft Dynamics NAV continues working, but it replaces the reading mechanism with a dynamic cursor. If there is an indication that this is going to happen, use the 'old' FIND('-') command rather than FINDSET.

Use FINDSET for forward direction only; it will not work for REPEAT/UNTIL NEXT(-1).

Also, if the LOCKTABLE command is used prior to the FINDSET, the set is locked, and records can be modified within the loop.

A good example of an efficient use of cursors (using the 'old' FIND command) is reading a big set of records, for example all G/L entries for a specific account, where the set probably contains more than 500 records:

GLEntry.SETRANGE("G/L Account No.",'6100');
IF GLEntry.FIND('-') THEN
  REPEAT
  UNTIL GLEntry.NEXT = 0;

A good example of using the new FINDSET command (as opposed to the 'old' FIND command) is reading a small set of records, such as all sales lines in a sales order, where the set will probably always contain fewer than 500 records. This can be done this way:

SalesLine.SETRANGE("Document Type","Document Type"::Order);
SalesLine.SETRANGE("Document No.",'S-ORD-06789');
IF SalesLine.FINDSET THEN
  REPEAT
    TotalAmount := TotalAmount + SalesLine.Amount;
  UNTIL SalesLine.NEXT = 0;

FINDSET(TRUE)

This variation of FINDSET locks the set that is read, so it is equivalent to a LOCKTABLE followed by FIND('-'). FINDSET(TRUE) can be used like this:

IF FINDSET(TRUE) THEN
  REPEAT
  UNTIL NEXT = 0;


Unlike the commands discussed previously, this command does use a dynamic cursor. The main purpose of this command is to raise the isolation level before starting to read the set because the resulting records are to be modified.

FINDSET(TRUE) uses the read-ahead mechanism to retrieve several records instead of just one. It is recommended that LOCKTABLE be used with FINDSET for small sets, and that the FINDSET(TRUE) command be used for sets larger than 500 records (the Record Set parameter).

A good example of using the FINDSET(TRUE) command is reading a big set of records that must also be modified. For example, when going through all G/L entries for a specific account and changing field values based on a record condition, the filtered set will probably contain more than 500 records. This might be done this way:

GLEntry.SETRANGE("G/L Account No.",'6100');
IF GLEntry.FINDSET(TRUE) THEN
  REPEAT
    IF GLEntry.Amount > 0 THEN BEGIN
      GLEntry."Debit Amount" := GLEntry.Amount;
      GLEntry."Credit Amount" := 0;
    END ELSE BEGIN
      GLEntry."Debit Amount" := 0;
      GLEntry."Credit Amount" := -GLEntry.Amount;
    END;
    GLEntry.MODIFY;
  UNTIL GLEntry.NEXT = 0;

A good example of using the LOCKTABLE and FINDSET commands (as opposed to the FINDSET(TRUE) command) is reading a small set of records that must also be modified. For example, when going through all sales lines for a specific order and changing several field values, the filtered set will probably contain fewer than 500 records. This can be done this way:

SalesLine.SETRANGE("Document Type","Document Type"::Order);
SalesLine.SETRANGE("Document No.",'S-ORD-06789');
SalesLine.LOCKTABLE;
IF SalesLine.FINDSET THEN
  REPEAT
    SalesLine."Qty. to Invoice" := SalesLine."Outstanding Quantity";
    SalesLine."Qty. to Ship" := SalesLine."Outstanding Quantity";
    SalesLine.MODIFY;
  UNTIL SalesLine.NEXT = 0;


FINDSET(TRUE, TRUE)

This variation of FINDSET(TRUE) also allows modification of the key values that define the sorting order of the set. It can be used like this:

IF FINDSET(TRUE,TRUE) THEN
  REPEAT
  UNTIL NEXT = 0;

The command, like FINDSET(TRUE), uses a dynamic cursor. The main purpose of this command is to raise the isolation level before starting to read the set, because the set needs to be modified. It does not use the read-ahead mechanism; instead it retrieves one record at a time, because the set is expected to be invalidated within the loop. Avoid using this command where possible; the loop code can usually be changed to a more efficient approach, such as using a different record variable for browsing through the set.

A good example of using the FINDSET(TRUE,TRUE) command (as opposed to the FIND command) is reading a set of records where a key value needs to be modified. This should be avoided whenever possible, but if there is no way around it, use FINDSET(TRUE,TRUE). For example, when going through all sales lines for a specific order and changing a key value, the filtered set will probably contain fewer than 500 records. This can be done this way:

SalesLine.SETRANGE("Document Type","Document Type"::Order);
SalesLine.SETRANGE("Document No.",'S-ORD-06789');
SalesLine.SETFILTER("Location Code",'');
IF SalesLine.FINDSET(TRUE,TRUE) THEN
  REPEAT
    IF SalesLine.Type = SalesLine.Type::Item THEN
      SalesLine."Location Code" := 'GREEN';
    IF SalesLine.Type = SalesLine.Type::Resource THEN
      SalesLine."Location Code" := 'BLUE';
    SalesLine.MODIFY;
  UNTIL SalesLine.NEXT = 0;

Note that the example above can easily be changed into more efficient code by using FINDSET instead of FINDSET(TRUE,TRUE), with a separate record variable used to modify the records. This can be done this way:

SalesLine.SETRANGE("Document Type","Document Type"::Order);
SalesLine.SETRANGE("Document No.",'S-ORD-06789');
SalesLine.SETFILTER("Location Code",'');
SalesLine.LOCKTABLE;
IF SalesLine.FINDSET THEN
  REPEAT
    SalesLine2 := SalesLine;
    IF SalesLine.Type = SalesLine.Type::Item THEN
      SalesLine2."Location Code" := 'GREEN';
    IF SalesLine.Type = SalesLine.Type::Resource THEN
      SalesLine2."Location Code" := 'BLUE';
    SalesLine2.MODIFY;
  UNTIL SalesLine.NEXT = 0;

Locking, Blocking, and Deadlocks

When data is read from the database, Microsoft Dynamics NAV uses the READUNCOMMITTED isolation level, meaning that any other user can modify the records that are currently being read. This is often referred to as optimistic concurrency. Data that is read is considered "dirty" because it can be modified by another user. When the data is updated, the Microsoft Dynamics NAV driver must compare the timestamp of the record. If the record is 'old', the Microsoft Dynamics NAV error "Another user has modified the record after you retrieved it from the database" is displayed.

Optimistic concurrency allows for better performance because data can be accessed simultaneously by multiple queries. The tradeoff is that care must be taken when writing code that modifies the data. Locking and blocking must be employed to synchronize access, but a deadlock (a condition in which two or more competing processes block each other indefinitely) can occur. The following subtopics discuss strategies for synchronizing data access while avoiding deadlocks.

Locking

The isolation level can be changed to a more restrictive setting, such as UPDLOCK. At this level, records that are read are locked, which means that no other user can modify them. This is referred to as pessimistic locking: the server protects the record in case it needs to be modified, making it impossible for others to modify it.

An example of a lock of a customer record can be demonstrated by this code:

Customer.LOCKTABLE;
Customer.GET('10000');     // Customer 10000 is locked
Customer.Blocked := TRUE;
Customer.MODIFY;
COMMIT;                    // Lock is removed

If the record is not locked, the following situation can occur:

1. User A executes Customer.GET('10000'); and reads the record without any lock.
2. User B executes Customer.GET('10000'); and reads the same record without any lock.
3. User B executes Customer.Blocked := TRUE; Customer.MODIFY; COMMIT; and modifies the record.
4. User A executes Customer.Blocked := FALSE; Customer.MODIFY; and gets the error "Another user has modified the record...".

Result: User A fails (ERROR), while User B succeeds (SUCCESS).

Blocking

When other users try to lock data that is currently locked, they are blocked and have to wait. If they wait longer than the defined timeout, they receive a Microsoft Dynamics NAV error: "The XYZ table cannot be locked or changed because it is already locked by the user with User ID ABC." If necessary, change the default timeout with File, Database, Alter, Advanced tab, Lock Timeout checkbox and Timeout duration (sec) value.

Based on the previous example where two users try to modify the same record, the data that is intended to be modified can be locked, preventing other users from doing the same. Here is an example:

1. User A executes Customer.LOCKTABLE; Customer.GET('10000'); and reads the record with a lock.
2. User B executes Customer.LOCKTABLE; Customer.GET('10000'); and tries to read the same record with a lock.
3. User B waits and is blocked, because the record is locked by User A.
4. User A executes Customer.Blocked := TRUE; Customer.MODIFY; and successfully modifies the record. User B is still blocked.
5. User A executes COMMIT; and the lock is released.
6. The data is sent to User B.
7. User B executes Customer.Blocked := FALSE; Customer.MODIFY; and successfully modifies the record.
8. User B executes COMMIT; and the lock is released.

Result: both User A and User B succeed (SUCCESS).

Deadlocks

There is a potential situation in which blocking cannot be resolved by the server in a good way. It arises when one process is blocked because another process has locked some data, while that other process is in turn blocked because it tries to lock data already locked by the first process. Only one of the transactions can finish; SQL Server terminates the other and sends an error message back to the client: "Your activity was deadlocked with another user …"

For example, consider a case where two users are working simultaneously and trying to get each other’s blocked records, as shown in this pseudo code:

1. Both User A and User B execute TableX.LOCKTABLE; TableY.LOCKTABLE; indicating that the next read will use UPDLOCK.
2. User A executes TableX.FINDFIRST; and User B executes TableY.FINDFIRST;. User A locks Record1 in TableX, and User B locks Record1 in TableY.
3. User A then executes TableY.FINDFIRST; while User B executes TableX.FINDFIRST;. User A wants User B's record, while User B wants User A's record. A conflict occurs.
4. SQL Server detects the deadlock and arbitrarily chooses one of the transactions; that user receives the error "Your activity was deadlocked with another user".

Result: in this example User A fails (ERROR), while User B succeeds (SUCCESS).


SQL Server supports record-level locking, so there may be situations where two such activities bypass each other without any problems, as in the following pseudo code (note that the second read now fetches the last record instead of the first):

1. Both User A and User B execute TableX.LOCKTABLE; TableY.LOCKTABLE; indicating that the next read will use UPDLOCK.
2. User A executes TableX.FINDFIRST; and User B executes TableY.FINDFIRST;. User A locks Record1 in TableX, and User B locks Record1 in TableY.
3. User A then executes TableY.FINDLAST; while User B executes TableX.FINDLAST;. There is no conflict, because no records are in contention.

Result: both User A and User B succeed (SUCCESS).

Note that there would still be a deadlock if one of the tables were empty or contained only one record. To add to this complexity, there may be a situation where two processes read the same table from opposite directions and meet in the middle, as in the following pseudo code:

1. Both User A and User B execute TableX.LOCKTABLE; TableY.LOCKTABLE; indicating that the next read will use UPDLOCK.
2. User A executes TableX.FIND('-'); and reads from the top of TableX, while User B executes TableX.FIND('+'); and reads from the bottom of TableX.
3. User A loops with REPEAT ... UNTIL NEXT = 0; while User B loops with REPEAT ... UNTIL NEXT(-1) = 0;.
4. After some time, User A wants User B's record, while User B wants User A's record. A conflict occurs.
5. SQL Server detects the deadlock and chooses one of the users for failure; that user receives the error "Your activity was deadlocked with another user".

Result: in this example User A succeeds (SUCCESS), while User B fails (ERROR).


There are also situations where blocking on an index update may produce a conflict, and situations where updating SIFT tables can cause a deadlock. These situations can be complex and hard to avoid. However, the transaction chosen to fail is rolled back to the beginning, so in itself this is not a major issue. If the process is written with several partial commits, on the other hand, the deadlock can leave "dirty" data in the database as a side effect, and that can become a major issue for the customer.

Avoiding Deadlocks

A large number of deadlocks can lead to major customer dissatisfaction, but deadlocks cannot be avoided entirely. To reduce the number of deadlocks, do the following:

• Process tables in the same sequence.
• Process records in the same order.
• Keep the transaction length to a minimum.
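
As a minimal sketch of the first two rules (the tables and the ordering chosen here are illustrative assumptions, not taken from a specific standard object): if every transaction that touches both tables locks and reads them in the same sequence, for example Sales Header before Sales Line, and visits the records in ascending key order, two such transactions cannot end up waiting for locks held by each other.

// Every routine that needs both tables follows the same order: Sales Header first, then Sales Line
SalesHeader.LOCKTABLE;
SalesLine.LOCKTABLE;
IF SalesHeader.GET(SalesHeader."Document Type"::Order,'S-ORD-06789') THEN BEGIN
  SalesLine.SETRANGE("Document Type",SalesHeader."Document Type");
  SalesLine.SETRANGE("Document No.",SalesHeader."No.");
  IF SalesLine.FINDSET THEN
    REPEAT
      // process the line; records are visited in ascending key order
    UNTIL SalesLine.NEXT = 0;
END;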

If the above is not possible due to the complexity of the processes, as a last resort revert to serializing the code by ensuring that conflicting processes cannot execute in parallel. The following code demonstrates how this can be done:

1. Both User A and User B execute TableX.LOCKTABLE; TableY.LOCKTABLE; TableA.LOCKTABLE; indicating that the next read will use UPDLOCK.
2. User A executes TableA.FINDFIRST; and locks Record1 in TableA.
3. User B executes TableA.FINDFIRST; and tries to lock the same Record1 in TableA.
4. User B is blocked and waits.
5. User A executes TableY.FINDFIRST; TableX.FINDFIRST; processing the tables in the opposite order from User B, while User B remains blocked.
6. User A executes COMMIT; and the block is released.
7. User B's read of TableA now succeeds.
8. User B executes TableX.FINDFIRST; TableY.FINDFIRST; processing the tables in the opposite order from User A.
9. User B executes COMMIT;.

Result: both User A and User B succeed (SUCCESS).

Serializing the transactions increases the probability of timeouts, so keeping transactions short becomes even more important. This also demonstrates that the various principles and methods can be combined, depending on the situation and complexity; one method works for one customer while another works for another.

It is also recommended to adhere to the following "golden rules":

• Test conditions of data validity before the start of locking.
• Allow some time gap between heavy processes so that other users can process.
• Never allow user input during an open transaction.

If the transaction is too complex or there is limited time, consider discussing with the customer the possibility of processing heavy jobs overnight, thus avoiding the daily concurrency complexity and the high cost of rewriting the code.

How SIFT Data is Stored in SQL Server

SIFT tables are used in Microsoft Dynamics NAV version 5.0 and older to implement SIFT on SQL Server; they store aggregate values for the SumIndexFields of keys in the source tables. Starting with version 5.0 Service Pack 1, these SIFT tables are replaced by indexed views, and SIFT tables themselves are no longer part of Microsoft Dynamics NAV. This section is preserved, however, because developers working with Microsoft Dynamics NAV are likely to run into issues concerning SIFT tables in implementations of older versions, and it is essential to have a good understanding of their inner workings.

A SumIndexField is always associated with a key, and each key can have a maximum of 20 SumIndexFields associated with it. When the MaintainSIFTIndex property of a key is set to Yes, Microsoft Dynamics NAV regards this key as a SIFT key and creates the SIFT structures that are needed to support it.


Any field of the Decimal data type can be associated with a key as a SumIndexField. Microsoft Dynamics NAV then creates and maintains a structure that stores the calculated totals that are required for the fast calculation of aggregated totals.

In the SQL Server Option for Microsoft Dynamics NAV, this maintained structure is a normal table, referred to as a SIFT table. These SIFT tables exist on SQL Server, but are not visible in the table designer in C/SIDE. As soon as the first SIFT table is created for a base table, a dedicated SQL Server trigger is also created and is then automatically maintained by Microsoft Dynamics NAV. This is known as a SIFT trigger. A base table is simply a standard Microsoft Dynamics NAV table, as opposed to an extra SQL Server table that is created to support Microsoft Dynamics NAV functionality.

One SIFT trigger is created for each base table that contains SumIndexFields. This dedicated SQL Server trigger supports all the SIFT tables that are created to support this base table. The purpose of the SIFT trigger is to implement all the modifications that are made on the base table whenever a SIFT table is affected. This means that the SIFT trigger automatically updates the information in all the existing SIFT tables after every modification of the records in the base table.

The name of the SIFT trigger has the following format: <Base Table Name>_TG. For example, the SIFT trigger for table 17, G/L Entry, is named "CRONUS International Ltd_$G_L Entry_TG." Regardless of the number of SIFT keys that are defined for a base table, only one SIFT trigger is created.

A SIFT table is created for every base table key that has at least one SumIndexField associated with it. No matter how many SumIndexFields are associated with a key, only one SIFT table is created for that key.

The name of the SIFT table has the following format: <Company Name>$<base Table ID>$<Key Index>. For example, one of the SIFT tables created for table 17, G/L Entry is named "CRONUS International Ltd_$17$0."

The column layout of the SIFT tables is based on the layout of the SIFT key along with the SumIndexFields that are associated with this SIFT key. But the first column in every SIFT table is always named "bucket" and contains the value of the bucket or the SIFT level for the precalculated sums that are stored in the table. To view the structure, look at the SIFTLevels property for a key in Microsoft Dynamics NAV.

After the bucket column is a set of columns with names that start with the letter "f." These are also known as f- or key-columns. Each of these columns represents one field of the SIFT key.

The name of these columns has the format, f<Field No.>, where Field No. is the integer value of the Field No. property of the represented SIFT key field. For example, column f3 in “CRONUS International Ltd_$17$0” represents the G/L Account No. field (it is field number three in the base table G/L Entry).


Finally, there is a group of columns with names that start with the letter "s" followed by numbers. These are known as s-columns. These columns represent every SumIndexField associated with the SIFT key.

The name of these columns has the format, s<Field No.>. Field No. is the integer value of the Field No. property of the represented SumIndexField. The precalculated totals of values for the corresponding SumIndexFields are stored in these fields of the SIFT table.
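
As an illustration of this naming scheme (a sketch only; it assumes a SIFT key of G/L Account No.,Posting Date with Amount, field number 17 of G/L Entry, as one of its SumIndexFields), the column layout of "CRONUS International Ltd_$17$0" would start like this:

bucket   f3 (G/L Account No.)   f4 (Posting Date)   s17 (precalculated sum of Amount)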

With regard to performance, SIFT tables are one of the biggest Microsoft Dynamics NAV performance problems on SQL Server: a single record update in the base table can produce a massive stream of Input/Output (I/O) requests to update records in the SIFT tables, possibly blocking other users during that time.


Lab 11.1 - Find As You Type in the Client Monitor

An important part of SQL optimization is being familiar with the tools that are used to diagnose Microsoft Dynamics NAV activity on SQL Server. The Client Monitor is one of the most important of these tools.

Scenario

As discussed in this chapter, the Find As You Type option can cause many performance problems on SQL Server (or any other server). In this lab you will learn to use the Client Monitor to look at the SQL Statements that are generated by Microsoft Dynamics NAV, when the Find As You Type database option is turned on.

The Client Monitor is a tool that logs the details of every action that is taken by the user. It logs which object is used, the duration of each step, and the SQL Statements that are generated, among many other things. In this lab, you will focus on the SQL Statements.

Step by Step

The Client Monitor is a tool that runs from the Classic client. In Microsoft Dynamics NAV:

1. Open the File menu, and select Alter from the Database submenu.
2. Go to the Options tab, and ensure that the Allow Find As You Type property is turned on.

FIGURE 11.1 ALLOW FIND AS YOU TYPE IN ALTER DATABASE

3. Click OK to accept the values in the Alter Database Form.

The Client Monitor logs everything, and it takes very little to generate thousands of lines. When analyzing the Client Monitor results, it can be overwhelming to wade through large numbers of seemingly irrelevant entries. To make analysis easier, prepare the transaction as much as possible so that there are as few irrelevant lines as possible, and ensure that the action of interest is ready to be executed before starting the Client Monitor.


The goal is to log what happens when a user is trying to find a particular customer from the Customer List form, by typing the name into the Find dialog. As the user is typing, the system will query the database for the customer name. The Client Monitor will log all the queries that are generated.

4. Open the Customer Card.
5. Press F5 to open the Customer List form.
6. Select Client Monitor from the Tools menu.

FIGURE 11.2 CLIENT MONITOR STARTED

7. Click Start and move back to the Customer List form.
8. With the cursor on the Name field, press Ctrl+F to open the Find dialog.
9. Ensuring that the Find As You Type box is checked, type in "Cronus."
10. Go back to the Client Monitor and click Stop.

At this point, the Client Monitor should be populated with a number of lines. Take a moment to investigate some of the entries, and to become familiar with the type of information that can be found. The focus is to find the SQL Statements that are generated when searching for a customer name.

11. Move the cursor to the Parameter field. Drop down the option list and look at the available options.


12. Press F7 to enter a filter.
13. Select SQL Statement and click OK.

FIGURE 11.3 CLIENT MONITOR FILTERED BY SQL STATEMENT

Searching for the name Cronus, with the Find As You Type option activated, generated a total of 32 SQL Statements. Take a closer look at the actual SQL Statements in the Data field, starting with the one at the top. Go down the list of statements until the following is reached:

SELECT TOP 1 *,DATALENGTH("Picture")
FROM "CRONUS International Ltd_$Customer"
WHERE (("Name" LIKE '%[cCçÇ]%'))
ORDER BY "No_"

Note that the WHERE clause includes the keyword LIKE. This query tells SQL Server that the name may contain a lower case or an upper case letter c, possibly with an accent, anywhere in the value. These LIKE queries are very costly for SQL Server, because they do not specify an exact match, and SQL Server therefore has to scan the table.

Move a little further down the list until the following is reached:

SELECT *,DATALENGTH("Picture")
FROM "CRONUS International Ltd_$Customer"
WHERE "No_"='01454545' AND (("Name" LIKE '%[cCçÇ][rR]%'))

The WHERE clause now includes variations of the characters in the string 'cr', the first two letters of the name typed. The same type of LIKE query is also found for the strings 'cro', 'cron', all the way through to 'cronus'. Each character that is typed generates another LIKE query.

A number of additional queries are generated to select records in a number of different ways. The Microsoft Dynamics NAV executable generates these queries to support the Find As You Type option, assuming that the user does not really know what he or she is looking for.


While this option can be of functional value to users, who are sometimes adamant about keeping it, it is also very costly. When many users are searching for records at the same time, overall performance can be severely affected.

The next few steps involve turning off the Find As You Type option and looking at the same data in the Client Monitor. First, clear the Client Monitor.

14. Remove all filters from the Client Monitor by pressing Ctrl+Shift+F7.
15. Select all records and press F4 to delete all the lines.
16. Open the Alter Database form and go to the Options tab.
17. Remove the checkmark from the Allow Find As You Type field and click OK.
18. Turn on the Client Monitor and go back to the Customer List form.
19. Open the Find dialog.

When the Find dialog opens, it has the name of the current field pre-populated. Note that Find As You Type is cleared and that it cannot be edited.

FIGURE 11.4 FIND AS YOU TYPE DISABLED

When typing the letters into the Find What text box, note that the system is not jumping through the records. To actually search for the string that is entered into the Find What text box, the Find First button or the Find Next button must be clicked.

20. Enter the string ‘cronus’ and click Find First. This should cause the system to jump to the same customer found previously.

21. Go back to the Client Monitor and click Stop.
22. Filter the Client Monitor's Parameter field on SQL Statement.


This time, there is only one LIKE query, for the full string. In addition to the LIKE query, the system also generated some other queries. However, by searching just once, there are only six queries in total, significantly fewer than with the Find As You Type option turned on.

The LIKE query is still not a very good query for performance, but by turning off the Find As You Type option, a large number of queries are eliminated from the system. With many users searching the system, this can make a big difference in performance.


Lab 11.2 - Using the Client Monitor to Analyze Performance

Once the basic setup and usage of the Client Monitor has been mastered, the next step is to learn how to take advantage of the built-in data analysis tools in Microsoft Dynamics NAV (such as filtering) to do additional analysis of the results created by the Client Monitor. Such techniques are especially useful for analyzing the timing (such as duration) of various Microsoft Dynamics NAV processing activities.

Scenario

The Client Monitor is one of the most useful tools within the Microsoft Dynamics NAV client to analyze how the system is performing. Since it logs individual steps in the code, and it logs every query that is sent to SQL Server, it is possible to determine the details of what does and does not work well in each individual implementation.

In this lab you will post a sales invoice while running the Client Monitor. Then you will look at the duration of the steps to analyze the performance.

Step by Step

The Client Monitor is a tool that runs from the Classic client. From Microsoft Dynamics NAV:

1. Ensure that the Client Monitor is cleared, by deleting all entries from the form.

2. Create a Sales Order for any item in the system. In this example, a Sales Order for Customer number 10000 is created, with an order for five Side Panels (Item number 70000).

FIGURE 11.5 SALES ORDER


3. Turn on the Client Monitor.
4. Ship and Invoice the Sales Order.
5. Stop the Client Monitor.

As discussed previously, the Client Monitor logs every step of the client process, to a level of detail where it is almost possible to identify individual lines of code. For every step it creates an entry, and logs a number of parameters that are relevant for that particular step. Lab 11.1 looked at the SQL Statement parameter. Not every step has a line for this parameter, because not every step generates a SQL statement.

Different reasons for using the Client Monitor make different parameters interesting to look at. For this lab, you will look at the Elapsed Time (ms) parameter (milliseconds), to identify the steps that take the longest time.

6. Filter the Client Monitor where the Parameter field equals Elapsed Time (ms).

7. Filter the Number field for values greater than 100.

FIGURE 11.6 CLIENT MONITOR DURATION GREATER THAN 100

The results in the figure above may be slightly different from the results that you are seeing. Depending on a number of variables, performance varies greatly from one machine to the next. The screenshot above is taken from a standard Cronus International demonstration company in a Virtual PC image.

The Client Monitor shows five results with a duration that is longer than 100 milliseconds (ms). It might be necessary to filter the Elapsed Time field on a different value for any results to show in this screen. To learn what actually happens during these steps, all lines for each entry need to be investigated.

As indicated before, the results in this example might be different from the results in your system. The next steps are taken in this example, and they are an illustration of how to analyze individual Client Monitor results.


The next step is to look at entry number 339:

8. Set a filter on the Entry No. field where it equals 339.
9. Remove the filter from the Number field and from the Parameter field.

Now all the Client Monitor lines that are part of Entry number 339 can be seen, as shown in the following image.

FIGURE 11.7 CLIENT MONITOR FOR STEP 339

This Client Monitor entry provides some interesting information:

• The step took 101 ms, which is one tenth of a second. For one single step this is not a very high number, but if this had been a sales order with 100 lines, this step might have been repeated 100 times, and it could have taken a total of 10 seconds. Ideally, no step should take more than 10-30 ms, and everything that has a longer duration warrants a closer inspection of the code.

• The Function Name is FIND/NEXT, which tells us that this step is a C/AL code statement that does either a FIND or a NEXT. As discussed in this chapter, some types of FIND and NEXT commands can cause performance problems, so this provides an important clue about what might be going on.

• The Source Object is Codeunit 80 Sales-Post, which identifies the object that contains this FIND/NEXT statement.

• The Source Trigger/Function further identifies exactly which trigger is being executed.

• The Source Text specifies the exact line of code that is executed.
• The SQL Statement shows which query is sent to SQL Server to support this C/AL statement.


What this does NOT tell us is the context of the code. For that, it is necessary to look at the actual object and determine why the code is written this way. Typically, FIND('+') is not ideal for performance; possible replacements include FINDLAST or ISEMPTY.

The preliminary conclusion is that this code needs to be investigated further, and possibly rewritten to increase performance. The next step is to open the codeunit and look at the actual code.

FIGURE 11.8 CREATE PREPAYMENT LINES IN CODEUNIT 80

As shown in the figure, the CreatePrepaymentLines function does contain the line of code identified by the Client Monitor, which is highlighted in green.

At first glance, all this line of code seems to do is determine whether records exist within the filter, and EXIT if there are none. So the first thing that comes to mind is to replace this code with ISEMPTY. Further down, however, one of the field values is used to set the value of the NextLineNo variable, so this particular FIND('+') statement should be replaced by FINDLAST.

Another line of code is highlighted, a FIND('-') statement. Looking at the REPEAT statement directly following it, this code loops through a set of records. This FIND('-') statement should be replaced by a FINDSET statement, and further code review should be done to determine the value of its parameters.
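
A minimal sketch of the kind of rewrite suggested here (this is not the actual Codeunit 80 code; the record variable and the line-number increment are illustrative assumptions):

// Before: FIND('+') fetches the last record through a cursor
IF NOT SalesLine.FIND('+') THEN
  EXIT;
NextLineNo := SalesLine."Line No." + 10000;

// After: FINDLAST generates SELECT TOP 1 * ... ORDER BY ... DESC, with no cursor
IF NOT SalesLine.FINDLAST THEN
  EXIT;
NextLineNo := SalesLine."Line No." + 10000;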


Summary

This chapter covered the essential points of ensuring optimal performance in Microsoft Dynamics NAV applications, particularly when using SQL Server. The two database options were compared, revealing that they are different database types and therefore have different performance characteristics.

The specific ways in which SQL Server is implemented were discussed in detail to reveal why some operations are expensive in terms of performance, and others are cheap and just as effective.

The labs showed how to use the Client Monitor to analyze what happens inside the code. This tool should be used sparingly in a production environment, since the tool itself consumes a significant amount of system resources.


Test Your Knowledge

1. What are two important proprietary Classic Database Server features that are simulated on SQL Server? Explain how these features are simulated.

2. Explain the difference between the concepts of clustered index and primary key.

3. What is the purpose of collation?


4. What is the purpose of database replication, and which replication mechanism on SQL Server can be used for Microsoft Dynamics NAV?

5. What elements are important when defining the backup strategy? List four elements of the most comprehensive backup strategy.

6. How can a SQL Server index be disabled from the table designer in C/SIDE, without disabling the key?


7. Why is the Find As You Type option bad for performance on SQL Server?

8. What can be done to help avoid deadlocks?

9. How is SIFT stored on SQL Server?


10. What tools in C/SIDE can be used to troubleshoot performance issues?


Quick Interaction: Lessons Learned

Take a moment and write down three key points you have learned from this chapter:

1.

2.

3.


Solutions

Test Your Knowledge

1. What are two important proprietary Classic Database Server features that are simulated on SQL Server? Explain how these features are simulated.

MODEL ANSWER:

SIFT, which is simulated on SQL Server by indexed views, and prior to version 5.0 SP1 by additional SIFT tables. Data versioning, which is simulated on SQL Server by including a timestamp value for each record in the database.

2. Explain the difference between the concepts of clustered index and primary key.

MODEL ANSWER:

The Clustered Index is the index that is used by SQL Server to physically store the data. If the clustered index is set to any particular field, then SQL Server will physically store the records in the table in the order of that field. The Primary Key is the key that defines the uniqueness of a record. Primary key field values uniquely identify a record in the table. It is possible to set an index other than the primary key index as a table’s clustered index.

3. What is the purpose of collation?

MODEL ANSWER:

The collation of a database determines which character set is used to store the values in the database. It determines the way that data is sorted, and can affect the way that data is retrieved from the database.

4. What is the purpose of database replication, and which replication mechanism on SQL Server can be used for Microsoft Dynamics NAV?

MODEL ANSWER:

Database replication is the process of distributing data from one source database to one or more destination databases. It provides for load balancing, offline processing and redundancy. For Microsoft Dynamics NAV, it is recommended to only allow data to be modified in the source database. For this reason, it is recommended to use snapshot or transactional replication, and not to use merge replication.


5. What elements are important when defining the backup strategy? List four elements of the most comprehensive backup strategy.

MODEL ANSWER:

The defining elements of a backup strategy are the recovery model, the customer's attitude toward data loss, and available system resources. The most comprehensive backup strategy includes the following:
• The Recovery Model is set to Full.
• There is a periodic full database backup.
• There is a periodic differential database backup.
• There is a periodic transaction log backup.

6. How can a SQL Server index be disabled from the table designer in C/SIDE, without disabling the key?

MODEL ANSWER:

By turning off the MaintainSQLIndex property of the key.

7. Why is the Find As You Type option bad for performance on SQL Server?

MODEL ANSWER:

Because the Microsoft Dynamics NAV executable generates a LIKE query for each character that is typed into the Find dialog box, which causes large overhead.

8. What can be done to help avoid deadlocks?

MODEL ANSWER:

To help avoid deadlocks, the following can be done:
• Lock tables in the same order for different types of transactions.
• Process records in the same order for different types of transactions.
• Keep transaction length to a minimum.
• Serialize the transaction, by locking a general table at the start of every transaction.

9. How is SIFT stored on SQL Server?

MODEL ANSWER:

Prior to version 5.0 SP1, SIFT is stored on SQL Server in separate SIFT tables, and SIFT totals are updated by table triggers on SQL Server. From version 5.0 SP1 forward, SIFT is stored on SQL Server by indexed views.


10. What tools in C/SIDE can be used to troubleshoot performance issues?

MODEL ANSWER:

The Client Monitor, and Code Coverage.
