
A TECHNICAL LOOK INSIDE SYBASE'S ADAPTIVE SERVER ENTERPRISE 15.0.X: UNDERSTANDING TASK MANAGEMENT AND SCHEDULING IN THE ASE KERNEL

September 2006, Version 1.0

    Written by:

Peter F. Thawley
Senior Director / Architect, Technology Evangelism
Sybase ITSG
[email protected]

A key concern of IT organizations is predicting their platforms' capacity. As a business grows, its systems must scale to handle growth in supporting additional users, applications of varying types, and increasing data volumes. To fully understand Adaptive Server Enterprise's (ASE) capacity in a specific environment, one must understand and maximize the efficiency with which the users of the various applications utilize the resources that ASE uses to provide its services. This paper introduces the methods ASE uses to efficiently manage the many users requesting database services. These methods, commonly referred to as task management, control how Adaptive Server shares its CPU time across different users as well as how system services such as database locking, disk I/O, and network I/O are affected by these mechanisms.


Understanding Task Management and Scheduling in the ASE Kernel
Version 1.0, September 2006. © 2006 Sybase, Inc., Sybase ITSG Engineering. All Rights Reserved.

    Table of Contents

Overview
ASE Design Principles
  ASE's Virtual Server Architecture
Data Structures to Manage Tasks
  User Tasks
  Run Queues
  Sleep Queues
  Pending Disk I/O Queues
  Pending Network I/O Queues
  Lock Chains
Knowing When to Do What
  Keeping Time by Counting Clock Ticks
  To Yield or Not to Yield, That is the Question
Conclusion


    Table of Figures

Figure 1  Important Data Structures Used in ASE's Task Management
Figure 2  Data Structures Representing User Tasks
Figure 3  Task Priorities in ASE
Figure 4  Partitioning CPU Capacity into Engine Groups
Figure 5  Task Selection Using Engine Groups
Figure 6  ASE's Lock Chains
Figure 7  ASE's Lock Compatibility Matrix


OVERVIEW

Relational databases share one inherent similarity with operating systems: each is required to respond to the requests of hundreds, if not thousands, of simultaneous users. This requirement constantly challenges systems (i.e., the hardware, operating system, and RDBMS) to efficiently and equitably share the resources needed to provide these services. Different RDBMS vendors respond to this challenge differently. At one extreme, Oracle's predominately process-based architecture defers this challenge to the operating system and hardware by representing users as separate instances of the Oracle kernel. While certainly simple, this approach forces a general-purpose operating system to manage resources such as memory and CPU scheduling on behalf of specialized database processing, and results in significant resource consumption to provide these services.

At the other extreme, Sybase took the time to build a database that could be as efficient as possible with resources such as memory and CPU by building a multi-threaded kernel. Since operating systems of the late '80s and early '90s did not provide threads, Sybase built its own threads package to minimize both memory and CPU consumption. This implies ASE has complete responsibility for nearly all aspects of multi-user database services, such as sharing the finite amount of CPU time allotted to it by the operating system across many database users. Therefore, how the ASE kernel chooses to share its resources impacts both application performance and the system's total capacity. An inequitable allocation of computing resources can lead to great performance for some users at the expense of others!

One of the first steps in understanding a system's capacity is to understand how user tasks consume key system resources and which system resources constrain performance. To do this, one needs to understand the context switch behavior of the user tasks, that is, how and why user tasks start and stop execution. This step, vital to the final tuning and capacity planning of a system, allows you to understand the relationship between the components of the system and the standard performance metrics of throughput (i.e., transactions per second) and response time. This paper introduces the concepts and algorithms of task management used in ASE, so let's begin by reviewing the design principles under which ASE has been built.


ASE DESIGN PRINCIPLES

One of ASE's principal and ongoing design goals since its first release has been efficient utilization of hardware and operating system resources. For this reason, and because most operating systems did not yet implement operating-system-level threading, Sybase designed a database kernel which implemented its own threading model. This provides customers with a database having a very low memory footprint, since user tasks are represented in ASE as internal data structures rather than the more memory-intensive operating system processes. Therefore, the database kernel performs many of the jobs typically found in the operating system, such as scheduling and dispatching the execution of user tasks.

ASE's Virtual Server Architecture

In version 4.8, when it was known as the Sybase SQL Server, Symmetric Multi-Processor (SMP) support was added to ASE. Known as the Virtual Server Architecture (VSA), Sybase built upon its efficient resource utilization by designing the concept of Database Engines. Database Engines are instances of an ASE which each service user requests and act upon common user databases and internal structures such as data caches and lock chains. The engines are fundamentally identical since they each perform the same types of user and system tasks, such as searching the data cache, issuing disk I/O read & write requests, and requesting & releasing locks. This approach was chosen because it offered a fully symmetric approach to database processing. To better understand these concepts, let's begin by looking at the most important data structures within ASE.


DATA STRUCTURES TO MANAGE TASKS

Databases are complex pieces of software requiring extensive use of internal data structures to provide the foundation for reliable database services. While a complete discussion of all of SQL Server's internal data structures is clearly beyond the scope of this paper, there are a number of important data structures which must be understood to better comprehend how task management is accomplished. Please refer to Figure 1 below for a graphical picture of these structures as we review each of them in detail.

Figure 1 Important Data Structures Used in ASE's Task Management

    User Tasks

User tasks executing within ASE are represented by data structures (depicted as yellow triangles in Figure 1 above). Users connecting to the database do not use any operating system resources other than the network socket and file descriptor used to communicate. Instead, when an application connects to ASE, several internal data structures are allocated within the shared memory region the database server obtained when it initially started up.

Some of these structures are connection-oriented, such as the well-known Process Status Structure (PSS), which contains static information about a particular user. Other structures are command-oriented. For example, when an application sends a command to ASE, the executable query plan is physically stored in an internal data structure. It is the representation of user tasks as these various internal data structures that makes ASE's task management model so lightweight in both memory and context switch overhead when compared to other methods.


    Figure 2 Data Structures Representing User Tasks

Figure 2 above shows the two basic data structures used to represent user tasks within ASE. With this model, task management is essentially an exercise of moving the kernel's task data structures between an engine (i.e., the OS process where the commands are physically executed) and one of two other structures, the Run Queues and the Sleep Queue, used to keep track of user tasks waiting for the appropriate resource.

    Run Queues

User tasks simply waiting for their turn to begin or resume execution on an engine are stored in a data structure known as a Run Queue. It is important to understand that tasks on a Run Queue all have a "runnable" status. You can find out which tasks are on a Run Queue by querying the sysprocesses table or the sp_who system procedure for tasks with this status. Run Queues are implemented as FIFO (i.e., First In, First Out) linked lists of PSS task structures. As part of the ASE 11.5 release, we added support for user task priority through the notion of Execution Classes.

Currently, ASE uses eight different execution priorities to service the different types of user and system services it must perform. As shown below in Figure 3, user tasks generally fall into one of three priorities (High, Medium, and Low), with most system services (except the Housekeeper) being scheduled less frequently but at a higher priority. Each priority is implemented as a separate Run Queue to minimize synchronization contention on SMP systems and consequently speed task selection.

    Figure 3 Task Priorities in ASE
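To make the selection algorithm concrete, the following sketch models priority-based Run Queues in Python. This is an illustrative model only: the eight priority levels come from the text above, but the data layout and function names are assumptions, not ASE's actual internals.

```python
from collections import deque

# One FIFO run queue per priority level; lower index = higher priority.
NUM_PRIORITIES = 8
run_queues = [deque() for _ in range(NUM_PRIORITIES)]

def enqueue_task(spid, priority):
    """Place a runnable task at the tail of its priority's queue (FIFO)."""
    run_queues[priority].append(spid)

def next_task():
    """An idle engine scans from highest to lowest priority and takes
    the task at the head of the first non-empty queue."""
    for q in run_queues:
        if q:
            return q.popleft()
    return None  # nothing runnable

enqueue_task(spid=3, priority=4)   # medium-priority user task
enqueue_task(spid=9, priority=2)   # higher-priority task, picked first
```

A real engine holds spinlocks while manipulating these queues; the sketch omits all synchronization.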


Anytime an engine is idle, it simply iterates through the different Run Queues to find the highest-priority task, which will be at the top of the first non-empty Run Queue. In the case of a user task, it then begins (or continues) executing the steps outlined in the task's query plan. In the example in Figure 1, we see that task #9 is at the top of the higher-priority Run Queue and consequently will be the next task to execute. By default, there is not any affinity between a task and the engine on which it executes, so whichever engine happens to become idle first will be the engine which grabs task #9.

Another capability provided by the execution class feature which also affects scheduling and task management is the notion of Engine Groups. This provides a method by which CPU capacity can be partitioned into distinct groups and bound to application-specific services in order to more effectively manage CPU resources and help applications predictably meet their service level agreements to the lines of business. Figure 4 below depicts a simplistic configuration where two Engine Groups are used to separate OLTP from DSS applications.

    Figure 4 Partitioning CPU Capacity into Engine Groups

As noted above, however, the use of Engine Groups affects the scheduler's decision on which task to run. When an engine tries to run a task, it must quickly check to make sure that the task is bound to an engine group that includes that engine. To do so, each engine verifies that the task is allowed to run on it by ANDing its own bitmap with a bitmap, contained in the PSS, specific to the Engine Group with which the task is currently associated. As shown in Figure 5 below, this may imply an engine must examine a few tasks before selecting one it can run.

    Figure 5 Task Selection Using Engine Groups
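The bitmap eligibility test just described can be sketched as follows. The engine numbering, group masks, and function names here are hypothetical illustrations of the ANDing technique, not ASE's actual representation.

```python
# Each engine is one bit; each engine group is a bitmask over engines.
OLTP_GROUP = 0b0011  # hypothetical group: engines 0 and 1
DSS_GROUP  = 0b1100  # hypothetical group: engines 2 and 3

def can_run_on(engine_id, group_mask):
    """True if the engine's bit is set in the task's engine-group mask."""
    return ((1 << engine_id) & group_mask) != 0

def pick_task(engine_id, run_queue):
    """An engine may have to skip past tasks bound to other groups
    before finding one it is allowed to run."""
    for i, (spid, group_mask) in enumerate(run_queue):
        if can_run_on(engine_id, group_mask):
            return run_queue.pop(i)[0]
    return None

# Engine 0 (OLTP group) must skip the DSS-bound task at the head.
queue = [(7, DSS_GROUP), (4, OLTP_GROUP)]
picked = pick_task(engine_id=0, run_queue=queue)
```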


    Sleep Queues

During normal database processing, it is common for a task to require a resource that is not immediately available, such as a page from disk or a database lock on a specific row. In situations such as these, where the time needed to obtain the resource is either non-deterministic (e.g., a lock whose release depends on another user's transaction) or too long (e.g., a physical disk I/O which takes 8+ milliseconds), it would degrade the throughput of a multi-user system to force an engine to wait idly until that resource became available. Consequently, in order to encourage multi-user throughput, ASE generally utilizes asynchronous techniques when resources are unavailable. The cornerstone of these asynchronous techniques is a data structure known as the Sleep Queue.

The Sleep Queue is essentially a hash table of user task structures. Whenever a task can't obtain a required resource, it is put to sleep on the Sleep Queue by hashing the user task structure by its SPID value. The task will only be woken up upon obtaining the resource for which it is sleeping. Typically, obtaining that resource is dependent on some other event. For example, a task sleeping for a page lock will be woken up only when that lock has been granted to it. Once woken, the task is placed at the bottom of the Run Queue to wait its turn to resume execution on an engine. This technique of putting tasks to sleep on unavailable resources obviously requires the capability to recognize when the resource becomes available so that the appropriate task can be woken. This is achieved through a few additional data structures depending on the type of resource (i.e., I/O, lock, etc.).
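A Sleep Queue hashed by SPID might look like the following sketch. The bucket count, hash function, and stored fields are assumptions chosen for illustration only.

```python
# Illustrative Sleep Queue: a hash table of (spid, resource) entries,
# bucketed by SPID so a sleeping task can be found again quickly.
NUM_BUCKETS = 8
sleep_queue = [[] for _ in range(NUM_BUCKETS)]

def sleep_task(spid, waiting_for):
    """Hash the task into a bucket by its SPID and record what it awaits."""
    sleep_queue[spid % NUM_BUCKETS].append((spid, waiting_for))

def wake_task(spid):
    """Find the task in its bucket and remove it, so the caller can move
    it to the tail of a Run Queue."""
    bucket = sleep_queue[spid % NUM_BUCKETS]
    for i, (s, resource) in enumerate(bucket):
        if s == spid:
            return bucket.pop(i)
    return None

sleep_task(12, "page lock")
sleep_task(20, "disk read")   # 20 % 8 == 4, same bucket as SPID 12
```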


    Pending Disk I/O Queues

Physical disk I/O is relatively expensive in computer terms, with even today's fastest magnetic disks taking 6-8 milliseconds to access data. Consider all the instructions a CPU can execute in that amount of time. Consequently, in order to maximize system throughput, ASE uses asynchronous I/O whenever possible. In this case, when a task needs to do a physical I/O, the engine on which the task is running first issues the I/O request to the operating system and then puts the task to sleep to wait for the data to be returned. Since the asynchronous I/O completion event is processed at some future point in time, we need a mechanism to match the I/O being returned by the OS with the task that initiated it so that the right task can be woken! This is achieved through Pending Disk I/O Queues.


In order to match completed asynchronous I/Os to the tasks which initiated them, the ASE kernel uses a structure called a Disk I/O structure. As the engine prepares to initiate a physical disk I/O, it first obtains and saves some information about the I/O it is about to request. For example, the task's SPID, the device, the logical and physical address, as well as the number of bytes to read or write are all saved in this structure. The Disk I/O structure is then linked into a list of Pending Disk I/Os. At this point, the engine issues the physical request to the OS and puts the task to sleep until the I/O is returned to ASE. When the OS returns the data to ASE at some point in the future, the corresponding Disk I/O structure is retrieved from the Pending Disk I/O Queue so that the appropriate task can be woken by moving it off the Sleep Queue onto the Run Queue. We'll explore how and when this I/O completion processing occurs later in this paper. Network I/O behaves very similarly.
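The matching step above can be sketched as a pending-I/O list. The field names and the completion key (device plus page) are assumptions for illustration; the real structure records more, such as physical addresses.

```python
# Illustrative Pending Disk I/O Queue: completed asynchronous I/Os are
# matched back to the SPID that initiated them.
pending_disk_io = []

def issue_read(spid, device, page_no, nbytes):
    """Record the request before (conceptually) handing it to the OS and
    putting the task to sleep until completion."""
    entry = {"spid": spid, "device": device, "page": page_no, "bytes": nbytes}
    pending_disk_io.append(entry)
    return entry

def io_completed(device, page_no):
    """On completion, find the matching entry so the right task can be
    moved from the Sleep Queue back onto a Run Queue."""
    for i, e in enumerate(pending_disk_io):
        if e["device"] == device and e["page"] == page_no:
            return pending_disk_io.pop(i)["spid"]
    return None

issue_read(spid=5, device="data01", page_no=4711, nbytes=2048)
```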

    Pending Network I/O Queues

Network connections using standard OSI transport layer protocols such as TCP are the means by which client applications and ASE communicate. This connection is initially established at login time between the client application and an ASE Listener Service. In version 11, a feature called Multiple Network Engines distributed the networking responsibility across all engines for improved performance (both throughput and response time) and scalability by migrating the connection to the least busy engine at login time. Since there is no affinity between tasks and engines, a task will likely be executing on a different engine than the engine assigned networking responsibility for it! Therefore, much like disk I/O above, ASE uses Pending Network I/O Queues to manage tasks requiring network services.

When a task needs to send results back to the client application, it first obtains and saves information about the network I/O in a structure called a Network I/O structure. For example, the task's SPID and a pointer to the TDS (i.e., Tabular Data Stream) buffer containing the data to send across the network are all saved in this structure. The Network I/O structure is then linked onto a Pending Network I/O Queue, and the task is put to sleep waiting for its Network Engine to actually perform the network send. When it is time for this task's Network Engine to send all its accumulated network I/O (this will be covered in detail in a subsequent section), the Network I/O structure is retrieved, the network send is physically requested of the OS, and finally, the task is woken by moving it off the Sleep Queue onto the bottom of the Run Queue. As an optimization, if the engine executing a task happens to also be that task's Network Engine, the engine immediately sends the TDS packet and the task continues to execute rather than being put to sleep.

    Lock Chains

One of the more obvious uses of asynchronous techniques in database processing is in the area of concurrency (i.e., database locking). Since lock duration is non-deterministic from ASE's perspective, it is clearly not in the system's best interest for a task to wait idly on an engine until the requested lock is available. Therefore, when a task requests a lock on an object that is unavailable because another task already holds it, the task is put to sleep until that lock is granted to it. As in previous cases, ASE needs a mechanism to wake the appropriate task when the lock becomes available. This mechanism, however, is more complex than others since multiple tasks may need to be woken for the same lock (e.g., for a SHARE lock on the same page or row). To complicate matters, these tasks must be woken in the order


in which the requests were made! To achieve this, ASE uses a collection of structures called Lock Chains, which are implemented as hash tables containing two-dimensional linked lists of two different structures: Semawait and Lock Request structures. Please refer to Figure 6 below for a graphical picture.

Figure 6 ASE's Lock Chains

Lock Request structures are allocated by each user task requesting a lock and are used both to match the correct user tasks to wake and to record the order in which tasks should be woken when the lock becomes available. Information such as the user's SPID is stored in this structure. The Semawait structures are used to link multiple Lock Requests (i.e., for different users) waiting for compatible locks on the same object or page.

When a user requests a lock, the task obtains a Lock Request structure from a pool of available structures for the engine on which it is executing. If this engine's pool is empty, structures are moved from the server's global pool. Once obtained, information is saved in this structure such as the user's SPID, the object id of the object being locked, the type of lock (e.g., shared, exclusive, etc.), the granularity (e.g., row, page, table, etc.), and, if applicable, the page number. The engine now determines if this request can be immediately granted. This is done by hashing the unique id of the requested object (row, page, or table) to determine if there is a Semawait structure for this object.

If no Semawait structure is found, this indicates no one is currently holding a lock on this object. Therefore, this lock is granted immediately by creating a Semawait structure and linking it into the hash table so subsequent users requesting a lock on that object can see it. Finally, the Lock Request structure is linked to the newly created Semawait and the user continues execution on the engine.

    If a Semawait structure was found in the above search, one or more users already hold a lockon this object or page. At this point ASE must determine whether this is a compatible lockrequest or not. Compatibility is determined by whether the two locks can co-exist with eachother. For example, multiple share locks co-exist since multiple users can lock the same pageat the same time. Update locks, on the other hand, have somewhat mixed behavior. Figure 7outlines the lock compatibility matrix used by ASE to make this decision.


Figure 7 ASE's Lock Compatibility Matrix

Lock Types   SHARE            UPDATE           EXCLUSIVE
SHARE        Compatible       Compatible       Not Compatible
UPDATE       Compatible       Not Compatible   Not Compatible
EXCLUSIVE    Not Compatible   Not Compatible   Not Compatible

If the locks are compatible, the lock is granted by linking this user's Lock Request structure onto the Semawait structure, and the user continues execution on the engine. If the locks are not compatible, the kernel finds the first Semawait structure that is compatible and links the Lock Request structure to it. Since another user (whose Lock Request structures are linked to the first Semawait) currently holds that lock, the user is put to sleep to wait for the lock by placing it onto the Sleep Queue.
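The compatibility matrix in Figure 7 can be encoded directly as a lookup table; the grant-or-sleep decision below is a deliberately simplified sketch of the check, not ASE's actual lock manager code.

```python
# Compatibility of (held lock, requested lock), per Figure 7.
COMPATIBLE = {
    ("SHARE", "SHARE"): True,      ("SHARE", "UPDATE"): True,
    ("SHARE", "EXCLUSIVE"): False, ("UPDATE", "SHARE"): True,
    ("UPDATE", "UPDATE"): False,   ("UPDATE", "EXCLUSIVE"): False,
    ("EXCLUSIVE", "SHARE"): False, ("EXCLUSIVE", "UPDATE"): False,
    ("EXCLUSIVE", "EXCLUSIVE"): False,
}

def request_lock(held_locks, requested):
    """Grant immediately if nothing is held or every held lock is
    compatible with the request; otherwise the requester must sleep
    on the lock chain until woken."""
    if all(COMPATIBLE[(held, requested)] for held in held_locks):
        return "granted"
    return "sleep"
```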


KNOWING WHEN TO DO WHAT

As a multi-threaded database that provides its own system services on behalf of user tasks, ASE must keep track of time for many reasons. Predominately, we need a mechanism to decide when to perform periodic system activities, such as processing completed asynchronous disk and network I/Os on a regular basis. However, ASE uses a non-preemptive scheduler to provide complete control over when a task schedules off of an engine, in order to ensure tasks don't go to sleep holding critical shared resources such as latches or spinlocks. This places an additional requirement on the kernel to ensure tasks don't run too long. Unlike business applications, databases can't just glance at a watch to find out what time it is; high-performance systems software like databases relies on relative time intervals to figure out when to do something.

    Keeping Time by Counting Clock Ticks

The configuration parameter "sql server clock tick length" is how ASE keeps time within each engine. It defines a time interval, expressed in microseconds, at which the operating system periodically interrupts the engine to let it know that a complete time interval has elapsed. Platforms use an optimized mechanism, such as signals, with a frequency matching this time interval. The signal handler for each engine is responsible for suspending the current task running on that engine, performing some "run-time accounting housework", and then resuming execution of the suspended task. Each engine, being a separate process under the OS, sets up, receives, and handles its own interrupt, which is important since each engine does its own scheduling.

Obviously, this mechanism provides a relatively coarse-grained, but highly efficient, way to keep track of time. It is therefore an obvious design choice for deciding when certain system tasks need to execute. However, our non-preemptive scheduler introduces a few wrinkles in order to make sure the choice for when to physically run these system services is in fact the best choice to do so.

To Yield or Not to Yield, That is the Question

As noted above, a non-preemptive design implies that the code knows best when it is a good idea for one task to relinquish control of an engine in order to provide equitable services to large numbers of users. Although most tasks block relatively frequently on some resource, such as a disk or network I/O, causing them to be scheduled off an engine, our coarse-grained timing method can make it challenging, on CPU-intensive systems, to yield the CPU often enough to ensure reasonable sharing of CPU resources.

Like operating systems, ASE has the notion of a time quantum. The "time slice" parameter defines the maximum amount of time a task gets before it becomes a candidate to voluntarily yield the engine. Although one configures "time slice" in milliseconds, the ASE kernel actually converts it to clock ticks.


Each time a task is taken from the run queue by an engine and starts executing, its private execution time counter is set to "time slice" number of clock ticks and the task's execution begins. The task continues to execute on that engine until either:

  • It requests a physical I/O (either disk or net);
  • It blocks waiting for a lock or some other shared resource;
  • It exceeds its time slice and the task's execution path in our code hits a yield point; or
  • It exceeds the maximum time allowed running on an engine without yielding, as governed by the "cpu grace time" parameter.

As you can see, tasks that do a lot of physical disk or network I/Os, or that often block on locks or other resources, spend significantly less than a single time slice executing on an engine. So, the only real question left is: if a very CPU-intensive request occurs, how does ASE determine how long it has been running and when it should yield the engine (CPU) to another task?

Each time an engine receives an interrupt from the OS indicating that a "clock tick" time period has expired, the engine "suspends" the current task and decrements the task's private execution counter by 1 (again, in units of clock ticks). If the task's private execution counter is less than zero, the engine sets a bit in the task's private status structure that marks it as available to "yield". Throughout the code and loop paths, there are checks to see if it is time for the task to yield! The engine then performs the earlier-mentioned chores. Once complete with its chores, the engine continues executing the same task.

The check for an execution counter < 0 may seem puzzling to you since it was initialized to 1 (one time slice, in ticks) when the task began executing on the engine. Remember, though, that the clock tick interrupt is the only way we keep time. A task could have begun executing 75% of the way through a clock tick interval. Therefore, if we marked it "yield-able" at the first interrupt, the task would have only gotten 25% of a time slice. Since we want the task to get at least a full time slice, we mark it yield-able when the counter is less than zero (i.e., 2 interrupts have been processed in this case) so that we know the task got at least one full time slice. Since, on average, most tasks block on I/O, locks, etc. earlier than a full time slice, this is rarely a problem.

Occasionally, a CPU-bound task will begin execution on an engine and, due to its nature or, more often, system calls that don't return, could continue to execute longer than its time slice. For these rare occasions, we use the "cpu grace time" parameter to prevent the task from executing forever. The "cpu grace time" parameter is defined in units of clock ticks. During interrupt handling, if the engine recognizes that the task's private execution counter is equal to -(cpu grace time + time slice in ticks), then the task is assumed to be in an infinite loop and is terminated with the timeslice error. The error number (-201) which you may see here is actually the (negated) number of clock tick periods of time the task has consumed in all.
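The per-tick accounting described above can be sketched as follows. The parameter values mirror the defaults discussed in this section (time slice of one tick, cpu grace time of 200 ticks), but the code itself is an illustrative model, not ASE's interrupt handler.

```python
TIME_SLICE_TICKS = 1     # "time slice" converted to clock ticks
CPU_GRACE_TICKS = 200    # "cpu grace time", in clock ticks

def on_clock_tick(task):
    """Run once per clock tick interrupt: decrement the task's private
    counter, mark it yield-able once it has had at least a full time
    slice, and terminate it past the grace period."""
    task["exec_ticks"] -= 1
    if task["exec_ticks"] < 0:
        task["yieldable"] = True
    if task["exec_ticks"] == -(CPU_GRACE_TICKS + TIME_SLICE_TICKS):
        task["status"] = "terminated (timeslice error)"

task = {"exec_ticks": TIME_SLICE_TICKS, "yieldable": False, "status": "running"}
on_clock_tick(task)   # first tick: counter hits 0, not yet yield-able
on_clock_tick(task)   # second tick: counter hits -1, now yield-able
```

Note how two interrupts must arrive before the yield bit is set, which is exactly the "at least one full time slice" guarantee explained above.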

One has to be very careful about changing these parameters. For example, changing "sql server clock tick length" is generally not advised. It's like opening up your Sun server and plugging in a 3.0 GHz crystal because you want it to run as fast as an AMD Opteron chip. Some parameters have dependencies that could cause a mis-configured server if you don't understand the relationships. For example, since "cpu grace time" is in units of clock ticks, if you change "clock tick length" (say you halve it to 50,000) but leave "cpu grace time" the same, you've just halved the wall-clock duration of "cpu grace time". You would have to double "cpu grace time" to 400 in this example to maintain the default wall-clock time of 20 seconds.
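The dependency arithmetic works out as follows, using the values from the example above (a 100,000-microsecond clock tick and a "cpu grace time" of 200 ticks):

```python
def grace_wall_clock_seconds(clock_tick_us, cpu_grace_ticks):
    """Wall-clock grace period = tick length (microseconds) x tick count."""
    return clock_tick_us * cpu_grace_ticks / 1_000_000

default = grace_wall_clock_seconds(100_000, 200)  # 20.0 seconds
halved  = grace_wall_clock_seconds(50_000, 200)   # 10.0 seconds: unintended!
fixed   = grace_wall_clock_seconds(50_000, 400)   # 20.0 seconds again
```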


    CONCLUSION

Although the SQL that developers write and the physical data model the DBA builds are clearly the two most dominant factors in a system's performance and capacity, some business requirements dictate a detailed understanding of database processing to fine-tune systems. For example, tuning systems for real-time performance, where query response times often can't exceed 10-20 milliseconds, requires significantly different approaches than systems doing large, complex query processing. As businesses continue to strive to improve the resource efficiency of their hardware and software systems, DBAs are increasingly being asked to reach a little farther into their bag of tricks. It is with these considerations in mind that understanding task management and scheduling in the ASE kernel becomes vital to making the informed decisions necessary to squeeze every last bit of performance and capacity from your systems.