
Real-Time Database Systems:

Concepts and Design

Saud A. Aldarmi

Department of Computer Science

The University of York 

April 1998


Abstract

This qualifying dissertation reviews the state of the art of Real-Time Database Systems in uniprocessor, centralized environments. Due to the heterogeneity of the issues, the large amount of information, and space limitations, we limit our presentation to the issues most important to the overall design, construction, and advancement of Real-Time Database Systems. These topics are believed to include Transaction Scheduling, Admission Control, Memory Management, and Disk Scheduling. Transaction Scheduling, in turn, consists of Concurrency Control Protocols, Conflict Resolution Protocols, and Deadlocks. Of these issues, the most emphasis is placed on Concurrency Control and Conflict Resolution protocols because of their strong influence on overall system performance. Important issues that were not included in our presentation are Fault Tolerance and Failure Recovery, Predictability, and, most important of all, Minimizing Transaction Support, i.e., Relaxing Atomicity and Serializability. Various solutions to many of the included topics are listed in chronological order along with their advantages, disadvantages, and limitations. While we take the liberty of debating some solutions, we also list the debates of other researchers. The presentation concludes with the identification of five research areas, all of which are believed to be very important to the advancement of Real-Time Database Systems.


Contents

Introduction

1  Fundamentals of Real-Time Systems
   1.1. Introduction
   1.2. System Models and Timing
   1.3. Scheduling
      1.3.1. Priority-Based Scheduling
   1.4. Synchronization
      1.4.1. Priority Inheritance
      1.4.2. Priority Ceiling
   1.5. Overload

2  Overview of Real-Time Database Systems
   2.1. Introduction
   2.2. The Concept of Transactions and Serializability
   2.3. Time-Critical Systems vs. Database Requirements
   2.4. Real-Time Database System Model
   2.5. Scheduling Real-Time Database Transactions
      2.5.1. Concurrency Control
      2.5.2. Conflict Resolution
      2.5.3. Deadlocks
   2.6. Admission Control
   2.7. Memory Management
   2.8. Disk Scheduling

3  Concurrency Control
   3.1. Introduction
   3.2. Locking Concurrency Control
      3.2.1. Synchronizing RTDB Transactions in Locking-based Protocols
   3.3. Optimistic Concurrency Control
   3.4. Speculative Concurrency Control
   3.5. Multiversion Concurrency Control
   3.6. Dynamic Adjustment of Serialization Order

4  Open Problems and Future Plan


Introduction

In general, data in a real-time system is managed on an individual basis by every task within the system. However, with the advancement of technology, many applications require large amounts of information to be handled and managed in a timely manner. Thus, a substantial number of real-time applications are becoming more data-intensive. Such larger amounts of information have produced interdependency relationships among real-time applications. Therefore, in various application domains, data can no longer be treated and managed on an individual basis; rather, it is becoming a vital resource requiring an efficient data management mechanism. Database management systems are designed around exactly this concept, that is, with the sole goal of managing data as a resource. Hence, the principles and techniques of transaction management in Database Management Systems need to be applied to real-time applications for efficient storage and manipulation of information.

In an attempt to achieve the advantages of both kinds of system, real-time and database, continuous efforts are directed towards the integration of the two technologies. This integration resulted in combined systems known as Real-Time Database Systems. Real-Time Database Systems emerged with the publication of a special issue of the ACM SIGMOD Record in March 1988. Today, many applications require such systems, e.g., information retrieval systems, airline reservation systems, stock market, banking, aircraft and spacecraft control, robotics, factory automation, and computer-integrated manufacturing, and the list is vast.

The engineering of data-intensive real-time applications can be improved through adaptation of the techniques and principles of database management systems (DBMS), which implies a corresponding reduction in the cost of construction and maintenance. Database systems encapsulate data as a resource, and therefore provide central control of data. Consequently, instead of managing data in an application-dependent manner, database systems offer a more structured management of data, which provides the following advantages:

•  Elimination of redundancy – there is only one set of data shared by different applications, as opposed to each application maintaining its own version of the data; thus, better utilization of storage.

•  Maintenance and integrity controls – erroneous values can be rejected before being permanently recorded in the database, thereby eliminating corruption of information. That is, individual applications do not have to expend extra effort in managing and maintaining the integrity of such information; rather, it becomes the system's responsibility.

•  More importantly, database systems allow the separation of policy vs. mechanism and support data abstraction. An application only specifies its desired operations, disregarding the underlying implementations and structural characteristics of the required data items. It becomes the sole responsibility of the DBMS to specify the storage structure and maintain it.


This qualifying dissertation is intended to review Real-Time Database Systems as part of our research effort towards the advancement of such systems. Due to the heterogeneity of the area and space limitations, we had to limit our presentation to only a subset of the involved design and research issues. Our choice of inclusion or exclusion of the various topics is based on our view of Real-Time Database Systems and the importance of each issue to the overall construction and advancement of such systems. More importantly, our choice is influenced by our doctoral research intentions.

Although we intend to research Real-Time Database Systems, it is crucial to understand the fundamentals of the underlying conventional non-database real-time systems. Therefore, chapter one of this review discusses such fundamentals. The chapter starts with the basic definitions of time-critical systems and their timing models. These definitions are followed by a presentation of general scheduling issues, including priority-based scheduling policies, i.e., Rate-Monotonic, Most-Critical-First, Earliest-Deadline-First, Value-Density, and combined Criticalness-Deadline. Synchronization issues such as the priority-inversion problem are discussed in more detail than the preceding fundamentals, and two protocols are presented, i.e., Priority Inheritance and Priority Ceiling, both of which have been proposed in the literature to reduce the negative effects of priority inversion. The chapter concludes with a detailed discussion of overloads, outlining the impact of such operating conditions on the overall performance of the system and their theoretical limitations.

Chapter two covers the vast area of Real-Time Database Systems and identifies the subset of topics that we decided to include in or exclude from our presentation. Just as it is important to understand the underlying fundamentals of conventional non-database real-time systems, we believe it is also crucial to understand the fundamentals of conventional non-real-time database systems. Therefore, chapter two starts with an introduction to the concepts of transactions and serializability as the notion of correctness of conventional database systems. Next, we identify the differences between time-critical systems and conventional non-real-time database systems, followed by a model of a Real-Time Database System that outlines the heterogeneity of the involved resources and indicates the components for which priority inclusion might be of great concern. The rest of chapter two is divided into several sections, each of which discusses a separate component of the model. These sections cover transaction scheduling, admission control, memory management, and disk scheduling.

Due to the impact of concurrency control on the overall performance of Real-Time Database Systems, chapter three of this review is mainly dedicated to a detailed discussion of this issue. The chapter starts with a presentation of locking techniques as the most basic form of concurrency control and the form most commonly implemented in commercial conventional database systems. As in conventional real-time systems, the priority-inversion problem is revisited under different assumptions, limitations, and solutions. Locking is followed by a discussion of optimistic (restart-based) concurrency control, and then by a detailed comparison of the two techniques, given the substantial amount of controversy in the literature regarding their performance under various environments. Next, we present a very recent technique in the arena of concurrency control, known as the "Speculative" protocol. The technique emerged from a detailed comparison of locking-based


and restart-based protocols. It combines their advantages while avoiding their drawbacks and shortcomings. The rest of chapter three discusses two further concurrency control protocols, i.e., Multiversion and Dynamic Adjustment of Serialization Order. These are very powerful schemes, both of which can be designed and tailored in a variety of ways to suit the constraints and objectives found in Real-Time Database Systems.

This review is concluded with the identification of five research areas, all of which are believed to be very

important to the advancement of Real-Time Database Systems. Having identified such “open” problems, we

precisely state our future plans and research intentions.


1 – Fundamentals of Real-Time Systems

1.1. Introduction

Real-time systems can be defined as those computing systems that are designed to operate in a timely manner; that is, they perform certain actions within specific timing constraints, e.g., producing results while meeting predefined deadlines. Hence, the notion of correctness of a real-time system is contingent upon the logical correctness of the produced results as well as the timing at which such results are produced [PAN93, STA88b].

Typical real-time systems consist of a controlled system (the underlying application) and a controlling system (a computer monitoring the state of the environment, as well as supplying it with the appropriate driving signals). The controlling system interacts with its environment based on the data available about the environment. Therefore, it is important that the state of the environment, as perceived by the controlling system, be consistent with the actual state of the environment. Otherwise, the effects of the controlling system's activities may be inappropriate. The need to maintain consistency between the actual state of the environment and the state as reflected or perceived by the system leads to the notion of temporal consistency. Therefore, the specification of real-time systems includes timing constraints, which must be met in addition to the desired computations. Such timing constraints are usually defined in the form of deadlines associated with the various operations of the computing system. In addition, such timing constraints introduce a notion of periodicity, where certain tasks must be initiated at specific instants and must be executed within specific time intervals [AUD90, GRA92, PAN93, RAM93, STA88b].

The need to handle explicit deadlines and periodicity associated with activities requires employing time-cognizant protocols [RAM93]. Such time-driven management policies should be applied on a system-wide basis, e.g., to processor, memory, I/O, and communications resources (data and channels). Thus, for a set of tasks to meet their prescribed deadlines, precedence constraints must be established and satisfied, and resources must be available in time for each task. Abrupt delays at any stage of the process can disrupt the system's behavior and objectives, i.e., cause delayed production of results [PAN93, STA88a].

Scheduling decisions are guided by various metrics that depend on the application domain. The variety of metrics suggested for real-time systems indicates the different types of real-time systems that exist in the real world, as well as the types of requirements imposed on them. The different execution requirements of firm deadlines and soft deadlines lead to different system objectives and, hence, different performance metrics in comparative studies. In a real-time system that deals with firm deadlines, and hence discards tardy tasks1, the objective is simply to minimize the number of tasks missing their deadlines. Thus, a single metric, Miss-Percentage2, is sufficient to characterize the system's performance. For a system dealing with soft deadlines, an additional metric, Average-Tardy-Time, is required to capture the degree of lateness (tardiness) of tardy tasks [STA93].

1 Tardy tasks are those that do not complete their execution by their prescribed deadlines.
2 Miss-Percentage is the percentage of tasks that do not complete by their deadlines.
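As a small, hedged illustration of these two metrics (our own sketch, not taken from the dissertation; each task is represented simply by its deadline and actual completion time):

    def miss_percentage(tasks):
        """Percentage of tasks that did not complete by their deadlines."""
        tardy = [t for t in tasks if t["completion"] > t["deadline"]]
        return 100.0 * len(tardy) / len(tasks)

    def average_tardy_time(tasks):
        """Average lateness of the tardy tasks only (0 if no task is tardy)."""
        lateness = [t["completion"] - t["deadline"]
                    for t in tasks if t["completion"] > t["deadline"]]
        return sum(lateness) / len(lateness) if lateness else 0.0

    # Example: one of four tasks finishes 3 time units after its deadline.
    tasks = [{"deadline": 10, "completion": 8},
             {"deadline": 10, "completion": 13},
             {"deadline": 20, "completion": 15},
             {"deadline": 20, "completion": 19}]
    print(miss_percentage(tasks))     # 25.0
    print(average_tardy_time(tasks))  # 3.0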

When tasks with different priorities access shared resources in an exclusive mode, a problem known as priority inversion can occur, and one must take corrective measures. Such corrective measures are not only required to manage priority inversions, but also to cope with any overload that might occur due to unanticipated system activities and/or emergencies.

The rest of this chapter is organized as follows. The chapter starts with a discussion of various system models and their corresponding timing behavior. Next, a brief discussion of scheduling is presented, starting with static policies and progressively moving towards the more complex dynamic policies. Due to the amount of literature dedicated to the priority-inversion problem and its severe impact on the overall performance of a system, the problem is presented and discussed in detail in a separate section of this chapter. Finally, we conclude the chapter with a detailed discussion of overload conditions, addressing their impact and theoretical limitations on the system's behavior.

The purpose of the chapter is to present the fundamentals of real-time systems and to address the real-time issues that are most relevant to the construction of Real-Time Database (RTDB) systems. The chapter outlines various issues in the domain of real-time systems. Such issues recur in RTDB systems, yet require different solutions due to the differences between the two domains.

1.2. System Models and Timing

Real-time applications can be modeled as a set of tasks, where each task can be classified according to its timing requirements as hard, firm, or soft. A hard real-time task is one whose timely and logically correct execution is considered critical for the operation of the entire system. The deadline associated with a hard real-time task is conventionally termed a hard deadline. Missing a hard deadline can result in catastrophic consequences – such systems are known as safety-critical. Thus, the design of a hard real-time system requires that a number of performance and reliability trade-offs be carefully evaluated [AUD90, PAN93, STA88b].

On the other hand, a soft real-time application is characterized by a soft deadline whose adherence is desirable, although not critical, for the functioning of the system. That is, missing a soft deadline does not cause a system failure or compromise the system's integrity. There may still be some (diminishing) value3 for completing an application after its deadline, without any catastrophic consequences resulting from missing such a deadline [PAN93, STA88b].

3 Gained values will be defined in the discussion of value-functions later in the chapter.

Finally, a firm real-time task, like a soft real-time task, is characterized by a firm deadline whose adherence is desirable, although not critical, for the functioning of the system. However, unlike a soft real-time task,


a firm real-time task is not executed after its deadline, and no value is gained by the system from firm tasks that miss their deadlines. An interesting comparative study of soft vs. firm deadline behavior is presented in [LEE96]. The study showed that the miss-percentage of soft deadlines increases exponentially with the arrival rate of tasks. Meanwhile, with firm deadlines, where the population in the system is regulated automatically by discarding tardy tasks, the miss-percentage increases only polynomially as the arrival rate increases.

There are two general paradigms for the design of real-time operating systems known as Time-Triggered 

(TT ) and Event-Triggered ( ET ) architectures, both of which are explained next.

•  System activities in TT architectures are initiated at predefined instants, and therefore TT architectures require the assessment of resource requirements and resource availability prior to the execution of each application task. Each task's needed resources, and the length of time over which these resources will be used, can be computed off-line in a resource requirement matrix. If these requirements cannot be anticipated, then worst-case resource and execution time estimates are used. Thus, TT is prone to wasting resources and lowering system utilization under pessimistic estimates (overestimates). However, the TT architecture can provide predictable behavior due to its pre-planned execution pattern [BUC89, PAN93].

•  System activities in ET architectures are initiated in response to the occurrence of particular events that are possibly caused by the environment. In ET architectures, an excessive number of possible behaviors must be carefully analyzed in order to establish their predictability, because resource needs and availability may vary at run-time. Thus, resource-need assessment in the ET architecture is usually probabilistic. Although ET is not as reliable as the TT architecture, it provides more flexibility and is ideal for more classes of applications, namely those that do not lend themselves to predetermination of resource requirements [PAN93].

As a direct consequence of these architectures, along with the timing requirements we mentioned earlier, application tasks can be classified as periodic, aperiodic, or sporadic tasks [AUD90, PAN93].

1.  Periodic tasks are those tasks that execute at regular intervals of time, i.e., every T time units – corresponding to TT architectures. These tasks typically tend to have hard deadlines, characterized by their period(s) and their required execution time per period [LIU73], which is usually given by a worst-case execution time.

2.  Aperiodic tasks are those tasks whose execution cannot be anticipated a priori. That is, the activation of an aperiodic task is essentially a random event caused by a trigger – corresponding to ET architectures. Such behavior does not allow for worst-case analysis, and therefore aperiodic tasks tend to have soft deadlines.

3.  Sporadic tasks are those tasks that are aperiodic in nature, but have hard deadlines. Such tasks can be used to handle emergency conditions and/or exceptional situations. Due to the nature of hard deadlines, worst-case calculations may be facilitated by a schedulability constraint, which defines a minimum period between any two sporadic events from the same source.

There is a large body of literature on mixing periodic, aperiodic, and sporadic tasks within one system, and there are various techniques for scheduling such a mix, each with its own advantages, disadvantages, and limitations. However, we do not intend to discuss this issue in our current review, nor do we intend to investigate it in our future research. The interested reader may refer to [CHE90, HOM94, SPR88, SPU95].

1.3. Scheduling

A scheduler in general is an algorithm or a policy for ordering the execution of the outstanding processes (tasks) on a processor according to some predefined criteria. Each task within a real-time system has a deadline, an arrival time, and possibly an estimated worst-case execution time. A task's execution time can be derived from the time each resource is required and the precedence constraints among sub-tasks. Execution time information can be given in terms of deterministic, worst-case, or probabilistic estimates. The responsibility of a real-time system scheduler is to determine an order of execution of the tasks that is feasible4. Typically, a scheduler is optimal if it can schedule all task sets that other schedulers can [AUD90, BUC89, PAN93].

4 If a task set can be scheduled to meet given pre-conditions, the set is termed feasible. That is, a scheduling algorithm is feasible if the requests of all tasks can be fulfilled before their respective deadlines.

A scheduler may be preemptive or non-preemptive. A preemptive scheduler can arbitrarily suspend and resume the execution of a task without affecting its behavior. Preemption is used to control priority-driven scheduling. Typically, preemption occurs when a higher-priority task becomes runnable while a lower-priority task is executing. On the other hand, under a non-preemptive scheduler, a task must run without interruption until completion [PAN93, LIU73]. Simulation studies in [PAN93] showed that the use of preemption is more appropriate for scheduling real-time systems. Finally, a hybrid approach is a preemptive scheduler in which preemption is only allowed at certain points within the code of each task.

Real-time scheduling algorithms can be classified as either static or dynamic [NAT92, BUC89, PAN93]. A static approach is also known as fixed-priority scheduling, where priorities are computed off-line, assigned to each task, and maintained unaltered during the entire lifetime of the task and the system. A static scheduler requires complete a priori knowledge of the real-time environment in which it is deployed. A table is generated off-line that contains all the scheduling decisions to be taken at run-time; therefore, it requires little run-time overhead. Aside from the many disadvantages of static scheduling [NAT92], it is rather inflexible, because the scheme is workable only if all the tasks are effectively periodic. Jensen et al. [JEN85] stated that fixed-priority scheduling could work only for relatively simple systems, and results in a real-time system that is extremely fragile in the presence of changing requirements. It was shown in [JEN85, LOC86] that fixed-priority schedulers perform inconsistently, particularly as the load increases.

On the other hand, dynamic scheduling techniques assume unpredictable task-arrival times and attempt to

schedule tasks dynamically upon arrival. That is, a dynamic scheduling algorithm dynamically computes and


assigns a priority value to each task, which can change at run-time. The decisions are based on both task char-

acteristics and the current state of the system, thereby furnishing a more flexible scheduler that can deal with

unpredictable events.

The computational complexity of a scheduling algorithm is of great concern in time-driven systems. Scheduling algorithms with exponential complexity are clearly undesirable for on-line scheduling schemes. Audsley and Burns [AUD90] stated that computational complexity is concerned with computability and decidability.

•  Computability is concerned with whether a given schedule can meet the timing constraints of a set of tasks, a problem that can be decided in polynomial time. The computability problem is also known as the schedulability problem [KUO91].

•  Decidability is concerned with whether a feasible schedule for a set of tasks exists, a problem that has been shown to be NP-complete [GAR79]. The decidability problem is also known as the feasibility problem [KUO91].

Due to the intractability of the scheduling problem, dynamic on-line scheduling techniques are based primarily on heuristics, which entails higher run-time costs. Dynamic on-line scheduling policies can adapt to changes in the environment and can result in greater processor utilization. In addition, dynamic methods are the most applicable for aperiodic applications and error recovery, and the most appropriate for applications that lack a worst-case upper limit on resource and execution requirements. Audsley and Burns [AUD90] argued that no event should be unpredictable and that schedulability should be guaranteed before execution in a safety-critical system, which implies the use of static scheduling methods for such systems.

Tasks whose progress is not dependent upon the progress of other tasks, excluding the competition for processor time between tasks, are termed independent. On the other hand, interdependent tasks can interact in many ways, including communication and precedence relationships [AUD90].

1.3.1. Priority-Based Scheduling

CPU scheduling is the most significant of all system scheduling activities in improving the performance of real-time systems [HUA89, STA91]. Conventional scheduling algorithms employed by most operating systems aim at balancing the number of CPU-bound and I/O-bound jobs in order to maximize system utilization and throughput, with fairness as an additional major design issue. On the other hand, real-time tasks need to be scheduled according to their criticalness5 and timeliness, even if this comes at the expense of sacrificing some of the conventional design goals [STA88b].

5 Criticalness represents a task's importance to the overall functionality of the system.

Therefore, real-time scheduling algorithms establish a form of priority ordering among the various tasks within the system. Priorities are either assigned statically at system design time as a measure of the task's importance to the system, or expressed as a function of time and dynamically evaluated by the scheduler [BUC89]. Such priorities are related to the attributes of the tasks. Since different applications have different attributes and characteristics, different scheduling algorithms also tend to differ in their priority assignment regimes. For example, priorities can be based on criticalness, deadlines, slack time, required/expected computation time, amount of finished/unfinished work, age, and/or a combination of such attributes [KAO95].

The objective of priority scheduling is to provide preferential treatment to tasks with higher priorities over

the ones with lower-priorities. Therefore, a priority-driven scheduler prioritizes the scheduling (ready)-queue

in order to service requests according to their priorities, either non-preemptively, or preemptively as we dis-

cussed earlier. Consequently, the system can ensure that the progress of higher-priority tasks (ideally) is never

hindered by lower-priority tasks.

In the rest of this section, we briefly discuss the various methods used in constructing different priority-

driven schedulers for real-time systems.

Rate-Monotonic (RM)

The Rate-Monotonic (RM) policy [LIU73] is a preemptive policy in which priorities are assigned to tasks according to their request rates (periodicity), independent of their run-times. All tasks are statically allocated a priority according to their period: the shorter the task's period, the higher its assigned priority. The scheme is simple, because the priorities remain fixed, resulting in a straightforward implementation. The RM policy was shown to be an optimal fixed-priority scheduling policy for periodic tasks [LIU73].
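To make the rate-monotonic assignment concrete, the following minimal Python sketch (our own illustration, not taken from [LIU73]; the Task fields are hypothetical names chosen for the example) orders a task set by period and assigns fixed priorities accordingly:

    from dataclasses import dataclass

    @dataclass
    class Task:
        name: str
        period: float      # request period T (time units)
        wcet: float        # worst-case execution time C (time units)
        priority: int = 0  # assigned by the policy (smaller number = higher priority)

    def assign_rate_monotonic(tasks):
        """Assign fixed priorities: the shorter the period, the higher the priority."""
        for rank, task in enumerate(sorted(tasks, key=lambda t: t.period)):
            task.priority = rank          # rank 0 is the highest priority
        return tasks

    # Example: three periodic tasks; the 10-unit-period task gets the top priority.
    tasks = assign_rate_monotonic([Task("A", 50, 10), Task("B", 10, 2), Task("C", 25, 5)])
    print([(t.name, t.priority) for t in tasks])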

Most-Critical-First (MCF)

The MCF policy [JEN85] is very simple. It divides the set of tasks and assigns a certain priority level to each task based on its functionality and importance to the system. The difficulty with the MCF priority assignment arises when new functions are added to the system: such a modification in the functionality of the system might require adjusting all other priority assignments to reflect the new additions and modifications. The policy can significantly degrade the performance of the system if the most critical tasks tend to require the most resources or tend to have longer execution times. However, the nice property of MCF is that it can produce reliable schedules in the sense that it strives to meet the deadlines of the most critical tasks, regardless of the system load.

The alternative to assigning priorities statically is to derive them dynamically at run-time. Several dy-

namic on-line scheduling algorithms are presented next.

Earliest-Deadline-First (EDF)

The EDF policy is a preemptive priority-based scheduling scheme. It uses the deadlines of tasks as its primary heuristic. That is, the task with the current closest (earliest) deadline is assigned the highest priority in


the system and therefore is executed next. For a given set of n tasks, the EDF policy is feasible if and only if (C1/T1) + (C2/T2) + … + (Cn/Tn) ≤ 1, where Ci and Ti represent the computation time and period (submission rate) of task i. That is, the EDF policy can achieve full processor utilization up to the above bound. The policy is also optimal in the sense that if a set of tasks can be scheduled by any algorithm under the load limit given above, it can also be scheduled by the EDF policy [LIU73].
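As a hedged illustration of this utilization test (our own sketch, not code from the dissertation), the following Python fragment checks the bound sum(Ci/Ti) ≤ 1 for a task set described by (computation time, period) pairs:

    def edf_feasible(tasks, epsilon=1e-9):
        """Utilization test for EDF on one processor.

        tasks: iterable of (C_i, T_i) pairs, where C_i is the computation time
        and T_i the period (submission rate) of task i.
        Returns True when the total utilization does not exceed 1.
        """
        utilization = sum(c / t for c, t in tasks)
        return utilization <= 1.0 + epsilon

    # Example: utilizations 0.5 + 0.25 + 0.2 = 0.95, so the set is EDF-schedulable.
    print(edf_feasible([(5, 10), (5, 20), (10, 50)]))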

A major weakness of this policy is that, under an overload condition, it assigns the highest priority to a task that has already missed, or is about to miss, its deadline. The scheme can be made time-cognizant by assigning the highest priority to the task with the earliest feasible deadline. A deadline is feasible if the remaining computation time ≤ (deadline – current time) [ABB88a]. In addition, a study conducted by Huang et al. [HUA89] regarding the sensitivity of scheduling algorithms to deadline distributions showed that EDF is the most sensitive of the studied scheduling policies to deadline settings. The performance of the EDF policy was shown to worsen as the deadlines become tighter6.

6 A deadline becomes tighter as [deadline – (current time + computation time)] becomes smaller.

Value-Functions

Jensen et al. [JEN85] introduced the concept of value-functions. A value-function is more than just a deadline in the sense that a deadline represents only one discrete instant in time, whereas a value-function models a task's requirements over a continuous window of time. The essential idea is that the successful completion of each task imparts a value to the system, and this value is expressed as a function of time. Thus, the time taken to execute a task is mapped against the value that this task has to the system. Consequently, the scheduler is required to assign priorities as well as to define the system value of completing each task at any instant in time. The system's objective is to maximize the cumulative sum of the values collected from the complete and successful execution of a given set of tasks [JEN85, ABB88]. A value-function can include a discontinuity to represent a deadline. For example, depending on the type of discontinuity, a value-function can represent hard, firm, and soft deadlines, as shown in Figure (1.1).

The value may directly correlate to the criticalness of a task, or it may be a time-varying function of a task's attributes. As can be seen from Figure (1.1), a hard deadline can be modeled so that a task imparts its full value if executed before the expiration of its deadline, whereas a tardy task imparts a negative value to the system. A firm-deadline task has a value up to its deadline, and its value drops to zero after the deadline, at which point the task is discarded. A soft deadline can be modeled by including a decay function after the deadline, so that the task still imparts a positively diminishing value to the system even after its deadline has passed [JEN85, ABB88].

Value-Density (VD)

A value-density function (VD) is defined as VD = value ÷ computation time. This scheme tends to select the tasks that earn the most value per unit of time they consume. Thus, the task with the greatest value density receives the highest priority [JEN85, ABB88a]. The VD policy is a greedy technique in the sense that it always schedules the task that has the highest expected value within the shortest possible time unit.
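The following illustrative Python sketch (our own example; the numeric values are hypothetical) shows a firm-deadline value-function of the kind described above and the corresponding value-density priority:

    def firm_value(full_value, deadline, completion_time):
        """Firm-deadline value-function: full value up to the deadline, zero afterwards."""
        return full_value if completion_time <= deadline else 0.0

    def value_density(value, computation_time):
        """VD = value / computation time; the task with the greatest VD is run first."""
        return value / computation_time

    # A task worth 8 units finishing before its deadline of 10, vs. finishing late.
    print(firm_value(8.0, deadline=10.0, completion_time=9.0))    # 8.0
    print(firm_value(8.0, deadline=10.0, completion_time=11.0))   # 0.0 (task is discarded)

    # Of two ready tasks, the one earning more value per unit of computation goes first.
    print(value_density(10.0, 5.0))   # 2.0
    print(value_density(8.0, 2.0))    # 4.0  -> this task receives the higher priority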

The simulation conducted in [JEN85] showed that the performance of the VD policy varies depending on the value-function chosen and on the system load. Because it is a greedy algorithm, it picks up a value early rather than waiting to get a higher value; thus, it (unnecessarily) misses many opportunities to meet time constraints.

Figure (1.1): value as a function of time for hard, firm, and soft deadlines.

Combined Criticalness-Deadline

Scheduling of real-time tasks is priority-driven, where priorities are based on some characteristics of the corresponding tasks, e.g., deadline and/or criticalness. Biyabani et al. [BIY88] argued that scheduling based on priorities derived from deadlines or criticalness separately is not adequate, because tasks with very short deadlines might not be very critical, and vice versa. An important point addressed in the literature is that criticalness and deadlines are two separate, independent characteristics that do not correlate in a one-to-one relationship [HUA89, KAO95]. Based on this observation, many attempts have been made to combine the two attributes in the scheduling decision. In the rest of this section, we present a few such attempts.

Biyabani et al. [BIY88] introduced two scheduling algorithms, ALG1 and ALG2, both of which integrate deadlines and criticalness in deriving the corresponding priorities. The two algorithms attempt to schedule an incoming task according to its deadline, ignoring its criticalness. If scheduling the task is feasible, then scheduling is successful. However, if the newly incoming task is not schedulable because there are already too many tasks in the system, the algorithms attempt to schedule the incoming task at the expense of the less critical tasks already in the system. The two algorithms differ only in how they remove the less critical tasks from the system. ALG1 removes less critical tasks one at a time, in order from low to high criticalness. ALG2 also removes less critical tasks one at a time, but starting from the tasks with the least criticalness and the furthest deadlines. Note that Biyabani et al. [BIY88] relocate the removed task(s) to another processor, a point that we do not address in this review.

Both algorithms apply EDF under normal (under-load) conditions. However, in overload situations, ALG1 switches to MCF, whereas ALG2 switches to another policy that is an artificial combination of EDF and MCF. If scheduling were based on EDF and MCF together at the same time, it would be a natural combination. However, since the scheduling decision under an overload is based on MCF first and then on EDF, the two policies are not actually integrated into one measure, and therefore we believe it is an artificial combination of the two policies.


The simulation conducted by Biyabani et al. [BIY88] revealed that at low loads, deadline-based algorithms tend to perform better than criticalness-based algorithms. On the other hand, at high loads the situation is reversed and the criticalness-based algorithms outperform deadline-based algorithms. Furthermore, combining deadlines and criticalness in one policy, e.g., ALG1 and ALG2, can outperform both deadline-based and criticalness-based algorithms.

Huang et al. [HUA89] proposed an on-line scheduling algorithm called Criticalness-Deadline-First (CDF), in which each task is assigned a priority at the time of its arrival, based on its relative deadline7 divided by its criticalness. Huang et al. [HUA89] showed that CPU scheduling based on the CDF policy significantly improves the overall performance of the system over techniques that consider deadlines or criticalness as separate parameters. Furthermore, the CDF policy was shown to achieve good performance for the more critical tasks at the cost of losing the less critical tasks. This trade-off reflects the nature of real-time processing, which is based on criticalness and timing constraints. Thus, to get the best performance, both criticalness and deadlines should be used for CPU scheduling.

7 A relative deadline = absolute deadline – arrival time.
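A minimal sketch of this priority assignment (our own Python illustration; it assumes that a smaller result means a higher priority, and all names are hypothetical):

    def cdf_priority(arrival_time, absolute_deadline, criticalness):
        """CDF priority = relative deadline / criticalness (computed once, on arrival).

        A smaller result is taken to mean a higher priority, so urgent or highly
        critical tasks rise to the front of the ready queue.
        """
        relative_deadline = absolute_deadline - arrival_time
        return relative_deadline / criticalness

    # Example: equal deadlines, but the second task is twice as critical.
    print(cdf_priority(0.0, 100.0, 1.0))   # 100.0
    print(cdf_priority(0.0, 100.0, 2.0))   # 50.0  -> scheduled first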

The study conducted in [STA91] concluded the following points. First, in a CPU-bound system, the CPU

scheduling algorithm has a significant impact on the performance of a real-time system, and dominates all of 

the other types of protocols. Second, in order to obtain good CPU scheduling performance, both criticalness

and deadlines of a task should be considered in priority assignment.

Buttazzo et al. [BUT95] proposed another combined value-deadline technique known as weighted Earliest Deadline Value Density First (EDVDF). We defer the discussion of the EDVDF policy to the overload section, towards the end of this chapter.

1.4. Synchronization

Real-time tasks interact in order to satisfy system-wide requirements. Such interactions range from simple synchronization to mutual-exclusion protection of non-sharable resources. Calculating the execution time of a task requires knowledge of how long it will be blocked on any synchronization primitive it uses. Ideally, a higher-priority task, TH, should be able to preempt a lower-priority task, TL, immediately upon request. However, to maintain the consistency of a shared resource, access must be serialized. If TH gains access first, then the proper priority order is maintained. On the other hand, if TL gains access first, followed by a request from TH to access the shared resource, TH is blocked until TL completes its access to the shared resource.

The primary difficulty with blocking mechanisms is that a higher-priority task can be blocked by a lower-priority task, possibly an unbounded number of times and for unbounded periods – a phenomenon known as the priority-inversion problem. Unfortunately, TH, mentioned above, is not only blocked by TL, but it also ends up waiting for any medium-priority task, TM, that wishes to execute during that period. Task TM will preempt TL


and hence further delay TH, whose progress has become dependent on that of TL. Such priority inversion can mature into a serious problem in real-time systems due to its role in lowering both the schedulability and the predictability of the system [SHA90].

There are various methods that can be integrated with the scheduler to reduce the negative effects of the priority-inversion problem, two of which are presented in the following sections.

1.4.1. The Priority Inheritance Protocol

Under the priority-inheritance protocol [SHA90], when a task blocks one or more higher-priority tasks, it inherits the highest priority level of all the tasks it blocks and executes its critical section at that elevated priority level. After exiting its critical section, it returns to its original priority level. Consequently, a lower-priority task TL directly blocks a higher-priority task TH temporarily – only for the duration of the critical section. Such blocking is necessary to ensure mutual exclusion and the consistency of critical-section execution. Furthermore, a medium-priority task TM will also be blocked by the elevated-priority task, to avoid having TM preempt TL and thereby indirectly preempt or delay TH8.

The priority-inheritance criterion is transitive. That is, consider three tasks T1, T2, and T3, with T1 having the highest priority and T3 the lowest. If T2 blocks T1, and T3 blocks T2, then T3 inherits T1's priority through T2's inheritance. Effectively, T1 is blocked by both lower-priority tasks T2 and T3. In addition, when a task inherits a higher priority, it uses the elevated priority in competing for all its resource needs for the duration over which its priority is elevated.
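As a hedged illustration of the basic inheritance rule (a simplified sketch of our own, not the protocol specification from [SHA90]; real implementations live inside the kernel's lock and scheduler code), the following Python fragment raises a lock holder's priority to the highest priority among its blocked waiters and restores it on release:

    class Task:
        def __init__(self, name, base_priority):
            self.name = name
            self.base_priority = base_priority   # higher number = higher priority here
            self.priority = base_priority

    class PriorityInheritanceLock:
        """A single resource guarded by the basic priority-inheritance rule."""

        def __init__(self):
            self.holder = None
            self.waiters = []

        def acquire(self, task):
            if self.holder is None:
                self.holder = task
                return True                      # lock granted
            self.waiters.append(task)
            # The holder inherits the highest priority among the tasks it blocks.
            self.holder.priority = max(self.holder.priority, task.priority)
            return False                         # caller is blocked

        def release(self):
            self.holder.priority = self.holder.base_priority   # back to its base priority
            if self.waiters:
                nxt = max(self.waiters, key=lambda w: w.priority)   # highest-priority waiter
                self.waiters.remove(nxt)
                self.holder = nxt
            else:
                self.holder = None

    # Example: TL (priority 1) holds the lock; when TH (priority 9) blocks on it,
    # TL runs at priority 9 until it releases, so a medium task cannot preempt it.
    lock = PriorityInheritanceLock()
    tl, th = Task("TL", 1), Task("TH", 9)
    lock.acquire(tl)
    lock.acquire(th)
    print(tl.priority)   # -> 9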

1.4.2. The Priority Ceiling Protocol

The priority-ceiling protocol [SHA90] extends the priority-inheritance protocol in order to prevent the formation of deadlocks as well as chained blocking9, both of which can be experienced under the priority-inheritance protocol presented above. The priority-ceiling protocol is as follows:

•  Assign a priority level, i.e., a ceiling, to every critical section (CS). The ceiling is set equal to the highest priority of any task that may use or access this CS.

•  A task wishing to enter a CS can simply do so if there are no suspended tasks within their CSs.

•  If a task is suspended while executing within a CS due to preemption by a higher-priority task, then the priority ceiling comes into effect. If the higher-priority preempting task, TH, has a priority that is higher than all the ceilings of all currently preempted tasks, then it can enter its CS. Otherwise, TH is suspended, the lower-priority task, TL, inherits the priority of TH, and TL resumes execution at the elevated priority level.

•  When a task exits its CS, it returns to its original priority if it had inherited any higher priority during its execution.

 8 When an elevated priority task blocks a medium priority task, it is called push-through blocking  

[SHA90].
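The ceiling test at the heart of the third rule above can be sketched as follows (our own simplified illustration, not the full protocol from [SHA90]; it shows only the admission check, with higher numbers meaning higher priority):

    def may_enter_critical_section(task_priority, ceilings_of_sections_in_use):
        """Ceiling check: the requesting task may enter its CS only if its priority
        is strictly higher than the ceilings of all CSs currently held by other tasks."""
        return all(task_priority > ceiling for ceiling in ceilings_of_sections_in_use)

    # Example: ceilings 5 and 7 belong to critical sections held by preempted tasks.
    print(may_enter_critical_section(9, [5, 7]))   # True  -> enter the CS
    print(may_enter_critical_section(6, [5, 7]))   # False -> suspend; the holder inherits 6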


The priority-ceiling protocol achieves what it was designed for; that is, it prevents transitive (chained) blocking and prevents deadlocks as well. However, it has the following problems.

•  The performance of the algorithm is very sensitive to the size of the critical section(s) [GRA92].

•  Consider two tasks T1 and T2, where T1 has a higher priority and may wish to access many critical sections, while T2 accesses only a single critical section that happens to be in common with T1. If T2 enters the common critical section first, then T1 cannot enter any of its other critical sections, even though the common critical section could be embedded within a conditional statement and may never actually be accessed by T1.

•  Suppose T1 and T3 have a critical section in common, so the priority ceiling of that critical section is set to the priority of T1. Assume that T3 enters the critical section, and T2 arrives and preempts T3, while T1 has not even arrived. T2 cannot enter any of its critical sections due to the push-through blocking effect. The main reason behind push-through blocking is to prevent T2 from indirectly blocking T1. However, as a side effect, T2 will block even if T1 is not in the system yet. Imagine the same scenario among T1 … T100, with T100 having the lowest priority in the system. T100 can actually block T1 … T99, directly and via push-through blocking, which could cause T1 … T99 to miss their deadlines because they are blocked by the absolute lowest-priority task, in a priority-driven system!

The interested reader may find a detailed presentation of the priority-inheritance and the priority-ceiling

protocols in [RAJ91, and RAJ95].

1.5. Overload

A system is under-loaded if there is a schedule that will meet the deadline of every task. We have presented several on-line scheduling algorithms for a uniprocessor environment. However, none of the presented algorithms actually makes any performance guarantees when the system is overloaded. Practical systems are prone to intermittent overloading caused by any of the following factors:

•  A cascading of exceptional situations, often corresponding to emergencies.

•  Effective scheduling decisions require complete knowledge of the execution time of a task, yet execution times are generally stochastic in many systems and environments [BAR91a]. Using worst-case execution times to schedule a set of tasks can reduce processor utilization under normal operating conditions. On the other hand, scheduling using less-than-worst-case execution times introduces the possibility of an overload.

•  Worst-case calculations may turn out to be too optimistic, or the hardware may fail to perform as anticipated.

A practical on-line scheduling algorithm should not only be optimal under normal circumstances, but should also respond appropriately to overload conditions [BAR91a].

 9 Chained-blocking refers to a chain of lower-priority tasks blocking a higher-priority task.


An on-line scheduling algorithm is said to have a competitive factor r on a set of tasks if and only if it is guaranteed to achieve a cumulative value ≥ r times the value achievable by a clairvoyant scheduling algorithm on the same set of tasks, where 0 ≤ r ≤ 1. A clairvoyant scheduling algorithm is one that knows the arrival time, value, execution time, and deadline of all future task requests [BAR91a, BAR91b].

An optimal on-line scheduling algorithm such as EDF has been shown to achieve r = 1 when the loading factor f ≤ 1. For a uniprocessor environment, Baruah et al. [BAR91a, BAR91b] have proven that no on-line scheduling algorithm can offer r > 0.25 (i.e., ¼) under f ≥ 2 + ε, where ε is an arbitrarily small positive number. This implies, in contrast to EDF, whose competitive factor crumbles at f > 1, that there could exist an on-line scheduling algorithm whose performance is optimal for 0 ≤ f ≤ 2. However, for 1 < f ≤ 2, Baruah et al. [BAR91a, BAR91b] showed that an on-line scheduling algorithm cannot obtain more than 1 / (1 + √k)² of the value obtainable by an off-line clairvoyant algorithm, where k is the ratio of the highest value density to the lowest value density of the tasks within the system. Such a bound rapidly drops below 0.25 as the value densities differ among the competing tasks within the system.

In contrast to the theoretical bound presented above, Buttazzo et al. [BUT95] argued that such an upper bound has only theoretical validity, because it is achieved under a very restrictive (almost unrealistic) set of assumptions. For example, tasks have zero laxity, a task's execution time can be very short (epsilon-short tasks, which Baruah et al. called baits), and each task's value is equal to its computation time.

Buttazzo et al. [BUT95] conducted a comparative study of four scheduling policies: EDF, MCF, VD, and a weighted Earliest Deadline Value Density First (EDVDF), where the priority Pi derived by EDVDF is computed as Pi = α·VDi – (1 – α)·di, with Pi, VDi, and di representing the priority, value density, and deadline of task i, respectively. The four policies were further extended to manage overload in two different manners: either simply reject the incoming task(s), or remove the task(s) with the least value or criticalness (depending on the priority assignment policy being used) until the overload is removed. Note that for the MCF policy, the value of a task correlates directly to its criticalness. The first method was called the guaranteed class, while the second was called the robust class. In addition, the latter was also equipped with a queue to hold all rejected (removed) tasks, which are processed only if active tasks finish their execution earlier than anticipated. The simulation conducted in [BUT95] showed the following results:

•  In the presence of an overload and without any mechanism to deal with it, the VD policy was the most effective of the four simulated algorithms. In addition, the VD policy degrades gracefully and is less sensitive to the task parameters.

•  In the presence of an overload and with an overload management mechanism, whether employing the robust or the guaranteed overload management strategy, the EDF policy appears to be the most effective.

•  The robust class was found to outperform the guaranteed class.
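To make the weighted EDVDF priority described above concrete, here is a minimal sketch of our own (the value of α and the sign convention – a larger Pi meaning a higher priority – are assumptions made for the illustration):

    def edvdf_priority(value, computation_time, deadline, alpha=0.5):
        """Weighted EDVDF priority: P_i = alpha * VD_i - (1 - alpha) * d_i.

        VD_i is the value density (value / computation time) and d_i the deadline;
        a larger P_i is taken to mean a higher priority.  alpha weights value
        density against deadline urgency.
        """
        value_density = value / computation_time
        return alpha * value_density - (1.0 - alpha) * deadline

    # Example: equal value densities, but the first task's deadline is closer.
    print(edvdf_priority(value=10, computation_time=2, deadline=5))    # 0.0
    print(edvdf_priority(value=10, computation_time=2, deadline=20))   # -7.5 (lower priority)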


2 – Overview of Real-Time Database Systems

2.1. Introduction

In this chapter, we address various issues that impact the design of RTDB systems. Not until one attempts to survey the field of RTDB systems does one realize the massive amount of information to be reviewed. For example, a list of topics with significant impact on the overall design of RTDB systems, in a centralized uniprocessor system, is as follows:

1.   RTDB System Models

2.  Scheduling RTDB Transactions

2.1. Concurrency Control

2.2. Conflict Resolution

2.3. Deadlocks

3.  Fault Tolerance and Failure Recovery

4.  Admission Control

5.  Memory Management

6.  I/O and Disk Scheduling

7.  Imprecise Computations

8.  Main Memory Database Systems

9.  Minimizing Transaction Support; i.e., Relaxing Serializability

10.  Access Invariance

11.  Predictability

Due to the heterogeneity of the issues, space limitations, and our future research intentions, we will only address the following issues: 1, 2.1-2.3, and 4-6. Of the remaining issues, Recovery is a major one, and the interested reader may refer to [HAE83, SHI86, BER87, UPA88, LEC88, KOR90, NIC90, MOH92, ELM94, SIV95, THO95, HUA96]. Access Invariance and Predictability have a strong correlation, and the interested reader may refer to [FRA90, FRA92, ONE95, KIM93, KIM96]. Finally, Minimizing Transaction Support, i.e., Relaxing Serializability, is a massive topic. Only a fraction of the attempts in this area include Epsilon Serializability [PU91a, PU91b, RAM94, WU92], Semantic Analysis [GAR83, BAD92] including SAGAS [GAR87], Weak Consistency [GAR82], Quasi Serializability [DU89], Pre/Post Conditions [KOR88], Eventual Consistency [SHE90], Controlled Inconsistency [ALO90], Relaxed Atomicity [LEV91], and the Escrow Method [ONE86].


We start this chapter with a very brief overview of conventional database issues and introduce the concepts of transactions and serializability, which will ease the discussion of subsequent sections and the next chapter. Topics with major consequences for the overall design of the system are discussed in more detail than other, less important topics. However, due to our future research interests and the amount of information in the literature regarding concurrency control, we only introduce that topic in this chapter and dedicate the next chapter to its detailed discussion. Our final note before we start the chapter is the following remark quoted from M. Graham [GRA92]: "it is not possible to determine if there exists best or universal solution to many of the design issues in RTDB systems. It seems reasonable to assume that for every solution, there is a problem, which it does not fit". However, we believe that making the solution fit that one odd problem is one way to advance our knowledge of RTDB systems, and towards such a goal we strive.

2.2. The Concept of Transactions and Serializability

The state of a database consists of records, assertions about the values of these records, and an allowed set

of transformations applicable to the values of such records. These assertions are called consistency constraints.

One may need to temporarily violate the consistency of the system-state while modifying it. Therefore, in order

to transform a consistent system-state to another consistent state, the system provides sets of actions in the

form of read and write operations. A transaction is a collection of such actions, which comprise a consistent

transformation of the system-state. Each transaction, when executed alone, transforms a consistent state into a

new consistent state; that is, transactions preserve consistency of the database information [ESW76, GRA 81a, BER87,

and CHR94].

Interleaving transactions access to the database can maximize throughput and resource utilization. There-

fore, various actions of different transactions need to be executed with maximal concurrency by interleaving

actions from several transactions while continuing to give each transaction a consistent view of the database. A

particular sequencing of the actions from different transactions is called a schedule. A schedule that gives each

transaction a consistent view of the database-state is called a consistent schedule [ESW76]. However, failures and

concurrency are the two sources of potential errors that can lead to database inconsistencies. Traditional data-

base management systems ( DBMS) prevent such inconsistencies by satisfying four properties associated with

transactions, known as the ACID properties [GRA93, CHR94].

 A Atomicity: Either all of a transaction's operations are performed, or none is.

  All the operations of a transaction are treated as a single, indivisible, atomic unit.

C  Consistency: A transaction maintains the integrity constraints on the database.

 I  Isolation: Transactions can execute concurrently but with no interference with each other’s operations.

 D Durability: All changes made by a committed transaction become permanent in the database,

  surviving any subsequent failures.


While each transaction preserves the consistency of the database at its boundaries, recovery protocols are

used to ensure the atomicity and durability properties. Finally, isolation can be ensured by concurrency control protocols [GRA93, CHR94].

Generally, a particular transaction depends only on a small part of the database-state. Therefore, one technique for avoiding conflicts is to partition entities into disjoint classes. One can then schedule transactions

concurrently only if they use distinct classes of entities. Transactions using common entities must still be

scheduled serially. If such a policy is adopted, then each transaction will see a consistent version of the data-

base-state [ESW76]. Some systems try to guess the read/write sets in advance and do set-intersection at transac-

tion scheduling time to decide whether a transaction conflicts with some already executing transaction(s). In

such cases, initiation of a new transaction is delayed until it does not conflict with any running transaction.

IMS/360 seems to have been the first to try this scheme; it has not been very successful, and pre-declaration was abandoned by IMS [GRA 81a, citing OBE80]. Thus, based on IMS/360's past unsuccessful experience, one can

confidently state that it is usually impossible to examine a transaction and decide exactly which subset of the

database it will use. It is not uncommon for a transaction to lock the set of all entities within a certain value;

e.g., key addressing. The size of such a set could only be determined at run-time, possibly by examining the

entire database. Therefore, such a  partitioning scheme has been abandoned in favor of the more flexible

scheme where individual entities are acquired dynamically [ESW76].

When data objects are acquired/locked dynamically, a transaction requesting an object may wait if the re-

quested granule is already acquired by another transaction. Distinguishing between two lock modes can ac-

commodate multiple readers. One mode indicates an update access while the other indicates a read access.

Read locks are compatible while update locks are not [GRA 81a].

A traditional transaction may have various execution paths within its body, and the actual path to execute

is dependent on run-time parameters. The difficulty of dealing with traditional transactions is that different

execution paths have significantly different requirements. Thus, it is impossible to quantify such requirements

without being overly pessimistic/optimistic and thereby overestimating/underestimating resource requirements. Canned transactions are a special type of transaction [DAT96], which are distinguished by having a

repetitive recurring behavior and requirements. That is, when a canned transaction is triggered in order to cor-

rect a certain condition or to accomplish a specific chore, it is the same transaction that was previously trig-

gered on the same event. In any traditional transaction, there might exist a number of different execution

paths, whereas in a canned transaction, there is only one execution path. In other words, canned transactions

have a fixed read/write-set for every single activation, which does not depend on any run-time parameters.

The correctness of the individual transactions is sufficient for the database consistency only for a serial

execution. However, due to interleaving of conflicting operations from separate transactions, a concurrent exe-

cution may violate the database integrity constraints regardless of the correctness of individual transactions.

Thus, conflicting operations need to be ordered in a non-interleaved (serial) manner in order to maintain the database consistency

[CHR94]. In other words, an execution is said to be serializable if it produces the same out-


put and has the same effect on the database as some serial execution of the same transactions. Since serial exe-

cutions are correct, and since each serializable execution has the same effect as a serial execution, serializabil-

ity becomes the notion of correctness in any DBMS [BER87].

There are several versions of serializability. The simplest and most common form of serializability is conflict-serializability. Conflict serializability ensures that conflicting operations appear in the same order in two

equivalent executions. That is, two operations conflict if their effects on the database depend on their execution

order; i.e., read and write operations on the same data object [BER87, CHR94].

On the other hand, View serializability is not concerned with the order of conflicting operations. Rather,

two executions are equivalent if each transaction reads the same values in the two executions, and the final

value of the database is the same in both executions. View serializability allows more concurrency than conflict

serializability, but it is NP-complete to test whether an execution is view-serializable [BER87, CHR94, PAP84].
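To make the conflict-based notion concrete, the following minimal Python sketch (our illustration; the names and data layout are assumptions, not code from [BER87] or [CHR94]) builds the usual precedence graph over a history and reports the history as conflict-serializable exactly when that graph is acyclic.

    from itertools import combinations

    def precedence_graph(history):
        """history: list of (op, txn, item) with op in {'R', 'W'}, in execution order."""
        edges = set()
        for (op1, t1, x1), (op2, t2, x2) in combinations(history, 2):
            if x1 == x2 and t1 != t2 and 'W' in (op1, op2):
                edges.add((t1, t2))          # the earlier operation's transaction comes first
        return edges

    def has_cycle(edges):
        nodes = {n for e in edges for n in e}
        color = {n: 0 for n in nodes}        # 0 = unvisited, 1 = on stack, 2 = done
        def dfs(n):
            color[n] = 1
            for a, b in edges:
                if a == n and (color[b] == 1 or (color[b] == 0 and dfs(b))):
                    return True
            color[n] = 2
            return False
        return any(color[n] == 0 and dfs(n) for n in nodes)

    # R1[x] W2[x] W1[x]: edges T1 -> T2 and T2 -> T1 form a cycle,
    # so this history is not conflict-serializable.
    h = [('R', 'T1', 'x'), ('W', 'T2', 'x'), ('W', 'T1', 'x')]
    edges = precedence_graph(h)
    print(edges, 'conflict-serializable:', not has_cycle(edges))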

Recoverable History

If a transaction T j reads a value that was last written by an aborted transaction Ti, then T j must also be

aborted. This situation is known as cascading-aborts  [BER87, CHR94]. For an execution to be durable, once a

transaction commits, it cannot subsequently be aborted nor its effects changed due to cascading-aborts. An

execution is recoverable if, once a transaction is committed, the transaction is guaranteed not to be involved in

cascading aborts. That is, an execution is recoverable if it is cascadeless, which ensures that every transaction

reads only data values written by committed transactions, thereby avoiding cascading-aborts. Consequently, if 

transaction T j reads a value from Ti, then Ti must commit before T j. Thus, to assure atomicity and durability,

an execution must be recoverable. A strict execution ensures that every transaction must read and write only

data values written by committed transactions [BER87, CHR94]. Note that a cascadeless execution requires reading

from committed transactions, whereas a strict execution requires reading and writing from committed transac-

tions. Therefore the cascadeless requirement is a proper subset of the strict requirement, and a strict execution implies a cascadeless execution.

•  Cascadeless: Read only committed written data. That is, if transaction Tj reads from Ti, then Ti must be an already committed transaction; i.e.,
   Wi[x] → Rj[x] ⇒ Ci → Cj
•  Strict: Read and write only committed written data. That is, if transaction Tj reads from Ti, or overwrites a data item that was last written by Ti, then Ti must be an already committed transaction; i.e.,
   Wi[x] → Rj[x] ⇒ Ci → Cj
   Wi[x] → Wj[x] ⇒ Ci → Cj


Hence, Cascadeless ⊂ Strict, and therefore, Strict ⇒ Cascadeless. A cascadeless execution is faster than a

strict execution, and it ensures durability and recoverability. However, it is subject to the lost-update problem.

The lost-update problem occurs when Wi[x] → Wj[x] and Cj → Ci. That is, the commitment of Ti erases the update of Tj although Tj's update was performed after that of Ti's. Thus, Tj's update is a lost update. On the

other hand, a strict execution ensures durability and recoverability, and is not subject to the lost-update problem; however, it is slower than a cascadeless execution. Based on this observation, and since some data is persistent while other data is perishable, is there any systematic technique that could automatically switch between strict and cascadeless executions based on the data that is being handled? Therefore, one of the first issues that

one must investigate is the adaptability and adequacy of such executions to a RTDB environment.
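As a concrete illustration of the distinction, the following small Python sketch (our own, with hypothetical names) checks the cascadeless and strict conditions over a history; the history W1[x] W2[x] C2 C1 passes the cascadeless test but fails the strict test, and is precisely the lost-update pattern discussed above.

    def commit_positions(history):
        """history: list of (op, txn, item) with op in {'R', 'W', 'C'}, in execution order."""
        return {t: i for i, (op, t, _) in enumerate(history) if op == 'C'}

    def is_cascadeless(history):
        commits, last_writer = commit_positions(history), {}
        for pos, (op, t, x) in enumerate(history):
            if op == 'R' and x in last_writer:
                w = last_writer[x]
                if w != t and commits.get(w, float('inf')) > pos:
                    return False           # read a value written by an uncommitted transaction
            if op == 'W':
                last_writer[x] = t
        return True

    def is_strict(history):
        commits, last_writer = commit_positions(history), {}
        for pos, (op, t, x) in enumerate(history):
            if op in ('R', 'W') and x in last_writer:
                w = last_writer[x]
                if w != t and commits.get(w, float('inf')) > pos:
                    return False           # read or overwrote an uncommitted value
            if op == 'W':
                last_writer[x] = t
        return True

    # W1[x] W2[x] C2 C1: no transaction reads uncommitted data (cascadeless),
    # but T2 overwrites x before T1 commits (not strict) -- the lost-update pattern.
    h = [('W', 'T1', 'x'), ('W', 'T2', 'x'), ('C', 'T2', None), ('C', 'T1', None)]
    print(is_cascadeless(h), is_strict(h))   # True False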

2.3. Time-critical Systems vs. Database Requirements

In this subsection, we list the major differences between traditional real-time systems and conventional

database systems. It is important to identify and recognize the differences between the two technologies due to

the residual impact on the corresponding design issues in the combined field, RTDB systems.

1.  Database systems are designed to maintain the database consistency, and the correctness of the database

operations is hardly affected by the timeliness of the transactions. Meanwhile, real-time systems are de-

signed to deal with timing constraints abstracting away the notion of data consistency and integrity. Con-

sequently, the primary goal of database systems is to minimize the response time in order to achieve a

good throughput, while the primary goal of real-time systems is meeting the stringent timeliness of the

underlying applications. Combining the two technologies also means combining their design objectives

and constraints. Therefore, RTDB systems have inherited the properties; i.e., objectives and constraints, of 

both systems. Timeliness of the results and maintaining the integrity of the database together form the cor-

rectness criterion of any RTDB system. Therefore, management of time-critical information through a da-

tabase system requires the integration of concepts from both fields in order to properly handle timing con-

straints and data consistency [ABB 88b, BUC89, HUA89, RAM93, SIN88, STA 88a, ULU92, and ULU 95b].

2.  Scheduling algorithms in the two systems, DBMS and real-time operating systems, differ in their schedul-

able units. While tasks are considered as the schedulable unit in real-time systems, transactions are the

schedulable unit in DBMS.

•  Tasks typically include an arrival time, deadline, worst-case execution time, and criticality. In addi-

tion, a task may also include a resource list, where data is rarely considered as part of it. Therefore,

the majority of time-critical scheduling algorithms depend heavily on a worst-case execution time,

making the processing of time-critical tasks highly predictable [GRA92].

•  Transactions, on the other hand, do not carry any timing information other than their arrival time. In

addition, transactions tend to have extensive data resource requirements, which usually are acquired

dynamically.


Unpredictability pervades RTDB systems due to the assumptions that have been made in building conventional database and operating systems. It is virtually impossible to predict the re-

sponse time for a transaction due to intricate protocols such as paging working sets, dynamically as-

signing and adjusting priorities, I/O scheduling, buffering, concurrency control among various trans-

actions, commit protocols, and recovery protocols. Hence, the assumption of a well-known worst-case

execution time is not valid for general database transactions. Thus, it is generally impossible to stati-

cally determine all possible schedules in order to guarantee meeting real-time constraints. Further-

more, the order-of-magnitude difference between CPU and I/O times leads to pessimistic overestimates of page faults; execution-time estimates are thereby inflated, resulting in very low system utilization [ABB 88b, BUC89, GRA92, and STA 88b]. Therefore, it is argued in [STA 88b, HAR92, and KAO95] that RTDB systems are not suitable

for hard real-time constraints due to the limitations of the current technology and its consequent un-

predictability.

Finally, the atomic nature of a transaction requires that the effects of any failed transaction not be

visible to any other transaction. Therefore, concurrently executing transactions must be isolated from

each other. Thus, a transaction is a unit of recovery while a task is not. A task may require a certain

degree of recovery; however, a task’s recovery revolves around the task’s own data. Meanwhile, a

transaction’s recovery is concerned with recovering the contents of the database, as a shared resource

among all transactions within a system. Therefore, the resulting delay of a task's recovery may affect the task's own execution, whereas the resulting delay of a transaction's recovery may affect the trans-

action’s own execution as well as the execution of other transactions in the system.

3.  Preemption is generally used in real-time systems to enhance the overall performance; however, preemp-

tion could have severe consequences in database contexts. Preempting a transaction while it holds lock(s)

on data items may initiate rollbacks if the transaction were to be involved in conflict(s) while being pre-

empted. Such rollbacks imply undoing accomplished work, in addition to redoing the work, if/when the

preempted transaction is restarted. The resultant delay and the wasted execution time may cause the trans-

action as well as other transactions to miss their deadline(s). The net result may be a degradation in performance, and therefore, a preemption decision in a RTDB system should be made carefully [STA 88b].

4.  Real-time system schedulers have to deal with overload situations. Overload-management policies make

sure that the most critical tasks are still executed at the expense of the less critical tasks. Database sched-

ulers simply attempt to complete all pending work, typically slowing performance across the board [BUC89].

5.  In a conventional non-real-time database system, not accessing the database leaves the database in a con-

sistent state. That is, the database consistency may only be violated by erroneous transactions and errone-

ous synchronization of transactions to the contents of the database. Note that we excluded recovery since

we do not discuss the issue in this review. However, a significant portion of the data in RTDB system envi-

ronments is highly perishable in the sense that its validity is highly time-dependent. Thus, in addition to the causes of inconsistency listed above, the database consistency may also be violated in RTDB systems if


such temporal data is not updated in a timely manner. Therefore, while not accessing the contents of the

database in conventional database systems maintains the consistency of its information, a timely access of 

the database becomes a necessity for maintaining its consistency in  RTDB systems. Such temporal data

imposes unique temporal constraints, which demand time-cognizant transaction processing [GRA92, STA 88b, and RAM93].

Many real-world applications involve time-constrained access to data, as well as access to data that

has temporal validity. Such applications involve gathering data from the environment, processing the

gathered information, and generating timely responses, while maintaining the integrity of the data [RAM93, STA 88a, ULU92, and ULU 95b].

2.4. A Real-Time Database Model

Transactions in a RTDB system travel through various components until their termination. In this section, we present a RTDB system model, as shown in Figure (2.1), and describe its various components; the model of this section is adopted with minor modifications from [STA91]. The rest of our study concerning RTDB

systems will assume transactions to be the schedulable unit in contrast to tasks in conventional real-time oper-

ating systems. Thus, unlike the model proposed in [BUC89], where the system's load consists solely of tasks, and

each task may contain a number of transactions within it, our model contains transactions as counter-schedulable-units to Buchmann et al.'s tasks. In Buchmann et al.'s model, where tasks are the main schedul-

able unit, one has to consider the following issues:

•  The type of deadline that is associated with a task; i.e., soft, firm, or hard,

•  The type of deadline that is associated with the transactions within a task; i.e., soft, firm, or hard,

•  Derive a deadline for each of the transactions within a task such that all deadlines are initially feasi-

ble,

•  Whether the transactions within a task are cooperative or independent,

•  Whether the execution of transactions within a task is serial or concurrent,

•  If a transaction’s deadline is soft, then what happens if a transaction misses its deadline? That is, if a

transaction is allowed to execute past its absolute deadline, could such a behavior jeopardize the

execution of the other transactions within the task?

•  Would the task abort or continue its execution if one of its transactions misses its deadline? If a trans-

action’s failure signals the task’s failure, it implies that the failure of one transaction jeopardizes the

execution of the remaining transactions within the task.

To the best of our knowledge, the above model has only been suggested; however, a thorough treatment of 

the model is still an open research area. Our model, on the other hand, consists of transactions only, where

each transaction has attributes just like tasks in conventional real-time operating systems; e.g., deadlines, pe-

riodicity, criticalness, priority, etc. Furthermore, the database itself is considered the main resource. Therefore,

transaction scheduling in our study is concerned with scheduling the transactions’ accesses to the database,


which is a separate issue from CPU scheduling. Nonetheless, CPU scheduling is assumed to exist at a lower

level below that of the database.

Figure (2.1): The RTDB system model. A submitted transaction passes through admission control and priority assignment, and during its computation interacts with the concurrency control, buffer access, and disk components to request/release data objects; it may block, abort, restart and be re-submitted, miss its deadline, or hit its deadline, commit, and terminate.

•  Any new transaction must pass through an admission control mechanism, which monitors and regulates

the total number of concurrently active transactions within the system in order to avoid thrashing10.

•  Every new or resubmitted transaction is assigned a priority level, which orders its scheduling preference

relative to the other concurrent transactions within the system.

•  Before a transaction performs an operation on a data object, it must go through the concurrency control

component in order to achieve the required synchronization. If the transaction’s request for a granule is

denied, the transaction will be placed into a wait queue. The waiting transaction will be reactivated when the requested granule becomes available, after which the transaction performs its operation.

•  Similarly, if a transaction requests an item that is currently not in main-memory, an I/O request is initi-

ated and the transaction will be placed into a wait queue. The waiting transaction will be reactivated when

the requested granule becomes available in main-memory, and there is no active higher-priority transac-

tion.

•  When a transaction completes all of its operations, it commits its result(s) and releases all of the data items

in its possession.

•  A transaction may abort/restart a number of times before it commits. There are various types of aborts

quoted in Huang et al. [HUA 89]:

1.  Terminating abort :

-  An abort due to missing a deadline, or

-  Self-abort – a transaction may abort itself due to an exceptional condition.

10 Thrashing will be formally defined in the Admission Control section, 2.6.


2.   Non-terminating abort : An abort due to a deadlock or a data conflict. In this case, the transaction may

be restarted if its deadline remains feasible.

Priorities can be assigned to real-time transactions by many of the same strategies used to assign priorities

to real-time tasks. However, as transactions are generally less predictable than tasks, priority assignment strategies using information about runtime behavior; i.e., execution time and resource requirements, may not be feasible in RTDB systems [GRA92]. Priority scheduling in a RTDB system is a mechanism for including time-

liness in concurrency control mechanisms in the absence of complete knowledge of timing and resource re-

quirements. The knowledge of a transaction’s priority makes it possible for the scheduler to release for execu-

tion the more critical transactions, and for the concurrency control mechanism to resolve conflicts with other

transactions based on their relative priorities. Thus, data can be allocated in a manner that is consistent with

the priorities enforced by the scheduler [BUC89]. It has been shown that priority assignment policies can greatly

influence the overall system’s performance [HUA89].

In a  RTDB system, like tasks in conventional real-time systems, each transaction imparts a value to the

system. However, priority scheduling in a  RTDB system differs from the problem of priority-based CPU sched-

uling in a traditional real-time system, due to the heterogeneity and multiplicity of the  DBMS re-

sources/controllers. In a database system, there are several controllers where priorities could be incorporated;

e.g., CPU, primary memory, disks, admission control, concurrency control, and the recovery manager. Adding

priority at these decision points could reduce the response time of high priority transactions. The studies con-

ducted in [CAR89, SIN88] showed that regardless of which resource tends to be the system’s bottleneck, priority

scheduling on the critical resource must be complemented by a priority-based management policy of other sys-

tem shared resources. That is, the entire system should have a  preempt-resume behavior in order to achieve a

priority-based behavior complementary to that of the CPU. However, the manner in which preemption and re-

sumption are performed must be modified in order to avoid the degradation caused by rollbacks and restarts in case of conflicts between the preempting and preempted transactions.

2.5. Scheduling RTDB Transactions

The primary performance determinant in a  RTDB system is the policy used for scheduling transactions. A

scheduling policy determines when service is provided to a transaction, thereby directly impacting whether a

transaction meets its deadline. A special feature of  RTDB systems, in addition to standard physical resources, is

the data objects stored in the database, and transactions accessing this data have to be scheduled in accordance

with real-time performance objectives [HAR92].

The scheduling process of transactions in a  RTDB system consists of  concurrency control and conflict 

resolution and indirectly involves recovery. In this section, we briefly discuss the former two issues, while in

the next chapter we will address them in more detail.

2.5.1. Concurrency Control


A typical database transaction is a sequence of operations performed on a database. Conventional database

systems are centered on the ACID properties – atomicity, consistency, isolation, and durability. The  DBMS has

to ensure such properties while providing maximum concurrency in order to increase throughput, and maintain

correctness of the database. Data is a unique resource, and therefore requires a separate form of scheduling

from that considered for hardware resources. Serializability is a well-established notion of correctness for interleaved scheduling of transactions' operations. Data-access scheduling policies used in a database system are

commonly referred to as concurrency control protocols. Concurrency control protocols preserve the database

integrity by resolving non-serial concurrent executions in a manner that includes a serialization order among

the conflicting transactions. That is, concurrency control is a mechanism to ensure non-interference of trans-

action execution; thus, isolation of concurrently executing transactions [BER87, CHR94].

The fundamental challenge of  RTDB systems is the unification of priority-driven CPU scheduling and da-

tabase concurrency control protocols in order to maximize both concurrency and resource utilization, while

subjected to data consistency (logical and temporal), transaction correctness, and transaction timing constraints

[RAM 93, STA 88a].

There are several techniques that can implement concurrency control protocols for conventional non-real-

time database systems, such as locking, time-stamping, multiversion, and validation (also known as certifica-

tion and as optimistic concurrency control protocols). Each of these mechanisms is designed around different

assumptions, all of which have the same goal; i.e., enforcing serializability. Existing concurrency control pro-

tocols ensuring serializability are based on blocking and/or restarting transactions. Blocking periods might outlive the prescribed deadlines in addition to introducing deadlocks and the priority-inversion problem, while restarting transactions wastes processing time and system resources and might cause the re-

started transactions to miss their prescribed deadlines [GRA92, KAO95, ULU92, ULU 95b]. The performance and charac-

teristics of these mechanisms have been investigated in depth for conventional database systems. However,

such data access protocols need to be modified and their trade-off(s) must be reevaluated under  RTDB systems

[HAR92].

2.5.2. Conflict Resolution

While priority assignment governs CPU scheduling, conflict resolution protocols determine which of the

conflicting transactions will actually obtain access to a data item. Conflicts are usually the result of concurrent

executions of transactions performing incompatible operations; i.e., read vs. write on the same data item at the

same time. Schedulers differ in detecting resource conflicts among transactions and the manner in which con-

flicts are resolved once they are detected. For shared database resources, the preempted transaction usually

must be rolled back if in-place updates are being employed [ABB89].

When a transaction requests a lock on a data item while the lock is being held, in a conflicting mode, by

another transaction, both transactions have time constraints, yet only one can hold the lock. The conflict

should be resolved according to the characteristics of the conflicting transactions.


Priority-based Wound-Wait Conflict Resolution

The Wound-Wait technique was originally proposed by Rosenkrantz et al. [ROS78] for avoiding deadlocks.

The original scheme was designed to use timestamps. However, Abbott and Garcia-Molina [ABB88, ABB89] modified the scheme so that it uses priorities instead of timestamps and applied the modified version to resolve con-

flicts in  RTDB systems. The modified version is known as  High-Priority (HP) and as Priority-Abort (PA)

[ABB88, ABB89]. The outline of the general algorithm is as follows:

 Let: P(Ti) be the priority of transaction Ti.
   Tr requests a lock on data item D
   if (no conflict) then Tr accesses D
   else – Th is holding the requested data item; resolve the conflict as follows:
     if (P(Tr) > P(Th)) then Th is aborted
     else Tr waits for the lock; i.e., blocks.

In the HP scheme, if two transactions are involved in a conflict, then abort the lower-priority transaction

in order to free up the required resources for the higher-priority transaction. To preserve serializability, the

preempted lower-priority transaction is rolled back, and when restarted it must execute from the beginning.

However, a lower-priority transaction in the  HP approach is allowed to wait on a higher-priority transaction.

Thus, all conflicts are resolved in favor of the higher-priority transaction(s). Depending on how the priorities

of the transactions are derived, different conflict resolution protocols can be produced. For example, Huang et

al. [HUA89] used different priority assignments (with minor modifications to the basic algorithm above) to derive

the following five different protocols:

•  Protocol 1 – Based on a virtual clock.

•  Protocol 2 – Based on combining various transactions attributes.

•  Protocol 3 – Based on deadline first then criticalness.

•  Protocol 4 – Based on deadline, criticalness and estimation of remaining execution time.

•  Protocol 5 – Based on criticalness only.
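To illustrate the basic mechanism, the following minimal Python sketch (ours; the names and the particular priority function are assumptions, not the implementations of [ABB88, ABB89] or [HUA89]) applies the HP rule with a pluggable priority function; substituting a different priority assignment yields a different conflict resolution protocol in the spirit of the five protocols above.

    class Txn:
        def __init__(self, tid, deadline, criticalness):
            self.tid, self.deadline, self.criticalness = tid, deadline, criticalness
            self.aborted, self.blocked_on = False, None

    def priority_deadline_then_criticalness(t):
        # Higher tuple = higher priority: earlier deadline wins; ties broken by criticalness.
        return (-t.deadline, t.criticalness)

    def request_lock(requester, holder, priority=priority_deadline_then_criticalness):
        """Resolve a lock conflict between requester Tr and holder Th under the HP rule."""
        if holder is None:
            return 'granted'                       # no conflict
        if priority(requester) > priority(holder):
            holder.aborted = True                  # abort (wound) the lower-priority holder
            return 'granted-after-abort'
        requester.blocked_on = holder              # wait behind the higher-priority holder
        return 'blocked'

    # T1 (deadline 100) requests a lock held by T2 (deadline 200): T2 is aborted.
    t1, t2 = Txn(1, deadline=100, criticalness=5), Txn(2, deadline=200, criticalness=5)
    print(request_lock(t1, t2))                    # granted-after-abort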

In the next chapter, many other conflict resolution protocols will be presented along with their advantages

and disadvantages.

2.5.3. Deadlocks

The use of a locking scheme may cause a deadlock, thereby requiring deadlock detection, deadlock pre-

vention, or deadlock avoidance. In this section we will only discuss deadlock detection since it is the one most commonly used in conventional database management systems. A deadlock detection scheme is invoked when a

transaction is to be queued for a locked data item. Whenever a set of transactions gets involved in a circular

wait in what is known as a wait-for graph [BER87], a deadlock occurs. If a deadlock cycle is detected, one of the


transactions involved in the cycle must be aborted in order to break the deadlock. In a RTDB system, the victim trans-

action should be chosen such that the largest number of remaining transactions can meet their deadlines

[KAO95]. Transactions in any time-critical system should be aborted taking into consideration their timing re-

quirements while incurring a minimal cost upon the system. Five deadlock resolution policies that take

into account the timing properties of the transactions and the cost of abort operations have been presented in

[STA91].

Policy 1: Always abort the transaction invoking deadlock detection.

Policy 2: Trace the deadlock cycle, and abort the first tardy transaction encountered in a deadlock cycle. If 

no tardy transaction is found, abort the transaction with the furthest deadline.

Policy 3: Trace the deadlock cycle, and abort the first tardy transaction encountered in a deadlock cycle. If 

no tardy transaction is found, abort the transaction with the earliest deadline.

Policy 4: Trace the deadlock cycle, and abort the first tardy transaction encountered in a deadlock cycle. If 

no tardy transaction is found, abort the transaction with the least criticalness.

Policy 5: Abort the infeasible transaction with the least criticalness. If all transactions are feasible, then

abort a feasible transaction with the least criticalness. This policy is sensitive to the accuracy of 

the computation time because it requires information about remaining execution time; thus, total

execution time requirements at the start of each transaction must be known.
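For illustration only, the following Python sketch (ours; not the implementation of [STA91]) detects a cycle in a wait-for graph and selects a victim in the spirit of Policy 2: the first tardy transaction found in the cycle, otherwise the transaction with the furthest deadline.

    def find_cycle(wait_for, start):
        """Follow wait-for edges from `start`; return the cycle as a list, or None."""
        seen, t = [], start
        while t is not None and t not in seen:
            seen.append(t)
            t = wait_for.get(t)
        if t is None:
            return None
        return seen[seen.index(t):]              # the transactions forming the cycle

    def choose_victim(cycle, deadlines, now):
        for t in cycle:                          # Policy 2: first tardy transaction found
            if deadlines[t] < now:
                return t
        return max(cycle, key=lambda t: deadlines[t])   # otherwise, furthest deadline

    # T1 waits for T2, T2 waits for T3, T3 waits for T1 -> cycle {T1, T2, T3}.
    wait_for  = {'T1': 'T2', 'T2': 'T3', 'T3': 'T1'}
    deadlines = {'T1': 50, 'T2': 120, 'T3': 90}
    cycle = find_cycle(wait_for, 'T1')
    print(cycle, choose_victim(cycle, deadlines, now=60))   # T1 is tardy and is aborted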

For one to realize the severity of deadlocks, we quote the following claim from [GRA 81b]: " Deadlocks, per second, rise as the square of the degree of multiprogramming, and as the fourth power of transaction size".

2.6. Admission Control

How should real-time transaction processing be managed when the arrival rate exceeds the sys-

tem’s capacity? In  RTDB systems that interact with the environment, catastrophic consequences can arise.

Therefore, scheduling preference must be given to transactions that are critical to the performance of the sys-

tem even under overloads. An overload is intended to mean a high load over all system resources; e.g., CPU,

memory buffers, I/O queues, and the database itself, due to having a large number of transactions competing

for all such resources.

Incorporating priority into the admission control decision, by ordering waiting transactions according to

priority and preempting lower-priority transactions in favor of higher-priority transactions, is a way of tailor-

ing admission control to RTDB systems' objectives [CAR89]. However, if periodic low-priority maintenance trans-

actions are postponed due to the arrival of more important activities, it may eventually be necessary to shut

down the system due to lack of maintenance [RAM93].


We have already discussed overloading in the context of conventional real-time systems along with the as-

sociated performance reduction. In this section, we will discuss the effects of an overload when a database

management system is involved.

An overload in virtual storage systems can cause thrashing. Thrashing is a phenomenon where an increase

in the load results in a decrease of throughput, and/or a decrease in the number of transactions/tasks meeting

their deadlines. To show the effect(s) of thrashing on the system’s throughput, we present what is known as the

load-throughput function in Figure (2.2) below, which is adapted from [HEI91]. From the load-throughput func-

tion presented below we note the following observations:

•  The throughput grows almost linearly, reflecting increased parallelism, under an underload condition,

•  When the finite capacity of the system is totally utilized, the system reaches a saturation point where

the throughput function flattens out,

•  Further increase in the load (overload) causes a drop in the throughput; that is, the system experiences

a thrashing effect .

Figure (2.2): The load-throughput function (adapted from [HEI91]). Throughput is plotted against load, showing the underload region, the saturation bound, and the overload region with and without thrashing.

An important point to note is that as the arrival rate increases beyond a specific bound, the system

starts thrashing. Such thrashing is caused by several factors [HEI91, LEE96]:

•  Contention on data granules.

•  Contention on physical resources even when there is no data contention. The occurrence of such

thrashing can significantly increase the system’s response time. Resource contention thrashing hap-

pens because too many transactions are tied up in resource queues; thus, reducing the system’s utili-

zation.

•  An increase in transaction-management overhead.


Thrashing has a more severe effect on DBMS's than on non-database systems due to concurrency

control protocols and their associated delays. Furthermore, thrashing can degrade the performance of  RTDB

systems more than conventional DBMS’s due to the requirement of meeting timing constraints.

To realize the severity of an overload on the performance of a database system, consider the number of congested transactions within the system. Analytical models have shown that the mean number of blocked

transactions in lock-based concurrency control protocols is a quadratic function of the total number of active

transactions! In non-blocking, restart-based, concurrency control protocols, the mean number of restarts is also

an over-linearly-increasing function of the concurrency control level [TAY85]. That is, thrashing can be avoided

only in a system with unlimited resource capacity. Therefore, countermeasures that limit the load must be adopted in order to prevent overloads and their associated thrashing degradation.

There are various techniques suggested in the literature to control the load and thereby avoid thrashing

effects. Due to space limitation, we present next only some of these algorithms without any of their involved

details.

•   Fixed upper bound : The maximum number of concurrent transactions is a fixed system parameter. This

approach has been found effective in commercial conventional database systems due to the relative stabil-

ity of the load [HEI91]. However, we believe that this technique is not as effective in RTDB systems due to

the inherent variation in the load as well as the periodicity and timeliness of transactions within such sys-

tems.

•   Feedback control : By dynamically monitoring the concurrency level and the behavior of the system, a

model independent control mechanism can be devised to dynamically adapt to the environment along with

its ever-changing parameters. For this technique, Heiss and Wagner [HEI91] proposed two algorithms: Incremental Steps (IS) and Parabola Approximation (PA). The simulation conducted in [HEI91] showed that both IS and PA were able to prevent thrashing. The interested reader may refer to [HEI91] for the details of the two algorithms.

•  Haritsa et al. [HAR91] proposed a modified version of EDF called Adaptive-Earliest-Deadline (AED), which uses a feedback mechanism in order to stabilize the overload performance of the traditional EDF policy. Having established AED, Haritsa et al. [HAR91] introduced another modified version of AED called Hierarchical-Earliest-Deadline (HED), which integrates the value of a transaction with its deadline. Due to

space limitation, we limit our discussion only to the AED technique.

In the  AED algorithm, active transactions are divided into two groups,  Hit  group and  Miss group.

Each transaction upon its arrival is assigned to one of the groups based on the following technique: the

newly arrived transaction is assigned a random integer (key). The transaction is then inserted into a key-

ordered list of active transactions according to its assigned key. In addition, the system defines a dynamic

variable known as the Hit-capacity, which acts as a marker. All transactions whose keys are less than the

Hit-capacity are within the  Hit group, and the rest make up the  Miss group. The system schedules the


transactions within the  Hit group based on  EDF policy, whereas transactions in the  Miss group are exe-

cuted randomly, if executed at all.
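The following minimal Python sketch (ours; the key range, the handling of the hit capacity, and all names are assumptions, not the algorithm of [HAR91]) illustrates the grouping just described: random keys, a key-ordered list, a Hit group scheduled by EDF, and a Miss group served randomly.

    import bisect, random

    class AEDQueue:
        def __init__(self, hit_capacity):
            self.hit_capacity = hit_capacity
            self.keys = []                          # key-ordered list of (key, txn) pairs

        def admit(self, txn):
            key = random.randint(0, 1_000_000)      # random integer key for the new arrival
            bisect.insort(self.keys, (key, txn))
            return key

        def groups(self):
            hit  = [t for k, t in self.keys if k < self.hit_capacity]
            miss = [t for k, t in self.keys if k >= self.hit_capacity]
            return hit, miss

        def next_to_run(self, deadlines):
            hit, miss = self.groups()
            if hit:                                 # Hit group: earliest deadline first
                return min(hit, key=lambda t: deadlines[t])
            return random.choice(miss) if miss else None   # Miss group: served randomly

    q = AEDQueue(hit_capacity=500_000)
    deadlines = {'T1': 100, 'T2': 80, 'T3': 150}
    for t in deadlines:
        q.admit(t)
    print(q.next_to_run(deadlines))                 # an EDF pick from the Hit group, if any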

Transactions have attributes and requirements; e.g., criticalness, deadlines, and I/O requests; mean-

while, the random assignment to a Hit/Miss group does not consider any such attributes and/or requirements. Therefore, we believe that the AED is effectively equivalent to a random rejection of transactions at

an admission gate, regardless of the characteristics of the transactions.

•  Further investigation was conducted by Pang et al. [PAN92], which was aimed towards the class of transactions that miss their deadlines under the EDF policy when operating under overload conditions. The inves-

tigation revealed that the  EDF priority assignment policy is biased  in the sense of significantly discrimi-

nating against longer transactions within the system when operating under overload conditions. Based on

this observation, Pang et al. [PAN92] proposed a modified version of the AED, called Adaptive Earliest Vir-

tual Deadline ( AEVD). The AEVD policy has the same Hit and Miss groups of the AED policy along with

a fixed-capacity threshold. However, it uses an Earliest Virtual Deadline11 instead of the traditional EDF policy to manage the Hit group of the AED policy.

The AED and  AEVD algorithms assign incoming transactions to either a  Hit or a  Miss group. How-

ever, these assignments are arbitrary, without regard to the system’s profile or the transactions’ profile;

i.e., current system load or tightness of deadlines. Furthermore, the AEVD suffers from several drawbacks:

•  The Hit and Miss group assignment is not a true overload-management policy since it is arbitrary and

independent of the system’s and the transactions’ profiles.

•  The scheme is based on the assumption that a transaction’s size is correlated to its time constraint,

which may not be completely valid, because it is quite possible that long transactions have short dead-

lines, and vice versa.

•  Under overload conditions, the performance of the  AEVD is shown in [DAT96] to deteriorate rather

sharply. Although the AEVD deterioration is slower than that of the EDF policy, it is still very sig-

nificant.

The analysis conducted in [DAT96] showed the enormous amount of computational overhead of the  AEVD

technique. In fact, the analysis showed the amount of overhead to be even more significant than we initially

expected! Note that by computational overhead we refer to both time and space.

Datta et al. [DAT96] proposed a dynamic admission control and priority-based scheduling policy for disk-resident RTDB systems, which was called Adaptive Access Parameter (AAP). The AAP scheme considers the

arrival times along with the timing constraints of the transactions. It is claimed that the  AAP admission control

11 A virtual deadline = absolute deadline – arrival time.


policy serves a dual purpose: overload-management as well as bias control, where bias control refers to reducing discrimination against particular transaction classes.

To circumvent the difficulties of traditional transactions’ structures and their associated unpredictable be-

havior, the AAP technique assumes the real-time workload to consist solely of canned transactions. Thus, users do not run arbitrary programs. Rather, the system executes specific functions out of a predefined set, where

each function is an instance of a canned transaction. Based on the assumption of canned transactions, the  AAP

assumes that transactions arrive with a read/write-set, denoting the data items to be read and written, respec-

tively, by the transactions.

The AAP works as follows: the size of a transaction is estimated using its fixed read and write sets; this

size is known as the access-parameter ( AP) of the transaction. Another value known as the deadline-access-

 parameter-ratio ( DAPR) is computed based on the AP and the transaction’s deadline. The DAPR indicates how

the size of a transaction relates to its deadline. Each time a transaction fetches one of its required pages (via

I/O), its AP is reduced and the corresponding DAPR is recalculated.
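Purely for illustration (the actual formulas are defined in [DAT96]), the following Python sketch assumes AP to be the number of pages still to be fetched, estimated here simply as the size of the transaction's read/write set, and DAPR to be the remaining slack divided by AP; the real scheme uses a probabilistic estimate of AP, as discussed next.

    class AAPEntry:
        def __init__(self, read_set, write_set, deadline):
            self.deadline = deadline
            # Estimated pages still to fetch; assumed here to equal the size of the
            # combined read/write set (the real scheme uses a probabilistic estimate).
            self.ap = len(set(read_set) | set(write_set))

        def dapr(self, now):
            """Relate the transaction's remaining size to its remaining slack."""
            if self.ap == 0:
                return float('inf')          # nothing left to fetch
            return (self.deadline - now) / self.ap

        def page_fetched(self, now):
            """Called after each I/O completes: shrink AP and recompute DAPR."""
            self.ap = max(0, self.ap - 1)
            return self.dapr(now)

    t = AAPEntry(read_set={'a', 'b', 'c'}, write_set={'c', 'd'}, deadline=100)
    print(t.ap, t.dapr(now=20))      # 4 pages left, DAPR = 80 / 4 = 20.0
    print(t.page_fetched(now=30))    # 3 pages left, DAPR ≈ 23.3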

The initial calculation of the AP is not an exact quantity. Rather it is a probabilistic quantity that takes into

consideration the possibility of having the set of data items lying physically on one page and/or being scat-

tered over as many page frames as there are items within the transaction’s read/write set. We believe that such

a probabilistic approach makes the AAP applicable to many RTDB systems, which might vary between various

environments. The AAP accounts for worst-case access as well as best-case access, including the possibility of 

having some of the required page frames already existing in memory, due to overlaps between read/write sets

of different transactions. Therefore, it can reflect the total amount of I/O that might be actually required by a

transaction independent of the size of the transaction.

The admission control of the AAP technique incorporates a load dependency mechanism. Thus,  AAP uses

such an adjustment mechanism to maintain a maximum miss ratio below a specific threshold. Note that a

miss-ratio refers to the transactions that are executed, but yet might miss their deadline, and it does not refer

to, or account for, the transactions that miss their deadlines due to being blocked by the admission control.

Thus, it is very possible that, under an overload condition, a highly critical transaction gets blocked and possi-

bly misses its deadline, while less (much less) critical transactions are currently occupying the system, mainly

due to admission control delay!

Aside from the performance analysis conducted in [DAT96] showing the superiority of the  AAP technique

over other methods, we believe it is certainly one step forward in the right direction. Unlike previous methods,

the AAP design was not abstracted away from the environment in which it is to serve. We believe that Datta et al. [DAT96] successfully recognized the system's profile, but fell short in recognizing the transac-

tions’ profile; i.e., criticalness of blocked transactions due to admission control. Nonetheless, the approach

used in the construction of the AAP is scientific with a promise to a better solution.


2.7. Memory Management

Pang et al. [PAN94] stated "without the proper admission control and memory management, the frequency of

  I/O operations would be increased and hence hindering accomplishment of systems objectives. Thus, the

benefits of high levels of multiprogramming can only be achieved with the proper memory management; oth-

erwise, more degradation than enhancement would be the net result ”. Due to the important role of memory

management on the overall performance of the system, we dedicate this section to its discussion. However, due

to space limitation, we only present two techniques for memory management.

Memory management is concerned with three types of decisions: transaction admission, buffer allocation,

and buffer replacement. Buffer allocation strategies attempt to distribute the available buffer frames among

concurrent database transactions, while buffer replacement strategies attempt to minimize the buffer fault rate.

In a real-time environment, the goal of data buffering is not merely to reduce transaction response time, but

more importantly, to increase the number of transactions satisfying their timing constraints. To achieve this goal, buffer management should consider not only transaction reference behavior, but also the timing require-

ments of the referencing transactions [STA91].

When a transaction arrives at the system, the buffer manager is responsible for admitting the new arrival

into the system. The buffer manager must determine the set of buffers to allocate to the incoming transaction.

If there are no free buffers, the buffer manager determines which of the data pages currently in the buffer pool

should be replaced. Furthermore, once a transaction submits a request for a page that is currently not in pri-

mary memory, the buffer manager must determine how to allocate extra buffers to accommodate the transac-

tion’s request(s). Allocating a set of page-frames and eviction of data-pages from the buffer pool should be pri-

ority/time-cognizant in  RTDB systems. Buffer management becomes more complicated when priorities are in-

volved, as the buffer demands of different transactions can no longer be treated equally [CAR89, PAN94].

Global Least Recently Used (G-LRU ) Buffer Management

In the G-LRU buffer management scheme, when a buffer frame is required by a transaction and no free

frame is available, the frame with the least recently accessed data is selected for replacement [MIL92]. The G-

 LRU algorithm is simple and is the one most commonly implemented in commercial database systems. Carey

et al. [CAR89] suggested that priorities could be incorporated into the scheme by dynamically organizing the

buffer pool into priority levels (buckets). At the systems startup, all the buffer frames are free and are arranged

as a free list. When a transaction with priority P allocates a frame from the free list, the frame is inserted into

the  LRU queue of frames whose owners have priority P. In order not to have too many priority-queues in the

system, every queue or bucket holds a range of priorities. When the buffer manager is required to evict a page

frame, it starts searching for the LRU page at the lowest bucket. If a selected victim is younger than a specific

age (timestamp), it is left and the next higher priority bucket is searched; otherwise, it is evicted. If there are

not enough free page frames, then the buffer manager will have to suspend or abort a lower-priority transac-

tion.
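The following Python sketch (ours; the bucket granularity, the age threshold, and all names are illustrative assumptions, not the design of [CAR89] or [MIL92]) captures the priority-bucketed LRU replacement just described.

    from collections import OrderedDict
    import time

    class PriorityLRUPool:
        def __init__(self, num_buckets=4, min_age=0.5):
            # One LRU queue (OrderedDict: page -> last-access time) per priority bucket.
            self.buckets = [OrderedDict() for _ in range(num_buckets)]
            self.min_age = min_age          # frames younger than this are skipped

        def access(self, page, priority_bucket):
            q = self.buckets[priority_bucket]
            q.pop(page, None)
            q[page] = time.monotonic()      # most recently used goes to the back

        def evict(self):
            """Search from the lowest-priority bucket upward for an old-enough LRU page."""
            now = time.monotonic()
            for q in self.buckets:          # bucket 0 = lowest priority
                for page, last_used in q.items():
                    if now - last_used >= self.min_age:
                        del q[page]
                        return page
            return None                     # nothing old enough to evict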


Priority Memory Management ( PMM )

A memory allocation strategy known as the priority memory management (PMM) scheme [PAN94] operates using one of two strategies, both of which operate under the EDF policy:

•   Max: In the Max strategy, a transaction must be allocated the maximum level of memory that it might

need during its execution, or not allocated any memory at all.

•   Min-Max: The Min-Max strategy allows some low-priority transactions to run with their minimum re-

quired memory, while the high-priority transactions get their maximum requirements. When operat-

ing in Min-Max mode, PMM is able to admit more transactions into the system.

The choice between the strategies is dependent on the current workload. The Min-Max strategy starts from

the highest priority first, giving each transaction just enough memory to start its execution. If there are leftover buffers at the end of this phase, another pass is made over the list of admitted transactions, beginning with the

highest priority. In the second pass, the allocation of each transaction is topped off to its maximum. The allo-

cation process terminates when either all of the available memory has been allocated, or all of the transactions

have received their maximum allocation. Consequently, at the end of this memory allocation process, the

higher-priority transactions will have their maximum allocation while the lower-priority transactions just have

their minimum.
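The two-pass allocation can be sketched as follows (our illustration; the data layout and names are assumptions, not the PMM implementation of [PAN94]):

    def min_max_allocate(transactions, free_buffers):
        """transactions: list of (tid, priority, min_need, max_need)."""
        order = sorted(transactions, key=lambda t: t[1], reverse=True)
        alloc = {}
        # Pass 1: give each admitted transaction just its minimum requirement.
        for tid, _, min_need, _ in order:
            if free_buffers < min_need:
                break                        # cannot admit any more transactions
            alloc[tid] = min_need
            free_buffers -= min_need
        # Pass 2: top off allocations to the maximum, highest priority first.
        for tid, _, min_need, max_need in order:
            if tid not in alloc or free_buffers == 0:
                continue
            extra = min(max_need - min_need, free_buffers)
            alloc[tid] += extra
            free_buffers -= extra
        return alloc, free_buffers

    # Three transactions competing for 14 buffers: (tid, priority, min, max).
    txns = [('T1', 3, 2, 6), ('T2', 2, 2, 5), ('T3', 1, 3, 4)]
    print(min_max_allocate(txns, free_buffers=14))
    # -> ({'T1': 6, 'T2': 5, 'T3': 3}, 0): high priorities topped off, T3 keeps its minimum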

The Max strategy by insisting on the maximum memory allocation eliminates the thrashing problem that

can result from admitting too many low-priority transactions into the system. However, the  Max strategy may

severely restrict the multiprogramming level if every transaction requires a substantial amount of memory. The Max strategy is preferable if memory is abundant, whereas the Min-Max strategy is more suitable for memory-constrained systems [PAN94].

The PMM  algorithm uses a feedback mechanism to monitor the state of the system, and it revises its

choice of allocation strategy as necessary. Initially, the Max mode is selected. The operational mode switches to

the Min-Max strategy if all of the following conditions are met:

•  One or more transactions during this period (since last test) had missed their deadlines,

•  The utilization of the system resources had fallen below a minimum threshold,

•  There is memory contention as reflected by the admission control, and

•  Transactions are finishing too soon before their deadlines.

After switching to Min-Max, the PMM monitors the target multiprogramming level. If it drops below the

average multiprogramming level that was realized in the  Max mode, PMM reverts to the  Max strategy. The

detailed discussion of the Max and Min-Max can be found in [PAN94].

2.8. Disk Scheduling


In conventional non-database real-time systems, disks are seldom accessed. However, in  RTDB systems,

the reading and writing of archival data is essential. Thus, when transactions have time constraints, disk sched-

uling becomes a significant problem due to the large difference in speeds between the CPU and I/O subsystem.

In a disk-based database system, disk I/O occupies a major portion of transaction execution time. As with CPU

scheduling, time-cognizant disk-scheduling algorithms can significantly help a time-critical system achieve its

objectives. The order in which I/O requests are serviced has an immense impact on the response time and

throughput of the I/O subsystem [KAO95].

Scheduling I/O operations consists of two parts:

1.  Assigning priorities to the various I/O requests. Such priorities determine the order in which various

operations are performed.

2.  Scheduling the disk head itself in order to cope with the physical limitations of the device, and thereby

minimizing delays such as seek time in order to increase throughput.

Optimizing both issues simultaneously is a conflicting goal, and many attempts have been made to optimize these issues to

a relatively high level, all of which constitute the subject of this section. However, before we indulge in the de-

sign of the above two issues, we list various concerns, which seem to be very important from an implementa-

tion viewpoint, yet we have not found sufficient answers in the literature. Abbott and Garcia-Molina [ABB90] made the following assumptions:

1.  All read requests are issued by uncommitted transactions, and they should receive service in accor-

dance with the time constraints of the transactions that issued them.

2.  Write requests do not have any explicit time constraints since the transactions that issued them have

already committed.

Therefore, servicing write requests may interfere with the timely service of read requests, and such interference should be minimized. However, write requests should be serviced in accordance with the

average arrival rate in order to free enough buffer space for incoming transactions. The above design issues

and assumptions pose the following set of questions:

1.  How are the deadlines of I/O operations determined from the deadlines of the issuing transactions?

The simplest solution is to give an I/O request the same deadline as the issuing transaction. Certainly,

this solution is unacceptable, because it does not account for future I/O requests issued by the transaction,

nor it accounts for the required time to perform any operation after the I/O request is fulfilled. Another

solution presented in[ABB90]

is to account for future requests to be issued by the transaction as well. How-

ever, this solution requires knowing the data requirement of each transaction, which contradicts the fun-

damental assumption of not knowing the exact data requirement of general transactions. A pessimistic


approach could account for the maximum set of required data, which could be too pessimistic and result in infeasible I/O deadlines.

It must be recognized that what is important is the meeting of transactions’ deadlines and not the in-

dividual deadlines that may be attached to I/O requests [RAM93]. We are not aware of any answer that can produce a feasible deadline and yet account for future I/O requests and operations issued by the same

transaction.

2.  What happens if a deadline of an I/O request is not met? Should the request be cancelled, if so, what hap-

pens to the transaction that issued the request? Should the transaction resubmit the request and be

subjected to further delay or should the transaction abort? We have not been able to find an answer to this

problem in the literature.

3.  How do we determine when to flush updated buffer pages without jeopardizing the read requests? Abbott

and Garcia-Molina [ABB90] proposed a system model that suggests the use of a lower level k -buffer pool,

with k being the number of available frames. This buffer is to be managed by the disk controller separately

from the system's buffer. A modified page is first copied from the system's buffer into a frame in the k-buffer, which frees the corresponding page frame in the system's buffer. This copying operation is done immediately upon transaction commitment in order to free the system's buffer and not jeopardize admis-

sion control.

Copying the contents of the k -buffer to the disk is another matter and Abbott and Garcia-Molina

[ABB90] suggested two methods. First, flush the k -buffer when a predefined threshold level is reached, or

when there are no current read requests. That is, preference is given to read requests as long as the free

space is above the threshold level. Second, create an artificial deadline for every frame within the k -buffer.

Whenever a frame's deadline becomes earlier than that of the earliest read request, writing the frame takes

precedence. Abbott and Garcia-Molina defined a linear function for calculating the frame’s deadlines,

which takes into consideration the arrival rate in order not to jeopardize admission control. However, introducing artificial deadlines allows write requests to interfere with read requests, only at a different level! Both flushing policies are sketched below.
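Under the assumptions above, the following is a minimal sketch of the two k-buffer flushing policies; the frame-deadline function shown is a hypothetical placeholder for the linear function defined in [ABB90], and all names are illustrative only.

```python
def should_flush_threshold(free_frames: int, total_frames: int,
                           pending_reads: int, threshold: float = 0.2) -> bool:
    """Space-threshold policy: flush when free space drops below the
    threshold, or when there are no outstanding read requests."""
    return free_frames / total_frames < threshold or pending_reads == 0

def frame_deadline(copy_time: float, write_arrival_rate: float, k: int) -> float:
    """Hypothetical linear artificial deadline: the frame should reach the disk
    before the k-buffer could fill up again at the current write arrival rate."""
    return copy_time + k / max(write_arrival_rate, 1e-9)

def should_flush_artificial(frame_dl: float, earliest_read_dl: float) -> bool:
    """Artificial-deadline policy: a write takes precedence once its frame
    deadline becomes earlier than the earliest read deadline."""
    return frame_dl < earliest_read_dl
```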

The Elevator Algorithm

Classical disk scheduling schemes attempt to minimize the average seek distance. For example, in the ele-

vator algorithm, the disk head is in either an inward-seeking phase or an outward-seeking phase. While seek-

ing inward, it services any requests it passes until there are no more requests ahead. The disk head then

changes direction, seeking outward and servicing all requests in that direction as it reaches their tracks [MIL92].

In order to support priority, the elevator algorithm can be modified in the following way [CAR89]: disk re-

quests are grouped based on their priority, and the elevator algorithm is used within each group. There is one

queue per priority level for buffering outstanding disk requests. Within each queue, requests are arranged in


order of their physical (track) addresses. While seeking inward or outward, the disk services any requests that

it passes in the currently served priority queue until there are no more requests ahead. On the completion of 

each disk request, the scheduler checks to see whether a disk request of a higher-priority is waiting for service.

If such a request is found, the scheduler switches to the queue that contains the request(s) of the highest prior-

ity among those waiting and starts serving that queue. When it switches to a new queue, the request with the

shortest seek distance from the head’s current position is used to determine the direction in which the head will

move. An important side effect of introducing priority in disk scheduling in this fashion is that the average

seek time can worsen as the number of priority levels increases.
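A minimal sketch of this priority-grouped elevator scheme follows; the queue representation and the tie-breaking rules are assumptions made for illustration and are not taken from [CAR89].

```python
from collections import defaultdict

class PriorityElevator:
    def __init__(self):
        self.queues = defaultdict(list)  # priority level -> sorted list of tracks
        self.head = 0                    # current head position (track)
        self.direction = +1              # +1 inward, -1 outward
        self.current_prio = None

    def add_request(self, priority, track):
        self.queues[priority].append(track)
        self.queues[priority].sort()     # keep each queue in track order

    def _ahead(self, tracks):
        # Requests lying at or beyond the head in the current direction.
        return [t for t in tracks if (t - self.head) * self.direction >= 0]

    def next_request(self):
        live = {p: q for p, q in self.queues.items() if q}
        if not live:
            return None
        top = max(live)                  # highest priority with waiting requests
        if top != self.current_prio:
            # Switching queues: the nearest request sets the new direction.
            nearest = min(live[top], key=lambda t: abs(t - self.head))
            self.direction = +1 if nearest >= self.head else -1
            self.current_prio = top
        q = self.queues[self.current_prio]
        ahead = self._ahead(q)
        if not ahead:                    # no more requests ahead: reverse
            self.direction *= -1
            ahead = self._ahead(q)
        track = min(ahead, key=lambda t: abs(t - self.head))
        q.remove(track)
        self.head = track
        return self.current_prio, track
```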

The D-SCAN and FD-SCAN Algorithms

Abbott and Garcia-Molina [ABB90] proposed two real-time I/O scheduling algorithms known as Earliest Deadline SCAN (D-SCAN) and Feasible Deadline SCAN (FD-SCAN), both of which are briefly presented next.

•  The D-SCAN algorithm modifies the traditional SCAN algorithm by moving the disk head towards the

read request with the earliest deadline. Thus, the disk head seeks in the direction of the read request with the earliest deadline, servicing along the way all read requests it passes, which necessarily have later deadlines.

•  The FD-SCAN  is similar to the  D-SCAN except that only read requests with  feasible deadlines are

chosen as targets to determine the scanning direction. A deadline, d , is feasible if d  ≥ t  + Access (n)

where t  is the current time and  Access (n) is the expected time needed to service a request that is n

tracks away. FD-SCAN adopts SSTF (Shortest Seek Time First) if all remaining requests have infeasible deadlines; the target-selection rule is sketched below.
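The following sketch illustrates the FD-SCAN target-selection rule under the definitions above; access_time() is a hypothetical seek-cost model used for illustration, not the Access(n) function of [ABB90].

```python
def access_time(distance_tracks: int, seek_per_track=0.1e-3, overhead=5e-3) -> float:
    """Assumed expected service time for a request `distance_tracks` away."""
    return overhead + seek_per_track * distance_tracks

def fd_scan_target(requests, head: int, now: float):
    """requests: list of (deadline, track) read requests.
    Returns the track that determines the scanning direction."""
    feasible = [(d, t) for d, t in requests
                if d >= now + access_time(abs(t - head))]
    if feasible:
        # Head toward the feasible request with the earliest deadline,
        # servicing requests passed along the way (not shown here).
        return min(feasible)[1]
    # All deadlines infeasible: fall back to shortest seek time first.
    return min(requests, key=lambda r: abs(r[1] - head))[1]
```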

The simulation conducted in [ABB90] concluded that FD-SCAN and  D-SCAN outperform non-real-time disk 

scheduling algorithms. In addition, FD-SCAN outperforms the D-SCAN . More importantly, a real-time system

can greatly benefit from distinguishing read priorities from write priorities, and it can also benefit from adopt-

ing a separate k -buffer for managing write requests. However, the study did not conclude with any definitive

performance superiority between the space-threshold and the artificial-deadline  k-buffer flushing techniques.

The SSEDO and SSEDV Algorithms

Two disk-scheduling algorithms for real-time systems are presented in [CHE91] and discussed further in

[STA91]. The two algorithms are called SSEDO (Shortest Seek Earliest Deadline by Ordering) and SSEDV 

(Shortest Seek Earliest Deadline by Value). The two algorithms combine deadline information and disk service

time information in different ways. Both algorithms maintain I/O requests in a queue sorted by deadlines. A window of size m is defined as the first m requests in the queue.

The SSEDO algorithm begins by assigning a weight to each request in monotonically increasing order.

Then it assigns a value, which combines the weight and the distance from the disk arm, to each request within


m. Since the queue is sorted by deadlines and the weights are monotonically increasing, then the earlier the

deadline of a request, the smaller the weight assigned to it. The idea of the algorithm is to assign higher priorities to requests with earlier deadlines so that they are serviced earlier. This can be accomplished by choosing

the request with the minimum value among waiting requests.

Figure (2.3): The window of size m over the EDF-sorted request queue, with monotonically increasing weights W1, W2, … assigned to its requests.

In the SSEDV algorithm, each request is assigned a scheduling value with an adjustable parameter to con-

trol the sensitivity of the algorithm to distance and deadline. Scheduling in the SSEDV algorithm is based on the value αd + (1−α)l, where 0 ≤ α ≤ 1, and d and l represent the request's deadline and its distance from the arm position, respectively. Thus, α serves as a control switch to place more or less emphasis on the deadline or on the distance.
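The following sketch illustrates, under stated assumptions, how SSEDO and SSEDV might rank the m requests in the window; the weight sequence, the product combination of weight and distance, the interpretation of d as the remaining time to the deadline, and the default α are illustrative choices, not the exact forms of [CHE91].

```python
def ssedo_pick(window, head, base=1.0, step=0.1):
    """window: the first m requests of the deadline-sorted queue, as (deadline, track).
    Weights grow monotonically with queue position, so earlier deadlines get
    smaller weights; pick the request minimizing weight * seek distance."""
    scored = [(base + i * step) * (abs(t - head) + 1)
              for i, (_, t) in enumerate(window)]
    return window[scored.index(min(scored))]

def ssedv_pick(window, head, now, alpha=0.7):
    """Pick the request minimizing alpha*d + (1-alpha)*l, taking d as the
    remaining time to the deadline and l as the seek distance from the arm."""
    scored = [alpha * (d - now) + (1 - alpha) * abs(t - head)
              for d, t in window]
    return window[scored.index(min(scored))]
```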

Due to having the distance of a request from the arm position as a parameter of the scheduling value, we

have the following two cases:

CASE (1): A request with a far deadline whose requested track is close to the current disk arm position will re-

ceive a high priority and get serviced. This behavior may cause a request with a closer deadline,

whose track is further away from the arm position, to miss its deadline.

CASE (2): If losing one of the requests is inevitable, then service the request with the smaller service time; i.e., service the request whose track is closer to the current disk arm position. These observations led [CHE91] to the development of the SSEDV algorithm.

For case (1) above, the deadline needs more emphasis than the distance, whereas in case (2) above, the distance needs more emphasis than the deadline. That is, while SSEDO could make the wrong choice in both cases, EDF would make the wrong choice only in case (2), and SSTF would make the wrong choice only in case (1).

By incorporating α in the scheduling decision and since α is a static value, emphasis will be placed on either

the deadline or the distance. That is, the SSEDV algorithm will make either the choice of  EDF or the choice of 

SSTF in both cases due to the static nature of α. Therefore, the SSEDV algorithm will make the right choice

only in one of the cases, but certainly not both cases.

Therefore, we believe that the use of α is in the right direction; however, α should be a dynamic value

whose value should depend on system parameters at run time. Note that none of the algorithms we presented so

far will make the right choice in both cases listed above under the assumption that servicing an intermediate


request causes some amount of delay. However, if servicing an intermediate request does not cause any delay

and its cost is negligible, then the FD-SCAN as well as SSEDO and SSEDV algorithms will make the right

choice in case (1) and (2) above.

The simulation conducted in [CHE91] indicated a significant performance improvement when using the SSEDO and SSEDV algorithms over FD-SCAN. The scenario we presented earlier in cases (1) and (2), combined with the simulation results of [CHE91], suggests that a further performance improvement is still feasible by

finding a dynamic value for α in the SSEDV algorithm.

Fixed Priority ( FP), Read Preference Priority ( RPP), and Dynamic Priority ( DP)

Kim and Srivastava [KIM91] proposed three strategies for assigning priorities to I/O requests, independently of the actual physical arm movement. Their strategies are called Fixed Priority (FP), Read Preference Priority (RPP), and Dynamic Priority (DP).

•  In the FP technique, all I/O requests have the same priorities as the transactions that issued the re-

quests.

•  In the RPP technique, transactions that issued write requests have already committed, while all read requests are issued by transactions that still wish to meet their deadlines. Therefore, all read re-

quests have the same priorities as the transactions that issued them, while write requests are assigned

the lowest priority.

•  The DP technique is a very clever policy proposed by Kim and Srivastava[KIM91]

, which dynamically

assigns priorities to I/O requests. I/O requests generated by the same transaction may have different

priorities at different times depending on whether there is any transaction waiting for the release of a

write lock. That is, all read requests are assigned the priorities of the transactions that issued them. If 

no transaction is waiting for the release of a write lock, the write lock is assigned the lowest priority

among all I/O requests. However, if there is a transaction waiting for the release of this write lock, the

write request inherits the priority of the waiting transaction. Thus, the disk scheduler not only consid-

ers the transactions requesting I/O operations, but the transactions, which are blocked by data con-

flicts as well.
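A minimal sketch of the DP assignment rule follows; the Transaction/Request attributes and the lock_waiters mapping are hypothetical names used only to illustrate the rule, not an API from [KIM91].

```python
LOWEST_PRIORITY = 0

def io_priority(request, lock_waiters):
    """request.kind is 'read' or 'write'; request.issuer is the transaction
    that generated it; lock_waiters maps a write lock to the transactions
    currently blocked waiting for its release."""
    if request.kind == "read":
        # Reads always carry the priority of the (uncommitted) issuer.
        return request.issuer.priority
    waiters = lock_waiters.get(request.lock, [])
    if not waiters:
        # Nobody is blocked on this write lock: defer the write.
        return LOWEST_PRIORITY
    # Otherwise the write inherits the highest waiting priority.
    return max(t.priority for t in waiters)
```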

The simulation conducted in [KIM91] showed that the DP technique yields a significant performance improvement over FP and RPP, and that its performance does not degrade as severely as that of FP and RPP under high loads.

As we stated in the introduction of this chapter, there are many other issues that we have not addressed

throughout our discussion; however, an understanding of such issues is important to the overall construction of 

 RTDB systems. Nonetheless, the reader at this point should be more aware of the research issues involved in

 RTDB systems. The next chapter is meant to provide the reader with an overview of synchronizing transac-


tions’ access to the database, and the various techniques to employ such synchronization schemes into  RTDB

environments, along with the impact of each technique.


 3 – Concurrency Control

3.1. Introduction

The requirement of maintaining data consistency is the essential feature of a conventional database sys-

tem. Concurrency control is a mechanism to ensure non-interference of transaction execution; thus, isolation of 

concurrently executing transactions. Since serializable schedules provide correct results and leave the database

consistent, serializability became the notion of correctness, and concurrency control protocols are the mecha-

nism that enforces/implements serializability [BER87, CHR94, and ELM 94].

Concurrency control schemes in general can be classified as  pessimistic or optimistic. The principle un-

derlying pessimistic concurrency control protocols is to get permission before any transaction performs an op-

eration on any data object. On the other hand, optimistic concurrency control schemes neglect such permission

and allow transactions to access their data items freely. However, at transaction’s commitment, a validation

test is conducted to verify that all of the transaction’s accesses to the database maintain serializability. In the

rest of this chapter, we explore various concurrency control protocols namely:

•  Locking

•  Optimistic

•  Speculative

•  Multiversion

•  Dynamic Adjustment of Serialization Order

We address the characteristics of each protocol along with its trade-off(s) and adaptation of the protocol to

a real-time domain.

3.2. Locking Concurrency Control

Locking data items is a technique to prevent multiple transactions from accessing the same data items

concurrently in conflicting modes (read vs. write operations). Thus, locks synchronize access to the database objects. A lock is a variable associated with a data object in the database, and it describes the type of operations

that are to be performed on the corresponding data object. The lock manager of a  DBMS manages all locks.

Using locks without any control over the moments at which lock and unlock operations are performed does not

ensure serializability. Therefore, a mechanism that controls the use of locks is required in order to maintain the

database consistency. To satisfy logical consistency, concurrency control techniques such as Two-Phase-

 Locking are employed [ESW 76, BER87, CHR94, and ELM 94].

Two-Phase-Locking (2PL)


The basic 2PL protocol is a pessimistic locking technique that guarantees the serializability of an execution by controlling the instants at which lock and unlock operations are performed. It divides a transaction's

execution into two phases. During the first phase, which is known as the growing-phase, a transaction can ac-

quire all its locks dynamically as the need arises, but it cannot release any of the locks it holds. However, during the second phase, which is known as the shrinking-phase, the transaction dynamically starts releasing

the locks it holds, and cannot acquire any more locks. That is, once a transaction releases a lock, it can not ac-

quire any more locks. In addition, a transaction is allowed to upgrade from a read lock to a write lock only

during the first phase. Under 2PL, when a transaction requests a lock that is being held by another transaction,

the requestor waits (blocks) until the release of the lock. Blocked transactions are queued in the  DBMS lock 

manager. The technique used in organizing this queue is of great concern in time-critical systems. If every

transaction in a schedule follows the 2PL protocol, the schedule is guaranteed to be serializable and does not

need to be verified [ESW 76, BER 87].
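A minimal sketch of the two-phase rule follows; the lock manager, conflict queueing, and deadlock handling are deliberately omitted.

```python
class TwoPhaseLockingError(Exception):
    pass

class Transaction2PL:
    def __init__(self, tid):
        self.tid = tid
        self.locks = {}          # item -> "read" or "write"
        self.shrinking = False   # True once the first lock is released

    def acquire(self, item, mode):
        if self.shrinking:
            raise TwoPhaseLockingError("cannot acquire locks in the shrinking phase")
        # Upgrading read -> write is allowed, but only during the growing phase.
        if self.locks.get(item) == "write" and mode == "read":
            return
        self.locks[item] = mode

    def release(self, item):
        self.shrinking = True    # entering the shrinking phase
        del self.locks[item]
```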

Performance studies of concurrency control algorithms for conventional database systems have shown that

under most operating circumstances, locking protocols outperform optimistic techniques [AGR 87]. However,

 RTDB systems have a different set of characteristics, design objectives, and constraints, necessitating various

performance measures and previous assertions to be reevaluated [HAR 90a, HAR92]. A close look at 2PL reveals that

transactions can not release locks as soon as they are through using the corresponding data objects; rather, they

must wait until they enter their second phase. The period between the moment a transaction finishes using a data object and the moment it releases the lock is bounded by the lifetime of the transaction, during which other transactions might be

blocked on that particular unused data object. Thus, 2PL can limit the degree of concurrency in the system,

which lowers the system’s utilization level. Such delay gets even worse under conservative and strict   2PL

protocols [BER87]. A Strict  2PL protocol is the most common protocol implemented in commercial database sys-

tems, in which a transaction cannot release any of its locks until after it terminates (commits or aborts). Thus,

a strict 2PL is too “strict” and introduces extra delays. Meanwhile, sacrificing strictness subjects the system to

cascading-aborts [BER87, ELM 94]. Because blocking-based protocols, i.e., 2PL, suffer long blocking delays and take no account of timing information, they tend to introduce deadlocks and priority inversion, which is clearly unsuitable for any time-critical environment [HAR90, HAR92, KAO95, STA91, and ULU 95b]

. Due to the negative impact of 

2PL on transaction processing in a real-time environment, many attempts have been made to augment it with

priority cognizance techniques or circumvent its use along with its associated drawbacks. We review such en-

deavors in the rest of this chapter.

Before we discuss other concurrency control techniques, we present the problem of  priority-inversion

within the context of  RTDB system, along with the various attempts that have been proposed in the literature to

eliminate/reduce the effect(s) of such a problem.

3.2.1. Synchronizing RTDB Transactions in Locking-based Protocols

Priority-inversion is very undesirable in  RTDB systems due to the fact that since the lower-priority trans-

action is discriminated against in its use of system resources, the blocked higher-priority transaction is essen-


tially running at an effective priority equal to that of the lower-priority transaction[KAO 95]

. For a RTDB system

to cope with such degradation, 2PL needs to be augmented with a priority-driven scheme to ensure that higher-

priority transactions are not delayed by lower-priority transactions. Various schemes such as Priority Inheri-

tance [SHA90]

, Priority Abort  [ABB88]

, Priority Ceiling [SHA91]

, and Conditional Priority Inheritance [HUA 91b, HUA92]

have been proposed as basic mechanisms for incorporating priorities in locking-based protocols, all of which

are described next.

2PL Wait Promote ( 2PL-WP)

The 2PL Wait-Promote (2PL-WP) algorithm [ABB89] is the counter-scheme of priority-inheritance in con-

ventional real-time systems. The scheme is identical to the basic 2PL in its resolution of conflicts, that is,

transactions always block whenever a lock request is denied. The difference is that it includes a priority in-

heritance mechanism[SHA90]

. With this mechanism, whenever a request is blocked behind a lower-priority lock 

holder, the lock holder’s priority is promoted to that of the requester. In other words, the lower-priority lock 

holder inherits the higher priority of the lock requester, and the holder retains this elevated higher priority un-

til termination. Note that unlike a pure priority inheritance, if the higher priority transaction is aborted while it

is being blocked, the elevated priority transaction retains the elevated priority until termination.
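A minimal sketch of the 2PL-WP conflict rule follows, assuming transaction objects with a mutable priority field; lock-queue management is omitted.

```python
def on_lock_conflict_wp(requester, holder, wait_queue):
    """Called when `requester` asks for a lock held by `holder` in a
    conflicting mode."""
    if holder.priority < requester.priority:
        # Priority inheritance: the holder runs at the elevated priority
        # until it terminates (it is not demoted even if the requester aborts).
        holder.priority = requester.priority
    wait_queue.append(requester)   # the requester always blocks under 2PL-WP
```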

Haritsa et al. [HAR92] stated that the 2PL-WP algorithm retains the resource-conservation features of 2PL. In

addition, it reduces blocking time of high priority transactions by increasing the priority of the conflicting

lower-priority lock holders – these low-priority transactions execute faster and therefore release their locks

earlier. A drawback of the algorithm is that the blocking times of high-priority transactions are still uncertain in their duration [HUA 91b]. In fact, under high data contention, 2PL-WP could result in most or all of the trans-

actions in the system executing at the same priority. In this situation, the behavior of the  RTDB system would

effectively reduce to that of a conventional DBMS [LEE96].

2PL High-Priority ( 2PL-HP)

The 2PL High-Priority (2PL-HP) [ABB88, ABB89] algorithm was presented in the previous chapter as “prior-

ity-based Wound-Wait” under conflict resolution. The scheme modifies the basic 2PL protocol by incorporat-

ing the priority of a transaction in resolving a conflict, which ensures that high-priority transactions are not delayed by low-priority transactions. The 2PL-HP scheme resolves all data conflicts immediately in favor of

the transaction with the higher-priority. In particular, when a transaction requests a lock on a data object held

by another lower-priority transaction in a conflicting mode, the lock holding transaction is aborted/restarted

and the requester is granted the lock. If the requester had a lower-priority than that of the holder, it would wait

for the requested data object to be released. In addition, a new reader can join a group of readers only if its pri-

ority is higher than that of all writers waiting for the lock. A secondary benefit of the 2PL-HP scheme in addi-

tion to not being subjected to priority-inversion is that it is free of deadlocks.
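A minimal sketch of the 2PL-HP conflict rule follows; the abort/grant callbacks are hypothetical hooks into the lock manager, and the additional rule governing a new reader joining a group of readers is omitted.

```python
def on_lock_conflict_hp(requester, holder, abort, grant, wait_queue):
    """abort(t) restarts transaction t; grant(t) gives the requested lock to t.
    Both are hypothetical hooks into the lock manager."""
    if requester.priority > holder.priority:
        abort(holder)                  # restart the lower-priority holder
        grant(requester)               # and hand the lock to the requester
    else:
        wait_queue.append(requester)   # lower-priority requesters wait
```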


A drawback of the 2PL-HP algorithm is that a transaction may be restarted by a higher-priority transaction

that later misses its deadline and is discarded. This means that the restart did not result in the higher-priority

transaction meeting its deadline, apart from the loss of system resources due to the restart. Therefore, such

wasted restarts may result in performance degradation. In addition, 2PL-HP loses some of the beneficial blocking behavior of basic 2PL due to the partially restart-based nature of the High-Priority scheme [HAR 92].

The experimental results of [ABB89, HUA 91b] showed that real-time concurrency control algorithms based on High-Priority perform considerably better than those based on Priority-Inheritance. Furthermore, it was found in [STA91] that 2PL-HP outperforms the 2PL-WP scheme in the context of a RTDB environment, and it was

concluded in [STA91] that the basic priority-inheritance is inappropriate for conflict resolution under 2PL.

The Priority Ceiling Protocol

Another technique that has been proposed for solving the priority inversion problem in locking-based protocols is the Priority-Ceiling Protocol (PCP) [SHA91]. The PCP protocol is as follows:

1.  When a lower-priority transaction, TL, blocks the execution of a higher-priority transaction, TH, TL inher-

its the priority of TH and executes at such elevated priority until completion. If there is more than one

higher priority transaction being blocked on the same data object, then the lowest-priority transaction in-

herits and executes at the highest-priority among all blocked transactions.

2.  A total priority ordering must be established among all active transactions, which can be achieved by de-

fining three parameters for each data object in the database: write priority ceiling, absolute priority ceil-ing, and read-write priority ceiling.

•  Write ceiling of a data object is the highest priority of the transactions that may write the object.
•  Absolute ceiling of a data object is the highest priority of the transactions that may read or write the object.

•    Read-Write ceiling of a data object is set dynamically at run time. When a transaction writes a data

object, the read-write ceiling is set equal to the absolute ceiling. However, the read-write ceiling is set

equal to the write ceiling for read operations.

A transaction cannot obtain a read or write lock on a data object unless the ceiling-rule is satisfied, which states that the priority of the requesting transaction must be higher than the read-write ceiling of all data objects currently locked by other transactions. The protocol is shown to be deadlock-free and single-blocking [SHA 91]

. Note that single-blocking means

once a transaction starts executing after being blocked, it may not block again.
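A minimal sketch of the ceiling rule follows, assuming precomputed per-object ceilings and numerically larger values for higher priorities; the object and transaction attributes are illustrative names only.

```python
def rw_ceiling(obj, locked_for_write: bool) -> int:
    """Read-write ceiling: equals the absolute ceiling while the object is
    write-locked, and the write ceiling for read accesses."""
    return obj.absolute_ceiling if locked_for_write else obj.write_ceiling

def may_lock(transaction, locked_objects) -> bool:
    """Ceiling rule: the requesting transaction must have a priority strictly
    higher than the read-write ceiling of every object locked by others."""
    others = [o for o in locked_objects if o.holder is not transaction]
    return all(transaction.priority > rw_ceiling(o, o.write_locked) for o in others)
```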

One of the properties of this protocol is that transactions with priorities that are lower than or equal to the

current write priority ceiling are not allowed to read the data object. Thus, lower-priority transactions may not

access the same data object even in compatible modes. Such a pessimistic measure is taken to ensure that a fu-

ture high-priority transaction will not block on multiple readers if it wishes to perform a write operation.


The protocol is mainly designed to ensure that if a higher-priority transaction, TH, is ever blocked by a

lower-priority transaction, TL, then it is blocked by at most one lower-priority transaction. The priority-ceiling

protocol presented here is a counter-technique to that we presented for conventional real-time systems. How-

ever, in a conventional real-time system, the conflict could last as long as executing a critical section and free-

ing the corresponding semaphores. In a RTDB system with 2PL, the blocking period is the lifetime of TL. Thus,

if TL is a long transaction; i.e., a transaction whose duration exceeds the deadline of T H, or performs many I/O

operations, thereby extended delays, then its completion may not occur until after the deadline of TH. Such a

behavior seems to defeat the purpose of priority assignment and time-driven schedulers. Therefore, the original

technique assumes main-memory databases to circumvent I/O delays, but what about long-duration lower-priority transactions? Furthermore, the technique is very pessimistic and could lower the degree of concurrency

due to denying lower-priority transactions access to data objects that are being accessed by higher-priority transactions, even in compatible modes. Hence, this technique does not only force execution to be serializable, but rather seems to force execution to be serial. Huang et al. [HUA 91b, HUA92] stated that the overall execution of

the priority ceiling protocol would be a serial execution in the order of the ceilings. Based on these observa-

tions, we strongly believe that this technique along with the entire philosophy of priority inheritance is inade-

quate and unsuitable for RTDB systems.

Due to the deficiency(s) of the priority-ceiling protocol that we have presented in this section, other re-

searchers attempted to either remedy the protocol or avoid its use altogether. An attempt to remedy the priority-ceiling protocol was made by Nakazato and Lin [NAK93]

by using what they called the Convex-Ceiling protocol

in an attempt to reduce the blocking period of the priority ceiling protocol. Another attempt was made by Lam

et al. [LAM 97] by using dynamic adjustment of serialization order12.

Conditional Priority-Inheritance (CPI)

Huang et al. [HUA 91b] proposed a combined priority inheritance and priority abort scheme, called Condi-

tional Priority Inheritance (CPI ). The basic idea behind the scheme is the following: when priority inversion is

detected, if the lower-priority transaction is near completion, it inherits the higher-priority involved in the

conflict; otherwise, the lower-priority transaction is aborted. Consequently, the scheme lowers the amount of 

wasted resources by avoiding aborting the lower-priority transactions that are near completion, and avoids long

blocking delays experienced by high priority transactions.
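A minimal sketch of the CPI decision follows, using the static step-count threshold of [HUA 91b]; the step-count fields and the abort hook are hypothetical names used for illustration.

```python
def on_priority_inversion_cpi(holder, requester, threshold_steps, abort):
    """holder is the lower-priority lock holder; requester is the blocked
    higher-priority transaction; abort() is a hypothetical restart hook."""
    remaining = holder.total_steps - holder.executed_steps
    if remaining <= threshold_steps:
        # Near completion: let the holder finish at the elevated priority.
        holder.priority = max(holder.priority, requester.priority)
    else:
        abort(holder)    # far from completion: abort the holder instead
```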

For conditional priority inheritance to work, one must know precisely what it means to be near comple-

tion. Huang et al. [HUA 91b, HUA92] assume that the transaction's length, defined as the number of steps, is known in advance. Therefore, Huang et al. define a threshold value to measure the remaining number of steps to be executed by the lower-priority transaction involved in a conflict. Based on the threshold value, one

can decide whether to elevate the lower-priority or abort the corresponding transaction.

12 Dynamic adjustment of serialization order is the subject of section 3.6.


Huang et al. [HUA 91b, HUA92] defined the threshold based on the number of steps remaining to be executed by

the lower-priority transaction involved in a conflict. However, Huang et al. did not consider the semantics of 

these remaining steps. That is, since different operations require different amounts of time, it is possible to

have a single step requiring more time than many other steps due for example to the order of magnitude differ-

ence between the speed of the CPU and I/O subsystem. Such a delay is not only due to I/O operations, but also

can be the result of loops, conditional statements, and communication channels and their associated failures

and overhead. Furthermore, there is no guarantee that the remaining few steps, which satisfy the threshold-

test, will not be subjected to deadlocks and/or chained-blocking.

The problem of the CPI protocol is that it is incompatible with  RTDB transactions. Transactions in  RTDB

systems are subject to deadlines, which are measured by clock ticks. On the other hand, the CPI protocol is

deadline-incognizant. It uses the number of steps remaining to execute, which is incompatible with and insensitive to

deadlines. Therefore, a high priority transaction could miss its deadline although the threshold test was satis-

fied. If one wishes to use the CPI scheme, then one must quantify the remaining amount of time left to execute,

and the threshold should not be static. Rather, the threshold should be a dynamic measure that accurately re-

flects the deadline of the higher-priority transaction involved in a conflict. Only then, one can confidently al-

low a conflicting lower-priority transaction to continue executing without jeopardizing the higher-priority

transaction and comply with real-time priority-driven transaction scheduling.

3.3. Optimistic Concurrency Control

Optimistic Concurrency Control (OCC ) revolves around the concept of validation, which is also known as

certification, of transactions operations at the end of their execution. OCC does not require any checking to be

done during the execution of a transaction. The idea behind OCC is to do all the necessary checks at once (at

the end), so that transaction execution proceeds with a minimum overhead and without blocking delays. If 

there is little contention and interference among the transactions, most transactions will be validated and

committed successfully. However, the more interference there is among the transactions, the more transactions

will be aborted and restarted; thus, reducing system resource utilization[AGR87, BER87, ELM 94]

.

In classical OCC  [KUN 81]

, transactions read and update data items freely, storing their updates into a private

workspace. These updates are made public at commit time. Before a transaction is allowed to commit, it has to

pass a validation test. Validation tests check whether there is a conflict between the validating transaction and

other transactions that have committed since the validating transaction began its execution. The validating transaction is restarted if it fails this test. This technique is also known as backward validation.

Since writes effectively occur at commit time, the serialization order selected by an OCC is the order in

which the transactions commit. In other words, the effect of a collection of transactions is the same as if each transaction were executed atomically at its commit time [GRA 92].

OCC -Broadcast Commit (OCC -BC )


A variant of this backward technique is a forward OCC, which incorporates a Broadcast-Commit [MEN 82, ROB 82, and HAR 90a]. The classical OCC is changed a little to include Broadcast-Commit in order to suit RTDB

systems environments. In the resulting algorithm, OCC-BC , when a transaction commits, it notifies other cur-

rently running transactions which conflict with it, and these conflicting transactions are immediately restarted.

Note that there is no need to check for conflicts with already committed transactions, because if the currently

validating transaction were in conflict with any committed transaction, then it would have been restarted be-

fore it reached the current validating state. This implies that once a transaction reaches its validating phase, it

is guaranteed commitment. The broadcast-commit method detects conflicts earlier than the basic OCC algo-

rithm, resulting in both, earlier restarts and less wasted resources, which increases the chances of meeting

transactions’ deadlines [HAR 90a, HAR 90b].
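A minimal sketch of forward validation with broadcast commit follows, assuming set-valued read_set/write_set fields on transactions; the restart and write-installation callbacks are hypothetical hooks into the system.

```python
def validate_and_commit(validating, active_transactions, restart, apply_writes):
    """restart(t) aborts and resubmits transaction t; apply_writes(t) installs
    t's deferred updates. Both are hypothetical hooks."""
    for t in active_transactions:
        if t is validating:
            continue
        if validating.write_set & t.read_set:
            restart(t)            # broadcast the conflict: restart the reader now
    apply_writes(validating)      # a validating transaction always commits
```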

In the rest of this section, we address various OCC protocols, namely Priority-Sacrifice, Priority-Wait, and

Wait-50 [HAR 90b, and HAR 92].

OCC -Sacrifice

The OCC-Sacrifice algorithm [HAR 90a, HAR 92] modifies the OCC-BC protocol by incorporating a priority

sacrifice mechanism. Define a conflict-set  to be the set of currently running transactions that conflict with the

validating transaction. In the OCC-Sacrifice scheme, a transaction that reaches its validation stage checks for

conflicts with currently executing transactions. If conflicts are detected and one or more of the transactions in

the conflict-set has a higher priority, then the validating transaction is restarted. That is, the validating trans-

action is sacrificed in an effort to help the conflicting higher-priority transactions make their deadlines.

OCC-Sacrifice satisfies the goal of giving preferential treatment to higher-priority transactions. However,

it suffers from the potential problem of wasted sacrifices, where a transaction is sacrificed on behalf of another

that is later discarded. Such sacrifices are useless and cause performance degradation. This drawback of  OCC-

Sacrifice is analogous to the wasted-restart problem of 2PL-HP.

OCC -Wait

The OCC-Wait  algorithm [HAR 92] modifies the OCC-BC protocol by incorporating a priority wait mecha-

nism. In this algorithm, a transaction that reaches its validation phase and finds higher-priority transactions in its

conflict-set is forced to wait. This waiting period gives the higher-priority transactions a chance to make their

deadlines first. While a transaction is waiting, it is possible that it will be restarted due to the commit of one of 

the conflicting higher priority transactions. There are several features of OCC-Wait scheme that may have a

positive impact on performance:

•  Precedence is given to high-priority transactions, thus helping them to meet their deadlines.

•  The problem of wasted sacrifices does not exist here because the waiting transaction cannot be restarted by a higher-priority transaction that later misses its deadline.

•  If all conflicting higher-priority transactions become tardy, then the waiting transaction can commit.


•  Since transactions wait instead of immediately restart, a blocking effect is derived, which conserves re-

sources.

•  The fact that a higher-priority transaction commits does not necessarily imply that the waiting transaction

will be restarted. This is because if the waiting transaction conflicts with a higher-priority transaction, the

converse may not be true. That is, data conflicts may be unidirectional [ROB 82]. A validating lower-priority

transaction Ti conflicts with an active higher-priority transaction T j only if:

Write-set(Ti) ∩ Read-set(Tj) ≠ ∅.
However, for Tj to conflict with Ti, we need: Write-set(Tj) ∩ Read-set(Ti) ≠ ∅.
Thus, Write-set(Ti) ∩ Read-set(Tj) ≠ ∅ does not by itself mean that Tj conflicts with Ti. Therefore, it is possible to

have Ti conflict with T j, while the converse does not hold true. Hence, by reversing the committing order,

T j → Ti instead of Ti → T j, (committing T j before Ti) both transactions can commit without restarting ei-

ther one. Based on this observation, if there are no other conflicting higher-priority transactions in unidi-

rectional conflicts, then the waiting transaction can commit immediately after the conflicting higher-

priority transaction has committed. Thus, the OCC-Wait scheme has the potential to actually eliminate some data conflicts [HAR 90b]. Note that the set of lower-priority transactions whose execution order can be adjusted with respect to the validating transaction is known as the reconcilable set in the "Dynamic Adjustment of Serialization Order" scheme, the subject of section 3.6; a sketch of this unidirectional-conflict test follows.
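A minimal sketch of the unidirectional-conflict test follows, again assuming set-valued read_set/write_set fields introduced for illustration.

```python
def conflicts_with(a, b) -> bool:
    """True if validating transaction a conflicts with active transaction b,
    i.e. a's write set intersects b's read set."""
    return bool(a.write_set & b.read_set)

def can_commit_after(waiting, higher) -> bool:
    """If the conflict is unidirectional (higher does not also conflict with
    waiting), committing higher before waiting lets both transactions commit."""
    return conflicts_with(waiting, higher) and not conflicts_with(higher, waiting)
```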

While the waiting scheme appears to have many positive features, it has some drawbacks as well[HAR 92]

:

•  If a transaction finally commits after waiting for some time, it causes all of its conflicting lower-priority

transactions to be restarted at a later point in time. The delayed restarts decrease the chances that these

transactions will meet their deadlines in addition to wasting more resources.

•  Blocking causes an increase in the average number of transactions in the system, thus generating more

conflicts and a greater number of restarts. The validating transaction may develop new conflicts during its waiting period, which increases the size of its conflict-set, thereby leading to more restarts [HAR 92].

Wait-50

The Wait-50 algorithm is an extension of the OCC-Wait algorithm. It incorporates a wait control mecha-

nism, which monitors the transaction’s conflict state and dynamically decides when a validating transaction

should be made to wait for higher-priority transactions in its conflict-set. In the Wait-50 algorithm, a validating transaction is made to wait only while at least 50% of the transactions in its conflict-set have higher priorities. That is, a transaction is made to wait only if half or more of its conflict-set is composed of higher-priority transactions [HAR 92].
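A minimal sketch of the Wait-x decision follows; x = 50 gives Wait-50, while x = 0 and x = 100 degenerate to OCC-BC and OCC-Wait, respectively.

```python
def should_wait(validating, conflict_set, x: float = 50.0) -> bool:
    """conflict_set: the active transactions conflicting with the validating
    one. Wait while at least x% of them carry a higher priority."""
    if not conflict_set:
        return False
    higher = sum(1 for t in conflict_set if t.priority > validating.priority)
    return 100.0 * higher / len(conflict_set) >= x
```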


OCC-Wait and OCC-BC represent the two extremes: OCC-Wait always waits for all conflicting higher-priority transactions, while OCC-BC never waits. Wait-50 is a hybrid approach that dynamically controls the amount of waiting. We can view OCC-BC, Wait-50, and OCC-Wait as all being special cases of a general algorithm Wait-x, where x is the cutoff percentage level, with x taking the values {0, 50, 100}, respectively.

Various experiments were conducted in [HAR 90b] regarding the cutoff percentage level.

•  Lowering the cutoff value to 25% results in a slight improvement of normal load performance, but worsens

the heavy load performance. This behavior is due to the increased wait factor that is delivered by the low-

ered cutoff value.

•  Raising the cutoff value to 75% has the opposite effect.

The experiments conducted in [HAR92] showed that at light loads, where data contention levels are low,

waiting is always beneficial. On the other hand, at heavy loads when data contention is high, waiting can de-

grade performance. Wait-50 is effective in dynamically making this transition and therefore provides a good

performance across the entire range of loading. We do not understand why Haritsa et al. did not make x a dynamic value adjusted at run time based on the system's load, possibly to avoid the associated overhead of monitoring the system's load and dynamically adjusting the conflict-set.

The results of the experiments conducted in [HAR 92] showed that the control mechanism of Wait-50 is very effective and provides the best overall performance among optimistic algorithms, over a wide range of work-

loads and operating conditions. Due to the relatively good performance of the Wait-50 scheme, it was con-

cluded that utilization of priority information in conflict resolution improves the performance of  OCC proto-

cols.

Pessimistic vs. Optimistic under RTDB Environment

In conventional database systems, a blocking-based conflict resolution policy conserves resources, while an optimistic approach with its restart-based conflict resolution policy tends to waste resources. In RTDB systems,

these protocols tend to behave differently. That is, 2PL-HP loses some of the blocking advantages of basic 2PL

due to the abort/restart of lower priority transactions involved in a conflict. On the other hand, optimistic pro-

tocols with the broadcast method tend to detect conflicts earlier than the basic OCC algorithm, resulting in

earlier restarts; thus, less wasted resources. In general, blocking-based algorithms tend to reduce the degree of 

parallelism as they construct serializable schedules; meanwhile, optimistic approaches attempt to increase par-

allelism to its maximum, after which they prune some transactions in order to satisfy serializability. The delay

in detecting conflicts by the optimistic approach is actually advantageous. In 2PL-HP, a transaction could be

restarted by, or wait for, another transaction that will be aborted later. Such restarts and/or waits are useless

and cause performance degradation. However, in optimistic approaches when incorporating the broadcast-

commit scheme, only validating transactions can cause restarts of other transactions. Moreover, since all validating transactions are guaranteed commitment and completion, all restarts generated by such an optimistic algorithm are useful [HAR 90a, HAR 90b, and HAR 92].


OCC  has the advantages of being non-blocking and free of deadlocks, which is desirable for real-time

systems. Owing to its potential for a high degree of parallelism, OCC is expected to outperform 2PL when in-

tegrated with priority-driven CPU scheduling in RTDB systems [HAR 90a, HAR 90b, HAR 92, KAO 95, STA 91, BES96, and LEE96]. However, the overall effects and impact of the overheads involved in implementing real-time OCC were investigated on the RT-CARAT testbed in [HUA 89, HUA 91a]. The study reported that the blocking time under optimistic

protocols was limited and more predictable compared with 2PL. The study showed optimistic algorithms to

outperform locking protocols under low data contention. However, at high data contention, locking protocols

outperformed optimistic protocols. These results differ from those found in [HAR 90a, HAR 90b].

An important result found in [HAR 92] is that there is a crossover point in the database size. Below the

crossover point, the data contention is high, which causes a locking-based concurrency control protocol; i.e.,

2PL-HP, to demonstrate the negative effect of blocking approaches; i.e., wasted restarts. However, above the

crossover point, the data contention is reduced due to the increase in database size, causing 2PL-HP to behave like a conventional DBMS and still cope with the timing constraints of a real-time system. Thus, while an opti-

mistic approach, Wait-50, outperforms 2PL-HP below the crossover point, the two algorithms switch their

performance superiority above the crossover point. These results explain the contradiction in the results found

in [HAR 90a, HAR 90b] and [HUA 91a]. The studies of Haritsa et al. considered small database size; i.e., below that of 

the crossover point, while the studies of Huang et al. considered large database size that was above the cross-

over point.

Figure (3.1): Deadline-miss percentage versus database size (DB-Size) for the optimistic and 2PL-HP protocols, with the crossover point separating the high-contention and low-contention regions.

Previous performance studies on conventional database systems in [AGR87] showed that locking algorithms

that resolve data conflicts by blocking transactions outperform restart-based algorithms in an environment

where physical resources are limited. Also, the work showed that if resource-utilization is low enough so that

a large amount of wasted resources can be tolerated, and there is a large number of transactions available to

execute, then a restart-based algorithm is a better choice.


The experiments conducted in [HAR 92] demonstrate that under sufficiently high data contention, optimistic algorithms outperform locking algorithms over a wide range of system loading and resource availability under the assumption of firm deadlines. Such results have recently been reconfirmed in [LEE96].

The general result of the study conducted in [LEE96] is that results from previous performance studies on concurrency control in RTDB systems, including [HAR90, HAR92, and HUA 91a], are not contradictory at all. The studies

are all correct within the limits of their assumptions, particularly their assumptions about resource availability

and policy for dealing with tardy transactions. Thus, a reasonable model for any set of assumptions is critical

for the relative performance studies of concurrency control algorithms in  RTDB systems.

3.4. Speculative Concurrency Control

A major disadvantage of the classical OCC when used in  RTDB systems is that transactions’ conflicts are

not detected until the validation phase, at which time it may be too late to restart. OCC-BC attempts to solve

this problem by notifying all concurrently running conflicting-transactions of the commitment of a conflicting

transaction. The OCC-BC detects conflicts earlier than the basic OCC protocol. The major weakness of OCC 

protocols in general is that an OCC protocol ignores the occurrence of a conflict between two transactions until

the validation phase of one of them, at which time it might be too late to correct the problem by restarting one

of the transactions.

Based on this observation, Bestavros [BES93, BES94, and BES96] introduced a new class of concurrency control

protocols that is especially designed to suite  RTDB applications. This class of protocols is called Speculative

Concurrency Control (SCC ), which relies on the use of redundant computations to produce serializable sched-

ules at early stages, thereby having a better chance to meet prescribed deadlines.

SCC protocols allow conflicting transactions to proceed concurrently while detecting conflicts as soon as

they occur. That is, SCC  protocols combine pessimistic and optimistic protocols to achieve their advantages

while avoiding their disadvantages. In SCC protocols, a new version (shadow) of the conflicting transaction is

initiated as soon as a conflict is detected and another serializable schedule is constructed using the newly initi-

ated version. The primary version executes as any transaction would under OCC protocols – ignoring any con-

flict that may develop during the course of its execution. Meanwhile, the shadow-version executes as any

transaction would under pessimistic protocols – subjected to locking/blocking and restarts. The purpose of the

shadow version is to keep a clean version (a version without any conflicts) in case it is ever needed. When a

transaction reaches its validation phase and forces another transaction, T, to be aborted, T would not have to

restart from the beginning. Rather, the shadow version is promoted to become the primary version and T would

resume execution on the primary version. Once the shadow becomes a primary, it starts executing as any trans-

action would under an OCC protocol. In addition, a new shadow will be created as soon as a conflict is detected.

Thus, after a transaction encounters a conflict, there will be more than one version of the transaction in the

system, each with its own computation. The version that ensures serializability is the one to be committed.


The period between the moment of a conflict occurrence and the moment of aborting/restarting a transac-

tion is essentially lost, or unutilized, in OCC protocols. Such a period is utilized or invested in the construction

of the shadow version(s) in SCC protocols. Thus, an aborted/restarted transaction has a better chance of meet-

ing its deadline. Notice that for performance purposes, a newly initiated shadow does not have to start from the

beginning. Rather, it can take a consistent execution state from another shadow, if there is one. Clearly, SCC is a better class of concurrency control protocols and is more adequate for RTDB systems where time is of the essence.

Updates made by each transaction are made on local copies and therefore are not visible until the updating

transaction is committed. SCC protocols adopt a forward validation. That is, the set of objects read by all active

transactions is checked against the set of objects written by the validating transaction. The general scheme is as

follows:

•  When a transaction starts a new execution, a primary shadow is created.

•  When a potential conflict is detected, a new shadow is created, and a new shadow is created for every

potential conflict within an earlier shadow.

•  When a shadow reaches the operation that caused the conflict that initiated it, it blocks.

•  When the primary version commits, all of its associated shadows are discarded, along with any other shadows whose serializability depended on the discarded shadows.

Bestavros [BES93, BES94, and BES96] discussed various versions of the SCC protocols. The first class is known as SCC-OB (Order-Based). It is argued in [BES94, BES96] that SCC-OB would generate O(n!) shadows for every transaction. Due to this prohibitive number of shadows and the associated overhead, SCC-OB is modified into

SCC-CB (Conflict-Based ). The modified scheme contains only one shadow for each conflict between two

transactions. That is, there will be one primary and (n-1) shadow version(s) for each one of the n active trans-

actions. Thus, there will be a maximum of O (n2), shadows for each transaction during the course of its execu-

tion.

To further limit the number of shadows along with the associated maintenance overhead, Bestavros [BES94, BES96] proposed the k-Shadow SCC (SCC-kS) class of protocols. The SCC-kS protocols allow at most k shadows to execute on behalf of any uncommitted transaction in the system.

The last member of these multiple shadow protocols is SCC-2S (Two-Shadow), which allows a maximum

of two shadows per transaction: a primary version and a single standby shadow. The primary version runs un-

der optimistic assumptions, whereas the standby shadow runs under pessimistic assumptions as we explained

above.

The simulation conducted in [BES 94] showed that the performance superiority of SCC protocols over that of OCC becomes more pronounced as the deadlines become tighter; i.e., as the slack time becomes smaller. Furthermore, the performance degradation of SCC-2S is negligible as data contention increases, whereas it se-


verely degrades for OCC-BC. Such a performance improvement was reported for soft as well as firm deadlines.

Another technique, known as Alternative Version Concurrency Control (AVCC), was developed in a separate effort at the University of Florida by Hong et al. [HON95]; it is basically similar to the SCC protocols mentioned above.

As an enhancement to these protocols, Yoon and Park [YOO97] suggested that, by dynamically adjusting the serialization order, the initiation of a new shadow might not be necessary. Rather, a conflicting lower-priority

transaction can be stopped and resume its execution after the commitment/abortion of the higher-priority

transaction. Such an approach can significantly reduce the number of versions of a single transaction along

with the associated overhead of managing alternate versions. In addition, it can conserve system resources that

otherwise would have been spent on the newly initiated version(s)/shadows.

3.5. Multiversion Concurrency Control

In a single version locking-based concurrency control, conflicting operations on a data item prevent each

other from accessing the data item at the same time. Such conflicts and the associated locking/blocking delays

can be avoided by using multiple versions of data items. In multiversion concurrency control protocols, the

old values of updated data objects are kept while new updates create new versions of the updated data items.

Thus, several versions of the same item may exist at any moment. When a value is updated, the update is not

in-place; rather, a new version is created and the old version is retained. Since write operations produce new

versions of data items, different write operations do not conflict, and read operations can access older versions

of the requested data items. Thus, the degree of concurrency can be increased with a corresponding reduction

in rejected operations [KIM91].

Maintaining multiple versions may not add much to the cost of concurrency control[BER87]

, because the

versions may be needed by the recovery subsystem. The obvious cost of maintaining multiple versions is stor-

age space. Thus, to control this storage requirement, versions must be periodically purged or archived.

Since write operation(s) create new version(s) of an updated data item  x, the data manager keeps a list of 

various versions of x. For each Read[x] operation, the scheduler must decide which version of x is required; thus, Read[x] must be appropriately mapped into Read[xi]. When the scheduler decides to assign a particular

version of  x to a read operation, the value returned is one produced by either an active transaction or a com-

mitted transaction. Recoverability requires that if the value returned for x was one that was produced by a cur-

rently active transaction, then the reader’s commitment must be delayed until after the commitment of the ver-

sion’s producer. However, if the producer aborts, then the reader must also be aborted, since the abort process

invalidates the involved version [BER87].
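A minimal sketch of mapping Read[x] onto a particular version and of the recoverability bookkeeping follows; the choice of always returning the newest version and the simple dependency set are simplifying assumptions made for illustration, not the policy of any specific protocol.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Version:
    value: object
    producer: str            # transaction id that wrote this version
    committed: bool = False

@dataclass
class Item:
    versions: List[Version] = field(default_factory=list)   # oldest .. newest

def read_version(item: Item, reader_deps: set) -> Optional[object]:
    """Map Read[x] to a specific version; if its producer is still active,
    record a commit dependency so the reader commits only after the producer
    (and aborts if the producer aborts)."""
    if not item.versions:
        return None
    v = item.versions[-1]
    if not v.committed:
        reader_deps.add(v.producer)
    return v.value

def write_version(item: Item, value, producer: str) -> None:
    """Writes are not in place: append a new version and keep the old ones."""
    item.versions.append(Version(value, producer))
```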

Furthermore, to maintain serializability, the commitment of a transaction that produced a new version of 

 x, must be delayed until after the commitment of all transactions that read an older version of  x. Otherwise,


the commitment of the producer will violate serializability. Thus, controlling the order of commitment is very

crucial to achieving the potential of increased concurrency under multiversion concurrency control techniques.

In general, multiversion concurrency control protocols are based on view-serializability instead of conflict-serializability [ELM 94]. Therefore, verifying whether a given schedule is serializable is an NP-Complete problem [PAP84]. Consequently, while it is possible in single-version concurrency control techniques to construct a sched-

ule and verify its serializability via an acyclic serialization graph[BER87]

, such an approach is not feasible for

multiversion concurrency control schemes. Rather, one must properly control the production of multiple ver-

sions and the corresponding serializable read operations one at a time as they are performed. Therefore, time-

stamp ordering or locking techniques must be used in conjunction with multiple versions.

An important open research area is whether these timestamping and locking techniques are actually suit-

able for RTDB systems. In addition, how would one incorporate the new parameters, i.e., priorities, deadlines,

and temporal-consistency, in managing multiversion concurrency control? Would these new parameters pose

new limitations to the benefits achievable from multiversion concurrency control, or would they actually ease

the management of multiple versions of data?

An initial attempt to answer these questions was made by Kim and Srivastava [KIM91]. However, due to space limitations, we omit the discussion of their proposed algorithms.

3.6. Dynamic Adjustment of Serialization Order

Let TH and TL be two transactions, with TH having the higher priority. If TL writes a data object before TH

reads it, then TL precedes TH in the serialization order as reflected by the execution history. Current

concurrency control protocols resolve the conflict either by blocking TH until TL releases the object or by

aborting and restarting TL in order to speed up the lock release. Neither solution is truly adequate in a real-time

context. Blocking could cause high priority transactions to miss their deadlines. While restarting lower-priority

transactions also subjects them to missing their deadlines in addition to wasting systems resources.

Lin and Son [LIN90] proposed a protocol known as Real-Time Locking (RTL) that uses locking and dynamic

adjustment of serialization order for conflict resolution. Rearranging the serialization order in the execution

history can significantly reduce both blocking and aborting. That is, if TL mentioned above is not committed

yet, the serialization order is rearranged such that TH precedes TL in the execution history. Consequently,

higher priority transactions can be executed first and they are not blocked by uncommitted conflicting lower-

priority transactions. In addition, lower-priority transactions may, or may not, be aborted/restarted as a result

of such serialization adjustment, thereby reducing the frequency of abort/restart operations.

In order to implement the dynamic adjustment of serialization order, RTL executes transactions in phases, as outlined next.


•  In the first phase, called the Read-phase, a transaction reads from the database and writes to its local workspace, as in OCC. However, unlike OCC, where conflicts are resolved only in the validation phase, RTL resolves conflicts in the read phase using transaction priority.

•  In the Wait-phase of RTL, a transaction is forced to wait until its commitment. A transaction is allowed to commit only if all higher-priority transactions that must precede it in the serialization order are either committed or aborted. Once a transaction in its wait phase gets the chance to commit, it switches to its write-phase and releases its read locks. In addition, once a transaction commits, all the uncommitted transactions that must come before it in the serialization order need to be aborted (see the sketch after this list).

•  In the Write-phase of  RTL, the final serialization order is determined, and updates are made permanent to

the database. The use of a phase-dependent control and local workspace for transactions also provides the

potential for a high degree of concurrency.
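The wait-phase test described above can be sketched as follows. We assume each transaction object records, in a preceding set, the transactions that must come before it in the dynamically adjusted serialization order, together with priority, committed, and aborted attributes; this bookkeeping is our own simplification and is not the exact data structure of [LIN90].

```python
def may_commit(txn):
    # A waiting transaction may commit only if every higher-priority
    # transaction that must precede it has already committed or aborted.
    return all(t.committed or t.aborted
               for t in txn.preceding if t.priority > txn.priority)

def commit(txn):
    # On commit, the uncommitted transactions still serialized before txn can
    # no longer keep that position and are therefore aborted, as noted above.
    txn.committed = True
    for t in txn.preceding:
        if not t.committed and not t.aborted:
            t.abort()
```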

The reader should note that adjusting the serialization order in OCC protocols is a recognition of the unidirectional conflict that we mentioned earlier under the OCC-Wait protocol.

Other refinements and versions of the technique can be found in [SON92, LEE93, LAM95]. The study conducted in [LEE93] proposed a concurrency control technique that dynamically adjusts the serialization order of conflicting transactions. The technique is called optimistic concurrency control with time intervals (OCC-TI), which is a

priority-ignorant scheme. In the OCC-TI scheme, conflicting transactions at the validation phase are divided into two classes: reconcilable and irreconcilable transactions. The serialization order of the reconcilable transactions is dynamically adjusted at the validation phase with respect to the validating transaction in an attempt not to restart the reconcilable transactions, thereby reducing the number of restarts. On the other hand, the ir-

reconcilable transactions are those transactions whose serialization order cannot be adjusted with respect to the

validating transaction and therefore, either the validating transaction or the irreconcilable set of transactions

must be restarted. Notice that priority information has not been used in the construction of either class. For instance, if 50% of the conflicting transactions were reconcilable, then OCC-TI could spare 50% of the restarts, and the larger the conflicting set becomes, the more significant the performance improvement will be. The OCC-TI scheme was shown in [LEE93] to outperform all other OCC protocols, including priority-cognizant optimistic protocols such as Wait-50.
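The partition into reconcilable and irreconcilable transactions can be pictured with serialization intervals. The sketch below is a deliberate simplification of the interval bookkeeping in [LEE93]: each active transaction is assumed to carry an interval (lb, ub) of candidate integer serialization timestamps, and a conflicting transaction is reconcilable exactly when its interval can be squeezed to one side of the validating transaction's final timestamp without becoming empty. The function and attribute names are ours.

```python
def partition_conflicts(validating_ts, conflicts):
    """conflicts: list of (txn, must_precede) pairs, where must_precede is True
    if the conflict forces the active transaction before the validating one."""
    reconcilable, irreconcilable = [], []
    for txn, must_precede in conflicts:
        lb, ub = txn.interval
        if must_precede:
            ub = min(ub, validating_ts - 1)   # squeeze below the validator
        else:
            lb = max(lb, validating_ts + 1)   # squeeze above the validator
        if lb <= ub:
            txn.interval = (lb, ub)           # adjustable: no restart needed
            reconcilable.append(txn)
        else:
            irreconcilable.append(txn)        # adjustment impossible
    return reconcilable, irreconcilable
```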

We have mentioned earlier that Haritsa et al. [HAR 90b, HAR92] claimed that the incorporation of priorities in resolving conflicts is a main determinant in enhancing the performance of OCC protocols. Meanwhile, the studies conducted in [LEE93, DAT97] suggested that the main determinant of the performance of real-time concurrency control protocols is the number of restarts. In addition, they proposed that reducing the number of restarts is the major factor that can overcome the performance limitations of such protocols. An important question would be: would the incorporation of priorities in the conflict resolution of OCC protocols reduce restarts, and by how much? The rest of this section examines this matter, and it poses more questions than it answers.


In order to reduce the number of restarts, Datta et al. [DAT97] attempted to use priority information in conflict resolution protocols that dynamically adjust the serialization order, i.e., OCC-TI. It was mentioned in [DAT97] that a better choice would be to restart the validating transaction and allow the irreconcilable transactions to proceed, provided that:

•  The deadline of the validating transaction is far enough away that it can still meet its deadline if restarted, and

•  The irreconcilable set of transactions is large enough to justify such a risky sacrifice.

Consequently, both the restarted validating transaction and the irreconcilable set of transactions have a better chance of meeting their deadlines.

In order to establish any confidence in meeting a transaction's deadline if restarted, Datta et al. [DAT97] attempted to bound the transaction's required execution time on a restart run. Datta et al. [DAT97] claimed that this is possible based on the assumption that the restarted transaction will access the same data set it had accessed

during its first run13, and since it had reached its validating phase, it must have its entire data set in main

memory. Thus, all I/O operations are eliminated. In addition, optimistic protocols do not require any lock-

ing/blocking, thereby eliminating all data contention delays. Based on such a bound, Datta et al. [DAT97] devised an optimistic concurrency control technique that restarts the validating transaction in order to spare its irreconcilable set of transactions. The protocol is called OCC-APR (Adaptive Priority), which restarts a validating transaction only if at least two of its irreconcilable transactions have a higher priority. Datta et al. [DAT97]

also introduced a whole set of techniques that investigated the ratio of irreconcilable higher-priority transac-

tions in order to decide when to restart a validating transaction.
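In that spirit, the core OCC-APR decision can be sketched as below. The threshold of two higher-priority irreconcilable transactions and the use of a restart-time bound follow the description above, but the helper names, the slack test, and the CPU-only estimate of the restart run are our own illustrative assumptions rather than the exact formulas of [DAT97].

```python
import time

def estimated_restart_time(txn):
    # Access invariance (footnote 13): the restarted run is assumed to touch
    # the same, already memory-resident data set, so no I/O or blocking delays
    # are charged and a CPU-only bound is used.
    return txn.cpu_time_first_run

def should_restart_validating(validating, irreconcilable,
                              now=None, higher_priority_threshold=2):
    """Restart the validating transaction, sparing its irreconcilable set, only
    if enough of that set has higher priority and the validating transaction
    can still meet its deadline on a restart run."""
    now = time.time() if now is None else now
    higher = [t for t in irreconcilable if t.priority > validating.priority]
    fits = now + estimated_restart_time(validating) <= validating.deadline
    return len(higher) >= higher_priority_threshold and fits
```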

As the load increases, the number of higher priority irreconcilable conflicts with validating transactions

should also increase due to the increase in the total number of conflicts. Such an increase in the higher-priority

irreconcilable transactions is an essential factor for priority-cognizant policies; i.e., Wait-50 and OCC-APR, to

gain their performance improvement. However, the simulation conducted in [DAT97] showed that:

•  Beyond the overload point, a transaction waits longer in different resource queues. Thus, fewer data operations are being performed and fewer conflicts are being generated.

•   EDF as a CPU scheduling policy tends to schedule the earlier deadline transactions. Smaller transac-

tions will apparently tend to have earlier deadlines than longer transactions. Probabilistically speak-

ing, the smaller a transaction is, the fewer conflicts it is likely to have. Thus, under heavy loads the

active transactions are mostly small and thereby generate fewer conflicts. This result is a confirmation of the biased behavior of EDF that we mentioned under load management in the previous chapter.

Surprisingly, and contrary to intuition, Datta et al. [DAT97] showed that the set of higher-priority irreconcilable transactions decreases as the load increases. On average, this set was shown to be less


than 0.35 of the conflict set! Thus, committing the validating transaction is more likely to happen, with and without priority cognizance. It was concluded in [DAT97] that there appears to be little advantage to be gained by incorporating priorities in the validation phase of optimistic conflict resolution protocols for firm RTDB systems. This conclusion was reached for optimistic algorithms that dynamically adjust the serialization order of conflicting transactions. Such a conclusion contradicts the one reached in [HAR 90a, HAR 90b, HAR92] for OCC protocols that do not adjust the serialization order of conflicting transactions!

We believe that somewhere in this discussion there is a hidden fact that does not comply with our knowledge of RTDB systems. Based on the claim of Datta et al., OCC-BC should have performance relatively similar to that of Wait-50, yet simulation results showed that Wait-50 significantly outperforms OCC-BC. Furthermore, including a dynamic adjustment of the serialization order should, intuitively, confirm that incorporating priorities in the conflict resolution of OCC enhances their performance, yet it confirms the opposite!

13 The property of accessing the same data set on a subsequent run is known as access-invariance [FRA 90, FRA 92].


4 – Open Problems and Future Directions

Various issues along with their impact on the overall design and systems performance have been presented

throughout our review of  RTDB systems. In the introduction of this review, we stated that the engineering of 

data-intensive real-time systems could be improved through adaptation of the techniques and principles of da-

tabase management systems. However, as we have seen throughout the review, our claim might not necessarily

be true under various circumstances and operating conditions. Based on the encountered limitations and defi-

ciencies, which are mainly due to many “old” underlying assumptions, one has to reconstruct many compo-

nents of conventional database systems before they are deployed in RTDB environments.

We believe that CPU scheduling, and the manner in which priorities are derived and assigned to the collection of transactions within the system, is of great concern. The notion of correctness in the presence of temporal data and temporal consistency is another matter of great concern. Memory and I/O management have an immense impact on performance and, therefore, should be reconstructed with timeliness as a primary objective. The techniques by which transactions in conventional database systems access the database and manipulate its contents need a total reconstruction, due to their impact on overall system performance and their current inadequacy with respect to timeliness.

In the rest of this chapter, we present five problems that we identified from the previous chapters. These

problems are very important to the advancement of  RTDB systems, and their solutions will bring technology

closer to remedying the deficiencies of current RTDB systems.

CPU Scheduling

When the system is under-loaded, the  EDF policy was shown to be a very successful technique for sched-

uling tasks/transactions within a system. However, as the load increases, the performance of the  EDF policy

worsens. To overcome such degradation, many researchers have resorted to overload management policies.

Unfortunately, many such policies are too expensive to fully implement in practical systems. In addition, the amount of load, in terms of the number of tasks/transactions, that must be shed from the system might be prohibitively large.

Under a very strict and impractical set of assumptions, it was shown that there exists an upper bound on the performance of any on-line scheduling algorithm. That is, in contrast to the EDF policy, whose performance degrades beyond a certain load, could there exist an on-line scheduling algorithm whose performance is similar to that of the EDF policy up to and including the EDF load limits, and which outperforms EDF under higher loads? Simulation results indicate that scheduling by deadline and criticalness (combined into one measure) outperforms scheduling by the EDF policy under overload conditions. The question is how much we can increase


the load above the limits of the  EDF policy, without suffering the same degradation, and without employing

expensive overload management techniques.
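Purely as an illustration of folding deadline and criticalness into one measure, a plausible combined scheduling key is sketched below; the particular combination is our own assumption and is not a result taken from the cited studies.

```python
def combined_priority(txn, now):
    """Illustrative priority key: the nearer the deadline and the higher the
    criticalness, the larger the key. With equal criticalness values the
    ordering reduces to earliest-deadline-first; under overload, criticalness
    decides which transactions are worth keeping."""
    slack = max(txn.deadline - now, 1e-6)   # guard against division by zero
    return txn.criticalness / slack
```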

Cascadeless vs. Strict Executions in RTDB systems

A cascadeless execution, as we stated at the beginning of chapter two, is faster than a strict execution, and

it ensures durability and recoverability. However, it is subject to the lost-update problem. On the other hand, a

strict execution ensures durability and recoverability, and is not subject to the lost-update problem; however, it is slower than a cascadeless execution. Based on this observation and on the notion of temporal-consistency,

could one sacrifice strictness for speed? Furthermore, since some data is  persistent while other data is  perish-

able, is there any systematic technique that could automatically switch between strict and cascadeless execu-

tions based on the data that is being handled in order to better suit RTDB environments? Alternatively, could

there be a technique that could systematically sacrifice strictness only when such a sacrifice does not violate the

consistency of the database?

Memory Management

In general, it is impossible to know transactions' read/write-sets and, therefore, impossible to anticipate a transaction's memory requirements a priori. Consequently, I/O requests have to be initiated dynamically as the need arises. Would it be possible to monitor a transaction's past and present behavior and reference pattern, so that one can probabilistically predict its future references?

If such a technique is feasible, then the more accurate the prediction becomes, the more a disk-resident RTDB system can effectively behave like a main-memory RTDB system. Thus, dynamic I/O requests and their associated delays are eliminated, or at least minimized, and thereby predictability is enhanced, without sacrificing the advantages of disks, i.e., stability and large storage capacity.

Disk Scheduling

Disk access is an essential process in disk-resident RTDB systems. Many techniques have been proposed in

the literature to enhance such a process by ordering I/O requests. On the other hand, conventional operating sys-

tems have attempted to reorder the physical layout of the data on the disk itself. Would such techniques be ap-

plicable to RTDB systems, and what is their impact on the overall system’s performance, in particular, the im-

pact on I/O operations and delays?

Concurrency Control

Locking-based techniques are very attractive in environments with limited resources, due to their conser-

vative nature. However, all such techniques are based on two-phase locking (2PL), which is not suitable for

 RTDB environments due to its vulnerability to deadlocks and priority inversion. In contrast to 2PL, which is a

non-real-time locking technique, could there exist RTDB-locking techniques that do not suffer from deadlocks or


priority inversion, yet maintain serializability and are as efficient, and as simple, as the conventional 2PL protocol?

How much efficiency and simplicity are we willing to sacrifice, and how much overhead are we willing to accept, in order to construct and employ a real-time locking protocol – a locking protocol that is time- and priority-cognizant?

Future Directions

It has been shown in previous studies that CPU scheduling in real-time systems improves the performance

of real-time transactions by about 80%, and a conflict resolution mechanism could further improve the per-

formance by an additional 12%. Furthermore, conflict resolution protocols become more effective as the dead-

lines become tighter, a phenomenon that is more likely to occur under overloads. Based on such observations and the fact that locking concurrency control protocols are not prone to wasting system resources, an in-

tensive amount of research should be focused on CPU scheduling under overload conditions. An equivalent

effort needs to be directed towards devising real-time locking techniques that are more suitable for  RTDB sys-

tems than the conventional two-phase locking protocol.


References

[ABB 88a] R. Abbott and H. Garcia-Molina, “Scheduling Real-Time Transactions”, SIGMOD Record , Vol.

17, No. 1, March 1988.

[ABB 88b] R. Abbott and H. Garcia-Molina, “Scheduling Real-Time Transactions: A Performance Evalua-

tion”, Proceedings of the 14th

VLDB Conference, Los Angeles, California, March 1988.

[ABB 89] R. Abbott and H. Garcia-Molina, “Scheduling Real-Time Transactions with Disk Resident Data”,

Proceedings of the Fifteenth International Conference on Very Large Databases, pp. 385-396, 1989.

[ABB 90] R. Abbott and H. Garcia-Molina, “Scheduling I/O requests with Deadlines: A Performance Evalua-

tion”, Proceedings of Real-Time Systems Symposium, pp. 113-124, 1990.

[AGR 87] R. Agrawal, M. Carey, and M. Livny, “Concurrency Control Performance Modeling: Alternatives

and Implications”, ACM Transactions on Database Systems, 12(4), December 1987.

[ALO 90] R. Alonso, D. Barbara, and H. Garcia-Molina, “Data Caching Issues in an Information Retrieval System”, ACM Transactions on Database Systems, 15(3), pp. 359-384, September 1990.

[AUD 90] N. Audsley, A. Burns, “Real Time System Scheduling”, Technical Report No. YCS 134, Department

of Computer Science, The University of York, UK, 1990.

[BAD 92] B. R. Badrinath and K. Ramamritham, “Semantics-Based Concurrency Control: Beyond Commuta-

tivity”, ACM Transactions on Database Systems, 17(1), pp. 163-199, March 1992.

[BAR 91a] S. Baruah, G. Koren, B. Mishra, A. Raghunathan, L. Rosier, D. Shasha, “On-Line Scheduling in

the Presence of Overload”, Proceedings of the 32nd 

  Annual IEEE Symposium on Foundations of 

Computer Science, Puerto Rico, October 1991.

[BAR 91b] S. Baruah, G. Koren, D. Mao, B. Mishra, A. Raghunathan, L. Rosier, D. Shasha, and F. Wang,

“On the Competitiveness of On-Line Real-Time Task Scheduling”, Proceedings of the 12th Real-Time Systems Symposium, pp. 106-115, 1991.

[BER 87] P. A. Bernstein, V. Hadzilacos, and N. Goodman, Concurrency Control and Recovery in Database

Systems, Addison-Wesley Publishing Company, 1987.

[BES 93] A. Bestavros, “Speculative Concurrency Control”, Technical Report TR-93-002, Computer Science

Department, Boston University, Boston, MA, 1993.

[BES 94] A. Bestavros and S. Braoudakis, “Timeliness via Speculation for Real-Time Databases”, Proceedings of Real-Time Systems Symposium, pp. 36-45, 1994.

[BES 96] A. Bestavros, “Value-Cognizant Speculative Concurrency Control for Real-Time Databases”, Information Systems, Vol. 21, No. 1, pp. 75-101, 1996.

[BIY 88] S. R. Biyabani, J. A. Stankovic, and K. Ramamritham, “The Integration of Deadline and Criticalness

in Hard Real-Time Scheduling”, Proceedings of the IEEE Real-Time Systems Symposium, pp. 152-

160, 1988.

[BUC 89] A. P. Buchmann, D. R. McCarthy, M. Hsu, and U. Dayal, “Time Critical Database Scheduling: A

Framework For Integrating Real-Time Scheduling and Concurrency Control”, Proceeding of Real-

Time Systems Symposium, pp. 470-480, 1989.

[BUT 95] G. Buttazzo, M. Spuri, and F. Sensini, “Value vs. Deadline Scheduling in Overload Conditions”,

 Real-Time Systems Symposium, pp. 90-99, 1995.


[CAR 89] M.J. Carey, R. Jauhari, and M. Livny, “Priority in DBMS Resource Scheduling”, Proceedings of the

Fifteenth International Conference on Very Large DataBases, pp. 397-410, 1989.

[CHE 90] H. Chetto, M. Silly, and T. Bouchentouf, “Dynamic Scheduling of Real-Time Tasks under Prece-

dence Constraints”, The Journal of Real-Time Systems, Vol. 2, pp. 181-194, 1990.

[CHE 91] S. Chen, J.A. Stankovic, J. F. Kurose, and D. Towsley, “Performance Evaluation of Two New Disk Scheduling Algorithms for Real-Time Systems”, The Journal of Real-Time Systems, Vol. 3, pp. 307-

336, 1991.

[CHR 94] P. Chrysanthis, “Serializability-based Correctness Criteria”, Performance of Concurrency Control Mecha-

nisms in Centralized Database Systems, Vijay Kumar Ed., Prentice Hall, 1994.

[DAT 96] A. Datta, S. Mukherjee, P. Konana, I.R. Viguier, and A. Bajaja, “Multiclass Transaction Scheduling

and Overload Management in Firm Real-Time Database Systems”,  Information Systems, Vol. 21, No.

1, pp. 29-54, 1996.

[DAT 97] A. Datta, I.R. Viguier, S.H. Son, and V. Kumar, “A study of Priority Cognizance in Conflict Reso-

lution for Firm Real-Time Database Systems”, Real-Time Database Systems: Issues and Applications, S.H. Son, K.J. Lin, and A. Bestavros (eds.), Kluwer Academic Publishers, 1997.

[DU 89] W. Du and A. Elmagarmid, “Quasi Serializability: A Correctness Criterion for Global Concurrency

Control in InterBase”, Proceedings of the International Conference on Very Large Databases, pp.

347-355, The Netherlands, August 1989.

[ELM 94] R. Elmasri and S. Navathe, Fundamentals of Database Systems, Addison-Wesley Publishing Com-

pany, 1994.

[ESW 76] K. P. Eswaran, J. N. Gray, R. A. Lorie, and I. L. Traiger, “The Notions of Consistency and Predicate

Locks in a Database System”, Communications of the ACM , Vol. 19, No. 11, pp. 624-633, November

1976.

[FRA 90] P. A. Franaszek, J. T. Robinson, and A. Thomasian, “Access Invariance and Its Use in High Contention Environments”, IEEE 6th International Conference on Data Engineering, pp. 47-55, 1990.

[FRA 92] P. A. Franaszek, J. T. Robinson, and A. Thomasian, “Concurrency Control for High Contention En-

vironments”, ACM Transactions on Database Systems, pp. 47-55, 1992.

[GAR 79] M.R. Garey and D.S. Johnson, “Computers and Intractability: A Guide to the Theory of NP Com-

pleteness”, W.H. Freeman, San Francisco, 1979.

[GAR 82] H. Garcia-Molina and G. Wiederhold, “Read-Only Transactions in a Distributed Database”,  ACM 

Transactions on Database Systems, 7(2), pp. 209-234, June 1982.

[GAR 83] H. Garcia-Molina, “Using Semantic Knowledge for Transaction Processing in a Distributed Data-

base”, ACM Transactions on Database Systems, 8(2), pp. 186-213, June 1983.

[GAR 87] H. Garcia-Molina and K. Salem, “SAGAS”, Proceedings of ACM SIGMOD Conference on Man-

agement of Data, pp. 249-259, May 1987.

[GRA 81a] J. Gray, “The Transaction Concept: Virtues and Limitations”, Proceedings of the 17 th

International

Conference on Very Large DataBases (VLDB), pp. 144-154, September 1981.

[GRA 81b] J. Gray, P. Homan, H. Korth, and R. Obermark, “A Strawman Analysis of the Probability of Wait-

ing and Deadlock”, Technical Report RJ3066 , IBM Research Laboratory, 1981.

[GRA 92] M. H. Graham, “Issues in Real-Time Data Management”, The Journal of Real-Time Systems, 4, pp.

185-202, 1992.


[GRA 93] J. Gray and A. Reuter, “Transaction Processing: Concepts and Techniques”, Morgan Kaufmann

Publishers, 1993.

[HAE 83] T. Haerder and A. Reuter, “ Principles of Transaction-Oriented Database Recovery”, Computing

Surveys, Vol. 15, No. 4, pp. 287-317, December 1983.

[HAR 90a] J.R. Haritsa, M.J. Carey, and M. Livny, “On Being Optimistic about Real-Time Constraints”, Pro-

ceedings of the ACM Symposium on Principles of Database Systems (PODS) , pp. 331-343, 1990.

[HAR 90b] J.R. Haritsa, M.J. Carey, and M. Livny, “Dynamic Real-Time Optimistic Concurrency Control”,

 Real-Time Systems Symposium, pp. 94-103, December 1990.

[HAR 91] J. R. Haritsa, M. Livny, and M. J. Carey, “Earliest Deadline Scheduling for Real-Time Database

Systems”, Proceedings of the IEEE Real-Time Systems Symposium, pp. 232-242, 1991.

[HAR 92] J. R. Haritsa, M. J. Carey, and M. Livny, “Data Access Scheduling in Firm Real-Time Database

Systems”, The Journal of Real-Time Systems, 4, pp. 203-241, 1992.

[HEI 91] H. Heiss and R. Wagner, "Adaptive Load Control in Transaction Processing Systems", Proceedings

of the 17th International Conference on Very Large DataBases (VLDB), pp. 47-54, Barcelona, Sep-

tember 1991.

[HOM 94] N. Homayoun and P. Ramanathan, “Dynamic Priority Scheduling of Periodic and Aperiodic Tasks

in Hard Real-Time Systems”, Real-Time Systems, 6, pp. 207-232, 1994.

[HON 95] D. Hong, S. Chakravarthy, and T. Johnson, “Alternative Version Concurrency Control (AVCC) for

Firm Real-Time Database Systems”, Technical Report UF-CIS-TR-95-031, The University of Flor-

ida, 1995.

[HUA 89] J. Huang, J. Stankovic, D. Towsley, and K. Ramamritham, “Experimental Evaluation of Real-Time

Transaction Processing”, Proceedings of the 10th Real-Time Systems Symposium, pp. 144-153, 1989.

[HUA 91a] J. Huang, J.A. Stankovic, K. Ramamritham, and D. Towsley, “Experimental Evaluation of Real-

Time Optimistic Concurrency Control Schemes”, Proceedings of the 17 th

VLDB Conference, Sep-

tember 1991.

[HUA 91b] J. Huang, J.A. Stankovic, K. Ramamritham, and D. Towsley, "On Using Priority Inheritance in

Real-Time Databases", Proceeding of Real-Time Systems Symposium, pp. 210-221, 1991.

[HUA 92] J. Huang, J.A. Stankovic, K. Ramamritham, and D. Towsley, "Priority Inheritance in Soft Real-

Time Databases", The Journal of Real-Time Systems, 4, pp. 243-268, 1992.

[HUA96] J. Huang and L. Gruenwald, “Impact of Timing Constraints on Real-Time Database Recovery”, Proceedings of the Workshop on Databases: Active and Real-Time (Concepts Meet Practice), DART’96,

pp. 54-58, Rockville, Md., November 1996.

[JEN 85] E.D. Jensen, C.D. Locke, and H. Tokuda, “A Time-Driven Scheduling Model for Real-Time Oper-

ating Systems”, Proceedings of Real-Time Systems Symposium, pp. 112-122, 1985.

[KAO 95] Ben Kao and Hector Garcia-Molina, “An Overview of Real-Time Database Systems”,  Advances in

 Real-Time Systems, S.H. Son (ed.), Prentice-Hall, Englewood Cliffs, NJ, 1995.

[KIM 91] W. Kim and J. Srivastava, “Enhancing Real-Time DBMS Performance with Multiversion Data and

Priority Based Disk Scheduling”, Proceedings of Real-Time Systems Symposium, pp. 222-231, 1991.


[KIM 93] Y. Kim and S. H. Son, “An Approach Towards Predictable Real-Time Transaction Processing”,

Proceedings of the 5th

Euromicro Workshop on Real-Time Systems, IEEE Computer Society Press,

1993.

[KIM 96] Y. Kim and S. H. Son, “Supporting Predictability in Real-Time Database Systems”,  IEEE Real-Time

Technology and Application Symposium (RTAS’ 96), Boston, MA, June 1996.

[KOR 88] H. F. Korth and G. D. Speegle, “Formal Model of Correctness Without Serializability”,   In Pro-

ceedings of ACM SIGMOD Conference on Management of Data, pp. 379-386, May 1988.

[KOR 90] H. F. Korth, E. Levy, and A. Silberschatz, “Compensating Transactions: A New Recovery Para-

digm”, Proceedings of the 16th

International Conference on Very Large DataBases, Australia, August

1990.

[KUN 81] H. T. Kung and J. T. Robinson, “On Optimistic Methods for Concurrency Control”, ACM Transac-

tions on Database Systems, 6(2), pp. 213-226, June 1981.

[KUO 91] T. Kuo and A. K. Mok, “Load Adjustment in Adaptive Real-Time Systems”, In Proceedings of 

Real-Time Systems Symposium, pp. 160-170, 1991.

[LAM 95] K. Lam and S. Hung, “An Efficient Real-Time Optimistic Concurrency Control Protocol”, First In-

ternational Workshop on Active and Real-Time Database Systems (ARTDB-95), pp. 209-225, 1995.

[LAM 97] K. Lam, S. H. Son, and S. Hung, “A Priority Ceiling Protocol with Dynamic Adjustment of Seriali-

zation Order”, The 13th

IEEE Conference on Data Engineering (ICDE 97), Birmingham, UK, April

1997.

[LEC 88] P. L’Ecuyer and J. Malenfant, “Computing Optimal Checkpointing Strategies for Rollback and Re-

covery Systems”, IEEE Transactions on Computers, Vol. 37, No. 4, pp. 491-496, April 1988.

[LEE 93] J. Lee and S. H. Son, “Using Dynamic Adjustment of Serialization Order for Real-Time Database

Systems”, Proceedings of Real-Time Systems Symposium, pp. 66-75, 1993.

[LEE 96] J. Lee and S. H. Son, “Performance of Concurrency Control Algorithms for Real-Time Database

Systems”, Performance of Concurrency Mechanisms in Centralized Database Systems, V. Kumar

(ed.), Prentice-Hall, 1996.

[LEV 91] E. Levy, H. Korth, and A. Silberschatz, “A Theory of Relaxed Atomicity”,   In Proceedings of the

 ACM Symposium on Principles of Distributed Computing, August 1991.

[LIN 90] Y. Lin and S.H. Son, “Concurrency Control in Real-Time Databases by Dynamic Adjustment of Seri-

alization Order”, Proceedings of the IEEE Real-Time Systems Symposium, pp. 104-112, 1990.

[LIU 73] C.L. Liu and J.W. Layland, “Scheduling Algorithms for Multiprogramming in a Hard Real-Time

Environment”, Journal of the ACM , Vol. 20, No. 1, pp. 46-61, January 1973.

[LOC 86] C. D. Locke, “Best-Effort Decision Making for Real-Time Scheduling”, Ph.D. Thesis, Department of Computer Science, Carnegie Mellon University, PA, 1986.

[MEN 82] D. Menasce and T. Nakanishi, “Optimistic vs. Pessimistic Concurrency Control Mechanisms in

Database Management Systems”, Information Systems, 7(1), 1982.

[MIL 92] Milan Milenkovic, “Operating Systems: Concepts and Design”, McGraw-Hill, 1992.

[MOH 92] C. Mohan, D. Haderle, B. Lindsay, H. Pirahesh, and P. Schwarz, “ARIES: A Transaction Recovery Method Supporting Fine-Granularity Locking and Partial Rollbacks Using Write-Ahead Logging”,

 ACM Transactions on Database Systems, Vol. 17, No. 1, pp. 94-162, March 1992.


[NAK 93] H. Nakazato and K. Lin, “Concurrency Control Algorithm for Real-Time Tasks with Read/Write

Operation”, Proceedings of the 5th

Euromicro Workshop on Real-Time Systems, IEEE Computer So-

ciety Press, pp. 42-47, 1993.

[NAT 92] S. Natarajan and W. Zhao, “Issues in Building Dynamic Real-Time Systems”,  IEEE Software, pp.

16-21, September 1992.

[NIC 90] V. F. Nicola and J. M. Van Spanje, “Comparative Analysis of Different Models of Checkpointing

and Recovery”,   IEEE Transactions on Software Engineering, Vol. 16, No. 8, pp. 807-821, August

1990.

[OBE 80] R. Obermarck, “IMS/VS Program Isolation Feature”, IBM RJ2879 (36435), 1980.

[ONE 86] P. E. O’Neil, “The Escrow Transactional Method”, ACM Transactions on Database Systems, 11(4),

pp. 405-430, December 1986.

[ONE 95] P. E. O’Neil, K. Ramamritham, and C. Pu, “A Two-Phase Approach to Predictably Scheduling Real-Time Transactions”, Performance of Concurrency Mechanisms in Centralized Database Systems, V. Kumar (ed.), Prentice-Hall, 1995.

[PAN 92] H. Pang, M. Livny, and M.J. Carey, “Transaction Scheduling in Multiclass Real-Time Database Sys-

tems”, Proceedings of the Real-Time Systems Symposium, pp. 2830-74, 1992.

[PAN 93] F. Panzieri, R. Davoli, “Real Time Systems: A Tutorial”, Technical Report UBLCS-93-22,

ftp://ftp.cs.unibo.it/pub/techreports, University of Bologna, Bologna (Italy).

[PAN 94] H. Pang, M.J. Carey, and M. Livny, “Managing Memory for Real-Time Queries”, Proceedings of the

 ACM SIGMOD Conference, pp. 221-232, May 1994.

[PAP 84] C. H. Papadimitriou and P. C. Kanellakis, “On Concurrency Control by Multiple Versions”,  ACM 

Transactions on Database Systems, Vol. 9, No. 1, pp. 89-99, March 1984.

[PU 91a] C. Pu and A. Leff, “Replica Control in Distributed Systems: An Asynchronous Approach”, Technical

 Report CUCS-053-90, Department of Computer Science, Columbia University, January 1991.

[PU 91] C. Pu and A. Leff, “Epsilon-Serializability”, Technical Report CUCS-054-90, Department of Com-

puter Science, Columbia University, January 1991.

[RAJ 91] R. Rajkumar, “Synchronization in Real-Time Systems – A Priority Inheritance Approach”, Kluwer

Academic Publishers, 1991.

[RAJ 95] R. Rajkumar, L. Sha, J.P. Lehoczky, and K. Ramamritham, “An Optimal Priority Inheritance Policy

for Synchronization in Real-Time Systems”, Advances in Real-Time Systems, (ed. S.H. Son), Prentice Hall, Chapter 11, 1995.

[RAM 93] K. Ramamritham, “Real-Time Databases”,   Distributed and Parallel Databases, Vol. 1, No. 2,

1993.

[RAM 94] K. Ramamritham and C. Pu, “A Formal Characterization of Epsilon Serializability”,  IEEE Trans-

actions on Knowledge and Data Engineering, 1994.

[RAM 96] K. Ramamritham, N. Soparkar, “Report on DART ’96: Concepts meet Practice”, Databases: Active

and Real-Time, 1996.

[ROB 82] J. Robinson, “Design of Concurrency Controls for Transaction Processing Systems”, Ph.D. Thesis,Carnegie Mellon University, Pittsburgh, PA, 1982.


[ROS 78] D.J. Rosenkrantz, R.E. Stearns, and P.M. Lewis II, “System Level Concurrency Control for Distributed Database Systems”, ACM Transactions on Database Systems, 3(2), pp. 178-198, June 1978.

[SHA 90] L. Sha, R. Rajkumar, and J. Lehoczky, “Priority Inheritance Protocols: An Approach to Real-TimeSynchronization”,   IEEE Transactions on Computers, Vol. 39, No. 9, pp. 1175-1185, September1990.

[SHA 91] L. Sha, R. Rajkumar, S.H. Son, and C. Chang, “A Real-Time Locking Protocol”, IEEE Transactions

on Computers, Vol. 40, No. 7, pp. 793-800, July 1991.

[SHE 90] A. Sheth and M. Rusinkiewicz, “Management of Interdependent Data: Specifying Dependency and Consistency Requirements”, In Proceedings of the Workshop on Management of Replicated Data, pp. 133-136, Houston, November 1990.

[SHI 86] K.G. Shin, T. Lin, and Y.Lee, “Optimal Checkpointing of Real-Time Tasks”, IEEE 5th Symposium on

 Reliability in Distributed Software and Database Systems, pp. 151-158, 1986.

[SIN 88] Mukesh Singhal, “Issues and Approaches to Design of Real-Time Database Systems”, SIGMOD Rec-

ord , Vol. 17, No. 1, December 1988.

[SIV 95] R.M. Sivasankaran, K. Ramamritham, J.A. Stankovic, and D. Towsley, “Data Placement, Logging and Recovery in Real-Time Active Databases”, Proceedings of the 1st International Workshop on Active and Real-Time Database Systems, pp. 226-242, Sweden, July 1995.

[SON 92] S.H. Son, J. Lee, and Y. Lin, “Hybrid Protocols Using Dynamic Adjustment of Serialization Order for Real-Time Concurrency Control”, The Journal of Real-Time Systems, 4, pp. 269-276, 1992.

[SPR 88] B. Sprunt, J. Lehoczky, and L. Sha, “Exploiting Unused Periodic Time For Aperiodic Service Using The Extended Priority Exchange Algorithm”, Proceedings of the IEEE Real-Time Systems Sympo-

sium, pp. 251-258, 1988.

[SPU 95] M. Spuri, G. Buttazzo, and F. Sensini, “Robust Aperiodic Scheduling under Dynamic Priority Systems”, Proceedings of the IEEE Real-Time Systems Symposium, pp. 210-219, 1995.

[STA 88a] J. Stankovic, “Real Time Computing Systems: The Next Generation”, Tutorial Hard Real-Time

Systems, ed. J. A. Stankovic, pp. 14-38, IEEE (1988).

[STA 88b] John A. Stankovic, “On Real-Time Transactions”, SIGMOD Record , Vol. 17, No. 1, March 1988.

[STA 91] J. Stankovic, K. Ramamritham, and D. Towsley, “Scheduling In Real-Time Transaction Systems”, Foundations of Real-Time Computing: Scheduling and Resource Management, edited by Andre van Tilborg and Gary Koob, Kluwer Academic Publishers, pp. 157-184, 1991.

[STA 93] J.A. Stankovic and K. Ramamritham, Advances in Real-Time Systems, (eds.) J.A. Stankovic and K. Ramamritham, Computer Society Press, Los Alamitos, California, 1993.

[TAY 85] Y.C. Tay, N. Goodman, and R. Suri, “Locking Performance in Centralized Databases”,  ACM TODS,Vol. 10, No. 4, pp. 415-462, December 1985.

[THO95] A. Thomasian, “Checkpointing for Optimistic Concurrency Control Methods”, IEEE Transactions

on Knowledge and Data Engineering, Vol. 7, No. 2, pp. 332-339, April 1995.

[ULU 92] Özgür Ulusoy, “Current Research on Real-Time Databases”, SIGMOD Record , Vol. 21, No. 4, De-cember 1992.

[ULU 95a] Özgür Ulusoy, “An Annotated Bibliography on Real-Time Database Systems”, SIGMOD Record ,Vol. 24, No. 4, December 1995.


[ULU 95b] Özgür Ulusoy, “Research Issues in Real-Time Database Systems”,   Information Sciences 87 , pp.123-151, 1995.

[UPA 88] S. J. Upadhyaya and K. K. Saluja, “An Experimental Study to Determine Task Size for Rollback Re-covery Systems”, IEEE Transactions on Computers, Vol. 37, No. 7, pp. 872-877, July 1988.

[WU 92] K. L. Wu, P. S. Yu, and C. Pu, “Divergence Control for Epsilon Serializability”, Proceedings of the 8th International Conference on Data Engineering, IEEE Computer Society, February 1992.

[YOO 97] I. Yoon and S. Park, “Enhancement of Alternative Version Concurrency Control Using Dynamic Adjustment of Serialization Order”, Real-Time Database Systems: Issues and Applications, S. H. Son, K. J. Lin, and A. Bestavros (eds.), Kluwer Academic Publishers, 1997.

[YU 94] P. S. Yu, K. Wu, K. Lin, and S. H. Son, “On Real-Time Databases: Concurrency Control and Sched-uling”, Proceedings of the IEEE , Vol. 82, No. 1, pp. 140-156, January 1994.