


DISTRIBUTED COMPUTER CONTROL SYSTEMS 1995

(DCCS'95)

A Postprint volume from the 13th IFAC Workshop, Toulouse-Blagnac, France, 27 - 29September1995

Edited by

A.E.K. SAHRAOUI LAAS du CNRS, Toulouse Cedex, France

and

J.A. DE LA PUENTE ETSI Telecomunicación, Ciudad Universitaria,

Madrid, Spain

Published for the

INTERNATIONAL FEDERATION OF AUTOMATIC CONTROL

by

PERGAMON An Imprint of Elsevier Science


UK

USA

JAPAN

Elsevier Science Ltd, The Boulevard, Langford Lane, Kidlington, Oxford, OX5 1 GB, UK

Elsevier Science Inc., 660 White Plains Road, Tarrytown, New York 10591-5153, USA

Elsevier Science Japan, Tsunashima Building Annex, 3-20-12 Yushima, Bunkyo-ku, Tokyo 1 13, Japan

Copyright© 1995 IFAC

All Rights Reserved. No part of this publication may be reproduced, stored in a retrieval system or transmitted in any form or by any means: electronic, electrostatic, magnetic tape, mechanical, photocopying, recording or otherwise, without permission in writing from the copyright holders.

First edition 1995

Library of Congress Cataloging in Publication Data A catalogue record for this book is available from the Library of Congress

British Library Cataloguing in Publication Data A catalogue record for this book is available from the British Library

ISBN 0-08-042593-3

This volume was reproduced by means of the photo-offset process using the manuscripts supplied by the authors of the different papers. The manuscripts have been typed using different typewriters and typefaces. The lay-out, figures and tables of some papers did not agree completely with the standard requirements: consequently the reproduction does not display complete uniformity. To ensure rapid publication this discrepancy could not be changed: nor could the English be checked completely. Therefore, the readers are asked to excuse any deficiencies of this publication which may be due to the above mentioned reasons.

The Editors

Printed in Great Britain


WORKSHOP ON DISTRIBUTED COMPUTER CONTROL SYSTEMS 1995

Sponsored by: IFAC - International Federation of Automatic Control Technical Committee on Computers.

Organised by: Association Française des Sciences et Technologies de l'Information et des Systèmes

International Committee

J.A. De La Puente, Chairman (Spain) L. Boullart (Belgium) A. Burns (United Kingdom) A. Crespo (Spain) F. Cristian (USA) F. De Paoli (Italy) J.C. Fabre (France) M.A. Inamoto (Japan) L. Ivanyoe (Hungary) A. Keijzer (Netherlands) H. Kopetz (Austria) R. Lauber (Germany) I. McLeod (USA) J.J. Mercier (France) A. Mok (USA) S. Narita (Japan) G. Qin (USA) K. Ramamritham (USA) M.G. Rodd (United Kingdom) G. Suski (USA) J.P. Thomesse (France) G. Zhao (Singapore)

National Organizing Committee

A.E.K. Sahraoui (Chairman) E. Bernauer J.C. Deschamps E. Dufour M.T. Ippolito M. Tuffery D. Vielle


Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

AN APPROACH DESIGNING PARALLEL SOFTWARE FOR DISTRIBUTED CONTROL SYSTEMS

H. Unger*, B. Dane** and W. Fengler**

*University of Rostock, Department of Informatics, D-18051 Rostock, Germany. E-mail: [email protected]

**Technical University of Ilmenau, Department of Informatics and Automation, D-98684 Ilmenau, Germany. E-mail: bdaene|[email protected]

Abstract. Petri Nets have proved to be an efficient tool to represent complicated systems. Nevertheless, in general it is not easy to implement a technical system given as a Petri Net on a multiprocessor system. This contribution presents a new approach for this procedure. The main difference compared to other methods is the effective use of message passing communication during the implementation.

Key Words. Petri-nets; Distributed computer control systems; Parallel programs

1. INTRODUCTION

Progress in hardware design makes it possible to use multiprocessor architectures even in small automation systems. Parallel programming requires effective methods to find parallel executable parts in a given algorithm (Boillat, et al., 1991). Therefore it is necessary to solve a lot of problems in a way that is transparent for a wide group of users. That is why Petri Nets, a graphical description language, became more and more important for modelling parallel software solutions (Reisig, et al., 1987). But there are only a few approaches for implementing Petri Net models on different multiprocessor architectures (Thomas, 1991; Unger, 1994). An overview is given in figure 1.

[Figure: taxonomy of implementation approaches. DIRECT: one process per transition ("token players"), fewer processes, classes of conflicts, preplace systems. INDIRECT: state machine covering, p-invariants, token flow analysis, place constructs.]

Fig. 1. An overview of existing implementation approaches

From the authors' point of view all known methods fall into two basic types. The first one - the so-called direct type - implements processes according to the transitions of the net. The second, indirect one is to decompose or to cover a given Petri Net by state machines, and then to implement one process for every state machine. Especially if a Petri Net has many transitions, the first method yields in each case a solution with plenty of superfluous processes and a large communication overhead. In general, the second group of methods generates more efficient code, but, in contrast to the first one, it does not apply to all Petri Nets.

The main disadvantage of known approaches is the transformation of a subset of places into global data objects in a shared memory. These data objects normally contain integer values corresponding to the number of tokens in the places. Accessing the data objects by more than one process causes a lot of management problems and hampers truly parallel work of these processes. In the end a lot of technical systems like transputer systems or PVM implementations¹ require a client-server relation instead of a shared memory for solving this problem, and so the number of parallel working processes is increased.


The present paper shows a new approach for an implementation avoiding the disadvantage described above.

¹ Parallel Virtual Machine for UNIX clusters from the Oak Ridge National Laboratory (Sunderam, 1990)


2. BASIC CONCEPTS

Usually a Petri Net Φ is a 5-tuple (P, T, F, V, m₀) such that

(i) P, T are disjoint finite nonempty sets, the sets of places and transitions, respectively,
(ii) F ⊆ (P × T) ∪ (T × P), the set of arcs,
(iii) V : F → N, the multiplicity function,
(iv) m₀ : P → N₀, the initial marking

(N and N₀ denote the sets of positive and nonnegative integers, respectively.)

A transition t ∈ T is able to fire at a marking m if for every p ∈ P with (p, t) ∈ F:

m(p) ≥ V((p, t)).

Firing t ∈ T at m means to substitute m by m_new where

m_new(p) = m(p) − V((p, t))  if (p, t) ∈ F,
m_new(p) = m(p) + V((t, p))  if (t, p) ∈ F,
m_new(p) = m(p)              else,

for any p ∈ P.

Additionally, the following is defined: pF = {t | (p, t) ∈ F}, Fp = {t | (t, p) ∈ F}, tF = {p | (t, p) ∈ F} and Ft = {p | (p, t) ∈ F}.
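As an illustration only (not part of the paper), the firing rule just defined can be written down directly. In this Python sketch, F and V are represented as sets/dictionaries keyed by (source, target) pairs; the names `enabled` and `fire` are ours:

```python
def enabled(t, m, F, V):
    """t may fire at marking m if every preplace p with (p, t) in F
    carries at least V((p, t)) tokens."""
    return all(m[p] >= V[(p, t)] for (p, x) in F if x == t)

def fire(t, m, F, V):
    """Firing t yields m_new: subtract V((p, t)) on prearcs,
    add V((t, p)) on postarcs, leave all other places unchanged."""
    m_new = dict(m)
    for (a, b) in F:
        if b == t:                    # (p, t) in F: consume tokens
            m_new[a] -= V[(a, b)]
        if a == t:                    # (t, p) in F: produce tokens
            m_new[b] += V[(a, b)]
    return m_new
```

Because P and T are disjoint, checking the second component of each arc against t is enough to distinguish prearcs from postarcs.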

For modelling automation systems it is necessary to add some components to the standard Petri Net definition in order to describe the input and the output of data (Fengler and Philippow, 1991):

(1.) w_x, a set of boolean expressions associated to the transitions. If t ∈ T, w_x(t) is considered to be an additional condition to fire t.

(2.) w_y, a set of boolean output variables associated to the places of the Petri Net. w_y(p) ∈ w_y is TRUE if p is labeled.

(3.) w_a, a set of procedures associated to the places of P. Procedures are started when a new token reaches the place.

Implementing a given Petri Net means to transform it into a program by interpreting sets of elements as structures of a parallel program. When doing so, the state of the program or a class of its states can be derived from an actual marking and vice versa.


3. TRANSFORMATION

In the following, a Petri Net transformation is shown resulting in a net with particular properties. It is based on separating conflict structures, followed by a transformation of the remaining net. Afterwards, the net can be implemented in a message based manner.

3.1. Conflict situations

Conflicts directly influence the transformation of a Petri Net. Places with more than one posttransition are the reason for conflicts in a Petri Net. Such constructs are called static conflict situations. For the present contribution it is necessary to consider several static conflicts in a given Petri Net Φ in a more detailed way (see figure 2). All the structures consist of a set of transitions A and a set of preplaces S of the transitions of A, in such a way that for each transition there is at least one other transition with a common preplace.

All non-free-choice conflict structures result in problems during the (basic) transformation and have to be cut out in a first step, described below.

Fig. 2. Static Conflict Structures in a Petri Net: a) free choice, b) standard, c) unsolved, d) connected

Let Π and Θ be set systems over all conflict structures of a given Petri Net, with Π collecting the sets of conflict places and Θ the corresponding sets of posttransitions. The function K(Π, Θ) merges structures whose transition sets overlap. Obviously, there is a k ∈ N such that K^k(Π, Θ) = K^(k+1)(Π, Θ). In this case K^k(Π₀, Θ₀) is called a


maximal conflict set.

For Q = {q | q ∈ P, |pF(q)| > 1}, the pair (Π₀, Θ₀) with

Π₀ = {Mᵢ | Mᵢ = {qᵢ}, i = 1(1)|Q|}

and

Θ₀ = {Nᵢ | Nᵢ = {t | (qᵢ, t) ∈ F}, i = 1(1)|Q|}

is the set of places and their posttransitions which could be the source of a conflict. Furthermore, the connection between some of these sources via their transitions (figure 2d) is represented in the maximal conflict set K^k(Π, Θ). In order to get a set with all preplaces of t ∈ Θ in (Π, Θ) = K^k(Π₀, Θ₀), the set system Π is modified by Π' = {p | ∃t ∈ Θ : (p, t) ∈ F}.

For further transformation such structures (see figure 3) have to be cut out from a given Petri Net Φ. The main idea consists in a functional separation of the pre- and the postarea of a transition. The fireability of such a transition can be completely tested in the first subnet. The postarea of the transition, located in the second subnet, only sets tokens on places when this transition has got a message from the prearea.

Fig. 3. Separation of Conflict Structures

A later discussion shows that only the more difficult conflict situation in figure (2c) must be cut out.

3.2. Transformation of the remaining Petri Net

The transformation of the modified Petri Net Φ = (P, T, F, V, m₀) (a net without static conflict structures) described in this section is carried out in three steps. At first, an unmarked place construct (P'(p), T'(p), F'(p), V'(p)) is defined for each p ∈ P of a given Petri Net Φ. After doing so, these constructs will be joined by arcs, and a corresponding marking m' is defined. Thus, one gets a corresponding Petri Net Φ' = (P', T', F', V', m') with P' = ⋃ P'(p), T' = ⋃ T'(p) and F' ⊇ ⋃ F'(p).

(1.) Let p ∈ P, t_out ∈ T the only transition with (p, t_out) ∈ F, and V_out the multiplicity of (p, t_out). Then

u = V_out + max(V_b | b = 1(1)|Fp|) − 1

is defined. Now each p ∈ P will be transformed into a place construct with a set of places P'(p) defined by

P'(p) = {p'₀, …, p'_u, x₁, …, x_e}

with i = 0(1)u and e = 1(1)|t_out F|.

For the definition of the sets of transitions and arcs, C₁(p), C₂(p) and C₃(p) are defined by:

C₁(p) = {(a, b, c) | a = 0(1)V_out − 1, b = 1(1)|Fp|, c = a + V_b : a + V_b < V_out}

C₂(p) = {(a, b, c) | a = V_out(1)u, b = 0, c = a − V_out : a ≥ V_out}

C₃(p) = {(a, b, c) | a = 0(1)V_out − 1, b = 1(1)|Fp|, c = a + V_b − V_out : a + V_b ≥ V_out}

With these definitions let

C(p) = ⋃_{i=1..3} Cᵢ(p).

Corresponding to the elements of C(p), the following transitions and arcs are added for each (a, b, c) ∈ C(p) to the sets T'(p) and F'(p), respectively:

t_{a,b,c}(p) ∈ T'(p),

(p'_a, t_{a,b,c}) ∈ F'(p) with V((p'_a, t_{a,b,c})) = 1, and

(t_{a,b,c}, p'_c) ∈ F'(p) with V((t_{a,b,c}, p'_c)) = 1.

Finally, for each (a, b, c) ∈ C₂ ∪ C₃ arcs have to be added with


(t_{a,b,c}, xᵢ) ∈ F'(p) with V((t_{a,b,c}, xᵢ)) = 1

for all i = 1(1)|t_out F|. In a last step, places without pretransitions, together with their postarcs and posttransitions, will be removed from the place constructs. An example of such a place construct is given in figure 4.
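Under the assumption that V_b denotes the multiplicity of the b-th prearc of p, the index sets C₁(p), C₂(p), C₃(p) can be enumerated mechanically. This Python sketch (ours, purely illustrative) does so for a single place:

```python
def place_construct_indices(v_out, v_in):
    """Enumerate u and the triples of C1(p) + C2(p) + C3(p) for one place.
    v_out: multiplicity of the single outgoing arc (p, t_out);
    v_in:  list of prearc multiplicities V_b, b = 1..|Fp|."""
    u = v_out + max(v_in) - 1
    C1 = [(a, b, a + v_in[b - 1])
          for a in range(v_out)                  # a = 0(1)v_out - 1
          for b in range(1, len(v_in) + 1)
          if a + v_in[b - 1] < v_out]
    C2 = [(a, 0, a - v_out)
          for a in range(v_out, u + 1)]          # a = v_out(1)u
    C3 = [(a, b, a + v_in[b - 1] - v_out)
          for a in range(v_out)
          for b in range(1, len(v_in) + 1)
          if a + v_in[b - 1] >= v_out]
    return u, C1 + C2 + C3
```

For V_out = 1 and a single prearc of multiplicity 1 (the situation of figure 4), the sketch yields u = 1 and exactly two index triples, one from C₂ and one from C₃.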

Fig. 4. Example of an Easy Place Construct (V_out = 1, |t_out F| = 1)

(2.) Let e = 1(1)|t_out F|. Then one x_e ∈ P'(p) exists corresponding to each of the postplaces v₁, v₂, …, v_e of t_out. Furthermore, t_{a,b,c}(v_e) ∈ T'(v_e) are the transitions of the corresponding place constructs. Now for all (a, b, c) ∈ C₁(v_e) ∪ C₃(v_e) add an arc to F' with

(3.) A marking m' of Φ' is said to be corresponding to m of Φ if for all place constructs

(i) Σ_{p'ᵢ ∈ P'(p)} m'(p'ᵢ) = 1
(ii) ∀i, j: m'(xᵢ(p)) = m'(xⱼ(p))
(iii) ∀i, j: if m'(p'ᵢ) = 1, p'ᵢ ∈ P'(p), then i + m'(xⱼ(p)) · V_out = m(p)

The result of the transformation is a transformed Petri Net Φ' which simulates the behaviour of Φ. An important property of Φ' is that the multiplicity function is equal to 1 for all arcs of the net.

4. IMPLEMENTATION

Implementing Φ' means to find interpretations for special elements of the given Petri Net. This work falls into two parts: implementing the conflict structures and implementing the transformed remaining Petri Net Φ'.

The main problem with implementing conflict structures results from the shared use of a data object representing places with more than one posttransition. The new approach avoids these problems, because all elements of the conflict structure K(Π, Θ) will be cut out and implemented as a single process, containing all elements for the complete solution of the conflict in a loop. The connection of the conflict structures with the remaining net can be represented by messages, as described above.

Now consider the remaining Petri Net Φ'. One advantage of the described transformation is that the place constructs without the places xᵢ are state machines. These state machines are connected via the xᵢ and their incident arcs, thus forming so-called systems of concurrent state machines (SCS).

In a first approach these SCS can be implemented by creating a single process of a parallel program for each state machine (Unger, 1992). Following this idea, the xᵢ-elements of Φ' are interpreted as communication structures between these processes.

Places connecting state machines are usually implemented as data objects in a shared memory or a server process. But as a result of the transformation described above, each xᵢ has prearcs relating only to transitions in exactly one state machine, and postarcs relating only to transitions in exactly one other state machine. Therefore, information about the state of any xᵢ will be managed by only one process, and so this communication can be implemented by the use of send and receive procedures and the corresponding message buffers.
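A minimal sketch of this message-based scheme (ours, not from the paper; Python threads and queues stand in for processes and message buffers): each state machine is one worker, and each xᵢ is a queue owned by exactly one receiver, so no shared marking data is needed.

```python
import queue
import threading

def state_machine(name, inbox, outbox, results):
    """One process per state machine: block until a token arrives on the
    owned x_i queue, do the local firing work, then pass a token on."""
    token = inbox.get()            # receive: token appears on our x_i
    results.append((name, token))  # ...local transitions would fire here...
    outbox.put(token + 1)          # send: put a token on the successor's x_i

x1, x2 = queue.Queue(), queue.Queue()   # the two x_i places as channels
results = []
a = threading.Thread(target=state_machine, args=("A", x1, x2, results))
b = threading.Thread(target=state_machine, args=("B", x2, x1, results))
a.start(); b.start()
x1.put(0)                          # initial marking: one token on x1
a.join(); b.join()
```

Because each queue has exactly one reader, the single-manager property of the xᵢ places described above is preserved by construction.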

A second approach to implementing the transformed Petri Net is based on special structure effects in the transformed Petri Net. Consider Φ' without the places p'ᵢ of P' and without their transitions t_{a,b,c} derived from the elements of C₂(p). It can be shown that such nets consist of six basic elements with an interpretation shown in figure 5. Because the multiplicity of all arcs is equal to 1, each token in one of the xᵢ-places corresponds to a set of parallel processes corresponding to the given interpretation of elements. For more than one token one gets a superposition of such process groups.

[Figure: the six basic elements, interpreted as: sequence, message, waiting for start, end of process, alternative, start of a new process, end of process with synchronization.]

Fig. 5. Elements in the Reduced Transformed Petri Net

In all cases there is the restriction that at a given moment only one transition of each place construct can be fired. This will be achieved by a special interpretation of the p'ᵢ-elements of the transformed Petri Net. The marking of these places can be considered as special values of a marking of p in the original net. Thus the values can select the fireable transitions and in this way solve the conflicts in the processes. In the parallel program the value of a counter will be implemented by messages circulating between the processes. Only one process can receive the message, and therefore only one process can do the next step, corresponding to the firing process of exactly one transition. Leaving the sector of the given place construct, the process sends a message with the new counter information, and any process that needs this information can receive it.

At last, consider the interpretation of the transitions t_{a,b,c} derived from the elements of C₂(p). Firing one of these transitions entails creating tokens on xᵢ and processes, respectively. The firing process of these transitions directly depends on firing t_{a,b,c} if t_{a,b,c} derives from C₃ and c ≥ V_out. This algorithm is implemented by creating a new process which receives as its argument the data from the circulating message. The mentioned process creates other new processes, changes the information of the message (−V_out for each new process, up to the moment when the data are less than V_out) and sends the updated message to any process requiring it. The choice of the implementation method depends on the properties of the given net. If the number of places is not too high, the first approach is more effective; a lower number of tokens favours the second method, but a mixed use of both methods is possible too.

Results from an experimental implementation of a control program achieved by several methods are shown in figure 6.

[Figure: execution times for a net with 18 transitions, rank of parallelism 4, and 6, 6, 18 processes respectively for the three methods: 1 "short way", 2 "long way", 3 one process per transition.]

Fig. 6. Time Behaviour of a Parallel Program


5. CONCLUSION

A new method for the automatic generation of parallel software from Petri Nets has been shown, and some transformation and implementation details have been discussed. The method enables the generation of more efficient parallel code by preventing some communication overhead resulting from conflict situations in the net. A first experimental implementation has shown the expected results.

6. REFERENCES

Boillat, J.E. et al. (1991). Parallel Computing in the 1990's. Institut für Informatik, Universität Basel.

Fengler, W. and I. Philippow (1991). Entwurf industrieller Mikrocomputersysteme. Carl Hanser Verlag, München-Wien.

Reisig, W., W. Brauer and G. Rozenberg (1987). Petri Nets: Applications and Relationships to Other Models of Concurrency. In: LNCS 255. Springer Verlag, Berlin-Heidelberg-New York.

Sunderam, V.S. (1990). Parallel Virtual Machine. In: Concurrency: Practice and Experience, No. 12, 315-339.

Thomas, G.S. (1991). Parallel Simulation of Petri Nets. Technical report, University of Washington.

Unger, H. (1992). A Petri Net Based Method to the Design of Parallel Programs for a Multiprocessor System. In: LNCS 634. Springer Verlag, Berlin-Heidelberg-New York.

Unger, H. (1994). Untersuchungen zur Implementierung von Petri-Netz-Modellen auf Mehrprozessorsystemen. Dissertation, TU Ilmenau.


Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

MULTIAGENT-BASED CONTROL SYSTEMS:

AN HYBRID APPROACH TO DISTRIBUTED PROCESS CONTROL†

Juan R. Velasco, Jose C. Gonzalez, Carlos A. Iglesias and Luis Magdalena

ETSI Telecomunicación, Universidad Politécnica de Madrid, Ciudad Universitaria s/n, E-28040 Madrid, Spain.

email: jvelasco | jcg | [email protected] -- [email protected]

Abstract: In this paper a general architecture and a platform developed to implement distributed applications, as a set of cooperating intelligent agents, is presented. Second, it will be shown how this architecture has been used to implement a distributed control system for a complex process: the economic control of a fossil power plant.

Agents in this application encapsulate different distributed hardware/software entities: neural and fuzzy controllers, data acquisition system, presentation manager, etc. These agents are defined in ADL, a high level specification language, and interchange data/knowledge through service requests using a common knowledge representation language.

Keywords: Agents, Distributed control, Fuzzy expert systems, Machine learning, Power generation

1. INTRODUCTION

This paper presents a way to undertake the distributed control problem from a multiagent systems point of view. To summarize, agents are autonomous entities capable of carrying out specific tasks by themselves or through cooperation with other agents. Multiagent systems offer a decentralized model of control, use the mechanisms of message-passing for communication purposes and are usually implemented from an object-oriented perspective.

†This research is funded in part by the Commission of the European Communities under the ESPRIT Basic Research Project MIX: Modular Integration of Connectionist and Symbolic Processing in Knowledge Based Systems, ESPRIT-9119, and by CDTI, the Spanish Agency for Research and Development, under CORAGE: Control mediante Razonamiento Aproximado y Algoritmos Genéticos, PASO-PC095.

The MIX consortium is formed by the following institutions and companies: Institut National de Recherche en Informatique et en Automatique (INRIA-Lorraine/CRIN-CNRS, France), Centre Universitaire d'Informatique (Université de Genève, Switzerland), Institut d'Informatique et de Mathématiques Appliquées de Grenoble (France), Kratzer Automatisierung (Germany), Fakultät für Informatik (Technische Universität München, Germany) and Dept. Ingeniería de Sistemas Telemáticos (Universidad Politécnica de Madrid, Spain).

The CORAGE consortium is formed by UITESA, Dept. Ingeniería de Sistemas Telemáticos (Universidad Politécnica de Madrid), IBERDROLA and Grupo APEX.



The multiagent architecture that has been developed can be used to implement any kind of distributed application, not only distributed control systems. In this general framework, several software elements (the agents) cooperate to reach their own goals. The system designer has to decide the set of agents that will be involved in the task, specifying their particular capabilities. At a high level, this part of the design work is carried out by describing the agents in ADL (Agent Description Language) (see below and Gonzalez, J.C. et al. (1995)).

The problem of how to intercommunicate data between agents is solved by using a common knowledge representation language.

As an example of how to apply this architecture for distributed control, a real system is going to be shown: a fossil power plant. In particular, the goal is to achieve strategic (not tactical) control: the system has to reduce the heat rate (the ratio combustible/generated power), suggesting appropriate set points for automatic controllers or human operators.

At this moment, two versions (distributed and non-distributed) of a control system for a real power plant sited in Palencia (Spain) (Garcia, J.A. et al., 1993) are being implemented. This paper is focused mainly on the distributed one.

2. AGENTS DESCRIPTION

The proposed architecture has been designed according to the following lines:

• Use of mechanisms of encapsulation, isolation and local control: each agent is an autonomous, independent entity.

• No assumptions are made regarding agents' knowledge or their problem-solving methods.

• Flexible and dynamic organization is allowed.

Fig. 1. Agent model (agent object with control block, communication block including mailbox and destination/mailbox policies, and database with agent state)

Every agent is composed of a control block, a database (including an agent model, an environment model, the agent state, some private objects and global data), and a communications block (the network communications model and a mailbox).

Any agent may include some agent goals (processes which start when the agent is born), offered services (the agent offers a set of services to the rest of the agents, and these services may be executed in concurrent mode --as an independent process-- or in non-concurrent mode), offered primitives (a set of internal services which may modify some of the agent's private objects) and required services (a list with the names of the services that this agent requires).

One of the major features of these agents is that their services (if concurrent) are executed as separate processes, so the agent control loop can continue its job. In this way, the same (concurrent) service can be executed several times, each one called from a different agent.
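The loop-plus-workers structure described above can be sketched as follows (our illustration; `Agent`, `offer` and the message format are assumptions, not the MIX API, and Python threads stand in for independent processes):

```python
import queue
import threading

class Agent:
    """Toy agent: a mailbox-driven control loop that launches each
    concurrent service in its own worker, so the loop stays responsive."""

    def __init__(self):
        self.mailbox = queue.Queue()
        self.services = {}

    def offer(self, name, fn, concurrent=True):
        self.services[name] = (fn, concurrent)

    def control_loop(self, n_requests):
        workers = []
        for _ in range(n_requests):
            name, arg, reply = self.mailbox.get()
            fn, concurrent = self.services[name]
            if concurrent:   # run as an independent worker
                w = threading.Thread(
                    target=lambda f=fn, a=arg, q=reply: q.put(f(a)))
                w.start()
                workers.append(w)
            else:            # non-concurrent mode blocks the loop
                reply.put(fn(arg))
        for w in workers:
            w.join()

# Usage: the same concurrent service called twice, from different callers.
agent = Agent()
agent.offer("double", lambda x: 2 * x)
r1, r2 = queue.Queue(), queue.Queue()
agent.mailbox.put(("double", 3, r1))
agent.mailbox.put(("double", 4, r2))
agent.control_loop(2)
```

The default-argument trick in the lambda pins each worker to its own request, mirroring the "same service, several simultaneous executions" behaviour described above.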

3. MIX MULTIAGENT PLATFORM

At the network level, coordination among agents is carried out through specialized agents (called "yellow pages" or YP). Whenever an agent is launched, it first registers with a YP, informing it about its net address, its offered services and the services it will request from other agents. In the same way, agents can subscribe to "groups". Groups refer to dynamic sets of agents, and can be used as aliases in service petitions. So, service petitions can be addressed to particular agents, to every agent in a group, or to all the agents offering a service.

YP agents continuously update the information needed by their registered agents. Therefore, these are able to establish direct links among themselves, thus avoiding collapse due to YP saturation or (some) network failures.

Regarding agent communication, several primitives are offered, including different synchronization mechanisms (synchronous, asynchronous or deferred) and higher level protocols, such as Contract Net.

At this moment, the MIX platform (Gonzalez, J.C. et al., 1995) is made up of four elements:

• MSM (Multiagent System Model) C++ library, with the low level functionality of the platform. It is a modified version of the work carried out by Dominguez (1992).

• ADL translator. ADL (Agent Description Language) is the language designed to specify agents. ADL files gather agent descriptions, and the translator generates C++ files and the appropriate makefile to obtain executables.


• CKRL ToolBox. A restricted version of CKRL (Common Knowledge Representation Language), developed by the MLT ESPRIT consortium (Cause, K. et al., 1993), has been implemented to interchange information between agents¹. This toolbox includes static and dynamic translators from CKRL descriptions to C++ classes and objects and vice-versa.

• Standard ADL agent definitions and CKRL ontologies.

4. AN APPLICATION: ECONOMIC CONTROL OF A FOSSIL POWER PLANT

A fossil power plant is a very complex process with a large number of variables upon which operators can actuate. The objective of this control system is to reduce the combustible consumption while the generated power is kept constant. The first problem is that there does not exist a reliable model of the process, so the system needs to learn how the power plant works. The second problem is that the quality of the combustible used --a mix of anthracite and soft coal in the particular case of the power plant where the control system is going to be installed-- changes every 5 minutes (there is a small homogenization of the last hour's combustible, so coal quality changes with a smooth curve). This coal quality is part of the heat rate calculation, which is the optimization variable.

Fig. 2. Application diagram (power plant, acquisition system, controllers, learning system, operators)

This last problem implies that the control system can have access only to an indirect estimation of the real heat rate. To solve it, a new performance criterion has to be determined.

¹The platform lets us use any other language for intercommunication between processes. In this way, KIF (Knowledge Interchange Format) (M. Genesereth, 1992), another widely used language, is being considered as the second native language of the platform.

At design time, two variables are being analyzed to substitute the heat rate:

1.- Principal air flow to the boiler: This air flow carries the coal powder from the mills to the boiler. So if this variable decreases, the combustible consumption decreases, whatever the coal quality is.

2.- Boiler output gas temperature: A common sense analysis says that a lower temperature at the output of the boiler is better than a high one. Otherwise heat is being wasted, so the plant is burning too much coal.

In both cases, the real optimization variable will be the ratio selected-variable/generated power, to obtain a relative consumption. After some performance tests in the power plant, one of the two variables will be selected as the objective.

In order to obtain good quality values for the control variables, a data acquisition system will filter the signals that reach the control system from the sensors. The acquisition module gets 200 variables, and gives 23 to the optimization module. These 23 variables are known as the context vector. The optimization module will give 11 suggestions (over 11 operation variables) to controllers or operators --the so-called operation vector. The acquisition/filtering module is a very important part of the whole system: reliable inputs are even more needed than in the case of conventional control systems.

The control system (for some variables, a suggestion system) uses fuzzy logic to obtain the operation vector every 10 minutes. In order to make this fuzzy controller more accurate, the space of known states is divided into several big areas (called macrostates). These macrostates can be defined by experts (Velasco, J.R. et al., 1992), or computed using fuzzy clustering techniques (Velasco, J.R. and Ventero, F.E., 1994) or a neural network. In this case, the second approach has been used.

To create the fuzzy knowledge bases, a modified version of the C4.5 algorithm (Quinlan, J.R., 1993) is used. This modification creates fuzzy rules from sample data files: to make the C4.5 function learn, the system must provide it with a set of input vectors (context vectors) and the appropriate class for each vector. The system compares two consecutive vectors to determine when a cost reduction has been obtained and thus classifies the actions in the operation vector as bad, regular or good ones. After this classification, the algorithm creates fuzzy control rules.
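The labelling step can be sketched as follows (our illustration; the ±1% tolerance, the function name and the relative-change criterion are assumptions, not taken from the paper):

```python
def label_actions(costs, tol=0.01):
    """Compare consecutive cost values (e.g. the indirect heat rate) and
    tag the action taken between them as good, regular or bad, producing
    the class labels needed by the rule induction step."""
    labels = []
    for before, after in zip(costs, costs[1:]):
        change = (after - before) / before   # relative cost change
        if change < -tol:
            labels.append("good")            # cost reduction achieved
        elif change > tol:
            labels.append("bad")             # cost went up
        else:
            labels.append("regular")         # no significant change
    return labels
```

Each labelled (context vector, action, class) triple would then be fed to the modified C4.5 function to induce fuzzy control rules.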

The control system has as many rule bases as macrostates. When a new data vector is obtained, the control system asks the fuzzy clustering function for the appropriate macrostate. Since a given state may belong with different degrees to several macrostates, this function selects the knowledge bases (KB in the following) to be used, along with their respective validity degrees.
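The selection of KBs with validity degrees can be illustrated with a small sketch. The inverse-distance membership function and the `cutoff` parameter below are illustrative choices, not the system's actual fuzzy clustering method.

```python
import math

def memberships(vector, centres):
    """Fuzzy membership of a state in each macrostate: inverse-distance
    weights normalized to sum to 1 (one simple choice among many)."""
    dists = [math.dist(vector, c) for c in centres]
    if any(d == 0.0 for d in dists):               # state exactly on a centre
        return [1.0 if d == 0.0 else 0.0 for d in dists]
    inv = [1.0 / d for d in dists]
    total = sum(inv)
    return [w / total for w in inv]

def select_kbs(vector, centres, cutoff=0.2):
    """Return (KB index, validity degree) for every macrostate whose
    membership exceeds `cutoff` (a hypothetical parameter)."""
    return [(i, m) for i, m in enumerate(memberships(vector, centres))
            if m > cutoff]
```

Each selected KB is then evaluated, and its conclusions are weighted by the corresponding validity degree.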

If the performance of the power plant is bad after several input vectors and several suggestions, the control system will ask the rule base generator for a new KB. This new KB will replace the old, bad one.

Finally, suggestions made by the control system are used as set points by conventional controllers or human operators.

5. ADL AND CKRL SPECIFICATION

For the design of this application with the MIX platform, this distributed control system has to be seen as a set of agents with their respective goals and services, communicating through message exchange. Figure 3 shows a graphical description of this system, where each main action or group of actions may be seen as an agent with several goals/services.

Fig. 3. Agents description

The Acquisition agent gets data from the process sensors and gives context vectors to the Optimizer upon demand. The Optimizer asks the Class_States agent for the appropriate macrostate, and uses the corresponding Knowledge Base(s) to obtain the operation vector. The values of the variables of this vector are sent to specific Controllers as set points or are shown to operators for manual adjustment. The Optimizer agent will ask the Learning system for a new KB if it sees that the cost value (the indirect heat rate) is increasing.

The MIX architecture uses ADL (Agent Description Language) as a specification/design language. From the ADL file, the MIX platform creates C++ agent files. After compiling and linking these files with the libraries, each agent becomes an independent executable program which can run on a different computer. The complete ADL file for this application is shown in an appendix at the end of the paper. In this section only the agent definition process is presented, focusing on the Optimizer agent.

The Optimizer agent has as its proper goal the optimization of the heat rate. The pseudocode for this goal is as follows:

Repeat forever
    Get context vector
    If heat rate is bad for n times
        Ask for new Knowledge Bases
    Ask for macrostate(s)
    Generate operation vector
    Set operation points to the controllers
    Tell operators manual actions
    Wait delay-time

In the code, boldface lines show service requests that will be addressed to different specialized agents: the Acquisition agent will give the context vector, the Learning agent will create new KBs, the Class_States agent will classify the context vector, and each Controller will try to adjust the different set points.
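The optimize goal above can be rendered as a small sketch. The agent objects and method names (`give_last_data`, `give_rb`, `classif_state`) mirror the ADL service names, but this is an illustration, not the MIX platform's actual C++ API.

```python
class Optimizer:
    """Sketch of the Optimizer goal; collaborating agents are plain
    objects standing in for MIX service requests."""

    def __init__(self, acquisition, learning, class_states, n_bad=3):
        self.acquisition = acquisition
        self.learning = learning
        self.class_states = class_states
        self.n_bad = n_bad            # consecutive bad readings before asking for a new KB
        self.bad_count = 0
        self.kb_requests = 0
        self.kb = None

    def step(self):
        ctx = self.acquisition.give_last_data()        # Get context vector
        self.bad_count = self.bad_count + 1 if ctx["bad"] else 0
        if self.bad_count >= self.n_bad:               # heat rate bad for n times
            macro = self.class_states.classif_state(ctx)
            self.kb = self.learning.give_rb(macro)     # Ask for new Knowledge Bases
            self.kb_requests += 1
            self.bad_count = 0
        macro = self.class_states.classif_state(ctx)   # Ask for macrostate(s)
        return {"macrostate": macro}                   # stands in for the operation vector
```

Setting controller set points and notifying operators (the remaining pseudocode steps) would be further service requests issued from `step`.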

However, at the design level, the agent description only needs to know the names of the required services (it does not have to know which agents will be available to perform them), the names of the functions that implement the services and the goal, and the C++ file where these functions are defined. The ADL description of the Optimizer agent is:

AGENT Optimizer -> BaseAgent
  RESOURCES
    REQ_LIBRARIES: "optimizer.C"
    REQ_SERVICES: Give_Last_Data;
                  Give_RB;
                  Classif_State;
                  Set_Point;
                  Send_Vector
  GOALS
    Optimize: CONCURRENT optimize
END Optimizer

When a service is specified, its input and output types must be specified too. For instance:

AGENT Learning -> BaseAgent
  RESOURCES
    REQ_LIBRARIES: "learning.C"
    REQ_SERVICES: Give_Histo_Classified
  SERVICES
    Give_RB: CONCURRENT give_rb
      REQ_MSG_STRUCT powplant::Class
      ANS_MSG_STRUCT powplant::Rules
END Learning

In this case, Class and Rules are CKRL structures defined in the CKRL file. The MIX platform provides translation mechanisms to convert CKRL objects into C++ variables and vice versa. The complete CKRL file is shown in the appendix.
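The idea of such a translation mechanism can be illustrated for the Vector concept (measures plus valid flags). The textual format below is invented for illustration only; it is not the MIX platform's actual CKRL encoding.

```python
def encode_vector(data, valid):
    """Serialize a powplant::Vector-like structure (measures plus valid
    flags) into a flat text message. Illustrative format, not MIX's."""
    assert len(data) == len(valid)
    pairs = [f"{d:.6f}:{v}" for d, v in zip(data, valid)]
    return "Vector " + " ".join(pairs)

def decode_vector(msg):
    """Inverse of encode_vector: recover the measures and valid flags."""
    kind, _, body = msg.partition(" ")
    assert kind == "Vector"
    data, valid = [], []
    for pair in body.split():
        d, v = pair.split(":")
        data.append(float(d))
        valid.append(int(v))
    return data, valid
```

A round trip through encode/decode preserves the structure, which is the property any such translation layer must guarantee.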


6. CONCLUSIONS

Multiagent systems are proposed as an adequate approach for the design and implementation of distributed control systems. In particular, the multiagent platform developed for the MIX ESPRIT-9119 project is being used for the economic control of a fossil power plant. Although the full evaluation of the system has not yet been finished, we can advance some preliminary conclusions. In comparison with the conventional (centralized) architecture previously used, the distributed solution shows evident advantages:

• Interfaces are simpler, speeding up the development phase of the system life cycle.

• Control is more versatile, in the sense that this approach facilitates the simultaneous use of several controllers based on different techniques (with their own errors depending on the problem state).

• If error estimation is available as part of the output of the controllers, this information can be used to improve system accuracy.

• If a real-time problem is faced, as the controllers have in general different response times, the system may decide upon the solutions at hand at any instant.

• Systems are more reliable in terms of fault tolerance and protection against noise.

7. REFERENCES

Causse, K., M. Csernel and J.U. Kietz (1993). Final Discussion of the Common Knowledge Representation Language (CKRL). MLT Consortium, ESPRIT project 2154, Deliverable D2.3.

Dominguez, T. (1992). Definición de un modelo concurrente orientado a objetos para sistemas multiagente. Ph.D. Thesis. E.T.S.I. Telecomunicación, Universidad Politécnica de Madrid (in Spanish).

Garcia, J.A., J.R. Velasco, J.A. Castineira and J. Martin (1993). CORAGE: Control por Razonamiento Aproximado y Algoritmos Genéticos. Propuesta de Proyecto. Project proposal for PASO-PC095 CORAGE Project (in Spanish).

Genesereth, M., R. Fikes and others (1992). Knowledge Interchange Format, version 3.0. Reference manual. Computer Science Department, Stanford University.

Gonzalez, J.C., J.R. Velasco, C.A. Iglesias, J. Alvarez and A. Escobero (1995). A Multiagent Architecture for Symbolic-Connectionist Integration. MIX Consortium, ESPRIT project 9119, Deliverable D1.

Quinlan, J.R. (1993). C4.5: Programs for Machine Learning. Morgan Kaufmann, San Mateo, CA, USA.

Velasco, J.R., G. Fernandez and L. Magdalena (1992). Inductive Learning Applied to Fossil Power Plants Control Optimization. In: IFAC Symposium on Control of Power Plants and Power Systems, Munich, Germany.

Velasco, J.R. and F.E. Ventero (1994). Some Applications of Fuzzy Clustering to Fuzzy Control Systems. In: 3rd Int. Conf. on Fuzzy Theory and Technology (P.P. Wang (ed.)), 363-366, Durham, NC, USA.


APPENDIX: ADL AND CKRL FILES

A.1 ADL file

#DOMAIN "power_plant_domain"
#YP_SERVER "madrazo.gsi.dit.upm.es:6050"
                // Server of Yellow Pages Agent
#COMM_LANGUAGE CKRL
#MIX_LIBRARY "/export/home2/mix/tools/MIXcurrent"
#ONTOLOGY "powplant.ckrl"

// **** YP_Agent provides general services, such as
// checkin, checkout, etc.

AGENT YP_Agent -> YPAgent
END YP_Agent

// **** The Acquisition agent obtains data from
// sensors (this is its goal), filters them, and
// offers a service to give the last obtained data
// to any other agent (in this example, to Optimizer)

AGENT Acquisition -> BaseAgent
  RESOURCES
    REQ_LIBRARIES: "acquisition.C"
  GOALS
    Collect_Data: CONCURRENT collect_data
  SERVICES
    Give_Last_Data: CONCURRENT give_last_data
      ANS_MSG_STRUCT powplant::Vector
END Acquisition

// **** The Optimizer agent asks for the last
// obtained data vector, asks the Class_States
// agent for the correct class of this vector,
// and obtains the rule-file-name of this class
// by asking Create_RB.
// Suggestions for controlling the process are
// given through the standard output.

AGENT Optimizer -> BaseAgent
  RESOURCES
    REQ_LIBRARIES: "optimizer.C"
    REQ_SERVICES: Give_Last_Data;
                  Give_RB;
                  Classif_State;
                  Set_Point;
                  Send_Vector
  GOALS
    Optimize: CONCURRENT optimize
END Optimizer

// **** Class_States offers two services:
// Classif_State to Optimizer, in order
// to classify vectors into the appropriate
// macrostate, and Give_Histo_Classified, which
// supplies a file with data to learn rules.

AGENT Class_States -> BaseAgent
  RESOURCES
    REQ_LIBRARIES: "class_states.C"
  GOALS
    Create_States: CONCURRENT create_states
  SERVICES
    Classif_State: CONCURRENT classif_state
      REQ_MSG_STRUCT powplant::Vector
      ANS_MSG_STRUCT powplant::Class;
    Give_Histo_Classified: CONCURRENT give_histo
      ANS_MSG_STRUCT powplant::Vector
END Class_States

// **** This agent creates rule bases, and
// gives them to the Optimizer agent when
// they are needed

AGENT Learning -> BaseAgent
  RESOURCES
    REQ_LIBRARIES: "learning.C"
    REQ_SERVICES: Give_Histo_Classified
  SERVICES
    Give_RB: CONCURRENT give_rb
      REQ_MSG_STRUCT powplant::Class
      ANS_MSG_STRUCT powplant::Rules
END Learning

// **** Interface shows the operation vector
// to the operator

AGENT Interface -> BaseAgent
  RESOURCES
    REQ_LIBRARIES: "interface.C"
  GOALS
    Show: CONCURRENT show_actions
  SERVICES
    Send_Vector: CONCURRENT send_vec
      REQ_MSG_STRUCT powplant::Vector
END Interface

// **** Each Controller sets the specified value
// for the variable that it controls.

AGENT Controller_1 -> BaseAgent
  RESOURCES
    REQ_LIBRARIES: "controllers.C"
  GOALS
    Control: CONCURRENT control
  SERVICES
    Set_Point: CONCURRENT set_point
      REQ_MSG_STRUCT powplant::Point
END Controller_1

AGENT Controller_n -> BaseAgent
  RESOURCES
    REQ_LIBRARIES: "controllers.C"
  GOALS
    Control: CONCURRENT control
  SERVICES
    Set_Point: CONCURRENT set_point
      REQ_MSG_STRUCT powplant::Point
END Controller_n

A.2 CKRL file

// Class of the process data.
// The class number is used to select the
// appropriate rule base for the process state
defsort intpos range (integer (1:*));
defproperty class_number sortref intpos;
defconcept Class
  relevant class_number;

// Process data is a list of values of
// variables obtained from sensors.
// Each variable has two values:
// the measure and a valid flag
defsort data list (real (0.0:1.0));
defsort valid list (integer (0:1));
defproperty vectordata sortref data;
defproperty vectorvalid sortref valid;
defconcept Vector
  relevant vectordata, vectorvalid;

// Set points are normalized real data
defsort point range (real (0.0:1.0));
defproperty pointdata sortref point;
defconcept Point
  relevant pointdata;

// Rules for rule-base communication;
// rules are written as strings, and
// decoded by the optimizer
defsort regla_s range string;
defproperty regla_p sortref regla_s;
defconcept Rules
  relevant regla_p;


Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

ON THE MODELLING OF DISTRIBUTED REAL-TIME CONTROL APPLICATIONS

Martin Torngren

DAMEK-Mechatronics, Dept. of Machine Design, The Royal Institute of Technology, S-100 44 Stockholm, Sweden.

Email: [email protected], Fax: +46-8-20 22 87

ABSTRACT: For the successful design and implementation of distributed real-time control applications, models that adequately state real-time behavioural requirements and provide information needed to assess different decentralization approaches are essential. Real-time behavioural models for control applications based on precisely time-triggered actions, synchronous execution and the specification of multirate interactions are introduced. Contemporary computer science models are subsequently evaluated. The use of the modelling approach in design of distributed control systems is discussed.

Keywords: Distributed control, Control applications, Distributed computer control systems, Real-time systems, Modelling.

1. INTRODUCTION

Technological progress and user needs are stimulating new areas of distributed computer control systems (DCCS). In a historical perspective, DCCS were first introduced in process control applications in the 70s, followed by manufacturing systems, and in the late 80s/early 90s by machine-embedded control systems. Further, related to this, there is the mainstream of general-purpose distributed processing and telecommunication systems. In view of the multitude of available distributed system concepts and applications, it is clearly important and natural to focus on the characteristics and requirements of a particular application class when considering a DCCS approach. In this context, and in particular for distributed systems, modelling is very important.

This paper is motivated by distributed real-time control applications as found, or being developed, in for example vehicles, aircraft, robots and process control. Even though such distributed applications have things in common with previous distributed applications, real-time requirements are different. Moreover, the inherent interdisciplinarity of real-time control applications constitutes an important problem for DCCS design.

Modelling is necessary to capture application requirements and to support the design process from specification to implementation; that is, the modelling approach should support successive refinements and be formal enough to provide a basis for automated tool support. For real-time control applications it is important that models support interdisciplinary design (e.g. the necessary cooperation between computer and control engineers). Key issues with respect to decentralization need modelling support. A DCCS implementation involves fundamental issues concerned with how to find a suitable mapping between an application and the underlying architecture. There are then many degrees of freedom for a designer with respect to implementation-oriented application structuring, allocation and the choice of execution strategy. Such choices can only be found and evaluated if the application is modelled appropriately.

Paper purpose and structure: The paper focuses on real-time behavioural models of control applications. Section 2 reviews essential application characteristics. Section 3 presents a brief survey of existing real-time modelling approaches. Section 4 presents modelling extensions for real-time control applications and indicates their use in design. Section 5 gives conclusions.

2. CHARACTERISTICS OF REAL-TIME CONTROL APPLICATIONS

Control applications are often described using modes and a hierarchical structure. For each mode, "top-level" functions that provide system services can be identified; compare for instance with the functions accelerate/decelerate in an automobile. These involve an interaction between several subsystems (decomposed functions), e.g. engine, braking, transmission and clutch control. A top-level function may therefore not have a clear correspondence to a particular unit or subsystem. Top-level functions may have different characteristics depending on the particular mode. This could relate to, for example, sampling frequencies or the control algorithms used. Top-level functions can be further decomposed in a hierarchical fashion until elementary functions have been obtained.

2.1 Control, data-flow and multirate interactions

There are many names used to denote a thread of control in a computer system, including use case, process, task, thread of control, transaction, etc. In this paper the term activity will be used to denote control-flow which may stretch over several elementary functions. It is important to distinguish between a specified activity and its implementation. In the implementation, an activity may involve the execution of sequences of elementary functions in several nodes. These entities are referred to as processes.

A simple control system is characterized by few couplings between elementary functions. In more complex systems, elementary functions can be involved in the provision of several top-level functions, often performed with different sampling rates; such systems are referred to as multirate systems. Data-flows between elementary functions typically have a connectivity of one-to-one or one-to-many. Data-flows take place within a thread of control and between threads of control with equal or different periods. Interactions between threads of control with different periods are referred to as multirate interactions, Torngren (1995). A specific problem concerns how to model and treat such interactions. Consider for example a multiple-drive system in a paper machine where there is a need to maintain a strict tension of the paper, thus requiring the associated drives to be sufficiently synchronized. There is a local controller per drive and a coordinating controller that, based on feedback from the local controllers, maintains the synchronization by providing speed references to the local controllers. If the system is multirate, it is typically desirable to specify the control delays (constant or bounded) of the data-flows between local and coordinating controllers in order to fulfil control objectives. A control delay can in principle be defined as the delay from sampling to actuation.

Data-flows include discrete (e.g. system mode) and continuous (e.g. shaft position) valued data. For discrete data (e.g. limit switch on or off, system mode one to five) a value change is very significant. Continuous data (e.g. shaft position or position reference) is a representation of a continuous entity which is discretized for use in the computer, e.g. by analog to digital conversion or by computation. The difference between two consecutive samples in this case depends on the bit resolution (giving a quantization error), the sampling frequency and the measurement accuracy (in case of sensor sampling). With sufficient resolution, sampling frequency and low noise, the difference is usually much less significant compared to the former discrete case. These two types of data-flows are referred to as discrete vs. continuous.
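The quantization argument can be made concrete with a small sketch: for an n-bit representation over a normalized range, the worst-case quantization error is half a quantization step. Function and parameter names here are illustrative.

```python
def quantize(x, bits, lo=0.0, hi=1.0):
    """Quantize a continuous value to an n-bit representation over
    [lo, hi]; the worst-case error is half a quantization step."""
    levels = (1 << bits) - 1          # number of quantization steps
    step = (hi - lo) / levels
    return lo + round((x - lo) / step) * step
```

With 12-bit resolution the step over [0, 1] is about 2.4e-4, so consecutive samples of a slowly varying continuous signal differ far less significantly than a change in a discrete signal such as a limit switch, which jumps a full step.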


2.2 Timing requirements

Control applications often involve both sampled-data control and sequence control (event-response control). The control system thus needs to handle external events (periodic or aperiodic) and perform periodic control. The triggering of activities/elementary functions is expressed using different terminology in different disciplines. In computer science and in control theory the following terms are used respectively: periodic (time-triggered)/aperiodic (event-triggered), Kopetz et al. (1991), vs. continuous/discrete/discrete event systems. These terms are similar but there are some important discrepancies. The most common interpretation of periodic execution in computer science is an execution that takes place some time during its period. This has the effect of introducing period variations (termed jitter). Periodic operation as understood in control theory, however, is interpreted to mean periodic sampling and actuation with zero jitter, perfect synchronization in multirate systems and constant control delays (also for multirate interactions), Wittenmark et al. (1995), Torngren (1995). There is thus a gap between real-time control applications and computer science models.

From discrete-time control theory the following three requirements are readily identified: periodic actions with equidistant intervals, synchronized actions, and response time, see Astrom and Wittenmark (1990). Whereas disturbances, model inaccuracies and sensor noise have been extensively treated on the controlled-process side, little work has treated deficiencies in the computer implementation of the control system with respect to time-variations, Wittenmark et al. (1995), one exception being response times. Time-variations can refer to varying sampling periods, i.e. varying intervals between successive sampling actions; varying control delays, i.e. varying intervals between related sampling and control actions; or non-(perfectly) synchronized sampling and actuation in multivariable and multirate control systems. Variations are, among other things, due to run-time scheduling and varying execution times. Time-variations have the effect of deteriorating control performance and can potentially cause instability, Torngren (1995).

To capture the behavioural requirements of sampled-data control systems, the concept of "precisely time-triggered actions" with tolerances was introduced by Torngren (1995). They are used for specifying the timing behaviour of sampling and actuation actions (corresponding to a realistic interpretation of the assumption that such actions occur exactly at specified time instants). The applicability of precisely time-triggered actions with tolerances is however believed to be wider, and they can be introduced for both relative and absolute timing constraints. Kopetz and Kim (1990) used a similar notion to manage consistency constraints in distributed systems (update of discrete data, simultaneous mode changes). For sampled-data applications a relative accuracy (bounded clock drift) is sufficient. The tolerances can also be interpreted as synchronization requirements (precision) for precisely time-triggered actions in multivariable control systems, e.g. synchronizing the actuation of several outputs. Tolerance specifications can be used to define the allowed deviation from nominal timing. For automatic control applications, tolerances can in principle be derived from control analysis, Torngren (1995).
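A tolerance specification of this kind is straightforward to check against observed behaviour. The following sketch (illustrative, not from the cited work) tests whether a sequence of action instants stays within a tolerance of the nominal periodic grid.

```python
def within_tolerance(timestamps, period, tol):
    """Check that observed action instants stay within +/- tol of the
    nominal time-triggered grid t0 + k*period (t0 = first timestamp)."""
    t0 = timestamps[0]
    return all(abs((t - t0) - k * period) <= tol
               for k, t in enumerate(timestamps))
```

The same check, applied to the difference between two related action sequences, expresses the synchronization (precision) interpretation of the tolerance.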


3. APPLICABILITY OF PROPOSED MODELS FOR REAL-TIME CONTROL SYSTEMS

Other surveys of modelling approaches can be found in Lawson (1991), Motus and Rodd (1994) and Bucci et al. (1995). Modelling approaches are here categorized based on their origin, or essential aspect.

Structured methods based on control and/or data flow. Examples in this category include the models used in the MARS system, Kopetz et al. (1991), HRT-HOOD, Burns and Wellings (1994), GRAPE, Lauwereins et al. (1995), and DARTS/DA, Gomaa (1989). This category can be further subdivided into control- and data-flow oriented models. Models with a basis in real-time scheduling theory are characterized by a focus on control flow and explicit consideration of timing specifications, although with an emphasis on response time requirements. Methods in structured analysis have been extended towards real-time systems by introducing control-flow aspects into data-flow graphs, e.g. Hatley and Pirbhai (1987). These methods consider timing specifications only in a limited fashion, and further mix control- and data-flow specifications, which reduces clarity.

Finite automata and extensions. Examples in this category include Modecharts, Jahanian et al. (1988), and Petri nets and their extensions, Bucci et al. (1995). These models focus on system behaviour. However, real-time is most often treated as an extension. The Real Time Logic used in Modecharts is an exception in this regard and allows the specification of both relative and absolute timing of events. However, only response time requirements are considered in the given examples. Petri nets are powerful, but have been criticized for not scaling up to larger applications and for making modelling complex, Motus and Rodd (1994).

Formal methods. Examples include CSP, Hoare (1978), the Q-model, Motus and Rodd (1994), and synchronous programming languages, Halbwachs (1993). One major problem with the majority of these approaches is their focus on non-real-time systems and the problems of later incorporating real-time. The synchronous programming languages provide a family of models, with a state or data-flow style. In these models, actions are considered to take "no observable time", e.g. instantaneous broadcasts of data are considered. This facilitates verification of qualitative temporal behaviour. Specification of timing requirements is possible, but the specification of multirate interactions is unclear. The Q-model is discussed below.

Models in automatic control and signal processing. The models in this category describe functional (transformations), behavioural aspects and data-flow in terms of discrete-time equations. Timing specifications and assumptions were briefly discussed in Section 2.

Bucci et al. (1995) classify real-time models into operational (essentially state-machine based, compare with the first two groups) and descriptive (based on mathematical notations, compare with the last two groups) approaches. This corresponds well to the above classification. The surveyed groups are typically good at some, but not all, of the structural, functional and behavioural aspects that are all essential for the modelling of real-time systems. A few selected modelling approaches are now discussed further.


3.1 Precisely time-triggered actions

The fact that timing requirements other than response times have largely been neglected is being realized to some extent in the "real-time computer science community", see e.g. Locke (1992), Lawson (1991), Motus and Rodd (1994). A pragmatic approach to implement precise time-triggering (and to reduce time-variations) in priority-based scheduling is to create a separate "high" priority process that performs the jitter-sensitive actions. Since the priority of the process is relatively high, the time-variation can be made small.

In static scheduling, release time and deadline attributes for tasks, {R, D}, are used, Xu and Parnas (1990). A precisely time-triggered action can be translated into these attributes according to a given tolerance specification. If precedence relations are added, i.e. as control-flow specifications over a number of elementary functions (objects, processes, etc.), a sequence of time-triggered actions can be described. To describe a constant delay, consider a control system performing sampling (Cs), computation (Cc) and actuation (Ca), with execution times given in parentheses. Assume that the same tolerance, Tol, is given on the period, T, and the control delay, tc. This activity then needs to be modelled as the following three precedence-related processes:

Sampling process:    {Cs, Rs=0,   Ds<Tol,    T}
Computation process: {Cc, Rc=Tol, Dc<tc,     T}
Actuation process:   {Ca, Ra=tc,  Da<tc+Tol, T}

Release times and deadlines are in the example used to enforce the precedence relations as well as the two precisely time-triggered actions. The combination of precedence relations and {R, D} attributes for each process only seems to have been applied in static scheduling. Limited attempts in this direction for priority-based scheduling are described by Audsley et al. (1995). However, deadlines (shorter than periods) and release times have been used in process models for priority-based scheduling, Audsley et al. (1995). Similar to the example, this model can be used to implement precedence relations and precisely time-triggered actions. There are also response-time oriented approaches based on precedence graphs, used together with both static and priority-based scheduling. A deadline is in this case associated with the complete precedence graph; compare e.g. with HRT-HOOD, Burns et al. (1994). The MARS system scheduler, Fohler (1994), guarantees that a period specification corresponds to the implemented period.
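The translation of the example into {R, D} windows can be sketched as follows. Parameter names follow the text; treating the deadlines as inclusive is a simplification of the strict inequalities above.

```python
def process_windows(Cs, Cc, Ca, T, tc, Tol):
    """Release/deadline windows (within one period of length T) for the
    three precedence-related processes of the example in the text."""
    return {
        "sampling":    {"C": Cs, "R": 0.0, "D": Tol},
        "computation": {"C": Cc, "R": Tol, "D": tc},
        "actuation":   {"C": Ca, "R": tc,  "D": tc + Tol},
    }

def feasible(start, p):
    """A start time is feasible if the process runs wholly inside its
    window (deadlines treated as inclusive here)."""
    return p["R"] <= start and start + p["C"] <= p["D"]
```

Any static schedule that places each process at a feasible start time realizes both precisely time-triggered actions and keeps the control delay within tc + Tol.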

While release times, deadlines and precedence relations are useful for implementation, they are definitely not ideal for the specification of control systems. Further, time-synchronized actions are not considered by these approaches. Synchronization, when provided in computer science models, most often refers to resource synchronization, blocking communication or qualitative timing properties. One notable exception with regard to precisely time-triggered and synchronized actions is the Q-model, Motus and Rodd (1994). A group of synchronous and time-triggered functions is referred to as a synchronous cluster. A time trigger is defined in terms of a "null channel", and the activation is specified as follows: the equivalence interval defines the maximum response time for functions within a cluster, counted from the time-trigger. The simultaneity interval defines the maximum time difference in the activation of functions within a cluster. The tolerance interval is used essentially as in this paper. The simultaneity interval is supposed to be derived from the tolerance by subtracting a safety margin.

3.2 Evaluation with respect to multirate interactions

Multirate interactions hardly seem to have been addressed at all in computer science models. Delays in communication between pairs of periodic functions have been investigated by Motus and Rodd (1994), but not in the context of multirate sampled-data systems. One recent attempt in this direction is the introduction of so-called end-to-end deadlines/delays, Klein et al. (1994), Tindell and Clark (1994). Although not explicitly considered for multirate interactions, an end-to-end deadline can be associated with the data-flow between activities with different rates. However, unless time-synchronized execution between activities is considered, the approach can only be used to specify bounded delays, as further discussed in Section 4.

A rather similar approach could be based on a specification of the allowed "age" of data in multirate interactions. Although the notion of data age has been used (primarily in relation to consistency), it does not appear to have been used in relation to multirate interactions. Another approach is to consider a time-triggered precedence graph which has a period equal to the largest common divisor (LCD) of the periods. The elementary functions that are part of the graph are specified to execute at an integer multiple of the LCD. While this model is simple, it is also limited, since restrictions on the execution times of functions are introduced (each function must complete within the LCD). A similar technique is well known in signal processing, allowing operations like down- and up-sampling. In the GRAPE system for digital signal processing applications, Lauwereins et al. (1995), multirate relations of this type are considered, referred to as "merged synchronous execution". GRAPE is based on extended data-flow models in which timing requirements are oriented towards bounded delays.
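The LCD-based precedence-graph view can be sketched as follows. This illustrative `lcd_schedule` returns the base period and, for each base tick over one hyperperiod, the indices of the functions due to execute.

```python
from functools import reduce
from math import gcd

def lcd_schedule(periods):
    """Time-triggered precedence-graph view of a multirate system: the
    graph period is the largest common divisor of the function periods,
    and each function runs at an integer multiple of it."""
    base = reduce(gcd, periods)                       # LCD of the periods
    hyper = reduce(lambda a, b: a * b // gcd(a, b), periods)  # hyperperiod
    ticks = [[i for i, p in enumerate(periods) if t % p == 0]
             for t in range(0, hyper, base)]
    return base, ticks
```

The limitation mentioned in the text is visible here: every function released at a tick must complete within one base period, i.e. within the LCD.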

4. MODELLING EXTENSIONS FOR REAL-TIME CONTROL APPLICATIONS

Based on the above discussion it can be concluded that most current real-time computer science oriented modelling approaches do not adequately address issues pertinent to control applications. Limitations exist primarily with respect to behavioural aspects. Modelling extensions are sketched in the following.

4.1 Requirements on a model supporting design of distributed control applications

An important aim with the model is to be able to predict the consequences of applying a particular decomposition, partitioning and allocation strategy with respect to resource utilization and timing. The model therefore needs the following information:

Data-flow (structural view): The connections between elementary functions and data sizes must be specified.


Execution requirements (functional view): For all relevant elementary functions the resource requirements must be stated, or possible to derive, i.e. execution time (or e.g. the number and type of arithmetic operations required by algorithms) and memory requirements.

Control flow and timing requirements (behavioural view): The control-flow specifies when a function executes. Relevant timing on data- and control-flow must be specified, including periods, control delays, and tolerances for (synchronized) precisely time-triggered actions. Further, most typically a designer would like to specify optimization directives for sampling periods and control delays, e.g. to provide a minimized control delay.

Dependability. Dependability requirements are important but outside the scope of the paper.

With the above specifications it is possible to derive estimates of real-time processing and communication requirements (e.g. bytes/s and operations/s). With an architectural model and relevant parameters, these estimates can be translated into execution and communication times. Load balancing is easily assessed. It is also possible to estimate delays of data- and/or control-flow as a function of communication, execution and expected overheads. It should be noted that the derivation of execution time estimates is a problem for dimensioning, in particular early in design.
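Such estimates can be derived mechanically from the model. The following sketch computes per-node operations/s and inter-node bytes/s for a given allocation; the field names (`ops`, `period`, `sends`) are assumptions made for illustration, not part of any published model.

```python
def node_load(functions, allocation):
    """Estimate per-node processing load (operations/s) and inter-node
    traffic (bytes/s) from the structural/functional/behavioural views.

    functions: name -> {"ops": ops per invocation, "period": seconds,
                        "sends": [(dest_function, bytes)]}  (assumed fields)
    allocation: name -> node id
    """
    ops = {}
    traffic = 0.0
    for name, f in functions.items():
        node = allocation[name]
        ops[node] = ops.get(node, 0.0) + f["ops"] / f["period"]
        for dest, nbytes in f.get("sends", []):
            if allocation[dest] != node:   # only inter-node flows use the bus
                traffic += nbytes / f["period"]
    return ops, traffic
```

Comparing `ops` across candidate allocations gives a first assessment of load balancing, and `traffic` bounds the required communication bandwidth.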

4.2 Real-time behavioural model extensions

To capture essential behavioural requirements of control applications, two modelling ingredients are introduced, namely elementary function triggers and multirate interaction descriptions. The elementary function constitutes the basic entity used in the following modelling and design discussion.

Triggers for elementary functions. Starting triggers are defined for elementary functions. A trigger can be a time-trigger (referring to periodic triggering) or an event-trigger. Time-triggers can further be defined with a tolerance as introduced in section 2.2. Functions execute when triggered and can be associated with a computational model: read input channels; perform computations; write the computed output to the output channels. It is thus assumed that the read action is performed at the beginning of the function, and that the function ends by writing its result. A single elementary function with a time-trigger and a tolerance can be used to specify periodic sampling. A sequence of two elementary and precisely time-triggered functions can be used to specify a constant delay from sampling to actuation for a simple sampled data system. The second trigger is in this case defined with a constant delay relative to the first trigger, and the tolerance for the second trigger refers to this constant delay.
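The computational model just described (read at function start, compute, write at function end) can be sketched as below; the trigger representation, channel objects and numeric values are our own illustration, not the paper's notation:

```python
# Sketch of a time-triggered elementary function following the
# read-compute-write model. Classes and parameter values are hypothetical.
from dataclasses import dataclass

@dataclass
class TimeTrigger:
    period: float        # triggering period T (seconds)
    tolerance: float     # allowed deviation of the trigger instant
    offset: float = 0.0  # constant delay relative to a reference trigger

    def due_times(self, n):
        """First n nominal trigger instants."""
        return [self.offset + k * self.period for k in range(n)]

def execute(compute, inputs, outputs):
    """Read input channels, perform computations, write the result."""
    values = [channel.pop(0) for channel in inputs]  # read at start
    result = compute(values)                         # computations
    for channel in outputs:
        channel.append(result)                       # write at the end

# Periodic sampling: one function, time-triggered with a tolerance.
sampling = TimeTrigger(period=0.010, tolerance=0.0001)
# Constant sampling-to-actuation delay: second trigger offset by 4 ms,
# with the tolerance referring to that constant delay.
actuation = TimeTrigger(period=0.010, tolerance=0.0001, offset=0.004)
```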

Synchronous execution. Two or more periodic elementary functions (activities) execute synchronously when related period start points of the functions are always separated by a known constant, called phase, within a specified tolerance. Synchronization can be provided by a global clock or by explicit synchronization. For synchronous execution in multirate systems the phase refers to the least common multiple of periods, thus interpreting "related period start points". Consequently, in asynchronous execution the period start points of activities do not have a guaranteed relation, typically varying depending on the characteristics of the local clocks.
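The synchrony criterion above can be sketched as a simple check on start points; the periods, phase and drift figures are invented for illustration:

```python
# Sketch: checking synchronous execution of two periodic activities.
# "Related" period start points recur with the least common multiple of
# the periods; synchrony means they stay separated by a constant phase
# within a tolerance. Integer microsecond values are illustrative.
from math import lcm

def is_synchronous(starts_a, starts_b, tolerance):
    """Related start points always separated by a (near-)constant phase."""
    phases = [b - a for a, b in zip(starts_a, starts_b)]
    return max(phases) - min(phases) <= tolerance

T_a, T_b = 10_000, 25_000              # periods: 10 ms and 25 ms
hyper = lcm(T_a, T_b)                  # 50 ms; related starts recur here
# Related start points over four hyperperiods, activity B phased by 2 ms:
starts_a = [k * hyper for k in range(4)]
starts_b = [k * hyper + 2_000 for k in range(4)]
assert is_synchronous(starts_a, starts_b, tolerance=100)

# With independent (drifting) local clocks, the phase is not guaranteed:
drifting_b = [k * hyper + 2_000 + k * 400 for k in range(4)]
assert not is_synchronous(starts_a, drifting_b, tolerance=100)
```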

Explicitly separated data- and control-flow specifications. An activity specifies the control-flow over several elementary functions by use of a precedence graph, function triggers and, if required, explicit deadline requirements. A precedence relation between elementary functions A and B corresponds to B being event-triggered by the completion of A. For control applications it is common that control and data flow coincide. However, the separation of control and data-flow specifications is still preferred for clarity and to be able to appropriately model multirate interactions.

Multirate interactions. For multirate systems the delays of data-flows between activities with different rates need to be specified. Three modelling approaches, overlapping, extended separate and merged, are introduced for this purpose by Torngren (1995). The approaches are best illustrated using an example: a small multirate control system, see Fig. 1.

Fig. 1. Data-flow graph of the example system.

The system involves two "local" elementary functions, EF1 and EF2, corresponding to two motor servos, and one coordinating function, EF3, that performs feedback based synchronization. Correspondingly, three activities can be identified. It is assumed that EF1 and EF2 have period T and EF3 period T3. The interfacing functions performing sampling and actuation are denoted Si and Ai (for i = 1, 2). Si:1..2 (similarly for Ai:1..2 and EFi:1..2) is used to denote S1 and S2.

Control-flow and timing specifications using the extended separate and overlapping approaches are now illustrated for the coordinating controller. The graphical notation used is shown in Fig. 2, which is a specification of the local controllers. Circles are used to denote elementary functions (control algorithms, samplers and actuators). Fig. 2 illustrates the use of time-triggers (TT), trigger source (CLK), period, and for the second trigger also a delay (t) with a minimization directive (<) and a tolerance specification (tol). Note that the same delay is specified for both local activities and that the local controllers are specified to execute synchronously. Alternatively, if different delays are required, the controllers are specified separately. If the local controllers had been synthesized using continuous time design, the time-trigger for actuation could be removed and possibly replaced by an explicit deadline specification (bounding the control delay sufficiently below T). Horizontal arrows are used to denote precedence relations and vertical arrows time-triggers. In case a function has both a precedence relation and a time-trigger, the latter specifies when execution occurs and the former is interpreted as a specification: the preceding functions and data-flows should be performed prior to the time-trigger.

Fig. 2. Control-flow specification of the local functions.

The basic idea in the extended separate approach is to model activities separately and to introduce rate interfacing functions (RIFs) which ensure that the delays between activities are bounded. Computer science models that have considered the possibility of multirate interactions tend to fall in this category. Extended separate modelling is used in Fig. 3 for the coordinating function.

Fig. 3. Extended separate modelling of the coordinating function.

Boxes (in control-flow graphs) are used to denote RIFs. These functions retrieve or provide data from or to an elementary function that belongs to another activity (and which has another period). RIFs are used together with timing specifications for an activity to ensure that control delays are bounded. The input RIF is interpreted as the retrieval of data produced by functions S1 and S2 (Si:1..2). Since the activities in this approach are asynchronous (i.e. coordinating vs. local), each data item that is retrieved may have been created up to one sampling period, T (the period of the data source), ago. The output RIF is interpreted as the provision of data to functions EF1 and EF2 (EFi:1..2). There is a detection delay of up to T associated with the delivery. For the coordinating activity, the control-flow specification can thus be interpreted as follows: retrieve data from Si:1..2, execute EF3 and provide data to EFi:1..2 according to the given timing specifications. Due to asynchronous execution, a time-varying but bounded delay (< 2T in this example) is introduced. Consequently, the extended separate style of modelling is sufficient only when a time-varying delay is acceptable.
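The bounded, time-varying delay of the extended separate approach can be illustrated with a small worst-case calculation; treating the input-RIF data age and the output-RIF detection delay as the two additive contributions is our reading of the text, and the numbers are illustrative:

```python
# Sketch: worst-case sampling-to-use delay across two RIFs under
# asynchronous execution. Values are hypothetical.

def worst_case_delay(T_source, exec_time, T_consumer):
    """Data may be one source period old when read by the input RIF, and
    delivery via the output RIF may go undetected for one consumer period."""
    return T_source + exec_time + T_consumer

T = 10_000  # local period in microseconds
# With negligible execution time the bound is 2T, matching the "< 2T"
# bound stated for the example.
assert worst_case_delay(T_source=T, exec_time=0, T_consumer=T) == 2 * T
```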

When constant delays in multirate interactions are required (complying with sampled data models), the overlapping approach is suitable provided that period ratios are integers. A hierarchical view in terms of a sampling frequency hierarchy of the system is utilized.


The idea is to specify an activity at a particular level by including all elementary functions at the same and lower layers. As a consequence, an elementary function can be used in several activity specifications (hence overlapping). A function can because of this be associated with several sets of timing requirements which must be resolved during scheduling. Overlapping modelling is exemplified in Fig. 4.

Fig. 4. Overlapping modelling of the coordinating function.

When period ratios are rational numbers, merged modelling may need to be applied. In this approach, all related (communicating) activities are modelled together in one precedence graph as one merged activity. To do this, the precedence graph must be extended to the least common multiple of the periods with which the merged activity is periodic.
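Extending the merged precedence graph to the least common multiple of the periods can be sketched as follows, also for rational period ratios; representing periods as `Fraction`s and the LCM formula for fractions are our illustrative choices, not the paper's:

```python
# Sketch: hyperperiod (LCM of the periods) and per-function job counts
# for a merged activity, supporting rational period ratios.
# Period values are illustrative.
from fractions import Fraction
from math import gcd, lcm

def hyperperiod(periods):
    """LCM of Fraction periods: lcm of numerators / gcd of denominators."""
    return Fraction(lcm(*(p.numerator for p in periods)),
                    gcd(*(p.denominator for p in periods)))

T1, T2 = Fraction(1, 100), Fraction(1, 40)   # 10 ms and 25 ms
H = hyperperiod([T1, T2])                    # Fraction(1, 20): 50 ms
# Invocations of each function within the merged precedence graph:
jobs = {"EF1": H / T1, "EF2": H / T2}        # 5 and 2 per hyperperiod
```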

4.3 Design of distributed applications using the model

The introduced modelling concepts (function triggers for precisely time-triggered actions, synchronous execution and approaches for specifying multirate interactions) can be used together with the other, more traditional, concepts like precedence graphs and data- and control-flow specifications to describe the real-time behaviour of control applications. The model emphasises the fact that a control application can be described in terms of two top-level timing requirements: precisely time-triggered actions, and end-to-end deadlines. The latter arise for constant delay specifications in an activity that includes more than one precisely time-triggered action. Once the instants of precisely time-triggered actions have been fixed, phase relations can be interpreted as end-to-end deadlines. End-to-end deadlines also appear for conventional response type activities.

During design, the system model can be successively refined while the overall timing requirements are maintained. For example, during system decomposition more functions and data-flow paths are added. During evaluation of different allocation approaches, assessment can be done with regards to resulting communication and execution load.

The model is believed to be essential for execution strategy considerations, where "execution strategy" refers to a set of selected policies for triggering, synchronization and scheduling. These policies have a major impact on the timing of a system. When comparing a desired timing behaviour with constraints imposed by design situations, the system hardware architecture and resource management policies may be more or less fixed, thus constraining the allowable solution space. Therefore, in particular when subsystems are provided by different vendors, it is important to define suitable system design policies and interfaces, which typically are at higher layers than normally considered by communication standards and protocols. As an example, consider the implementation of the coordinating function in a distributed system. This means that the precedence graph of Fig. 4 will be split in e.g. two parts, yielding two processes. To achieve the constant delay specified in Fig. 4, synchronous execution is required. If this is not possible, various transformations to reduce the induced time-variations are possible, e.g. changing the latter trigger from time-triggering to an aperiodic server of some type. As discussed in section 3, the model can be used to assess the applicability of scheduling approaches. These issues are further discussed by Torngren (1995).

5. CONCLUSIONS

The paper has described essential modelling ingredients for distributed real-time control applications. The modelling concept has been applied in an industrial robot case study, Torngren (1995), and so far promises to be a useful approach.

6. REFERENCES

Audsley N.C., Burns A., Davis R.I., Tindell K.W., Wellings A.J. (1995). Fixed priority pre-emptive scheduling. J. Real-Time Systems, 8, 173-198. Kluwer.

Bucci G., Campanai M., Nesi P. (1995). Tools for specifying real-time systems. J. Real-Time Systems, 8, 117-172.

Burns A., Wellings A.J. (1994). HRT-HOOD: A Structured Design Method for Hard Real-Time Systems. J. of Real-Time Systems, Vol. 6, No. 1, January 1994.

Fohler G. (1994). Flexibility in statically scheduled hard real-time systems. PhD thesis, Technische Universität Wien, Institut für Technische Informatik, April 1994.

Gomaa H. (1989). A software design method for distributed real-time applications. J. of Systems and Software, 9, 81-94. Elsevier Science Publishing.

Halbwachs N. (1993). Synchronous programming of reactive systems. ISBN 0-7923-9311-2. Kluwer.

Hatley D.J., Pirbhai I.A. (1987). Strategies for real-time system specification. Dorset House Publ., New York.

Hoare C.A.R. (1978). Communicating Sequential Processes. Comm. ACM, Vol. 21, No. 8, pp. 666-677.

Jahanian F., Lee R., Mok A. (1988). Semantics of Modechart in Real Time Logic. Proc. of 21st Hawaii Int. Conf. on Systems Sciences, pp. 479-489, Jan. 1988.

Klein M.H., Lehoczky J.P., Rajkumar R. (1994). Rate-Monotonic Analysis for Real-Time Industrial Computing. IEEE Computer, January, pp. 24-32.

Kopetz H., Zainlinger R., Fohler G., Kantz H., Puschner P., Schütz W. (1991). The design of real-time systems: From specification to implementation and verification. IEE Software Eng. J., 6(3), pp. 72-82, May 1991.

Kopetz H., Kim K. (1990). Real-time temporal uncertainties in interactions among real-time objects. Proc. 9th IEEE Symposium on Reliable Distributed Systems, Huntsville, AL.

Lauwereins R., Engels M., Adé M., Peperstraete J.A. (1995). Grape-II: A system-level prototyping environment for DSP applications. IEEE Computer, Feb. 1995, pp. 35-43.

Lawson H. (1991). Parallel Processing in Industrial Real-Time Applications. Prentice Hall.

Motus L. and Rodd M.G. (1994). Timing analysis of real-time software. Pergamon (Elsevier Science).

Tindell K. and Clark J. (1994). Holistic schedulability analysis for distributed hard real-time systems. Microprocessing and Microprogramming, 40, 117-134.

Törngren M. (1995). Modelling and Design of Distributed Real-Time Control Systems. PhD thesis, Dept. of Machine Design, The Royal Institute of Technology, Sweden.

Wittenmark B., Nilsson J., Törngren M. (1995). Timing Problems in Real-time Control Systems: Problem Formulation. In Proc. of the American Control Conf.

Xu J., Parnas D.L. (1990). Scheduling processes with release times, deadlines, precedence and exclusion relations. IEEE Trans. on Software Engineering, 16, 360-369.

Åström K.J., Wittenmark B. (1990). Computer controlled systems: theory and design. 2nd edition, Prentice Hall. ISBN 0-13-172784-2.


Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

TEMPORAL VALIDATION OF
DISTRIBUTED COMPUTER CONTROL SYSTEMS

W. A. Halang*, M. Wannemacher* and J. J. Skubich†

* FernUniversität, Faculty of Electrical Engineering, D-58084 Hagen, Germany
† Institut National des Sciences Appliquées, Département d'Informatique,

F-69621 Villeurbanne, France

Abstract. An independent test facility is described, which simulates the environments of distributed computer control systems with special emphasis on the exact modeling of the prevailing time conditions. Its main application areas are software verification and safety licensing. Following the black-box approach, just by providing worst case oriented input patterns to integrated hardware-software systems and monitoring the corresponding outputs, the time behavior of such control systems can be determined precisely. High accuracy time information is provided by employing a hardware supported timer synchronized with legal time, viz., Universal Time Coordinated, as received via satellite from GPS, the global navigation and positioning system.

Keywords. Black box testing, software verification, simulation, safety licensing, high­precision timing, GPS.

1. INTRODUCTION

To keep pace with external processes, the acquisition and evaluation of process data, as well as appropriate reactions, must be carried through on time in synchrony with the events occurring in the external processes. Control systems usually take care of more than one variable, because most processes consist of a number of more or less independent subprocesses operating in parallel. For multi-variable control applications, this implies the necessity to acquire the state variables simultaneously, and even to send out the control signals together at one time. It is a difficult design issue to guarantee punctuality of a real-time system, because the response times depend on the computers' actual workload and the operating system overhead, and may differ from one external request to the next.

Sampled data and ordinary discrete time control theory usually assumes synchronous execution (e.g., sampling), zero jitter, and zero - or, at least, constant and known - delays between the measurement of controlled variables and the resulting control actions. This assumption is erroneous. Even a small delay can have a marked destabilizing influence on a control loop, and can cause the loop to lack robustness with respect to parameter variations or disturbance inputs. It is common practice in digital control to close one or more feedback loops through a computer which generates the control laws. To share its services among several control loops and, generally, among various other tasks, the computer intermittently becomes unavailable to handle the requests of an individual control loop immediately. Such irregularities in computer service result in deteriorated control quality and may even render control systems unstable. Distributed computer implementations of control systems can cause faulty behavior because of inadequate timing performance or due to data loss or corruption.

The problems encountered in establishing proper real-time behavior of distributed computer control systems are difficult and manifold and, hence, not satisfactorily solved yet. They are exacerbated by the need to prove that the timing behavior of integrated hardware/software systems meets given specifications. This holds, in particular, for systems working in safety related environments, which must be approved by licensing authorities in a fully independent manner.

Only in trivial cases, and when not using operating systems, may it be possible to predict time behavior by analyzing program code. The common testing practice of instrumenting programs with output statements to obtain information on intermediate program states is not applicable to timing analysis, because it is intrusive and, thus, alters the time behavior. Furthermore, tests in actual control environments are often either too expensive, too dangerous, or simply impossible for other reasons. Hence, such environments must be simulated, and the black-box approach has to be taken in examining computer control systems, i.e., they may not be modified under any circumstances and any internal details are disregarded. It is only to be observed whether the instants and values of generated outputs conform to the given requirements. Thus, maximum objectivity in validation and safety licensing is guaranteed.

In this paper, it is shown that an examined distri­buted control system's environment can effectively be simulated, and its operation in the time dimen­sion supervised and monitored using standard real­time computers extended by a number of hardware and software features. The software implemented in the examined system is checked by one or more test plans prescribing external stimuli and their respective timing. These test plans are totally independent on the examined software, and are executed on different machines.

2. FEATURES OF A TESTING DEVICE

In contrast to the more or less conventional white-box testing, which requires information about the testee's implementation, black-box testing allows tests solely on the basis of specifications; see DeMillo et al. (1987) for an introduction. Since

• the hardware and software for generating and evaluating test data is totally independent of the testee implementation, thus allowing their development by other people - even at the same time as the implementation -, and

• the tests start at the external (standard) interfaces settled in the specifications, suggesting the use of universal test environments,

the approach pursued here seems especially suitable for safety licensing.

Nevertheless, some of the "ingredients" necessary for this were applied for similar purposes relatively early. Thus, it is quite usual to use simulators for testing process control computer systems1 in some specialized industrial domains; for examples see Mohan and Geller (1983).

Furthermore, the use of monitoring systems to observe the behavior of computer systems was propagated in the scientific literature, especially for error detection in distributed systems (see McDowell and Helmbold (1989) for an overview). However, the interactivity of this search process makes it almost indispensable to provide mechanisms for re-runs (see, e.g., Tsai et al., 1990). Especially in the domain of real time systems, even pure monitoring is feasible (Schmid, 1994), particularly for verifying the observance of logical and, above all, timing conditions.

1 Leaving aside the already classical testing of (VLSI) hardware, which is inconceivable without simulation.

However, there are only few publications about integrated test environments with monitoring and simulation components, e.g. Schütz (1990) and Schoitsch et al. (1990). The mentioned contributions are tailored towards special architectures and mainly designed for white-box testing. A good survey of this "lost world" of the practice of industrial software testing in the real time domain - established by Glass (1980) and still existing - can be found in Schütz (1992).

According to the application conditions mentioned in the introduction, a testing device useful to support safety licensing of real time software needs to provide the following services:

• simulation of the environment, in which an examined control system is working, by generating input patterns oriented at typical and worst-case conditions,

• surveillance of whether the outputs produced by the examined system are correct and appear within given time limits,

• no interference with the examined system, particularly no lengthening of its program code and execution time,

• interfacing to the examined system with the same hardware connectors as in the actual operating environment,

• access to the legal time, i.e., UTC, for correct time stamping of events, correct timing of simulated external events, and to allow for putting distributedly acquired monitoring data into correct relation,

• providing easily readable, concise reports on the test results.

We have built a prototype of a simulation unit meeting these requirements. In the following chapters its hardware architecture will be detailed and the functions of the software tools implemented will be described.

3. HARDWARE OF THE TEST DEVICE

For an overview of the hardware structure of the unit we refer to Fig. 1.

Grouped along the internal I/O bus of a standard microprocessor, there is a video terminal to operate the unit, a printer for the reports, and a mass storage device. The latter may hold larger data sets to be provided as inputs to the examined system, besides the files needed internally. In accordance with the system's interconnection pattern to its environment, the simulation unit is equipped with process peripherals such as digital interfaces, analogue converters, and


Fig. 1. Hardware structure of the testing device (handling one node each of a distributed computer control system)

IEC 488 attachments as well as with various serial line interfaces. Since the number and type of these connections varies with different systems examined, there is the possibility to insert corresponding peripherals into I/O slots of the unit. All external lines are brought to appropriate plugs, to enable the easy connection to various systems.

In contrast to this, the further attachments mentioned now are always present. A bidirectional interface to the examined system's I/O data bus allows to simulate other peripheral devices. Their addresses are freely selectable. A number of registers is provided that work independently and in parallel, into which the microcomputer can store such addresses. Each register's output and the I/O address bus lines of the system on test are fed into corresponding hardware comparators which send signals to the high-precision timer when they detect matches of their inputs. The timer generates a time stamp for all detected signals and forwards these time stamps together with a corresponding number to the microprocessor. This feature can also be used to record precisely all times of access to existing I/O devices.

A similar feature is used to generate, as specified in test plans, exactly timed signals to be supplied to the interrupt input lines of the examined system. To this end the microcomputer transfers these interrupt times as so-called alarm jobs to the high-precision timer. The timer keeps track of all these alarm jobs and compares the alarm time of the earliest alarm job with its real-time clock. When this alarm time is reached, the timer raises an alarm signal. This signal is forwarded to all interrupt lines whose corresponding bits in the mask register are set. This interrupt generation feature with exact timing is indispensable for a software verification device, because the environment of an examined system must be modeled as closely as possible, and because time signals delayed by randomly distributed durations may lead to erroneous test results. The feature is not only employed to feed interrupt signals into an examined system, but also to initiate data inputs synchronized with a time pattern.
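The alarm job mechanism just described (hardware in the actual device) can be sketched in software; the priority-queue representation, the eight-line mask width and the job numbering are our assumptions for illustration:

```python
# Sketch of the alarm job handler: pending jobs are kept ordered, the
# earliest is compared with the clock, and a due alarm raises every
# interrupt line whose mask-register bit is set. Details are hypothetical.
import heapq

class AlarmTimer:
    LINES = 8                          # assumed number of interrupt lines

    def __init__(self, mask=0):
        self.jobs = []                 # min-heap of (alarm_time, job_no)
        self.mask = mask               # bit i set => line i enabled

    def add_alarm(self, alarm_time, job_no):
        heapq.heappush(self.jobs, (alarm_time, job_no))

    def tick(self, now):
        """Raise all alarms due at time 'now'; return (job_no, lines)."""
        raised = []
        while self.jobs and self.jobs[0][0] <= now:
            _, job_no = heapq.heappop(self.jobs)
            lines = [i for i in range(self.LINES) if self.mask & (1 << i)]
            raised.append((job_no, lines))
        return raised

timer = AlarmTimer(mask=0b00000101)    # lines 0 and 2 enabled
timer.add_alarm(150, job_no=1)
timer.add_alarm(100, job_no=2)
```

Here `tick(120)` would raise only job 2 (on lines 0 and 2); job 1 stays queued until the clock reaches 150.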


Fig. 2. Functional diagram of the high-precision timing and interrupt controller


4. SOFTWARE OF THE TEST DEVICE

The environment simulator is furnished with a real time multitasking operating system, and a programming environment containing, as the main component, a high level process control language, extended by a few features that support the special hardware. In this language, test plans are written which formally specify the requirements embedded systems have to fulfill. For each interrupt source to be simulated, a test plan contains a time schedule that generates periodic or randomly distributed interrupts according to worst-case conditions. The identifications and the occurrence times of these interrupts are recorded for later usage in performance reports. According to the temporal patterns of interrupts, the test plan processor writes appropriate data to the different outputs that are fed into an examined system. Analogously, the test plan specifies those events, coming from an examined system, that are to be awaited, and the reactions to be carried out upon their occurrence.
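A test plan's per-source time schedule could look like the following sketch; the source names, the worst-case minimum inter-arrival time and the seeding are invented for the example and are not the paper's process control language:

```python
# Sketch: generating periodic and randomly distributed interrupt
# schedules for a test plan, plus the recorded occurrence log.
import random

def periodic(period, start, end):
    """Trigger instants of a periodic interrupt source."""
    return list(range(start, end, period))

def random_bursty(min_gap, max_gap, start, end, seed=0):
    """Random instants, never closer than the worst-case minimum gap."""
    rng = random.Random(seed)
    t, instants = start, []
    while t < end:
        instants.append(t)
        t += rng.randint(min_gap, max_gap)
    return instants

plan = {
    "timer_irq":  periodic(period=10, start=0, end=100),
    "sensor_irq": random_bursty(min_gap=5, max_gap=20, start=0, end=100),
}
# Identifications and occurrence times, recorded for the final report:
log = sorted((t, source) for source, times in plan.items() for t in times)
```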

With the help of the I/O address comparison features, the inputs of the environment simulator are supervised to determine a tested system's reaction times, and whether it provides correct output values. Together with their sources and their arrival times these data are also recorded to be used in the final reports. Since only the external reactions to a given workload are considered, the simulation method also takes the operating system overhead into account. Through the possible interconnection of the I/O busses of the testing and the tested device, any kind of peripherals, including DMA units, can be simulated. To carry this through, a test plan only needs to specify a suitable data source or sink, and an appropriate transfer rate. Further useful functions that can be invoked in test plans are I/O bus traces and the logging of I/O port activities with time specifications, to be provided in appropriate buffer areas of the simulation unit.

5. HIGH-PRECISION TIMING

An important aspect of distributed real-time control is the observability of the environment, i.e., it must be possible to observe every event of interest generated by the external process and to determine their correct timing as well as temporal and causal order. This holds in particular for simultaneously occurring events in distributed systems, whose simultaneity will, of course, only be recognized at a later stage when they are (sequentially) processed, and is a prerequisite for correctly processing avalanches of asynchronous interrupts. To enable observability and to establish information consistency between internal real-time data bases and the environment, distributed real-time control systems - and corresponding testing devices - require a common time base to measure the absolute time of event occurrences.

To this end, we have extended the high-precision time processor already presented at the 1994 IFAC Workshop on Distributed Computer Control Systems (Wannemacher and Halang, 1994) by an interrupt receptacle (Fig. 2) for the time-stamping of all monitored process events. When a signal arrives at one of the newly provided input lines I1 ... In, a corresponding time-stamp is formed by the interrupt's arrival time and the encoded interrupt line number. The time-stamp descriptor is latched in the event FIFO and then transferred via the microcontroller to the processor.

As mentioned above, the time processor also handles alarm jobs to ensure correctly timed generation of outgoing stimuli. If an alarm job becomes due, an alarm signal is raised. Similar to the arrival of external interrupts, a time-stamp is formed and latched in the event FIFO together with the corresponding alarm job number.

This high-precision timer and interrupt controller consists of an application specific integrated circuit (ASIC) implementing an alarm job handler and a time-stamping unit for arriving interrupts, a GPS (global navigation and positioning system) receiver with attached antenna, and a microcontroller interfacing the GPS receiver with the ASIC and transferring all events (interrupts and alarms) to the processor.

The information obtained by the GPS receiver includes, among others, UTC time and date with a precision of nominally better than 100 ns, position, and GPS status. It is transmitted via a serial data interface to the microcontroller. At system set-up and every midnight the time information is transferred into the alarm clock. To this end, the information is first assembled in a corresponding register and then transferred to the alarm clock. Thus, the alarm clock keeps track with leap seconds. Our alarm clock prototype has a resolution of 100 µs. It is driven by a free-running oscillator and synchronized with UTC every second using the time mark signal provided by the GPS receiver, which has an accuracy of 1 µs. All components inside the dashed box in Fig. 2 are implemented as an application specific integrated circuit (ASIC) using the ES2 1.5 µm CMOS standard cell technology.
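The once-per-second synchronization of the free-running alarm clock with the GPS time mark can be sketched as follows; the counter representation and the drift figure are our illustrative assumptions, while the 100 µs resolution is taken from the text:

```python
# Sketch: a free-running counter clock disciplined by a once-per-second
# GPS time mark, so oscillator drift cannot accumulate beyond one
# second's worth of error. Numbers other than the 100 us resolution
# are hypothetical.

class DisciplinedClock:
    def __init__(self, resolution=100e-6):
        self.resolution = resolution  # 100 microsecond tick
        self.ticks = 0                # free-running oscillator counter
        self.utc_at_mark = 0.0        # UTC latched at the last time mark

    def advance(self, ticks):
        self.ticks += ticks           # driven by the local oscillator

    def time_mark(self, utc_second):
        """GPS one-second pulse: restart the counter from known UTC."""
        self.utc_at_mark = utc_second
        self.ticks = 0

    def now(self):
        return self.utc_at_mark + self.ticks * self.resolution

clk = DisciplinedClock()
clk.time_mark(1000.0)
clk.advance(9_995)                    # slightly slow oscillator: 0.9995 s
clk.time_mark(1001.0)                 # the mark cancels the drift
```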

6. CONCLUSION

A very important subject of future research activities will be the automatic generation of test plans from specifications, i.e.,

• (automatic) generation of test plans and test data for all standardizable testing options, and

• (automatic) checking of the test results

for non-trivial specifications, which requires formal and experimental verification to be joined. Naturally, the idea of creating appropriate test plans for any kind of testee and any test option to be investigated with the help of expert systems suggests itself. Such expert systems will be similar to those already used for the detection of hardware malfunctions.



Page 27: Distributed computer control systems 1995 (DCCS ¹95)

Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

MODELLING AND VERIFYING TIMING PROPERTIES IN DISTRIBUTED COMPUTER CONTROL SYSTEMS

A.G. Stothert* and I.M. MacLeod*†

*University of the Witwatersrand, Department of Electrical Engineering, Johannesburg, South Africa

†On leave at the University of Newcastle, Centre for Industrial Control Science, NSW 2308, Australia

Abstract: Temporal variables can be used to verify and enforce the timing properties required for the safe operation of physical processes and devices. A theory of non-continuous intervals is used to represent temporal variables. This results in a set of five axioms which form the foundation for an intuitive, deductive temporal logic. A simple simulated distributed real-time control example is used to demonstrate the application of the proposed temporal logic. Advantages of the approach are that it does not suffer from the problem of state explosion and does not require graphing techniques to maintain temporal relationships between variables.

Keywords: Temporal logic; real-time control; distributed computer control; consistency; safety; verification; process control

1. INTRODUCTION

Maintaining temporal consistency between a physical plant and a distributed computer control system requires both a temporal modelling technique (to verify consistency) and a method for generating temporal controllers (to enforce temporally consistent behaviour). Temporal variables which can be used to reason in, with and about time provide the foundation for ensuring temporal consistency.

Existing temporal logic frameworks have developed from predicate logic (Ostroff, 1989; Moszkowski, 1986) or natural language processing (Allen, 1983; Allen, 1984). Applying these frameworks to real-time control problems often presents difficulties. Temporal logic relies on the use of a state representation (Moszkowski, 1986; Seow and Devanathan, 1994) of the problem and state diagrams (Ostroff, 1989). While interval temporal logic (Allen, 1983; Allen, 1984) moves away from the state diagram approach, a graph-theoretic approach (Allen, 1983; van Beek, 1992) is adopted to aid in maintaining relationships between temporal variables. As is the case with state diagrams, the graphing technique can easily become computationally expensive.

A temporal variable consisting of a set of non-overlapping time regions (called periods) that form an interval is discussed. In a similar way to standard logic this representation allows temporal variables to be used to generate other temporal variables, the aim being to represent all temporal relationships as intervals rather than using a graphing technique. The deductive nature of the proposed logic and its ability to manipulate variables allows the development of formulae which can be used either to verify temporal relationships or to generate temporal variables that satisfy temporal relationships. For example, given the statement: X can only be true when Y holds, it is required to find the interval (possibly non-continuous) when X can be true.

2. REPRESENTING TIME

The time axis is defined as the set of real numbers plus the three special "points" −∞, +∞ and δ, i.e., T = ℝ ∪ {−∞, +∞, δ}. T is an ordered set such that −∞ < t < +∞, where t ∈ ℝ, and δ is defined as the next real number bigger than zero, i.e., δ = lim x→0⁺ x. It is necessary for implementation reasons, as discussed in section 3.

Using the time axis, a period P is defined as a pair of points

P = [x1, x2] : x1, x2 ∈ T, x1 ≤ x2

and an interval I is defined as a finite ordered set of periods,

I = [{P1, P2, ..., Pn} : n ∈ ℕ, Pn ∈ P and (Pn+1 > Pn)] ∪ ∅

where ℕ is the set of all natural numbers.

Notation. For A ∈ I the nth period is referred to as A.Pn, while A.Plast refers to the last period

Page 28: Distributed computer control systems 1995 (DCCS ¹95)

Fig. 1. A temporal variable

of A. The coordinates of period n are referenced as A.Pn.x1 and A.Pn.x2.

An example of an interval is shown in Fig. 1. The interpretation of a temporal variable A is

∀t, t ∈ T, A ∈ I, ∃P ∈ A : (A.P.x1 ≤ t ≤ A.P.x2)   (1)

that is, A is true when t lies between the end points of one of the periods of A. The temporal variable in Fig. 1 is represented as an interval containing four periods; each period (−∞ to 4, 6 to 8, 15 to 17 and 23 to +∞) defines where the temporal variable is true, and the interval defines the truth value (true or false) of the temporal variable across all time according to (1).
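A temporal variable and rule (1) can be encoded directly; the sketch below is our illustration, not the authors' implementation, and represents an interval as a sorted list of (x1, x2) pairs:

```python
# Sketch (illustrative, not from the paper): a temporal variable as an
# ordered list of periods. INF stands in for the +/- infinity points of T.
INF = float("inf")

# The interval of Fig. 1: true on (-inf, 4], [6, 8], [15, 17] and [23, +inf).
A = [(-INF, 4), (6, 8), (15, 17), (23, INF)]

def holds(interval, t):
    """Truth value of a temporal variable at time t, per definition (1):
    true iff t lies inside some period [x1, x2] of the interval."""
    return any(x1 <= t <= x2 for (x1, x2) in interval)
```

For example, `holds(A, 7)` is true because 7 lies inside the period [6, 8], while `holds(A, 5)` is false.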

2.1. Axioms

The logic of temporal variables is developed from five axioms, two of which develop a standard Boolean logic (Millman and Grabel, 1988) for temporal variables and three of which introduce temporal logic into the framework, based on the logic of Manna and Pnueli (Ostroff, 1989, pp. 155-171).

¬A: ∀t, t ∈ T, A ∈ I, ∄P ∈ A : P.x1 − δ < t < P.x2 + δ   (2)

Equation (2) is interpreted as: not A is true at t when there does not exist a period of A which has a start value less than t and an end value greater than t, where t ranges across all time.

For intervals A and B with periods M and N, respectively,

A ∧ B: ∀t, t ∈ T, A, B ∈ I, M ∈ A, N ∈ B, ∃M, ∃N :
(M.x1 < N.x2) and (M.x2 > N.x1) and [max(M.x1, N.x1) ≤ t ≤ min(M.x2, N.x2)]   (3)

Axioms (2) and (3) are the temporal equivalents of



the standard not and and operators, respectively. From these the standard logic system can be derived.
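As a sketch (illustrative, not the paper's code), axiom (3) maps directly onto pairwise overlap of periods in the list-of-periods representation:

```python
# Sketch of axiom (3), the temporal AND: each temporal variable is a
# sorted list of (x1, x2) periods (names are ours, not the paper's).
def t_and(A, B):
    """A period of A-and-B exists wherever a period M of A overlaps a
    period N of B, covering [max(M.x1, N.x1), min(M.x2, N.x2)]."""
    return sorted((max(m1, n1), min(m2, n2))
                  for (m1, m2) in A for (n1, n2) in B
                  if m1 < n2 and m2 > n1)  # the overlap test of (3)
```

For instance, `t_and([(0, 5), (8, 12)], [(3, 10)])` yields the two overlap periods `[(3, 5), (8, 10)]`.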

The following axioms extend the temporal system developed thus far to include temporal operators.

□A: ∀t, t ∈ T, A ∈ I, P ∈ A : (Plast.x1 ≤ t) and (Plast.x2 = +∞)   (4)

!A: ∀t, t ∈ T, A ∈ I, P ∈ A : (t ≥ P1.x1)   (5)

A U B: ∀t, t ∈ T, A, B ∈ I, ∃P, P ∈ A, ∃Q, Q ∈ B :
(P.x2 ≥ Q.x1) and (t ≥ P.x1) and (t ≤ Q.x2) and
[∄R, R ∈ B, R ≠ Q : (R.x2 ≥ P.x1) and (R.x2 < Q.x1) and (R.x1 ≤ P.x1)]   (6)

□A is read as "henceforth A", !A as the event "start A", and A U B as "A is true until B is true".
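Axioms (4) and (5) are straightforward to realise on the list-of-periods representation; the following sketch is illustrative (function names are ours, not the paper's):

```python
INF = float("inf")

def henceforth(A):
    """Sketch of axiom (4): henceforth-A is non-empty only when A's last
    period extends to +inf; it is then true from that period's start on."""
    if A and A[-1][1] == INF:
        return [(A[-1][0], INF)]
    return []

def start(A):
    """Sketch of axiom (5): !A ("start A") is true from the start of
    A's first period onward."""
    if A:
        return [(A[0][0], INF)]
    return []
```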

2.2. Derived Formulae

The axioms of section 2.1 are now used to construct some useful basic formulae. The formulae, which are used to deduce new temporal relationships from existing temporal variables, demonstrate how the axioms can be combined to construct more complicated temporal expressions.

The formulae are:

A ∨ B = ¬(¬A ∧ ¬B). Either A or B is true.

A ⊕ B = (A ∧ ¬B) ∨ (¬A ∧ B). Either only A or only B is true (exclusive or).

◇A = ¬□¬A. It is not true that henceforth A is false; that is, eventually A is true.

↓A = !(□¬A). The start of "henceforth A is false", i.e., the event which stops A.


Fig. 2. Illustration of temporal logic formulae (traces: A, B, B until A, weak A before B, weak B after A, A before B, B overlap A)

A β B = !(A ∧ ¬B ∧ ◇B). The sub-formula A ∧ ¬B ∧ ◇B is used to isolate the region where B is false, A is true and eventually B will be true, i.e., A is "before" B. From this we need to decide on the interval where A before B is true. For example, consider a pump which can only be switched on if A is true before B. When is it valid to start the pump? Surely it can be switched on as soon as A becomes true and we know that B will be true in the future; hence !(A ∧ ¬B ∧ ◇B). This definition of before is loose in the sense that we only require A to be true once. A tighter definition would require that, after B goes false, A must again be true before B is true.

A B B = ↓B ∧ [[(A ∧ ¬B ∧ ◇B) ∧ ◇(A ∧ ¬B ∧ ◇B) U B] ∨ B U (A ∧ ¬B ∧ ◇B)]

This definition of before is more restrictive: it results in an interval that is true when either A is true or will be true, and B is true after A is true. Notice that the sub-formula A ∧ ¬B ∧ ◇B plays an important role in deciding the final result; this sub-formula can be thought of as a root for before.

A α B = ![(A ∧ □¬B) ∧ !B]. True when A is true and B is false from then on, but B was true at some time. This is a weak version of A after B; a more restrictive version must check every occasion that B is true to see if A is true after B and before B is true again.

A ○ B = !(A B B ∧ A U B) ∧ ↓(A U B) ∧ A



A is true before and until B, and A and B are true; that is, A overlaps B. Overlap can be thought of as "leads into".

@A = ↓A. An interval which is true at least until A is true.

@̄A = ¬(!A). An interval which is true before A is true.

Plots of some derived temporal logic formulae are shown in Fig. 2. The axioms and formulae presented above provide a mechanism for deducing the relationships between temporal variables and a mechanism for combining temporal variables to generate further temporal variables. This provides a foundation for reasoning with and about time.
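To illustrate how derived formulae reduce to the axioms, the following hedged sketch builds ∨ and ◇ from not, and and henceforth, assuming the list-of-periods encoding (all names are ours, not the authors'):

```python
# Illustrative sketch: A or B = not(not A and not B), and
# eventually A = not(henceforth(not A)), over sorted (x1, x2) period lists.
INF = float("inf")
DELTA = 1e-5  # the paper's delta, section 3

def t_not(A):
    # complement by endpoint shifting; degenerate pairs are dropped
    starts = [-INF] + [x2 + DELTA for (_, x2) in A]
    ends = [x1 - DELTA for (x1, _) in A] + [INF]
    return [(s, e) for (s, e) in zip(starts, ends) if s < e]

def t_and(A, B):
    # axiom (3): intersect every overlapping pair of periods
    return sorted((max(m1, n1), min(m2, n2))
                  for (m1, m2) in A for (n1, n2) in B
                  if m1 < n2 and m2 > n1)

def henceforth(A):
    # axiom (4): non-empty only when A's last period reaches +inf
    return [(A[-1][0], INF)] if A and A[-1][1] == INF else []

def t_or(A, B):
    """A or B, by De Morgan over the temporal not/and."""
    return t_not(t_and(t_not(A), t_not(B)))

def eventually(A):
    """Diamond A = not henceforth not A."""
    return t_not(henceforth(t_not(A)))
```

With this encoding, `t_or([(0, 5)], [(3, 10)])` merges the two overlapping periods into one period from 0 to 10 (up to the δ shifts), and `eventually([(3, 4)])` is true on (−∞, 4].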

3. IMPLEMENTATION DETAILS

To facilitate manipulation and verification, the temporal logic described in section 2 was implemented on a personal computer. Two-dimensional matrices were used to represent intervals, and special matrix elements were used for −∞ and +∞. In the implementation δ, which is used to calculate ¬A in such a way as to avoid "divided instant" problems (Allen, 1983; Jixin and Knight, 1994), was set to 10⁻⁵.

The temporal axioms ¬A, A ∧ B, □A, !A and A U B were coded from first principles. Only the implementation of ¬A differed from the axiom representation. The approach was to shift the period start and end points so that end points become start points and vice versa. The point −∞ or +∞ was


then added as required. All other temporal formulae were constructed using the axioms.
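The endpoint-shifting implementation of the complement can be sketched as follows (an illustration under the list-of-periods representation, not the original matrix-based code):

```python
INF = float("inf")
DELTA = 1e-5  # the delta of section 3, avoiding "divided instant" problems

def t_not(A):
    """Complement of an interval (a sorted list of (x1, x2) periods) by
    endpoint shifting: end points become start points (x2 + DELTA) and
    start points become end points (x1 - DELTA), with -inf / +inf added
    as required. Degenerate pairs appear where A already reaches an
    infinity and are dropped."""
    starts = [-INF] + [x2 + DELTA for (_, x2) in A]  # ends -> starts
    ends = [x1 - DELTA for (x1, _) in A] + [INF]     # starts -> ends
    return [(s, e) for (s, e) in zip(starts, ends) if s < e]
```

For example, the complement of {(−∞, 4], [6, 8]} is {[4+δ, 6−δ], [8+δ, +∞)}.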

4. DISTRIBUTED REAL-TIME CONTROL EXAMPLE

The distributed process to be controlled is shown in Fig. 3. The cooler is restricted in that it can cool the feed from only one mixer at a time. Other restrictions regarding the start-up procedure for the mixers and cooler are introduced later. Three processing nodes are used to monitor and control the plant. A processor monitors each mixer and its inlet valve. The third processor controls the cooler and its inlet and outlet valves. Communication between the processors is limited to message passing.

A two-tiered design approach is followed. First the constraints on operation are specified, then a controller is designed. The controller is deemed correct when it satisfies the constraint requirements. This approach distinguishes between two

Fig. 3. Process plant: Feed1 and Feed2 enter Mixer1 and Mixer2 through inlet valves Valve1 and Valve2; the mixers feed the cooler through Valve3 and Valve4, and the cooler delivers Prod1 and Prod2 through outlet valves Valve5 and Valve6

uses of the temporal logic formulae. Temporal logic can be used to find intervals where a property holds, or to generate an interval which satisfies a given property with another interval. Note that the two uses of temporal logic relate to being able to reason about time (first type) and with time (second type). The differences in temporal logic use are highlighted by the process plant example.

4.1. The Constraint System

Each node in the distributed system implements a diagnostic system that uses temporal logic to determine whether the equipment being controlled by the node satisfies temporal operating constraints. Each constraint system takes as input


the temporal variables that describe when the controlled equipment is being operated.

The cooler is the more complex of the plant components in terms of its operating requirements. The cooler can only be switched on if either of its inlet valves is already on, and cannot be on when both inlet valves are on. Also, the cooler can only be on when one of the outlet valves is on. These requirements are implemented in temporal logic on the cooler processor node as follows

temp1 = (Valve3 U Cooler) ∨ (Valve4 U Cooler)
temp2 = (Valve3 ∧ Cooler) ⊕ (Valve4 ∧ Cooler)
temp3 = (Valve3 ∧ Valve5) ⊕ (Valve4 ∧ Valve6)
Csafe = temp1 ∧ temp2 ∧ temp3
Cerror = ¬Csafe ∧ Cooler
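As a hedged illustration of the cooler rules, the sketch below checks them instant by instant with plain booleans (our simplification, not the paper's interval computation; the "until" start-up condition of temp1 is temporal and is therefore omitted):

```python
def cooler_safe(valve3, valve4, valve5, valve6, cooler):
    """Instantaneous check of the cooler operating rules (illustrative):
    while the cooler runs, exactly one inlet valve must be open, and the
    outlet valve matching the open inlet must be open."""
    if not cooler:
        return True  # the rules only constrain the cooler while it runs
    one_inlet = valve3 != valve4                                  # cf. temp2
    matched_outlet = (valve3 and valve5) != (valve4 and valve6)   # cf. temp3
    return one_inlet and matched_outlet
```

For instance, running the cooler with Valve3 and Valve5 open is safe, but running it with both inlet valves open, or with an inlet open and its outlet shut, is flagged as unsafe.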

Each mixer processor node must ensure that the mixers are only switched on after both the relevant feed and inlet valve have been on. Additionally, the mixers can only be on while their feed and inlet valves are off and before the outlet valve is switched on. A message passing protocol must be used to communicate the outlet valve temporal variable from the cooler processor to the mixer processor.

temp1 = (Feed β Mixer) ∧ (Valve β Mixer)
temp2 = Mixer ∧ (¬Feed ∧ ¬Valve)
temp3 = Mixer ∧ ¬OutletValve ∧ ◇OutletValve
Msafe = temp1 ∧ temp2 ∧ temp3
Merror = ¬Msafe ∧ Mixer

A sample output from the constrainer is shown in Fig. 4. The plot immediately shows at which times it is safe to operate the cooler and mixers and when an error in operation would occur. The plot was generated by using the controller output but neglecting to open Valve4 while the cooler was on and failing to shut Valve2 while Mixer2 was on.

4.2. The Control System

In addition to the constraint system implemented by each processing node, each node also uses temporal logic to control plant equipment. The control system takes as input the intervals where

Page 31: Distributed computer control systems 1995 (DCCS ¹95)

Fig. 4. Output from constraint system (traces: Csafe, Cerror, M1safe, M1error, M2safe, M2error)

Prod1 and Prod2 are required and outputs the (future) intervals where plant equipment must be turned on. The controller is designed to satisfy the constrainer developed in section 4.1. Controller design is an iterative and intuitive process using the constrainer to guide the choice of the controller equations. For input intervals Prod1 = [8 11] and Prod2 = [3 6] (time zero is the present) a sample plot of the intervals generated by the controller is shown in Fig. 5.

Consider the cooler processing node. The initial control choice is to start the cooler and downstream valves directly from the control inputs, making sure that the cooler is not on when both products are required:

Cooler = Prod1 ⊕ Prod2
Valve5 = Prod1 ∧ Cooler
Valve6 = Prod2 ∧ Cooler

The control of the inlet valves to the cooler is more complex, involving intermediate steps. The aim is to open the valves before the cooler starts and to ensure that only one of the valves is open at a time. Making sure that only one valve is on at a time is the more difficult design problem.

Valve3 = @(Cooler ∧ Prod1) ∧ @(Cooler ∧ Prod2)
Valve4 = @(Cooler ∧ Prod2) ∧ @(Cooler ∧ Prod1)
temp1 = [(Cooler ∧ Prod1) B (Cooler ∧ Prod2)] ∨ [(Cooler ∧ Prod2) B (Cooler ∧ Prod1)]
temp2 = [(Cooler ∧ Prod2) B (Cooler ∧ Prod1)] ∨ [(Cooler ∧ Prod1) B (Cooler ∧ Prod2)]
temp3 = (temp1 ∧ Valve3) ∨ (temp2 ∧ Valve3 ∧ Valve4)
temp4 = (temp2 ∧ Valve4) ∨ (temp1 ∧ Valve4 ∧ Valve3)
Valve3 = temp3 ∧ [4 +∞]
Valve4 = temp4 ∧ [4 +∞]

The intermediate values temp3 and temp4 represent IF statements that choose which valve to turn off while the other valve is on. The choice is made based on which valve is required first: temp1 is true when Prod1 is true before Prod2, and temp2 is true when Prod2 is true before Prod1. The final two lines are needed since the intervals generated by temp3 and temp4 could start at −∞.

Once the values for Valve3 and Valve4 are known they are passed via messages to the processors that control the mixers. The mixers themselves are readily controlled via temporal logic.

Mixer1 = (@Valve3) ∧ [3 +∞]
Mixer2 = (@Valve4) ∧ [3 +∞]
Valve1 = (@Mixer1) ∧ [0 +∞]
Valve2 = (@Mixer2) ∧ [0 +∞]
Feed1 = Valve1
Feed2 = Valve2

The distributed controller and constraint system described rely on message passing between the



Fig. 5. Output from control system (traces: Cooler, Valve3, Valve4, Mixer1, Mixer2, Valve1, Valve2)

cooler and mixer processing nodes. Message communication delays could result in the outlet valve temporal variables being received by the mixer processors after the valve had been turned on, meaning that the mixer controllers would not have time to turn the mixers on before the outlet valve. The constraint system would not be affected by this: it would still identify a violation. However, the control system would have to be altered to take maximum communication delays into account when determining the start time for the outlet valves. Loss of communication messages is more serious: neither the constraint system nor the control system could operate properly. For these reasons the message passing protocol used must guarantee delivery of messages within a known maximum delay time.
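The delay compensation suggested above can be sketched minimally (illustrative names; the shift would use the protocol's guaranteed maximum delay):

```python
def compensate(interval, max_delay):
    """Hedged sketch: advance the start point of each planned switch-on
    period by the worst-case message delay, so the receiving node still
    has the information in time to act."""
    return [(x1 - max_delay, x2) for (x1, x2) in interval]
```

For example, a valve planned to open over [10, 20] with a maximum delay of 2 time units would be communicated as [8, 20].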

5. CONCLUSIONS

A logic of temporal variables has been presented. It allows the representation and manipulation of temporal variables and supports reasoning with and about time, which is essential for verifying and enforcing temporally-consistent behaviour in control systems. However, mechanisms for reasoning in time have not been presented. This requires an extension to include the time point η (now). Including the time point now will complete the temporal system and provide mechanisms to handle causality, deadlines and problems like delayed communication messages.

Application of the temporal logic to a simple distributed computer control example demonstrates the intuitive feel of the framework and highlights the advantages over other techniques, which require state- and/or graph-theoretic representations.

6. ACKNOWLEDGEMENTS

The support of the South African Foundation for Research Development, the University of the Witwatersrand and the Department of Electrical Engineering and Computer Science at the University of Newcastle is gratefully acknowledged.

7. REFERENCES

Allen, J.F. (1984). Towards a general theory of action and time. Artificial Intelligence, 23(2), 123-154.

Allen, J.F. (1983). Maintaining knowledge about temporal intervals. Communications of the ACM, 26(11), 832-843.

Jixin, M. and B. Knight (1994). A general temporal theory. The Computer Journal, 37(2), 114-123.

Millman, J. and A. Grabel (1988). Microelectronics. McGraw-Hill, New York, pp. 209-219.

Moszkowski, B. (1986). Executing Temporal Logic Programs. Cambridge University Press.

Ostroff, J.S. (1989). Temporal Logic for Real-Time Systems. Wiley, New York.

Seow, K.T. and R. Devanathan (1994). A temporal framework for assembly sequence representation and analysis. IEEE Transactions on Robotics and Automation, 10(2), 220-229.

van Beek, P. (1992). Reasoning about qualitative temporal information. Artificial Intelligence, 58(1-3), 297-326.


Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

ON THE DUALITY BETWEEN

EVENT-DRIVEN AND TIME-DRIVEN MODELS

Francesco Tisato, Flavio DePaoli

Dipartimento di Scienze dell'Informazione, Università degli Studi di Milano

Via Comelico 39, I-20135 Milan, Italy. {tisato, depaoli}@dsi.unimi.it

Abstract: The event-driven and time-driven models can be viewed as dual, in the sense that each of them can be sufficient to model a system. In real situations, however, they must co-exist to meet contrasting application requirements. The paper introduces a unifying model based on the separation among atomic actions performed by reactive agents, control performed by a time-driven control machine, and planning performed by an event-driven planning machine.

Keywords: Real-time, Programming languages, Agents, Events, Deterministic behaviour.

1. INTRODUCTION

The final panel at DCCS'94 focused on the comparison between the time-driven and control-driven approaches to the design of RT systems. That discussion stimulated the writing of this paper, which tries to highlight the duality of the two models and to show how they can be integrated into a uniform architectural model.

The event-driven model focuses on the causal relationship among external events and actions performed by a system, i.e., on "why" something happens, whereas the time-driven model focuses on the timing of actions, i.e., on "when" something happens. In the former case, actions are executed "as soon as possible" on event arrival, and time can be handled by timing signals which are treated as external asynchronous events. Conversely, in the latter case actions are executed "at the right time" according to a schedule, and external events can be handled by polling mechanisms.

The choice of one of the two models depends on the application domain. In complex systems the two models should coexist to fulfil contrasting requirements [Stankovic 91]. Accordingly, a programming language should allow the designer to


choose and intermix abstractions to capture in the most expressive way the key concepts related to a specific problem or subproblem. This is what happens in natural languages, which allow a speaker to intermix the two models according to the context.

Existing programming languages and system architectures are biased towards either of the two models. Therefore programming paradigms based on different models seem to be antithetical. The basic conjecture of this paper is that this antithesis arises from historical reasons, implementation issues and a lack of separation of concerns, whereas a well-thought-out paradigm can unify the two models in terms of both expressive power and implementation.

The unifying model we propose in this paper is based on two major concepts: the separation between control and actions, and the basic role of the time-driven control model.

The separation between control and actions can be achieved by introducing reactive agents and control machines. A reactive agent is a system component which encapsulates a status and a set of actions which can be triggered by commands. Agents are not aware of the control model under which commands are dispatched. Control machines are in charge of


dispatching commands by interpreting control clauses which may be either temporal or reactive, according to the control model each machine supports.

The proposed control model unifies the time-driven and event-driven models by choosing the former as primitive. The idea is that a basic time-driven control machine dispatches commands according to plans defined in a timeline, whereas asynchronous events trigger planning activities. Both the pure time-driven control model and the pure event-driven control model are included as particular cases. In the former, asynchronous events are not considered and planning activities are performed once and for all. In the latter, there are no pre-defined plans and the planning activity associated with an event simply schedules a command to be executed as soon as possible.
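A minimal sketch of such a unified control machine (class and method names are ours, not the paper's): a time-driven loop dispatches planned commands, while asynchronous events only run planning code that may extend the plan.

```python
import heapq

class ControlMachine:
    """Illustrative sketch of the unified model: a time-driven core
    dispatches commands from a timeline plan; asynchronous events
    trigger planning activities rather than direct execution."""

    def __init__(self):
        self.plan = []       # heap of (time, command) entries
        self.handlers = {}   # event name -> planning activity
        self.now = 0
        self.log = []        # dispatched (time, command) pairs

    def schedule(self, t, command):
        heapq.heappush(self.plan, (t, command))

    def on_event(self, event, planner):
        self.handlers[event] = planner

    def post(self, event):
        # an asynchronous event runs its planning activity, which may
        # add commands to the plan (e.g. at self.now, i.e. "as soon
        # as possible")
        self.handlers[event](self)

    def run_until(self, t_end):
        # the time-driven core: dispatch commands at their planned times
        while self.plan and self.plan[0][0] <= t_end:
            self.now, command = heapq.heappop(self.plan)
            self.log.append((self.now, command))
```

A pure time-driven system fixes the plan up front; a pure event-driven system registers planners that schedule each command at the current time, i.e., as soon as possible.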

The next section presents an informal discussion, based on a simple example expressed in a natural language, of the duality of the time-driven and event-driven models. We will show that both of them are sufficient to tell a story (i.e., to describe a sequence of actions) and that a story expressed in one model can be translated into the other one. The example also shows that intermixing the two models is useful to achieve a high degree of expressiveness. The result of the discussion is a list of features a programming paradigm should exhibit. Section 3 will discuss conventional programming paradigms to show that none of them provides all the desirable features. Sections 4 and 5 will present agents and control machines respectively. Section 6 will discuss the unified model we propose and Section 7 will draw some conclusions.

2. THE STORY OF JOHN AND MARY

We start our discussion of the time-driven and event-driven control models with a simple story that allows us to understand and compare the models.

   control clause
   time               event                          action
1  At 10.00 am                                       John goes to the station
2                     When John enters the station   he sits in the bar
3                     When the train is announced    he goes to platform 7
4                     When Mary leaves the train     he kisses Mary
5  Ten minutes later                                 he asks her to marry him

Fig. 1. A sample story.

A story describes a collection of actions performed by agents. The sequencing of the actions is specified by control clauses. A control clause can be either a time specification or an event specification. The control clause at line 1 of Figure 1 (shortly, clause 1) has an absolute time reference; clause 2 refers to an internal event (John is the subject of the story); clauses 3 and 4 refer to external asynchronous events; clause 5 refers to both time and events. The last clause is an example of intermixed control: it specifies a relative time interval (ten minutes) which starts after an event occurred (the completion of action 4).

The proposed story is an example of intermixed usage of different models of control. Since we have stated the duality of event-driven and time-driven models, we should be able to tell the same story using one control model only. Figure 2 shows how this can be achieved.

The result is that we can tell the story, but losing some degree of expressiveness in each version. Since some clauses are intrinsically related to events or to time, it is cumbersome to express them in the dual model.

For instance, the time-clause 1 states that action 1 should take place at 10.00 am. The corresponding event-clause introduces unnecessary details (it is not relevant to specify how John knows that it is ten o'clock), and it does not place the action (and consequently the whole story) into an "absolute" reference time. On the other side, the event-clause 4 states that action 4 takes place when Mary leaves the train. The specification of that clause in terms of absolute time could lead to embarrassing situations (who does John kiss if Mary's train is late?). The specification of clause 5 in terms of events is cumbersome and is left to the reader (for instance, John could start an alarm clock after kissing Mary, which is not a romantic behaviour).

   control clause
   time          event                          action
1  At 10.00 am   When the alarm clock rings     John goes to the station
2  At 10.32 am   When he enters the station     he sits in the bar
3  At 10.45 am   When the train is announced    he goes to platform 7
4  At 10.53 am   When Mary leaves the train     he kisses Mary
5  At 11.03 am   ?                              he asks her to marry him

Fig. 2. Dual versions of the story.


Page 35: Distributed computer control systems 1995 (DCCS ¹95)

This elementary example allows us to identify some characteristic features. First, an agent (John in our case) is an autonomous entity that performs some atomic actions. The detailed specification of how each action is performed is embedded inside the agent. Second, the control, i.e., the specification of when and why actions are performed, is external to agents. It is modelled by plans which can be specified either in terms of time or in terms of events. Third, the two control models are dual, in the sense that both of them are sufficient to tell a story and that a story expressed with a model can be translated into a story expressed with the other one. Fourth, natural languages allow the speakers to intermix the control models according to the focus of the attention and to the expressiveness of the discourse.

3. PROGRAMMING PARADIGMS

As pointed out above, a programming paradigm should provide abstractions to allow the designer to define the behaviour of a system in a natural and effective way. Therefore we should look for programming paradigms which resemble the features we listed in the previous section. A brief review of existing programming paradigms shows that they lack one or more of the desirable features.

The aim of sequential programming languages is the definition of algorithms, i.e., sequences of elementary actions. They do not separate actions from control, which is spread throughout the code. Though procedures, if properly used, help in defining abstract actions, they were invented primarily to improve program modularity, not to separate actions from control. Algorithmic languages are suited for defining actions at the programming-in-the-small level, not for specifying the behaviour of a complex system modelled as a collection of agents interacting among themselves and with the environment.

Concurrent paradigms capture the idea that several sequential algorithms, i.e., processes, can be executed concurrently. Concurrent languages define synchronisation primitives which are invoked by processes to specify partial orderings of actions. As a consequence, the definition of actions and of their logical ordering is again spread throughout the code. Moreover, the actual ordering and timing depend on invocations of synchronisation primitives performed by each process, on the language run-time support and on the operating system kernel. It means that control and timing cannot be fully expressed by a well-distinguished set of language constructs. This characteristic makes concurrent languages unsuitable for real-time systems.

Both sequential and concurrent paradigms are basically algorithm-oriented, i.e., they focus on the algorithm which defines the flow of control and drives the interactions with the environment. Despite recent proposals [Andre 94], concurrent algorithms are usually viewed as asynchronous. Neither time nor external events are represented by language constructs.


The event-driven paradigm, which relies on the object-oriented model, focuses on external events which control the execution of actions performed by agents (or objects, or actors, or the like). It supports the separation between actions performed by agents and control driven by the environment. However, the concept of time is still missing, and the actual sequencing of actions is still left to the run-time support. In real-time contexts, event-driven languages can be exploited for systems where sporadic actions must be executed as soon as possible.

The time-driven paradigm is algorithm-oriented, but it includes the concept of time to specify when actions should be executed. This paradigm, though widely used in the process control area, does not capture the concept of event. Usually, reactions to unpredictable external stimuli are implemented by means of polling mechanisms. Time-driven paradigms are especially suitable for defining cyclic tasks which must be executed under an environment-independent timing.

Paradigms widely used in the Expert Systems area support the dynamic planning of sequences of actions. However, the definition of the agenda usually depends on the internal status of the system, and expresses neither the response to asynchronous events nor the explicit management of time.

4. REACTIVE AGENTS

The separation between actions and control can be achieved by introducing the concept of reactive agents. A reactive agent is a system component encapsulating a status and a set of actions. The execution of the actions is controlled by the environment by means of commands exported by the agent. The behaviour of the agent, i.e., the way it reacts to commands, can be represented by a finite state automaton.

Agents are not aware of "when" and "why" they perform an action; therefore they are not sensitive to the control model, which can be either event-driven or time-driven. To leave the control outside, an agent cannot include any control mechanism. This means that the execution of an action is atomic, i.e., it cannot be logically suspended and interleaved with others, and that its effects are observable only after the completion of the action. In terms of programming paradigm, this suggests that the "programming-in-the-small" language used for defining actions cannot include classical synchronisation and communication constructs, which may embed control mechanisms either for suspending the agent or for activating other agents.

An agent interacts with its environment through an interface consisting of two kinds of ports, namely command ports and data ports. Input command ports allow the environment to pass control to the agent in order to execute an atomic action. Output command ports allow the agent to notify the environment about significant events; the agent is not aware of when


[Figure: an event-driven control machine, fed by asynchronous events, and a time-driven control machine, fed by time, both deliver commands to agents.]

Fig. 3. Agents and Control Machines

and by whom they are going to be managed. Data ports (whose discussion is not relevant in this context) allow the agent to exchange data with the environment without any synchronisation. Note that none of these mechanisms allows the agent to take control-related decisions.
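As an illustration, such an agent interface can be sketched in C. This is a minimal sketch under our own assumptions, not the HyperReal implementation: input command ports are plain functions executing atomic actions on the encapsulated status, and an output command port is a callback through which the agent notifies the environment without knowing who, if anybody, reacts. All names (`agent_t`, `agent_step`, the `limit` parameter) are illustrative.

```c
#include <stddef.h>

/* Output command port: the agent notifies the environment through this
 * callback without knowing who reacts to the event.                   */
typedef void (*out_port_t)(void *env);

typedef struct {
    int status;              /* encapsulated status                    */
    int limit;               /* hypothetical parameter of this agent   */
    out_port_t on_overflow;  /* output command port (may be NULL)      */
    void *env;               /* opaque environment handle              */
} agent_t;

/* Input command ports: each call is one atomic action.  There is no
 * suspension, waiting, or synchronisation inside the agent.           */
void agent_step(agent_t *a)
{
    a->status++;
    if (a->status >= a->limit && a->on_overflow != NULL)
        a->on_overflow(a->env);   /* signal a significant event */
}

void agent_reset(agent_t *a)
{
    a->status = 0;
}
```

Note that the agent contains no scheduling code: whether `agent_step` is invoked by a time-driven or an event-driven control machine is invisible to the agent.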

5. CONTROL MACHINES

The management of the control can be modelled by a control machine that is in charge of scheduling actions and of dispatching commands to agents.

To avoid confusion, it should be pointed out that, in a possible interpretation of our sample story, John's brain supports both the activities of the control machine, which delivers commands according to control clauses, and the execution of the actions. The distinction becomes clearer if we look at John as a controller issuing commands to be executed by somebody - which, in this case, is John himself.

The time-driven and event-driven models lead to the definition of two different (and somewhat dual) control machines, as sketched in Figure 3.

The time-driven control machine relies on an internal representation of time (John relies on his own watch) to set up and execute plans associating time values (or intervals) with commands. It defines one or more virtual clocks whose rate is specified according to a reference clock. The rate of a virtual clock can be dynamically modified; in particular, a virtual clock can be stopped, restarted and reset. The time-driven control machine is not sensitive to external asynchronous events.
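A virtual clock of this kind can be sketched as follows, assuming an integer reference clock counted in ticks and an integer rate; a rate of 0 stops the clock, and a non-zero rate restarts it. The names and representation are illustrative, not taken from the implementations cited later.

```c
/* A virtual clock defined against a reference clock. */
typedef struct {
    long base_ref;   /* reference time at the last rate change         */
    long base_virt;  /* virtual time accumulated up to that point      */
    int  rate;       /* virtual ticks per reference tick (0 = stopped) */
} vclock_t;

/* Current virtual time, given the current reference time. */
long vclock_now(const vclock_t *c, long ref_now)
{
    return c->base_virt + (ref_now - c->base_ref) * c->rate;
}

/* Dynamically modify the rate: 0 stops the clock, >0 restarts it. */
void vclock_set_rate(vclock_t *c, long ref_now, int rate)
{
    c->base_virt = vclock_now(c, ref_now);  /* freeze progress so far */
    c->base_ref  = ref_now;
    c->rate      = rate;
}

/* Reset virtual time to zero. */
void vclock_reset(vclock_t *c, long ref_now)
{
    c->base_virt = 0;
    c->base_ref  = ref_now;
}
```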

Conversely, the event-driven control machine can react to events by associating them with commands to be delivered as soon as possible. It does not manage the concept of time (John listens to rings and announcements, but he does not need a watch).

In terms of programming paradigms, the "programming-in-the-large" constructs used for defining the control machines should consist of two subsets: a time-driven set should provide constructs


for the management of clocks and plans, and for the delivery of commands according to plans; an event-driven set should provide constructs for the association of events with commands and for their delivery on event occurrence.

The two control machines and the corresponding subsets of language constructs rely on the same definition and implementation of agents. This is a significant step towards the definition of architectures and languages providing a uniform conceptual framework for the crowd of mechanisms required by real and complex systems.

Moreover, this approach provides valuable support for the re-use of software components, i.e., agents, under different control disciplines. However, as soon as both control models are used within the same system, a major integration problem arises: how can the two sets of language constructs and the two control machines coexist in a smooth way?

6. A UNIFIED CONTROL MACHINE

The integration of different models, and of the corresponding programming paradigms, can be tackled in two ways. We could build a suitable interface to let the implementations of the paradigms interact with each other; or we could choose one paradigm as a basic platform and implement the others on top of it.

The former approach is often motivated by the need to reuse existing implementations, by efficiency reasons, and by the incompatible features exhibited by the paradigms. The history of programming languages and software architectures contains many huge, inefficient and unmanageable systems based on this approach.

The latter approach is basically motivated by economy of concepts and by cleanness of architectural design. It is especially attractive when the models are dual, as in our case, since the emulation of one model on top of the other becomes quite natural. Therefore we will follow this approach to define the architecture of a control machine accommodating both the time-driven and the event-driven model.

The basic question becomes: which paradigm is more suitable to be chosen as primitive? The choice might depend on several criteria: taste, familiarity, tradition, efficiency, simplicity, cleanness, and so on. Dealing with real-time systems, our proposal is to use a time-driven control machine as the basic platform. There are several reasons supporting this choice.

First, it is a good design criterion to manage the most critical issues at a very basic level - and time is a critical issue indeed for real-time systems.

Second, a programming paradigm should be expressive, i.e., it should provide abstractions reflecting the way an application domain expert


[Figure: an event-driven planner and a control machine delivering commands to agents.]

Fig. 4. Planning Machine and Control Machine

models a system. In the process control area the concepts of time and speed play a major role. Therefore the introduction of time and speed as primitive concepts [Nigro 94] opens interesting perspectives towards non-conventional programming styles which reduce the gap between the software architecture of embedded applications and the model of the overall system.

Third, the explicit management of time at the basic level supports the evaluation of temporal behaviours of a software system in an emulated environment. This concept can be extended to hardware-software codesign techniques: Agents could be implemented either as software or hardware components according to cost/performance evaluations.

Fourth, there is long-term, but not fully satisfactory, experience in the use of concurrent and event-driven paradigms as basic platforms, which suggests devoting some research effort to time-driven paradigms.

Finally, there is another, non-technical motivation for this choice. There is growing interest, in the area of natural language understanding, in the interpretation of stories in the framework of timelines [Eco 94]. As this seems to be a natural way of understanding complex stories, it promises to ensure expressiveness in the specification of the behaviour of complex systems.

The unifying model relies on a conceptual and technological remark: the overall decision process, whether performed by a human or by a machine, consists of two basic activities: planning and control. In general, events are not directly bound to actions. They trigger planning activities, which may cause the planning of one or more commands, whose execution may generate other events, and so on.

Figure 4 highlights the separation between planning and control. The control machine relies on clocks and plans. It selects commands planned for execution at the current time, and dispatches them to agents for execution. Events, generated either by the environment or by agents, trigger the planning machine, possibly causing a redefinition of plans.


The two control machines sketched in Figure 3 can be unified into the new model. To get a pure event-driven behaviour, the planner schedules commands as events arrive; a command is removed from the plan after its execution. Conversely, to get a pure time-driven behaviour, the plan is defined once and for all, and commands are permanently associated with it. Cyclic tasks are modelled in a straightforward way by restarting the clock.
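The unified machine can be sketched in C under our own assumptions (an illustrative reconstruction, not the HyperReal implementation): a plan is a table of (time, command) entries, the control machine dispatches the entries due at the current virtual time, and a `once` flag distinguishes one-shot, event-planned commands from permanently planned, cyclic ones.

```c
#include <stddef.h>

#define MAX_PLAN 16

typedef void (*command_t)(void);

typedef struct {
    long t;         /* virtual time at which the command is due  */
    command_t cmd;  /* command port of some agent                */
    int once;       /* 1: remove after execution (event-planned) */
    int used;       /* slot occupied                             */
} plan_entry_t;

typedef struct { plan_entry_t e[MAX_PLAN]; } plan_t;

/* Planner: insert a command, either statically or as the reaction
 * to an event; returns -1 if the plan is full. */
int plan_command(plan_t *p, long t, command_t cmd, int once)
{
    for (size_t i = 0; i < MAX_PLAN; i++)
        if (!p->e[i].used) {
            p->e[i] = (plan_entry_t){ t, cmd, once, 1 };
            return 0;
        }
    return -1;
}

/* Control machine: dispatch every command planned for the current
 * virtual time; returns the number of commands delivered. */
int dispatch(plan_t *p, long now)
{
    int n = 0;
    for (size_t i = 0; i < MAX_PLAN; i++)
        if (p->e[i].used && p->e[i].t == now) {
            p->e[i].cmd();
            if (p->e[i].once)
                p->e[i].used = 0;   /* one-shot: event-driven style */
            n++;
        }
    return n;
}

/* A sample agent command used for illustration. */
static int demo_runs = 0;
static void demo_cmd(void) { demo_runs++; }
```

Scheduling a permanent entry and restarting the clock yields the cyclic, time-driven behaviour; scheduling one-shot entries from an event handler yields the event-driven behaviour.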

The proposed model allows system designers to shift in a seamless way from fully static to adaptive systems by just defining when planning activities are performed.

A comparison with the John and Mary story may help in understanding the model. The basic control machine delivers commands according to a plan, represented by the time clauses in the first column of Figure 2. The planning machine is possibly triggered by events, represented by the event clauses in the second column of the figures. If we refer to the intermixed model of Figure 1, clause 1 is statically pre-planned. Clauses 2 to 4 are dynamically planned for immediate execution (i.e., at the current time) as a consequence of events. In particular, the planning of clause 2 is triggered by an internal event generated by John as an agent ("he enters the station"), whereas the planning of clauses 3 and 4 is triggered by external events ("the train is announced" and "Mary leaves the train"). Clause 5 is dynamically planned for execution at "current time + 10" as a consequence of an internal event ("John ends kissing Mary").

7. CONCLUSIONS

The paper focused on the duality between time-driven and event-driven models, showing that a basic time-driven engine can be a sound basis for the integration of the two models. Issues under investigation are the use of static and dynamic scheduling techniques for the definition of plans, the management of exception conditions, the use of the concept of virtual time to emulate the temporal behaviour of a system in a host environment, and the introduction of the execution speed as a primitive programming concept.

Distribution issues require some comments. The coexistence of the time-driven and event-driven models is the basis for the identification of two classes of real-time distributed systems. If the communication subsystem has a deterministic temporal behaviour, it is possible to define global plans in terms of a global time. On the other hand, if the communication subsystem has a non-deterministic temporal behaviour, it is possible to define local plans in terms of local times, and to consider communications as generators of external events from the point of view of individual computing nodes. In the former case hard timing constraints can be managed in a global way. In the latter, hard timing constraints must be managed locally to computing nodes. A general comparison of the two approaches falls outside the scope of this paper. It should be noted, however, that the proposed model


accommodates both approaches and allows system designers to select the most suitable one according to the features of the communication subsystem and to the requirements of the application domain.

The ideas discussed in the paper derive from previous work in the area of object-oriented paradigms for real-time systems [Nigro 93], which resulted in the HyperReal project, centred on the definition of architectural abstractions which allow designing, analysing and implementing complex systems [Agnoli 95] [De Paoli 95]. The abstractions and the language are the basis for the design and development of experimental platforms which are being used for testing the soundness of the approach in significant application areas, for gaining implementation experience, and for refining abstractions and platforms with an experimental approach.

The model has been implemented in different versions based on Oberon, C++ and C, and has been ported to different target platforms (Unix, MS/DOS, Intel 486 and 8088 bare machines, and an ARM RISC processor). The experimental results allow us to conclude that the model can be implemented in an efficient way and that it can be exploited to build systems in different application domains.

Future work will deal with the definition of language constructs for time management, in the context of the Esprit OMI/CORE project; with the integration of the basic time-driven engine into a modular real-time kernel, in the context of the Esprit MODES project; with the definition of connectors, i.e., system components which define complex protocols for the communication among agents in a distributed environment; and with the use of formal methods based on timed Petri nets for the analysis of the temporal behaviour of a system.

REFERENCES

[Agnoli 95] Agnoli M., Poli A., Verdino A., Tisato F., "HyperReal One: the Implementation, the Environment and an Example", EUROMICRO Workshop on Real Time Systems, Odense, June 1995.

[Andre 94] Andre C., Peraldi M., Boufaied H., "Distributed synchronous processes for control systems", in Proc. of the 12th IFAC Workshop on Distributed Computer Control Systems, Toledo, Spain, September 28-30, 1994.

[De Paoli 95] De Paoli F., Tisato F., "Architectural Abstractions and Time Modelling in HyperReal", EUROMICRO Workshop on Real Time Systems, Odense, June 1995.

[Eco 94] Eco U., "Six Walks in the Fictional Woods", Harvard University, Norton Lectures 1992-1993.


[Nigro 93] Nigro L., Tisato F., "RTO++: a Framework for Building Hard Real-Time Systems", JOOP, vol. 6, no. 2, May 1993, p. 35.

[Nigro 94] Nigro L., Tisato F., "Timing as a Programming-in-the-Large Issue", JMCL'94, Joint Modular Languages Conference, Ulm, Sept. 28-30, 1994.

[Stankovic 91] Stankovic J., "The Spring Kernel: a New Paradigm for Real-Time Systems", IEEE Software, May 1991, pp. 62-72.


Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

DYNAMIC TASK MAPPING FOR REAL-TIME CONTROLLER OF DISTRIBUTED COOPERATIVE ROBOT SYSTEMS

Tim Lueth, Thomas Laengle, and Jochen Heinzman

Institute for Real-Time Computer Systems and Robotics (IPR), University of Karlsruhe, D-76128 Karlsruhe, Germany, email: t.lueth@ieee.org

Abstract: Intelligent control architectures of autonomous robot systems must be modular, scalable, extendible, and adaptable to the available capacity of computer control systems. To support these properties, the concept of the adaptive control structure, ACS, is presented in this paper. Experiments with a small autonomous assembly robot show the successful use of the ACS.

1. INTRODUCTION

The design of intelligent controllers for autonomous robots will always require engineering, development, and experimental effort. There is neither a general-purpose robot control algorithm nor an ultimate robot control architecture. With the development of new sensors and actuators, more powerful robot systems can be built that require additional and different control strategies. On the other hand, new and better sensors may also decrease the complexity of the models used and simplify control architectures. These considerations have already led to new concepts of distributed robot controllers in which several independent local models and observers simply switch among different control strategies to achieve a robust control behavior (Lueth, 1995; Lueth et al., 1995). In comparison with the serial information flow among sensor, model, planner, and execution controller, the information is now processed locally in a distributed control network consisting of independent modules.

Since it is evident that parts of the control architecture will always be adapted, changed, extended, or deleted during the lifetime of the robot, the consideration of architectural dynamics is an important research topic. A framework is required that supports the description, implementation, and change of control architectures. In this paper, a concept for the description of distributed robot control systems is presented. This concept supports the dynamic mapping of control tasks to real-time robot controllers within a network of cooperative robot control systems.


The concept supports the design of modular, scalable, and cooperative robot systems that locally distribute control tasks among each other.

2. BOTTLENECK OF INTELLIGENT CONTROL

Autonomous robots measure, model, and observe their environment to achieve a specified environment goal state or to stabilize the goal state. Therefore, they have to independently plan and execute environment changes that will reduce the "distance" between the current and the goal environment state. The goal must be achievable even if there are dynamic disturbances. This task is performed by a so-called intelligent control system. The classical method to solve this problem is to use sensors for maintaining a fixed and previously defined environment model. The goal state is defined within the same environment model. The use of actuators is also described as state changes in this environment model. The main advantage of such a fixed global model is knowing the relations among all environment states and having explicit access to this information. This looks fine in principle, but becomes extremely problematic in real robotics applications. In Fig. 1, there are the four components of a closed control loop (CL): a sensing (S) module, a state observation/modeling (O/M) module, a control or planning (C/P) module, and an execution control (EC) module for changing the environment. If the EC module itself contains a closed control loop, it has a similar structure consisting of these four components.
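The four components can be sketched in C as a chain of functions, under our own simplifying assumptions (scalar signals, one synchronized cycle per call); the sample components below implement a toy proportional controller driving the environment state towards zero. All names and signatures are illustrative, not taken from the paper's implementation.

```c
/* S, O/M, C/P, EC as the four stages of one closed control loop. */
typedef struct {
    double (*sense)(double env);               /* S:   measurement        */
    double (*observe)(double raw);             /* O/M: state estimation   */
    double (*plan)(double state);              /* C/P: command generation */
    double (*execute)(double cmd, double env); /* EC:  environment change */
} control_loop_t;

/* One synchronized cycle of the loop: sense -> model -> plan -> act. */
double cl_cycle(const control_loop_t *cl, double env)
{
    double raw   = cl->sense(env);
    double state = cl->observe(raw);
    double cmd   = cl->plan(state);
    return cl->execute(cmd, env);
}

/* Toy components: ideal sensing and modeling, and a proportional
 * controller that moves the environment state halfway towards 0. */
static double s_ideal(double env)   { return env; }
static double om_ideal(double raw)  { return raw; }
static double cp_half(double state) { return -0.5 * state; }
static double ec_apply(double cmd, double env) { return env + cmd; }
```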


[Figure: sensor, external state observation, and communication feeding the control loop.]

Fig. 1: Information flow for intelligent control

The serial information flow through one centralized O/M and one C/P module leads to a synchronized cycle time for generating commands for the execution controller. The cycle time and the time delay between sensing and execution depend on the model size and the capacity of the processor used. Unfortunately, the required minimal cycle time is dictated by dynamic environment effects. Furthermore, not all process states can be continuously measured. Most can be measured with more or less delay, but some states can be interpreted only as events; these are observable with delay by analyzing a sequence of former process states. Several hard problems arise suddenly when robot control systems become more complex by
• adding new control capabilities,
• adding new sensors or actuators, and
• coupling independent intelligent controllers.
The problems are related to the limited capacity of the controller hardware regarding
• information processing,
• information maintenance and storage, and
• information transportation (communication),
and to the missing possibility of separating the control architecture into independent modules and distributing it optimally with respect to the available processing and communication capacities. On the other hand, the separation of global models into independent submodels also generates new problems. The global control optimum may not be achievable, and independently operating C/P algorithms will possibly generate contrary commands for the following execution controller. If contrary demands are integrated locally, this may prevent the achievement of the goal. Nevertheless, only experiments can show whether local models will work appropriately or not.

3. STATE OF THE ART

The basics of intelligent control have been published by Wiener (1948). First experiments with distributed


controlled autonomous mobile robots were performed by Shannon (1951) and Walter (1961). The idea of controlling autonomous mobile robots by centralized global models was initiated by Nilsson (1969). Systems for centralized planning and control were published (Fikes et al., 1972). First ideas on distributed planning were published by Hayes-Roth (1979).

The description of the information flow during intelligent control by hierarchical sensing, modeling, planning, and controlling components has been introduced by Albus et al. (1981). Braitenberg (1984)

and Brooks (1986) have presented control systems in which simple distributed local information processing is used to reactively control autonomous robots. A reactive planning system was presented by Georgeff and Lansky (1987). Distributed processing of global information has been used in Hayes-Roth (1985) and Thorpe et al. (1988). Arkin presented a sounder concept for reactive motor control (1990). In Musliner et al. (1993) the idea of cooperative control architectures was published first. Event-driven control has been presented by Bejcsy in 1994. Distributed planning and control of multiple autonomous robot systems was presented by Laengle and Lueth (1994).

Up to now, robot control architectures have been considered as something static. It has been assumed that after an initial design, the control structure will not change anymore. Therefore, no attention has been paid to open, dynamic, extendible architectures for modular and scalable systems. Experiences with complex robot projects (Lueth and Rembold, 1994) have shown that control architectures and structures will quickly change if new sensors, actuators, controllers, or applications are required. A framework for the description of control structures that supports dynamic changes of the control flow during run-time is missing.

4. THE ADAPTIVE CONTROL STRUCTURE

The idea of the adaptive control structure (ACS) is to allow changes of the control structure during run-time in order to avoid and master capacity bottleneck situations. As mentioned above, these bottleneck situations are caused by new or changed control algorithms that use additional or different memory, processing, or communication capacities. For adaptive control structures, it is necessary to distinguish between two types of information:
• control loop information (CLI) that is used within a control loop, and
• control structure information (CSI) that is used to modify control loop structures.
The CLI can be processed in a network of linked S, O/M, and C/P modules. Afterwards, the control information is processed similarly by a network of linked execution controllers (Fig. 2).


The CSI is required to dynamically establish and change appropriate connections among the CLI modules. In the serial information flow approach, the CSI is part of the C/P module (Fig. 1) and is also processed there. In the adaptive control structure approach, the CSI describes the changes in the network of the control flow. The CSI can be used explicitly as knowledge in a central C/P module or implicitly in distributed local connection switching modules.

Fig. 2: CLI is processed within the control loops; CSI is used to switch the information flow in a decentralized way.

To be capable of changing the information flow by means of the CSI, a flexible information routing system and separable decision-making modules are required to support adaptive control structures. This must be supported by a robot operating system. Furthermore, it must be possible to separate decision-making processes into basic units that can be processed as CSI. This allows a flexible on-line switch between explicit planning and implicit control.
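A decentralized connection switch of this kind can be sketched in C, under our own assumptions (channels identified by small integers, one scalar value per channel): CSI commands rewrite a local routing table, while CLI values flow along the currently selected links. All names are illustrative, not taken from the paper's implementation.

```c
#define N_CHANNELS 8

/* A local control table: next[i] names the channel fed by channel i. */
typedef struct {
    int next[N_CHANNELS];     /* -1 = currently unlinked   */
    double data[N_CHANNELS];  /* one CLI value per channel */
} route_table_t;

void route_init(route_table_t *rt)
{
    for (int i = 0; i < N_CHANNELS; i++) {
        rt->next[i] = -1;
        rt->data[i] = 0.0;
    }
}

/* CSI: rewire channel `from` so that it feeds channel `to`. */
void csi_link(route_table_t *rt, int from, int to)
{
    rt->next[from] = to;
}

/* CLI: write a value and forward it along the selected link. */
void cli_write(route_table_t *rt, int ch, double v)
{
    rt->data[ch] = v;
    int nxt = rt->next[ch];
    if (nxt >= 0)
        rt->data[nxt] = v;
}
```

The point of the separation is visible here: `cli_write` never decides where information goes; only CSI operations such as `csi_link` change the structure, and they can do so at run-time.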

5. IMPLEMENTATION ASPECTS

For the implementation of adaptive control structures, several basic software mechanisms have to be implemented on top of a multitasking kernel. The basic mechanisms are the following. The information transfer among the control modules is performed via shared memory concepts or streams (buffers, communication networks). The information is processed and exchanged among individual modules
• on demand (individual request - backwards),
• continuously (time-period based), or
• on event (information state changes - forwards).

Fig. 3: Information transfer on demand (a), periodically (b), and event based (c)

Switching connections among different O/M, C/P, and EC modules is performed by local control tables. The effect is similar to information integration based on weights and priorities, but here the result is exclusive.


Integration of independently processed control information is performed based on priorities and weights of the generating modules. The resulting control command consists of a simple weighted vector sum of the control information with the highest priority:

    U = sum_i (w_i * u_i)  over the modules i with P_i = P,  where  P = max({P_1, ..., P_n}).

Resulting priority and weight depend on additional information processes.

Fig. 5: Control integration by priorities and weights

6. DYNAMIC TASK MAPPING

By the explicit separation of CLI and CSI within a complex control system, it is now possible to analyze the control bottleneck problem in more detail. The capacity requirements for memory and processing of control modules, and the communication among the modules, can be described. Therefore, it can be checked whether control modules can be executed on a single sensor-processor-actor system, or whether the modules must be distributed to a


network of processors which are linked by communication channels. Flexible linkage of communication channels can be achieved virtually on multitasking systems or on some special hardware architectures. Typically, the communication capacity among processors is fixed by hardware constraints (Fig. 6).

Fig. 6: Mapping of S, O/M, C/P, EC control modules to (a) a fixed network of processors; (b) flexible routing of communication channels (Lueth et al., 1995)

For the distribution of control tasks (loops) consisting of S, O/M, C/P, and EC modules, two different types of control separation can be distinguished:
1. Separation of one control loop into several control loops of less complexity (capacity requirements) that are executable in parallel.
2. Separation of one control loop into several control components that can be distributed to sequentially linked processing nodes.

Fig. 7: a) Separation into independent control tasks. b) Separation into coupled control components.

Furthermore, it is possible to combine both separation methods. By these techniques, control tasks are flexibly separable into subtasks that can be executed even on machines with low capacities.

7. EXPERIMENTS

The described concept has been implemented on small robots (Fig. 8) of the Khepera type (K-Team, 1994) using the C programming language interface and the Khepera's multitasking kernel. To achieve the desired capability of changing the control structure during run-time, an on-board operating system (called controller behavior) is connected to an external C/P system (on a SUN workstation) that is able to separate control loops with respect to the required and available capacity. The structure of the loops is described by using a special CSI command language. The commands correspond to the activation of tasks, the routing of communication channels among tasks, and the deactivation of tasks. The CSI is generated either by the external C/P module or by already running tasks that generate CSI. The operating system itself is also implemented as one task.

Fig. 8: A Khepera robot grasping a "spacer" from the "spacer box"

In our experiments, the small robots should perform assembly tasks. For this purpose, different S, O/M, C/P, and EC modules must run as tasks in parallel or sequentially under real-time constraints. Since the capacity of the small robots regarding memory and information processing is limited, the external system downloads and activates only a part of the overall control/planning system. In the following, we describe the individual control modules in groups related to useful behaviors. This helps in understanding the functionality of the control loop modules, even though the concept used here is more general and easier to understand than the behavior-based approach. It is important to note that the explained behaviors have very different structures, but all consist of one or more similar modules for closed control loops.

7.1 The Basic Control Behaviors

The following behaviors (networks of control modules) are sufficient for simple assembly. The modules can be used for many other behaviors too.

sensoric: This behavior is an O/M module that measures the values of the serial link, IR distance sensors, IR ambient light sensors, motor encoders, motor speed, motor PID counters, etc., and distributes them to several shared memory areas. Furthermore, it generates and maintains a history of former sensor values.

controller: This behavior consists of an O/M module that filters the global input buffer (linked to the serial interface). It interprets CSI to install and delete tasks, shared memory areas, or communication channels used by C/P and EC modules. Furthermore, it


can adjust its own cycle time and activation frequency to estimate the robot's load by measuring delays and cycle time fluctuation.

integrator: This behavior is a C/P module that integrates (Fig. 5) the desired state changes for the wheel speed execution controller and calculates the output speed depending on weights and priorities. It is able to analyze whether different requests can be integrated into one resulting command, must be sequentialized, or are not integratable at all.

poscomp: An O/M module that transforms the motor encoder information into a relative Cartesian and some other coordinate systems. It just updates a communication channel (shared memory).

detectcrash: An O/M module that uses both IR sensor and wheel encoder information to generate an event in case of a detected collision.

moveforw: A C/P module without CLI as input (an open-loop controller) that sends equal speed commands for both wheels to integrator.

avobst: A closed control loop that uses a model of the distances between the IR sensors and possible obstacles within a 50 mm range. As soon as an obstacle comes nearer than 20 mm, the controller turns the robot into a new direction that will increase the distance to the obstacle during further movements.

dontstay: A closed control loop that consists of an O/M module for detecting position oscillations and a C/P module for position control to leave the place of the oscillation. In the future, dontstay will also process CSI to prevent oscillations.

attach: This closed control loop moves the robot to the nearest obstacle within its sensor range and stops the robot a few millimeters in front of the obstacle; it then terminates itself and generates an event.

leave: This behavior consists of a combination of avobst, moveforw, and dontstay. It integrates the resulting controller commands and is active as long as there is an obstacle within the IR sensor range.

movealong: This behavior consists of a combination of moveforw, avobst, and a new closed control loop that is contrary to avobst: it tries to reduce the distance between the IR sensors on the left side of the robot and an obstacle to less than 30 mm. It uses the same model as avobst but tries to minimize the distance. The combination leads to a resulting behavior of moving along an object.

findedge: An O/M module that is linked to poscomp. With some delay, it is able to generate a "there was an edge" event.

gripit: A closed control loop that closes the gripper as far as possible if the optical sensor inside the gripper jaws detects an obstacle. Otherwise, it opens the gripper.


search: This behavior consists of attach, movealong, moveforw, and avobst. It moves arbitrarily (avobst, moveforw) as long as no obstacle is detected. Afterwards, attach will move to the obstacle. Next, movealong controls a movement along the obstacle. During the movement, a convolution process checks the difference between the generated history and a formerly recorded history. The recorded history information is stored in a global memory area that serves as the input memory area of search.
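The convolution check of search, comparing the generated history against a recorded pattern, can be sketched as a sliding-window comparison; the mean-squared-error metric is an assumption, since the paper does not specify the measure:

```python
def history_match(recorded, measured):
    """Slide the recorded pattern over the measured history and return the
    smallest mean squared difference; a match would be assumed when the
    returned value falls below some threshold."""
    n = len(recorded)
    best = float("inf")
    for off in range(len(measured) - n + 1):
        err = sum((recorded[i] - measured[off + i]) ** 2 for i in range(n)) / n
        best = min(best, err)
    return best
```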

gripspacer: This complex behavior consists of several of the above-mentioned tasks for sensing, observation, planning, and control. The tasks are activated or stopped depending on decisions that are made and supervised by the gripspacer task. Gripspacer moves to an edge, moves the manipulator down, and closes the gripper (gripit).

insertspacer: Similar to gripspacer, this is a network of combined individual control modules. It is not a combination of behaviors, but a combination of control modules that belong to other behaviors. Insertspacer tries to move the manipulator down and afterwards to rotate the robot on the spot. If the robot cannot be rotated on the spot, the spacer must be inside the hole.

7.2 The Use of the Behaviors

During the assembly, the behaviors of the small robots are activated by the external C/P module that accepts assembly tasks and separates them into networks of control modules. Afterwards, the behavior search is activated with the input area "spacer box". The robot moves arbitrarily until an object is found by the IR sensors. After executing the attach behavior, the robot moves along the obstacle to prove by convolution whether the object is the desired "spacer box" or not. If the object is not the box, the robot starts the leave behavior and afterwards the search behavior again. If the "spacer box" has been found, the gripspacer behavior is activated. A simple loop of search and gripspacer succeeds after one or more tries. If the spacer is gripped (Fig. 8), search is activated with the "side plate" history pattern as input area. After finding the side plate, the behavior insertspacer performs the parts-mating operation. A loop of insertspacer and search guarantees success after one or many tries (Fig. 9). This combination of behaviors has been proven successful in experiments (a video is available).
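The loop of behaviors described above can be sketched as follows; run(name, arg) is a hypothetical callback that activates a behavior network and reports success, not an interface from the paper:

```python
def assemble_one_spacer(run):
    """Fixed behavior sequence for one pick-and-place cycle, following the
    search -> gripspacer -> search -> insertspacer loops in the text."""
    while not run("search", "spacer box"):
        run("leave", None)                  # wrong object: leave and retry
    while not run("gripspacer", None):
        run("search", "spacer box")         # grip failed: search again
    while not run("search", "side plate"):
        run("leave", None)
    while not run("insertspacer", None):
        run("search", "side plate")         # mating failed: search again
    return True
```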

7.3 Problems with Local Models and Processing

As has been shown, it is possible to perform a pick-and-place operation with the small robot by an almost fixed sequence of behaviors. If there are many spacers, the average distance between the robot and a spacer is small, and so is the average time to find a spacer. The more spacers are found, taken, and inserted during the assembly, the more time is required to find the next one. Furthermore, more and more spacer boxes are correctly detected but no longer contain spacers. A similar situation holds for the holes in the side plate. This means that the efficiency of this simple strategy decreases over time. Therefore, a system with more global knowledge about the overall situation must estimate the task execution time and possibly use other robots for later assembly tasks or guide the Kheperas better.

Fig. 9: Inserting "spacer" into "side plate"

8. CONCLUSION AND FUTURE WORK

In the paper, a concept for the description and implementation of robot control tasks has been presented. A closed control loop is separated into four modules: sensing, modeling/observation, planning, and execution control. Typically, real-time control of a robot requires the asynchronous and parallel execution of several control loops at a time, which are based on the same control modules. Therefore, control tasks are described as event-driven and dynamically changeable networks of control components. The components can be implemented as individual tasks with flexible information-flow routing on a real-time multitasking operating system. If a task requires a component that is not already running, a new task for this component is automatically generated. All tasks can locally measure whether they can guarantee real-time control or not. The concept has been implemented on the KACORs robot system. It has been used successfully for simple assembly tasks.

Up to now, the control loops have been downloaded completely to one individual robot. The next step is to develop and download control loops that are distributed to more than one robot. For example, one robot performs the sensing and modeling task while a second robot performs planning and execution. The distribution of the overall task can in the future be performed either by the external system or by a robot that does not have enough free capacity to perform all the required task components by itself.


9. ACKNOWLEDGMENT

This research work has been performed at the Institute for Real-Time Computer Systems and Robotics (UKA-IPR), Prof. Dr.-Ing. U. Rembold and Prof. Dr.-Ing. R. Dillmann, Faculty for Computer Science, University of Karlsruhe. Thanks to Johan Helqvist (Univ. of Lund LTH, Sweden) for his work.



Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

PROGRAMMING APPROACHES FOR DISTRIBUTED CONTROL SYSTEMS

Ronald Schoop* and Alan Strelzoff**

* AEG Schneider Automation, Steinheimer Strasse 117, 63500 Seligenstadt, Germany, email: [email protected]

** AEG Schneider Automation, One High Street, North Andover, MA 08145-2699, USA, email: [email protected]

Abstract: Based on a general design model for distributed control systems and using the standardized languages of IEC 1131-3 for control, three approaches for programming are investigated. The first is based on IEC Programs with extensions, the second is a decomposition of Programs with SFC notations, and the third approach uses Function Blocks corresponding to the IEC TC65 Function Block Standard. The approaches are specified and compared, and conclusions for their use and for further work are drawn. The intention of the contribution is to discuss possibilities for open programming models rather than to present final results.

Keywords: Distributed control, Distributed models, Functional blocks, Open control systems, Programming approaches, Sequential control, Standards

1 INTRODUCTION

The importance of open, standardized functional notation for control systems is strongly increasing. In particular, the programming model and the languages of IEC 1131-3 are playing a key role. However, these programming languages are intended for centralized control systems.

On the other hand, centralized control systems are increasingly being displaced by distributed systems. Several problems exist for the use of the mentioned functional notation for distributed real-time systems. These problems are:

• How to represent the distributed application to the user without burdening it with communication issues.
• How to break up and distribute the functionality of the application.
• What is the execution model for the distributed application.
• How to share data.


In this paper, based on a general design model, the first three problems for the programming of distributed control systems, as well as three possible approaches for solving them, will be discussed. These variants will be explained and compared, and conclusions for their applicability will be drawn.

2 GENERAL DESIGN MODEL

In line with well-known approaches for the design of distributed control systems (Ferling and Jüngst, 1987; Chapurlat and Prunet, 1994), three design stages are distinguished: Programming, Configuring, and Loading/Debugging/Tuning.

The Programming covers i) the functional structuring of the control application (including control, motion and MMI functionality) based on an entity-relationship model and ii) the functional specification of the Programs (e.g. with IEC 1131-3 language editors) and their Connections (e.g. by editing the communication attributes).


(figure: the Programming discipline edits Program Groups, Programs, Connections, FBs and the language elements of ST, IL, LD, FBD and SFC; Navigation links it via the Project to the Configuring discipline with Station Groups, Stations and Channels; Mapping assigns Programs to Stations, described by Station Type, Components, Comm. Interface and Program Mapping)

Fig. 1. Engineering Disciplines in the Design Model

Configuring covers i) the topological structuring of the control equipment (stations and networks) and ii) the physical specification of the stations and the networks: the assignment of control software to tasks running on the stations and the parametrization of the system. The Loading/Debugging/Tuning covers the on-line support for the commissioning of the designed control system.

This model (Fig. 1) supports a top-down as well as a bottom-up design. As a result of the Programming, a hierarchical functional description of the application is created, and as a result of Configuring, the description of the physical system is designed, which already contains the information about the mapping of Programs to Stations. In the scope of this paper the functional units for distribution, which are assumed to be unbreakable, are called Atoms.

3 REQUIREMENTS

For the functional notation of the application, several requirements regarding the quality and quantity of the models and languages exist:

• The programming languages and models should correspond to standards, esp. IEC 1131-3, to minimize training and maintenance costs.
• For the application software, a structure of a top-level Sequential Function Chart (SFC) coordinating several Actions is assumed. This structure allows both distribution (since some Actions are associated with distributed machine components) and parallelization (SFC breaks the strict data-flow-oriented or sequential execution order of the other IEC languages).
• The functional notation should provide a large number of small-sized Atoms for fine-grained mapping (Thielen, 1994), which is important especially for distributed fieldbus systems based on small controllers and intelligent actuators and sensors.


4 APPROACHES

General: The starting point for the investigation of the granularity is the programming model introduced by IEC 1131-3 (Fig. 2). Programs, written in control languages (e.g. Function Block Diagram or Instruction List) or general high-level languages (e.g. C), are assigned to Tasks for execution control. The Programs are instantiated within a Resource, which itself is contained in a Configuration. A Configuration is considered as a model for a Programmable Controller. In this model, the mapping shown in Fig. 1 means a distribution of Programs to Configurations or Resources, respectively.

Problems. In principle there exist two ways for distribution: taking the Programs as Atoms, or breaking a Program into smaller units, which are then taken as Atoms.

The first way would put the burden of distribution on the user, who would have to design a "large" number of "small" Programs. The communication between these Programs would be especially hard to handle in this case, since

Fig. 2. IEC Model (figure: Programs are instantiated within Resources, which are contained in a Configuration)


• special Communication Function Blocks have to be used, increasing the communication notation at the expense of the functional notation,
• no data distribution is provided above the level of a Configuration, and
• the local process inputs of one Configuration have to be distributed explicitly for network-wide use.

The second way would show the user only one single Program for the distributed application, without any communication notation but only the control functionality. But here the burden of distribution would be put on the design tool and the run-time system, since an automatic decomposition and parallel execution would have to be realized. The difficulty here lies in the different models: the IEC Program model supposes a centralized, sequentially working machine with synchronized execution of parts, whereas the run-time system represents a decentralized, parallel working machine with asynchronous execution of Program parts.

Proposals. In order to explore the solutions, three variants have been investigated:

1. In a Program Approach, the Programs are taken as unbreakable. These Programs and the related communication connections are extended by additional features for data exchange between stations.
2. For a more fine-grained mapping, a further decomposition of Programs is realized in a Program Partition Approach. For several reasons (structure of the program, use of asynchronous relations, locality of partitions) the principal focus lies on a decomposition of SFC with refined Actions.
3. A more general Block Approach uses "containers" with control and data inputs/outputs. These blocks may be filled with Programs or Program (esp. SFC) partitions and can therefore be applied to 1) and 2).

In all approaches it is supposed that the control system consists of several stations communicating via a fieldbus network.

4.1 Program Approach

General. The stations of a distributed control system are modeled in IEC 1131 as Configurations. Therefore, the system is supposed to consist of several Configurations with Programs running on them. The Programs (Fig. 3) may exchange data across Configurations via Communication Function Blocks. The execution of any Program is controlled by task features.


Fig. 3. IEC Program (figure: a Program with an Event input, Data inputs and outputs, and a Communication Interface)

Specification. For this kind of system, the definition of an IEC Program is extended here. The limitation of global variables to the inside of one Configuration only, as imposed by the IEC model, is removed by introducing Configuration-external variables. These variables would be mapped to a cyclically driven communication channel with a multicast facility (if supported by the selected network). For such a Network Variable only a single source exists, but several copies are distributed in the system.

Fig. 4. Distributed SFC with Network Variables (figure: Program 1 in Configuration A and Program 2 in Configuration B both declare "VAR_EXTERNAL Motor: BOOL; END_VAR" and share the MOTOR Network Variable)

The distributed Programs execute asynchronously. The only synchronization is realized at the application level by using the Network Variables, as shown in Fig. 4.
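The single-source/many-copies semantics of such a Network Variable can be sketched as follows; class and method names are illustrative, not taken from IEC 1131-3:

```python
class NetworkVariable:
    """Sketch of a Configuration-external ("network") variable: exactly one
    producing station writes it, and copies are pushed cyclically to every
    consuming station."""
    def __init__(self, name, initial):
        self.name = name
        self._value = initial
        self._copies = {}                 # station id -> last received copy

    def write(self, value):
        self._value = value               # only the single source writes

    def subscribe(self, station):
        self._copies[station] = self._value

    def publish(self):
        # One multicast cycle: every subscribed station gets the same value.
        for station in self._copies:
            self._copies[station] = self._value

    def read(self, station):
        return self._copies[station]
```

Note that a consumer only sees a new value after the next communication cycle, which is exactly why synchronization happens at the application level.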

With the same mechanism it is possible to distribute configuration-local I/O to Programs running on other Configurations, simply by declaring the I/O variables as configuration-external, which could be done automatically by an appropriate tool.

Attributes. By this extension, a high degree of reusability and of free mapping of Programs would be provided. This holds regardless of whether the Programs are running inside the same Configuration or on different ones.

4.2 Program Partition Approach

General. In terms of the IEC, the Program Approach means that one or multiple Programs are mapped to one Resource, where one or several Resources are located in one Configuration. A splitting of one Program across several Resources or Configurations is not possible. The only way would be to realize a "Distributed Configuration".

Specification. To find a good partitioning for one IEC Program running on such a Distributed Configuration, it seems useful to determine the unbreakable Atoms. Once these are found, the mapping of Program partitions to Stations can be realized by assigning Atoms, without any further decomposition.

An example of the Program structure which has to be distributed is shown in Fig. 5: two SFC networks consisting of Steps (Si) and Transitions (Tj), checking Conditions (Ck) and coordinating Actions (Al).

Fig. 5. Sequential Function Chart (figure: two SFC networks with Steps, Transitions, Conditions C2 and C4, and Actions A1 and A3 to A7; the active Atoms are marked)

For example, the Steps S2 and S4 and the Actions A1, A3 and A5 are active, and at least the conditions C2 and C4 have to be checked. Below this "application semantic", a fixed evaluation order, given by the IEC, is assumed (Fig. 6):

Phase 1: Evaluation of the Transitions of all networks and corresponding activation or deactivation of Steps in all networks.

Phase 2: Evaluation of the execution condition for each Action (via a hidden Action Control Block) and corresponding Action execution.
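These two phases can be sketched for a single simplified SFC network; the data structures are illustrative, not the standard's:

```python
def sfc_cycle(steps, transitions, actions):
    """One IEC-style SFC evaluation cycle, following the two phases above.

    steps:       dict step -> active flag
    transitions: list of (source_step, condition_fn, target_step)
    actions:     dict step -> action_fn, run while its step is active
    """
    # Phase 1: evaluate all transitions first, then switch the steps in one
    # go, so every transition sees the same consistent step marking.
    fired = [(s, t) for s, cond, t in transitions if steps[s] and cond()]
    for s, t in fired:
        steps[s], steps[t] = False, True
    # Phase 2: evaluate each action's execution condition and execute it.
    return [actions[s]() for s in actions if steps[s]]
```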

Fig. 6. Execution Model (figure: in Phase 1 the Program evaluates its networks; in Phase 2 Steps and qualifiers select the Actions to execute; the Action Control Blocks are hidden to the user)


Fig. 7. Evaluation Order (figure: a) sequential order of the SFC networks over time on one processor; b) parallel order on a distributed Configuration, with inserted phases for data exchange and/or synchronization)

Applying this execution model to a single-processor machine, as supposed by the IEC, leads to the sequence shown in Fig. 7a). Applying it to a distributed Configuration, as desired here, leads to the order shown in Fig. 7b). The modifications are:

• All Steps may be evaluated in parallel; the same is valid for all Transitions.
• All Conditions may be evaluated in parallel if no side effects between the Conditions are programmed and expected (such side effects may be used in 7a), but would lead to "nasty" Programs).
• All Actions may run in parallel if, likewise, no side effects are programmed.
• Phases for data exchange and synchronization have to be inserted.

With these restrictions, both orders produce the same Program semantics.

The possibility of parallelization may decrease the overall reaction time, but the communication overhead has to be regarded. The needed messages between the Atoms are illustrated in Fig. 8.

Fig. 8. Messages between distributed Atoms

For example, the two SFC networks of Fig. 5 with a Step change in each are assumed. The example starts with the execution of the Conditions, afterwards


transmitting the results (TRUE or FALSE) to the assigned Transitions. These transmit messages for Step activation/deactivation. Afterwards a synchronization has to be realized, so that the Actions work with a consistent Step marking. For this, the active or deactivated Steps transmit to the Actions. The Action Control Blocks are evaluated and the Actions are executed correspondingly. Finally, all executed Actions transmit messages for synchronization, esp. to freeze the inputs for the Conditions, and the cycle runs again.

Attributes. To minimize the number of messages, several compositions of the C, T, S and A elements are possible. These combinations and the needed numbers of messages are shown in Table 1. There, for example, CT stands for a partition combining a Condition evaluation and a Transition evaluation with local data exchange only. While CT and SA stand for a pairwise combination, TS stands for a partition of all Steps and Transitions.

Table 1: Number of needed messages

Partitions    multicast                point to
              produced    consumed    point
C, T, S, A       12          30          30
CT, S, A         10          28          28
C, T, SA         12          26          26
CT, SA           10          24          24
C, TS, A         10          26          26

The numbers are given for the example above, but they are in principle similar for other SFC applications, such as more networks, parallel branches, or multiple Actions for one Step.

The first conclusion is that the differences between multicast and point-to-point communication are considerable. For networks with producer/consumer attributes, like CAN, FIP or Modbus+, the number of messages is below 40% of that of networks with point-to-point communication. The second conclusion is that, independent of the selected partitioning, the communication overhead for the distributed execution control is huge. For a machine of a transfer line (supposing 10 SFC networks and a CAN bus with 1 Mbit/s), where a cycle time below 10 ms is desired for the whole control, the communication overhead alone would be about 5 ms.
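The 5 ms figure can be roughly reconstructed with order-of-magnitude assumptions: a per-network message count taken from Table 1's best multicast case (10 messages for the two-network example, so roughly 5 per network and cycle) and a short CAN data frame of about 100 bits on the wire. Neither number is stated in the paper:

```python
NETWORKS = 10                 # transfer-line machine from the text
MSGS_PER_NETWORK = 5          # assumed, from Table 1 and Fig. 5
BITS_PER_FRAME = 100          # assumed CAN frame size, order of magnitude
BITRATE_BPS = 1_000_000       # CAN bus with 1 Mbit/s

# Total bus time per control cycle spent on execution-control messages.
overhead_ms = NETWORKS * MSGS_PER_NETWORK * BITS_PER_FRAME / BITRATE_BPS * 1000
```

With these assumptions the overhead comes out at 5 ms, i.e. half of the desired 10 ms cycle, matching the paper's estimate.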

4.3 Block Approach

General. A general Function Block model for distributed control systems is under standardization in IEC TC65/WG6. There, a collection of Devices provides distributed Applications, which are built up from Function Blocks (Fig. 9).


Fig. 9. Function Block Model of TC 65 (figure: Applications built from Function Blocks span several Devices of an Industrial Process Measurement and Control System, IPMCS)

A Function Block combines both control and data flow. Any Block (Fig. 10) provides data inputs and outputs, which may be connected to local I/O or to outputs or inputs of other Blocks. The execution of the algorithms contained in the Block Body is controlled by a specific Execution Control mechanism, which is linked to other Blocks via Events (Fig. 10).

Fig. 10. TC 65 Function Block (figure: a Block with Event connections on top and Data inputs and outputs below)

Specification. These Blocks could serve as an implementation model for the application software and could be used for whole Programs (as described in 4.1) or for Program partitions (as in 4.2).

In Fig. 11 a distributed SFC example is shown. The SFC networks are contained in one Function Block and one Action is contained in another Function Block; both run on different Devices.

In this example the Execution Control provides the execution of the SFC networks. Caused by the active Steps inside this FB, Events are generated. These Events have an impact on the Execution Control (which could be an implementation of the Action Control Block) of the FB containing the Action.

Fig. 11. Distributed SFC with FBs (figure: the SFC Function Block with its Execution Control runs on Device A; the Action Body Function Block runs on Device B)

The Event linking allows both asynchronous and synchronous execution of the FBs. In the first case, the Events are used for the Start and Stop of the Action. This behavior is comparable with asynchronously running Programs, as described in 4.1. In the second case, the Action FB may be forced by the SFC FB to execute every time the SFC FB is executed. This synchronization may be refined down to the level of single Steps or Transitions, as described in 4.2.
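A minimal sketch of such event-linked blocks follows; the interface is illustrative and does not reproduce the TC65 draft's actual block model:

```python
class FunctionBlock:
    """Sketch of a TC65-style block: a body algorithm plus an execution
    control that runs the body on an incoming event and then propagates a
    completion event along its event connections."""
    def __init__(self, body):
        self.body = body                  # algorithm: inputs dict -> outputs
        self.inputs, self.outputs = {}, {}
        self._listeners = []              # event connections to other blocks

    def connect_event(self, other):
        self._listeners.append(other)

    def event(self):
        # Execution control: one event triggers one execution of the body,
        # then the completion event is forwarded to the connected blocks.
        self.outputs = self.body(self.inputs)
        for blk in self._listeners:
            blk.event()
```

Chaining two blocks this way mimics the synchronous case above: the Action block executes every time the SFC block does.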

Attributes. Additional advantages are reached if this model is also used as the programming model. This would provide a smooth fit between the programming and implementation models, resulting in better performance and transparency.

5 COMPARISON AND CONCLUSIONS

Essential features of the three approaches are compared in Table 2.

Table 2: Comparison of Approaches

                          Program     Partition    Block
                          Approach    Approach     Approach
Execution Model           asynchr.    synchr.      asynchr./synchr.
Granularity for Mapping   low         high         medium
Decomposition             manual      automatic    manual
Communic. Overhead        low         high         low/high

Taking Programs as Atoms for distribution is based on an asynchronous execution model. This reduces the effort for communication. On the other hand, the granularity for mapping depends on the notation chosen by the user, but is quite low in general. For a use of "many small" Programs, the introduced Network Variables are very helpful for concentrating on functionality rather than on communication aspects. Further work has to be done on the mapping of Network Variables onto different communication profiles. Especially a communication triggered by changed values of variables is of interest. But to avoid specific proprietary solutions, an open and standardized approach is desired.

Breaking an IEC Program into smaller Atoms for distribution is based on a synchronized execution model. Because of the small size of the Atoms, this approach allows a fine-grained mapping. But this is paid for by a high communication overhead, since a great amount of synchronization is caused by the model and not only by the application. A disadvantage is that the reaction time of the whole application is increased by the slowest Atom. This approach may only be useful in combination with the Program Approach: using clusters of asynchronous Atoms (Programs) and synchronized Atoms (Program Partitions).

Because of the general concept of Event connections and Execution Control, the Block Approach allows the modeling of both synchronous and asynchronous execution. This model can be used for describing the Program Approach as well as the Program Partition Approach. This facility makes a combination of both models possible, with the advantages of high granularity and low synchronization need (only where required by the application). But to use these advantages, further investigation and standardization are necessary. Especially a link of the used event model to a message-oriented model is desirable.

6 ACKNOWLEDGEMENTS

The authors would like to acknowledge the fruitful discussions with Heinz-Dieter Ferling and his helpful remarks, as well as the analysis work concerning the IEC SFC execution model by Rudy Belliardi.

REFERENCES

Chapurlat, V. and Prunet, F. (1994). Modular specification, structured analysis and simulation of distributed control system: the ACSY-R model. In: Third International Conference on Automation, Robotics and Computer Vision, Singapore.

Ferling, H.-D. and Jüngst, E.-W. (1987). Embedding the IEC programming languages into an overall systems design methodology. In: Proceedings of the IEEE Workshop on Languages for Automation, pp. 12-15, Vienna.

IEC 1131-3 (1993). International Standard. Programmable Controllers, Part 3: Programming Languages. IEC, Geneva.

IEC TC65/WG6 (1995). Committee Draft. Function Blocks for Industrial-Process Measurement and Control, Part 1: General Requirements. IEC, Technical Committee 65, Working Group 6.

Thielen, H. (1994). Automated Design of Distributed Computer Control Systems with Predictable Timing Behaviour. In: Preprints of the IFAC Workshop DCCS'94, pp. 47-52, Toledo.


Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

DISTRIBUTED HARD-REAL-TIME SYSTEMS: FROM SPECIFICATION TO REALIZATION

L. Carcagno, D. Doors, R. Facca, B. Sautet

Institut de Recherche en Informatique de Toulouse, Université Paul Sabatier, 118 Route de Narbonne, 31062 Toulouse Cedex, France

email : [email protected]

Abstract: Real-time applications cover many promising industrial fields such as aeronautics, robotics, and automotive control. Most of them are inherently complex and safety-critical, and thus involve computer systems that maintain a permanent interaction with their environment. But by using computers in these critical roles, humans become hostages to their good performance. Therefore, such systems must be designed so as to be highly reliable. In this regard, this paper describes a deterministic and realistic approach for large-scale distributed systems to automatically obtain the most suitable machine to process a given hard-real-time application.

Keywords: Distributed design systems, Hard real-time, Methodology

1. INTRODUCTION

A hard-real-time system is defined as a system that must react to stimuli issued from its environment and deliver correct results at intended points in time; otherwise the system fails, with catastrophic consequences. This requirement is complicated by the fact that real-time systems may fail not only because of hardware or software failures, but also because the system is unable to execute its critical workload in time. So the correctness of such a system depends not only on the correctness of its results, but also on meeting stringent timing requirements. Because the probability of failure must be very small, hard-real-time systems must deliver the expected service even in the presence of faults and have to be designed according to the guaranteed-response paradigm (Kopetz and Verissimo, 1993).

Redundancy must be provided in order to make the system fault-tolerant. Hardware redundancy protects against random hardware faults. But redundancy alone does not guarantee fault tolerance. For a redundant system to continue correct operation in the presence of faults, the redundancy must be managed properly (Laprie, 1992).

Guaranteed-response systems are based on the principle of resource adequacy, that is, there are enough computing resources available to handle every possible situation, in particular the specified peak load (Lawson, 1992). The processing step between a stimulus from the environment and the response to the environment is time-constrained. The peak load can be expressed by specifying the minimum time interval between each processing step. If a hard-real-time system is not designed to handle the peak load, it may fail in case of emergency. Resource adequacy implies that the behaviour of the controlling real-time computing system must be predictable. That is, it should be possible to ensure at design time that all the timing constraints of the application will be met, as long as a set of assumptions about the behaviour of the environment are satisfied. So the environment as well as its evolution have to be perfectly determined a priori, and the execution time of the processing step has to be known at design time. In practice it is difficult to obtain exact execution times; several factors make this problem very hard.

The imprecise computation approach (Liu, 1991) is one technique to ensure correct timing behaviour when the complexity of a computation can vary, as in conventional intelligent applications. Because chains of inference leading to conclusions can vary greatly in length, it is extremely difficult to bound the program execution times. Algorithms are usually based on incremental approximation techniques where accuracy is sacrificed for time. In many hard-real-time applications, obtaining an approximate result before the deadline is much better than an exact one after the deadline.
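The imprecise-computation idea can be sketched as an iterative refinement that always holds a usable result when the deadline arrives. A minimal illustration, assuming a hypothetical square-root computation refined by Newton steps (an example of ours, not taken from the paper):

```python
import time

def imprecise_sqrt(x, deadline, estimate=1.0):
    """Refine an estimate of sqrt(x) until the deadline expires;
    whatever has been computed so far is returned on time."""
    while time.monotonic() < deadline:
        estimate = 0.5 * (estimate + x / estimate)  # one refinement step
    return estimate

# With a 1 ms budget the result is approximate but delivered on time.
result = imprecise_sqrt(2.0, time.monotonic() + 0.001)
```

The result's accuracy degrades gracefully with the budget, but the deadline is always respected.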

Distribution is useful to achieve fault tolerance, but it is also essential as far as complex real-time applications are concerned. As a matter of fact, to guarantee that the response times will be met, it is necessary to process the tasks of the system in parallel. But exploiting parallel processors to improve system throughput does not mean that timing constraints will be met automatically. Unless the architecture of the computing system is carefully tailored to match that of the application, the system may not be able to handle all of the task load (Stankovic, 1988). So an application is usually decomposed into a set of cooperating tasks which are then assigned to a set of processors in order to exploit the inherent parallelism in the application's execution. Because architectures must change with a change in applications, architectures based on dedicated hardware and software are neither cost-effective nor well utilized. So it is better to develop a distributed architecture suitable for a broader class of real-time applications.

Other factors besides fast hardware or algorithms determine predictability. The implementation language is one of them. Current practice in real-time programming relies heavily on manual machine-level optimization techniques. These techniques are labor-intensive and tend to introduce timing assumptions about internal instruction sequences on which the correctness of an implementation depends. As the complexity of real-time systems increases, it is necessary to use the programming abstractions provided by high-level languages. But sometimes the language may not be expressive enough to prescribe certain timing behaviour. For instance, the delay statement of ADA puts only a lower bound on when a task is next scheduled (Parrish, 1988). Moreover, there is no language support to guarantee that a task cannot be delayed longer than a desired upper bound (Stankovic, 1988). However, a program's performance must be predictable so that it is possible to verify whether timing constraints can be met.


Most system design and verification techniques are based on abstractions that ignore implementation details. But in real-time systems, timing constraints are derived from the environment and the implementation. Testing is not the right way: it is limited and generally cannot cover the entire set of possible inputs. Moreover, the worst case for time consumption might not be uncovered in testing. So, as far as meeting stringent timing constraints is essential, new approaches must be considered. It is necessary to be able to express timing constraints when specifying the problem, capture them during design, and establish that the realization of the system complies with the specifications.

This project's target is to offer a deterministic and realistic approach for large-scale distributed systems, to automatically obtain the most suitable machine to process a given hard-real-time application.

So there is a need for tools which analyse the task descriptions and determine the tasks' execution times. The maximum execution time must be determined by an off-line analysis of the source code, written in a restrictive high-level programming language. All the constructs provided by the language must be time-bounded. These tools would allow a priori analysis of software timing properties. But an analytical verification of the temporal properties is only possible if the underlying hardware guarantees a predictable temporal behaviour.
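Such an off-line analysis is feasible precisely because every construct is time-bounded: a maximum execution time can then be derived structurally from the source. A toy sketch of the principle, with invented node shapes and costs (not the paper's analysis):

```python
def wcet(node):
    """Structural worst-case execution time for a fragment built only
    from time-bounded constructs (no recursion, bounded loops)."""
    kind = node[0]
    if kind == "stmt":          # ("stmt", cost)
        return node[1]
    if kind == "seq":           # ("seq", [fragments])
        return sum(wcet(c) for c in node[1])
    if kind == "if":            # ("if", then_branch, else_branch)
        return max(wcet(node[1]), wcet(node[2]))
    if kind == "for":           # ("for", iteration_bound, body)
        return node[1] * wcet(node[2])
    raise ValueError(f"unknown construct: {kind}")
```

An unbounded loop or a recursive call would have no such rule, which is exactly why the language excludes them.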

The objective is to develop an architecture that supports the requirements for an engineering approach to the development of reliable real-time systems, and to develop a systematic software design methodology. This approach is close to that of the MARS project (Kopetz, 1991).

The design methodology is based on a distributed synchronous data flow computational model allowing the designer to describe a hard-real-time system without explicitly having to care about timing constraints (Carcagno, 1995). An executable specification language (Carcagno, 1992) and a generic architecture (Feki, 1993) have been derived from the computational model.

In order to build distributed hard-real-time computing systems, a set of development tools has been developed. To assist the designer, a designing tool allows the capture of the model-based description and the timing constraints to be respected. This tool verifies the synchronization mechanisms. But the modular structure of the specifications will surely differ from the modular structure of the implementation. For this reason the tool has been defined so that the designer can concentrate on the logical specification of his problem without being preoccupied with the implementation, which must respect the temporal constraints (data flow rate, response time ...). The tool automatically takes the timing constraints into account to translate the system specification into an implementation configuration which, at the realization level, can be seen as a particular configuration of the generic architecture. The dedicated machine is then obtained by interconnecting hardware components of the generic architecture and loading an executable code, according to the implementation configuration.

After describing the modelling principles, the characteristics of the designing tool R.S.D.T. will be presented. To highlight its functionalities, the different steps of a hard-real-time system design will be illustrated with an example.

2. MODELLING

An application is partitioned or decomposed into a set of computational modules, whose interconnection represents a directed module graph. The inclusion of input data turns the module graph into an acyclic data flow graph (ADFG). The nodes of an ADFG correspond to the computational modules; the data move along the edges, each of which connects a pair of nodes.

Similarly, a distributed computing system can be represented by a directed graph, called the processor graph, where the nodes correspond to the processors and the edges represent communication links between processors. The interconnection of processors, which are configured and arranged based on a functional decomposition of the computational modules to exploit the great potential of pipelining and multiprocessing, provides a cost-effective solution for hard-real-time problems.

Generally such a system is asynchronous. An important issue in hard real time is to know the maximum running time in order to predict a temporal behaviour. But asynchronous systems are not predictable (Berry, 1989). So a distributed synchronous computational model has been defined.

2.1. Computational model

It is a time-triggered computational model, that is, one that reacts to significant external events at pre-specified instants (Kopetz and Verissimo, 1993). The underlying computing model, allowing a deterministic evaluation of the running time, is represented by a structure:

S = (Q, Ξ, s, Γ)

Q is a set of nodes. A node will be called a module. A module exchanges information, either with the outside or with other modules, using input and output ports. It runs cyclically (data input, processing, data output). The processing can only start when all data are available. The cycle period must be short enough to match the data input rate and long enough for the duration of the processing.
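That condition on the cycle period can be written as a single predicate; a minimal sketch with illustrative names (not the paper's notation):

```python
def valid_cycle_period(period, processing_time, input_interval):
    """A module's cycle must be long enough to run its processing and
    short enough to keep up with the rate at which input data arrive."""
    return processing_time <= period <= input_interval
```

Both bounds must hold at once: violating the lower one means the processing overruns the cycle, violating the upper one means input data are lost.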

Ξ is a set of unidirectional communication channels. Each channel connects exactly two modules of the system. A channel carries a stream from an output port of a module to an input port of another one. A stream is defined as a series of values, each of which can be read only once. The successive values of a stream are buffered into a two-element FIFO.
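A sketch of such a channel, assuming (as one plausible policy, not stated in the paper) that writing to a full buffer drops the oldest value:

```python
from collections import deque

class StreamChannel:
    """Unidirectional channel: successive stream values are buffered in
    a two-element FIFO, and each value can be read only once."""
    def __init__(self):
        self._fifo = deque(maxlen=2)   # oldest value dropped when full

    def write(self, value):
        self._fifo.append(value)

    def read(self):
        return self._fifo.popleft()    # consuming read

    def has_data(self):
        return bool(self._fifo)
```

The `maxlen=2` bound mirrors the two-element FIFO of the model; the consuming `read` mirrors the read-once semantics of a stream value.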

s : Ξ × Q → Q is a partial mapping representing the interconnection among the modules. Intuitively, for each module m ∈ Q and channel c ∈ Ξ, s(c, m) (if defined) is the module connected to m via channel c.

Streams are only top-down, so the interconnection function s can be represented by an acyclic graph, which visualizes a partial execution order induced by a relation of timing precedence. In such a graph, the modules of the same row are executed with true parallelism, while modules on the same data path are executed in a pipelined mode.
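The rows of that graph can be computed as precedence depths: modules at the same depth have no ordering constraint between them, while each data path down the graph forms a pipeline. A sketch (the dictionary-of-successors encoding is our assumption):

```python
def rows(successors):
    """Group the modules of an acyclic interconnection graph by
    precedence depth; modules in the same row may run in parallel."""
    preds = {m: set() for m in successors}
    for m, succs in successors.items():
        for s in succs:
            preds[s].add(m)

    depth = {}
    def d(m):          # depth = longest predecessor chain ending at m
        if m not in depth:
            depth[m] = 1 + max((d(p) for p in preds[m]), default=-1)
        return depth[m]

    by_row = {}
    for m in successors:
        by_row.setdefault(d(m), []).append(m)
    return [sorted(by_row[k]) for k in sorted(by_row)]
```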

Γ is a communication channel to broadcast messages for which there is no temporal relation between the production time of the message and its processing time. This channel connects all modules of the system and carries permanent data. A permanent datum is always available and simultaneously present all over the system. It can be read as often as required by the modules that use it, but only the producing module can modify it.

2.2. Description Model

A system description is modular and hierarchical. It is composed of the system interface and body descriptions. The interface between the system and its environment is composed of input and output interfaces.

A system body is described as a set of sub-systems that can communicate through permanent data. A sub-system description can be seen as follows:


A sub-system S is defined as an input system IS connected to a control system CS, which is connected to an output system OS.
An input system IS is a non-empty set of input modules IM. An input module IM can be either a direct input d.i., or a non-empty set of input modules IM connected to a processing module PM.
A control system CS is a processing module PM.
An output system OS is a non-empty set of output modules OM. An output module OM can be either a direct output d.o., or a processing module PM connected to a non-empty set of output modules OM.
A processing module PM can be either an elementary module EM, or a network module NM. An elementary module EM runs a sequential process. A network module NM is a set of processing modules PM interconnected according to a dependence graph.

fig. 1. « System modelling »
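The production rules above can be transcribed directly into data types; a minimal sketch, with field names that are illustrative rather than the paper's notation:

```python
from dataclasses import dataclass, field
from typing import Union

@dataclass
class ElementaryModule:        # EM: runs a sequential process
    name: str

@dataclass
class NetworkModule:           # NM: PMs plus a dependence graph
    modules: list              # of processing modules
    dependences: list          # (producer, consumer) pairs

# PM = EM | NM
ProcessingModule = Union[ElementaryModule, NetworkModule]

@dataclass
class InputModule:             # IM: a direct input, or IMs feeding a PM
    direct_input: str = ""
    feeders: list = field(default_factory=list)
    processor: object = None

@dataclass
class SubSystem:               # S = IS -> CS -> OS
    inputs: list               # IS: non-empty set of input modules
    control: object            # CS: a processing module
    outputs: list              # OS: non-empty set of output modules
```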

2.3. Description language

To describe a system according to the model, the RSDL language has been defined (Carcagno, 1992). It is a synchronous specification and programming language that ensures a deterministic behaviour.

It is a three-level hierarchical language. The first level describes the interface with the environment: the sensors and actuators are defined there, as well as the temporal constraints. The system body, i.e. the decomposition into sub-systems, is also defined at this level. The second level describes the part of the transformational aspect related to the parallelism of the application, using network modules. The last level describes the sequential behaviour of an elementary module. The instructions are ADA-like instructions adapted to the model (no recursion, for loops ...). The only constraint is to declare the input and output characteristics and the timing constraints when describing the system interface. The system's temporal behaviour can be predicted because of the language limitations, so the running times of the modules can be bounded.

Some aspects of the modelling, though they can be textually described, are intuitively visual. Thus, it appeared interesting to allow the user to describe the different levels of the hierarchy of the application, and its structural aspect, directly in a graphical way. The behavioural aspects of the elementary modules are described textually, using sequential code.

3. R.S.D.T. PRESENTATION

Before presenting the characteristics of the designing

tool RSDT, the designing steps are described.

3.1. Designing steps

Using these tools from specification to implementation requires three main steps:

System specification and design
System level compiling
Architecture compiling

System specification and design. From a functional description (structural and behavioural) of the system, and from the temporal characteristics specified in the environment description, a functional description is produced. It can be simulated in V.H.D.L. in order to check its logical correctness. The structure and environment descriptions are graphical, while the behavioural one is textual. When the system is described using the visual language, the characteristics of the external and internal information associated with inputs and outputs are defined with attributes. The textual version of these characteristics is automatically generated.

fig. 2 « System environment and specification »


Different levels of description can be distinguished in the previous figure. The less tinted part corresponds to the system interface. An input module (IM) collects an input and generates the internal information corresponding to the information given in the interface specification. An output module (OM) generates external information suitable for the peripheral or the actuator that uses it, according to the internal information and to the characteristics of the information type acceptable by the output user. Input and output modules are automatically generated by the design aid system.

When defining the system body, all the sub-systems that make it up are described with their links. These links, when they exist, are compulsorily of permanent data type; if they were not, the linked parts would belong to the same sub-system.

fig. 3. « System body structure »

The accordance with the model is interactively controlled along the graphic input. With regard to packages and elementary module bodies, the input is textual, and the syntactic analysis is performed at the designer's request.

Here is an example of an elementary module's R.S.D.L. code:

elementary module INJECTION is
  variable TOPO : BOOLEAN;
  variable TOP : BOOLEAN := FALSE;
  variable TIME_COUNT : TIME;
  init OUVERT_INJ := FALSE;
  loop
    input TOPI, TEMPS_INJ;
    TOPO := TOP;
    TOP := TOPI;
    if OUVERT_INJ then
      TIME_COUNT := TIME_COUNT + PERIOD_INJ;
      if TIME_COUNT >= TEMPS_INJ then
        OUVERT_INJ := FALSE;
      end if;
    elsif TOPO and not TOP then
      OUVERT_INJ := TRUE;
      TIME_COUNT := 0;
    end if;
    output OUVERT_INJ;
  end loop;
end INJECTION;

From the structure described by the designer and the temporal constraints to be respected, indicated in the environment description, a synchroniser determines the flows' temporal attributes, which will be used to calculate the elementary modules' allocated times. Then the synchroniser builds a flow-valuated structure.

Then the system's logical coherence has to be checked. At the elementary module level, this checking can be formal, because it consists in verifying the behaviour of a sequential program; classical methods with associated tools can be run. On the other hand, the validation of the entire system comes under distributed systems modular proof (Audureau, 1990). Tools for validating such systems do not exist yet, so system simulation seems to be the only way.

To perform this simulation without building an R.S.D.L. simulator, the system structure is translated into V.H.D.L. so as to run on existing simulators.

The simulation cannot check whether temporal constraints are met. So the system level compiling handles that matter, and produces a temporally correct functional description.

System level compiling. According to the model, a description is considered temporally correct if every elementary module has a processing time lower than its allocated time. The processing time of every module is calculated from its internal representation (Magnaud, 1990).
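That temporal-correctness test reduces to one comparison per module. A minimal sketch, with an invented input shape, returning the modules that violate the condition (and so, per the text, must be decomposed):

```python
def modules_to_decompose(modules):
    """Return the elementary modules whose processing time exceeds
    their allocated time; an empty result means the description is
    temporally correct.  `modules` maps a module name to the pair
    (processing_time, allocated_time)."""
    return [name for name, (pt, alloc) in modules.items() if pt > alloc]
```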

The modules that do not comply with this temporal condition must be decomposed. The result of the decomposition is a network of modules that matches the model, in which every constituent module meets the temporal constraint. This decomposition is based on a parallelization method that produces just-sufficient parallelism (Magnaud, 1990).

Because the designer's description may not be optimal regarding the number of required elementary modules, a last optimization phase reduces the number of necessary modules (De Michiel, 1994).

Architecture compiling. This phase consists in the production of an architectural description taking into account the hardware constraints of the generic architecture. It is a modular and reconfigurable MIMD architecture (Feki, 1993) that allows the implementation of any system described according to the model.

At the end of the previous phase, the system is represented as an acyclic graph, whose nodes represent elementary modules and whose edges represent precedence constraints among modules. The target architecture has to be configured according to this structure, by producing the processors' executable codes, and by determining tables from which a processor knows, at every cycle, the input to be consumed as well as the output to be produced.

fig. 4. « Design Steps »

The dedicated machine, allowing the application to be performed in real time, will be obtained by interconnecting hardware elements from the generic architecture and loading the executable code according to the deduced configuration.

3.2. Designing Tool

To implement the design steps, the designer handles design primitives through a graphic interface. These primitives make it possible to describe the system, check and validate the description, automatically determine a representation that complies with the specified temporal constraints, and obtain a configuration of the target machine (Carcagno, 1995).

4. CONCLUSION

In the context of hard real-time applications, we have proposed a computational model yielding both determinism and distribution capabilities. It is a distributed synchronous data-flow computational model in which a system is decomposed into concurrent deterministic modules that cooperate in a deterministic way. Deterministic concurrency is the key to the modular development of distributed hard real-time systems.

To specify such applications, we have developed a modular and hierarchical language allowing these applications to be expressed as a distributed network of modules. A specification tool is used to specify applications according to the model using this language, and to verify the flow coherence of the description. An architecture compiler performs transformations of the concurrent code in order to match a target generic architecture.

This work can be considered as a step towards a direct synthesis of distributed hard real-time systems from their specifications.


REFERENCES

Audureau, E. et al. (1990). Logique temporelle : Sémantique et validation de programmes parallèles. Masson.

Berry, G. (1989). Real Time Programming: general purpose or special purpose languages. In: IFIP World Computer Congress.

Carcagno, L. et al. (1992). R.S.D.L: a Real-time System Description Language. I.R.I.T. Technical Report, Toulouse, France.

De Michiel, M. (1994). Recherche de la Configuration Optimisée d'une Architecture cible pour une application Temps-Réel. Doctorate Dissertation, UPS, Toulouse III, France.

Feki, A. (1993). Architecture Parallèle Générique pour la réalisation de systèmes Temps-Réel. Doctorate Dissertation, UPS, Toulouse III, France.

Kopetz, H. et al. (1991). The design of real-time systems: from specification to implementation and verification. Software Engineering Journal, 6(3) (UK).

Kopetz, H. and P. Verissimo (1993). Real Time Dependability Concepts. In: Distributed Systems (Sape Mullender (ed.)), ACM Press, New York.

Lala, J.H. et al. (1991). A Design Approach for Ultrareliable Real-Time Systems. IEEE Computer, vol. 24, no. 5, May.

Laprie, J.C. (1992). Dependability: Basic Concepts and Terminology. In: Dependable Computing and Fault-Tolerant Systems, volume 5, Springer-Verlag, Wien, New York.

Lawson, H.W. (1992). Cy-Clone: An approach to the engineering of resource adequate cyclic real-time systems. Journal of Real-Time Systems, 4, pp. 55-83.

Liu, J.W.S. et al. (1991). Algorithms for Scheduling Imprecise Computations. IEEE Computer, vol. 24, no. 5, May.

Magnaud, P. (1990). Méthodologie et outil de conception d'une architecture parallèle temps-réel. Doctorate dissertation, UPS, Toulouse III, France.

Parrish, L. (1988). Running in real-time: A problem for Ada. Defense Computing, September-October.

Stankovic, J.A. (1988). Misconceptions about Real-Time Computing: a serious problem for next generation systems. IEEE Computer, October.


Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

HEURISTICS FOR SCHEDULING PERIODIC COMPLEX REAL-TIME TASKS IN A DISTRIBUTED SYSTEM

J-P. BEAUVAIS and A-M. DEPLANCHE*

*LAN (URA 823), ECN, Université de Nantes. E-mail: [email protected] and [email protected]

Abstract. The great majority of tasks in a real-time system are periodic. Moreover, a complex task can be modelled by a precedence and communication graph. This paper compares two static heuristic algorithms which try to pre-run-time schedule periodic complex real-time tasks on the sites of a distributed system fully interconnected by a bus. These algorithms have to deal with the periodicity constraints, and with communication and precedence requirements, while imposing no migration of the subtasks (the elementary software instances) between the different periods. They determine the mapping of the subtasks, their schedule on the processors and the schedule of the communications along the bus.

Key Words. Scheduling algorithms, Real-time tasks, Heuristics, Distributed computer control systems.

1. PRESENTATION

The paper focuses on the pre-run-time mapping/scheduling of the logical software components onto the required physical resources in hard real-time systems. It is concerned with three closely related problems:

• task allocation : the allocation of the software objects in the logical architecture to processors of the distributed physical architecture.
• network scheduling : managing the shared resource of the network so as to evaluate message delays.
• processor scheduling : determining the schedule which will ensure that all tasks on all processors will meet their deadlines.

The task computation model we have to deal with is that of a set of periodic hard real-time tasks with deadlines. Due to communication and precedence constraints, a task is defined by a directed acyclic weighted graph. The underlying physical system is modeled as a set of identical processors coupled by a bus-like network with contention. The only resource constraints we consider are thus restricted to the processor and the bus. Few publications on hard real-time scheduling treat scheduling algorithms for distributed systems and take into account communication delays and precedence constraints when a schedule is constructed (Ramamritham, 1990; Fohler and Koza, 1990; Peng and Shin, 1989; Verhoosel et al., 1991; Tindell et al., 1992). Because the allocation and scheduling problem being considered is NP-hard, the algorithms are heuristic in nature. In this paper, two off-line scheduling algorithms are studied and their performances are compared thanks to the results of strictly identical simulation studies. The first algorithm is a classical list algorithm which allocates and schedules the tasks of the global graph one at a time, in a stepwise way. It is guided by heuristic functions taken from (Hwang et al., 1989), where they were applied to a non-periodic, non-real-time system. The second algorithm, inspired by (Ramamritham, 1990), begins by clustering together subtasks that have a "substantial" amount of communication among them, and then assigns the clusters to the sites while determining a feasible schedule for the subtasks as well as for the communications between them. It uses a search technique driven by task characteristics. Since these two algorithms implement two different strategies, our aim is to characterize the configurations each algorithm is the most convenient for. In particular, our intent is to evaluate the effectiveness of a preliminary clustering step.

2. THE MODEL

The model considered is deterministic, i.e. all the hardware and software characteristics are assumed to be known a priori. The underlying distributed system consists of a set P of m sites, P = {pj, j = 1, ..., m}, each with one processor. The sites are identical, i.e. the speeds of all processors are equal. They are fully connected by a multiple access network. It is assumed that the contention-free communication delay does not depend on the distance between sites, but on the amount of exchanged data. However, when two communicating subtasks are allocated on the same site, the communication delay between them is considered negligible and is set to zero. The software system is modeled as a set G = {Gi, i = 1, ..., g} of g periodic hard real-time tasks whose deadline is the end of the period. A task Gi is a directed acyclic weighted graph of period Ti and is defined by a tuple Gi = (Si, Ei, Ti). Si = {sij, j = 1, ..., ni} denotes the set of task nodes or subtasks sij of the graph Gi. Each subtask is valued by its worst-case processing time with no preemption on a processor: pt(sij). The set Ei represents the oriented edges symbolizing the precedence and communication relations which may exist between two subtasks of a task. Ei = {eijl, j = 1, ..., ni, l = 1, ..., ni}, where eijl means that the subtask sij precedes sil. Each edge is weighted by ct(eijl), the communication time incurred along the edge (measured as if the network were dedicated exclusively to the communication between the two subtasks). Because of the periodic behavior of tasks, the study can be restricted to the interval [0, L] where L is the least common multiple of the periods Ti (i = 1, ..., g). Each task Gi gives rise to L/Ti instances in the time interval [0, L]. Its instance number k (k integer, k ∈ [1, L/Ti]) is denoted Gi^k = (Si^k, Ei^k) with Si^k = {sij^k} and Ei^k = {eijl^k}.
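The restriction to [0, L] can be computed directly; a small sketch:

```python
from math import lcm

def study_interval(periods):
    """L is the least common multiple of the task periods; task i then
    contributes L // periods[i] instances in the window [0, L]."""
    L = lcm(*periods)
    return L, [L // T for T in periods]
```

For example, periods of 500 and 1000 give L = 1000, with two instances of the first task and one of the second in the study interval.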

The problem is then to find three applications :

• Subtask allocation: A

A : ∪(i=1..g) Si → P
sij → A(sij)

A(sij) is the processor to which sij is assigned.

• Network scheduling: N

N : ∪(i=1..g) ∪(k=1..L/Ti) ( Ei^k − { eijl^k / A(sij) = A(sil) } ) → ℕ × ℕ
eijl^k → N(eijl^k) = ( st(eijl^k), ft(eijl^k) )

st(eijl^k) and ft(eijl^k) are respectively the start time and the finish time of the communication between sij^k and sil^k.

• Subtask scheduling: S

S : ∪(i=1..g) ∪(k=1..L/Ti) Si^k → ℕ × ℕ
sij^k → S(sij^k) = ( st(sij^k), ft(sij^k) )

st(sij^k) and ft(sij^k) are respectively the start time and the finish time of sij^k on the processor A(sij).

These applications must verify the following constraints, for all i = 1, ..., g, j = 1, ..., ni, k = 1, ..., L/Ti:

• No preemption of a subtask:
ft(sij^k) = st(sij^k) + pt(sij^k)

• No preemption of a communication:
ft(eijl^k) = st(eijl^k) + ct(eijl^k)

• Periodic activation:
st(sij^k) ≥ at(sij^k) = (k−1) × Ti = activation time of sij^k

• Deadline respect:
ft(sij^k) ≤ dt(sij^k) = k × Ti = deadline time of sij^k

These last two constraints must especially be verified for subtasks which have either no predecessor or no successor in the period.

• Precedence constraints taking communication delays into account:
st(sij^k) ≥ max( max{ ft(eilj^k) / eilj^k ∈ Ei^k ∧ A(sij) ≠ A(sil) },
                 max{ ft(sil^k) / eilj^k ∈ Ei^k ∧ A(sij) = A(sil) } )
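For a single subtask instance, the non-preemption, activation and deadline constraints amount to three comparisons; a sketch of the check (precedence and network constraints are omitted for brevity):

```python
def instance_ok(st, ft, pt, k, T):
    """Check one subtask instance k of a task with period T:
    non-preemption (ft = st + pt), activation after (k-1)*T,
    and completion by the deadline k*T."""
    at = (k - 1) * T          # activation time
    dt = k * T                # deadline time
    return ft == st + pt and st >= at and ft <= dt
```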

Fig. 1. A software configuration of two tasks

3. THE ALGORITHMS

3.1. The global graph

The algorithms have to deal with a global graph obtained by adding dummy initial and final subtasks. Dummy edges have been introduced to connect these dummy subtasks, and also to connect the graph instances on two consecutive periods. Fig. 1 shows an example of a task configuration we have to deal with, whereas fig. 2 depicts the global graph that results from it. At this stage, we can define for each subtask:
• the earliest start time of a subtask, which depends on the schedule of the predecessors of the subtask and on its allocation.

• the latest start time of a subtask, lst(sij^k), which depends on the knowledge we can have about the communications between the subtask and the final node.

Fig. 2. A Global Graph

3.2. The list algorithm: LIST


The first algorithm is a classical list algorithm which tries to map and schedule one applicative subtask at a time. Their number is n = Σi (L/Ti) × ni. At each decision point, a subtask is chosen, as well as the processor to which it is allocated. Then the communications received by the subtask are scheduled on the bus. To a decision point corresponds a Current Moment, which is the time at which the decision is taken. To be chosen, a subtask must belong to the ready list, which is composed of the subtasks whose predecessors have completed at the Current Moment. To be chosen, a processor must be free at the Current Moment. Finally, when the ready list is empty, the Current Moment is incremented to the next time which updates the ready subtasks list or the free processors list. It must be noticed that the Current Moment which corresponds to decision points is not a priori the start time of the subtask chosen at this decision point. The choice of the subtask, the choice of the processor and the choice of the schedule of the communications obey heuristic functions inspired by (Hwang et al., 1989). The schedule of the communications is guided by the earliest start time. We have tested two versions of this algorithm:
• the first version (LIST_EST) gives the priority to the smallest earliest start time among the ready subtasks.
• the second version (LIST_LST) gives the priority to the smallest latest start time.
A particularity of this algorithm is that it manages two times in order to have a longer-range vision: the current moment, which is the time at which an allocation decision is taken, and the next moment, which corresponds a priori to the next planned current time (the end of the execution of a subtask or of a period). An allocation decision can be postponed to the next time if the earliest start time of the chosen subtask is greater than the next time. Actually, at this next time, a subtask which could have a smaller earliest start time may become ready.

The pseudo-code of LIST

/* let D_s = the set of predecessors of subtask s */
BEGIN
  FOR EACH s ∈ S : d_s <- |D_s|
  I <- {p1, p2, ..., pm}                  /* free processors */
  A <- {s_i1 | D_si1 = initial node}      /* ready subtasks */
  A' <- ∅                                 /* newly ready subtasks */
  Q <- ∅                                  /* scheduled subtasks */
  F <- ∅                                  /* newly enabled subtasks */
  CM <- 0, NM <- ∞
  WHILE |Q| < n DO BEGIN
    WHILE I ≠ ∅ ∧ A ≠ ∅ DO BEGIN
      IF FIND(s, p) THEN
        es <- max(CM, est(s, p))
        IF es ≤ NM THEN BEGIN
          SCHEDULE_COM_NODE()
          A <- A - {s}, I <- I - {p}, Q <- Q ∪ {s}
          UPDATE_EST_READY_SUBTASKS()
          IF ft(s) ≤ NM THEN NM <- ft(s)
        END
        ELSE EXIT THE INNER LOOP
      ELSE EXIT THE INNER LOOP
    END
    CM <- NM, NM <- NEW_NM()
    REPEAT FOR EACH s, p such that s just finished on p
      I <- I ∪ {p}
      REPEAT FOR EACH s' such that s ∈ D_s'
        d_s' <- d_s' - 1 ; IF d_s' = 0 THEN F <- F ∪ {s'}
    REPEAT FOR EACH s ∈ F
      IF at(s) ≤ CM THEN A' <- A' ∪ {s}
    REPEAT FOR EACH s ∈ A'
      COMPUTE_EST_NEWLY_READY_SUBTASKS()
    A <- A ∪ A', A' <- ∅
  END
END

LIST_EST : FIND(s, p) / est(s) = min over s ∈ A, p ∈ I of est(s), with A(s) = p
LIST_LST : FIND(s, p) / lst(s) = min over s ∈ A of lst(s), and est(s) = min over p ∈ I of est(s), with A(s) = p
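The core loop of such a list scheduler can be sketched in Python as follows. This is a heavily simplified illustration of the LIST_EST policy only: the task graph, durations and two-processor setup are invented, and bus communications are omitted.

```python
# Hedged sketch of the LIST_EST policy: repeatedly pick the ready subtask with
# the smallest earliest start time and place it on the earliest-free processor.
pt = {"a": 3, "b": 2, "c": 4, "d": 1}               # processing times (invented)
preds = {"a": [], "b": [], "c": ["a"], "d": ["a", "b"]}
m = 2                                               # number of processors

free_at = [0] * m        # time at which each processor becomes free
finish = {}              # finish time of each scheduled subtask
start = {}
while len(finish) < len(pt):
    # Ready list: unscheduled subtasks whose predecessors are all finished.
    ready = [s for s in pt if s not in finish
             and all(p in finish for p in preds[s])]
    # est(s) = latest finish time among predecessors (0 if none).
    s = min(ready, key=lambda s: max((finish[p] for p in preds[s]), default=0))
    es = max((finish[p] for p in preds[s]), default=0)
    i = min(range(m), key=lambda i: free_at[i])     # earliest-free processor
    start[s] = max(es, free_at[i])
    finish[s] = start[s] + pt[s]
    free_at[i] = finish[s]

print(start)   # {'a': 0, 'b': 0, 'c': 3, 'd': 3}
```

The LIST_LST variant would only change the `min` key from earliest to latest start time.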

3.3. The clustering algorithm: CLUS

This algorithm, inspired by (Ramamritham, 1990), is divided into two phases: first a clustering phase, whose purpose is to reduce the communications, then the mapping and scheduling phase.


• The clustering phase
This phase, which can be considered as a relative allocation, tries to reduce a priori the load of the bus by gathering communicating subtasks which will later have to be allocated to the same site. Two successive subtasks s_ij and s_il will be clustered if ct(e_ijl)/T_i ≥ CF × max(ct(e_ijl)/T_i). The communication factor CF is a parameter of the algorithm which tunes the aggressiveness of the clustering: if the value of CF is zero, the algorithm tries to gather all communicating subtasks; if its value is one, the algorithm keeps communicating subtasks apart on different sites. We can define the cluster application Cl(s_ij) = the number of the cluster to which s_ij belongs. The clustering application is executed on the initial graphs, and it induces co-residence constraints between subtasks: Cl(s_ij) = Cl(s_il) ⇒ A(s_ij) = A(s_il). The size of a cluster can be computed as the sum of the processing times of the subtasks which belong to it (all instances over [0, L] included). Before merging a subtask into a cluster, we verify that the size of the cluster does not exceed L. According to the rule stating that two subtasks belonging to the same cluster must be allocated to the same processor, the weight of communication links between such subtasks is set to zero (zero(e_ijl) <- true). Likewise, two communicating subtasks separated by a non-zero link must be allocated to two different processors. At this point, we know exactly which communications remain. The algorithm goes on by building the global graph, and then a modified global graph in which communications are considered as ordinary subtasks. In this graph, the nodes represent the applicative subtasks and the communication subtasks; the edges are unweighted and represent precedence relations. At this point, it is possible to compute, for each subtask of the modified global graph, its latest start time.
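The CF rule above can be sketched as follows. The edge weights and periods are invented for illustration, and the union-find representation of clusters is an assumption, not the paper's data structure.

```python
# Hedged sketch of the CF-based clustering rule: an edge is "zeroed" (its two
# subtasks forced onto the same site) when its normalized communication time
# reaches CF times the largest normalized communication time.
edges = {("s11", "s12"): 50, ("s12", "s13"): 10, ("s21", "s22"): 40}
period = {"s11": 1000, "s12": 1000, "s13": 1000, "s21": 400, "s22": 400}

def clusters(cf):
    biggest = max(ct / period[u] for (u, v), ct in edges.items())
    parent = {s: s for s in period}        # union-find forest, one set per subtask
    def find(s):
        while parent[s] != s:
            s = parent[s]
        return s
    for (u, v), ct in edges.items():
        if ct / period[u] >= cf * biggest:  # the clustering condition
            parent[find(u)] = find(v)       # same cluster => same site
    return {s: find(s) for s in period}

print(clusters(0.0))   # CF = 0: every communicating pair is gathered
print(clusters(0.4))   # CF = 0.4: only the heaviest links are zeroed
```

With CF = 0.4 here, the light link (s12, s13) survives, so s13 must go to a different site from s11 and s12.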

• The allocation phase
In this second phase, the allocation and scheduling decisions are taken jointly. A subtask is said to be enabled if all its predecessors are finished. There are two main differences with the previous algorithm: the communications are subtasks in the same way as the applicative subtasks, and at a decision point the algorithm does not map and schedule only one subtask at a time, but tries to map and schedule the maximum number of ready subtasks on the free processors in a single step.
Before beginning the mapping, some verifications can be made in order to know whether the decision point will lead to an unfeasible mapping. After these verifications, the search for a valid mapping of the ready subtasks on the free processors can start. This operation is itself a tree search in which each node is an arrangement of the ready subtasks on the free processors. The exploration is guided by the latest start time of the subtasks, and is stopped when a feasible mapping/schedule is found. Moreover, we must take into account that a decision may leave some processors idle; so, before starting the exploration, we add as many dummy idle subtasks to the ready subtasks list as there are processors and buses. We name the algorithms CLUS_0 (CLUS with CF = 0) and CLUS_4 (CLUS with CF = 0.4).

The pseudo-code of CLUS

/* let D_s (resp. X_s) = the predecessors (resp. successors), subtasks or communications, of s_ij */
BEGIN
  I <- {p1, p2, ..., pm}                  /* free processors */
  I' <- ∅                                 /* newly chosen processors */
  A <- {s_i1 | D_si1 = initial node}      /* ready subtasks */
  A' <- ∅                                 /* enabled ready subtasks */
  Q <- ∅                                  /* finished subtasks */
  Q' <- ∅                                 /* newly scheduled subtasks */
  Q'' <- ∅                                /* running subtasks */
  time <- 0, CF <- CF0
  CLUSTERING()
  COMPUTE_LATEST_START_TIME()
  WHILE |A| ≠ 0 ∨ |I| ≠ 0 ∨ (|A| = 0 ∧ |A'| ≠ 0) DO BEGIN
    IF NOT(THE_DECISION_POINT_IS_UNFEASIBLE()) THEN
      SORT_A_IN_DECREASING_ORDER_OF_LST()
      IF FEASIBLE_MAPPING_FOUND() THEN
        SCHEDULE(Q', I')
        A <- A - Q', I' <- ∅, Q'' <- Q'' ∪ Q'
        /* update time */
        time <- min( min over s ∈ A' of at(s), min over s ∈ Q'' of ft(s) )
        /* update lists */
        I <- {p ∈ P | ft(p) ≤ time}
        REPEAT FOR EACH s ∈ Q'' with ft(s) = time DO
          Q <- Q ∪ {s}, Q'' <- Q'' - {s}
          REPEAT FOR EACH e ∈ X_s DO A' <- A' ∪ {e}
        REPEAT FOR EACH s ∈ S - {Q ∪ A} DO
          IF (∀ s' ∈ D_s, s' ∈ Q) THEN A' <- A' ∪ {s}
        REPEAT FOR EACH s ∈ A' DO
          IF st(s) ≤ time THEN
            A <- A ∪ {s}, A' <- A' - {s}
      ELSE EXIT THE LOOP
    ELSE EXIT THE LOOP
  END
END

4. RESULTS OF SIMULATIONS

4.1. The task set generator

We have taken great care to randomly generate the different configurations. Many parameters are taken as inputs: the number g of tasks in the configuration varies from 2 to 6; the number n_i of subtasks in a task G_i is drawn randomly between two values, 6 or 10; the periods are either all equal to 1000, or alternately equal to 400, 1000 and 500, so that L in this case equals 2000; the maximum number of subtasks over L, n, is equal to 220; the laxity, which measures the ratio between the size of the period and the amount of processing time, varies from 0.9 (severe) to 1.6; the graph types of the tasks are chain (each node has one predecessor and one successor), tree (one successor), rtree (one predecessor) or general; the ratio between the communication times and the processing times varies from 0.2 to 1.0.
From these inputs, we can compute the average processing time of the subtasks of a task G_i: pt_ave = T_i/(laxity × n_i). The processing times of the subtasks of a task G_i have a uniform distribution over [pt_min, pt_max], where pt_min = 0.66 × pt_ave and pt_max = 1.33 × pt_ave. The communication times in a task G_i then have a uniform distribution over [com_ratio × pt_min, com_ratio × pt_max]. Obviously, if a subtask has a latest start time, computed without taking the communications into account, smaller than the beginning of its period, the generated software configuration is rejected.
Because of the great number of parameters in the generation, and in order to allow a precise statistical analysis of the results, we have generated a large number of software configurations (11258), each tested on 2, 3, 4, 5 and 6 processors when the system is not a priori overloaded. In spite of these verifications, the generator can produce unfeasible configurations; but the decision problem of knowing whether a configuration is feasible, i.e. respects the constraints, is NP-complete, so it is always very difficult to obtain a good estimation of the success ratio. To get a reasonably good estimation, we have employed one version of the clustering algorithm in an incremental manner: we repeated this algorithm, varying the factor CF from 0 to 1 in steps of 0.1, until a schedule was found. In our study, a configuration is said to be unfeasible if none of the tested algorithms has been able to find a solution.
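A minimal sketch of this generator, under the distributions stated above; the function name and parameter values are examples, not the authors' code.

```python
import random

# Hedged sketch of the random task generator: processing times uniform in
# [0.66, 1.33] x pt_ave with pt_ave = T_i / (laxity * n_i), and communication
# times scaled by com_ratio.
def generate_task(period, n_sub, laxity, com_ratio, rng):
    pt_ave = period / (laxity * n_sub)
    lo, hi = 0.66 * pt_ave, 1.33 * pt_ave
    pts = [rng.uniform(lo, hi) for _ in range(n_sub)]            # processing times
    cts = [rng.uniform(com_ratio * lo, com_ratio * hi)           # communication
           for _ in range(n_sub - 1)]                            # times (chain)
    return pts, cts

rng = random.Random(0)  # seeded for reproducibility
pts, cts = generate_task(period=1000, n_sub=10, laxity=1.0, com_ratio=0.5, rng=rng)
print(len(pts), len(cts))                      # 10 9
print(all(66.0 <= p <= 133.0 for p in pts))    # True: pt_ave = 100 here
```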

4.2. The results

A great amount of data has been collected; only the main results are presented here. Let the success ratio (%) be R = (number of schedules found by the algorithm) / (number of schedules found by at least one algorithm).
• Globally, CLUS_0 > LIST_EST > LIST_LST >> CLUS_4. CLUS_4 is markedly worse because it produces too many clusters and exclusion constraints (table 1).

Table 1. The success ratio (%): R

m   CLUS_0   LIST_EST   LIST_LST   CLUS_4
2   88.74    82.18      77.20      27.72
3   82.83    79.12      75.83      51.37
4   79.45    74.11      70.84      55.08
5   83.00    65.37      62.48      50.92
6   89.62    60.33      57.66      48.67
R   84.86    68.50      65.41      49.38

• The efficiency of LIST_EST and LIST_LST decreases when the com_ratio increases (fig. 3). Actually, the LIST algorithms are inclined to start by distributing the subtasks over a maximum number of processors. This forces communications to be managed all along the scheduling process, when they could have been set to zero. The consequences of this behavior become more severe as the communication times of the links between subtasks grow. This is confirmed by the fact that the success ratio is better for LIST when the graph is a tree.
• The CLUS algorithm has some difficulties when the laxity is severe (fig. 4). CLUS tries to gather the maximum number of subtasks on a minimum number of processors; it does not sufficiently exploit the possible parallelism when it should, i.e. when the laxity is severe. The two preceding factors produce a combined effect (fig. 5).
• Moreover, CLUS has great difficulty finding a solution when the number of tasks is greater than the number of processors. Actually, CLUS tries to build one cluster per task, the size of each cluster being quite large; mapping these clusters on a hardware configuration with fewer processors than clusters is difficult (fig. 6).

Table 2. The processing time (ms) when the algorithm is successful

n     CLUS_0   LIST
20    20       2
40    152      5
60    637      7
80    1077     14
100   3365     20
120   2854     38
140   5111     38
160   7755     61

• Table 2 gives the processing time when the algorithm is successful. It clearly shows that LIST_EST (LIST_LST is similar) is much faster than CLUS_0. CLUS_0 is penalized by the clustering phase and by its search, at each step, for a mapping of the ready tasks on the free processors. The times have been measured on a Sun SPARCstation 10.

5. CONCLUSION

The aim of our study is to compare different heuristic strategies for the allocation of periodic hard real-time tasks, defined as a precedence and communication graph, on a distributed system. Moreover, we consider a communication network consisting of a bus with contention. Besides the temporal constraints, our model imposes that a subtask must always execute on the same site over the different periods. The fact that the tasks are hard real-time tasks leads us to verify that the produced mapping respects the temporal constraints of the system. At the moment, the only way to prove that a valid schedule exists for a mapping is to build one. The problem is well known to be NP-hard, and that is why we have employed two heuristics: a classical list algorithm and an algorithm with a pre-clustering phase. Thanks to the large size of our random sample, we show that, most of the time, introducing a clustering phase to reduce the communication cost before the mapping and scheduling phase gives better results, but is a much more costly method. Other, finer results are available to characterise the conditions under which the more classical list algorithm is better. Our purpose is now to apply other, less classical methods, such as simulated annealing and genetic algorithms, by turning the problem into an optimization one, for example by searching for a solution that minimizes the lateness of the system.

6. REFERENCES

Fohler, G. and Koza, C. (1990). Scheduling for Distributed Hard Real-Time Systems using Heuristic Search Strategies. Research Report no. 12/90, pp. 1-20. Institut für Technische Informatik, Technische Universität Wien, Austria.

Hwang, J-J., Chow, Y-C., Angers, F.D. and Lee, C-Y. (1989). Scheduling Precedence Graphs in Systems with Interprocessor Communication Times. SIAM Journal on Computing, vol. 18, no. 2, pp. 244-257.

Peng, D.T. and Shin, K.G. (1989). Static Allocation of Periodic Tasks with Precedence Constraints in Distributed Real-Time Systems. 9th Int. Conf. on Distributed Computing Systems, pp. 190-198.

Ramamritham, K. (1990). Allocation and Scheduling of Complex Periodic Tasks on Multiprocessors. 10th IEEE Int. Conf. on Distributed Computing Systems, pp. 108-115.

Tindell, K.W., Burns, A. and Wellings, A.J. (1992). Allocating Hard Real-Time Tasks: An NP-Hard Problem Made Easy. The Journal of Real-Time Systems, vol. 4, no. 2, pp. 144-165.

Verhoosel, J.P.C., Luit, E.J. and Hammer, D.K. (1991). A Static Scheduling Algorithm for Distributed Hard Real-Time Systems. The Journal of Real-Time Systems, vol. 3, no. 3, pp. 227-246.

Fig. 3. success_ratio = f(com_ratio), plotted per number of sites from 2 to 6 (curves: CLUS_0, CLUS_4, LIST_EST, LIST_LST).

Fig. 4. success_ratio = f(laxity), plotted per number of sites (curves: CLUS_0, CLUS_4, LIST_EST, LIST_LST).

Fig. 5. success_ratio = f(com_ratio) with laxity = 1.0, plotted per number of sites (curves: CLUS_0, CLUS_4, LIST_EST, LIST_LST).

Fig. 6. success_ratio = f(nb_tasks), plotted per number of sites (curves: CLUS_0, CLUS_4, LIST_EST, LIST_LST; the 2-site panel is marked 'not significative').


Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

ALPHA MESSAGE SCHEDULING FOR OPTIMIZING COMMUNICATION LATENCY IN DISTRIBUTED SYSTEMS

Ludmila Cherkasova and Tomas Rokicki

Hewlett-Packard Labs, 1501 Page Mill Road

Palo Alto, CA 94303, USA

Abstract. Evaluation of interconnect performance generally focuses on fixed-size packet latency as a function of traffic load. To an application, however, it is the latency of variable-length messages, rather than the individual packets, that is important. We discuss how scheduling the packets of messages according to various strategies can lead to effective performance differences of more than a factor of three. We present a new scheduling technique, called alpha scheduling, that combines the bandwidth-fairness of round robin scheduling with close to the optimal performance of shortest-first scheduling. We demonstrate our results with a simple simulation model.

Key Words. Scheduling algorithms, distributed systems, interconnection networks, performance analysis

1. INTRODUCTION

It is widely accepted that distributed processing is the high-performance computing paradigm of the future. High performance systems are increasingly constrained by the propagation delay and low impedance of interconnect systems (Holden and Langsford, 1990; Tanenbaum and van Renesse, 1985; Schroder-Preikschat, 1990). Minimizing the latency for short messages improves the scalability of applications. Bringing the network bandwidth closer to that of the CPU memory system reduces the cost of communications.

Packet-switched interconnect hardware transmits fixed-size packets, while most operating system inter­faces provide for the transfer of variable-length mes­sages. Each message is therefore broken down into some number of packets for transmission by the hard­ware. In this paper, we consider scheduling strategies for the insertion into the interconnect hardware of the packets that comprise a message. Our primary re­sult is that using an appropriate strategy to insert the packets comprising messages into an interconnect can have a tremendous impact on the performance of that interconnect. Indeed, suitable selection of such a strategy can increase the effective performance of the interconnect by a factor of two or three over naive FIFO or round robin packet insertion.

2. ASSUMPTIONS AND INVESTIGATION

We consider the tasks to be a set of messages that arrive to be delivered; the scheduling task is ordering the packets of the messages in such a way that average message latency is reduced while guaranteeing the delivery of all messages. We will assume that there is some finite number of applications sending messages, and each application sends a new message only after the previous message has been delivered. (If an application can send a finite number of messages concurrently, it is easily modeled by that many component applications.) We only consider the queue latency, since this is the wasted time we can control through scheduling.

Questions we consider are:

• Should packets from multiple messages be in­terleaved? At what grain should interleaving be performed (per packet, every ten packets, etc.)?

• Is there a trade-off between average latency, fair­ness, and guaranteed delivery? Can this trade­off be controlled?

• How does the resulting scheduling strategy com­pare with classic scheduling strategies?

3. FIFO SCHEDULING

The simplest scheduling strategy is first-in, first-out. With such a strategy, starvation is impossible; each message from each application will eventually be delivered. The maximum time waiting in the queue is proportional to the sum of the lengths of the messages in the queue. To help keep this within a reasonable bound, messages longer than a particular size (perhaps 100K bytes) can be broken up into smaller messages.

FIFO scheduling is not fair; for two fast applications each submitting messages continuously, the application that submits the longer messages will get a proportionately higher share of the bandwidth. The average latency is also not optimal. If one application submits a long message immediately before another application submits a short message, then the short message will be delayed; the best average latency in this case is to schedule the shorter message first. FIFO is cheap to implement, requiring the least computation by the host processor or interface board. Finally, short control messages can be delayed by the entire contents of the message queue at the time they were submitted.

4. ROUND ROBIN

Another scheduling strategy is to iterate through the messages currently in the message queue, interleaving packets from outstanding messages. If we assume that new packets are inserted at the end of the message queue, then this strategy is maximally fair; each application with an outstanding message will receive the same share of the bandwidth. It also guarantees delivery, since there are a finite number of applications. In this case, the maximum delivery time is proportional to the length of the message multiplied by the number of applications; this is a better result than for FIFO, and there need be no upper limit on the message size.

The average latency is not optimal; interleaving a short and a long message delays the short message by about a factor of two without changing the latency of the longer message. The worst-case average latency is when the final packets for all messages are sent at approximately the same time; this is possible with round robin scheduling. If all messages are about the same length, the average message latency is twice as bad as the optimal value. The increase in latency for a one-packet control message is proportional to the number of applications, and this is much better than the FIFO scheduling strategy.
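The interleaving effect described above can be checked with a toy packet-level simulation. Unit service time per packet, the message lengths and the policy names are assumptions made for illustration.

```python
from collections import deque

# Hedged illustration: round robin interleaving of a 2-packet and a 10-packet
# message delays the short message (here 3 vs 2 time units; for longer short
# messages the ratio approaches two) without helping the long one.
def finish_times(lengths, policy):
    remaining = {i: n for i, n in enumerate(lengths)}
    done, t, order = {}, 0, deque(remaining)
    while remaining:
        # "rr" rotates through messages; "sf" re-picks the shortest each packet.
        i = order[0] if policy == "rr" else min(remaining, key=remaining.get)
        if policy == "rr":
            order.rotate(-1)
        t += 1                      # one packet on the wire per time unit
        remaining[i] -= 1
        if remaining[i] == 0:
            done[i] = t
            del remaining[i]
            if policy == "rr":
                order.remove(i)
    return done

print(finish_times([2, 10], "rr"))   # {0: 3, 1: 12}
print(finish_times([2, 10], "sf"))   # {0: 2, 1: 12}
```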

Another issue with round robin is that the 'current message' is constantly changing with every packet. Depending on how access to the actual message body is done, this can have negative effects on performance. Round robin will strain a finite buffer pool used to store the messages on an interface board. Finally, each message has a certain amount of state that will constantly need to be switched. If we are sending more than a million packets a second, these state switches might have a large negative impact.

A minor variant of this round robin strategy is to insert new packets at the front of the message queue. In this case, messages of only one packet go out 'immediately'. Even in this variant, short messages of length two or more suffer in latency. In addition, always inserting the short packets at the front of the queue allows a few applications generating many short messages to indefinitely starve a long message.

5. SHORTEST FIRST

Another scheduling strategy is shortest message first. In this case, shorter messages are always sent before longer messages. If a message arrives that is shorter than the remaining portion of the message currently being sent, then that latter message is preempted and the shorter message sent instead.

This strategy is optimal with respect to average latency. Given an ordered set of tasks t_i, each of length l_i for 0 ≤ i < n, the average delay for all tasks is

    (1/n) × sum over 0 ≤ i < n of (n - i) × l_i

because each task delays the (n - i) tasks behind it (itself included) by the amount of time necessary to finish that task. This weighted sum is minimized if the tasks are sorted by increasing l_i, i.e. shortest first.
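A quick numeric check of this claim, enumerating every order of four invented task lengths:

```python
from itertools import permutations

# Total waiting time sum_i (n - i) * l_i: task i contributes its length to its
# own completion time and to the (n - i - 1) completions after it.
def total_delay(lengths):
    n = len(lengths)
    return sum((n - i) * l for i, l in enumerate(lengths))

lengths = [5, 1, 3, 2]
best = min(permutations(lengths), key=total_delay)
print(best)               # (1, 2, 3, 5): shortest first is the minimizer
print(total_delay(best))  # 21
```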

Unfortunately, this scheduling strategy is subject to starvation; two applications that constantly schedule short messages can starve an application with a pending longer message. Because of this, the algorithm is also unfair.

These different strategies are summarized below.

Strategy         Starvation   Fair   Latency
FIFO             No           No     Poor
Round Robin      No           Yes    Moderate
Shortest First   Yes          No     Optimal

6. ALPHA SCHEDULING STRATEGY

We propose a scheduling strategy that lies between FIFO and shortest-first, based on the value of a coefficient. The messages are stored in a priority queue. Three parameters control the ordering of messages in the queue:

• The node parameter c is a "clock" that starts at zero and increments for each packet inserted into the interconnect through the current node. It is easy to keep this value bounded without changing the scheduling solution, as we shall see.

• The message parameter l is the number of packets in the message that have not yet been sent. Initially this is just the length of the message. As each packet is sent out, the message priority is decremented by α to keep the head message priority up to date. Another strategy is to recalculate the head message priority before preempting it during the scan for insertion of a new message.

• The tuning parameter α controls the balance between fairness and latency minimization; it can range from 0 to ∞.


Messages are inserted into the delivery queue with a priority of c + αl. Messages with the lowest priorities get delivered first. A new message inserted into the queue with a priority lower than that of the sending message preempts the sending message.

If α = 0, then this strategy is simply FIFO. If α = ∞, then this strategy is simply shortest-first, which is optimal for latency. If α = 1 or some other finite positive value, then the strategy will not allow any single application to be delayed indefinitely by the other applications, no matter what their message streams look like. Larger α provides better average latency; smaller α provides better fairness. The Alpha algorithm is tunable in favor of minimizing the latency of short messages.
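A minimal sketch of this priority rule using a binary heap. The arrival pattern is invented, and re-inserting the message after every packet is one possible way to realize preemption; the paper's actual implementation may differ.

```python
import heapq

# Hedged sketch of alpha scheduling: each message gets priority c + alpha*l on
# arrival (c = packet clock, l = remaining length); the lowest-priority message
# sends the next packet and can be preempted by a cheaper newcomer.
def alpha_schedule(arrivals, alpha):
    """arrivals: list of (arrival_packet_clock, name, length_in_packets)."""
    c, heap, finish, i = 0, [], {}, 0
    arrivals = sorted(arrivals)
    while heap or i < len(arrivals):
        while i < len(arrivals) and arrivals[i][0] <= c:
            t, name, l = arrivals[i]
            heapq.heappush(heap, (t + alpha * l, name, l))
            i += 1
        if not heap:
            c = arrivals[i][0]            # idle until the next arrival
            continue
        prio, name, l = heapq.heappop(heap)
        c += 1                            # one packet on the wire
        if l == 1:
            finish[name] = c
        else:                             # priority drops by alpha per packet
            heapq.heappush(heap, (prio - alpha, name, l - 1))
    return finish

# A 20-packet message at clock 0, a 2-packet message arriving at clock 1:
print(alpha_schedule([(0, "long", 20), (1, "short", 2)], alpha=10))
print(alpha_schedule([(0, "long", 20), (1, "short", 2)], alpha=0))  # FIFO-like
```

With α = 10 the short message preempts and finishes at clock 3; with α = 0 it waits behind all twenty packets of the long message.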

7. SIMULATION AND RESULTS

In this section, we present the results of simulating these different message queue management strategies. Our simulation consists of three main components: a simplified model of the interconnect, an instantiation of the queue and its strategy, and a model to generate messages from a specific traffic pattern. We describe each in turn.

Since we are only interested in the impact of message scheduling, we simplified our model of the interconnect to be a service queue with an average delay of one. This is the default time unit for our simulation. For the probability distribution function, we use the sum of a constant 0.5 plus a negative exponential with an average of 0.5, to reflect the fact that the port has a specific maximum bandwidth and that the dead time between packets can vary greatly.
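This per-packet service model can be sampled directly; the seed and sample count below are arbitrary.

```python
import random

# Service time = constant 0.5 + exponential with mean 0.5, so the average is
# 1.0, the simulation's time unit, and no packet takes less than 0.5.
rng = random.Random(42)
samples = [0.5 + rng.expovariate(1 / 0.5) for _ in range(100_000)]
mean = sum(samples) / len(samples)
print(round(mean, 2))   # close to 1.0
```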

Simulating the queue is straightforward for each of the strategies. We construct the simulation in such a way that we can use the same message generator and connect it to many different queues and packet acceptor models, in order to run many different parameters in parallel.

As a default traffic distribution, we consider 10% of the messages to be long, 20-packet messages, and the remaining 90% are from 1 to 5 packets in length. The average message length is therefore 4.7 packets. Given a traffic density u between zero and one, we generate new messages using a negative exponential distribution with a mean interarrival time of 4.7/u. The final simulator model has three inputs. The first input is the traffic load to use. The second input is a list of strategies to consider; the model includes both variants of round robin, and all possible values of α. Since FIFO corresponds to an α of zero and shortest-first corresponds to a very large α, these two strategies are implicitly included in the possibilities. The third input is the message length distribution to use. We collect statistics and report several different parameters for each run, such as the queue length in packets, the latencies for each of the different classes of messages, etc.
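A sketch of this traffic generator; the function and variable names are invented, and the seed is arbitrary.

```python
import random

# Default traffic model: 10% of messages are 20 packets, 90% are uniform on
# 1..5, so the mean length is 0.1*20 + 0.9*3 = 4.7 packets; interarrival times
# are exponential with mean 4.7/u for traffic density u.
def next_message(u, rng):
    length = 20 if rng.random() < 0.1 else rng.randint(1, 5)
    interarrival = rng.expovariate(u / 4.7)
    return length, interarrival

rng = random.Random(1)
lengths = [next_message(0.9, rng)[0] for _ in range(100_000)]
print(round(sum(lengths) / len(lengths), 2))   # close to 4.7
```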

Our primary results are summarized here.

• The effects of message scheduling increase with traffic load.

• Round robin and FIFO scheduling can always be out-performed with a judicious selection of the α parameter. A value of 10 will outperform both round robin and FIFO scheduling for traffic loads up to and including 98% utilization for our traffic load. Other traffic loads show similar results.

• The α parameter trades long-message latency for short-message latency. Higher α gives better short-message latency and better average latency; lower α decreases the worst-case message latency.

• Heavier traffic requires larger α to obtain near-optimal average latency.

• An α of 10 works well over a wide range of workloads and utilizations.

We ran the simulation for the two variants of round robin and for α values of 0, 1, 2, 3, 6, 10, 20, 30, 60, 100, and 1000000. In addition to the results illustrated here, we also ran many tests with various different message length distributions. While the numbers varied, the conclusions drawn remain the same.

Fig. 1. Traffic versus overall latency as a fraction of optimal latency (traffic load from 0 to 100% on the horizontal axis; latency/optimal from 1.00 to 4.00 on the vertical axis).

Figure 1 shows how average message latency changes with traffic load for the various strategies. Because of the large variance in latency for the different traffic loads and strategies, we adopt the shortest-first average latency value as the vertical unit, and plot the other strategies as a fraction of that value, yielding the graph shown in figure 1. In this graph, curves closer to the horizontal line y = 1 reflect more desirable latencies.

These results are also summarized in the table below, showing overall latency for various injection strate­gies and workloads.

Traffic Load     50     70     80     90     95
Round Robin      8.6    14.5   21.8   43.8   88.8
FIFO             9.6    16.8   25.9   53.0   108.3
α = 1            8.2    15.1   24.2   51.5   107.3
α = 3            7.1    12.1   19.9   45.7   100.8
α = 10           6.8    10.0   14.3   31.4   77.2
α = 30           6.8    9.7    12.9   21.6   45.1
α = 100          6.8    9.7    12.7   20.5   34.6
Shortest First   6.8    9.7    12.7   20.5   34.4

Consider the strategies under a 50% traffic load. The optimal, shortest-first, gives an overall average queue latency of 6.8 time units. The Round Robin strategy yields an average of 8.6 time units, making the delay 26% higher than optimal. This is a large decrease in performance for such light traffic. The FIFO strategy yields an average of 9.63 time units, 42% worse than shortest-first. Even a low α value such as 1 yields an average time of 8.24, beating Round Robin and FIFO. An α value of only 10 yields an average time of 6.82, within one percent of optimal.

Now let us consider a traffic load of 95%. The optimal in-queue average latency is 34.4 time units. The Round Robin strategy yields an average of 88.8 time units, an increase in delay of 158%. FIFO yields a poor 108.3 time units, more than three times longer than optimal. An α of 10, with an average delay of 77.2, beats the FIFO and Round Robin strategies handily. An α of 30 yields an average delay of 45.1, within 24% of optimal. An α of 100 yields an average delay of 34.6, within 1% of optimal. In general, apart from fairness considerations, for all message lengths and traffic densities, Round Robin is always slower than shortest-first, and usually significantly slower. On the other hand, Round Robin is always faster than FIFO for short messages, and always slower than FIFO for long messages.
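The percentages quoted above follow directly from the latency table:

```python
# Relative slowdown = (strategy latency / shortest-first latency) - 1,
# using the values from the table above.
sf_50, rr_50, fifo_50 = 6.8, 8.6, 9.63
print(round(100 * (rr_50 / sf_50 - 1)))    # 26  (Round Robin at 50% load)
print(round(100 * (fifo_50 / sf_50 - 1)))  # 42  (FIFO at 50% load)

sf_95, rr_95 = 34.4, 88.8
print(round(100 * (rr_95 / sf_95 - 1)))    # 158 (Round Robin at 95% load)
```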

At high traffic loads, low α values behave more like FIFO than like shortest-first. Heavier traffic requires larger α to obtain near-optimal average latency. An α of 10 works well over a wide range of workloads and utilizations.

Figure 2 shows the impact of message length on message latency for a traffic rate of 95%.

At 95% traffic, the impact is very pronounced. In this


Fig. 2. Message length vs latency for 95% traffic (message size on the horizontal axis; latency on the vertical axis; curves: RR, FIFO, alpha = 1, 3, 10, 30, and SF).

case, for one-packet messages, average in-queue la­tency for Round Robin was 1 8.94, while for shortest­first it was only 1 .38, more than ten times faster. For twenty-packet messages, Round Robin yielded an av­erage in-queue latency of378.6; this was much worse than even shortest-first with an average of257 .8 . (The messages reflected in this graph have lengths of 1 , 2, 3, 4, 5, and 20 packets, so there are no data points between 5 and 20.)

The graph shows that increasing the traffic load requires α to increase in order to stay close to the optimal throughput. At high traffic loads, low α values behave more like FIFO than like shortest-first. We next give a quantitative analysis of the adverse effects of a too-large α, which will allow us to better understand the trade-offs associated with this parameter.

8. AN ANALYSIS OF LARGE ALPHA

Our scheduling algorithm ensures several invariants, independent of α.

First, for finite α and message lengths, if the queue does not grow without bound, every message is eventually delivered. That is, starvation cannot occur. This is because c increases with each packet sent, so eventually every new message will be inserted in the priority queue after a given message.

Secondly, if an application submits a message of length l at time c, no message of length l' ≥ l that is submitted after the first message can slow down its delivery. This property does not hold for round robin. It means that for every message size, any α provides FIFO scheduling among messages of that size.

More generally, no message of length l' submitted more than α(l − l') time units after c can slow down delivery of the original message. Messages that are much smaller than l therefore have a longer window in which they can be submitted and enter the queue ahead of the original message than do messages that are close to l in size.

If we do not limit the number of applications, or the number of outstanding messages from any given application, any number of messages might be inserted at a given point in time. Therefore, there is no upper bound on the maximum delay that might be incurred due to optimal scheduling. We can compare the situation to a FIFO queue, however. We can associate a meaning with the value slop = α(l − 1). That meaning is 'enter me into a FIFO queue, but you may pretend that I entered as late as the current time plus slop if it will improve the average latency'.

Thus, our scheduling algorithm optimizes the average latency such that no message's slop constraint is violated. A higher slop allows the average latency to be closer to optimal.
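The slop interpretation can be made concrete with a small sketch. Assume, consistently with slop = α(l − 1) above, that a message's priority is its submission counter c plus α times its length l; the function and variable names here are illustrative, not the paper's notation.

```python
import heapq

ALPHA = 10  # small ALPHA behaves like FIFO, large ALPHA like shortest-first

def submit(queue, c, length):
    """Enqueue a message with priority = submission counter + ALPHA * length."""
    heapq.heappush(queue, (c + ALPHA * length, c, length))

q = []
submit(q, 0, 20)  # long message at c = 0: priority 200
submit(q, 5, 1)   # short message a little later: priority 15
submit(q, 0, 1)   # short message at c = 0: priority 10

order = [heapq.heappop(q)[2] for _ in range(3)]
print(order)  # [1, 1, 20]: the short messages overtake the long one within its slop
```

Under this formula, a message of length l' submitted later than c + α(l − l') gets a priority strictly greater than c + αl, which is exactly the no-slowdown guarantee stated above.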

This raises the question of how much of the slop is generally taken advantage of during a run. Our simulation results show that on average only long messages are slowed down, as we would expect. What is surprising is how little long messages are slowed down, even with high α and therefore high slop values. For instance, at a traffic rate of 95% with an α value of one million, the average queue wait time of long messages was 264.23 time units, versus 122.30 for FIFO. This is the worst case observed during our entire simulation. While long messages were delayed by roughly a factor of two, the overall average message latency decreased by more than a factor of three.

9. USING ALPHA AS A PRIORITY SCHEME

The discussion so far has assumed that α is a constant controlled by the queue manager, perhaps varying slowly over time but roughly the same for all messages entering during any short interval. It is also possible to use α on a per-message basis as a rough priority indicator; messages with high α can be displaced by messages with low α.

10. ENSURING IN-ORDER DELIVERY OF MESSAGES

One possible objection to the use of a scheduling algorithm such as alpha scheduling is that messages sent from the same application might arrive out of order. (We assume the interconnect provides some facility for ensuring that the packets from a single message arrive in order where this is important.) This difficulty is easily resolved by associating with each application a field that stores the priority of the most recently sent message. With this field, it is a simple matter to ensure that the priorities of successive messages from the same application form an increasing sequence, and thus will be delivered in order. This field can be reset to zero any time the application submits a message when there is no outstanding message from that application in the queue.
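A minimal sketch of this bookkeeping follows. The priority formula (submission counter plus α times message length) and all names are illustrative assumptions; the per-application monotonicity and the reset-to-zero logic follow the text.

```python
import heapq

ALPHA = 10

class InOrderQueue:
    """Alpha-style queue that forces priorities from one application to be
    strictly increasing, so its messages are delivered in order."""
    def __init__(self):
        self.heap = []
        self.last = {}      # app -> priority of its most recently submitted message
        self.pending = {}   # app -> number of its messages still queued

    def submit(self, app, c, length):
        if self.pending.get(app, 0) == 0:
            self.last[app] = 0                 # reset: no outstanding message
        prio = c + ALPHA * length              # assumed priority formula
        prio = max(prio, self.last[app] + 1)   # never overtake an earlier message
        self.last[app] = prio
        self.pending[app] = self.pending.get(app, 0) + 1
        heapq.heappush(self.heap, (prio, app, length))

    def pop(self):
        prio, app, length = heapq.heappop(self.heap)
        self.pending[app] -= 1
        return app, length

q = InOrderQueue()
q.submit("a", 0, 20)     # priority 200
q.submit("a", 0, 1)      # raw priority 10, bumped to 201 to preserve order
print(q.pop(), q.pop())  # ('a', 20) ('a', 1)
```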

11. ADVERSARY APPLICATIONS

Another consideration in a message scheduling strategy is how mean-spirited applications can take advantage of the strategy to maximize their bandwidth. For instance, with the standard Unix system scheduler, the user with the most processes wins, tempting users to spawn many processes in order to maximize their share of computing resources. Yet the very multiplicity of these processes decreases the efficiency of the system by increasing the cache fault rate, the context switch rate, and the page miss rate, and by swamping other system resources.

If applications are allowed to queue several concurrent messages, then round robin message scheduling suffers the same fate: the more messages enqueued, the larger the share of interconnect performance an application gets. Similarly, for FIFO scheduling, an application that submits a large number of messages at the same time will significantly delay messages submitted by later, more civilized applications.

The α scheduling algorithm suffers the same difficulties under such scenarios. A possible solution is to limit the number of messages a single application may submit at any given time. The computational cost of implementing such a strategy, especially if it involves the generation of CPU interrupts or additional in-line processing at the packet submission point, may overwhelm its advantages.

If we take this situation to the limit, where an application may only submit a single message at a time, the scheduling strategy can still be exploited. In this case, round robin is bandwidth-fair to all applications, regardless of message length, assuming that no extra overhead is incurred between messages from the same application. Any such extra overhead would favor long messages over short messages. FIFO, on the other hand, is message-fair to applications, which implies that long messages will get a correspondingly greater share of available bandwidth. Shortest-first favors short messages, because long messages can be starved indefinitely.

The α scheduling strategy is a continuum between FIFO and shortest-first. An interesting point is the traffic- and workload-dependent value of α for which fairness is most closely attained. This happens when the message size versus latency curve is most nearly a straight line that passes through the origin. (We shift the line by one time unit to include the time it takes to send the final packet through the interconnect.) Because we have so few message sizes in our sample workload, and because not all of the curves fit a straight-line model, we calculate and compare the average latencies divided by the message length (the effective packet period for messages of a given size) for each of the simulation runs to analyze the fit. We primarily consider the range of these values.

For a given traffic rate and workload, there exists a value of α for which the range between the minimum and the maximum effective packet period is minimized. For instance, at a traffic load of 50%, an α of six yields an effective packet period of between 1.39 (for three-packet messages) and 1.61 (for twenty-packet messages). For this α, overall average latency is within 2% of optimal. Thus, we have the bandwidth-fairness of round robin but near-optimal average latency.

At the 95% traffic rate, an α of sixty gives a minimum effective packet period of 2.25 for messages of two packets, while the maximum effective packet period, 12.92, is for messages of length twenty-five. At this point, the average latency is still 5% greater than optimal, yet it is 62% less than the average latency for round robin! A bit more bandwidth-fair is an α of thirty, with effective packet periods running from 5.82 (for three-packet messages) to 12.01 (for twenty-packet messages), with an average latency of 8.75, still more than twice as fast as round robin.

Note that these calculations do not take into account the latency due to packet assembly, direct-memory access, or delivery. These factors raise the latency curve, so a higher α than the ones calculated above would likely be more bandwidth-fair.

12. TRACKING ALPHA

Rather than fixing α at a particular value, it might be useful to have α track the interconnect load, as reflected by the queue length, in some fashion. Some reasonable upper limit (perhaps 100) and lower limit (perhaps 1), along with appropriate weighting of the queue length, would probably yield an overall scheduling strategy with a high degree of bandwidth fairness, no starvation, and near-optimal average latency: the best features of the round robin, FIFO, and shortest-first strategies.

Depending on how traffic arrives at the node, and how packets are accepted by the interconnect, the dynamic behavior of the queue length over time might exhibit extreme swings. For this reason, we recommend that if tracking alpha is implemented, little or no hysteresis be used. If the queue is empty for a period of time, so that α becomes very small, a sudden influx of messages should allow α to climb relatively rapidly. This is an interesting avenue for future exploration.
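A tracking scheme along these lines might look like the following sketch. The linear weight and the asymmetric (fast-up, slow-down) adjustment are assumptions, since the paper deliberately leaves the weighting open; only the clamping limits come from the text.

```python
ALPHA_MIN, ALPHA_MAX = 1, 100   # the limits suggested in the text

def track_alpha(prev_alpha, queue_length, weight=2.0):
    """Move alpha toward weight * queue_length, clamped to [ALPHA_MIN, ALPHA_MAX].
    Alpha climbs immediately on a sudden influx of messages (little hysteresis
    upward) but decays gradually when the queue drains."""
    target = max(ALPHA_MIN, min(ALPHA_MAX, weight * queue_length))
    if target > prev_alpha:
        return target                                   # climb right away
    return prev_alpha + 0.5 * (target - prev_alpha)     # decay gradually

alpha = 1.0
alpha = track_alpha(alpha, 40)   # a burst of 40 queued messages
print(alpha)  # 80.0
```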

Another avenue of exploration is how different workloads affect the appropriate value of α.

13. CONCLUSION

In this paper, we have introduced a new scheduling strategy, alpha scheduling, and shown how it can improve the effective performance of interconnect communications simply by scheduling the packets within messages appropriately.

We implemented this algorithm for the P02 interconnect as outlined in Cherkasova et al. (1994). The P02 topology is the same as that of the Mayfly (Davis, 1992): the elements are combined in a wrapped hexagonal mesh to form a low-latency, high-capacity interconnection fabric for scalable parallel processing systems containing up to hundreds of processing elements (PEs). P02 supports the transfer of messages which may vary in length but are physically transferred as a series of fixed-length packets. The network interface assumes all responsibility for fragmentation and reassembly, and notifies the receiving PE only when the complete message has been placed in the receiving PE's memory.

The new algorithm, alpha scheduling, improves interconnect performance by a factor of two to three over the results provided by FIFO scheduling. The performance benefit of using alpha scheduling increases as the message queue lengthens.

14. REFERENCES

Cherkasova, L., A. Davis, V. Kotov, I. Robinson, and T. Rokicki (1994). How Much Adaptivity is Required for Bursty Traffic? In Proceedings of the Seventh International Conference on Parallel and Distributed Computing, ISCA, Raleigh, NC, pp. 208-213.

Davis, A. (1992). Mayfly: A General-Purpose, Scalable, Parallel Processing Architecture. J. LISP and Symbolic Computation, vol. 5, No. 1/2, pp. 7-48.

Holden, D. and A. Langsford (1990). MANDIS: Management of Distributed Systems. In LNCS, Progress in Distributed Operating Systems and Distributed Systems Management, vol. 433, Springer-Verlag, pp. 162-173.

Schroder-Preikschat, W. (1990). PEACE - A Distributed Operating System for High-Performance Multicomputer Systems. In LNCS, Progress in Distributed Operating Systems and Distributed Systems Management, vol. 433, Springer-Verlag, pp. 22-44.

Tanenbaum, A. and R. van Renesse (1985). Distributed Operating Systems. J. Computing Surveys, vol. 17, No. 4, pp. 420-470.


Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

PREEMPTIVE AND NON-PREEMPTIVE REAL-TIME SCHEDULING BASED ON NEURAL NETWORKS

Carlos CARDEIRA¹, Zoubir MAMMERI²

¹ IST/IDMEC, Avenida Rovisco Pais, 1096 Lisboa, Portugal, e-mail: [email protected]

² CRIN (CNRS URA 262), ENSEM, 2, avenue de la Forêt de Haye, F-54516 Vandoeuvre-lès-Nancy, France, e-mail: [email protected]

Abstract: Artificial Neural Networks (ANNs) have already proved their usefulness in approximately solving combinatorial optimization problems, and they have the advantage of reaching solutions extremely rapidly. This paper presents an ANN-based approach to real-time task scheduling, and it shows that by a careful choice of ANN topology, one can solve real-time task scheduling problems taking into account timing constraints (such as deadlines, earliest times, periods, maximum execution times) and preemption or non-preemption of tasks in mono- or multiprocessor architectures. ANN-based scheduling algorithms are not time-consuming, so they can be used on-line in real-time systems, contrary to most approximate techniques for problem solving.

Keywords: Real-time, scheduling algorithms, neural networks, tasks, constraint satisfaction, multiprocessors.

1. INTRODUCTION

For the past decade, researchers have become increasingly interested in real-time systems (RTS) design and validation, and especially in developing algorithms for scheduling tasks and messages so as to meet timing constraints; see Liu and Layland (1973), Stankovic and Ramamritham (1991), Xu and Parnas (1991). Several algorithms and techniques have been used to meet timing constraints in RTSs (see Cardeira and Mammeri (1994d) for a tutorial on existing real-time scheduling algorithms). However, very few studies have considered the use of artificial neural networks (ANNs) to solve scheduling problems, in spite of the emerging use of ANNs in a large number of domains.

ANNs have already proved their usefulness in approximately solving combinatorial optimization problems, and they have the advantage of reaching solutions extremely rapidly when implemented in suitable parallel hardware (Hopfield 1982). The majority of studies about how far from optimal the solutions obtained by ANNs are have focused on the travelling salesman problem, a well-known benchmark for combinatorial optimization problems (Hopfield 1985). These studies showed that the quality of the obtained result degenerates as the number of cities grows (Gee 1993).

In recent papers, Cardeira and Mammeri (1994a, b, c) have investigated the use of ANNs for real-time task scheduling. In particular, they have shown how to translate timing constraints (such as deadlines, periods, earliest times) and the number of processors into ANN topologies. In their first studies, they were interested in preemptive task scheduling. This paper tackles the problem of non-preemptive scheduling, which is known to be NP-hard in most cases of RTS requirements (Garey and Johnson 1979).

The ANNs used for real-time scheduling are those proposed by Hopfield (1985). Preemptive task scheduling is based on the k_out_of_N rule introduced by Taggliarini et al (1991).

The rest of the paper is structured as follows: the topologies and type of neural networks used for real-time scheduling are presented in section 2. Next, in section 3, an ANN-based algorithm is presented for scheduling preemptive sets of tasks. Section 4 shows how to extend the ANN building rules used in section 3 to deal with non-preemptive scheduling. Section 5 presents a scheduling example. Some conclusions appear in the last section.

2. HOPFIELD'S NEURAL NETWORKS AND TAGGLIARINI'S RULE

The use of neural networks to solve optimization problems has increased since Cohen and Grossberg derived the necessary conditions for an ANN to evolve to a stable state (Cohen and Grossberg 1983, Grossberg 1982). These conditions are:

• There is no self feedback (i.e., all the diagonal elements of the connection matrix are equal to zero).

• The neuron connection matrix is symmetric.

• There is no positive feedback (i.e., all the elements of the connection matrix are less than or equal to zero).

If these conditions are met, Hopfield and Tank proved the existence of a Lyapunov function for the ANN, so that the ANN will evolve in such a way that the Lyapunov function never increases (Hopfield 1985). This Lyapunov function is also called an energy function, by analogy with physical systems. This function is:

E = -\frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} T_{ij}\, x_i x_j - \sum_{i=1}^{N} I_i x_i \qquad (1)

Where:

T_{ij}: weight of the connection between neurons i and j
x_i: output of neuron i
I_i: external input of neuron i

As the stability conditions force the neuron outputs to evolve to states corresponding to decreasing values of the energy function, to solve an optimization problem with a neural network one has to find a way to put its "cost" function in the form of an energy function. Hence, to solve an optimization problem with a Hopfield ANN, one has to follow these steps (Chen 1992):

• Find an ANN topology in such a way that the neuron outputs can be interpreted as a solution of the problem.


• Find an energy function for the network whose minima correspond to the best solutions of the problem.

• Calculate the neuron inputs and connection weights from the energy function.

Once the ANN is built, one just has to let the network evolve and it will converge towards a solution of the problem. Compared with other neighbourhood methods, with neural networks one does not need to test whether the new solution is better than the old one: the way the network is built guarantees that the new solution is better than, or at least as good as, the old one.

The k_out_of_N rule introduced by Taggliarini (1991) is a rule to build neural networks satisfying constraints of the type "exactly k neurons among N should be active" (an active neuron is a neuron whose output equals 1) when the network reaches a stable state. A cost function satisfying this constraint is:

E = \left( k - \sum_{i=1}^{N} x_i \right)^2

and after some algebraic manipulations, this function has the same minima as the following function (Cardeira 1994e):

E = \sum_{i=1}^{N} \sum_{j \neq i} x_i x_j - (2k-1) \sum_{i=1}^{N} x_i

which is an energy function like (1) with:

T_{ij} = 0 if i = j
T_{ij} = -2 if i ≠ j
I_i = 2k - 1, ∀i

As one may see, setting the connections to -2 and setting the inputs to 2k-1 will force the neural network to evolve to states with k active neurons. Moreover, this rule may be applied successively to overlapping sets of neurons to take into account several types of optimization constraints. The new weights and external inputs just need to be added to the existing ones. The network will then evolve to a state satisfying all the constraints simultaneously (Taggliarini et al 1991).
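The rule is easy to verify in simulation. In the sketch below (hypothetical code, not from the paper), a randomly chosen neuron receives net input (2k-1) - 2m, where m is the number of other active neurons, so it switches on exactly when fewer than k others are active and the network settles with exactly k active neurons.

```python
import random

def k_out_of_n(n, k, iters=5000, seed=1):
    """Asynchronous Hopfield updates with Taggliarini's k_out_of_N parameters:
    connections T_ij = -2 (i != j), external inputs I_i = 2k - 1."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]
    for _ in range(iters):
        i = rng.randrange(n)                                   # pick a random neuron
        net = (2 * k - 1) - 2 * sum(x[j] for j in range(n) if j != i)
        x[i] = 1 if net > 0 else 0                             # threshold activation
    return x

state = k_out_of_n(10, 3)
print(state, sum(state))  # exactly 3 active neurons
```

Note that the net input 2k - 1 - 2m is always odd, so the threshold decision is never ambiguous; once exactly k neurons are active, no update changes the state.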

If there is no solution satisfying all the constraints, the network will converge to a solution that minimizes the energy function even if it does not reach zero. However, it is not guaranteed that the network will reach the global minimum of the energy function, due to the existence of local minima inherent to the energy function.

3. PREEMPTIVE TASKS SCHEDULING

The principle of a preemptive tasks scheduling algorithm based on ANNs and k_out_of_N rule may be summarized as follows:

The ANN built is composed of T x L neurons, T being the number of tasks and L the scheduling length. The ANN is considered as a matrix where each line is associated with a task and each column with a time unit. When the ANN reaches a stable state, an active neuron x_ij means that task i has a processor during time unit j.

Generally, a real-time task (Ti) has a ready time (Ri), a deadline (Di) and a computation time (Ci). To satisfy its timing constraints, the task must start after its ready time and terminate before its deadline. In consequence, the ANN should converge to a stable state with exactly Ci active neurons among the neurons numbered from Ri to Di.

The external inputs and connection weights among the neurons are calculated by application of the k_out_of_N rule, taking into account each task Ti:

- Application of 0_out_of_(Ri - 1) to the neurons numbered from 1 to Ri - 1 in the line associated with Ti, to ensure that Ti cannot start before Ri.

- Application of Ci_out_of_(Di - Ri + 1) to the neurons numbered from Ri to Di in the line associated with Ti, to ensure that Ti terminates before Di.

- Application of 0_out_of_(L - Di) to the neurons numbered from Di + 1 to L in the line associated with Ti, to ensure that no processor will be allocated to Ti after its deadline Di.

• To guarantee that, at any time unit, no more than P processors may be used (P being the number of available processors of the system under consideration), and that each processor is used by only one task, some other k_out_of_N rules must be applied. If the tasks fully utilize the processors, the number of active neurons during each time unit must be exactly equal to the number of processors. Hence this constraint may be satisfied by applying the P_out_of_T rule among neurons of the same column. Unfortunately, this would be a very restrictive assumption, since no idle time would be allowed for processors. To ease this restriction, one must extend the neural network with some slack neurons. Slack neurons are a way to handle inequality constraints. The k_out_of_N rule is used when "exactly" k active neurons are required when the network reaches a stable state. If the constraint is of the type "at most" k neurons should be active, it is still possible to apply this rule, but it is applied to the network extended by k neurons. The extended neurons are called slack neurons, as they are hidden and should not be taken into account when the ANN stops evolving.

Finally, as the k_out_of_N rule applications are linearly independent operations, one has to sum the connection weights and neuron inputs yielded by all the k_out_of_N rule applications to obtain the final weights and inputs of the ANN.
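As an illustration of this summation (hypothetical names, 0-based indices), the three rule applications for a single task line accumulate into shared weight and input arrays:

```python
def apply_k_out_of_n(weights, inputs, neurons, k):
    """Accumulate one k_out_of_N application over a subset of neurons:
    I_i += 2k - 1 and T_ij += -2 for i != j within the subset."""
    for i in neurons:
        inputs[i] += 2 * k - 1
        for j in neurons:
            if i != j:
                weights[i][j] -= 2

L = 10                 # scheduling length
R, D, C = 3, 7, 2      # ready time, deadline, computation time (1-based units)
inputs = [0] * L
weights = [[0] * L for _ in range(L)]

apply_k_out_of_n(weights, inputs, range(0, R - 1), 0)  # 0_out_of_(R-1): before the ready time
apply_k_out_of_n(weights, inputs, range(R - 1, D), C)  # C_out_of_(D-R+1): the feasible window
apply_k_out_of_n(weights, inputs, range(D, L), 0)      # 0_out_of_(L-D): after the deadline

print(inputs)  # [-1, -1, 3, 3, 3, 3, 3, -1, -1, -1]
```

Neurons outside the feasible window receive input 2(0) - 1 = -1, which keeps them inactive, while the window neurons receive 2C - 1 = 3 and mutual -2 connections, so exactly C of them stay active.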

In Cardeira and Mammeri (1994b), the use of an ANN-based scheduling algorithm called NSA (Neural Scheduler Algorithm) has been analyzed; more precisely, that paper analysed its percentage of success when compared to an optimal algorithm. The results obtained prove that the NSA performs quite well for underloaded systems, but its performance decreases drastically when the load goes beyond 75% of the maximum load. As the execution time of the NSA is negligible (about 100 nanoseconds on a suitable hardware architecture), it is possible to re-run it many times in a reasonable amount of time. In the experiments conducted, when the NSA may be re-run, it successfully schedules tasks up to a 99.6% load, which is very close to optimal.

4. NON-PREEMPTIVE TASKS SCHEDULING

The results obtained were very attractive, and led us to continue investigating the use of ANNs in real-time scheduling. Our first studies only dealt with preemptive scheduling, which is relatively easy to solve by classical techniques. The k_out_of_N rule is suitable for preemptive scheduling, but not for non-preemptive scheduling. In this paper it is shown how to extend the obtained results to non-preemptive task scheduling, which is known to be NP-hard in most cases.

The k_out_of_N rule introduced by Taggliarini is useful when k active neurons are required once the ANN reaches its stable state, but such a rule does not force the k neurons to be successive in the connection matrix line associated with the task to be scheduled. In non-preemptive scheduling, the time units allocated to a task must be successive (since a processor allocated to a task must not be allocated to another one until released by the first one). A new rule for the selection of active neurons, called Successive_k_out_of_N, is introduced here to deal with non-preemptive scheduling. Such a rule forces the ANN to evolve towards solutions with k active and successive neurons.


After some complex computations and demonstrations (Cardeira 1994e), the neuron connection matrix required for the Successive_k_out_of_N rule is defined as follows:

T_{ij} = \begin{cases}
0 & \text{if } |i-j| < 1, \ \forall i, j \\
-(4k-6) & \text{if } |i-j| > 1, \ \forall i, j \\
-2\, inh(x_{i+1}, x_{j+1}) & \text{if } |i-j| = 1, \ \forall i, j \neq N \\
-2\, inh(x_{i-1}, x_{j-1}) & \text{if } |i-j| = 1, \ \forall i, j \neq 0
\end{cases}

I_i = (2k-3)(2k-1) \quad \forall i

inh(a, b, ...): inhibitory neurons.

An inhibitory neuron of a connection is a neuron that inhibits the connection (i.e., the connection is blocked when the inhibitory neuron's value is zero). In the Hopfield model the connections are static. The introduction of inhibitory neurons in this model leads to dynamic connections, since the connections depend on the state of the inhibitory neurons. It must be noted that the inhibitory neurons are not integrated in the Hopfield ANN topology.

The principle of a non-preemptive ANN-based task scheduling algorithm is the same as that of the preemptive one, but the Successive_k_out_of_N rule is applied instead of the k_out_of_N rule for all the connection matrix lines associated with non-preemptible tasks.

5. EXAMPLE OF NON-PREEMPTIVE SCHEDULING

Let us illustrate the Successive_k_out_of_N rule using an algorithm that simulates the corresponding neural network behaviour (see fig. 1). The network considered has N neurons; it starts from a random initial state and converges to a stable state with exactly k active neurons.

Running the algorithm presented in fig. 1 yields the results reported in fig. 2. As shown by the examples of fig. 2, the active neurons in the final states are successive.

Finally, the result of applying such a rule to a scheduling problem is presented. The task set is the same as the one presented by Cardeira and Mammeri (1994c), but now the set of tasks has to be scheduled for the non-preemptive case. Let us recall the task set constraints:

• Task 1: It must be activated every 6 time units, for 2 units each time.

• Task 2: It cannot begin its execution before the 2nd time unit; it requires 3 units for its execution and it must terminate no later than the 6th time unit.


• Task 3: Its ready time is equal to 0. It must terminate before the 4th time unit. Its execution time is equal to one time unit.

Variables
  Integer Vout[N]                   /* each element contains a neuron state */
  Integer Connection_weights[N, N]  /* element i, j contains the weight of the connection between neurons i and j */
  Integer External_inputs[N]        /* element i contains the value of neuron i's external input */

/* Variable initialization */
For i From 1 To N
  External_inputs[i] := (2k-3)(2k-1)
  For j From 1 To N, j ≠ i
    If |i - j| = 1 Then Connection_weights[i, j] := 0
    If |i - j| > 1 Then Connection_weights[i, j] := -(4k-6)
  End For
End For

/* Neural network simulation */
Do
  i := random number among 1..N    /* choose a random neuron */
  input := External_inputs[i]
  For j From 1 To N, j ≠ i Do
    input := input + Connection_weights[i, j] * Vout[j]
    /* introduce the effect of the inhibitory neurons */
    If i, j < N Then input := input - 2 * Vout[i+1] * Vout[j+1]
    If i, j > 1 Then input := input - 2 * Vout[i-1] * Vout[j-1]
  End For
  If input < 0.5 Then Vout[i] := 0 Else Vout[i] := 1
Until Vout array becomes stable    /* i.e., Vout does not change anymore */
Print_results

Fig. 1: Algorithm for Successive_k_out_of_N rule building
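For reference, a Python transcription of the construction phase of fig. 1 is given below (a sketch only; the simulation loop with its dynamic inhibitory terms is omitted, and the function name is ours). The resulting values can be checked against the examples of fig. 2.

```python
def successive_weights(n, k):
    """Weights and inputs from fig. 1: adjacent connections (|i - j| = 1) are 0,
    all other off-diagonal connections are -(4k - 6), and every external
    input is (2k - 3)(2k - 1)."""
    inputs = [(2 * k - 3) * (2 * k - 1)] * n
    weights = [[0 if (i == j or abs(i - j) == 1) else -(4 * k - 6)
                for j in range(n)] for i in range(n)]
    return weights, inputs

w, inp = successive_weights(7, 3)
print(inp[0], w[0][2])    # 15 -6  (first example of fig. 2)
w6, inp6 = successive_weights(7, 6)
print(inp6[0], w6[0][2])  # 99 -18  (third example of fig. 2)
```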


a) First example: N=7, K=3

Inputs: 15 for every neuron (= (2k-3)(2k-1))
Connection weights: 0 on the diagonal and between adjacent neurons, -6 (= -(4k-6)) elsewhere

Initial state:    - + - - -
First iteration:  + + + - - - -
Second iteration: + + + - - - -

Notes: "+" means an active neuron and "-" an inactive one. The network reached a stable state at the first iteration.

b) Second example: N=7, K=3

Inputs and connection weights: same as before

Initial state:    - + + + - - +
First iteration:  - - + + + - -
Second iteration: - - + + + - -

Note: the initial state and the final active neurons differ from those of the previous example.

c) Third example: N=7, K=6

Inputs: 99 for every neuron; connection weights: 0 on the diagonal and between adjacent neurons, -18 elsewhere

Initial state:    + + + - - + +
First iteration:  + + + + + + -
Second iteration: + + + + + + -

Fig. 2: Examples of Successive_k_out_of_N rule simulation


Fig. 3: Example of scheduling algorithm simulation

In fig. 3, two independent evolutions of the neural network built using the Successive_k_out_of_N rule are presented. In this figure, a dark circle means an active neuron and a white circle an inactive one. Each evolution starts from a random initial state of the neuron outputs and, as one may see, the network evolves towards stable states meeting the timing constraints of the tasks for non-preemptive scheduling.

6. CONCLUSION

In this paper, a new approach is developed to solve real-time task scheduling, based on the use of artificial neural networks. As far as we know, such a technique had never been used to solve real-time task scheduling. This paper shows that by a careful choice of neural network topology, one can solve real-time task scheduling problems taking into account timing constraints (such as deadlines, earliest times, periods, maximum execution times), preemption and non-preemption of tasks, and monoprocessor and multiprocessor architectures.

This study is an important improvement over the results presented so far by Cardeira and Mammeri (1994b, c), as those could only be applied to the preemptive case.

Neural networks are applicable to scheduling problems, and real-time systems may take advantage of their very fast convergence rate. The main drawback of ANNs is the need for a suitable hardware architecture, which is not always available in real-time systems. The other drawback, the possibility of falling into local minima, is largely compensated by the NSA's negligible execution time: this feature allows enough re-executions to greatly increase the probability of finding the global minimum. ANN-based scheduling algorithms are not time-consuming, so they can be used on-line in RTSs, contrary to most approximate techniques for problem solving.


REFERENCES

Cardeira C. and Mammeri Z. (1994a). Neural Networks for Satisfying Real-Time Task Constraints. In Proceedings of SPRANN'94 IMACS International Symposium on Signal Processing, Robotics, and Neural Networks, Lille, France, April, pp. 498-501.

Cardeira C. and Mammeri Z. (1994b). Performance analysis of neural network based scheduling algorithms. In Proceedings of the 2nd Workshop on Parallel and Distributed Real-Time Systems, Cancun, Mexico, April, pp. 38-42.

Cardeira C. and Mammeri Z. (1994c). Using Neural Networks for Multiprocessor Real-Time Task Scheduling. In Proceedings of the 6th Euromicro Workshop on Real-Time Systems, University of Maelardalen, Vaesteraas, Sweden, June, pp. 59-65.

Cardeira C. and Mammeri Z. (1994d). Ordonnancement de tâches dans les systèmes temps réel et répartis : Algorithmes et critères de classification. Journal of Automatique, Productique et Informatique Industrielle, 27 (4), pp. 353-384.

Cardeira C. (1994e). Ordonnancement temps réel par réseaux de neurones. PhD Thesis, Institut National Polytechnique de Lorraine, September.

Chen P. (1992). Design of a real-time AND/OR assembly scheduler on an optimization neural network. Journal of Intelligent Manufacturing, 3, pp. 251-261.

Cohen M. and Grossberg S. (1983). Absolute stability of global pattern formation and parallel memory storage by competitive neural networks. IEEE Transactions on Systems, Man, and Cybernetics, 13, pp. 815-826.

Garey M. and Johnson D. (1979). Computers and Intractability: A Guide to the Theory of NP-Completeness. Mathematical Sciences collection, W.H. Freeman and Co., San Francisco.

Gee A. (1993). Problem Solving with Optimization Networks. PhD Thesis (CUED/F-INFENG/TR 150), Queens' College, Cambridge, July.

Grossberg S. (1982). Studies of Mind and Brain: Neural Principles of Learning, Perception, Development, Cognition and Motor Control. Reidel, Boston, MA.

Hopfield J. (1982). Neural networks and physical systems with emergent collective computational abilities. Proceedings of the National Academy of Sciences, 79, pp. 2554-2558.

Hopfield J. (1985). Neural computation of decisions in optimization problems. Biological Cybernetics, 52, pp. 141-152.

Liu C. and Layland J. (1973). Scheduling Algorithms for Multiprogramming in a Hard Real-Time Environment. Journal of the ACM, 20 (1), January, pp. 46-61.

Stankovic J. and Ramamritham K. (1991). The Spring kernel: a new paradigm for real-time systems. IEEE Software, May, pp. 62-72.

Tagliarini G. et al. (1991). Optimization Using Neural Networks. IEEE Transactions on Computers, 40 (12), December, pp. 1347-1358.

Xu J. and Parnas D. (1991). On Satisfying Timing Constraints in Hard-Real-Time Systems. Software Engineering Notes, 16 (4), December, pp. 132-146.


Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

Co-specifications for Co-design in Avionics Systems Development

M. Romdhani 1,2, P. de Chazelles 1, A. Jeffroy 1, A.E.K. Sahraoui 3, A.A. Jerraya 2

1 AEROSPATIALE Aircraft, Avionics and Systems Division, 316, Route de Bayonne, 31060 Toulouse Cedex 03, France
e-mail: {chazelles, jeffroy}@avions.aerospatiale.fr

2 TIMA/CNRS-INPG-UJF Laboratory, 46, avenue Felix Viallet, 38031 Grenoble Cedex, France
e-mail: {romdhani, jerraya}@verdon.imag.fr

3 LAAS/CNRS Laboratory & IUT-B, 7, avenue Colonel Roche, 31077 Toulouse Cedex, France
e-mail: [email protected]

Abstract: Hardware-software concurrent design, referred to as co-design, is a new methodology that integrates the development of hardware and software. It consists mainly of the steps of specification, partitioning, and prototyping. This paper addresses the step of specification. We propose a specification paradigm based on the use of more than one language. This approach is referred to as co-specifications. It deals with formalizing the requirements through several partial specifications, which are then composed in a unified model that is used for the later co-design steps. We illustrate the approach through the specification of an avionics system that is part of the AIRBUS A340 on-board systems family.

1. Introduction

In order to master the increasing complexity of avionics and to face their high development cost, the emerging aeronautical standards promote structured methodologies that emphasize systems modeling and allow for joint hardware/software co-design [16]. The long-term goal of this work is the definition of a unified hardware-software co-design methodology in the context of AEROSPATIALE Aircraft avionics. The underlying motivations are:

- enhancing the systems specification quality,
- optimizing the systems performances and costs, and
- shortening the systems development cycle.

Actually, hardware-software co-design itself is not recent, but the joint specification, design and synthesis of mixed hardware/software systems is a recent issue. Several projects currently in progress (SpecSyn at Irvine [3], CODES at Siemens [1], the Thomas approach at CMU [17], the Gupta and De Micheli approach at Stanford [4], Ptolemy at Berkeley [10], RASSP [12], etc.) are trying to integrate both the hardware and the software in the same process.

Most existing methodologies for co-design are based on a single specification language. But the specification of a very large design, such as the electronic system of an aircraft, needs several specification languages. There is no universal specification language to support the overall design of an avionics system; thus, the use of specification languages has to be selectively targeted. We present and illustrate in this paper a co-design methodology based on a multi-language specification paradigm.

The paper is organized as follows: the next section gives an overview of the methodology. In section 3, we detail the co-specifications approach. Then, an overview of the multi-language composition is given in section 4. Lastly, we give in section 5 some conclusions and future perspectives.

2. The Hardware-Software Co-design Methodology

The proposed co-design methodology is a system approach that starts from the requirements and leads to the first system prototype. It aims at a joint design of hardware and software and explores different architecture partitioning alternatives, while preserving the current qualified development methods and tools. An overview of the co­design methodology flow is shown in figure 1.

The co-design flow consists of the following three steps:


• Multi-formalisms specification and validation
• Iterative partitioning and performance analysis
• Architecture prototyping

Figure 1: The co-design methodology flow (requirements, Hw/Sw partitioning with performance analysis, Hw/Sw co-simulation & co-synthesis, system prototype)

The specification step takes into account the use of environments that provide simulation and analysis facilities. In fact, when specifying, it is important to ensure that the system specification meets the requirements [7]. Partial avionics specifications are therefore composed in a unified representation named SOLAR [9]. The objective is to allow the designer to use one or more specification languages and to translate these descriptions into the unified representation.

SOLAR has a set of built-around tools that perform system-level partitioning and architecture synthesis tasks. The goal is to ease the automation of the design steps. In SOLAR, a system is structured in terms of communicating design units. Communication between design units is achieved using a remote procedure call protocol [11]. This protocol can describe a wide range of communication schemes and hides the channel implementation, thereby allowing the re-use of existing communication schemes.
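The design-unit communication idea can be sketched as follows (a toy analogue only; the real SOLAR protocol is richer, and all names here are invented): units invoke named services through a channel object, so the channel implementation can be replaced without modifying the units.

```python
# Minimal remote-procedure-call-style channel: design units only see the
# export/call interface, never the underlying transport.
class Channel:
    def __init__(self):
        self._services = {}

    def export(self, name, fn):
        # a design unit offers a service under a symbolic name
        self._services[name] = fn

    def call(self, name, *args):
        # another design unit invokes it without knowing the implementation
        return self._services[name](*args)

bus = Channel()
bus.export("get_warning_count", lambda: 3)   # hypothetical service
count = bus.call("get_warning_count")        # RPC-style invocation
```

Swapping `Channel` for, say, a socket-backed implementation with the same two methods would leave both design units untouched, which is the re-use property the text describes.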

The target architecture serves as a platform onto which SOLAR design units are mapped. It is a distributed architecture composed of software modules, hardware modules, and communication modules[l5]. This model is general enough to represent a large class of existing hardware/software architecture platforms. Figure 2 shows an example of a target architecture platform. The model corresponds to the current ARINC 429 avionics architecture concept and also to the concept of the future Integrated Modular Avionics, known as IMA.


Figure 2: The distributed target architecture model

3. Co-specifications of Avionics

A new avionics development starts from several requirements, usually expressed in textual form. These include ad-hoc information about the system to develop: functions to fulfil, critical functions, performance constraints, costs, delivery delays, etc. A comprehensive analysis of these requirements should lead to structuring and formalizing them. Textual information should be formalized as far as possible in terms of functional and behavioral requirements.

The case study, the CMWC (Centralized Maintenance and Warning Computer), is an experimental aircraft maintenance avionics system. It includes some functions of the AIRBUS A340 Centralized Maintenance Computer, known as the A340 CMC.

Figure 3: The A340 CMWC environment (avionics BITEs, ARINC 429 bus, printer)

The CMWC is a highly reactive system. On the one hand, it is connected to the man-machine interface units, composed of a Multi-Control Display Unit (MCDU) and a printer. These units enable the pilot or the maintenance operators to interactively initiate maintenance tasks and to obtain personalized text reports about the aircraft systems. On the other hand, the CMWC interacts with all on-board avionics Built-In Test Equipments (BITEs). These equipments transmit local warnings and fault information to the CMWC and receive the maintenance orders. Figure 3 gives an overview of the CMWC environment.


The global CMWC system specification is achieved through the ActivityCharts/StateCharts formalisms using the STATEMATE tool from i-Logix [6]. Communication protocols are specified using SDL (Specification and Description Language) [2], the International Telecommunications Union standard for protocol specification. The SDL tool is GEODE from Verilog. The warnings computation and maintenance management functions are specified using SAO (Computer Aided Specification). SAO is an AEROSPATIALE in-house visual formalism. It is based on a synchronous data-flow model and is usually used to specify synchronous signal acquisition and processing. Table 1 summarizes the CMWC specification formalisms. Figure 4 shows the CMWC multi-formalisms specification context.

  Function                                         Formalism
  -----------------------------------------------  --------------------------
  CMWC functional decomposition,                   ActivityCharts/StateCharts
  CMWC operational modes
  CMWC/MCDU, CMWC/Printer,                         SDL
  CMWC/BITEs protocols
  Warnings processing,                             SAO
  Maintenance management

Table 1: The CMWC specification formalisms

The top-level functional model of the CMWC is produced using the ActivityCharts/StateCharts formalisms. The model is composed of several ActivityCharts modeling the system's main functions and a control activity that co-ordinates the execution of the ActivityCharts according to the system's dynamic operational mode (flight, ground, etc.) [13]. The control activity is represented by means of a StateChart.

Model decomposition and refinement is done through a hierarchical approach according to the well-known structured analysis method SA-RT (Structured Analysis for Real-Time systems) [5]. This approach enables the refinement of an activity either in the ActivityCharts/StateCharts formalisms or in another formalism. The refinement of a system function corresponds, in some cases, to the re-use of an existing specification expressed in a given formalism.


Figure 4: The CMWC multi-formalisms specification context

4. Composing Partial Specifications

Multi-language specification introduces overlaps or gaps in coverage. As pointed out in [19], a structured-analysis-based specification should normally contribute to the consistency of the overall specification. But it remains necessary to compose the partial specifications in a unified representation. This makes it possible to perform full coherence and consistency checking, to identify requirements traceability links, and to facilitate the integration of new specification languages.

The multi-language composition problem has been the topic of much research, mainly in the software engineering domain. Several approaches can be found in [18,19]. These approaches are intended to facilitate proofs of concurrent systems properties. In our case, the interest is essentially focused on coherence checking and later automatic programming facilities. We assume that proofs of properties should be carried out within each specification language environment.

Translating the partial specifications into the composition format does not require the development of new specific tools, since specification environments generally provide interfacing utilities which facilitate access to and use of the specification attributes. In the case of STATEMATE, the rewriting of charts in the SOLAR format was done by programming the DATAPORT interface tool [8]. Similarly, the TRANSPEC and CAPITOLE tools facilitated the translation of, respectively, SDL and SAO specifications.
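The composition step can be pictured with a deliberately tiny sketch (all record shapes and names are invented for illustration; SOLAR's real intermediate format is far richer): each partial specification is translated into generic design-unit records, after which a coherence check verifies that every consumed signal is produced by some unit.

```python
# Hypothetical translators from two partial-specification shapes into one
# common "design unit" record. Only the composition idea is illustrated.
def from_statechart(chart):
    # assumed shape: {"name": ..., "signals_out": [...], "signals_in": [...]}
    return {"unit": chart["name"], "emits": set(chart["signals_out"]),
            "consumes": set(chart["signals_in"])}

def from_sdl(process):
    # assumed shape: {"name": ..., "outputs": [...], "inputs": [...]}
    return {"unit": process["name"], "emits": set(process["outputs"]),
            "consumes": set(process["inputs"])}

def check_coherence(units):
    # coherence rule: every consumed signal must be emitted by some unit
    emitted = set().union(*(u["emits"] for u in units))
    return {s for u in units for s in u["consumes"] if s not in emitted}

units = [from_statechart({"name": "modes", "signals_out": ["flight"],
                          "signals_in": []}),
         from_sdl({"name": "mcdu_proto", "outputs": [],
                   "inputs": ["flight", "fault"]})]
missing = check_coherence(units)   # "fault" is consumed but never produced
```

Finding such dangling signals across formalism boundaries is precisely the gap-in-coverage problem that composition in a unified representation is meant to expose.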

5. Conclusions

This paper presented the co-specifications approach we have set up in the context of the co-design methodology in avionics development. The approach is based on the use of a


structuring language that supports the modularity and refinement concepts, and a set of specific languages each of which is most suited for a given system function.

A pathfinder for the overall methodology is currently in progress. We have developed three translation tools that enable the capture of ActivityCharts/StateCharts, SDL, and SAO specifications in SOLAR [14,15]. The translators act only on subsets of the specification languages. A complete SOLAR model has been generated for the AIRBUS A340 CMWC system.

We have noticed that SOLAR's extended finite state model facilitates the capture of StateCharts and SDL specifications, since these are also state-based formalisms. SAO data-flow specifications are simply captured as sequential procedures. SOLAR offers powerful communication abstractions that enable the capture of the ActivityCharts/StateCharts broadcasting scheme, the SDL asynchronous communication scheme and the SAO discrete data exchange scheme.

Future work will deal with the verification of the consistency of the CMWC unified model, exploration of the hardware-software partitioning space, and architecture prototyping.

Acknowledgement. This work is part of the CODESIGN project, which is currently in progress within the AEROSPATIALE Group. The project is supported by the Aircraft Division, the Space & Defence Division, the Missiles Division, SEXTANT Avionique and EUROCOPTER-France. The authors thank all persons from these companies who discussed the ideas presented here and helped in the preparation of the material for this work.

References
[1] K. Buchenrieder, A. Sedlmeir, C. Veith, "Hardware/Software Co-design Using CODES", Proc. CHDL'93, Ottawa, Canada, April 1993.
[2] CCITT Recommendation Z.100, "Specification and Description Language", ITU, General Secretariat, Geneva, 1989.
[3] D.D. Gajski, F. Vahid, S. Narayan, "A Design Methodology for System Specification Refinement", Proc. EDAC'94, Paris, France, February 1994.
[4] R. Gupta, G. De Micheli, "Hardware/Software Co-synthesis for Digital Systems", IEEE Design and Test of Computers, pp. 29-41, September 1993.
[5] D. Hatley and I. Pirbhai, "Strategies for Real-Time Systems Specification", Dorset House, 1988.
[6] D. Harel, H. Lachover, A. Naamad, A. Pnueli, M. Politi, R. Sherman, A. Shtull-Trauring and M. Trakhtenbrot, "StateMate: A Working Environment for the Development of Complex Reactive Systems", IEEE Transactions on Software Engineering, vol. 16, no. 4, pp. 403-414, April 1990.
[7] R.P. Hautbois, P. de Saqui-Sannes, "Results and Viewpoints on the Use of Formal Languages", Anglo-French workshop on formal methods, modeling, and simulation, Paris, France, February 13-15, 1995.
[8] i-Logix Inc., STATEMATE 5.0 DATAPORT Reference Manual, Burlington, MA, June 1993.
[9] A.A. Jerraya, K. O'Brien, "SOLAR: An Intermediate Format for System-Level Modeling and Synthesis", in "Computer Aided Software/Hardware Engineering", J. Rozenblit, K. Buchenrieder (eds), IEEE Press, 1994.
[10] A. Kalavade, E.A. Lee, "A Hardware/Software Co-design Methodology for DSP Applications", IEEE Design and Test of Computers, pp. 16-28, September 1993.
[11] K. O'Brien, T. Ben Ismail, A. Jerraya, "A Flexible Communication Modeling Paradigm for System-Level Synthesis", Handouts of the Int'l Workshop on Hardware/Software Co-design, Massachusetts, October 1993.
[12] M.A. Richards, "The Rapid Prototyping of Application Specific Signal Processors (RASSP) Program: Overview and Status", Proc. of the Int'l Workshop on Rapid System Prototyping (RSP), Grenoble, France, June 1994.
[13] M. Romdhani, A. Jeffroy, P. de Chazelles, A.E.K. Sahraoui, A.A. Jerraya, "Modeling and Rapid Prototyping of Avionics Using STATEMATE", 6th IEEE International Workshop on Rapid Systems Prototyping, Chapel Hill, North Carolina, USA, June 7-9, 1995.
[14] M. Romdhani, R.P. Hautbois, A. Jeffroy, P. de Chazelles, A.A. Jerraya, "Evaluation and Composition of Specification Languages, an Industrial Point of View", CHDL'95, Tokyo, Japan, August 28-September 1st, 1995.
[15] M. Romdhani, P. Chambert, P. de Chazelles, A. Jeffroy, A.A. Jerraya, "Composing System-Level Specifications for Codesign in Avionics", EURO-DAC/EURO-VHDL'95, Brighton, U.K., September 18-22, 1995.
[16] SIRT ARP 4754, "Guidelines for Certification of Highly Integrated or Complex Aircraft Systems", Draft 47, Systems Integration Requirements Task Group, July 1994.
[17] D.E. Thomas, J.K. Adams, H. Schmit, "A Model and Methodology for Hardware/Software Co-design", IEEE Design and Test of Computers, pp. 6-15, September 1993.
[18] D.S. Wile, "Integrating Syntaxes and their Associated Semantics", Technical Report RR-92-297, USC/Information Sciences Institute, Univ. Southern California, 1992.
[19] P. Zave, M. Jackson, "Conjunction as Composition", ACM Transactions on Software Engineering and Methodology, vol. 2, no. 4, pp. 379-411, October 1993.


Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

TRANSPUTER CONTROL SYSTEM WITH A GAs MOTION PLANNER FOR THE PUMA560 INDUSTRIAL ROBOTIC MANIPULATOR

Q. Wang and A. M. S. Zalzala

Robotics Research Group, Department of Automatic Control & Systems Engineering
University of Sheffield, Mappin Street, Sheffield S1 3JD, UK

Email: rrg@sheffield.ac.uk

Abstract: This paper describes a new control system for the PUMA 560 industrial robotic manipulator based on transputer networks, where both the hardware and software designs are detailed. A Transputer Interface Board (TIB) establishing a transputer link to the 6503 microprocessors of the PUMA arm joints has been designed, built and tested. In addition to the hardware implementation, software testing for the new system has been successfully accomplished. A great deal of flexibility can be achieved with this system without much difficulty, and it can be used as a platform to implement advanced control algorithms and to develop sensor-based intelligent robotic structures which need much more computational power. Genetic Algorithms are used to plan the PUMA robot motion trajectory on the new PUMA control platform, and the motion planner runs concurrently with the controller. Real-time experiments show that the PUMA moves much more smoothly along the optimum planned trajectory.

Keywords: Robot Control, Motion Planning, Genetic Algorithms (GAs), Distributed Control, Interfaces.

1. INTRODUCTION

Recent developments in robotic applications have shown a trend towards precise and high-speed motion to accomplish a specific task. However, the efficiency of the available industrial robots is severely reduced by the complexity of their operation. In mathematical terms, the planning and control of robot motion is a very heavy computational burden to be executed in real-time. Problems in the control of robots arise from the vast computational complexities associated with their mathematical formulations, in addition to the need for appropriate adaptive control methods to achieve the required precision and speed.

Another trend, towards intelligent systems, leads to research on sensory feedback to accommodate environment changes. Most industrial robots lack this capability, and their limitations lie mainly in three aspects. First, there are no channels through which sensor information can be incorporated. Second, there is no floating-point hardware to perform complex mathematical operations. And finally, the software source code is normally stored in EPROMs and is not supposed to be modified, though modification is necessary when sensor information and other intelligence are to be included.


The use of distributed processing with transputers is a very attractive solution which has shown promising features. Basically, the computational burden is divided onto several processors, and massive parallelism within the network can reduce it. A unique interface system linking a SPARC-based network of T805 transputers to the PUMA joint controller, hence allowing direct access to an open system, is reported in this paper. Transputer-based PUMA control systems have been developed by other researchers, as in Chen (1993), Goldenberg (1988), Valvanis (1985) and Nagy (1989), and their work is reviewed in the next section.

Genetic Algorithms (GAs) are essentially optimization methods which, unlike conventional methods, search for optimum solutions globally. In this way, GAs can avoid being trapped in a local minimum, which is the common handicap of conventional methods (Goldberg (1989)). There are other benefits of using GAs as the optimum search method, one of which is their suitability for parallelisation on a distributed multi-processor system. In robotics, they have mainly been used in path planning and decision-making on collision avoidance (Davidor (1991)).

Trajectory planning of manipulators requires providing a time-history of motion for the arm to accomplish a required task. There are infinitely many trajectories, in the joint


space, for a robotic manipulator to move from one position to another. It is an optimization problem to choose the best one among the available trajectories. An objective is needed; normally the travel time is to be optimized. Productivity can be considerably increased if the manipulator can achieve its task in the shortest time.

The Robotics Research Group at the University of Sheffield has investigated the use of Genetic Algorithms to tackle this optimization problem for quite some time now (Chan (1993) and Zalzala (1994)); as in Sahar (1986), only two joints had been considered. It was found that the search time could be reduced to about one-twentieth by using GAs (Chan (1993)). Unlike Sahar (1986) and Chan (1993), in this paper the authors use GAs to plan the time-optimal motion for a realistic industrial robotic manipulator, the PUMA 560 with six degrees of freedom, which is not possible by conventional methods.

2. PUMA INTERFACE ALTERNATIVES

The controller of the PUMA 560 robot can be enhanced by the addition of an external computer system. Basically, there are two levels in the current controller. The lower level consists of a digital servo board, the 6503 microprocessor, an analog servo board and a power amplifier for each joint. A simple PD feedback controller, sampling at a period of 0.875 ms, runs in this lower-level section.
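The joint-level loop can be illustrated with a minimal sketch (the gains and the discrete-derivative form are invented for illustration; only the PD structure and the 0.875 ms sampling period come from the text):

```python
# Hedged sketch of a discrete PD feedback law sampled every 0.875 ms.
def make_pd(kp, kd, dt=0.000875):
    prev_err = 0.0
    def step(setpoint, measured):
        nonlocal prev_err
        err = setpoint - measured
        # proportional term plus a backward-difference derivative term
        out = kp * err + kd * (err - prev_err) / dt
        prev_err = err
        return out
    return step

pd = make_pd(kp=2.0, kd=0.001)
u0 = pd(1.0, 0.0)   # first sample: unit position error
```

Such a loop is deliberately simple: all it needs from the supervisory level is a fresh set-point stream, which is why the low-level boards can be kept while the LSI-11 above them is replaced.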

The higher level, also called the supervisory level, consists of an LSI-11 computer. This supervisory computer mainly functions as a management system using VAL, and ensures that new data for the lower level are sent every 28 ms. Kinematics transformation, path planning, error handling, man-machine interfacing, etc. are all processed by this computer, which presents many limitations.

Several interface alternatives to the PUMA controller, with or without VAL, are possible and some have already been implemented by various researchers. These alternatives differ in the extent to which the existing controller hardware is replaced and in the capabilities of the external computer used.

The approach labelled (1) in Figure 1, which allows direct and total control of the joint servos, has been chosen by many researchers. Kazanzides et al. (1987) used a multiple MC6800-based single board computer, the custom-developed Armstrong multiprocessor system, and two SUN 3/260 computers. Chen et al. (1993) used a transputer system together with a SUN workstation for the same purpose. Nagy (1989) used a PC to control the PUMA robot. Though potentially very powerful and flexible, this option requires the development of a great deal of custom hardware and software.

The alternative labelled (2) in Figure 1 was originally described by Visnes (1982): the low-level hardware is kept to control the joint servoing, while the supervisory level is eliminated. By this, a great deal of flexibility can be acquired with much less complexity than option (1). Goldenberg and Chan (1988) chose this option as well and developed a PUMA control system based on the TUNIS computer. Valvanis et al. (1985) employed a DEC VAX computer using the same interface option.


Although this option is simple and uncomplicated, incomplete documentation has proved to be the main obstacle. As the low-level joint controller still exists, the custom-developed supervisory level must be fully compatible with it. Corke (1991) includes some description of how the original PUMA arm controller works.

Figure 1: PUMA Interface Options

Option (3) shown in Figure 1 directly connects the external computer to the LSI (Q-bus) within the existing PUMA controller. This connection is simple, but DEC products have to be used. Other options exist, such as the auxiliary interface labelled in Figure 1, which uses serial links and limits the transfer speed from the external computer; moreover, VAL is still used, with its limitations.

3. THE PUMA TRANSPUTER INTERFACE BOARD

The system described here is based on the option labelled (2) in Figure 1, where the T805 transputer network is used to replace the LSI-11. A SUN SPARC workstation is used to provide the man-machine interface and to develop and store the control software. The transputer system consists of a motherboard which can hold up to 16 processors (TRAMs), linked together by high-speed (20 Mbaud) serial links. One of the TRAMs is the master, which provides communications with the host computer (SUN SPARC) and also controls the slave TRAMs.

Transputer networks put many processors together to increase the computational power, while data is exchanged between processors using high-speed serial links. Additional TRAMs can be added (together with other motherboards) if even more computational power is needed.


The interactions between the transputer and the outside world (e.g. other peripherals) are achieved using the Inmos C011 link adaptor. The link adaptor is currently linked to the master TRAM at the same baud rate (20 Mbaud) as the internal links between TRAMs. The function of a link adaptor is to convert serial signals to parallel ones, which are TTL compatible and can be manipulated according to the user's requirements.

The only piece of custom-built hardware required for the new control system is the PUMA-Transputer Interface Board (TIB). Transputers differ significantly from the LSI-11 in the hardware used to interface external devices. The LSI-11 uses a DEC product, the DRV-11 parallel interface board, to communicate with the PUMA Arm Interface Board (AIB). In order to use the transputer system, the TIB must be compatible with the low-level control section (AIB), as a DRV-11 is, to provide proper data and control signal transfer and buffering.

Figure 3: TIB Structure

The interface hardware structure is illustrated in Figure 2. There are two data switches, which allow signals from either the TIB or VAL (the DRV-11) to control the PUMA arm, hence providing the original controller for comparison.

The Transputer Interface Board (TIB) is constructed according to the operational principle of the AIB. Figure 3 illustrates how data and control signals are organised. The transputer talks to its link adaptor over a serial link operating at a baud rate of 20 Mbaud. The serial signal is converted to an 8-bit (1 byte) parallel signal. This one byte is further latched into three bytes so as to provide the necessary data bytes (2 bytes) and the control byte. The same is repeated for the input channel.
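As an illustration of the framing just described, one might model the output channel as below. Only the two-data-bytes-plus-control-byte split comes from the text; the byte ordering and function names are assumptions.

```python
def frame_command(word16, control):
    """Split a 16-bit data word and a control byte into the three
    successive bytes latched by the board (low byte, high byte, control;
    the ordering is an assumption for illustration)."""
    assert 0 <= word16 <= 0xFFFF and 0 <= control <= 0xFF
    return [word16 & 0xFF, (word16 >> 8) & 0xFF, control & 0xFF]

def unframe(byte_seq):
    # inverse operation, as the input channel would reassemble the word
    low, high, control = byte_seq
    return (high << 8) | low, control

word, ctrl = unframe(frame_command(0x1234, 0x05))
```

The point of the latching is simply that the link adaptor delivers one byte at a time, while the AIB expects a full 16-bit data word plus control information in parallel.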

4. THE NEW CONTROL SYSTEM SOFTWARE

The programming language used to implement the control software is ANSI C running under the Inmos toolsets, which offer dedicated functions for link communications and parallel processing. The whole software system is written in this high-level language, which enables it to be easily ported to other platforms for more convenient debugging and development.

The software system is organised in a hierarchical way, where the upper levels can make use of the routines in the lower levels to implement more sophisticated functions. The layers include:

LinkProt. This is the lowest layer of the software hierarchy, including the actual communication protocol described in Visnes (1982). Basically, the routines in this layer are responsible for ensuring that the custom-built PUMA transputer interface board (TIB) really looks like a DEC DRV-11 board to the lower-level control section (AIB). Reference [10] has some description of this DEC board.

Figure 2: The New Transputer-based PUMA Control System



The proper data and control signals must be sent to the PUMA to indicate when new commands and data become available, and an acknowledgment, indicating that a command has been received, must be passed back from the PUMA to the TIB. The interface driver routines also incorporate error indicators that inform the control programs and the user of the PUMA's failure to acknowledge any command, or of the user's failure to acknowledge any received requests.

PUMA Util. Routines within this layer are responsible for: 1) arm initialization; 2) arm calibration; 3) arm movement based upon joint space; 4) kinematics (forward and inverse); and 5) other miscellaneous utilities, e.g. transformation between angles and encoder values.

New optical encoder values must be sent to the low-level control hardware at 28 ms intervals to avoid jerking the arm. If the time required to calculate a set of six new encoder values is longer than 28 ms, a process has to be dedicated to sending data to the AIB at the specified rate.
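The dedicated sender process can be sketched as follows (a hedged Python analogue of what would run as a C process on a TRAM; the queue-based hand-off and all names are assumptions). The key behaviour is that the sender keeps the 28 ms rhythm even when the planner is late, by re-sending the latest known set-points.

```python
import queue
import time

def periodic_sender(setpoints, send, period=0.028, cycles=10):
    """Stream six encoder set-points to the joint hardware at a fixed
    rate, re-sending the last set if no new one has arrived."""
    latest = [0] * 6                         # one encoder value per joint
    next_deadline = time.monotonic()
    for _ in range(cycles):
        try:
            latest = setpoints.get_nowait()  # pick up a new set if ready
        except queue.Empty:
            pass                             # otherwise re-send the old one
        send(latest)
        next_deadline += period
        time.sleep(max(0.0, next_deadline - time.monotonic()))

sent = []
q = queue.Queue()
q.put([100, 200, 300, 400, 500, 600])
periodic_sender(q, sent.append, cycles=3)    # sends three frames, 28 ms apart
```

Computing deadlines by accumulation (`next_deadline += period`) rather than sleeping a fixed amount keeps the long-run rate at the required interval even if individual sends jitter.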

PUMACtrl. The routines at this level are responsible for performing the high-level task of moving the PUMA arm to whatever position and orientation the user has specified, provided these are within the PUMA's range and capabilities.

5. PUMA MOTION PLANNING USING GAs

The established and commonly used approach for robot motion planning employs a heuristic exhaustive technique to search the work space of the arm (Chan and Zalzala, 1993). The main idea of the algorithm is to tessellate joint space into a grid of possible motion nodes, where at each motion node, given the position and velocity at the previous node, the possible velocity values are constrained by the dynamics of the arm. The most comprehensive formulation is reported by Sahar and Hollerbach (1986).

The joint space tessellation and graph search scheme [11] presents a very good application for genetic algorithms. GAs reduce the computational time a great deal, which makes it possible to plan motion for a six-DOF industrial robotic manipulator.

5.1 Decoding the problem

[Diagram omitted: grid nodes against time (sec), showing the six allowed transitions from a node.]

Figure 4 Relative transition scheme

Manipulator trajectories consist of a finite sequence of positions (joint angles), and it is convenient to code these into a string of the format:

[α_11, α_21, ..., α_n1; α_12, α_22, ..., α_n2; ...; α_1m, α_2m, ..., α_nm]

where α_ij is the ith intermediate position node of the jth link, n is the number of intermediate position nodes and m is the number of DOFs (6 for the PUMA 560).
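As a concrete sketch, this string can be held in a fixed-size array indexed by node and joint, stored joint-by-joint as in the flat format; the value n = 10 is an assumption for illustration only.

```c
#include <assert.h>

#define N_NODES 10  /* n: intermediate position nodes (assumed value) */
#define N_DOF    6  /* m: degrees of freedom (6 for the PUMA 560) */

/* One chromosome: joint angle alpha[j][i] for node i of joint j,
   stored joint-by-joint exactly as in the flat string format above. */
typedef struct {
    double alpha[N_DOF][N_NODES];
} chromosome_t;

/* Position of alpha_ij in the flat string
   [a11..an1; a12..an2; ...; a1m..anm]. */
int flat_index(int node_i, int joint_j)
{
    return joint_j * N_NODES + node_i;
}
```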

For each joint, the motion space (joint angle against time) is divided into an n×n grid. The initial population is generated using a relative transition scheme as shown in Figure 4, where, from any one node, the arm joint is restricted to moving to only six neighbouring nodes. The space is tessellated such that a transition has a higher probability of moving towards the end point. A simple algorithm incorporating this relative transition scheme was developed to generate trajectories that always arrive at the desired final position, as shown in Figure 5.
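A minimal sketch of such a transition rule follows: from the current grid row a joint moves by one of six offsets, biased towards the goal and clamped so that the final position always remains reachable. The offset set and the maximum step of 3 rows are assumptions for illustration, not the authors' exact scheme.

```c
#include <assert.h>

/* Six candidate row offsets, weighted towards progress (cf. Figure 4). */
static const int NEIGHBOUR[6] = { 3, 2, 1, 0, -1, -2 };

/* Pick the next angle row for one joint; r is a pseudo-random number in
   [0,1). The clamp keeps the goal reachable with at most +/-3 rows per
   remaining step, so every trajectory ends at the desired position. */
int next_row(int row, int goal, int steps_left, double r)
{
    int off = NEIGHBOUR[(int)(r * 6)];
    int nxt = (goal >= row) ? row + off : row - off; /* bias towards goal */
    int lo = goal - 3 * (steps_left - 1);
    int hi = goal + 3 * (steps_left - 1);
    if (nxt < lo) nxt = lo;
    if (nxt > hi) nxt = hi;
    return nxt;
}
```

With steps_left = 1 the clamp collapses to the goal row, which is what forces every generated trajectory to terminate at the final position.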

[Diagram omitted: two trajectories on the grid (joint angle against time) being combined into a valid one.]

Figure 5 Generating a valid trajectory

5.2 Three operators

An initial population of trajectories (from start position to end position) can be generated using the above algorithm. During reproduction, the number of occurrences of the same trajectory selected for crossover is limited, which encourages higher interaction among different trajectories. To prevent any path from dominating the population and leading to premature convergence, only a specific number of copies of the same trajectory are allowed to remain in the population after reproduction; extra copies are replaced by new trajectories.
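The copy-limiting step can be sketched as below; the population size, string length and cap of two copies are assumed values, and trajectories are abstracted to short integer strings.

```c
#include <assert.h>
#include <string.h>

#define POP_SIZE   8
#define STR_LEN    4
#define MAX_COPIES 2   /* assumed cap on identical trajectories */

/* After reproduction, flag every occurrence of a trajectory beyond its
   MAX_COPIES-th copy; the caller refills flagged slots with newly
   generated trajectories. Returns the number of flagged slots. */
int mark_extras(int pop[POP_SIZE][STR_LEN], int replace[POP_SIZE])
{
    int flagged = 0;
    for (int i = 0; i < POP_SIZE; ++i) {
        int copies = 1;
        replace[i] = 0;
        for (int j = 0; j < i; ++j)
            if (memcmp(pop[i], pop[j], sizeof pop[i]) == 0)
                ++copies;
        if (copies > MAX_COPIES) {
            replace[i] = 1;
            ++flagged;
        }
    }
    return flagged;
}
```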


Analogous crossover (Davidor, 1991) is used, and single-point crossover was adopted as an initial study. After choosing a crossover site in one parent string, crossover is performed only if the crossover site of the second parent lies within the proximity circle centred at the first crossing site.

Mutation has a destroy-trajectory operator which, when active, replaces the selected trajectory with a randomly generated one so as to open up new search space. Another mutation operator, the position operator, slightly varies the position of one or more nodes in a path. This operator helps to find trajectories, which may or may not be better, around a 'good' trajectory found by the crossover operator.
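The position operator can be sketched as a small nudge to an interior node; the array shape and the rule that endpoints stay fixed are assumptions for illustration.

```c
#include <assert.h>

#define NODES 10  /* assumed nodes per joint, as in the encoding sketch */

/* Position mutation: shift node k of one joint's path by delta rows.
   Start and end nodes are left untouched so the trajectory still joins
   the required start and final positions. Returns 1 if a change was made. */
int mutate_position(double path[NODES], int k, double delta)
{
    if (k <= 0 || k >= NODES - 1)
        return 0;          /* endpoints are fixed */
    path[k] += delta;
    return 1;
}
```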

Reproduction was controlled to prevent premature convergence, analogous crossover directed sensible crossover operations, and the specially shaped mutation operators promoted new search space. The algorithm has proven far more efficient, about twenty times quicker than the conventional heuristic search technique.

5.3 Search results

The fitness of a string is based on the value of the total time for the PUMA to travel from a start position to a final position. The total time is defined as follows:

J = Σ h_j   (summed over j = 1, ..., n)

That is, the total time is defined as the summation of all the time intervals h_j (h_j being the time interval for the jth segment of motion, calculated by Hollerbach's dynamic scaling scheme).

The end velocities should ideally be zero. These constraints are incorporated as penalties applied to the objective function; that is, a penalty consisting of the absolute end-velocity values |v_i^e| is added to the objective as follows:

J = Σ h_j + 0.1 Σ |v_i^e|   (j = 1, ..., n; i = 1, ..., m)

The optimisation aim is to minimise this objective. The fitness of a chromosome is then given by:

fitness = 2.0 - J / max J

where max J is the maximum objective value in the same generation of the population.
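Put together, the objective and fitness computations read as follows; this is a direct transcription of the formulas above, with array lengths chosen for illustration.

```c
#include <assert.h>
#include <math.h>

/* Objective J = sum of segment times h_j plus 0.1 * sum of absolute end
   velocities; the penalty term pushes the final joint velocities to zero. */
double objective(const double *h, int n, const double *v_end, int m)
{
    double J = 0.0;
    for (int j = 0; j < n; ++j) J += h[j];
    for (int i = 0; i < m; ++i) J += 0.1 * fabs(v_end[i]);
    return J;
}

/* fitness = 2.0 - J / maxJ: the worst string in the generation gets 1.0,
   better (faster) trajectories get values up to just below 2.0. */
double fitness(double J, double maxJ)
{
    return 2.0 - J / maxJ;
}
```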

The PUMA joint actuators' torque bounds were checked to make sure that the trajectories are within the PUMA's capability. The number of generations was decided through experience: in the first experiment, the total number of generations was chosen as 3000, with the algorithm converging at about generation 2000 and a minimum travel time of 0.42 seconds. The whole calculation required about one hour and ten minutes (depending on the speed of the machine).


In the following figures, the solid line is for joint 1, the dashed line for joint 2, the dash-dotted line for joint 3, the star line for joint 4, the plus line for joint 5 and, finally, the x line for joint 6.

The six joints have the following start and final positions (in radians):

Start  (-0.30  0.40  -0.18  0.00  -0.05  0.05)
Finish ( 0.51  -0.42   0.58  0.87   0.64  0.84)

[Plot omitted: six joint trajectories (rad) against time (sec), 0 to 0.45 s.]

Figure 6 Minimum time path

[Plot omitted: six joint torques against time (sec), 0 to 0.45 s.]

Figure 7 Corresponding torque information

Figure 6 shows the time-optimal trajectories found by the GA. Figure 7 shows the torque information. As can be seen, the torques are all within their limit boundaries. Joint 1 exhibits bang-bang motion and is almost always at its extreme value, which suggests that the GA has successfully found the global minimum. The PUMA joint actuators' torque bounds are:

(±97.6, ±186.4, ±89.4, ±24.2, ±20.1, ±21.3)

Figure 8 shows the velocity profiles for the six joints. As can be seen, the end velocities are nearly zero.


[Plot omitted: six joint velocities (rad/s) against time (sec), 0 to 0.45 s.]

Figure 8 Corresponding six velocities

6. CONCLUDING REMARKS

A transputer-based PUMA interface system has been designed, built and tested successfully. With this system, it is possible to use much more advanced control algorithms, such as neural networks and genetic algorithms, for the operation of the PUMA 560 industrial robot in real time.

This design increases the PUMA's capability a great deal. The only custom-built hardware is a single interface board, the TIB, while the remainder of the controller is implemented in easily accessible, portable and flexible software. Thus the new PUMA controller represents a firm foundation upon which new dynamic models, end-effector and environment sensors, and a variety of other research topics related to the advanced control of robots may be studied experimentally.

Genetic algorithms have been used to minimise the total time for the PUMA to move from one point to another. Both simulation and experimentation have shown promising results, and the PUMA travelled much more smoothly along the planned minimum-time path. This research can be extended to redundant manipulators (over 6 DOFs) without much modification. The study has also shown that the minimum-time motion is not necessarily a straight line in joint space.

ACKNOWLEDGEMENT

The authors wish to thank Dr H. Thomason for his help and useful discussions in the initial construction of the TIB. The expertise of Dr K. Yearby and Dr H. Hu (the RRG, Univ. of Oxford) in transputers, especially the C011 link adaptor, contributed a lot to the testing. The financial support is provided by EPSRC (Grant No. GRJ/15797).

REFERENCES

Chan, K. K. and A. M. S. Zalzala (1993). Genetic-Based Motion Planning of Articulated Robotic Manipulators with Torque Constraints, IEE Colloquium on Genetic Algorithms for Control and System Engr., London, May 1993.

Chen, N. and G. A. Parker (1993). An Open Architecture Robot Controller, IEE Workshop Proc. - Systems Engr. for Real Time Appl., 13-14 Sept. 1993, Cirencester, UK.

Corke, P. I. (1991). Operational details of the PUMA 560 robot manipulators, CSIRO tech. rep.

Davidor, Y. (1991). A Genetic Algorithm Applied to Robot Trajectory Generation, in Handbook of Genetic Algorithms (Ed. L. Davis), pp. 144-165.

Goldberg, D. E. (1989). Genetic Algorithms in Search, Optimization and Machine Learning, Addison Wesley.

Goldenberg, A. A. and L. Chan (1988). An Approach to Real-Time Control of Robots in Task Space: Application to Control of PUMA 560 Without VAL-II, IEEE Trans. Industrial Electronics, Vol. 35, No. 2, pp. 231-238, May 1988.

Kazanzides, P., H. Wasti and W. A. Wolovich (1987). A multiprocessor system for real-time robotic control: design and applications, in Proc. IEEE Int. Conf. Robotics and Automation, pp. 1903-1908.

Microcomputer Interface Handbook, Digital, 1980.

Nagy, P. V. (1989). A New Approach to Operating a PUMA Manipulator without Using VAL, Carnegie Mellon Univ. report.

PUMA Robot (Mk I) Technical Manual, Section 8.2: "Electrical Drawings", Unimation, 1982.

Sahar, G. and J. M. Hollerbach (1986). Planning of Minimum-Time Trajectory for Robot Arms, Int. J. Robotics Res., MIT Press, Vol. 5, No. 3, pp. 91-100.

Valavanis, K. P., M. B. Leahy, Jr., and G. N. Saridis (1985). On the real-time control of a VAX 11/750 computer-controlled PUMA-600 robot arm, RPI RAL Tech. Rep. 42, April 1985.

Visnes, R. (1982). Breaking away from VAL, Stanford Univ.

Wang, Q. (1994). ART-CARS Progress Report No. 4, March 14, Univ. of Sheffield.

Zalzala, A. M. S. and K. K. Chan (1994). An Evolutionary Solution for the Control of Mechanical Arms, ICARCV'94 3rd Int. Conf. Automation, Robotics & Comp. Vision, 8-11 Nov. 1994, Singapore.


Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

AUTOMATED CLIENT SERVER CODE GENERATION FROM OBJECT ORIENTED DESIGNS USING HOOD4™

Maurice HEITZ

CISI, 13, rue Villet, 31400 TOULOUSE, FRANCE; tel (33) 61.17.66.66, fax (33) 61.17.66.96.

Email [email protected]

Abstract. This paper presents an approach for the development of reusable, testable distributed systems using the HOOD4 code generation principles for implementing Object Oriented Designs over different target systems (Ada95, C++). HOOD4 is an evolution of the HOOD3 design method (HOOD Technical Group, 1993) recommended by the European Space Agency for all its contractors, and now supports inheritance and multi-target code generation. HOOD4 provides the designer with an Object Oriented framework by means of the HOOD RUN TIME SUPPORT library, together with a design approach and associated code generation rules, shielding applications from complex semantic differences between OS platforms.

Index Terms: Object Oriented Design, Object Oriented Programming, Virtual Node, Verification, Client-Server, Real-Time, Testability, Distribution, Reliability.

1. INTRODUCTION

With the availability of powerful development tools for client-server applications, distributed applications are becoming larger and more complex while trying to integrate more and more object technology. Mastering such developments is quite a challenge as projects are shrinking their budgets.

In this context, Hierarchical Object Oriented Design (HOOD Technical Group, 1993) addresses the development of distributed systems through a post-partitioning approach based on the concept of Virtual Node, allowing extensive use of automated code generation. Often perceived as a heavyweight approach to distribution, it is gaining increasing interest since:

• the technology now supports efficient inter-processor communication mechanisms, rendering post-partitioning fully viable (and possibly more efficient) for loosely coupled systems,

• testability and reliability are more and more required by customers and end-users,

• more and more distributed applications are re-engineered towards client-server architectures,

• distributed object technology (Vinoski, 1995) is going to be used for real time applications,

• the HOOD4 (HOOD Technical Group, 1995) method has developed an object oriented framework by means of the HOOD RUN TIME SUPPORT (HRTS) library together with a design approach and associated code generation rules,


shielding applications from complex semantic differences between OS platforms,

• powerful code generators and configuration handling tools are now used by developers.

In this paper we first recall the HOOD principles and the motivation for enforcing separation principles throughout the development, from design down to code and testing. We give the target code structure associated with the HOOD4 entities and the associated code generation rules. We conclude with an associated development approach allowing post-distribution over Ada (Ada9X Mapping/Revision Team, 1994) or C++ (Stroustrup, 1991) targets in a safe, reliable and test-effective way.

2. HOOD4 TARGET CODE GENERATION

2.1 Enforcing Separation Principle

The challenge of developing flexible, testable and reusable software can be met by enforcing the principle of separation of concerns throughout the development. Such an approach has several advantages over traditional ones:

• the overall complexity is broken down through logical grouping of same concerns, which can be handled by specialised teams or techniques;

• associated and specific logical properties are emphasised, thus making the test and verification activities more efficient.


HOOD4 has the key concept of operation, which is executed by a logical thread (Burns, 1989) with or without constraints. This concept and others related to encapsulation allow the software associated to operations of objects or classes to be structured into three separate parts:

• pure sequential code is supported through the concept of OPCS¹. It implements solely the functional and transformational code of an operation.

• state integrity enforcement code is supported through the concepts of concurrency constraints and state constraints. The latter are described using an Object State Transition Diagram (OSTD) and are implemented through OSTM² code descriptions and associated client precondition and postcondition code (Meyer, 1990), referred to as OPCS_HEADER and OPCS_FOOTER code (see figure 2.3.3 below).

• inter-process communication code is supported through the concept of HOOD protocol constraints and is described by means of ClientObcs and ServerObcs code providing a common infrastructure to all operations for communicating with [remote] processes or threads.

The implementation principles mainly try to enforce this logical structure for HOOD operations and have led to the definition of several logical code parts. Moreover, the target code structure is mapped onto a client-server architecture as soon as the implementation requires several communicating execution threads. Associated ODS³ fields are supported in the HOOD4 SIF (Standard Interchange Format), allowing HOOD4 descriptions to be exchanged between different HOOD toolsets. Code generation rules could then be derived by defining target source code fields (possibly empty) associated to the logical parts according to the type of operation constraints.

• The OPCS_ER⁴ and ClientOBCS⁵ parts refer to the code executed by the client thread⁶.

• The ServerOBCS and OPCS_SER parts refer to code executed by a server thread. The OPCS_HEADER and OPCS_FOOTER parts implement pre- and postcondition code (Meyer, 1990) that enforces the state integrity of the object (Sourouille, 1995) and are executed by a server thread.

• The OPCS_BODY part refers to the functional, algorithmic, transformational code of the operation.
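To make the decomposition concrete, here is a hand-written C sketch (not HOOD4 generator output) of how the HEADER/BODY/FOOTER parts of one state-constrained operation compose; all names and the return-code convention are illustrative.

```c
#include <assert.h>

enum state { STOPPED, EMPTY, NOT_EMPTY_FULL, FULL };
typedef struct { enum state st; int size; } stack_t;

/* OPCS_HEADER: state-constraint precondition (push is illegal when the
   object is stopped or full). */
static int opcs_header_push(const stack_t *s)
{
    return s->st != STOPPED && s->st != FULL;
}

/* OPCS_BODY: purely functional, transformational code. */
static void opcs_body_push(stack_t *s)
{
    s->size += 1;
    s->st = NOT_EMPTY_FULL;
}

/* OPCS_FOOTER: postcondition re-establishing object state integrity. */
static int opcs_footer_push(const stack_t *s)
{
    return s->size > 0;
}

/* The generated operation wires the three parts together. */
int push(stack_t *s)
{
    if (!opcs_header_push(s)) return -1;  /* state constraint violated */
    opcs_body_push(s);
    return opcs_footer_push(s) ? 0 : -2;  /* -2: postcondition failed */
}
```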

1 OPCS = Operation Control Structure
2 OSTM = Object STate Machine
3 ODS = Object Description Skeleton
4 OPCS_ER = OPCS_Execution_Request
5 OBCS = Object Control Structure
6 thread = basic schedulable entity = process = task


2.2 Target Code Architecture

Implementation of the above principles could have led to numerous solutions according to the features of the different targets and operating systems. HOOD4 tried, however, to define common code structures whatever the target and the kind of HOOD object. For example, the non-distributed code has to be reusable unchanged when partitioning it into a distributed one over client and server Virtual Nodes. The design pattern associated to inter-process communication code (Gamma, 1994) can also be reused when dealing with inter-VN communications, thus matching the remote object invocation mechanism of the CORBA Object Request Broker (OMG, 1991), based on the proxy design pattern (Vinoski, 1993). The definition of an HRTS layer hiding target characteristics, as well as a number of recurrent services, from the HOOD application helped a lot in the standardisation of logical code parts and in automating their generation. The HRTS library has been implemented in C++/UNIX⁷ and Ada/UNIX, and is publicly available (HUG, 1995) to users (for their own optimisations, porting to specific targets, etc.).

For an object or class of name <NAME>, with protocol constraints implying a client-server architecture, three kinds of target units are generated, as illustrated in figure 2.2:

• a <NAME> module that provides the same specifications and operation signatures as when non-constrained. It contains OPCS_ER body code allowing clients to queue their requests and process return parameters after operations are executed within the server space. This module isolates the client from any subsequent spreading of its provided service implementation towards additional servers. Thus the client code of an object is invariant whatever the choice or refinement of the implementation of its provided services.

• a <NAME>_RB module that provides an object request broker between the client and the effective <NAME>_SERVER module. This module fully de-couples <NAME> from the server <NAME>_SERVER code. It is in this module that any optimisation with respect to network contention may take place.

• a <NAME>_SERVER module that includes all the code associated to state and concurrency constraints as well as the functional code. This allows code to be generated which is functionally independent from any allocation schema on any execution infrastructure. The SERVER code will remain unchanged whatever the execution structure, whether local, remote or within VNs. Moreover, such SERVER code can be tested and developed independently of a future or final allocation on a distributed/client-server architecture.

7 the C++ version is built on top of the ACE library and UNIX wrappers developed by D. Schmidt (Schmidt, 1994)

[Diagram omitted.]

Figure 2.2 Client-Server Target Code Structure for an Object or Class.

2.3 Target Implementation and Illustrations

Let us take the well-known STACK example for a concise illustration of the associated target code in Ada⁸. Some complexity is merely added by defining it as a class. The TStack⁹ class has operations PUSH and POP HSER-constrained, operation STOP state- and ASER-constrained, and operation START only state-constrained; it is semantically correct only when the behaviour expressed in the OSTD of figure 2.3.1 is enforced.

[Diagram omitted: left, the HOOD4 STACK graphical representation; right, the Ada code representation (package body TStack with procedures START, PUSH, POP, STOP and function SIZE; package TStack_OSTM; HRTS FSMs) and the STACK OSTD.]

Figure 2.3.1 TStack Graphical Representations¹⁰

8 Translation into C++ is straightforward since we do not use any "tasking feature" here. As "thread-supporting UNIX" targets are available, translation from Ada to C++ is more direct. Full source code of this example in C++ or Ada may be obtained directly from the author.
9 The following naming convention is used: a class identifier starts with T, a type identifier with "T_".

10 Target Ada units are represented as squared boxes,


State Enforcement Code: Figure 2.3.2 illustrates the target code for state-constrained operations, which is standardised in an FSM¹¹ where transitions are only triggered by provided operation execution requests.

with HRTS_PE;  -- for all global definitions
with TFSMs;
use type HRTS_PE.T_Integer;
package Stack_OSTM is
   type T_OSTDState is (EMPTY, FULL, N_EF, STOPPED, UNDEFINED);
   type T_OSTDOperation is (START, STOP, PUSH, POP);
   NB_MAX_STATES : constant HRTS_PE.T_Integer :=
      T_OSTDState'Pos (T_OSTDState'Last);
   NB_MAX_OPERATIONS : constant HRTS_PE.T_Integer :=
      T_OSTDOperation'Pos (T_OSTDOperation'Last);
   NB_MAX_TRANSITIONS : constant HRTS_PE.T_Integer :=
      NB_MAX_STATES * NB_MAX_OPERATIONS;
   package FSM is new TFSMs
      (T_Operation => T_OSTDOperation,
       T_State => T_OSTDState,
       NB_MAX_TRANSITIONS => NB_MAX_TRANSITIONS);
   function Stack_FSM return FSM.TFsm;
end Stack_OSTM;

package body Stack_OSTM is
   function Stack_FSM return FSM.TFsm is
      theFSM : FSM.TFsm;
   begin  -- initial state is STOPPED
      FSM.Create (theFSM, 4, 1, STOPPED);
      FSM.Trans (theFSM, EMPTY, START, EMPTY);
      FSM.Trans (theFSM, N_EF, PUSH, N_EF);
      FSM.Trans (theFSM, N_EF, POP, N_EF);
      FSM.Trans (theFSM, FULL, POP, N_EF);
      FSM.Trans (theFSM, FULL, STOP, STOPPED);
      FSM.Trans (theFSM, STOPPED, START, EMPTY);
      return theFSM;
   end Stack_FSM;
end Stack_OSTM;

Figure 2.3.2 - OSTD Implementation Illustration

Standard Code: Figure 2.3.3 illustrates the code associated to a HOOD4 class module with state constraints. Note that in HOOD4, operations of a class have a special parameter with the reserved name "Me".

with HRTS_PE;
with Stack_OSTM;
package TStack is
   type TStack is tagged private;
   procedure Start (Me : in out TStack);
   procedure Stop (Me : in out TStack);
   procedure Push (Me : in out TStack;
                   Myitem : in HRTS_PE.T_Integer);
   procedure Pop (Me : in out TStack;
                  Anitem : out HRTS_PE.T_Integer);
   function Size (Me : in TStack) return HRTS_PE.T_Integer;
private
   StackSize : constant HRTS_PE.T_Integer := 20;
   type T_StackBuffer is
      array (0 .. StackSize - 1) of HRTS_PE.T_Integer;
   type TStack is tagged record
      CurrentSize : HRTS_PE.T_Integer := 0;
      StackBuffer : T_StackBuffer;
      FSM : Stack_OSTM.FSM.TFsm := Stack_OSTM.Stack_FSM;
   end record;
end TStack;

package body TStack is
   package FSM renames Stack_OSTM.FSM;
   procedure Start (Me : in out TStack) is
   begin
      -- OPCS_HEADER (automatically generated)
      FSM.Fire (Me.FSM, Stack_OSTM.START);
      -- OPCS_BODY (extracted from ODS fields)
      Me.CurrentSize := 0;
      -- OPCS_FOOTER (automatically generated)
   end Start;

with their names at the top, and required units represented as attached small boxes.
11 FSM = Finite State Machine


   procedure Push (Me : in out TStack;
                   Myitem : in HRTS_PE.T_Integer) is
   begin
      -- OPCS_HEADER (automatically generated)
      FSM.Fire (Me.FSM, Stack_OSTM.PUSH);
      -- OPCS_BODY (extracted from ODS fields)
      Me.StackBuffer (Me.CurrentSize) := Myitem;
      Me.CurrentSize := Me.CurrentSize + 1;
      if Me.CurrentSize = StackSize then
         FSM.Set (Me.FSM, Stack_OSTM.FULL);
      end if;
      -- OPCS_FOOTER (automatically generated)
   end Push;

   function Size (Me : in TStack) return HRTS_PE.T_Integer is
   -- non-constrained operations have empty HEADER and FOOTER code
   begin
      return Me.CurrentSize;
   end Size;

   -- procedure Stop is like Start
   -- procedure Pop is like Push
end TStack;

Figure 2.3.3 Class TStack Standard Code Sample

Client-Server Code: Figure 2.3.4 illustrates the target code structure associated to a HOOD4 class when the implementation requires at least one client and one server execution thread. Such a target structure may be directly mapped onto C++ modules and can be optimised when using Ada tasking or thread-supporting targets; e.g. marshalling (Ada9X Revision Team, 1995) in OPCS_ER code and unmarshalling in OPCS_SER code are not needed when threads or tasks share parameters in the same memory partition.

[Diagram omitted: client space and thread (package body TStack with OPCS_ER code for START, PUSH, POP and SIZE, package ClientOBCS and IPC message) communicating with the server space and thread (package bodies TStack_RB and TStack_SERVER with ServerOBCS, IPC message, and the state and concurrency constraint supporting code).]

Figure 2.3.4 Class TStack Target Structure for a Client-Server Architecture Implementation.


TStack Client Code: Note that the TStack specification is unchanged with respect to the standard code, except for the private part, which contains OBCS client code and IPC message instances.

with Items; use Items;
package TStack is
   -- spec as in figure 2.3.3
private
   -- but the private part is different
   type TStack is record
      OBCS : TClientOBCS.TClientOBCS;
      Message : TMsg.TMsg;
   end record;
end TStack;

package body TStack is
   -- real data all in the TStack_SERVER object
   procedure Push (Me : in out TStack;
                   Myitem : in HRTS_PE.T_Integer) is
   begin
      TMsg.Initialize (Me => Me.Message,
                       Sender => Stack_PE.STACK,
                       Sendee => Stack_PE.STACK_RB,
                       Operation => Stack_PE.PUSH,
                       Cnstrt => HRTS_PE.NO_CONSTRAINT,
                       ParamSize => 1);
      declare
         PtrStream : Stream_Access := TMsg.GetParams (Me.Message);
      begin
         T_Integer'Write (PtrStream, Myitem);
      end;
      TClientObcs.Insert (Me.OBCS, Me.Message);
      -- insert and wait on return IPC message
      -- no wait in case of ASER constraint
      Status := TMsg.GetX (Me.Message);
      case Status is
         when HRTS_PE.X_OK => null;
         when others => raise;
      end case;
      if not (Me.Message.CSTRNT = ASER) then
         OBCS.FREE (MSG);  -- deallocate
      end if;
      -- <Exception handler>
   end Push;
   -- similar code for Pop, Stop, Start and Size

Figure 2.3.5 Class TStack Client Code Sample

TStack_RB Code:

with Items; use Items;
with TStack_SERVER;  -- exactly the same code as in figure 2.3.3
package TStack_RB is
   procedure Start;
   procedure Stop;
   procedure Push;
   procedure Pop;
   function Size return HRTS_PE.T_Integer;
end TStack_RB;

with HRTS_PE; use type HRTS_PE.T_Integer;
with Stack_PE;
with Stack_OSTM;
with TMsg;
with TQPool;
with TServerObcs;
with TStack_Server;
package body TStack_RB is
   TheStack : TStack_Server.TStack;  -- local
   Message : TMsg.TMsg;
   OBCS : TServerOBCS.TServerOBCS;
   PtrStream : Stream_Access;

   procedure Stop is  -- LSER constrained
   begin  -- OPCS_SER code
      TStack_Server.Stop (TheStack);
   end Stop;

   procedure Push is  -- HSER constrained
      Item : HRTS_PE.T_Integer;
   begin  -- OPCS_SER code
      HRTS_PE.T_Integer'Read (PtrStream, Item);
      TStack_Server.Push (TheStack, Item);
   end Push;

   function Size return HRTS_PE.T_Integer is
      aSize : HRTS_PE.T_Integer;
   begin
      aSize := TStack_Server.Size (TheStack);
      HRTS_PE.T_Integer'Write (PtrStream, aSize);
      return aSize;
   end Size;


   procedure RB_Dispatcher is
      PrevSender : T_HOODObject := Message.Sender;
   begin
      TMsg.SetSender (Message, Message.Sendee);
      TMsg.SetSendee (Message, PrevSender);
      PtrStream := TMsg.GetParams (Message);
      case TMsg.GetOperation (Message) is
         when Stack_PE.START => Start;
         when Stack_PE.STOP  => Stop;
         when Stack_PE.PUSH  => Push;
         when others =>
            EXCEPTIONS.LOG ("TStack_RB" &
               T_X_VALUE'Image (X_UNKNOWN_OPERATION));
            TMsg.SetX (Message, X_UNKNOWN_OPERATION);
      end case;
      -- <Exception handler> with
      --   TMsg.SetX (Message, HRTS_PE.X_BADREQUEST);
      TServerObcs.Insert (OBCS, Message);
   end RB_Dispatcher;

begin  -- at package elaboration
   loop  -- polling schema
      TServerObcs.Remove (OBCS, Message);
      case TMsg.GetSender (Message) is
         when Stack_PE.STACK => RB_Dispatcher;
         when others =>
            EXCEPTIONS.LOG ("TStack_RB" &
               T_X_VALUE'Image (X_UNKNOWN_SENDEE));
            TMsg.SetX (Message, X_UNKNOWN_SENDEE);
            TMsg.FlushParams (Message);
            TServerObcs.Insert (OBCS, Message);
      end case;
   end loop;
end TStack_RB;

Figure 2.3.6 STACK Request Broker Code Sample

By default there is only one "RB package" per class, whatever the number of instances in clients. The RB_Dispatcher and the RB package support the management of protocol constraints (releasing the client before or after execution of the server code) and leave room for dedicated tuning according to the specific needs of a client-server application:

The simplest strategy implements synchronous behaviour by polling and queuing client requests and servicing them one at a time. More demanding applications may require asynchronous behaviour with sophisticated parallel handling of client requests. Such a pattern is illustrated by the "Reactor/Acceptor" design pattern proposed in the C++/UNIX ACE library (Schmidt, 1995), which was implemented in the C++ version of the HRTS library.

2.4 VN Illustration

Virtual Nodes allow a designer to encapsulate a set of co-operating classes into modules distributable over a physical network. The HOOD4 generation rules fully automate the elaboration of the Virtual Node Control Structure (VNCS), which handles all remote communications according to the allocation of HOOD objects, as represented in figure 2.4.1. The code generation rules for VNs enforce a client-server architecture where:

• an allocated object accesses a remote one indirectly via a local proxy object, which is a surrogate for the remote one and whose target code is the one generated with OPCS_ER code instead of OPCS code; server target code for remote server objects is generated according to OPCS_SER code;


• one ClientVncs and one ServerVncs target unit are generated for each VN. This software is merely a variant of the OBCS interprocess communication code dealing with "remote" communication.

[Diagram omitted: a VN and a Remote_VN with their generated VNCS units.]

Figure 2.4.1 - Representation of VN target units

Object and class allocation shall be carefully performed in order to limit inter-VN communication overhead. A VN may be locally or remotely called by another VN. Figures 2.4.2 and 2.4.3 illustrate the TStack client-server modules allocated onto two remote VNs.

package ClientVN is
   -- visibility on all types and class packages defining
   -- the data exchanged with server VNs and client VNs
end ClientVN;

package body ClientVN is
   -- encapsulation of package TStack as in figure 2.3.5
   -- definition of the object/operation allocation table
   -- instance of a ClientObcs object with the VN allocation
   -- table as parameter, thus defining the CLIENTVNCS code
end ClientVN;

Figure 2.4.2 Client VN Code Sample

package ServerVN is
   -- visibility on all types and class packages defining
   -- the data exchanged with server VNs and client VNs
end ServerVN;

package body ServerVN is
   -- encapsulation of package TStack as in figure 2.3.6
   -- definition of Object/operation allocation table
   -- instance of a ServerObcs object with, as parameters, the VN
   -- allocation table, thus defining the SERVERVNCS code
end ServerVN;

Figure 2.4.3 ServerVN Code Sample

3. DEVELOPING CLIENT-SERVER APPLICATIONS WITH HOOD4

The features of HOOD4 code generation for client-server architectures allow a designer to implement a post-partitioning approach for client-server applications:

• the design is performed and implemented first as "non distributed"


• the design is then re-engineered into distributable units by means of VNs

Developing such a non-distributed code first is globally efficient in that:

• it provides at hand a prototype of a solution that highlights the logical properties of the system. It is then possible to reason about the target implementation constraints, instrument them, and prototype them. Thus design decisions can be justified, and the testing process is more efficient.

• it provides a logical model, possibly defining a design pattern or a generic architecture independent of any target infrastructure, which can be reused on similar applications.

• we believe that it is always easier to develop a simplified system (and possibly redevelop it later) than to start a system integrating all constraints from scratch.

Of course, distribution constraints are placed on top of the non-distributed design, especially those concerning physical memory access (parameters of operations cannot be defined as pointers). Such constraints can either be handled at the cost of extra overhead by "deep copy" techniques, or by configuring VNs so as to restrict remote communications. Inheritance and attribution, on the other hand, may still introduce additional complexity. We have started to establish allocation rules where a full inheritance tree must be allocated in the same server space, and where additional inheritance is forbidden on classes in client space. The allocation of objects and the physical configuration furthermore allow a trade-off between the logical level and the physical configuration with regard to communication throughput. Analysis of data sharing conflicts may lead to further break-down of the objects, or to re-allocation of the objects in a same VN.

4. CONCLUSION

The work presented is only a first step towards more automation in the development of distributed applications. We want to thank D. Schmidt for his work on ACE support (Schmidt, 1994), without which we would not have been able to have a first implementation of the HRTS. A parallel implementation in Ada could not yet succeed in using the RPC and distribution facilities of Ada.

We believe that our approach is a viable one within the economical constraints of today's projects. The expected benefits are manifold:

• at the primary level, systematic reuse of the HRTS library and ACE frameworks:

HRTS EXCEPTIONS module for exception logging, tracing and management.

HRTS FSMs module for FSM implementations

HRTS OBCS modules for inter-process communication implementations

• effective separation of functional code from dynamic and pre/post condition code. This should lead to dual parallel developments with prototyping, and verification of dynamic code in parallel with functional code development.

As a result, the suggested HOOD4 development approach for complex systems should lead to solutions matching the following constraints:

• independence with respect to the target hardware configuration, a requirement which is more and more expressed on large projects;

• portability to several targets, a growing demand on large projects where identical software pieces are running on different platforms;

• reusability on frozen parts of a given application domain (reuse of high-level architecture and/or parts of the designs);

• maintainability, which is directly improved if the above constraints are already fulfilled.

5. REFERENCES

Ada9X Mapping/Revision Team (1994), Annotated Draft Version 5.0 of the Programming Language Ada, and Rationale for the Programming Language Ada, Version 5.0, Intermetrics, ISO/IEC JTC1/SC22 WG9 N 207.

Burns, A. and A. Wellings (1989), Real-Time Systems and their Programming Languages, Addison-Wesley.

Gamma, E., R. Helm, R. Johnson and J. Vlissides (1994), Design Patterns: Elements of Reusable Object-Oriented Software, Addison-Wesley.

HOOD Technical Group (1993, 1995), B. Delatte, M. Heitz, J.F. Muller, editors, "HOOD Reference Manual", Prentice Hall and Masson, 1993, and "HOOD Reference Manual release 4", to be published 1995.

HUG (1995), "HOOD User Manual", C. Pinaud, M. Heitz, editors, HOOD Users Group A.I.S.B.L., c/o SpaceBel Informatique, 111, rue Colonel Bourg, B-1140 Brussels, Belgium, tel (32) 2.27.30.46.11, fax (32) 2.27.36.80.13.

Meyer, B. (1990), Object-Oriented Software Construction, ISBN 0-8053-0091, Benjamin Cummings.

OMG (1991), The Common Object Request Broker: Architecture and Specification, OMG doc.

Sourouille, J.L. and H. Lecoeuche (1995), Integrating State in an OO Concurrent Model, Proceedings of the TOOLS EUROPE 95 Conference, Prentice Hall.

Stroustrup, B. (1991), The Annotated C++ Reference Manual, Addison-Wesley.

Vinoski, S. (1993), Distributed Object Computing with CORBA, C++ Report, vol. 5.

Vinoski, S. and D. Schmidt (1995), Comparing Alternative Client-Side Distributed Programming Techniques, C++ Report, May/June 1995 issues.

Schmidt, D. (1994), ASX: An Object-Oriented Framework for Developing Distributed Applications, Proceedings of the 6th USENIX C++ Conference, Cambridge, MA, April 1994.

Schmidt, D. and P. Stephenson (1995), Using Design Patterns to Evolve System Software from UNIX to Windows NT, C++ Report, March 1995.


Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

TEMPORAL PROPERTIES IN DISTRIBUTED REAL-TIME APPLICATIONS: COOPERATION MODELS AND COMMUNICATION TYPES

L. Vega Saenz and J.-P. Thomesse

CRIN - CNRS URA 262, ENSEM, 2, av. de la Forêt de Haye
F-54516 Vandœuvre-lès-Nancy
email: {vegasaen, thomesse}@loria.fr
fax: (33) 83 44 07 63

Abstract. Real-time distributed applications are composed of a set of cooperative tasks that interact with each other. These interactions are time constrained because of the reactive nature of the application. There are also time coherence and consistency problems arising from the distributed nature of the system. We present a model based on the concept of events to describe precisely the time properties of cooperating processes on action and data interactions, and the time characteristics of the communication services. A temporal logic is used for the formal description.

Key Words. Distributed systems; Real-time; Communication.

1. INTRODUCTION

All distributed applications are composed of a set of cooperative tasks. Cooperation means that these tasks have a delimitation of scope and responsibility which implies a distribution of services, resources and data. These must be shared and available to all system components, which interact with each other to access these distributed features. So, the term cooperation always involves a set of distributed entities that work together and interact with each other. These interactions are time constrained because of the reactive nature issued from the responsive requirements of the system environment. An interaction is performed by means of a communication activity that can be characterized by the role and the number of the participating tasks. We distinguish two kinds of interactions: an action-oriented and a data-oriented one. The concept of cooperation models allows us to express the nature of the interaction and the roles of the participating entities within the interaction. We call communication types the possible combinations issued from the number of tasks participating in an interaction. In addition to the time responsive requirements, the distributed nature of these applications makes it impossible to have the same global physical time (Kopetz and Kim, 1990). This is the cause of the existence of time coherence properties to be verified on the interactions between the distributed cooperative tasks (Thomesse, 1993). So, we need a model that allows us to specify these interactions between tasks and their associated time properties, in order to determine what kind of communication services are required to meet their constraints. The model is based on simple concepts: logical conditions, events, occurrences, actions and data. These concepts are sometimes similar; it depends only on the point of view or on the abstraction level that we consider. These concepts will be formally defined with temporal logic.

2. TIME PROPERTIES IN PROCESS INTERACTIONS

The behavior model of any kind of system can be described by its externally observable events and by the time relations between these events (D. Delfieu and Sahraoui, 1993; Koymans, 1990). The basic concepts of our time model are logical conditions and occurrences of events. They give respectively a static and a dynamic dimension to the interaction model. Before describing the time properties over interactions, we introduce some concepts and definitions: first the concept of events and the time notions associated to the expression of time constraints. Events are in fact the basic notion to express time constraints over the application components and their interactions. We show a classification of constraints over the different kinds of events, and then how these constraints are used to describe time properties on action and data interactions. These notions will help us to define the time properties over the cooperation models, with an example, in the next section.

2.1. Events

An event represents an instantaneous change between two specific states of the application or of its environment. It is a condition that arises in the computer system or its environment and that requires some specific processing. Its occurrences are the instants on the time axis where the assertion over this condition changes from false to true. So, we define an event as "a logic condition whose occurrence is the time instant when the assertion over the logic condition changes from false to true". In a more formal way:

Definition 1 (Event). An event E is a predicate whose assertion is associated to a logic condition issued from the application behavior. We note Ei the i-th occurrence of event E, and (Ei)i=1,...,n ∈ OE is the ordered set of the event occurrences along time: E0 → ... → Ei → Ei+1 → ..., where → indicates the successor relation and ∀i, Ei → Ei+1. We have a time-stamping function d : OE → TIME, where TIME is an ordered set of values representing the passing of time (i.e., a system clock), and ∀i, d(Ei) ≤ d(Ei+1). □

The time window is a concept which allows us to express simultaneity of interactions. A time window is a bounded time interval which is characterized by a starting time and a delay, or by a starting time and an end time, which are application dependent; the degree of resolution of times and delays is implementation dependent (Technical Report, TCCA (DTR 12178), 1992).

Definition 2 (Time window). A time window ΔT is a bounded interval [ts, te] where ts is the start instant, te is the end instant, ΔT = |te − ts| with ts, te ∈ ℕ and te − ts > 0, ts = start(ΔT), te = end(ΔT). □

2.2. Time Constraints over Event Occurrences

In this section we show a classification of time constraints over event occurrences. We classify the time constraints according to the possible links between occurrences: constraints associated to one event occurrence, constraints over the sequence of occurrences of a single event, and constraints over the occurrences of related events.


One Event Occurrence. We have only two possible constraints associated to one event occurrence: it must occur before or after a predetermined date. A third one can be derived from the two previous ones: it must occur between two dates.

Definition 3 (Earliest time constraint). For the i-th occurrence of event E with an associated earliest time constraint te, the respective logic assertion is: Ei ∧ (d(Ei) ≥ te). □

Definition 4 (Deadline time constraint). For the i-th occurrence of event E with an associated deadline constraint td, the respective logic assertion is: Ei ∧ (d(Ei) ≤ td). □

Definition 5 (Time window constraint). For the i-th occurrence of event E within a time window ΔT, the respective logic assertion is: Ei ∧ (start(ΔT) ≤ d(Ei) ≤ end(ΔT)). □
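Definitions 3-5 can be checked mechanically against time-stamped occurrences. The following is a minimal Python sketch (the function names are ours, for illustration only; the paper itself stays at the level of logic formulas):

```python
def meets_earliest(d_Ei, t_e):
    """Definition 3: the occurrence must not happen before date t_e."""
    return d_Ei >= t_e

def meets_deadline(d_Ei, t_d):
    """Definition 4: the occurrence must happen no later than date t_d."""
    return d_Ei <= t_d

def in_window(d_Ei, window):
    """Definition 5: the occurrence must fall inside [start, end]."""
    start, end = window
    return start <= d_Ei <= end

# An occurrence time-stamped at t = 7 against the three constraint kinds:
ok = meets_earliest(7, 5) and meets_deadline(7, 10) and in_window(7, (5, 10))
```

Note that `in_window` is exactly the conjunction of the two other checks with t_e = start(ΔT) and t_d = end(ΔT), which is the "third one" derived above.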

Single Events. We can call these kinds of constraints frequency constraints over event occurrences, because they characterize the timing behavior of the sequence of occurrences of one event. Here we distinguish minimal, maximal, exact (periodicity) and time window (jitter) constraints. The periodicity constraint specifies the exact distance that must be met between two successive event occurrences.

Definition 6 (Periodicity). An event E is periodic with period ΔTp if: ∀i ((Ei ∧ ○Ei = Ei+1) ⇒ (d(Ei) + ΔTp = d(Ei+1))). □

The minimal arrival rate (Koymans, 1990) expresses the minimal time distance between two event occurrences (e.g., an assumption about the rate of a system stimulus).

Definition 7 (Minimal arrival rate). A minimal rate of occurrences noted ΔTmin is expressed by: ∀i ((Ei ∧ ○Ei = Ei+1) ⇒ (d(Ei) + ΔTmin ≤ d(Ei+1))). □

The maximal arrival rate expresses the maximal time distance between two event occurrences; it can also be an assumption about the rate of occurrences of a certain event.

Definition 8 (Maximal arrival rate). A maximal rate of occurrences noted ΔTmax is defined by: ∀i ((Ei ∧ ○Ei = Ei+1) ⇒ (d(Ei) + ΔTmax ≥ d(Ei+1))). □

The jitter is a time window constraint applied to a periodic event. It expresses the permissible time drift between two event occurrences (e.g., an assumption about the jitter of data packet arrivals in multimedia applications (Towsley, 1993)).

Definition 9 (Jitter). An event E has a jitter constraint ΔTjitter = ΔTmax − ΔTmin if: ∀i ((Ei ∧ ○Ei = Ei+1) ⇒ (d(Ei) + ΔTmin ≤ d(Ei+1) ≤ d(Ei) + ΔTmax)). □
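Definitions 6-9 quantify over the whole sequence of occurrences. They can be sketched as checks over a list of time-stamps d(E0), d(E1), ... (again an illustrative Python fragment under our own naming, not the authors' formalism):

```python
def is_periodic(ds, period):
    """Definition 6: successive occurrences exactly `period` apart."""
    return all(b - a == period for a, b in zip(ds, ds[1:]))

def respects_min_rate(ds, t_min):
    """Definition 7: at least t_min between successive occurrences."""
    return all(b - a >= t_min for a, b in zip(ds, ds[1:]))

def respects_max_rate(ds, t_max):
    """Definition 8: at most t_max between successive occurrences."""
    return all(b - a <= t_max for a, b in zip(ds, ds[1:]))

def within_jitter(ds, t_min, t_max):
    """Definition 9: every inter-occurrence gap inside [t_min, t_max]."""
    return respects_min_rate(ds, t_min) and respects_max_rate(ds, t_max)

# A nominally 10-unit period with a one-unit drift on the third occurrence:
stamps = [0, 10, 21, 30]
```

Here `stamps` fails strict periodicity (gaps of 10, 11, 9) but satisfies a jitter constraint with ΔTmin = 9 and ΔTmax = 11, which is the distinction the definitions are drawing.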

Dependent and Related Events. An event is not an isolated entity; it is generally related to other system events. We classify here the timing behavior of the relations between occurrences of several events. We distinguish two kinds: a causal relation and a simultaneity one. We call the first one response time; it indicates the time constraints on the occurrences of two events linked by a causality relationship: a cause event (stimulus) generates the production of another, consequence event (response). It is generally used to express a maximal duration on the time relation between an environment stimulus and its respective system response (deadline). It can also represent a processing time, because we can consider a processing activity as a causal relation: the occurrence of the start-event of a processing (e.g., a procedure call) will generate an end-event (e.g., result available). This last time relation is normally expressed by an exact distance (delay).

Definition 10 (Response time). A response time constraint between the i-th stimulus Ei^s and its associated response Ei^r is expressed as a maximal time bound ΔTr: Ei^s ⇒ ◇(Ei^r ∧ |d(Ei^s) − d(Ei^r)| ≤ ΔTr). □

We call the second one time coherence. It links the occurrences of multiple events by a relation of simultaneity, i.e., their occurrences must be produced at the same time. "At the same time" is not possible if we do not consider a time granularity and a tolerance. So event time coherence is defined over two or more event occurrences that must be produced within a predetermined time window.

Definition 11 (Time coherence). The i-th occurrences of a set of events Ei^j, j = {1,...,n}, are called time coherent within a time window ΔTc if: ∀j (Ei^j ⇒ (start(ΔTc) ≤ d(Ei^j) ≤ end(ΔTc))). □
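Over concrete time-stamps, definitions 10 and 11 reduce to two simple checks. The sketch below is ours, in Python, for illustration only:

```python
def response_in_time(d_stimulus, d_response, t_r):
    """Definition 10: the response occurs within a bound of its stimulus."""
    return abs(d_stimulus - d_response) <= t_r

def time_coherent(ds, window):
    """Definition 11: all related occurrences fall inside the window."""
    start, end = window
    return all(start <= d <= end for d in ds)

# A stimulus at t = 0 answered at t = 4 under a bound of 5 units,
# and three related occurrences tested for coherence within [3, 6]:
resp_ok = response_in_time(0, 4, 5)
coh_ok = time_coherent([3.2, 4.0, 5.9], (3, 6))
```

Note that `time_coherent` checks n occurrences against one shared window, which is precisely how the definition captures "simultaneity up to a tolerance".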

2.3. Action and Data Time Coherence

In the previous section we defined events and showed a classification of time constraints over events. But events are only an abstraction allowing us to model conditions and the evolution of a time constrained application. They allow the expression of constraints over the two processing elements of the system: actions and data. These notions lead to an unambiguous definition of the so-called properties in (Technical Report, TCCA (DTR 12178), 1992): time coherence and space consistency. Time coherence is a property of a list of variables. It indicates whether or not the value of each variable in the list has been produced and transmitted and/or received within a given time window. Space consistency is a property of duplicated lists of variables over distributed sites. It indicates whether or not all the copies are identical at a given time or within a given time window. Thus, these properties are defined over data, but they cannot really be defined and verified without considering the actions executed over these data.

Interaction. We call interactions the links which allow us to describe the cooperation activity between distributed processes. We distinguish two kinds: action and data interactions. They are not independent of each other; they are rather closely linked. We need both of them because they allow the time requirements in system cooperation to be expressed clearly.

Actions. Actions represent the computations performed by the application components, so they can also be called services, processes or tasks. In order to analyze a system at different scope levels, an action at a certain level can be considered as an event at the upper abstraction level. For example, at the communication level, a variable exchange is an action modeled by many events: request reception, PDU (Protocol Data Unit) coding, PDU sending, PDU reception, PDU decoding, sending of the acknowledge PDU, reception of the acknowledge PDU, indication and confirmation signaling. At the level above, this action can be represented as a single event "rendezvous-data-exchange" which represents a synchronization point between the producer and the consumer, and triggers a particular related processing at the consumer site. Actions can be described by a set of events representing their external and internal associated conditions, whose occurrences allow their behavior to be described. An action can be described by at least two event occurrences: a start-event and an end-event occurrence. So, using definition 11, we can say that an action occurs within a time window ΔTtc if all its related events are time coherent within ΔTtc. By induction, a set of actions are time coherent if all their related events are time coherent.

Definition 12 (Time coherence of actions). The i-th executions of a set of actions Aj, j = {1,...,n}, each one described by a set of event occurrences Ei^{k,j}, k = {1,...,mj}, are called time coherent within a time window ΔTc if: ∀j,k (Ei^{k,j} ⇒ (start(ΔTc) ≤ d(Ei^{k,j}) ≤ end(ΔTc))). □

Data. Data are the results provided by the execution of actions, and they are needed to perform actions. They establish in fact the information links between cooperating actions. Data coherence is defined through the actions on data. We can distinguish five actions associated to any kind of data: production, sending (writing), transmitting (storing), receiving (reading) and consuming. The actions are almost the same whether the data transaction is communication oriented or store oriented (the actions indicated in parentheses are the store-oriented ones); only the semantics of the actions accessing the communication/store medium changes. Data coherence is determined by the time coherence of the associated actions, so we identify five time windows associated to each data operation (Lorenz, 1994). For instance, a set of data are time coherent on production if their associated production actions are time coherent within the production time window (cf. definition 12). Over a data transaction, an exchange of a set of data is time coherent if all the actions associated to each data item are time coherent within their respective time windows. We must identify two kinds of data time coherence: on multiple data and on single data. Multiple data time coherence corresponds to multiple data produced by distributed sites and consumed on one site, i.e., the data time coherence in (Technical Report, TCCA (DTR 12178), 1992). Single data time coherence corresponds to a data item produced by one site of which multiple copies must be consumed by distributed sites, i.e., the data space consistency in (Technical Report, TCCA (DTR 12178), 1992).

3. CASE STUDY

We will show, using the previous notions, an example frequently found in time constrained applications: the periodic production of a variable that must be received by N consumers. We will use this example to express its associated time constraints. The time constraints are expressed using temporal logic (Manna and Pnueli, 1981) and the time constraints defined in section 2.2. Then we will show the time characteristics of this data exchange considering two different kinds of service provided by a communication system: a client-server and a producer-consumer oriented one.


Fig. 1. Multicast of one data


Fig. 2. Constraints on periodic multicast

3.1. Interaction Constraints

The producer-multiconsumer interaction is in fact a multicast data exchange, as shown in figure 1. We distinguish in this example three constraints, illustrated in figure 2:

1. Periodicity in the production of periodic data. This constraint is linked to the periodicity of the Send event, which represents the send action that transmits data to the consumers. This constraint is expressed on all i data transmissions as: ∀i ((Si ∧ ○Si = Si+1) ⇒ d(Si) + ΔTp = d(Si+1))

2. Deadline associated to data arrival at the consumer sites. This constrains the arrival of the data (Receive) at the consumers. This constraint, noted ΔTr, is expressed on all i data transmissions to j = {1...n} consumers as: ∀i (Si ⇒ ◇(∀j (Ri^j ∧ |d(Si) − d(Ri^j)| ≤ ΔTr)))

3. Time coherence associated to data arrival at the consumer sites. With this constraint, we specify that all data (Receive) must arrive within a time window. This constraint, noted ΔTc, is expressed on all i data transmissions to j = {1...n} consumers as: ∀i,j (Ri^j ⇒ start(ΔTc) ≤ d(Ri^j) ≤ end(ΔTc))
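The three constraints above can be verified mechanically on an execution trace. The following is a hedged Python sketch of such a trace check (it is ours, not part of the paper's formalism; in particular, anchoring the coherence window ΔTc at the earliest arrival of each transmission is only one possible reading of the definition):

```python
def check_multicast(sends, receives, period, t_r, window_width):
    """Check the three constraints of section 3.1 on a trace.
    sends: time-stamps d(S_i) of the Send events.
    receives: receives[i][j] = d(R_i^j), arrival of transmission i
    at consumer j.  Returns (periodicity, deadline, coherence) booleans."""
    # Constraint 1: exact period between successive Send events.
    periodicity = all(b - a == period for a, b in zip(sends, sends[1:]))
    # Constraint 2: every arrival within t_r of its Send.
    deadline = all(abs(s - r) <= t_r
                   for s, rs in zip(sends, receives) for r in rs)
    # Constraint 3: per transmission, all arrivals fit in a window of
    # width window_width anchored at the earliest arrival.
    coherence = all(max(rs) - min(rs) <= window_width for rs in receives)
    return periodicity, deadline, coherence

# Three transmissions of period 10 to three consumers:
trace_sends = [0, 10, 20]
trace_recv = [[2, 3, 2.5], [12, 12.5, 13], [22, 23.5, 22.5]]
result = check_multicast(trace_sends, trace_recv, 10, 4, 2)
```

Such a check validates a single observed trace; proving the constraints for all executions is exactly what the temporal-logic formulation above is for.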

These formulas express formally the time constraints associated to the events of this multicast transaction. They specify only the temporal requirements that must be supplied by the layer below in the system functional structure, i.e., the services provided by the communication system. These needs must be projected onto the kind of services supplied by the communication system. Thus, in order to realize the multicast, the send and receive primitives must be adapted to the services provided by the communication system. The communication service behavior can also be characterized according to a cooperation model and a communication type. We show two kinds of communication service, a multiclient-server and a producer-multiconsumer oriented one, and how the send and receive primitives use them in order to meet the interaction needs.

3.2. Producer-Multiconsumer Oriented Services

When the communication system supplies a multicast mechanism, the send and receive primitives can be translated directly to the communication service: a single data transmission request at the producer site produces multiple indications at the consumer sites. We take as an example FIP (Factory Instrumentation Protocol), which is based on a producer-distributor-consumer protocol mechanism. The static nature of this field bus makes it easy to verify the above time constraints. The periodicity of data transmission is ensured by the cyclic behavior of a centralized bus arbitrator, which is determined by the scanning table. Periodicity validity is also verified on production and consumption actions by the refreshment and promptness statuses respectively. The deadline and the time coherence on data reception are also verified thanks to the synchronized data transmission mechanism based on the broadcast of data identifiers controlled by the bus arbitrator. The time between the sending and the reception of a variable is determined by the transmission delay on the medium, because of the centralized medium access control. Thus, the time characteristics (jitter, response time and time coherence) can be expressed over the events that characterize the communication system, the request (Req) and indication (Ind) primitives, by the following logic formulas: ∀i ((Reqi ∧ ○Reqi = Reqi+1) ⇒ (d(Reqi) + ΔTmin ≤ d(Reqi+1) ≤ d(Reqi) + ΔTmax))

Reqi ⇒ ◇(∀j (Indi^j ∧ (|d(Reqi) − d(Indi^j)| ≤ ΔTr) ∧ (start(ΔTc) ≤ d(Indi^j) ≤ end(ΔTc)))). These time characteristics can be quantified from the scanning time cycles and the delays in the transmission medium.

3.3. MultiClient-Server Oriented Services

The same data interaction over a communication system based on a client-server schema cannot be projected directly onto the events that characterize the communication system primitives. Here the provider of the periodic data is a server producing a data item periodically, and the consumers are the clients that request this data. So, we have for each constraint the following considerations.

1. The periodicity of data production is realized by the server. The periodicity constraint on the transmission is projected onto the periodic requests produced by the clients that consume the data. So the periodicity of a set j = {1...n} of clients on i occurrences is: ∀j,i ((Reqi^j ∧ ○Reqi^j = Reqi+1^j) ⇒ (d(Reqi^j) + ΔTp = d(Reqi+1^j)))

2. The deadline associated to data arrival at the consumer sites is translated into deadlines over each client-server transaction of each client, i.e., over the request-indication-response-confirmation communication events. Considering only the deadlines between the request and confirm transaction events, we have at the i-th transaction: ∀j (Reqi^j ⇒ ◇(Cnfi^j ∧ |d(Reqi^j) − d(Cnfi^j)| ≤ ΔTr))

3. For the time coherence, we have the same constraint projection as in the previous case. The time coherence is over all the events associated to the client-server transaction of each client. We show only the time coherence on the confirmation events at the i-th transaction: ∀j (Reqi^j ⇒ ◇(Cnfi^j ∧ start(ΔTc) ≤ d(Cnfi^j) ≤ end(ΔTc)))

There are many communication services based on this client-server schema. For instance, MMS (Manufacturing Message Specification) is an application-layer protocol providing services based on the VMD (Virtual Manufacturing Device) concept. The VMD supplies a virtual machine interface with a server behavior that allows the interworking of a set of manufacturing shopfloor equipment. The quantification of these time characteristics is more complicated, and it depends principally on the kind of MAC layer that supports these exchanges.

4. COOPERATION MODELS

After the case study presented in the above section, we generalize the concepts of cooperation models and communication types. In distributed systems the interactions between processes are performed via a communication network. The idea is to express the time constraints issued from the interaction needs and how these interactions are performed by a communication system, so that the time constraints associated to interactions are projected onto the communication system services. The communication system must be able to provide a quality of service that allows the time constraints to be met. In order to express the time constraints on data and action interactions between the cooperative tasks, we use two cooperation models: producer-consumer and client-server. The first one has a data-oriented semantics, whereas the second one has a service-oriented one. We deduce four combinations from the possible number of participants in one interaction: one to one, one to N, N to one and N to M, where N and M are the numbers of involved processes. We call these four possible combinations communication types. It is over these communication types that we can distinguish the different time constraints on interactions. These models allow us to identify the roles of the tasks participating in the interactions. On the other side, we have a communication system that provides a set of services to the interacting tasks. The way in which the services are provided can also be represented by means of the cooperation models. For example, the MMS application protocol mechanism is based on the client-server model, and the FIP protocol mechanism is based on the producer-distributor-consumer model. So, we can express the time characteristics associated to the protocol mechanisms and try to model how the interactions at the upper level are projected onto them. We call verification of properties the verification that the timing characteristics of the services will support the time requirements of the users of these services (Lecuivre and Thomesse, 1995) (cf. figure 3). The formal expression of the constraints on interactions and of the time characteristics of the communication services are the first steps towards such a formal verification.

Fig. 3. Projection of time constraints over the services of a communication system

5. CONCLUSION

In distributed real-time applications, correctness depends not only on the correctness of data transmission and action execution, but also on the time at which they are performed. So, time properties must be considered on the data and action relationships between the cooperative tasks of this kind of application. We have presented time properties issued from cooperative activities in a distributed computing system. The main idea is to use the cooperation models to identify data and action interactions between cooperative tasks, and to identify the time coherence properties over the different communication types. We intend to give the basis to express formally the temporal needs between cooperative tasks in terms of time bounded interactions, and the timing characteristics of the services provided by a communication system. This formal representation is the first step towards formal verification. This verification must help to evaluate the choice of a communication service according to the interaction needs. The objective is to make a formal derivation of time constraints on communication transactions from time constraints on application interactions, and then to prove formally at the specification stage that a certain quality of service supports the traffic imposed by the time constrained process interactions.

6. REFERENCES

Delfieu, D. and A.E.K. Sahraoui (1993). Expression and verification of temporal constraints for real-time systems. In: the 7th Annual European Computer Conference on Computer Design, Manufacturing and Production, COMPEURO '93. Paris-Evry (France). pp. 383-391.

Kopetz, H. and K.H. Kim (1990). Temporal uncertainties in interactions among real-time objects. In: 9th IEEE Symposium on Reliable Distributed Systems. Huntsville, Alabama. pp. 165-174.

Koymans, R. (1990). Specifying real-time properties with metric temporal logic. The Journal of Real-Time Systems 2, 255-299.

Lecuivre, J. and J.P. Thomesse (1995). Definition of real-time services for heterogeneous profiles. In: Proceedings of the Distributed Computer Control Systems congress, DCCS'95. IFAC/Elsevier Science Ltd. Toulouse (France).

Lorenz, P. (1994). Le temps dans les architectures de communication : application au réseau de terrain FIP. PhD thesis. Thèse de l'Institut National Polytechnique de Lorraine, Centre de Recherche en Informatique de Nancy. Nancy (France).

Manna, Z. and A. Pnueli (1981). Verification of concurrent programs: the temporal framework. In: The Correctness Problem in Computer Science (R.S. Boyer and J.S. Moore, Eds.). Academic Press. London. pp. 215-273.

Technical Report, TCCA (DTR 12178) (1992). User requirements for system supporting time critical communications. International Organisation for Standardization, ISO TC 184/SC 5 Architecture and Communication, Time Critical Communication Architecture.

Thomesse, J.P. (1993). Time and industrial local area networks. In: the 7th Annual European Computer Conference on Computer Design, Manufacturing and Production (COMPEURO '93). Organized by the IEEE Computer Society. Paris-Evry (France). pp. 365-374.

Towsley, D. (1993). Providing quality of service in packet switched networks. In: Joint Tutorial papers of Performance '93 and Sigmetrics '93 (L. Donatiello and R. Nelson, Eds.). Lecture Notes in Computer Science (729). pp. 560-586.

Page 97: Distributed Computer Control Systems 1995 (DCCS'95)

Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

CONCEPTION AND ANALYSIS OF AN ATM BASED COMMUNICATION TRANSFER PROTOCOL FOR DISTRIBUTED REAL-TIME SYSTEMS

R. Belschner and M. Lehmann

Institute for Automation and Software Engineering, University of Stuttgart, Pfaffenwaldring, 70550 Stuttgart - Germany, e-mail: {belschner, [email protected]

Abstract. This paper addresses issues related to transfer protocols of distributed RT-UNIX automation systems. The wide-scale application possibilities of ATM have been considered in order to get ideas for an improved transfer protocol and its embedding into the operating system. A proposal to transform an existing RT-UNIX operating system (QNX 4.2) in order to provide ATM features is described and analyzed. For the analysis, prototyping models of the operating system, the net adapter, the net driver and the net manager have been developed. Remote interprocess communication can be simulated and analyzed to check whether the imposed timing constraints are fulfilled. Some case studies of simulations have shown that the approach represents a powerful and expressive instrument for remote interprocess communication in distributed real-time automation systems.

Key Words. Real-time systems; distributed computer systems; discrete event simulation; real-time operating systems; ATM protocol; timing analysis.

1. INTRODUCTION

Distributed systems are becoming more and more important within the automation field. Since most automation systems are time-critical, the communication between the stations which control such a system must also meet the required specifications concerning its temporal behaviour. It is very important in real-time systems that the behaviour of the network connecting the stations is predictable. A non-predictable network cannot guarantee an arrival time for a message. Therefore only networks whose behaviour can be predicted should be used within real-time systems.

In older communication networks the network itself was the bottleneck. Small network capacities and networks where collisions on the net are possible, like Ethernet, were reasons for the low performance of the network. Another bottleneck, especially in distributed real-time systems, is the non-predictability of the behaviour of the network.

ATM based protocols will immensely change the network field within a few years. Their wide-scale application possibilities can also give new impulses to distributed control systems by improving their real-time behaviour. Therefore, as lower layer standardization will soon be finished, the higher transfer protocols have to be adapted to the provided performance in order to exploit the large advantages of ATM networks.


At our institute, research is being done to evaluate the behaviour of different networks concerning their performance within a distributed real-time system under the real-time operating system QNX. At present QNX supports, besides Ethernet, which is a totally non-predictable network, only an ArcNet network. For both Ethernet and ArcNet, QNX provides the same interprocess communication features, which take no care of the quality of transmission. Once a message is sent, its delivery depends not only on the situation of the local node, but also on the current traffic on the network. Transfer protocols with deterministic interprocess communication are needed. The transmission of critical data should be guaranteed in time, and non-critical data should be transferred whenever free capacities are available.

This paper shows, on the one hand, how existing real-time software standards (QNX, 1992) should be transformed to provide the ATM features and presents, on the other hand, a simulation to prove the assumed capabilities of such transfer protocols. The goal is to determine which new commands are needed to make QNX an operating system that can be used in distributed real-time systems based on ATM networks.


2. REMOTE IPC UNDER C/RT-UNIX

2.1. Overview

For the sake of completeness, a brief description of the real-time programming system C/QNX discussed in this paper follows.

QNX is a real-time operating system consisting of a microkernel (about 10K), which is mainly responsible for process scheduling, interprocess communication (IPC), low-level network communication, and first-level interrupt handling. All other operating system functions - device I/O, network management, etc. - are handled by modules (server processes, like the process manager, net manager and net driver), which are dynamically loaded when needed for a specific application. One of the great advantages of QNX is its interprocess communication capabilities: processes communicate with each other via messages, whether they reside locally or on remote nodes; boundaries between nodes are almost invisible. This remarkable degree of transparency is made possible by so-called virtual circuits, which are paths the network manager provides to transmit messages across the network. When a virtual circuit is created, it is given the ability to handle messages up to a specified size by using POSIX features like Send(), Receive() or Reply(). Unfortunately, neither the creation of a virtual circuit nor the IPC provides for additional parameters in order to guarantee the transmission time. But such parameters would hardly have made sense so far, since Ethernet, which is a totally non-predictable network, has become an industry standard, and the low-level as well as the high-level interprocess communication features have been adapted to the capabilities of Ethernet.

2.2. ATM And RT-UNIX

As described in (QNX, 1992), QNX networks can be put together using various hardware and industry standard protocols. Since these are completely transparent to application programs, new network architectures can be introduced at any time without disturbing the operating system.

As soon as the lower layer standardisation for ATM is finished and ATM based network systems come to market at a reasonable price, QNX will not get around providing a corresponding adaptation. In ATM networks, virtual channels must first be established between the sending station and the receiving station. This is done by establishing a signalling channel over which the required qualities of service are determined.

Fig. 1. The topology of the ATM based network

In an ATM network, however, the transfer calls can be classified as calls which guarantee a transmission using ATM source capacity reservation, and calls which transmit messages according to the load in the ATM switches. The guaranteed transmission is very important in critical phases of distributed control systems. All non-critical transmission calls can be grouped into a second kind of message sending, which uses the free capacity of the ATM network. In the worst case, all non-guaranteed transmission is refused. Only the IPC features Send() and Reply() have to be extended by an additional prio-flag which determines a guaranteed (high-prio) or a non-guaranteed (low-prio) transmission. The establishment of virtual channels can be done within the creation of the virtual circuits by adding parameters describing the quality of service. Once the virtual circuit has been successfully installed, the quality of service can be guaranteed in the case of a high-prio Send().

The following section shows how the existing QNX real-time system should be extended to provide the ATM features. Afterwards, the proposal is analyzed accordingly by using a Simulation Based Analysis System for Distributed Real-Time Systems (SBA-DRT).

3. AN ATM BASED CONCEPT OF A COMMUNICATION NETWORK

3.1. Required ATM Components

Network Topology. In this work a network configuration consisting of an ATM switch and several connected PCs is considered, as shown in Fig. 1. Each node is connected by two nets, one for sending and one for receiving.

ATM Switch. The ATM switch is a main component of the ATM network architecture. Its primary job is to route the cells from the sending node to the receiving node. Since all nets connect to the switch, it must be able to route the cells at a very high speed in order for the switch not to become the bottleneck of the network. The ATM switches planned to be used in telecommunication systems therefore have very high capacities, e.g. 10 GBit/sec or more. Switches with such high capacities tend to be very expensive. To keep the price of such a switch low, other methods of performance optimization are needed in addition to high transfer rates. In our example, the switch computes whether it has enough capacity to fulfil the requested bandwidth for each new channel to be opened. If not, the channel is rejected. Otherwise the channel is opened and the switch computes its remaining capacity. This way it is guaranteed that the switch is never overloaded. This of course has the big disadvantage that the capacity of the switch may not be big enough if more channels are required than presumed. The switch we use therefore has two main functions: to route the cells and to compute its remaining capacity. The switch itself needs input and output queues, a router and its own coupled network manager, which computes the remaining capacity.
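The admission test just described can be written down in a few lines (types, names and units below are illustrative, not taken from the paper): a channel is opened only if its requested bandwidth fits into the switch's remaining capacity, which is then booked.

```c
#include <assert.h>
#include <stdbool.h>

/* Sketch of the switch-side admission control described above
 * (illustrative names): a channel is admitted only if the requested
 * bandwidth fits into the remaining capacity, which is then updated. */
typedef struct { int capacity_mbit; int used_mbit; } atm_switch;

static bool open_channel(atm_switch *sw, int requested_mbit)
{
    if (sw->used_mbit + requested_mbit > sw->capacity_mbit)
        return false;               /* reject: switch would be overloaded */
    sw->used_mbit += requested_mbit;    /* book the reserved bandwidth */
    return true;
}

static void close_channel(atm_switch *sw, int reserved_mbit)
{
    sw->used_mbit -= reserved_mbit;     /* release the reservation */
}
```

With a 600 MBit/sec switch, three 155 MBit/sec channels are admitted, while a fourth would exceed the capacity and is rejected, as the paper's example configuration suggests.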

ATM Board. The ATM board does not differ much in its functionality from boards needed for Ethernet or Token Ring networks. The biggest difference is that in our system two nets connect to the node, one for sending and one for receiving. The board therefore needs two separate processors, in contrast to e.g. Ethernet, where the board can only either send or receive. The major job of the board is to send and receive cells and to monitor the connection on the lowest communication level. In our simulation we omit the monitoring tasks. The board can trigger interrupts to signal the computer that it is done sending cells or that cells have been received. The board consists of four sending pages, each containing e.g. 100 cells, and one receive ring buffer. Two of the pages are used for high-prio cells and two are used for low-prio cells. Two pages are needed so that the network driver can load one page while the board is still sending cells out of the other page. We need to distinguish between high and low prio pages so that the sending of low-prio cells can be interrupted at any time by high-prio messages to be sent. The receive ring buffer must be large enough to prevent it from overflowing during the time the computer needs between the interrupt triggered by the board to signal that cells have arrived and the downloading of the received cells into the computer's main memory.

3.2. The Underlying Concept

Since a computer running the operating system QNX uses a scheduling method to determine which process is the next to be executed, it is not predictable when the network manager gets the CPU to send a message. This is where the guaranteed transfer of a message ends: a guaranteed transmission is only possible for the network itself. The most important parameters which influence the transmission time of a message on the network are the capacity of the switch, the longest time needed for the switch to route a cell to its destination, and the capacity of the net. The parameters inside a node are the bus speed, the time until the network manager gets the CPU, and the CPU speed. Of all the parameters inside the node, only the time needed for the network manager to get the CPU can be influenced. This is done by giving the network manager process a very high priority, since scheduling in QNX uses different process priorities to determine which process is executed next. This assures that the network manager does not have to wait too long to be given the CPU. Communication in an ATM based network uses virtual channels to send messages. These channels have to be opened prior to the transfer of data. To open a channel, a node must send a message containing the wanted transfer rate to the switch, which has to determine whether it has enough capacity for the new channel. If enough capacity is left, the channel is opened. Low-priority messages and high-priority messages can be sent over this channel. In order to remain within the given capacity of the channel, the cells of high-priority messages are only sent every n slots. The cells of low-priority messages are sent using the full capacity of the net. This leads to a very high transmission rate if the switch is able to handle it, but cells may be discarded when the switch reaches its capacity limits. Therefore the transmission time for low-priority messages is not fully predictable.

Fig. 2. Segmentation/reassembly of a message (a message passes from the network manager's message queue through the ATM driver's send queue onto the ATM board's four page queues of 100 ATM cells each)
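The slot discipline just described (high-prio cells only every n-th slot, so the reserved channel rate is respected) amounts to a simple predicate; the sketch below assumes consecutively numbered slots and is purely illustrative:

```c
#include <assert.h>

/* Sketch: a slot may carry a high-prio (reserved) cell only every n-th
 * slot; all other slots are free for low-prio cells.  Over a long run
 * the reserved share of the slots is therefore 1/n. */
static int slot_carries_high_prio(int slot, int n)
{
    return slot % n == 0;
}

/* Number of high-prio slots among the first `slots` slots (ceiling). */
static int high_prio_slots(int slots, int n)
{
    return (slots + n - 1) / n;
}
```

Choosing n thus trades reserved (predictable) bandwidth for high-prio traffic against the residual capacity left to low-prio traffic.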

4. ATM EXTENSION UNDER C/QNX

In order to provide deterministic behaviour, the maximum rate of cells in all situations of the entire system has to be examined precisely. Then the initial signalling - using the ATM_VCAttach() function - can make a worst case reservation, which assures determinism in critical phases. The ATM_VCAttach() function attempts to establish a network link, called a virtual circuit, based on the required quality of transmission:

ATM_VCAttach(Nid, Pid, Timeslot, nBytes);

In this work, the quality of transmission is defined as follows: a virtual circuit is opened only if the transmission of nBytes of data per Timeslot can be guaranteed.

The POSIX functions Send() and Reply() have been extended with an additional parameter which determines the kind of transmission, high-prio or low-prio:

Send(pid, *s_ptr, s_size, *r_ptr, r_size, prio);

Reply(pid, *ptr, size, prio);

The prio parameter can be set to low-prio or high-prio, selecting non-guaranteed or guaranteed transmission respectively.
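To illustrate how the extended calls might be used together, the fragment below stubs out ATM_VCAttach() and the extended Send() locally (the bodies only record the parameters passed in; the real QNX/ATM implementations would of course differ): a virtual circuit guaranteeing nBytes per Timeslot is attached first, and a critical message is then sent with the high-prio flag.

```c
#include <assert.h>

/* Illustrative local stand-ins for the extended IPC calls described in
 * the text.  These stubs only record the requested parameters so the
 * call sequence can be demonstrated; they perform no real I/O. */
enum { PRIO_LOW = 0, PRIO_HIGH = 1 };

static int vc_timeslot, vc_nbytes, last_prio;

static int ATM_VCAttach(int nid, int pid, int timeslot, int nbytes)
{
    (void)nid; (void)pid;
    vc_timeslot = timeslot;    /* guarantee: nbytes per timeslot */
    vc_nbytes = nbytes;
    return 0;                  /* stubbed virtual circuit id */
}

static int Send(int pid, const char *s_ptr, int s_size,
                char *r_ptr, int r_size, int prio)
{
    (void)pid; (void)s_ptr; (void)s_size; (void)r_ptr; (void)r_size;
    last_prio = prio;          /* high-prio -> guaranteed transmission */
    return 0;
}
```

After a successful ATM_VCAttach(), a Send() with PRIO_HIGH travels over the reserved capacity, while PRIO_LOW traffic uses whatever capacity is free.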

4.1. The ATM Driver

The ATM driver represents the interface between the network manager process and the ATM board (see Fig. 2). The ATM driver cuts the messages to be sent into ATM format cells. Prior to splitting up the message, headers and trailers are added to it for error correction and detection. An ATM cell has a header in which all the vital information needed for the cell to be routed to its destination is stored. It also contains information on the priority and error correcting and detecting components. Altogether, each cell contains approx. 44 bytes of information and 9 bytes of header information, depending on what communication protocol is used. Messages that are received must also be restored to their original form by the ATM driver. The driver removes the headers from the cells and pastes the messages back together. Additional error correcting and detecting headers are also removed here. The complete message is then given to the network manager of the node, which itself informs the QNX operating system of the arrival of new messages.
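The segmentation step can be sketched as follows (the 44-byte payload figure follows the approximation above; the buffer layout and the omission of header handling are simplifying assumptions for illustration):

```c
#include <assert.h>
#include <string.h>

/* Driver-side segmentation sketch: ~44 payload bytes per cell, as the
 * approximate figure above suggests; header/trailer handling omitted. */
#define CELL_PAYLOAD 44

/* Number of cells needed to carry a message of len bytes. */
static int cells_needed(int len)
{
    return (len + CELL_PAYLOAD - 1) / CELL_PAYLOAD;
}

/* Copy the payload of cell number idx (0-based) out of the message;
 * returns the number of bytes placed into the cell buffer. */
static int fill_cell(const char *msg, int len, int idx, char *cell)
{
    int off = idx * CELL_PAYLOAD;
    int n = (len - off < CELL_PAYLOAD) ? (len - off) : CELL_PAYLOAD;
    memcpy(cell, msg + off, (size_t)n);
    return n;
}
```

Reassembly on the receiving side is the mirror image: payloads are concatenated in cell order after the headers have been stripped.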

4.2. The Network Manager Process

The network manager is responsible for the creation of network links (virtual circuits) and for the routing of messages across the desired path. Research has shown that, in QNX, the existing network manager can, with some additional functionality, in future be used for the ATM network. One of the new features is determining whether there is enough capacity for a new channel when creating a virtual circuit. Another one is the capability to serve the ATM net driver.

One of the process manager's tasks is to spawn processes, which is associated with hard disk accesses of several milliseconds. Because of this, the network manager process should be given the highest priority, as mentioned above, even higher than that of the process manager, which usually owns the highest priority. Otherwise, guaranteed transmission can't be achieved.

5. ANALYSIS OF THE CONCEPT

In the previous sections a network manager and a network driver - adapted for QNX - have been designed. Although they are closely modelled on those of QNX, it is not at all guaranteed that the imposed requirements of the concept are met. As mentioned above, besides scheduling, the occurrence of interrupts may cause problems, since they usually have higher priority. This worsens in the case of high traffic, since each message triggers interrupts when sent or received. Meeting the requirements means correct interaction of all the components involved. However, it is not an easy task to verify the concept and its implementation. In view of the enormous complexity, verification through Simulation Based Analysis (SBA) is applied as an attempt to improve the correctness of the software design and to obtain detailed information, especially when system parameters are varied over their bounded intervals.

5.1. The Analysis Tool

The architecture of the analysis tool consists of the following three layers (see Fig. 3):

• simulation (layer 1)
• analysis (layer 2)
• optimization (layer 3)

The simulation layer (layer 1) provides simulation models of the considered components, which contain all the interesting characteristics of the distributed real-time system (see Fig. 4). The software specification to be simulated can be written in the high-level software specification language EPOSIX, which supports the interesting real-time features of POSIX 1003.4. In the analysis layer (layer 2), the requirements (desired behaviour) can be specified with the TC-SL (Timing Constraint Specification Language), which is based on events (time stamps). The time stamps are recorded during a simulation run and represent the actual behaviour of the simulation. The evaluation of the TC-SL specification yields quality factors which describe whether the specified requirements are fulfilled and, above all, how well they are fulfilled (DoF = Degree of Fulfillness).

Fig. 3. The Simulation Based Analysis System (the optimization layer writes parameter values back into the simulation layer; the analysis layer compares the actual behaviour of a simulation run against the requirements and desired behaviour and produces quality factors and fitness values)

The quality factors are used as the input for the optimization (layer 3). In the optimization layer, several evolutionary strategies are implemented to figure out what will happen in the extreme cases, both the best and the worst. Based on the obtained quality factors, the interesting system parameters are modified by the strategy and written back into the underlying simulation models. A simulation run is started again and the quality factors are evaluated. This evolution is continued until the desired quality is achieved. For more information see (Belschner, 1994) or (Belschner, 1995).
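The optimization loop of layer 3 can be sketched as a (1+1)-style evolutionary strategy (everything here is a toy stand-in: the quality function replaces a full simulation run plus TC-SL evaluation, and the mutation scheme is made deterministic purely for illustration):

```c
#include <assert.h>

/* Toy stand-in for layers 1+2: a quality factor with its optimum at
 * parameter value 70 (the value 70 is arbitrary for this sketch). */
static double quality(int param)
{
    int d = param - 70;
    return -(double)(d * d);
}

/* Deterministic (1+1)-style loop: perturb the parameter, re-evaluate
 * the quality factor, keep the candidate only if it is fitter. */
static int optimize(int start, int generations)
{
    int best = start;
    for (int g = 0; g < generations; g++) {
        int cand = best + ((g % 2) ? 1 : -1) * (1 + g % 5);  /* mutate */
        if (quality(cand) > quality(best))
            best = cand;                 /* keep the fitter value */
    }
    return best;
}
```

In the real tool, `quality` is expensive (a full simulation run), which is why the strategy tries to reach the desired quality in as few generations as possible.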

5.2. Achieved Results

Various scenarios have been considered and analyzed:


Fig. 4. The architecture of the simulation system (a simulation control with an event distributor drives n node models; each node model comprises an operating system model together with a net manager, net driver and net board model, connected to an Ethernet or ATM net model)

• 2 node system: 1 communicating process on each node

• 10 node system: 1 communicating process on each node

• 2 node system: 2 communicating processes on each node

For both Ethernet and ATM, a PCI bus with a transfer rate of 133 MByte/sec is assumed.

An important factor is the so-called IrptTriggerSize, which determines the number of received cells required to trigger an interrupt to download them into the computer's main memory. Simulations have shown that, on the one hand, a very small value considerably slows the real-time performance due to many interrupts. On the other hand, there is an upper limit beyond which cells may be lost because the interrupt serving routine comes too late on a busy CPU.

Fig. 6 shows some simulation results for ATM and Ethernet. The results are based on the model data shown in Fig. 5.

In the ATM network the switch (600 MBit/sec) is overloaded and represents the bottleneck: 5 stations offer 5x155 MBit/sec = 775 MBit/sec of data. Theoretically, the transmission of 1 GB of data should take 13.33 sec (ATM) and 800 sec (Ethernet), respectively.
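These theoretical figures follow from simple arithmetic; a sketch, assuming decimal units (1 GB = 8000 Mbit):

```c
#include <assert.h>

/* Theoretical transfer time: 1 GB = 8000 Mbit (decimal units), divided
 * by the bottleneck rate in MBit/sec.  8000/600 = 13.33 sec for the
 * ATM switch; 8000/10 = 800 sec for 10 MBit/sec Ethernet. */
static double transfer_time_sec(double gbytes, double rate_mbit_per_sec)
{
    return gbytes * 8000.0 / rate_mbit_per_sec;
}
```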

The simulation results in Fig. 6 show that, even when the 60 times higher performance of the ATM network is taken into account, Ethernet loses 34.1% of its performance because collisions occur on the net.

ATM switch:   OutQueueSize 50 cells
              InQueueSize 20 cells
              Transfer rate 600 MBit/sec

ATM board:    TxingPageSize 100 cells
              RxingQueueSize 100 cells
              IrptTriggerSize 70 cells
              MemCopyRate 133 MByte/sec

ATM net:      ErrProbability 1e-9
              Transfer rate 155 MBit/sec
              PropagationSpeed 0.77 c
              Cable length 80 m

Ethernet net: transfer rate 10 MBit/sec
              slot time 51200 nsec
              interframe gap 9600 nsec
              JAM duration 3200 nsec
              collision detection time 900 nsec

Fig. 5. Some ATM and Ethernet model data

Net   theoretical   simulated   sim. dur.
ATM   13.33 sec     18.07 sec   5.75 h
ETH   800 sec       1450 sec    1.11 h

Fig. 6. Theoretical and simulated transmission time of 1 GB data and simulation duration on a Pentium 90 MHz PC using a 10 node system with 5 parallel messages (ETH = Ethernet)

The difference between the theoretical transmission times and the simulated transmission times is caused by the influence of the operating system, which handles interrupts, provides for scheduling, etc.

Incidentally, the ATM network model is much more complex than the Ethernet model, which is underlined by the simulation durations of 5.75 h (ATM) and 1.11 h (Ethernet) for the transmission of 1 GB of data.

6. OVERALL CONCLUSIONS AND FUTURE WORK

A concept for an ATM based communication network for a C/RT-UNIX programming system has been presented. A design of an ATM net driver and an ATM network manager has been proposed. Simulation models have been created in order to analyse the interaction of the components. The distributed system can be stressed by running an EPOSIX software simulation. Experience applying the concept to some distributed real-time application specifications has shown that the approach represents a powerful and expressive instrument for distributed real-time systems.

It leads to higher real-time software quality: programs can be written with less care about non-predictable interprocess communication. The transmission of messages can be guaranteed from the network manager of one node to the network manager of another. The user only has to take care that not too many processes scheduled on the same node send a high-prio message at the same time. This leads to more compact, readable and flexible software. Additionally, the remote IPC performance improves, since ATM provides ever higher transfer rates. Guaranteed message transmission ends at the ATM driver on each node, since it can't be foreseen how many high-prio application messages will be sent within a critical time slot. This problem remains the responsibility of the software developer. As mentioned in section 4.2, the network manager process has to be given a very high priority. Because of this, a big disadvantage occurs when a long message is to be transferred: since the network manager has such a high priority, other processes are not able to interrupt it. If a process is time-critical, it may not be able to execute within the needed time. Therefore, it is important that high-priority messages are not too long. So far only the simulation layer of the SBA has been applied. It will be interesting to evaluate the ATM network design by using layers 2 and 3; especially the behaviour in the case of unfavourable parameter values is of interest.

7. ACKNOWLEDGEMENTS

This work has been developed within a cooperative program with the company SWD Software Systems, a QNX representative in Germany. The authors would like to thank the staff at SWD for the technical support provided. Last but not least, the authors would like to thank Alex Lunkenheimer, who contributed quite a lot to the implementations.

8. REFERENCES

Belschner, R. (1994). A Simulation System for Distributed Real-Time Systems. Proc. European Simulation Symposium, Istanbul, Turkey, Oct. 1994.

Belschner, R. (1995). Simulation Based Analysis of a Distributed Real-Time System and a Robot Control Using the ECS Evolutionary Strategy. Proc. European Simulation Multi-Conference, Prague, Czech Republic, June 1995.

QNX Systems Ltd (1992). "QNX User's Guide".


Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

SELF CONFIGURATION PROTOCOL FOR A HARD REAL TIME NETWORK

L. Ruiz, P. Raja, N. Fischer, J.D. Decotignie

Laboratoire d'Informatique Technique, Swiss Federal Institute of Technology, IN-Ecublens, CH-1015 Lausanne, Switzerland, email: ([email protected]

Abstract. In most real-time networks the verification performed to determine whether the timing constraints of the application will be fulfilled and the correctness of data ensured is done off-line and in a static manner. Thus, minor changes cause a total re-configuration of the system. This paper presents a configuration protocol for a real-time network (Phoebus IIx). The major particularity of this protocol is that it integrates features for configuring and verifying constraints in a pseudo on-line manner (at start-up or reset). This increases the flexibility of the system. Although the protocol was developed for a particular network, some of its concepts can be applied to other real-time networks.

Keywords. Fieldbus, Real-Time Communication, Communication Protocols, Configuration Management, Configuration Control.

1. INTRODUCTION

Traditionally, computer control systems were connected to the field devices with point-to-point links, resulting in a centralized architecture. However, experience shows that these links have several disadvantages, such as high installation and maintenance costs, low flexibility and lower modularity [Pleinevaux 1988]. Therefore, in recent years it has become accepted that the point-to-point connections must be replaced by a single wire called a fieldbus network [Pleinevaux 1988, Simonot 1991]. This results in a distributed control system. The French standard Factory Instrumentation Protocol (FIP) [UTE 1990, Simonot 1991], Profibus [Profibus 1990a, 1990b] and Phoebus IIx [Pleinevaux 1988, Raja et al. 1993b, EPFL-LIT 1990] are a few representative examples of fieldbus networks. Distributed control programs running on such networks have special characteristics. Some of these characteristics define the requirements of the application and must be taken into account when a fieldbus network is designed. The major requirement for networks supporting DCCS is that values passing through the network must still be useful (valid) for the control application. Thus, the network must guarantee that the timing constraints of the application are fulfilled. In most real-time networks the verification to determine whether the timing constraints of the application will be fulfilled is done off-line and in a static manner [Tindell 1994, Profibus 1990b, UTE 1989, Burns 1991]. This results in systems that are not flexible, where the application developer has to re-configure the system after each minor change to the application (for example, including a new sensor).

This paper presents an initialization protocol developed for the Phoebus IIx fieldbus [Pleinevaux 1988, Raja et al. 1993b, EPFL-LIT 1990]. This protocol includes automatic recognition of the devices connected to the network, download/upload of applications, mapping between application variables and network variables, and verification of the timing and consistency constraints of the application. This capacity for verifying the system constitutes the important feature of this protocol. As Phoebus IIx shares many concepts with other fieldbuses such as FIP [UTE 1990, Simonot 1991] and CAN [ISO 1992], this initialization protocol can be applied to other networks.

The rest of the paper is structured as follows. Section 2 gives a brief description of the Phoebus IIx network. Section 3 then introduces the initialization and configuration protocol defined for this network. Section 4 indicates some possible future work and finally section 5 draws some conclusions.


2. OVERVIEW OF PHOEBUS IIX

PHOEBUS IIx is a centrally controlled network with a bus topology. The medium access is controlled by a master-slave protocol. The information passing through the network consists of variables. These variables represent the state of a sensor, the state of an actuator, a message, etc. Each variable has a unique network identifier and its own attributes (status, class, polling group, etc.).

Although in PHOEBUS IIx there is one master and several secondary stations, secondary stations cannot be directly addressed; only the variables produced or consumed by these stations can be addressed. When the value of a variable is needed, the master broadcasts a request for this variable and the producer (station) broadcasts the value of the variable. Simultaneously, all the consumers receive the new value. This kind of communication is called the Producer-Distributor-Consumer protocol [Thomesse 1993].
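The producer-distributor-consumer exchange can be modelled in a few lines (the data structures and names below are illustrative, not Phoebus IIx code): the master polls a variable identifier, the producing station broadcasts its value, and every consumer of that identifier takes the value up in the same cycle.

```c
#include <assert.h>

/* Illustrative model (not Phoebus IIx code): each station may produce
 * one variable and consume one variable, identified by network ids. */
enum { N_STATIONS = 4 };

typedef struct {
    int produces;        /* id of the produced variable, -1 if none  */
    int produced_value;  /* current value of the produced variable   */
    int consumes;        /* id of the consumed variable, -1 if none  */
    int consumed_value;  /* local copy of the last broadcast value   */
} station;

/* One poll of variable `id`: the master's request makes the producer
 * broadcast its value; all consumers of `id` receive it simultaneously. */
static void poll_variable(station st[], int id)
{
    int value = 0;
    for (int i = 0; i < N_STATIONS; i++)      /* producer broadcasts */
        if (st[i].produces == id)
            value = st[i].produced_value;
    for (int i = 0; i < N_STATIONS; i++)      /* consumers listen    */
        if (st[i].consumes == id)
            st[i].consumed_value = value;
}
```

The key property the model captures is that stations are never addressed directly: only the variable identifier appears on the bus.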

Control applications distributed over this kind of network are often periodic [Raja et al. 1994, Halang 1992]. Thus, periodic data is transferred. This is why the general operating mode of the network is periodic, with the following basic phases:

Figure 1. Basic cycles of Phoebus IIx: each basic cycle comprises a periodic part (cyclic data transfer) and an aperiodic part (alarms/events).

As shown in figure 1, the basic cycle is divided into two parts. The first is the periodic part, during which the master station polls each variable, its value is broadcast, and all consumers pick up its value. This part of the cycle is fixed and no re-transmission is allowed; security is ensured by fast cyclic data transfer. For example, in figure 2 the master asks for the transmission of a variable. The producer (ID 4) broadcasts the value and the consumers (ID 1 and ID n-1) receive the value of the variable (figure 3).

Figure 2. Production Request.

The second part is the aperiodic part. Its purpose is to allow secondary stations to transfer occasional information such as alarms and events.


The request for this type of information transfer is piggy-backed on a produced variable frame by the secondary station during the periodic part of the cycle. The master then polls this variable during the aperiodic part and receives the information the secondary station wishes to transmit.

Figure 3. Consumption.

The normal operation of the network is divided into two phases. The first phase is responsible for the initialization and configuration of the network; it is executed only at startup and not under real-time constraints. The second phase is responsible for the transfer of variables. This is the runtime phase, which was briefly presented above.

3. INITIALIZATION AND CONFIGURATION PHASE

PHOEBUS IIx is dynamically configured, i.e., when the network is powered up or after a reset, all the network parameters are determined dynamically. Moreover, while initializing the network, the initialization protocol verifies some of the constraints of the application. For example, it verifies that each consumed variable has a producer. After the initialization phase, the master knows all the variables that are produced or consumed in the network and the polling cycle for each of them, and the secondary stations' applications have already been downloaded and started.

First of all, the master looks for all the secondary stations connected to the network. After the recognition is done, all the secondary stations indicate the variables they produce or consume. The master then builds a variable mapping that attaches a network address to each variable, and indicates the mapping of the produced or consumed variables to each interested station. It is possible that the application for one or more secondary stations has to be downloaded. If this is required, applications are downloaded to the secondary stations after recognition of the stations. Figure 4 shows the pseudo-state machine of the master station. The states in this figure are detailed in the following sections.

3.1. Recognition of devices connected to the network

When the master is started, it calls the station recognition routine. This routine is in charge of recognizing all the stations connected to the network. To do this, it broadcasts, one at a time, all the possible station addresses. If a station recognizes its address, it responds with a presence frame to indicate that it is connected to the network. Using all the presence frames, the master then builds a presence list. The recognition routine is executed three times in order to reduce the possibility of considering a present station as not present.
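A minimal sketch of the recognition routine just described, under stated assumptions: the function and parameter names are hypothetical, and the `bus_probe` callback stands in for the real frame exchange (broadcasting an address and waiting for a presence frame). The scan is repeated three times so that a station whose presence frame was lost in one pass is still listed.

```python
def recognise_stations(bus_probe, address_space, passes=3):
    """Build the presence list: poll each possible station address and
    union the replies over several passes, so a station whose presence
    frame was lost in one pass is not declared absent."""
    present = set()
    for _ in range(passes):
        for addr in address_space:
            # bus_probe(addr) models broadcasting the address and waiting
            # for a presence frame; True means the station answered.
            if bus_probe(addr):
                present.add(addr)
    return sorted(present)

# Toy bus: stations 3 and 7 exist, but station 7 answers unreliably.
import itertools
flaky = itertools.cycle([False, True, True])
def toy_probe(addr):
    if addr == 3:
        return True
    if addr == 7:
        return next(flaky)   # misses the first poll, answers later passes
    return False

print(recognise_stations(toy_probe, range(16)))  # → [3, 7]
```

The repeated passes implement the paper's three-execution rule: a single lost presence frame does not remove a station from the presence list.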

Figure 4. Master station pseudo-state machine. States: Network Start; Build Stations Alive List; Download/Upload Application and Parameters; Upload Configuration Lists, Assign Network Addresses and Produce New Configuration Lists; Verify (variable consistency, typing of variables, and scheduling); Download New Configuration Lists.

3.2. Download/upload of application programs

When the recognition procedure is completed, the application on top of the master has the possibility to download (upload) some parameters or application code to (from) the secondary stations. When the download (or upload) is completed, the applications on the secondary stations start their execution. These local applications then declare their produced or consumed variables to the network.

3.3. Mapping of variables

This is the third phase of the initialization protocol. In this phase the master requests all the secondary stations to indicate all the variables they produce or consume. This is done by uploading the configuration list. The configuration list is built in each station by including the variables declared by the local application. This list includes the global name, the type and the life time of each variable declared by the local application (Table 1). The global name represents the name of the variable within the user's application. The life time is a sort of hard deadline on the validity of the variable: it indicates that the variable will no longer be valid after the expiration of the amount of time indicated in this parameter, counted from the acquisition time of the variable. The last parameter represents the type of the variable and is used for planning the encoding/decoding [Raja 1993a] of the variable.


Table 1. Configuration list of a secondary station.

Global Name | Type   | Produced/Consumed | Life Time
Pressure    | signal | Prod              | 100 ms
...         | ...    | ...               | ...
Level01     | short  | Cons              | 10 ms
Input01     | long   | Cons              | 20 ms

By combining the configuration lists of all the stations, the master assigns a network address (or identification) to each variable. This will be the identification of the variable within the network.
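The mapping step can be sketched as follows. This is a hedged illustration, not the protocol's actual data structures: `map_variables`, the tuple layout and the starting address are all hypothetical; only the idea (merge the uploaded lists, give every distinct global name one network identifier, and append it to each station's list as in Tables 1 and 2) comes from the text.

```python
def map_variables(config_lists, first_address=100):
    """config_lists: {station: [(global_name, type, role, life_time_ms), ...]}.
    Returns (addresses, per_station): addresses maps each global name to a
    unique network identifier; per_station mimics the new configuration list."""
    addresses = {}
    next_addr = first_address
    for station in sorted(config_lists):
        for name, _type, _role, _life in config_lists[station]:
            if name not in addresses:           # one identifier per variable
                addresses[name] = next_addr
                next_addr += 1
    # New configuration list per station: original entry + network address.
    per_station = {
        st: [entry + (addresses[entry[0]],) for entry in entries]
        for st, entries in config_lists.items()
    }
    return addresses, per_station

lists = {
    1: [("Pressure", "signal", "Prod", 100)],
    2: [("Level01", "short", "Cons", 10), ("Pressure", "signal", "Cons", 100)],
}
addr, new_lists = map_variables(lists)
print(addr)  # → {'Pressure': 100, 'Level01': 101}
```

Note that a variable produced by one station and consumed by another gets a single shared identifier, which is what lets the producer's broadcast reach every consumer.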

3.4. Verification

This is the main part of the initialization protocol. It is in this part that the master verifies the application constraints. Three main verifications are performed in this phase. First, the master checks the consistency of the variable declarations. This consists of checking that all the consumed variables have a producer. Note that the converse is allowed: the network permits produced variables without any consumer, which gives a little flexibility for connecting consumers afterwards. The second main consistency check is the type verification of the variables. The master checks that all the variables with the same global name have the same type. This verification is done to avoid typing problems within the network or the application. At this stage, if all verifications are successful, the master generates the required synchronization variables. These variables can be used by the applications in the secondary stations to synchronize their activity with the data flow on the network. The last verification performed by the master is the timing verification. Using the life times of the configuration lists, the master verifies whether it is possible to schedule all variables while respecting their timing and data coherency constraints. The algorithm used to schedule the variables is described in [Raja 1993c]; however, the protocol can easily be changed to use another algorithm [Burns 1991]. If scheduling is possible, the master produces the scheduling tables for the periodic variables. If it is not possible to schedule the variables, it informs the application and stops the network.
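The first two verifications described above can be sketched directly; the timing verification depends on the scheduling algorithm of [Raja 1993c] and is omitted. The helper name and entry layout are illustrative assumptions, not the protocol's real interface.

```python
def verify(config_lists):
    """Sketch of the master's first two consistency checks:
    (a) every consumed variable has a producer (produced variables
        without consumers are deliberately allowed);
    (b) all declarations of the same global name agree on the type."""
    errors = []
    producers, types = set(), {}
    for entries in config_lists.values():
        for name, vtype, role, _life in entries:
            if role == "Prod":
                producers.add(name)
            types.setdefault(name, set()).add(vtype)
    for entries in config_lists.values():
        for name, _vtype, role, _life in entries:
            if role == "Cons" and name not in producers:
                errors.append(f"consumed variable {name!r} has no producer")
    for name, seen in types.items():
        if len(seen) > 1:
            errors.append(f"variable {name!r} declared with several types")
    return errors

ok = {1: [("Pressure", "signal", "Prod", 100)],
      2: [("Pressure", "signal", "Cons", 100)]}
bad = {1: [("Level01", "short", "Cons", 10)]}
print(verify(ok))   # → []
print(verify(bad))  # → ["consumed variable 'Level01' has no producer"]
```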

3.5. Download of network addresses

If all verifications are correct, the master downloads a modified configuration list to each secondary station. Each new configuration list includes the network addresses of all variables used by the station (Table 2).

After completing the download of the modified configuration lists, the master broadcasts a runtime start signal. This signal indicates to the secondary stations that the initialization phase is finished and that they must start the real-time phase.

Table 2. New configuration list of a secondary station.

Global Name | Type   | Prod/Cons | Life Time | Network Address
Pressure    | signal | Prod      | 100 ms    | 100
...         | ...    | ...       | ...       | ...
Level01     | short  | Cons      | 10 ms     | 101
Input01     | long   | Cons      | 20 ms     | 204

4. FUTURE WORK

Some extensions can be included in the initialization protocol. One of them is the possibility to insert or remove stations (or variables) at runtime. This would be a very attractive feature for connecting an analyzer (or some other device) while the application is running, and would also allow modifying part of the application without stopping the whole system. Another important improvement is to modify the station recognition algorithm so that stations can be recognized without having a station address. For maintenance, for example, this is a very important feature since it would allow some sort of "plug and play": modules would not have to be configured with a unique address before being connected to the network.

5. CONCLUSIONS

An initialization protocol for a real-time network was presented. The major feature of this initialization protocol is that it performs automatic verification of some important application constraints. It also takes into account data validity constraints. For example, the presented protocol verifies that all the consumed variables have a producer and that the timing constraints for each variable can be fulfilled, thus freeing the application developer from the configuration and verification of some constraints.

6. REFERENCES

Burns, A. (1991). Scheduling hard real-time systems: a review. Software Engineering Journal, May 1991.

EPFL-LIT Field Bus Group (1990). Phoebus IIx: Spécifications générales. Internal Report, EPFL, Laboratoire d'informatique technique, December 1990.

Halang, W. A. and K. M. Sacha (1992). Real Time Systems: Implementation of Industrial Computer Process Automation. World Scientific Pub. Co., 1992.

ISO (1992). Road Vehicles: Interchange of Digital Information, Controller Area Network (CAN) for High Speed Communication. ISO DIS 11898, February 1992.

Pleinevaux, P. and J. D. Decotignie (1988). Time Critical Communication Networks: Field Buses. IEEE Network Magazine, vol. 2, no. 3, May 1988.

Raja, P. and G. Ulloa (1993a). A Simple Encoder for Fieldbus Applications. ACM Computer Communication Review, Vol. 23, No. 1, January 1993, pp. 34-38.

Raja, P., J. Hernandez, J. D. Decotignie and G. Noubir (1993b). Design and Implementation of a Robust Fieldbus Protocol. IEEE Int. Symp. on Industrial Electronics, Budapest, June 1993.

Raja, P. and G. Noubir (1993c). Static and Dynamic Polling Mechanisms for Fieldbus Networks. ACM SIGOPS Operating Systems Review, Vol. 27, No. 3, July 1993, pp. 34-45.

Raja, P., L. Ruiz, K. Vijayananda and J.-D. Decotignie (1994). A Conceptual Framework for Distributed Control Systems. Proceedings of the Symposium on Emerging Technologies for Factory Automation, IEEE-IES, Tokyo, Japan, December 1994.

Profibus: Normes DIN V 19245 (1990a). Übertragungstechnik, Buszugriffs- und Übertragungsprotokoll. Teil 1, August 1990.

Profibus: Normes DIN V 19245 (1990b). Kommunikationsmodell, Dienste für die Anwendung, Protokoll, Syntax, Codierung, Schnittstelle zur Schicht 2, Management. Teil 2, August 1990.

Simonot, F., Y. Q. Song and J. P. Thomesse (1991). Approximate Method for Performance Evaluation of Message Exchange in Field Bus FIP. IFIP '91, Data Comm. Sys. and Their Performance, Elsevier Science Publishers B.V., North Holland.

Thomesse, J.-P. (1993). Time and Industrial Local Area Networks. Proceedings of COMPEURO 1993, Paris, 1993.

Tindell, K., A. Burns and A. Wellings (1994). Calculating Controller Area Network (CAN) Message Response Times. DCCS'94, Preprints of the 12th IFAC Workshop, Toledo, Spain, 1994.

UTE (Union Technique de l'Electricité) (1990). Normes FIP NF C46-601, NF C46-602, NF C46-603, NF C46-604, NF C46-605, 1990.

UTE Normalisation Française (1989). FIP: Norme C46-605, Gestion de Réseau, Décembre 1989.


Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

GUARANTEEING SYNCHRONOUS MESSAGE SETS IN FDDI NETWORKS

Sijing Zhang and Alan Burns

Real-Time Systems Research Group Department of Computer Science, University of York, UK

Abstract: This paper addresses issues related to the deadline guarantees of synchronous messages in FDDI networks. The exact value of X_i (the minimum amount of time for each node i on the network to transmit its synchronous messages during the message's (relative) deadline D_i), which is better than the previously published one, is derived. Based on this result, the schedulability of synchronous message sets is analysed in general. This analysis differs from previous ones by taking into account synchronous message sets whose minimum message deadline (Dmin) is less than twice the Target Token Rotation Time (TTRT).

Keywords: Real-time communication, token-ring protocol, bandwidth allocation, local area networks, communication protocols

1. INTRODUCTION

In a distributed system for hard real-time applications, communication through message exchange between tasks residing on different nodes must happen in bounded time, in order to ensure that end-to-end deadline requirements are met. This motivates the use of medium access control (MAC) communication protocols which provide a guaranteed connection and a guaranteed amount of channel bandwidth to support the timely delivery of inter-task messages. With the property of bounded token rotation time, the timed token protocol becomes one of the most suitable candidates for hard real-time applications.

FDDI uses the timed token protocol at its medium access control (MAC) layer. With this protocol, messages are grouped into two types: synchronous and asynchronous. Synchronous messages arrive at regular intervals and usually have delivery time constraints. Asynchronous messages arrive in a random way and have no real-time constraints. At network initialisation time, all nodes negotiate a common value for the target token rotation time (TTRT), since each node has different synchronous transmission requirements to be satisfied. The negotiated value for TTRT should be chosen small enough to satisfy the most stringent response-time requirements of all nodes. Each node i is assigned a fraction of TTRT, known as its synchronous bandwidth (H_i), which is the maximum time the node is allowed to transmit its synchronous messages every time it receives the token. Whenever node i receives the token, it transmits its synchronous messages, if any, for a time interval of up to H_i. Asynchronous messages can then be sent, but only if the time elapsed since the previous token arrival at the same node is less than TTRT, i.e., only if the token has arrived earlier than expected.

Although FDDI networks guarantee, to each node, an average bandwidth and a bounded access delay for synchronous traffic, this guarantee alone (though necessary) is insufficient for the timely delivery of deadline-constrained messages. To guarantee the synchronous message deadlines, the protocol parameters (TTRT and the H_i's) have to be properly selected. Much work on a proper selection of these parameters has been done, with the focus on synchronous bandwidth allocation (SBA) (Agrawal, et al., 1993; Chen, et al., 1992b; Hamdaoui and Ramanathan, 1992; Lim, et al., 1992; Malcolm and Zhao, 1994; Zheng and Shin, 1993).

A challenging problem concerned with this study is how to determine the schedulability of a synchronous message set. Informally, a synchronous message set is schedulable if there exists an allocation (i.e., a set of all H_i's) that can guarantee the message set, such that synchronous messages are always transmitted before their deadlines as long as the network operates normally. Such an allocation is said to be feasible (for the synchronous message set considered). Two ways are often used to help determine the schedulability of a synchronous message set. One is to find a feasible allocation for it, say, through an SBA scheme. Testing whether a given allocation is feasible requires the exact value of X_i, i.e., the minimum amount of time for each node i to transmit its synchronous messages during its message (relative) deadline D_i (for the specific allocation given). Chen et al. (1992b) derived an X_i expression. Unfortunately, the X_i they derived may not be exact enough to determine whether an allocation is feasible: an allocation which was deemed infeasible (i.e., unable to guarantee the message set considered) when judged by their X_i may actually be feasible for the same message set. In this paper, the exact X_i expression is derived, which is necessary for testing whether a given allocation is feasible when the first method is used.

The second way to help determine the schedulability of a message set is to use the metric called the Worst Case Achievable Utilisation (WCAU). The WCAU of an SBA scheme is defined as the largest utilisation U such that the scheme can always guarantee a synchronous message set (i.e., produce a feasible allocation for it) if the utilisation (factor) of the message set is no more than U. Agrawal et al. (1994) analysed four SBA schemes, among which the normalised proportional allocation (NPA) scheme was shown to achieve the highest value of WCAU, about 33%. A few other schemes (Agrawal, et al., 1993; Chen, et al., 1992b; Malcolm and Zhao, 1994) are also claimed to achieve a WCAU no less than this value. Zhang and Burns (1995) studied the schedulability of synchronous message sets with Dmin < 2·TTRT. While the restriction Dmin ≥ 2·TTRT on a synchronous message set is generally accepted as a necessary condition for all message deadlines to be guaranteed, their study shows that this is not always the case. They show that a synchronous message set with Dmin < 2·TTRT may still be guaranteed, but may fail to be guaranteed by any of the above-mentioned SBA schemes (with a WCAU of no less than 33%), although the utilisation factor of the message set could be far lower than the WCAU of these schemes. Their study shows that, to test the schedulability of synchronous message sets with Dmin < 2·TTRT, one should not use the previously derived non-zero values of WCAU (which were actually derived under the assumption Dmin ≥ 2·TTRT) through the second method. In this paper, a general analysis of the schedulability of synchronous message sets is presented. This analysis differs significantly from previous ones by explicitly taking into account the schedulability of synchronous message sets with Dmin < 2·TTRT.

The rest of the paper is organised as follows. In Sections 2 and 3 the framework under which this study has been conducted is presented. In order to avoid confusion and to help understand related work, a framework similar to that adopted by many researchers (Agrawal, et al., 1993, 1994; Chen, et al., 1992b; Hamdaoui and Ramanathan, 1992; Lim, et al., 1992; Malcolm and Zhao, 1994) is used. Specifically, the network and message models and SBA schemes are described in Sections 2 and 3, respectively. The timing properties of the timed token protocol are then addressed in Section 4, where the exact X_i expression is presented. In Section 5 a general analysis of the schedulability of synchronous message sets is given. Finally, the paper concludes with Section 6.

2. NETWORK AND MESSAGE MODELS

The network is assumed to consist of n nodes arranged to form a ring, and to be free from any hardware and software failures. Message transmission is controlled by the timed token protocol. Due to the inevitable overheads involved, the total bandwidth available for message transmission during one complete traversal of the token around the ring is less than TTRT. Let τ be the portion of TTRT unavailable for transmitting messages, and α be the ratio of τ to TTRT. So the usable ring utilisation available for message transmission is (1 - α).

Without loss of generality (Agrawal, et al., 1994), each node is assumed to have only one synchronous message stream. That is, a total of n synchronous message streams, denoted S_1, ..., S_n with S_i corresponding to node i, forms the synchronous message set M, i.e., M = {S_1, S_2, ..., S_n}. Each synchronous message (from stream S_i) has a period P_i (i.e., the minimum message inter-arrival time) and a (relative) deadline D_i (i.e., if the message arrives at time t, it must be transmitted by time t + D_i). The message length C_i is the maximum message transmission time, i.e., the time needed to transmit a maximum-size message from stream S_i. Thus, each synchronous message stream S_i is characterised as S_i = (C_i, P_i, D_i).

The utilisation factor of a synchronous message set M, denoted U(M), is defined as the fraction of time spent by the network in the transmission of the synchronous messages, i.e., U(M) = Σ_{i=1}^{n} (C_i / P_i).
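The utilisation factor is a direct sum over the streams. A small sketch (the function name and the tuple representation of streams are illustrative assumptions):

```python
# Utilisation factor U(M) = sum over i of C_i / P_i, following the
# definition above; each stream S_i is represented as a (C_i, P_i, D_i) tuple.
def utilisation(streams):
    return sum(c / p for c, p, _d in streams)

M = [(36, 300, 300), (24, 300, 300)]   # the message set used in Example 1, Section 4
print(utilisation(M))  # close to 0.2
```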

3. SYNCHRONOUS BANDWIDTH ALLOCATION (SBA) SCHEMES

An SBA scheme can be defined as an algorithm which produces the values of the synchronous bandwidth H_i to be allocated to each node i in the network, once given the information the scheme requires (Agrawal, et al., 1994).

3.1 Classification


SBA schemes, based on the type of information they may use in allocating synchronous bandwidth to a node, can be divided into two classes (Agrawal, et al., 1993): local SBA schemes and global SBA schemes. A local SBA scheme uses only information available locally to node i, which includes the parameters of stream S_i (i.e., C_i, P_i, and D_i), as well as TTRT and τ. A global SBA scheme can use both information locally available to nodes and global information about other nodes. Let H = (H_1, H_2, ..., H_n) be an allocation produced by an SBA scheme, and let the functions f_L and f_G be a local SBA scheme and a global SBA scheme, respectively. Then a local SBA scheme can be represented as H_i = f_L(C_i, P_i, D_i, TTRT, τ) for all i, i = 1, 2, ..., n. A global SBA scheme can be represented as H = f_G(C_1, ..., C_n, P_1, ..., P_n, D_1, ..., D_n, TTRT, τ).

A global scheme may perform better than a local one because it uses system-wide information, but it might not be well suited to a dynamic environment, since a change in a message stream at a particular node may require a re-calculation of the synchronous bandwidths over the entire network. In contrast, a local scheme re-calculates the synchronous bandwidth of only the node involved, without disturbing other nodes. This makes a local scheme more flexible, suitable for use in dynamic environments, and preferable from a network management perspective.
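As a concrete instance of a local scheme f_L, here is a sketch of the Proportional Allocation (PA) rule H_i = (C_i / P_i)·(TTRT - τ), which is the scheme applied later in Example 1 of Section 4. The function name is a hypothetical label; note how it depends only on data available at node i, which is what makes it local.

```python
# A local SBA scheme: each node computes its own H_i from purely local
# data (C_i, P_i, D_i, TTRT, tau). A global scheme would instead take
# every stream's parameters as input.
def pa_local(c_i, p_i, d_i, ttrt, tau):
    """Proportional Allocation: H_i = (C_i / P_i) * (TTRT - tau)."""
    return (c_i / p_i) * (ttrt - tau)

# The two streams of Example 1, with TTRT = 50 and tau = 0:
H = tuple(pa_local(c, p, d, 50, 0) for c, p, d in [(36, 300, 300), (24, 300, 300)])
print(H)  # → (6.0, 4.0)
```

A stream change at one node only triggers re-evaluation of that node's H_i, which is the flexibility advantage argued above.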

3.2 Requirements

In order to guarantee synchronous message deadlines, synchronous bandwidths have to be properly allocated to the individual nodes, such that the following two constraints are met (Chen, et al., 1992b). (To simplify the representations, ℋ will be used instead of Σ_{h=1}^{n} H_h in the rest of the paper.)

• Protocol Constraint: The sum total of the synchronous bandwidths allocated to all nodes in the ring should not exceed the available portion of TTRT:

    ℋ ≤ TTRT - τ     (1)

• Deadline Constraint: Every synchronous message must be transmitted before its deadline, i.e., any message arriving at time t with the (relative) deadline D_i must be transmitted by the time t + D_i. Let X_i(H) be the minimum amount of time available for node i to transmit synchronous messages during D_i, i.e., in the time interval (t, t + D_i] (note that X_i(H) is a function of the allocation H); then the deadline constraint implies:

    X_i(H) ≥ the maximum message transmission time required during D_i (for guaranteeing the message deadlines of stream i)     (2)

Note that the left side of (2) could be more than C_i for a general message set with arbitrary deadline constraints (Malcolm and Zhao, 1994). As a special case, for synchronous message sets with deadlines equal to periods, (2) becomes:

    X_i(H) ≥ C_i     (3)

An SBA scheme can guarantee a synchronous message set if the allocation H produced by the scheme satisfies both the protocol constraint (1) and the deadline constraint (2) (Agrawal, et al., 1994). An allocation H is feasible if it satisfies both (1) and (2). A synchronous message set is schedulable if there exists at least one feasible allocation for it.
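The feasibility test above can be expressed mechanically for the D_i = P_i special case (3). This is a hedged sketch with illustrative names; `x_of` stands for whichever X_i(H) expression is adopted, and the `pessimistic_x` stub below is only a toy bound for demonstration, not one of the paper's formulas.

```python
def feasible(streams, H, ttrt, tau, x_of):
    """streams: [(C_i, P_i, D_i)]; H: allocation (H_1, ..., H_n).
    Checks the protocol constraint (1) and, for deadlines equal to
    periods, the deadline constraint in its special-case form (3)."""
    if sum(H) > ttrt - tau:                      # protocol constraint (1)
        return False
    n = len(streams)
    for i, (c, _p, d) in enumerate(streams):
        if x_of(i, d, H, ttrt, tau, n) < c:      # deadline constraint (3)
            return False
    return True

# Toy X_i bound (q_i - 1 full token visits, each worth H_i) just to
# exercise the checker; it is NOT expression (4) or (5).
pessimistic_x = lambda i, d, H, ttrt, tau, n: (d // ttrt - 1) * H[i]
print(feasible([(36, 300, 300), (24, 300, 300)], (8, 6), 50, 0, pessimistic_x))  # → True
```

Keeping X_i(H) as a pluggable parameter mirrors the paper's point: the same allocation can pass or fail the test depending on how exact the X_i expression is.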

4. PROTOCOL TIMING PROPERTIES

In order to guarantee synchronous message sets, as well as to facilitate the analysis of their schedulability, one needs to know the minimum time available, within a given length of the message (relative) deadline, during which a node can transmit its synchronous messages. This is directly related to the minimum number of token arrivals at a node within the given time interval. Extensive work has been done on the timing behaviour of the timed token protocol. Johnson and Sevcik (Johnson, 1987; Sevcik and Johnson, 1987) showed that the maximum time that can possibly elapse between any two successive token arrivals at a node is bounded by 2·TTRT. Chen and Zhao (1992a) first generalised the analysis by Johnson and Sevcik to give an upper bound on the elapsed time between any v (where v is an integer no less than two) consecutive token arrivals at a particular node, as shown in the following theorem. (Let t_{l,i} (l ≥ 1) be the time the token makes its l-th arrival at node i.)

THEOREM 1: (Johnson and Sevcik's theorem, generalised by Chen and Zhao) For any integer l ≥ 1, v ≥ 2 and any node i (1 ≤ i ≤ n), under (1),

    t_{l+v-1,i} - t_{l,i} ≤ (v - 1)·TTRT + Σ_{h=1,...,n; h≠i} H_h + τ

This general result has been extensively used by many researchers (Agrawal, et al., 1993, 1994; Chen, et al., 1992b; Hamdaoui and Ramanathan, 1992; Lim, et al., 1992; Malcolm and Zhao, 1994; Zheng and Shin, 1993) in their development and/or analysis of different SBA schemes. However, as will be seen, this generalised upper bound may not be tight when v ≥ n + 2. A more exact and tighter upper bound has recently been derived by Zhang and Burns (1994a), shown below. (Due to space limitations, the proofs of Theorem 2 (see (Zhang and Burns, 1994a)) and of Corollary 1 and Theorem 3 (see (Zhang and Burns, 1994b)) are all omitted.)

THEOREM 2: (Johnson and Sevcik's theorem, generalised by Zhang and Burns) For any integer l ≥ 1, v ≥ 2 and any node i (1 ≤ i ≤ n), under (1),

    t_{l+v-1,i} - t_{l,i} ≤ (v - 1)·TTRT + Σ_{h=1,...,n; h≠i} H_h + τ - floor((v - 1)/(n + 1)) · [TTRT - ℋ - τ]

By comparing Theorem 1 with Theorem 2, it is clear that the upper bound derived by Chen and Zhao is tight only either under ℋ = TTRT - τ or when v is less than n + 2. However, for some synchronous message sets to be guaranteed, the synchronous bandwidth has to be allocated under ℋ < TTRT - τ (Zhang and Burns, 1994b). Based on Theorem 2, the following corollary can be derived:

COROLLARY 1: Let J(v) be the tight upper bound on the time that can elapse, in the worst case, before a node uses up its next v (where v is a positive integer) allocations of synchronous bandwidth H_i; then, under (1),

    J(v) = v·TTRT + ℋ + τ - floor(v/(n + 1)) · [TTRT - ℋ - τ]

This result has been used to derive the exact X_i(H) expression, shown in the following theorem:

THEOREM 3: Under (1), X_i(H) is given by:

    X_i(H) = (m_i - 1)·H_i + max[ D_i - { m_i·TTRT + ℋ + τ - floor(m_i/(n + 1)) · [TTRT - ℋ - τ] - H_i }, 0 ]     (4)

where m_i is a positive integer which satisfies J(m_i - 1) ≤ D_i < J(m_i), and must be either m or m - 1, where

    m = floor[ ( D_i·(n + 1) + n·(TTRT - ℋ - τ) ) / ( n·TTRT + ℋ + τ ) ]

Theorem 3 is necessary for testing the deadline constraint (2) (or (3)). The X_i(H) expression previously derived by Chen et al. (1992b) is shown below:

    X_i(H) = (q_i - 1)·H_i + max{ 0, min[ r_i - ( Σ_{h=1,...,n; h≠i} H_h + τ ), H_i ] }     (5)

where q_i = floor(D_i / TTRT) and r_i = D_i - q_i·TTRT. Expression (4) differs from (5) by taking into account the calculation of the available time X_i(H) for each stream S_i with D_i < 2·TTRT. The X_i(H) expression (4) is better than that given by (5) in the sense that, for any particular allocation and any given length of the message deadline, more available time for transmitting synchronous messages may be calculated, which increases the possibility of satisfying the deadline constraints. Consequently, testing the deadline constraints with the exact X_i(H) expression (4) may make an allocation which was deemed infeasible in the past (when judged by the X_i(H) given in (5)) become feasible for the same message set considered. Example 1 below illustrates this.

EXAMPLE 1: Consider the following simple synchronous message set with P_i = D_i (i = 1, 2):

    Stream 1: C_1 = 36, P_1 = 300
    Stream 2: C_2 = 24, P_2 = 300

For simplicity, assume that TTRT = 50 and τ = 0. By applying the Proportional Allocation (PA) scheme, defined as H_i = (C_i/P_i)·(TTRT - τ), the allocation H = (H_1, H_2) = (6, 4) is produced. This allocation H is feasible since it satisfies the protocol constraint (1), and the deadline constraint (3) when judged using the exact X_i given by (4). That is, this message set can be guaranteed by the PA scheme.

But the above allocation H might be wrongly supposed to be infeasible, because it fails to meet the deadline constraint (3) when X_i is calculated by (5). The rationale behind this is as follows. When judged by the previously derived upper bound (see Theorem 1), each node i may receive the token, and then use H_i, only five times in the worst case during P_i. Hence the deadline constraint apparently cannot be satisfied for either of the two synchronous message streams. However, when judged by the tighter upper bound (see Theorem 2 and Corollary 1), the token visits each node i at least seven times, and H_i can be used for transmitting synchronous messages at least seven times during P_i, even in the worst case. Therefore, the deadline constraints are met by the same allocation H = (H_1, H_2) = (6, 4). □
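Example 1 can be checked numerically. The sketch below implements expressions (4) and (5) as quoted above; the function names are illustrative, and the selection of m_i relies on the theorem's statement that m_i is either m or m - 1 and satisfies J(m_i - 1) ≤ D_i < J(m_i).

```python
# X_i(H) per Theorem 3, expression (4); H_sum plays the role of the
# aggregate synchronous bandwidth written ℋ in the text.
def x_exact(i, D, H, ttrt, tau):
    n, h_sum = len(H), sum(H)
    slack = ttrt - h_sum - tau
    J = lambda v: v * ttrt + h_sum + tau - (v // (n + 1)) * slack
    m = (D * (n + 1) + n * slack) // (n * ttrt + h_sum + tau)
    m_i = m if J(m - 1) <= D < J(m) else m - 1     # m_i is m or m - 1
    # The braced term in (4) equals J(m_i) - H_i:
    return (m_i - 1) * H[i] + max(D - (J(m_i) - H[i]), 0)

# X_i(H) per Chen et al. (1992b), expression (5).
def x_chen(i, D, H, ttrt, tau):
    q, r = D // ttrt, D % ttrt                     # q_i and r_i
    others = sum(H) - H[i]
    return (q - 1) * H[i] + max(0, min(r - (others + tau), H[i]))

H, ttrt, tau, D = (6, 4), 50, 0, 300               # Example 1 data
print([x_exact(i, D, H, ttrt, tau) for i in range(2)])  # → [42, 28]  (both >= C_i)
print([x_chen(i, D, H, ttrt, tau) for i in range(2)])   # → [30, 20]  (both < C_i)
```

The output matches the prose: (5) credits each node only five uses of H_i during P_i (30 < 36 and 20 < 24, so H looks infeasible), whereas (4) credits seven uses (42 ≥ 36 and 28 ≥ 24), so the same allocation is in fact feasible.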

5. SCHEDULABILITY ANALYSIS

Testing the schedulability of synchronous message sets is an important issue, and deciding the schedulability of a given synchronous message set is a challenging problem. Most reported work focuses on determining the schedulability of a synchronous message set which is actually schedulable; few results have been obtained to help determine the unschedulability of an unschedulable synchronous message set. In this section, a general analysis of the schedulability of synchronous message sets (including those with Dmin < 2·TTRT) is presented.

Usually, there are two methods which help determine the schedulability of synchronous message sets. One is to find a feasible allocation (say, through an SBA scheme) for the message set considered. However, failure to find a feasible allocation does not mean that the given message set is unschedulable (unless the allocation is produced by an optimal SBA scheme). The other method is to compare the utilisation factor of the message set with the WCAU of a scheme, since, by definition of the WCAU, an SBA scheme can always guarantee synchronous message sets whose utilisation factors are no more than its WCAU. But the schedulability of a message set whose utilisation factor exceeds the WCAU cannot be determined through this method.

In order to guarantee synchronous message deadlines, it is clear that each node i should get the chance of transmitting its synchronous messages at least once during D_i. Because the token rotation time is always bounded by 2·TTRT, and in any time interval no less than 2·TTRT each node i is guaranteed to use H_i at least once (as long as (1) holds), the restriction D_min ≥ 2·TTRT has generally been treated as a necessary condition for a synchronous message set to be guaranteed. As a result, almost all previous studies on guaranteeing synchronous message deadlines with the timed token protocol have been conducted under D_min ≥ 2·TTRT.
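The bound underlying this restriction (at most 2·TTRT between consecutive token arrivals, and at least ⌊t/TTRT⌋ − 1 guaranteed complete uses of H_i in any interval of length t) can be expressed as a one-line helper. This is a sketch of the classic result used by the local allocation schemes below, not code from the paper:

```python
import math

def guaranteed_uses(t, ttrt):
    """Worst-case number of complete uses of H_i that node i is guaranteed
    in any time interval of length t (classic timed-token bound)."""
    return max(math.floor(t / ttrt) - 1, 0)

# With D_i >= 2*TTRT the node is guaranteed at least one use before the
# deadline; below 2*TTRT the bound degenerates to zero guaranteed uses.
print(guaranteed_uses(2.0, 1.0))   # 1
print(guaranteed_uses(1.5, 1.0))   # 0
```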

Zhang and Burns (1995) studied the schedulability of synchronous message sets with D_min < 2·TTRT. While the restriction D_min ≥ 2·TTRT has generally been accepted as a necessary condition for a synchronous message set to be guaranteed, their study shows that this is not always the case. Some synchronous message sets with D_min < 2·TTRT may be guaranteed when the synchronous bandwidths are properly allocated under Σ_i H_i < TTRT − τ. Consider the following example:

EXAMPLE 2: Consider the following synchronous message set M with P_i = D_i, i = 1, 2, 3:

C_1 = (TTRT − τ)/8;  C_2 = (TTRT − τ)/12;  C_3 = (TTRT − τ)/24

P_1 = P_2 = P_3 = TTRT + (TTRT − τ)/4 + τ

The utilisation of the message set M is:

U(M) = Σ_{i=1}^{3} C_i/P_i = C_1/P_1 + C_2/P_2 + C_3/P_3

     = [ (TTRT − τ)/8 + (TTRT − τ)/12 + (TTRT − τ)/24 ] / [ TTRT + (TTRT − τ)/4 + τ ]

     = (TTRT − τ) / (5·TTRT + 3·τ)

     < (1 − α)/3   (where α = τ/TTRT)

Although the utilisation of the message set M is less than (1 − α)/3, which is a lower bound on the WCAU of the Normalised Proportional Allocation (NPA) scheme (Agrawal et al., 1994), the Local Allocation (LA) scheme (Agrawal et al., 1993) and the MCA scheme (Chen et al., 1992b), none of these schemes can produce a feasible allocation for this message set. The LA and MCA schemes cannot offer a feasible allocation because they are inapplicable without the restriction D_min ≥ 2·TTRT on the message sets to be guaranteed. However, a feasible allocation does exist for the given message set: for example, the allocation H = (H_1, H_2, H_3) = (C_1, C_2, C_3), which can be produced by any of the EMCA scheme (Zhang and Burns, 1994b), the ILA scheme (Zhang and Burns, 1995), the NLA scheme (Zhang, 1995), and even the Full Length Allocation (FLA) scheme (whose WCAU is only 0% (Agrawal et al., 1994)). Due to space limitations, the feasibility test of the allocation H = (C_1, C_2, C_3) is left to the reader as an exercise. □
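The arithmetic in Example 2 is easy to check numerically. The values TTRT = 1 and τ = 0.05 below are illustrative choices, not taken from the paper:

```python
TTRT, tau = 1.0, 0.05            # illustrative values; alpha = tau/TTRT
alpha = tau / TTRT

C = [(TTRT - tau) / 8, (TTRT - tau) / 12, (TTRT - tau) / 24]
P = [TTRT + (TTRT - tau) / 4 + tau] * 3

U = sum(c / p for c, p in zip(C, P))

# U(M) simplifies to (TTRT - tau)/(5*TTRT + 3*tau), which is below (1 - alpha)/3.
assert abs(U - (TTRT - tau) / (5 * TTRT + 3 * tau)) < 1e-12
print(U < (1 - alpha) / 3)       # True: under the NPA/LA/MCA utilisation bound

# The full-length allocation H_i = C_i satisfies the protocol constraint (1),
# since sum(C_i) = (TTRT - tau)/4 <= TTRT - tau.
print(sum(C) <= TTRT - tau)      # True
```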

Because some synchronous message sets with D_min < 2·TTRT may be schedulable, Zhang and Burns suggest removing the restriction D_min ≥ 2·TTRT, to avoid a schedulable synchronous message set with D_min < 2·TTRT being simply ignored. In (Zhang, 1995), it is shown that for synchronous message sets with D_min < 2·TTRT, the WCAU of any SBA scheme is 0%. All the previously reported non-zero values of the WCAU (for different SBA schemes) were derived under D_min ≥ 2·TTRT. Therefore, the schedulability of message sets with D_min < 2·TTRT can never be determined by the second method; the only possible way for this kind of message set is the first method.

For synchronous message sets with D_min ≥ 2·TTRT, some may be judged schedulable using the WCAU of an SBA scheme. Some proposed SBA schemes for guaranteeing synchronous message sets with message (relative) deadlines equal to periods are listed in Table 1, together with their derived WCAU. Some notes on the table are given below:

• The SBA schemes listed are:
  FLA: Full Length Allocation (Agrawal et al., 1994)
  PA: Proportional Allocation (Agrawal et al., 1994)
  EPA: Equal Partition Allocation (Agrawal et al., 1994)
  NPA: Normalised Proportional Allocation (Agrawal et al., 1994)
  LA: Local Allocation (Agrawal et al., 1993)
  MCA: Minimum Capacity Allocation (Chen et al., 1992b)
  ILA: Improved Local Allocation (Zhang and Burns, 1995)
  NLA: New Local Allocation (Zhang, 1995)
  EMCA: Enhanced Minimum Capacity Allocation (Zhang and Burns, 1994b)

• Let q_i = ⌊P_i/TTRT⌋; the NLA scheme is defined as:

  H_i = C_i,  if TTRT < P_i < 2·TTRT
  H_i = C_i/(q_i − 1) − max{ [P_i − ((q_i + 1)·TTRT − C_i/q_i)] / (q_i − 1), 0 },  if P_i ≥ 2·TTRT    (6)

• "≥" (in the WCAU column) means "no less than".
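The closed-form rows of Table 1 transcribe directly into per-stream functions. A hedged sketch; the function names and uniform parameter list are illustrative conventions (n is the number of nodes, U the total utilisation used by NPA), not an API from the paper:

```python
import math

# Closed-form rows of Table 1, one function per scheme, for a stream (C_i, P_i).
def h_fla(C, P, TTRT, tau, n, U): return C                          # FLA
def h_pa(C, P, TTRT, tau, n, U):  return (C / P) * (TTRT - tau)     # PA
def h_epa(C, P, TTRT, tau, n, U): return (TTRT - tau) / n           # EPA
def h_npa(C, P, TTRT, tau, n, U): return ((C / P) / U) * (TTRT - tau)  # NPA
def h_la(C, P, TTRT, tau, n, U):  return C / (math.floor(P / TTRT) - 1)  # LA
def h_ila(C, P, TTRT, tau, n, U): return C / max(math.floor(P / TTRT) - 1, 1)  # ILA

# LA is undefined for P < 2*TTRT (division by zero); ILA is not, which is
# the inapplicability issue discussed above for sets with D_min < 2*TTRT.
print(h_ila(0.2, 1.5, 1.0, 0.05, 3, 0.4))   # 0.2: divisor clamped to 1
```

MCA, EMCA and NLA are omitted here because they are algorithmic or given by (6) rather than a single closed form.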

As to the first method of schedulability testing, satisfying the deadline constraint is especially challenging, since this may involve a tedious and complex analysis (Agrawal et al., 1994). Fortunately, the exact X_i(H) expression (4), which takes into account the calculation of the available time for a message stream S_i with D_i < 2·TTRT, has been derived and can thus be used to test the deadline constraints for any message set, including those with D_min < 2·TTRT. Therefore, with (4), one can decide whether an SBA scheme can guarantee a message set by checking whether the allocation produced by the scheme meets both (1) and (2). It should be noted that, for any given synchronous message set, testing whether a specific allocation (whichever scheme produces it) is feasible is comparatively easier than finding a feasible allocation. For some schedulable synchronous message sets, feasible allocations could be very difficult to find.
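A feasibility test in the spirit of checking (1) and (2) can be sketched as follows. Since the exact X_i(H) expression (4) is not reproduced in this excerpt, the sketch substitutes the simpler classic lower bound X_i ≥ (⌊D_i/TTRT⌋ − 1)·H_i, which applies only for D_i ≥ 2·TTRT and is therefore more conservative than the test the text describes:

```python
import math

def feasible(H, msgs, TTRT, tau):
    """Conservative feasibility test of an allocation H = (H_1, ..., H_n).

    msgs: list of (C_i, D_i) pairs. Checks the protocol constraint (1),
    sum(H_i) <= TTRT - tau, and a deadline constraint (2) using the classic
    bound X_i >= (floor(D_i/TTRT) - 1) * H_i (valid only for D_i >= 2*TTRT).
    """
    if sum(H) > TTRT - tau:                      # constraint (1)
        return False
    for h, (c, d) in zip(H, msgs):
        if (math.floor(d / TTRT) - 1) * h < c:   # constraint (2), conservative
            return False
    return True

# Illustrative: two streams with D_i = 5*TTRT, hence 4 guaranteed uses each.
print(feasible([0.3, 0.2], [(1.0, 5.0), (0.6, 5.0)], TTRT=1.0, tau=0.1))  # True
```

Replacing the bound with the exact X_i(H) of (4) would extend the test to message sets with D_min < 2·TTRT, as the paper argues.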

Generally speaking, the above two methods help determine the schedulability mainly for schedulable


Table 1. Summary of the SBA Schemes

Name   Formula/algorithm for calculating H_i            P_min ≥ 2·TTRT required?   WCAU (P_min ≥ 2·TTRT)

FLA    H_i = C_i                                        No                         0
PA     H_i = (C_i/P_i)·(TTRT − τ)                       No                         0
EPA    H_i = (TTRT − τ)/n                               No                         (1 − α)/(3·n − (1 − α))
NPA    H_i = ((C_i/P_i)/U)·(TTRT − τ)                   No                         (1 − α)/3
LA     H_i = C_i/(⌊P_i/TTRT⌋ − 1)                       Yes                        (1 − α)/3
MCA    See (Chen et al., 1992b) for the algorithm       Yes                        ≥ (1 − α)/3
ILA    H_i = C_i/max(⌊P_i/TTRT⌋ − 1, 1)                 No                         (1 − α)/3
NLA    See (6)                                          No                         n·(1 − α)/(3·n − (1 − α))
EMCA   See (Zhang and Burns, 1994b) for the algorithm   No                         ≥ n·(1 − α)/(3·n − (1 − α))

(In the NPA row, U denotes the total utilisation Σ_j C_j/P_j.)

synchronous message sets (unless the allocation considered in the first method is produced by an optimal SBA scheme). These two methods may fail to determine the schedulability of some synchronous message sets when the allocations produced by non-optimal SBA schemes are not feasible and the utilisation factor of these message sets exceeds the WCAU (of the SBA schemes considered). Since an optimal SBA scheme named EMCA (Zhang and Burns, 1994b) for guaranteeing synchronous message sets with P_i = D_i has recently been developed, the schedulability of synchronous message sets with P_i = D_i can now be determined by the first method. However, for some synchronous message sets with arbitrary deadline constraints (say, P_i ≠ D_i), determining their schedulability could be very difficult, if not impossible. Developing an optimal SBA scheme for synchronous message sets with arbitrary deadline constraints is both highly desirable and challenging.

6. CONCLUSIONS

Testing the schedulability of synchronous message sets is a key issue in the study of guaranteeing synchronous message deadlines in FDDI networks where the timed token protocol is used. In this paper, a general analysis of the schedulability of synchronous message sets is presented. While the schedulability of a synchronous message set with message deadlines equal to periods can now be effectively determined, finding effective means to test the schedulability of a general synchronous message set with arbitrary deadline constraints is still a challenging open problem.

1 10

REFERENCES

Agrawal, G., B. Chen and W. Zhao (1993). Local Synchronous Capacity Allocation Schemes for Guaranteeing Message Deadlines with the Timed Token Protocol. INFOCOM '93.

Agrawal, G., B. Chen, W. Zhao and S. Davari (1994). Guaranteeing Synchronous Message Deadlines with the Timed Token Medium Access Control Protocol. IEEE Trans. Computers, 43(3), 327-339.

Chen, B. and W. Zhao (1992a). Properties of the Timed Token Protocol. Tech. Rept. 92-038, Computer Science Dept., Texas A&M University.

Chen, B., G. Agrawal and W. Zhao (1992b). Optimal Synchronous Capacity Allocation for Hard Real-Time Communications with the Timed Token Protocol. Proc. 13th IEEE Real-Time Systems Symposium, pp. 198-207.

Hamdaoui, M. and P. Ramanathan (1992). Selection of Timed Token MAC Protocol Parameters to Guarantee Message Deadlines. Tech. Rept., Department of Electrical and Computer Engineering, University of Wisconsin-Madison.

Johnson, M.J. (1987). Proof that Timing Requirements of the FDDI Token Ring Protocol are Satisfied. IEEE Transactions on Communications, COM-35(6), 620-625.

Lim, C.C., L. Yao and W. Zhao (1992). Transmitting time-dependent multimedia data in FDDI networks. Proc. SPIE Int'l Symp., OE/FIBERS'92, pp. 144-154.

Malcolm, N. and W. Zhao (1994). The Timed-Token Protocol for Real-Time Communications. Computer, 27(1), 35-41.

Sevcik, K.C. and M.J. Johnson (1987). Cycle Time Properties of the FDDI Token Ring Protocol. IEEE Trans. Software Eng., SE-13(3), 376-385.

Zhang, S. and A. Burns (1994a). Timing Properties of the Timed Token Protocol. Tech. Rept. YCS 243, Dept. of Computer Science, Univ. of York.

Zhang, S. and A. Burns (1994b). EMCA - An Optimal Synchronous Bandwidth Allocation Scheme for Guaranteeing Synchronous Message Deadlines with the Timed Token Protocol in an FDDI Network. Tech. Rept. YCS 244, Dept. of Computer Science, University of York.

Zhang, S. (1995). An Efficient and Practical Local Synchronous Bandwidth Allocation Scheme for Deadline Guarantees of Synchronous Messages in an FDDI Network. Tech. Rept. YCS 255, Dept. of Computer Science, University of York.

Zhang, S. and A. Burns (1995). On the Schedulability of Synchronous Message Sets with the Minimum Message Deadline Less than 2·TTRT in FDDI Networks. Tech. Rept. (to be submitted), Dept. of Computer Science, Univ. of York.

Zheng, Q. (1993). Synchronous Bandwidth Allocation in FDDI Networks. Proc. ACM Multimedia '93, pp. 31-38.


Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

HETEROGENEOUS PROTOTYPING FOR DISTRIBUTED REAL-TIME SYSTEMS

A. ALONSO, J.C. DUENAS, G. LEON, M. de MIGUEL and A. RENDON*

ETSI Telecomunicación, Universidad Politécnica de Madrid, Ciudad Universitaria, E-28040 Madrid, Spain. aalonso@dit.upm.es, jcduenas@dit.upm.es, gonzalo@dit.upm.es, [email protected], arendon@dit.upm.es

Abstract. The purpose of this paper is to present some extensions to the IDERS environment, in order to deal with the development of distributed real-time systems. The IDERS method and tools are intended to support the user in the development of embedded monoprocessor real-time systems. This environment is based on a risk-oriented process model. The development of the systems relies on a set of heterogeneous and incremental prototypes that can be animated in order to assess the system behaviour. In this paper, an extension of this toolset for supporting the development of distributed real-time systems is proposed. This extension includes the modeling of the communication media, in order to have information on the whole system behaviour. Special emphasis is placed on including mechanisms for assessing the system timing requirements.

Key Words. Distributed computer control systems, real-time computer systems, prototyping, software tools.

1. INTRODUCTION

The complexity of computer control systems is continuously increasing. Requirements such as timing, computation power, physical distribution of the controlled systems and fault tolerance imply that these control applications have to be conceived as distributed real-time systems. The basic model under consideration is composed of a set of loosely coupled processors; intertask communication is performed by message passing.

Due to their inherent difficulty, the design and implementation of these kinds of systems is still a research matter. In particular, topics such as real-time operating systems, communication media, resource scheduling, design methods, etc., are subject to study by different research groups around the world. This paper focuses on the IDERS¹ method and toolset and its adaptation for supporting the development of distributed computer control systems.

¹ The IDERS project is partially funded by the European Communities under the ESPRIT programme, project no. EP8593. IDERS stands for Integrated Development Environment for Embedded Real-Time Systems with Complete Product and Process Visibility.
* Alvaro Rendon is a visiting professor from Universidad del Cauca, Colombia, and his work is sponsored by the Colombian Institute for Science and Technology Development (COLCIENCIAS) and the IDERS project (ESPRIT EP8593).

The IDERS environment (Alonso, 1995) is intended to assist in the development of embedded monoprocessor real-time systems. In particular, it provides support for this process by means of:

• complete visibility of evolving product and development process and

• integrated product development process and tools.

Heterogeneous and incremental prototyping, customization of notations and support for the development process are some of the basic features of the IDERS method and tools.

2. THE IDERS APPROACH

The Boehm (Boehm, 1988) spiral life cycle is a suitable framework for the development of complex systems. It is a risk-oriented software process model, whose main principle is to deal first with the riskiest parts, thus reducing uncertainties in the development process.

The IDERS method makes use of the spiral life cycle. It relies on the construction of a set of evolving and heterogeneous prototypes to deal with


the system complexity. In addition, the IDERS method emphasizes the tracking of the temporal requirements along the whole life cycle. This is a fundamental issue in the development of real-time systems that is not always sufficiently supported by development methodologies.

The strategies followed in the IDERS method to alleviate some of the problems in the development of industrial real-time systems are:

Animation of models: the method proposes to develop evolutionary models at the specification, design and implementation levels, able to be executed (or simulated). For the sake of easy validation, animation techniques are applied, enhancing the visibility of the system under development.

Smooth incremental transition: the transition from specification to design and code is done in a smooth way. For this domain, this means integrating different platforms. The tools support visualization of the structure and behaviour of heterogeneous system models. Testing and validation can be done incrementally throughout the whole process, in a risk-guided way.

Configurability: the notations used for building executable models can be configured or adapted to project-specific requirements, allowing the adaptation of the IDERS life cycle to several analysis and design notations.

Enactable process programming: this is proposed as a means to achieve full visibility of the development process. It makes it possible to execute "what-if" scenarios of specific development events and thus identify critical development factors before they occur.

The basic concept supporting the IDERS approach to software development is prototyping. This term can be defined as a technique that relies on the construction of a partial executable model of the system, aimed at evaluating specific aspects of the system and/or an implementation approach. In a few words, a prototype is an early working version of the future application system.

Another central concept in the development of systems with the IDERS method is heterogeneity. The development state of each subsystem in a complex real-time system is rarely uniform (risky parts are developed first). If the user wants to animate the whole system, it will be necessary to deal with a heterogeneous model of the system, i.e. different parts of a system may be described at different abstraction levels, and hence with different notations.


2.1. IDERS models

One of the most important issues in the development process is the set of abstraction levels the system has in each development cycle. The customization capabilities of IDERS support the adaptation of the notations to the specific needs of a project or organization. In the development of the project, the set of abstraction levels derived from the Structured Analysis for Real-Time Systems (SA/RT) development method (Ward et al., 1985) is used. These abstraction levels are similar to those used in other proposals and are the following:

• The Specification Models (SM), used to capture the user's system requirements in terms of its external interfaces and its behaviour from the user standpoint. There is no concern about computational resources, and timing requirements are mainly related to the response time to external events.

• The Design Models (DM), used to represent the decisions taken throughout the system design cycle. Both the system architecture and functionality are represented, considering the allocation of resources such as processors, memory, communication links, etc. Within DM, there are several sub-models used to describe the partition of the system into subsystems, the allocation of activities and data to processors and the interfaces between them, the software architecture inside each processor, and the code organization.

• The Implementation Models (IM), the lowest abstraction level, which describe the product implementation in terms of the programming code.

The abstractions handled in these descriptive levels are represented in a graphical language: SA-SD/RT. It is a hierarchical, graphical formalism, derived from Data Flow Diagrams, adapted to the specification (SM) and design (DM) of real-time systems. In the initial paper (Ward et al., 1985), the functionality included in each transformation is described in natural language. A more formal approach is presented in (Leon et al., 1993); it is based on specifying the transformations by means of the executable part of a data-oriented specification language, VDM-SL (Elmstrøm et al., 1993).

Another extension to the original work on SA-SD/RT was to provide clear, unambiguous semantics for the execution of models by means of the definition of the SA-SD/RT elements in terms of High Level Timed Petri Nets (HLTPN) (Felder et al., 1993). SA-SD/RT models are automatically translated to HLTPN and the animation is performed in terms of this notation. The firing


of transitions and the generation of tokens are translated backwards and presented to the user in terms of the original SA-SD/RT notation. Hence, the Petri Nets are completely hidden from the user.

This capability is helpful for the animation of heterogeneous models, as it allows them to be translated to a common notation. However, in this case the translation rules have to consider the possibly different nature of the models.

2.2. Animation of the prototypes

In the IDERS environment there are two kinds of animators: the Specification & Design Animator (SDA) and the Implementation Animator (IA).

The SDA is in charge of animating Specification and Design Models. The models are initially described using the SA-SD/RT notation. As mentioned above, models are then translated automatically into HLTPN, the animator kernel notation, and animated internally by executing the resulting net. Whenever a meaningful transition is fired, the result is mapped back to SA-SD/RT notation, to show the effect to the user.

DM are more interesting with respect to our purposes than SM. As DM consider the system resources, it is possible to perform a first analysis of the timing behaviour of the system. There are several DM submodels, the Software Environment Model (SEM) being the most important one. The level of detail of its constructs lets the user think about and define their model on the basis of operating system concepts: tasks, communication primitives, mechanisms to specify the execution order of tasks' transformations, timing primitives, etc. The constructs provided by the toolset for describing the SEM are closely related to operating system objects and primitives. Hence, in order to get realistic information from the animations, these constructs have to model the operating system of the target environment, and represent it in terms of DM notations.

The Implementation Animator is responsible for animating the final code in the host environment. The IA has to model the target operating system and input/output devices. The timing of the animation has to be related to the final system execution in order to get meaningful results. This implies that the target operating system has to be simulated in the IA, and the simulated time must correspond as closely as possible with the execution time on the target system. The implementation code is partially executed, and its timing is simulated, using for this purpose estimations or measures obtained for that code on the target computer.


This kind of animation is very useful when developing embedded real-time systems. A common situation is that the final hardware is not available while the software is being developed. In addition, by simulating the execution of code it is possible to make preliminary tests of timing properties that would be much more difficult to perform on the target.

2.3. Animation of heterogeneous models

IDERS gives complete visibility of the evolving monoprocessor systems. In some development stages, the system description may be heterogeneous, i.e. the system is described partly with the specification notation, partly with the design one, and partly with the implementation language. So, in order to animate the whole description, it is necessary to deal with different notations and semantics. The heterogeneous animation provides the user with information on the timing and functional behaviour of the complete system.

The work currently being performed only considers heterogeneous descriptions using Design and Implementation Models. The Specification Models assume unlimited resources; therefore, neither the links to other models nor the intended results of including such a model in heterogeneous prototypes are clear.

The next question is how to communicate DM and IM. It was necessary to look for a view of the system shared by both kinds of models, with common objects for supporting their interactions. One suitable approach found was the use of operating system objects. Both kinds of models rely on them: DM use constructs that model the underlying operating system, and IM perform primitive calls to use operating system services.

A further step was the identification of the task as the granularity unit for partitioning and interacting. This implies that a task in a heterogeneous prototype must be described in terms of only one notation, i.e. it cannot itself be heterogeneous, and it must be part of either a DM or an IM. The interactions between heterogeneous models are mainly related to the intertask communication operations.

An additional component, called the Animation Adaptor, is responsible for ensuring consistent animations of heterogeneous models. Some of its functions are:

• managing information for ensuring consistent scheduling decisions. At any time, the task eligible according to the scheduling policy must be active. This implies that either a task in the IM or one in the DM will be animated, but not both at once.


• translating the information that represents intertask communications between heterogeneous tasks and managing the communication objects to allow consistent operations. Hence, the blocking/unblocking semantics of these objects must be accomplished in a distributed way.

• ensuring consistent management of time. It should not happen that an event that occurs at a certain time is notified to an animator with a greater simulation time.

A view of the connections of the components involved in heterogeneous animation is depicted in figure 1.

Fig. 1. Components in a heterogeneous animation: the Design Animator and the Implementation Animator, connected through the Animation Adaptor.

3. HETEROGENEOUS PROTOTYPES OF DISTRIBUTED SYSTEMS

3.1. Distributed real-time systems

The requirements of an increasing number of applications force them to be conceived as distributed real-time systems. Their development presents engineers with a number of challenging problems, which are the central subject of an important worldwide research effort.

Distributed real-time systems present additional problems compared with monoprocessor systems. Some of the most relevant are the following:

• End-to-end deadlines: In monoprocessor systems, timing requirements are usually translated into deadlines associated with individual tasks; the goal is then to ensure that these are met by the corresponding tasks. On the other hand, when dealing with distributed systems, the designer is not usually concerned with deadlines of individual tasks, but with end-to-end deadlines that refer to the time elapsed from the moment that an event fires an activity until it is finished. This activity is usually associated with a transaction, which involves the execution of several tasks in different processors and the generation and delivery of messages to allow their cooperation. Hence, the system has to be implemented and scheduled in such a way that the global deadlines are met. This implies associating partial deadlines with the relevant tasks and messages.

• Scheduling of the communication media: A common approach when dealing with monoprocessor systems is to consider that the only shared resource is the processor; the other resources are dedicated or are accessed within critical regions. This approach is no longer valid when dealing with distributed systems: at least the communication media has to be scheduled, since this resource is obviously shared among a number of processors. As mentioned above, in order to meet global deadlines, it is necessary to predict the communication delays of the real-time messages. Hence, it is necessary to use deterministic communication protocols that allow the message delivery to be scheduled within deadlines.

• Allocation of tasks to processors: Another problem to consider is how to allocate tasks to processors in order to optimize the performance of the system, while ensuring the fulfillment of the timing requirements.
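One simple way to obtain the partial deadlines mentioned in the first point is to split an end-to-end deadline across the steps of a transaction in proportion to their estimated worst-case costs. This proportional policy is an illustrative sketch, not one prescribed by the paper:

```python
def split_deadline(end_to_end, costs):
    """Assign each step of a transaction (task execution or message
    delivery) a partial deadline proportional to its estimated cost.

    Returns cumulative deadlines measured from the triggering event;
    the last entry always equals the end-to-end deadline.
    """
    total = sum(costs)
    acc, deadlines = 0.0, []
    for c in costs:
        acc += end_to_end * (c / total)
        deadlines.append(acc)
    return deadlines

# Transaction: task on CPU A -> message on the bus -> task on CPU B.
print(split_deadline(100.0, [20.0, 5.0, 25.0]))  # cumulative: ~[40, 50, 100]
```

Meeting each partial deadline (tasks via processor scheduling, the message via a deterministic protocol such as the timed token one analysed above) then implies meeting the global deadline.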

There is currently a lack of software engineering methods and tools that take the above problems into account. The goal of the extensions presented here is to provide the user with a framework for supporting the management of these problems during the development of distributed real-time systems.

3.2. Extensions to the IDERS environment

Taking as the starting point the previously introduced IDERS capabilities for the development and animation of heterogeneous models of monoprocessor real-time embedded systems, our goal is to extend them to provide a similar toolset focused on distributed real-time embedded systems.

The IDERS functionality allows the user to prototype and animate the set of tasks that are executed on a certain processor. It is possible to check that the system's temporal and functional behaviour follows the stated requirements. In order to extend it to distributed systems, some additional elements need to be included, especially those related to modeling the communication media. The proposed architecture is depicted in figure 2. It shows a typical configuration for the animation of a distributed system modeled as a set of heterogeneous processor prototypes plus a communication media model.

The main elements in the architecture for supporting the animation are:


Fig. 2. Components of the animation of a heterogeneous model of a distributed system: a Time Manager, a Communication Adaptor, Design/Implementation Animators, an Information Manager and a Distributed Models Adaptor, on top of the animation support layer.

Communication Adaptor: The communication media model is in charge of receiving messages from the connected processors and delivering them, taking into account the media protocol and transmission delays. It is simulated and represented as another prototype.

Animation Adaptor: This module manages the exchange of information between the set of tasks allocated to a single processor. In addition to the functionality mentioned in section 2.3, this module has to receive and send the interprocessor messages, using the services provided by the Communication Adaptor.

Distributed Models Adaptor: This module manages the information required to ensure the consistent animation of the complete prototype. In addition to the application messages, it is necessary to deal with timing information. In particular, the time advance in the different processor prototypes has to be synchronized: it should not happen that a message arrives at a processor with a time earlier than the current simulation time at that processor.

The subsystem depicted as "Information Manager" takes care of model configuration and profiling.

The main characteristics of this new architecture for the prototyping and animation of distributed real-time systems are:

1. The tools architecture is structured in a hierarchical manner. This policy allows a certain degree of optimization of the number of messages interchanged in the animation: each layer of adaptors (Animation Adaptors, Distributed Models Adaptors) only deals with information interchanged within a limited scope.

2. The animation is performed on a synchronous basis. This means that a single common clock is provided for the system under animation. Thus, if data coherence is maintained, the causal relationships between events are preserved; moreover, the animation is realistic in the sense that the mapping between events and animation time in the model is the same as that between events and real time in the system (if the model is correctly built, of course). This allows, for example, testing the global scheduling of processors and communication resources. The use of discrete-event simulation mechanisms to support the animation allows it to be performed efficiently.

3. The communication media can be modeled at different abstraction levels. As communication objects are considered another part of the global model, they can use the facilities provided by the heterogeneous prototyping toolset. Hence, the model that represents the connection between processor models can be described at the same kinds of abstraction levels: specification (without resource requirements), design, or even procedural code (using, for example, the real implementation code that handles connections). This heterogeneous approach to handling the communication media may be useful for dealing with devices and protocols that are being developed in parallel with the rest of the system, or for using simplified models in the early development stages.
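The synchronous animation policy of point 2 (a single global clock, with events dispatched in timestamp order so that no animator ever observes a message from its past) can be sketched with a minimal discrete-event loop. All names here are illustrative, not IDERS APIs:

```python
import heapq

class GlobalClock:
    """Single clock shared by all processor prototypes: events are
    dispatched strictly in timestamp order, which preserves the causal
    relationships between events across animators."""
    def __init__(self):
        self.now, self.queue, self.seq = 0.0, [], 0

    def post(self, time, target, payload):
        # An event in the past would break the time-consistency rule above.
        assert time >= self.now, "event scheduled before current simulation time"
        heapq.heappush(self.queue, (time, self.seq, target, payload))
        self.seq += 1            # tie-breaker: equal-time events stay FIFO

    def run(self):
        while self.queue:
            self.now, _, target, payload = heapq.heappop(self.queue)
            target(self.now, payload)

clock = GlobalClock()
log = []
# Two "animators" exchanging a message through a media model (2.0-unit delay).
cpu_b = lambda t, msg: log.append((t, "B", msg))
media = lambda t, msg: clock.post(t + 2.0, cpu_b, msg)
clock.post(1.0, media, "sensor-update")
clock.run()
print(log)   # [(3.0, 'B', 'sensor-update')]
```

The media model here is just a fixed delay; under the architecture above it could equally be a full protocol prototype, at any abstraction level, posting events through the same clock.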

The proposed environment will allow the user to deal with some of the specific problems faced

Page 118: Distributed computer control systems 1995 (DCCS ¹95)

when develop ing distribu ted real-time systems. Jn

the IDERS toolset , i t is possi ble to check the . fu lfi l lment of the t iming req u i rements. The ex­tended envi ronment should easi ly h a ndle end- to­encl deadli nes, by defining a transaction . tracking the execution of its tasks and messages, and cal­cul a t i ng its global behaviour.

The facilities provided by the extended environment will show whether the capacity of the individual processors and the communication media is sufficient to successfully schedule the corresponding tasks and messages. An important advantage is that the user may have some information on these issues early in the development process, based on initial estimates of task processor usage and message flows. In this way it is possible to check whether a given allocation of tasks to processors is feasible.
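A first-cut feasibility check of the kind described, based on initial estimates of task processor usage, can be sketched as follows. The Liu and Layland rate-monotonic utilization bound is used here purely as an illustration; it is not stated in the paper, and the task figures are invented.

```python
# Quick schedulability screen for a proposed task-to-processor allocation
# (illustrative sketch; uses the Liu & Layland bound U <= n*(2^(1/n)-1)).

def utilization(tasks):
    """tasks: list of (computation_time, period) pairs."""
    return sum(c / t for c, t in tasks)

def rm_bound(n):
    """Rate-monotonic utilization bound for n periodic tasks."""
    return n * (2 ** (1.0 / n) - 1)

def allocation_feasible(allocation):
    """allocation: processor -> list of (C, T) tasks mapped onto it."""
    return all(utilization(ts) <= rm_bound(len(ts))
               for ts in allocation.values() if ts)

# Two estimated tasks on cpu1, one on cpu2 (numbers are invented estimates).
alloc = {"cpu1": [(1, 4), (1, 5)], "cpu2": [(2, 10)]}
print(allocation_feasible(alloc))   # True: both processors pass the bound
```

A check like this only screens processor capacity; the communication media would need an analogous bound on message traffic.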

4. CONCLUSIONS

The IDERS environment is currently being developed, and the heterogeneous prototyping facilities are being applied in the development of some industrial embedded real-time systems for demonstration purposes. In this abstract, it is shown how to extend this environment to support the development of distributed real-time systems. In the framework of heterogeneous prototyping, communication media models interact with user application models regardless of their maturity degree (specification, design and implementation). Benefits already obtained in other areas by using animation and early validation are thus applied to the field of distributed real-time systems. The first steps towards the development of the proposed extensions are being taken, and it is planned to make a similar use of the distributed environment to assess its viability in practice.


ACKNOWLEDGMENTS

The authors wish to acknowledge the collaboration of the other members of the IDERS consortium in the development of this work.

The IDERS consortium is formed by IFAD (Denmark), VTT (Finland), POLIMI (Italy), MARI (UK), Alenia (Italy), RNT (Finland), Lattice (UK), and Universidad Politecnica de Madrid (Spain).

REFERENCES

Alonso, A., L. Baresi, H. Christensen and M. Heikkinen (1995). IDERS: An Integrated Environment for the Development of Hard Real-Time Systems. 7th Euromicro Workshop on Real-Time Systems, IEEE Computer Society Press.

Boehm, B. (1988). A Spiral Model of Software Development and Enhancement. Computer, 21(5), 61-72.

Elmstrøm, R., Lassen, P.B. and Andersen, M. (1993). An Executable Subset of VDM-SL, in an SA/RT Framework. Real-Time Systems, 5(2/3), 197-211.

Felder, M., Ghezzi, C. and Pezze, M. (1993). High-Level Timed Petri Nets as a Kernel for Executable Specifications. Real-Time Systems, 5(2/3), 235-248.

Leon, G., de la Puente, J.A., Duenas, J.C., Alonso, A. and Zakhama, N. (1993). The IPTES Environment: Support for Incremental, Heterogeneous and Distributed Prototyping. Real-Time Systems, 5(2/3), 153-171.

Ward, P.T. and Mellor, S.J. (1985). Structured Development for Real-Time Systems. Yourdon Press, Englewood Cliffs, NJ.


Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

DEPENDABLE DISTRIBUTED COMPUTER CONTROL SYSTEMS : ANALYSIS OF THE DESIGN STEP ACTIVITIES

F. Simonot-Lion*, J.P. Thomesse*, M. Bayart** and M. Staroswiecki**

* CRIN - CNRS URA 262, ENSEM

2, avenue de la Forêt de Haye, F-54516 Vandœuvre-lès-Nancy

E-mail: {simonot, thomesse}@Loria.fr

** LAIL - CNRS URA 1440

Bâtiment P2 - Cité scientifique, F-59650 Villeneuve d'Ascq Cedex

E-mail: {bayart, staroswiecki}@univ-lille1.fr

Abstract: The paper sets out the problem of the design step in the context of automated production systems and presents a model of this activity. The design step is located after the specification step and before the implementation step proper. The problem is to ensure that the partitioning of the specified application into intercommunicating modules, and the allocation of the modules to computing assets and communication systems, will satisfy the different constraints expressed in the end-user requirements and in the specification.

Keywords: design process, real-time systems, validation

1. INTRODUCTION

An automated production system is composed of three entities: the physical process equipment, the human operator(s) and the computer-based control system. This paper focuses on the design of the last one¹.

On the one hand, digital technology is spreading; this comes with a favourable price-to-performance ratio for computer-based components and with the growth of their reliability.

On the other hand, the goals of the control system are extended to supervision, maintenance, quality management and other functions. Control systems thus become more complex and sophisticated, and applications are distributed over several heterogeneous computers (PLCs, sensors, actuators, ...) with their operating systems, connected by means of different networks and their protocols.

¹ This paper presents the conclusions of a work supported by the French "Ministère de l'Enseignement Supérieur et de la Recherche". The study was carried out by a team drawn from research laboratories and industry: "Impact de l'émergence des réseaux de terrains et de l'instrumentation intelligente dans la conception des architecture de systèmes d'automatisation de processus", Convention 92-P-0239, appel d'offre Productique-Robotique 1992 du Ministère de l'Enseignement Supérieur et de la Recherche.

Moreover, because of the context of their use and of economic considerations, these control systems are subject to stringent constraints expressed in terms of:

• safety (people, physical equipment, environment, ...),
• quality of products, quality of services,
• productivity, ...

and more generally in terms of dependability constraints, in particular time constraints.

Furthermore, the development process of such applications must respect cost constraints. Their design therefore consists first of defining an eligible system and secondly of producing the best one according to one or several criteria.

In part two, the development process is analysed and the importance of the design step is demonstrated. The three models of the system used at this step are then presented in the third part; in the fourth, the elementary activities which must be executed in order to construct the solution are listed.

At present there is no single formal method able to support the design step. Nevertheless, methods based on well-known models and formalisms may be used to solve part of the problem; some of them are discussed in the last section.

2. THE DESIGN STEP OF DISTRIBUTED CONTROL SYSTEMS

There are many steps and activities involved in building a distributed control system. The order in which they must be performed defines the system development process (Ghezzi et al., 1991; Calvez, 1992). In Fig. 1 a model of this process is proposed; it presents the scheduling of well-identified activities.

Fig. 1. Model of the development process

The control system specification is assumed here to proceed disregarding the distribution problem. At this step the functions to be accomplished by the system are refined, the different flows (data, control) between the functions are identified, and some properties, such as the correctness of the specification (absence of deadlock, for example), can already be proved.

At the opposite end of the development cycle, control system integration, it is necessary to ensure that the control system which is built, first, exhibits the same properties and respects the dependability constraints and, second, is the best one according to one and/or several criteria. Two keywords are expressed here:

• Validation. This means that the partitioning of the specified application into intercommunicating modules and the allocation of the modules to computing assets and communication systems must satisfy the different constraints.

• Optimisation. This implies:
- minimising the number of feedback loops along the development process, especially those which result from a bad selection of physical components;
- avoiding the purchase of over-dimensioned computers or networks, or the systematic resort to redundancy, which increases the complexity of the overall system.

For economic reasons it is better to perform this validation before the realisation of the system. The design step is therefore the key stage in the development process, because at this point it is decided which computers and networks will support the application.

3. MODELS OF THE SYSTEM AT THE DESIGN STEP

To perform the design step, three points of view on the system are necessary. The functional architecture is produced by requirements analysis and specification; it is an entry point of the design step. The operational architecture expresses the result of the design activity; it is elaborated by a proposed mapping of the functional architecture onto a physical architecture defined and dimensioned during this activity.

3.1. The functional architecture

It models the result of the specification step expressed in a rigorous and formal way; this model is defined disregarding the choice of computers and networks, and the distribution problem. It describes:
- the elementary functions that the control system must perform (called "atoms" in this document),
- the set of data exchanged between the functions (data flows),
- the behaviour of the set of functions (control flows).


Each of these elements is characterised by attributes, possibly subject to constraints and/or considered as optimisation criteria.

The functional architecture is obtained by a logical decomposition into atoms. Note that the physical decomposition is one of the elementary design activities.

An atom is defined as the smallest entity, representing a processing operation, which is not refined further. It can be mapped onto one and only one computer and cannot be activated partially.

Models to describe functional architectures and/or atoms are proposed in (Brown and Mc Lean, 1987; Verlinde et al., 1989; Simonot-Lion and Verlinde, 1992; Staroswiecki et al., 1994).

3.2. The physical architecture

It is composed of:
- a set of computers (PLCs, sensors, actuators, ...) with their operating systems,
- a set of communication networks, with their protocols,
- the definition of the connections of the different computers to the networks.

Each entity of this architecture is characterised by the resources it provides (memory size, number of simultaneous connections allowed, ...) and by its temporal performance. This characterisation is essential for verifying some properties; for example, it can be proved that a given physical architecture respects cost constraints or interoperability constraints.

Other verifications will be carried out on the physical architecture with regard to the functional architecture mapped onto it; that is, on the operational (or operative) architecture.

3.3. The operational architecture

It is obtained by mapping the different elements of the functional architecture onto a physical architecture.

The optimal and validated operational architecture is the result of the design stage. "Optimal" refers to the best architecture in terms of one or several criteria; validation means that it conforms to the end-user requirements.


4. A DESIGN PROCESS MODEL

To analyse the design activity and to try to define a method or a methodology supporting this step, it is necessary to make some more or less strong hypotheses.

The defined model is first presented. Then the industrial constraints for control system development are studied, and finally these new needs are taken into account in a more realistic approach.

4.1. A simplified approach

Fig. 2 shows the different elementary activities to be carried out during the design step; let us suppose here that the control system design does not refer to previous systems.

These activities can be organized in two main classes:

a- selection and dimensioning of physical components:
- selection of computers and operating systems able to support atoms of the functional architecture;
- dimensioning of computers (processor, memory size, input/output devices, co-processors, ...);
- selection of networks able to support the data and control flows specified between atoms mapped onto different computers;
- dimensioning of networks; this is done by traffic analysis (data and control flows in the functional architecture);
- definition of the architecture: which computer on which network(s);
- validation of the physical architecture.

b- mapping of the functional architecture onto a physical architecture:
- partitioning of the set of atoms; though this activity treats only functional entities, it depends strongly on the physical components. It can be derived from analysis of data flows, required memory, atom functionalities, ...;
- allocation of the atoms of a partition to a computer;
- instantiation of an allocated atom in terms of a procedure or task, ..., and of a control flow in terms of scheduling, ...;
- global validation of the operational architecture deduced from the previous activities; that is, checking that this result meets all the required properties.
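The allocation activity listed above can be illustrated with a naive first-fit sketch. The atom names, memory figures and computer capacities are entirely invented; a real design would also weigh data flows, timing and operating-system constraints.

```python
# Naive first-fit allocation of atoms to computers by memory requirement
# (illustrative sketch only; real allocation must also consider flows,
# timing and operating-system constraints).

def allocate(atoms, computers):
    """atoms: {name: memory_needed}; computers: {name: memory_capacity}.
    Returns {atom: computer}, or None if no eligible allocation is found."""
    free = dict(computers)
    mapping = {}
    for atom, need in sorted(atoms.items(), key=lambda kv: -kv[1]):
        target = next((c for c, cap in free.items() if cap >= need), None)
        if target is None:
            return None            # no eligible solution: feedback loop needed
        free[target] -= need
        mapping[atom] = target     # an atom maps to exactly one computer
    return mapping

atoms = {"regulation": 64, "supervision": 128, "logging": 32}
computers = {"plc1": 128, "plc2": 96}
print(allocate(atoms, computers))
```

When `allocate` returns None, this corresponds to the feedback loops mentioned in section 2: the selected components must be revised.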


Fig. 2. The design process activities

Defining an optimal and validated operational architecture involves the co-operation of different activities:
- each one is not strictly independent of the others;
- the activities cannot be ordered; their planning depends on the order in which constraints are taken into account by the designer.

Furthermore, each of these activities leads to a partial and/or temporary result:
- partial: a selected computer is just a part of the operational architecture;
- temporary: a partial result which is validated at a given date may no longer be correct after a later activity.

This analysis rests on strong hypotheses; for example, the absence of existing partial solutions is assumed. This does not fit the industrial context. In order to extend the approach presented above, some remarks can first be made.



4.2. Top-down versus bottom-up approach

Two main strategies can be followed for the development of an operational architecture; the steps mainly concerned are specification and design:

- The strictly top-down approach: the functional architecture is built by decomposition of the system into subsystems; these are themselves refined into smaller subsystems, and so on, until a set of atoms, data and control flows is obtained. Then the physical and operational architectures must be deduced from this given functional architecture.

- Conversely, a bottom-up strategy can be used: partial operational architectures can be identified as providing the services associated with one (or several) given function(s) of the functional architecture. The design of the global operational architecture would then consist of identifying these partial operational architectures, which would be iteratively assembled to form the result.


In fact, it is possible and often convenient to combine these two strategies for different parts of the system and/or at different points in the development process.

• Indeed, the characteristics of the functional architecture are helpful in selecting physical components.

• For complex systems, the definition of a functional architecture is usually done by refinement; a main problem is to specify the condition under which the decomposition process can be stopped. This can be based on knowledge of some particular partial operational architecture (for example, intelligent devices).

• The design of an operational architecture often adds functions which are not expressed in the functional architecture, such as functions implemented in bridges, gateways, etc. They must be taken into account in the validation activities.

• A partial operational architecture can offer possibilities which are not considered in a given functional architecture. The designer can decide to use these new possibilities and to add new functions to the initial system. For example, intelligent devices are often able to store and timestamp information; the specifier may exploit these opportunities in order to introduce diagnostic and/or maintenance functions into the initial functional architecture.

More generally, the development process must take into account the use of partial operational architectures, the reuse of partial physical and operational architectures, and the evolution of existing operational architectures (evolution of computers and/or networks, or of the control system's aims).

5. VALIDATION MODELS USED AT THE DESIGN STEP

The different components of a distributed application may be validated independently, but the complete application must itself be validated. This section focuses on some formalisms and models concerned with the validation of the behaviour of the operational architecture.

It is known that protocols may be validated at the design stage, thanks to various formal methods based on languages, state-transition models and logics. Their performance may be evaluated using queueing system models, stochastic Petri nets, etc. Their implementation is tested using conformance testing standards and tools.

But this is not sufficient to prove that the communication system is well suited to a given application. Therefore, interoperability tests have been introduced (Benkhellat et al., 1992).

We have to distinguish between interoperability of the communication proper and interoperability of the application itself, which is sometimes called interworking. Communication and application interoperability may be seen as validations of the physical and the operative architectures, respectively.

Application interoperability testing is often seen as a stage of the integration step in the development of an application. But it has been shown that communication interoperability depends on the application (Benkhellat and Thomesse, 1994), and it would be interesting to prove both at the design stage.

Some proposals to do this are based on Petri nets (Juanole and Atama, 1991; Rodriguez and Ladet, 1995) or on HMS machines (Philippe et al., 1993). It is necessary to model the communication profile and the distributed application in a similar way. The validation then concerns the operational architecture.

Benkhellat (1994) proposed distinguishing four kinds of interoperability (IOP), for communication as well as for application. They are called service IOP, protocol IOP, time IOP and resources IOP.

• Service IOP is the property that the communicating entities provide the right services for the application. Example: a server provides the services required by a client.

• Protocol IOP is the property that the protocols are compatible with regard to all their options and parameters. Example: the maximum frame lengths are identical in all the implementations concerned.

• Time IOP is the property that, from the timing point of view, the entities meet the time constraints. Example: the response times of stations under a controlled medium access control are compatible.

• Resources IOP is the property that the resources needed by the application are provided by the physical architecture. Example: the number of simultaneous connections possible on a station is sufficient to support the needs of its application processes.
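The resources-IOP example can be made concrete with a small check. Station names and connection counts below are invented for illustration; the paper itself gives no such data.

```python
# Resources-IOP check (illustrative): the simultaneous connections a station
# provides must cover what its application processes need.

def resources_iop(provided, needed):
    """provided/needed: station -> number of simultaneous connections.
    Returns the list of stations whose resources are insufficient."""
    return [s for s, need in needed.items() if provided.get(s, 0) < need]

provided = {"station_a": 8, "station_b": 2}
needed = {"station_a": 5, "station_b": 4}
print(resources_iop(provided, needed))   # ['station_b']: 2 < 4 connections
```

An empty result means the physical architecture satisfies this particular IOP property; any listed station is a candidate for replacement or re-dimensioning.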


These tests or validations may be done at the design stage if all the application processes and the components of the physical architecture are characterised in this way. Formal methods such as logic programming or systolic networks, for example, may be used to model these components and to obtain proof that they may, or must, interoperate.

The validation of a distributed application may be done in several stages:
• validation of the physical architecture independently of the application; this is restricted to the proof of a minimal capability to interoperate, and non-interoperating stations must be discarded;
• validation of the operational architecture, which is also the final validation of the physical one.

6. CONCLUSIONS

In this paper, the design stage activities have been briefly analysed. Several actions, which may or may not be ordered, must be completed at this stage. They depend on the chosen components, which belong not only to the physical architecture but also to the operational one.

The validation of an operational architecture is a test of the interoperability of all the elements: stations, networks, and distributed atoms and flows. This validation relies on a formal modeling of all these elements, a task which may be highly complex. This approach may not be strictly applicable to very large systems, but it can be applied to their critical parts. That is a challenge for future work, which must proceed through well-suited modeling of the pertinent aspects of devices, covering both physical and functional points of view.

7. REFERENCES

Benkhellat, Y. and J.P. Thomesse (1994). Validation of Timing Properties for Interoperability in Distributed Real Time Applications. Proceedings of the International IFIP Symposium on Protocol Specification, Testing and Verification XIV (S.T. Vuong and S.T. Chanson, Eds), 331-338. Chapman & Hall, Vancouver (Canada).

Benkhellat, Y., M. Siebert and J.P. Thomesse (1992). Interoperability of Sensors and Distributed Systems. Sensors and Actuators, 2, 247-254.

Brown, P.F. and C.R. Mc Lean (1987). The architecture of the NBS factory automation research testbed. Proceedings IFAC 87, Munich (Germany).

Calvez, J.P. (1992). Embedded Real-Time Systems: A Specification and Design Methodology. John Wiley.

Ghezzi, C., M. Jazayeri and D. Mandrioli (1991). Fundamentals of Software Engineering. Prentice Hall International Editions.

Juanole, G. and Y. Atama (1991). Functional and Performance Analysis using Extended Time Petri Nets. International Workshop on Petri Nets and Performance Models, Madison (USA).

Philippe, C., A. Khalfallah and F. Simonot-Lion (1993). Specification and Validation of Time Constraints with the HMS Machines Model. Proceedings 7th Annual European Computer Conference on Computers in Design, Manufacturing and Production, Paris (France).

Rodriguez, A. and P. Ladet (1995). Validation d'applications de contrôle-commande réparties. Cas de la répartition sur le bus de terrain FIP. Actes des journées d'étude SAPID, Paris (France).

Simonot-Lion, F. and C. Verlinde (1992). Importance d'un cadre de référence dans la mise en place d'une démarche de développement d'un système automatisé de production. Actes de la Conférence Automatisation Industrielle, 1, Montréal (Canada).

Staroswiecki, M., M. Bayart and J. Akaichi (1994). Distribution of Intelligent Automated Production: A Clustering Approach. Proceedings of the IFAC Integrated Systems Engineering Conference, 377-382, Baden-Baden (Germany).

Verlinde, C., E. Georgel and J.P. Thomesse (1989). Hierarchical and Functional Architecture for Control Systems. Proceedings IECON'89, Philadelphia (USA).


Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

DEADLOCK PREVENTION IN A DISTRIBUTED REAL-TIME SYSTEM.

O.H.Roux*, P.Martineau**

*LAN (URA n°823), ECN, Université de Nantes, 1 rue de la Noë, 44072 Nantes Cedex 03, France.

**LESTER / IUT de Lorient, 10 rue Jean Zay, 56100 Lorient, France

Abstract: Deadlock is a circular wait condition that can occur in any multiprogramming, multiprocessing, or distributed computer system. This article focuses on the second, third and fourth models of Knapp's hierarchical classification, called the AND-model, the OR-model and the AND/OR-model. In the OR-model, a task is blocked until it is granted any one of the resources it has requested. The classical solution for the AND-model, the Priority Ceiling Protocol (PCP), is extended to the AND/OR-model. To extend this solution to distributed systems, the Rajkumar solution, which consists of using a global super-priority, is chosen.

Keywords: Deadlock, Prevention, Inheritance Protocol, Distributed Computer Control System, Real Time, Scheduling Algorithms.

1. INTRODUCTION

Real-time systems have timing constraints imposed on their computations, such as ready times, deadlines and periods of execution. The correctness of a result from a real-time system depends not only on the logical computation carried out but also on the time at which the result is delivered. An untimely result may be of little or no use. If the proper functioning of a system depends on strict adherence to its timing requirements, it is called a hard real-time system.

Many real-time systems have a fixed number of periodic tasks that perform well-defined computations. All these tasks may execute in a concurrent system. Guaranteeing that all hard deadlines are met is one of the most important issues in designing hard real-time systems. To meet hard real-time requirements, systems must use scheduling algorithms which can guarantee predictable response times for all tasks. These tasks (or processes) can request shared resources in an order not known a priori; a task can request some resources while holding others. This introduces the danger of eternal delays. The problem is inherited from the strategy whereby processes may hold resources exclusively and acquire further resources in the course of processing. This phenomenon, first discovered in operating systems, has been named deadlock. Deadlock (Knapp, 1987) is a circular wait condition that can occur in any multiprogramming, multiprocessing, or distributed computer system that uses locking, if resources are requested when needed while already-allocated resources are still being held. In a real-time system, deadlock is unacceptable and has to be prevented. In such an environment, we propose to control the sequence of resource allocation to processes in order to prevent deadlock.

Section 2 defines the different models of deadlock and the different strategies for dealing with it. Section 3 recalls prevention solutions proposed in the last 10 years. Section 4 presents an extension to distributed systems.

2. MODEL

2.1 A basic example

A deadlock situation can occur as soon as two tasks request two resources. The resources are critical, i.e. at any time at most one task can access and lock a resource. Let us define T1 and T2, two tasks that request R1 and R2, two resources treated as variables.


Fig. 1 : A basic deadlock between two tasks.
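The circular wait of Fig. 1 can be detected mechanically by building a wait-for graph and searching it for a cycle. The following is a minimal sketch of that idea (task and resource names are illustrative, not from the paper):

```python
# Minimal wait-for-graph cycle detection (illustrative sketch).
# Edge: waiting task -> task holding the requested resource.

def has_cycle(wait_for):
    """Return True if the wait-for graph contains a cycle (circular wait)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {t: WHITE for t in wait_for}

    def visit(t):
        color[t] = GRAY
        for u in wait_for.get(t, ()):
            if color.get(u, WHITE) == GRAY:
                return True          # back edge: circular wait found
            if color.get(u, WHITE) == WHITE and visit(u):
                return True
        color[t] = BLACK
        return False

    return any(color[t] == WHITE and visit(t) for t in list(color))

# T1 holds R1 and waits for R2 (held by T2); T2 waits for R1 (held by T1).
print(has_cycle({"T1": ["T2"], "T2": ["T1"]}))   # True: situation of Fig. 1
print(has_cycle({"T1": ["T2"], "T2": []}))       # False: T2 can proceed
```

This is the mechanism behind the detection strategy discussed in section 2.3; the prevention protocols of section 3 aim to make such a cycle impossible in the first place.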

2.2 The Knapp classification

Knapp (1987) classified the deadlock problem into a hierarchy of six models reflecting the complexity of a particular deadlock problem. Each model is characterized by the restrictions imposed on the form resource requests can assume. The hierarchical set of deadlock models ranges from very restricted request forms to models with no restrictions whatsoever.

Single Resource Model : The simplest possible model is one in which a task can have at most one outstanding resource request at a time. This model is widely used in theoretical studies of database systems.

AND model : In the AND model, tasks are permitted to request a set of shared resources. A task is blocked until it is granted all the resources it has requested. A shared resource is not available for exclusive use until all its shared lock holders have released the lock.

OR model : In contrast to the AND model, an alternative way for making resource requests is the OR model (Chandy, 1983). In this model, a task is blocked until it is granted any of the resources it has requested.

AND/OR model : The AND/OR model is a generalization of the two previous models. A task in the AND/OR model may specify resources in any combination of AND and OR requests. For example, a task may request resources (R1 or R2) and R3.

C(n,k) Model : The C(n,k) model was first formulated by Bracha and Toueg (1984) as the k-out-of-n request. This model allows the specification of requests to obtain any k available resources out of a pool of size n. Every request in the C(n,k) model can be expressed in the AND/OR model, but the converse is false. Example : for a pool of n=3 resources {R1, R2, R3},

C(3,2) = (R1 and R2) or (R1 and R3) or (R2 and R3) ;

R1 or (R2 and R3) cannot be expressed in the C(n,k) model.
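The expansion above can be checked mechanically. In the sketch below the nested-tuple request representation is our own, not from the paper: it tests whether a set of granted resources satisfies an AND/OR request, and expands a C(n,k) request into its equivalent AND/OR form.

```python
from itertools import combinations

# A request is a resource name or a nested tuple:
# ("and", r1, r2, ...) / ("or", r1, r2, ...), where each r may itself nest.

def satisfied(request, granted):
    """True if the granted resource set satisfies the AND/OR request."""
    if isinstance(request, str):
        return request in granted
    op, *parts = request
    test = all if op == "and" else any
    return test(satisfied(p, granted) for p in parts)

def c_n_k(pool, k):
    """Expand a k-out-of-n request into an equivalent OR-of-ANDs."""
    return ("or", *[("and", *combo) for combo in combinations(pool, k)])

req = c_n_k(["R1", "R2", "R3"], 2)  # (R1 and R2) or (R1 and R3) or (R2 and R3)
print(satisfied(req, {"R1", "R3"}))   # True: two of the three are granted
print(satisfied(req, {"R2"}))         # False: only one resource granted
# The converse direction: an AND/OR request with no C(n,k) equivalent.
print(satisfied(("or", "R1", ("and", "R2", "R3")), {"R2", "R3"}))  # True
```

The last request, R1 or (R2 and R3), is representable here but not as any single C(n,k), which illustrates why the converse inclusion fails.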


Unrestricted model : In the most general model, no underlying structure of resource requests is assumed.

2.3 Strategies

Principally, there are three strategies for dealing with the deadlock problem :

1. Deadlock detection

2. Deadlock avoidance

3. Deadlock prevention

In the first strategy, the system is allowed to enter a deadlock state. Once a deadlock is formed, it persists until it is detected and recovered from. The deadlock detection computation can proceed concurrently with the normal activity of the system.

The last two strategies ensure that a system will never enter a deadlock. Deadlock avoidance is inefficient because checking for a safe state is computationally expensive. Most proposals for prevention require each process to specify all the resources it needs before its critical sections begin.

3. MONOPROCESSOR ARCHITECTURE

Since resource sharing cannot be eliminated, synchronization primitives are used to ensure that resource consistency constraints are not violated. A classical use of commonly used synchronization primitives can lead to uncontrolled priority inversion, i.e. a situation in which a higher-priority task is blocked by lower-priority tasks for an indefinite period of time. Several protocols have therefore been proposed in the last eight years to prevent priority inversion.

For the AND-model, the Priority Ceiling Protocol (PCP) (Rajkumar, 1991) is now a classical solution. The goal of this protocol is to prevent the formation of deadlocks and multiple blockings. The underlying idea consists in ensuring that when a task preempts the critical section of another task and executes its own critical section, the priority at which this new critical section executes is guaranteed to be higher than the inherited priorities of all preempted critical sections. The AND/OR-model may leave a choice of executing one critical section out of two, and these two critical sections may have significantly different durations. The classical solution amounts to taking the more expensive critical section into account in order to prevent any deadlock.

In this section, we briefly review the resource control protocols : SUP, PCP (DPCP) and SRP. In this paper, it is assumed that every shared resource in a system is guarded. The locking and unlocking of a resource by a task defines a critical section in which the shared resource is accessed exclusively by the task.


3.1 State of the Art

The Superpriority (SUP) : The superpriority protocol (Kaiser 1982, Mok 1987) is based on non-preemptable critical sections. If a high priority task arrives when a low priority task is in a critical section, the high priority task will be blocked; the lower priority task temporarily inherits the priority of the high priority task and continues its execution until it finishes the critical section. After that, the low priority task resumes its original priority and becomes preemptable again. In such a system, a task can be blocked only before it starts its execution, and there can be only one active critical section at any time in the system.

The Priority Ceiling Protocol (PCP) : PCP is designed for systems using a static priority scheduler (such as Rate Monotonic (RM) for periodic tasks, which assigns every task a fixed priority according to its period). A priority ceiling is defined for every resource as the priority of the highest priority task which may lock the semaphore (resource). Using p(T) to denote the priority of task T and ceil(R) to denote the priority ceiling of resource R, the protocol can be stated as :

A task T requesting to lock a resource R is granted the lock only if p(T) > ceil(R') where R' is the resource with the highest priority ceiling among all resources currently locked by tasks other than T.
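The grant condition above can be sketched as follows, assuming integer priorities where larger means higher; the helper name `pcp_grant` and all values are invented for illustration.

```python
# Sketch of the PCP grant test quoted above, assuming integer priorities
# (larger = higher); the helper name pcp_grant and all values are invented.

def pcp_grant(task_prio, locked_by_others, ceil):
    """Grant the lock only if p(T) > ceil(R'), where R' has the highest
    priority ceiling among resources locked by tasks other than T."""
    if not locked_by_others:
        return True
    return task_prio > max(ceil[r] for r in locked_by_others)

ceil = {"R1": 3, "R2": 8}                 # priority ceilings per resource
print(pcp_grant(8, {"R1"}, ceil))         # True: 8 > ceil(R1) = 3
print(pcp_grant(2, {"R2"}, ceil))         # False: 2 < ceil(R2) = 8
```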

The Dynamic Priority Ceiling Protocol (DPCP) : DPCP is based on the ED (Earliest Deadline) algorithm. As in PCP, a priority ceiling is defined for every shared resource. The ceiling value of a resource at any time t is the priority of the highest priority task that may lock the resource at or after t.

It has already been shown that PCP is not suitable for ED (Martineau 1994): it incurs twice the overhead of the SRP described below.

The Stack Resource Policy (SRP) : SRP can be used with either the RM or the ED algorithm (Baker 1990). Using SRP, the scheduling of tasks is based on their priorities, whether static or dynamic. In addition to its priority, every task is also assigned a fixed preemption level according to its period. A high priority task can only preempt a low priority task if its preemption level is also higher.
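The SRP preemption condition just described can be sketched minimally; the integer priorities and preemption levels below are invented (larger = higher).

```python
# Minimal sketch of the SRP preemption condition described above, with
# invented integer priorities and preemption levels (larger = higher).

def srp_can_preempt(prio, level, running_prio, running_level):
    # A task preempts the running task only if BOTH its priority and
    # its preemption level are strictly higher.
    return prio > running_prio and level > running_level

print(srp_can_preempt(5, 2, 3, 1))   # True
print(srp_can_preempt(5, 1, 3, 2))   # False: higher priority, lower level
```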

The classical solution for the AND-model, the Priority Ceiling Protocol (PCP) (Rajkumar 1991), can be extended to the AND/OR-model.

3.2 Prevention in the AND/OR model

In the following example, let us define two tasks T1 and T2. The priority of T2 is higher than the priority of T1 ( p(T2) > p(T1) ). The available times of T1 and T2 are respectively r1 and r2 (r2 > r1).


program of T1 :
begin
  request ( ( R1 and R2 ) or R3 ) then
  begin
    set of statements a ;
    request ( R4 ) then
    begin
      set of statements b ;
    end ;
  end ;
end ;

program of T2 :
begin
  request ( R4 ) then
  begin
    set of statements c ;
    request ( R3 ) then
    begin
      set of statements d ;
    end ;
  end ;
end ;

If the first request of T1 locks R3, the execution of T1 and T2 leads to a deadlock.

Fig 2 : Deadlock (T1 holds R3 and waits for R4 ; T2 holds R4 and waits for R3).

As in the PCP, to prevent the formation of deadlocks, a priority ceiling is defined for every resource as the priority of the highest priority task which may lock the resource :

p(Ri) = max { p(Tj) | Tj can lock Ri }

Definition 1 : A request is an expression whose operands are resources and whose operators are and and or. This expression can be developed as follows : term1 or term2 or ... or termn, where

termi = Ri1 and Ri2 and ... and Rim.

Each term is a potential solution of the request.

Definition 2 : The priority of a term is :

p(termi) = max { p(Rij) | j = 1..m }.

Definition 3 : When a task locks a term (all the elements of the term), its priority becomes :

Pt(T) = max ( Pt-1(T), p(term) )

Let us go back to the previous example. Depending on the selected term of the request of T1, the tasks can be scheduled as follows :


Fig 3 : First request of T1 locks R3.

Fig 4 : First request of T1 locks R1 and R2.

In either case, deadlocks are prevented, but the first schedule presents a disadvantage. When T1 locks R3, the priority of T1 becomes equal to the ceiling priority of R3 (p(R3) = p(T2)); then T2 cannot preempt T1.

So the execution of the tasks is sequential, and the highest priority task is delayed by a lower priority task until the latter releases the resources.

This problem does not appear if the first request of T1 locks R1 and R2 (Fig 4), because the priority of the term R1 and R2 is lower than that of R3. The ceiling priority inherited by T1 is then lower than the priority of T2, which is not delayed.

So, let us define a rule for selecting the term that will be locked by the request primitive.

Rule 1 : In a request, the term with the lowest ceiling priority among all free terms is selected.

Consequently, the inherited priority of a task is defined as follows :

Pt(T) = max { Pt-1(T), min over i = 1..n { p(termi) | termi is free } }.
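Definitions 1-2 and Rule 1 can be sketched as follows; the data structures and function names are invented, and the ceiling values are those of the three-task example below.

```python
# Hypothetical sketch of Rule 1: a request is a list of terms (each term
# a tuple of resources); among the free terms we pick the one with the
# lowest ceiling priority. Names and values are invented.

def term_priority(term, ceil):
    # Definition 2: the priority of a term is the highest ceiling
    # priority among its resources.
    return max(ceil[r] for r in term)

def select_term(request, ceil, locked):
    # Rule 1: a term is free when none of its resources is locked;
    # select the free term with the lowest ceiling priority.
    free = [t for t in request if not any(r in locked for r in t)]
    if not free:
        return None
    return min(free, key=lambda t: term_priority(t, ceil))

# Ceilings from the three-task example: p(R1)=3, p(R2)=8, p(R3)=3, p(R4)=8.
ceil = {"R1": 3, "R2": 8, "R3": 3, "R4": 8}
request = [("R1",), ("R2",)]          # the request "R1 or R2" of T1
print(select_term(request, ceil, locked=set()))   # ('R1',): lowest ceiling
```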

Example : Let us consider three tasks T1, T2 and T3 defined as follows :

Priority : p(T1) = 2 ; p(T2) = 3 ; p(T3) = 8

Available time : r3 > r2 > r1.


program of T1 :
begin
  set of statements a ;
  request ( R1 or R2 ) then
  begin
    set of statements b ;
    if the locked resource is R1 then
    begin
      request ( R3 ) then
        set of statements c ;
    end ;
    if the locked resource is R2 then
    begin
      request ( R4 ) then
        set of statements d ;
    end ;
  end ;
end ;

program of T2 :
begin
  request ( R3 ) then
  begin
    set of statements e ;
    request ( R1 ) then
      set of statements f ;
  end ;
end ;

program of T3 :
begin
  request ( R4 ) then
  begin
    set of statements g ;
    request ( R2 ) then
      set of statements h ;
  end ;
end ;

Fig 5 : Execution with extended PCP and request selection criteria (T1 : dotted ; T2 : solid ; T3 : dashed).

Without prevention, deadlocks may occur between T1 and T2 or between T1 and T3. The ceiling priorities of resources R1, R2, R3 and R4 are respectively 3, 8, 3 and 8. This example shows that the association of the extended PCP with our request selection criteria leads to deadlock prevention and to a reduction of the blocking time of the highest priority tasks. The first request of T1 thus locks R1: the priority of T1 rises to 3 instead of 8, as it would have had it locked R2. When task T3 becomes available, it can lock R2 immediately.

Conclusion : An extension of the PCP to the AND/OR model which guarantees that the system never enters a deadlock situation has been proposed. A criterion has also been developed to select the term of the request's expression that will be locked, in order to reduce the serialization of the execution and the blocking time of the highest priority tasks.

4. DISTRIBUTED SYSTEMS

In order to extend the solution to distributed systems, the approach of Rajkumar (Rajkumar 1990), which consists in the use of a global superpriority, is presented. The superpriority guarantees that a global critical section cannot be preempted by any other task.

In a distributed system, the distinction between local resources and global resources is necessary : on the one hand, a local resource may be locked by several tasks that execute on one processor (it can be protected by the previous protocol); on the other hand, a global resource may be locked by tasks on different sites. The protection of these resources requires a global protocol.

4.1 The Concept of Remote Blocking

Synchronization requirements can introduce much longer blocking delays when tasks execute in a multiprocessor environment. On any given processor P, a task can be preempted by higher priority tasks on P, blocked by lower priority tasks on P, and, in addition, wait for tasks of any priority on remote processors to release required global resources. The determination of whether a task meets its deadlines must therefore consider not only preemption and blocking on its local processor but also the waiting time introduced by tasks of any priority on all remote processors (Rajkumar 1990). We shall refer to the latter as remote blocking.

A resource that is accessed by tasks allocated to different processors is referred to as a global resource, and a critical section which includes access to a global resource is referred to as a global critical section. Similarly, a resource that is accessed by tasks allocated to a single processor is called a local resource, and a critical section which includes access to local resources (and only local resources) is referred to as a local critical section. Note that if there are no global resources in the system, the multiprocessor synchronization problem decomposes into multiple uniprocessor problems, and the uniprocessor priority ceiling protocol, for example, can be used very effectively on each processor.

Example : Consider normal prioritized execution without priority inheritance in effect. Suppose that task T1 is bound to processor 1, and that tasks T2 and T3 are bound to processor 2. As shown in Figure 6, suppose that T1 is executing on processor 1 and wants to lock resource R, but R is currently locked by task T3 executing on processor 2. Task T1 is now said to encounter remote blocking.

Fig 6 : Remote blocking.

Previous work (Rajkumar 1990) has shown that the remote blocking time of a task blocked on a global critical section is a function of critical section durations if and only if the global critical section cannot be preempted by tasks that are executing outside critical sections. From this rule it follows that the priority ceiling of a local resource R is defined as the priority of the highest priority task that may lock R, while the priority ceiling of a global resource is defined to be greater than the highest priority assigned to any task in the entire system.

Let the highest priority assigned to any task in the entire system be denoted by PH. Since the priority ceiling of a global resource must be higher than the priority of any task, the priority ceiling of any global resource can be defined as :

ceil(R) = PH + max { p(Tj) | Tj requests R, j = 1..n }

where the first part of the expression guarantees that the ceiling priority is greater than any basic task priority and the second part preserves the relative relations between resources.
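A numeric sketch of this definition follows; PH, the task priorities and the requester set are invented, with PH = p(T3) = 8 so that the result matches the 2·p(T3) value of the text's example.

```python
# Numeric sketch of the global ceiling definition above; PH, the task
# priorities and the requester set are invented, with PH = p(T3) = 8.

PH = 8                                  # highest priority in the system
prio = {"T1": 2, "T2": 3, "T3": 8}
requesters = {"Rg": ["T1", "T3"]}       # tasks that may request global Rg

def global_ceiling(resource):
    # ceil(R) = PH + max{ p(Tj) | Tj requests R }
    return PH + max(prio[t] for t in requesters[resource])

print(global_ceiling("Rg"))             # 16, i.e. 2 * p(T3) as in the text
```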

4.2 Application to AND/OR model

Let us go back to the previous example with this new rule. R is a global resource; when T1 locks this resource, it inherits the ceiling priority of R. This priority is equal to PH added to the basic ceiling priority of R, which gives 2·p(T3). This priority is higher than the priority of T2 and enables T1 to finish its critical section. When it finishes, it returns to its initial priority and is preempted by T2. Meanwhile, T3 can lock the resource and execute.


Fig 7 : Superpriority solution.

This example shows the efficiency of the superpriority protocol in preventing remote blocking : global critical sections are executed rapidly.

With the AND/OR model, our local solution is particularly well adapted. It results in a minimization of inter-processor transactions because, in case of a choice between a local and a global resource, our algorithm always selects the local resource, the priority ceiling of any global resource being always higher than that of any local resource. This solution answers the specifications of Chen and Lin (1991). In doing so, the global interference is decoupled into several local interferences so that the worst-case interference can be avoided. The effect of such partitioning is that the blocking sets of tasks can be changed so that the worst-case blocking length of some critical tasks can be improved.

5. CONCLUSION

This article proposed an extension to the AND/OR model of several AND-model protocols developed in centralized and distributed contexts. The extension of PCP to the AND/OR model prevents deadlocks, and the proposed associated criterion seeks to minimize the situations in which the highest priority tasks can be delayed. This deadlock prevention can be extended to a distributed system using the superpriority protocol. Such an extension leads to a minimization of inter-processor transactions: the priority ceiling of any global resource is always higher than the priority ceiling of any local resource.

Our solution can be used in association with a static priority scheduling algorithm (for example, Rate Monotonic (Serlin 1972)). A dynamic priority scheduler (for example, Earliest Deadline (Jackson 1955)) can raise problems such as the excessive overheads incurred with DPCP (Martineau 1994). The SRP protocol seems better adapted to this kind of algorithm, but the authorization of preemption is given at the beginning of execution instead of at the beginning of each critical section. At that time, it is impossible to know which resources will be free in the future, when the task requests them.


Our future investigations concern the adaptation of such protocols to dynamic priority schedulers. It is already clear that this will probably increase significantly the overheads and the transactions between nodes.

6. REFERENCES

Baker, T. (1990). "A Stack-Based Resource Allocation Policy for Realtime Processes", in Real-Time Systems Symposium, December, pp 191-200.

Bracha, G. and Toueg, S. (1984). "A distributed algorithm for generalized deadlock detection". Proc. ACM Symp. on Principles of Distributed Computing, pp 285-301.

Chandy, K.M., Misra, J. and Haas, L.M. (1983). "Distributed Deadlock Detection". ACM Transactions on Computer Systems, pp 144-156.

Chen, M.L. and Lin, K.J. (1991). "A priority ceiling protocol for multiple-instance resources", in Proceedings of the Real-Time Systems Symposium, San Antonio, Texas, Dec., pp 140-149.

Kaiser, C. (1982). "Exclusion mutuelle et ordonnancement par priorité", T.S.I., Vol 1, N°1.

Knapp, E. (1987). "Deadlock detection in Distributed Databases". ACM Computing Surveys, Vol 19, N°4, pp 303-328.

Jackson, J.R. (1955). "Scheduling a production line to minimize maximum tardiness". Research Report N°43, Management Science Research Project, University of California, Los Angeles.

Martineau, P. and Silly, M. (1994). "Scheduling in a Hard Real-Time system with shared resources", Proceedings of the 6th EUROMICRO Workshop on Real-Time Systems, June, pp 234-239.

Mok, A.K. (1987). "Programming Language Support for Distributed Real-Time Applications", Technical Report, Department of Computer Sciences, University of Texas at Austin.

Rajkumar, R. (1991). "Synchronization in Real-Time Systems : A Priority Inheritance Approach". Kluwer Academic Publishers.

Rajkumar, R. (1990). "Real-Time Synchronization Protocols for Shared Memory Multiprocessors". Proceedings of the 10th International Conference on Distributed Computing, pp 116-123.

Serlin, O. (1972). "Scheduling of time critical processes". In Proc. of the Spring Joint Computer Conference, pp 925-932.


Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

ANALYSIS OF THE IEEE 802.4 TOKEN PASSING BUS NETWORK WITH FINITE BUFFERS AND SINGLE PRIORITY

Jung Woo Park†, Deok-Woo Kim‡, and Wook Hyun Kwon†

† Dept. of Control and Instrumentation Engineering, Seoul National University, Seoul 151-742, KOREA

Tel: +82-2-873-2279, Fax: +82-2-885-6620, E-mail: jwpark@isltg.snu.ac.kr

‡ Woorigisool Inc., Yoopoong Bldg., 1595-1, Bongchun-dong, Kwanak-gu, Seoul 151-057, KOREA

Abstract. In this paper, the IEEE 802.4 token passing bus network is analyzed. It is assumed that all nodes have finite buffers, finite THT (Token Holding Time) and asymmetric loads. The priority mechanism is not considered. This paper derives the approximate matrix equation between the queue length distribution and the token rotation time in equilibrium state. Based on the matrix equation, the mean waiting time and the blocking probability are derived analytically. The analytic results are compared with simulation results in order to show that the deviations are small.

Keywords. IEEE 802.4; Token passing bus; Performance analysis

1. INTRODUCTION

The token passing mechanism is a method whereby a node that has frames to transmit can use the common communication medium only if it holds the token. When the token arrives, frames can be served according to the service rule of the token passing mechanism. The service rule can be divided into two types: the polling system and the timer-controlled token passing system. In polling systems, the number of frames which can be transmitted during one token hold is restricted by the total number of transmitted frames (Takagi, 1985). In timer-controlled token passing systems, the number of frames which can be transmitted during one token hold is restricted by the THT (Token Holding Time) (IEEE, 1985). When the total service time of the transmitted frames exceeds the THT, the token has to be passed to another node.

Timer-controlled token passing systems such as the IEEE 802.4 token passing bus network, which is widely used as an industrial network, have finite buffers and asymmetric loads. For instance, it is used in MAP (Manufacturing Automation Protocol) and Mini-MAP as the MAC (Medium Access Control) layer (MAP/TOP Users Groups, 1993). Therefore the analysis of timer-controlled token passing systems with finite buffers and asymmetric loads is important.

Because the analysis of the timer-controlled token passing system with finite buffers is very complex, many studies have tried to analyze it by using the existing analysis techniques of the polling system (Bhuyan, et al., 1989; Takine, et al., 1986). The results of these studies based on the analysis of polling systems have considerable errors and need much computation time in the finite buffer case. For example, the equations in (Takagi, 1991) and (Jung, 1991) require matrix calculations with $\sum_{j=1}^{N}\prod_{i=1,i\neq j}^{N}(Q_i+1)+\sum_{j=1}^{N}\prod_{i=1,i=j}^{N}(Q_i+1)$ and $\prod_{i=1}^{N}(Q_i+1)-1$ unknowns, respectively (N : number of nodes, Q_i : number of buffers).


There are some direct analyses of the timer-controlled token passing system which do not use the analysis techniques of the polling system. These analyses assumed single or infinite buffers and cannot be applied to systems with finite buffers (Takine, et al., 1986; Rego and Ni, 1988). Other analyses assumed that the timer-controlled token passing system had a symmetric or constant load, although finite buffers were considered (Colvin and Weaver, 1986; Jayasumana, 1988). In (Kim, et al., 1993), the performance of the timer-controlled token passing system with finite buffers and asymmetric loads was studied, but the results showed some deviations from real values, since that paper assumed, unrealistically, that frames could not be generated while a node held the token. The analysis of the timer-controlled token passing system with finite buffers and asymmetric loads where frames can be generated at all times is so complex that no exact results have been found so far.

This paper presents a performance analysis of the IEEE 802.4 token passing bus network with finite buffers, a single access class and asymmetric loads. It is also assumed, realistically, that frames can be generated at any arbitrary instant. The analytic results derived are the queue length distribution, the mean waiting time, and the mean token rotation time.

This paper is organized as follows. In Section two, the model considered in this paper is described. In Section three, the main results are explained. In Section four, the mean waiting time and the blocking probability are derived. Some numerical results are given in Section five. Conclusions are given in Section six.

2. MODEL DESCRIPTION

A fully asymmetric timer-controlled token passing mechanism is introduced to model the IEEE 802.4 token passing bus network. It is assumed that the model carries only the highest priority frames. We consider an error-free token passing mechanism with N nodes where each node has a different load condition and a finite buffer. The maximum number of frames served during one token hold is restricted by the THT. The token is passed to the succeeding node when the buffer is empty or the THT expires. The probability distributions of the frame service time and of the switch-over time at each node are not restricted. Frame arrivals follow a Poisson process. Blocking occurs if the queue is full; in that case, the blocked frames are assumed to be lost. As shown in Figure 1, frames arrive independently at each station and wait in the buffer.

Figure 1 : The IEEE 802.4 token passing bus network

Symbols used in this paper are listed as follows:

Q_i : Number of buffers in the i-th node
N : Total number of nodes in the network
THT_i : Token holding time at the i-th node
X_i : Mean service time of a frame at the i-th node
x_{i,j} : Random variable, service time of the j-th frame at the i-th node
t_i : Mean node service time at the i-th node during one token hold
O_i : Mean switch-over time of the token from node i to the next node
λ_i : Mean frame arrival rate at the i-th node
Bp_i : Blocking probability at the i-th node
Mq_i : Mean queue length at the frame arrival instant in the i-th node
T : Mean token rotation time
W_i : Mean waiting time at the i-th node
s_{i,j} : Probability that the queue length of the i-th node at the token arrival instant is equal to j (1 ≤ i ≤ N, 0 ≤ j ≤ Q_i)
e_{i,j} : Probability that the queue length of the i-th node at the token passing instant is equal to j (1 ≤ i ≤ N, 0 ≤ j ≤ Q_i)
p_{i,j} : Probability that j frames are served at the i-th node (1 ≤ i ≤ N, 0 ≤ j ≤ Q_i)
S_i : (Q_i + 1) × 1 queue length distribution matrix of the i-th node at the token arrival instant (1 ≤ i ≤ N), S_i = [s_{i,0} s_{i,1} ... s_{i,Q_i}]^T
E_i : (Q_i + 1) × 1 queue length distribution matrix of the i-th node at the token departure instant (1 ≤ i ≤ N), E_i = [e_{i,0} e_{i,1} ... e_{i,Q_i}]^T

3. MEAN QUEUE LENGTH DISTRIBUTION AND MEAN TOKEN ROTATION TIME

Let TAI_{i,n} and TDI_{i,n} be the token arrival instant and the token departure instant at node i during the n-th cycle of the network, respectively. Figure 2 shows those time instants.

Figure 2 : Token arrival and departure instants

The characteristics of the token passing bus network can be derived by observing the time interval between two successive token arrival instants or two successive token departure instants. It is obvious that the token rotation time and the mean node service time are denoted as

$$T = \sum_{i=1}^{N} (O_i + t_i) \tag{1}$$

and

$$t_i = X_i \sum_{j=1}^{Q_i} j\,p_{i,j}, \tag{2}$$

respectively. But, in order to solve the above equations, additional equations between p_{i,j} and T are needed. In (Kim, et al., 1993), the matrix equations between p_{i,j}, S_i and T are easily derived, since it was assumed that there is no frame arrival during a token hold. In this paper, however, frames arrive during token holds, just as in the standard IEEE 802.4 token passing bus network.
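Equations (1) and (2) can be evaluated directly once the p_{i,j} are known; the sketch below uses invented values for a 3-node network (in the real analysis the p_{i,j} themselves depend on T, which is why the additional equations are needed).

```python
# Illustrative evaluation of (1) and (2) for a 3-node network; the
# switch-over times, service times and p_{i,j} values are invented
# (in a real analysis the p_{i,j} themselves depend on T).

O = [0.5, 0.5, 0.5]              # O_i: mean switch-over times (msec)
X = [3.0, 3.0, 3.0]              # X_i: mean frame service times (msec)
p = [                            # p[i][j]: P(j frames served at node i)
    [0.2, 0.5, 0.3],
    [0.1, 0.6, 0.3],
    [0.4, 0.4, 0.2],
]

# (2): t_i = X_i * sum_j j * p_{i,j}
t = [X[i] * sum(j * p[i][j] for j in range(len(p[i]))) for i in range(3)]
# (1): T = sum_i (O_i + t_i)
T = sum(O[i] + t[i] for i in range(3))
print(t, T)
```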

In this paper, the equations (1) and (2) are derived by using the mean value concept. Since the queue length decreases only when service starts, it is possible to derive those equations without introducing any random variables if we consider all possible cases based on the probability space of the system at each TAI_i. This is realized as follows: first, the queue length distribution at TAI_i is derived; second, p_{i,j} is derived; and finally, we obtain the average queue length distribution E_i from S_i and the service mapping matrix C_i.

Let us consider the system status at the token arrival instant. Suppose that node i has just received the token. At the token arrival instant, TAI_i, the node checks whether the queue is empty. If the queue is not empty, the first waiting frame in the queue is transmitted. After the transmission is completed, the node checks whether THT_i has expired. If THT_i has not expired, the next waiting frame is transmitted. This operation is repeated until THT_i expires or the queue is empty. Since THT_i is checked only at the end of transmissions, the node can hold the token longer than THT_i. Since frames can arrive while the node holds the token, p_{i,j} depends on s_{i,j}, on the probability distribution of the frame service time, and on the frame arrival rate. Let τ_{i,k} be the probability that k frames arrive during t_i at node i. We divide the time interval t_i into M equal sub-intervals, which means that at most M frames can arrive during t_i. If we choose M ≫ Q_i, the probability that j frames are served at the i-th node is

$$
p_{i,j}=\begin{cases}
s_{i,0} & j=0\\[4pt]
\alpha_{i,j-1}\displaystyle\sum_{k=1}^{j}\tau_{i,j-k}\,s_{i,k}
+\beta_{i,j}\displaystyle\sum_{m=j+1}^{Q_i}\sum_{k=1}^{m}\tau_{i,m-k}\,s_{i,k}
+\beta_{i,j}\displaystyle\sum_{m=1}^{Q_i}\sum_{k=Q_i-m+1}^{M}\tau_{i,k}\,s_{i,m} & 1\le j<Q_i\\[4pt]
\alpha_{i,Q_i-1}\displaystyle\sum_{m=1}^{Q_i}\sum_{k=Q_i-m}^{M}\tau_{i,k}\,s_{i,m} & j=Q_i,
\end{cases}\tag{3}
$$

where α_{i,j} and β_{i,j} are defined as

$$\alpha_{i,j} = P[z_{i,j} < THT_i], \qquad \beta_{i,j} = P[z_{i,j} \ge THT_i,\; z_{i,j-1} < THT_i], \tag{4}$$

and z_{i,j} is a random variable defined as

$$z_{i,j} = \sum_{k=1}^{j} x_{i,k}.$$

From (3), the relationship between e_{i,j} and s_{i,j} can be derived. By introducing the (Q_i + 1) × (Q_i + 1) service mapping matrix C_i, the relationship can be written as the matrix equation

$$E_i = C_i S_i, \tag{5}$$

where C_i is defined as (6).

It is obvious from (1) and (2) that the mean time interval during which node i does not hold the token is T − t_i. In the interval T − t_i, frames arrive at rate λ_i and no frames are transmitted. Thus we can calculate the average number of frames arriving during T − t_i from the probability distribution of the frame inter-arrival time. The queue length distribution at the next token arrival instant is given by the queue length distribution at the token departure instant and the arrival mapping matrix Γ_i. Let γ_{i,j} denote the probability that j frames arrive at the i-th node during T − t_i, and let the (Q_i + 1) × (Q_i + 1) arrival mapping matrix Γ_i be as in (7).


$$
\Gamma_i=\begin{bmatrix}
\gamma_{i,0} & 0 & 0 & \cdots & 0\\
\gamma_{i,1} & \gamma_{i,0} & 0 & \cdots & 0\\
\gamma_{i,2} & \gamma_{i,1} & \gamma_{i,0} & \cdots & 0\\
\vdots & \vdots & \vdots & \ddots & \vdots\\
\gamma_{i,Q_i-1} & \gamma_{i,Q_i-2} & \gamma_{i,Q_i-3} & \cdots & 0\\
\sum_{j=Q_i}^{M}\gamma_{i,j} & \sum_{j=Q_i-1}^{M}\gamma_{i,j} & \sum_{j=Q_i-2}^{M}\gamma_{i,j} & \cdots & \sum_{j=0}^{M}\gamma_{i,j}
\end{bmatrix}\tag{7}
$$

[Equation (6): the (Q_i + 1) × (Q_i + 1) service mapping matrix C_i, whose entries are sums of products of the τ_{i,k}, α_{i,k} and β_{i,k} of (3) and (4); the full matrix is omitted here.]

The dimensions of the matrices A_i and B_i are (Q_i + 1) × (Q_i + 1) and (Q_i + 1) × 1, respectively. Since A_i is uniquely determined for a given T and t_i, only one set of S_i's satisfies (13). The equations (1), (2), (3) and (13) can be solved by using the algorithm presented in (Kim, et al., 1993).

Then s_{i,j} is given by

$$S_i = \Gamma_i E_i. \tag{8}$$

Therefore, from the matrix equations (5) and (8), we have

$$S_i = \Gamma_i C_i S_i. \tag{9}$$

By rearranging the equation (9), we have

$$A_i S_i = B_i, \tag{10}$$

where

$$A_i = \Gamma_i C_i - I, \qquad B_i = [0\;0\;\cdots\;0]^T, \tag{11}$$

and I is an identity matrix of size (Q_i + 1) × (Q_i + 1).

The arrival mapping matrix Γ_i has Q_i + 1 independent rows. The rank of C_i, however, is Q_i, since its Q_i-th row is a zero vector. Consequently, the product Γ_i C_i is a singular matrix, because the last row of Γ_i C_i is a dependent row. In order to make the rank of Γ_i C_i equal to Q_i + 1, we replace the last row of Γ_i C_i with

$$s_{i,0} + s_{i,1} + s_{i,2} + s_{i,3} + \cdots + s_{i,Q_i} = 1. \tag{12}$$

Thus, if we define the matrices A_i and B_i as in (14), where $a'_{j,k} = \sum_{r=0}^{j} \gamma_{i,j-r}\,c_{r+1,k+1}$, (10) can be rewritten as

$$A_i S_i = B_i. \tag{13}$$
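The solution step can be sketched in pure Python: form A_i = Γ_i C_i − I, replace its last row with the normalization row of ones from (12), set B_i = [0, ..., 0, 1]^T, and solve the linear system. The 3×3 matrix standing in for Γ_i C_i below is invented (its columns sum to 1, as for a product of probability-mapping matrices).

```python
# Sketch of the solution step (10)-(13): form A = GC - I, replace the
# last row with the normalization row of (12), set b = [0, ..., 0, 1]^T
# and solve. GC is an invented stand-in for Gamma_i C_i.

def solve(A, b):
    # Plain Gauss-Jordan elimination with partial pivoting.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

GC = [[0.5, 0.2, 0.1],
      [0.3, 0.5, 0.4],
      [0.2, 0.3, 0.5]]
n = 3
A = [[GC[r][c] - (1.0 if r == c else 0.0) for c in range(n)] for r in range(n)]
A[n - 1] = [1.0] * n                 # normalization row from (12)
b = [0.0, 0.0, 1.0]
S = solve(A, b)                      # S satisfies S = GC S, sum(S) = 1
print(S)
```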


4. MEAN WAITING TIME AND BLOCKING PROBABILITY


Since frames can arrive at any arbitrary time, the mean waiting time is determined by the queue length at the frame arrival instant. We have calculated the mean token rotation time T and the mean service time t_i using the analysis of the previous section. The mean queue length in the interval t_i may differ from that in the interval T − t_i, because frames are served only in the interval t_i.

Let the queue length distribution at the frame arrival instant in the interval t_i be D̃_i, where D̃_i = [d̃_{i,0} d̃_{i,1} d̃_{i,2} ... d̃_{i,Q_i}]. Let us divide the time interval t_i into M equal sub-intervals and define p_{i,j}(n) as the probability that the queue length at node i is equal to j in the n-th sub-interval. In the n-th sub-interval, the probability that j frames have arrived is

$$\tau^n_{i,j} = \frac{(n\lambda_i t_i)^j}{j!\,M^j}\,\exp\!\left(-\frac{n t_i}{M}\,\lambda_i\right). \tag{15}$$

The probability that j frames have been served by the end of the n-th sub-interval is

$$u^n_{i,j} = \frac{(n\mu_i t_i)^j}{j!\,M^j}\,\exp\!\left(-\frac{n t_i}{M}\,\mu_i\right), \tag{16}$$

where μ_i = 1/X_i. Thus, the probability p_{i,j}(n) is

$$
p_{i,j}(n)=\begin{cases}
s_{i,0} & j=0\\[4pt]
\displaystyle\sum_{r=1}^{j-1}\sum_{k=j-r}^{Q_i-r}s_{i,r}\,\tau^n_{i,k}\,u^n_{i,k+r-j}\,\alpha_{i,k+r-j}
+\sum_{r=j}^{Q_i}\sum_{k=0}^{Q_i-r}s_{i,r}\,\tau^n_{i,k}\,u^n_{i,k+r-j}\,\alpha_{i,k+r-j} & 1\le j<Q_i\\[4pt]
\displaystyle\sum_{r=1}^{Q_i-1}\sum_{k=Q_i-r}^{n}s_{i,r}\,\tau^n_{i,k}\,u^n_{i,k+r-Q_i}\,\alpha_{i,k+r-Q_i}
+\sum_{k=0}^{n}s_{i,Q_i}\,\tau^n_{i,k}\,u^n_{i,k}\,\alpha_{i,k} & j=Q_i,
\end{cases}\tag{17}
$$

where 0 ≤ n ≤ M. Therefore the average queue length distribution during t_i is obtained as

$$\tilde d_{i,j} = \lim_{M\to\infty}\frac{1}{M}\sum_{n=1}^{M}p_{i,j}(n). \tag{18}$$

Consequently,

$$\bar d_{i,j} = \begin{cases}
s_{i,0} & j = 0\\[6pt]
\displaystyle\frac{1}{t_i}\sum_{r=1}^{j-1}\sum_{k=j-r}^{Q_i-r} s_{i,r}\,\frac{(2k+r-j)!\;\lambda_i^{k}\,\mu_i^{k+r-j}\,\alpha_{i,k+r-j}}{k!\,(k+r-j)!\,(\lambda_i+\mu_i)^{2k+r-j+1}}
\left\{1-\sum_{m=0}^{2k+r-j}\frac{\big((\lambda_i+\mu_i)t_i\big)^{m}}{m!}\,e^{-(\lambda_i+\mu_i)t_i}\right\}\\[4pt]
\displaystyle\qquad+\;\frac{1}{t_i}\sum_{r=j}^{Q_i}\sum_{k=0}^{Q_i-r} s_{i,r}\,\frac{(2k+r-j)!\;\lambda_i^{k}\,\mu_i^{k+r-j}\,\alpha_{i,k+r-j}}{k!\,(k+r-j)!\,(\lambda_i+\mu_i)^{2k+r-j+1}}
\left\{1-\sum_{m=0}^{2k+r-j}\frac{\big((\lambda_i+\mu_i)t_i\big)^{m}}{m!}\,e^{-(\lambda_i+\mu_i)t_i}\right\} & 0 < j < Q_i\\[6pt]
\displaystyle 1-\sum_{k=0}^{Q_i-1}\bar d_{i,k} & j = Q_i.
\end{cases} \qquad (19)$$

If the queue is full when a frame arrives, the frame is blocked and discarded. The blocking probability at a frame arrival instant during $t_i$ is

$$\bar\beta p_i = 1 - \sum_{k=0}^{Q_i-1}\bar d_{i,k}. \qquad (20)$$

And the mean queue length during $t_i$ is simply

$$\overline{Mq}_i = \sum_{j=0}^{Q_i} j\,\bar d_{i,j}. \qquad (21)$$

In (Kim, et al., 1993), the average queue length distribution $\bar D_i$ during $T-t_i$ is derived, which is

$$\bar D_{i,j} = \begin{cases}
\displaystyle\frac{1}{\lambda_i(T-t_i)}\sum_{p=0}^{j}\bar d_{i,p}\left\{1-\sum_{r=0}^{j-p}\frac{\big(\lambda_i(T-t_i)\big)^{r}}{r!}\,e^{-\lambda_i(T-t_i)}\right\} & 0 \le j < Q_i\\[6pt]
\displaystyle 1-\sum_{k=0}^{Q_i-1}\bar D_{i,k} & j = Q_i.
\end{cases} \qquad (22)$$

Obviously, the blocking probability $\mathfrak{B}p_i$ and the mean queue length $\mathfrak{M}q_i$ during $T-t_i$ are given by

$$\mathfrak{B}p_i = 1 - \sum_{k=0}^{Q_i-1}\bar D_{i,k} \qquad (23)$$

and

$$\mathfrak{M}q_i = \sum_{j=0}^{Q_i} j\,\bar D_{i,j}, \qquad (24)$$

respectively. Since frames can arrive whether the node has the token or not, we obtain the mean queue length $Mq_i$ and the blocking probability $Bp_i$ in equilibrium by using (20), (21), (23), and (24):

$$Mq_i = \overline{Mq}_i\,\frac{t_i}{T} + \mathfrak{M}q_i\,\frac{T-t_i}{T}, \qquad (25)$$

$$Bp_i = \bar\beta p_i\,\frac{t_i}{T} + \mathfrak{B}p_i\,\frac{T-t_i}{T}. \qquad (26)$$

Therefore, from (25) and (26), the mean waiting time can easily be obtained using Little's law:

$$W_i = \frac{Mq_i}{\lambda_i\,(1 - Bp_i)}. \qquad (27)$$
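The pipeline (20)–(27) reduces to elementary sums once the two queue length distributions are known. A small sketch, with illustrative (not measured) distributions for a 3-buffer queue:

```python
def mix(x_token, x_rest, t_i, T):
    # Eqs. (25)/(26): time-average over the token-holding interval t_i
    # and the remainder T - t_i of the token rotation
    return x_token * t_i / T + x_rest * (T - t_i) / T

def mean_waiting_time(d_bar, D_bar, t_i, T, lam):
    # d_bar, D_bar: queue length distributions during t_i and T - t_i
    Q = len(d_bar) - 1
    beta_p = 1.0 - sum(d_bar[:Q])                       # Eq. (20)
    frak_Bp = 1.0 - sum(D_bar[:Q])                      # Eq. (23)
    Mq_bar = sum(j * d_bar[j] for j in range(Q + 1))    # Eq. (21)
    frak_Mq = sum(j * D_bar[j] for j in range(Q + 1))   # Eq. (24)
    Mq = mix(Mq_bar, frak_Mq, t_i, T)                   # Eq. (25)
    Bp = mix(beta_p, frak_Bp, t_i, T)                   # Eq. (26)
    return Mq / (lam * (1.0 - Bp))                      # Eq. (27), Little's law

# illustrative distributions for Q_i = 3 buffers
d = [0.5, 0.3, 0.15, 0.05]
D = [0.4, 0.3, 0.2, 0.1]
W = mean_waiting_time(d, D, t_i=0.003, T=0.015, lam=100.0)
print(round(W, 5))
```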

5. NUMERICAL RESULTS

This section compares the analytic results of this paper with simulation results and with the analytic results of (Kim, et al., 1993). The example is chosen to show the difference between the real behavior of the token bus network and the approach in (Kim, et al., 1993) and, at the same time, to show that the analytic equations presented in this paper are quite valid. The example model has 5 nodes. The THT is set to 3 msec. Each node has 3 buffers. The frame service time follows an exponential distribution, and the mean service time of a single frame is 3 msec. The frame arrival process is Poisson. We vary the mean arrival rate of frames from 10 to 300 frames per second.

Figure 3 shows that both the approach presented in this paper and that of (Kim, et al., 1993) result in good approximations. Since it was assumed in (Kim, et al., 1993) that no frames arrive when


the node has the token, the mean waiting time calculated from the equations in (Kim, et al., 1993) has errors of up to 5 milliseconds, as shown in Figure 4. As shown in Figure 5, the blocking probability grows and the errors of both analyses diminish as the arrival rate increases, because the probability that the queue is full when a frame arrives grows.

Figure 3: Mean token rotation time vs. arrival rate (simulation, analysis of (Kim, et al., 1993), and suggested analysis).

Figure 4: Mean waiting time vs. arrival rate.

Figure 5: Blocking probability vs. arrival rate.

6. CONCLUSION

In this paper, a performance analysis of the IEEE 802.4 token passing bus network with finite buffers, a single access class, and asymmetric loads is presented. We derived the queue length distribution, the mean waiting time, and the mean token rotation time. It is possible to evaluate various load conditions using our model, since the effect of a frame arrival is represented as a single matrix. Therefore, our model is more suitable for the analysis of working IEEE 802.4 token passing bus networks than existing studies. The analysis requires only a few seconds to calculate the mean values of queue length, token rotation time, and waiting time. It is an approximate analysis, but a very accurate one, because the effect of finite-size buffers is considered. Based


on the analysis in this paper, the optimal buffer size of working industrial devices can be determined. Future work in this area involves incorporating the priority mechanism to obtain a more realistic performance evaluation of the IEEE 802.4 token passing bus network.

REFERENCES

Takagi, H. (1985). On the Analysis of a Symmetric Polling System with Single-Message Buffers. Performance Evaluation, pp. 149-157.

IEEE (1985). Token Passing Bus Access Method and Physical Layer Specification, ANSI/IEEE Standard 802.4, IEEE, Inc.

MAP/TOP Users Group (1993). Manufacturing Automation Protocol 3.0, MAP/TOP Users Group.

Bhuyan, L. N., D. Ghosal, and Q. Yang (1989). Approximate Analysis of Single and Multiple Ring Networks, IEEE Trans. Comput., Vol. 38, No. 7, pp. 1027-1040.

Takine, T., Y. Takahashi, and T. Hasegawa (1986). Performance Analysis of a Polling System with Single Buffers and Its Application to Interconnected Networks, IEEE J. Select. Areas Commun., Vol. SAC-4, No. 8, pp. 802-812.

Takagi, H. (1991). Analysis of Finite-Capacity Polling Systems, Adv. Appl. Prob., Vol. 23, pp. 373-387.

Jung, W. Y. (1991). Analysis of Finite Capacity Polling Systems Based on Virtual Buffering and Lumped Modeling, Ph.D. Dissertation, Korea Advanced Institute of Science and Technology.

Rego, V. and L. M. Ni (1988). Analytic Models of Cyclic Service Systems and Their Application to Token-Passing Local Networks, IEEE Trans. Comput., Vol. 37, No. 10, pp. 1224-1234.

Colvin, M. A. and A. C. Weaver (1986). Performance of Single Access Classes on the IEEE 802.4 Token Bus, IEEE Trans. Commun., Vol. COM-34, No. 12, pp. 1253-1256.

Jayasumana, A. P. (1988). Comments on 'Performance of Single Access Classes on the IEEE 802.4 Token Bus', IEEE Trans. Commun., Vol. 36, No. 2, pp. 224-225.

Kim, D. W., H. S. Park and W. H. Kwon (1993). The Performance of a Timer-Controlled Token Passing Mechanism with Finite Buffers in an Industrial Communication Network, IEEE Trans. Ind. Electron., Vol. 40, No. 4, pp. 421-427.


Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

HOW TO SCHEDULE PERIODIC AND SPORADIC TASKS WITH RESOURCE CONSTRAINTS IN A REAL-TIME COMPUTER SYSTEM

Maryline Silly

Ecole Centrale de Nantes / Université de Nantes
LAN / U.A. CNRS n° 823

1 rue de la Noë, 44072 Nantes Cedex 03, France
e-mail: [email protected]

Abstract: We study the problem of scheduling periodic and sporadic tasks on the nodes of a distributed system. A periodic task consists of an infinite number of requests, each of which has a prescribed deadline. Each node is a monoprocessor machine, and we assume that the existence of enough processing power for all periodic tasks is verified at system initialization time. Periodic tasks may interact with each other by sharing critical resources. The Dynamic Priority Ceiling Protocol is used to guarantee that each task completes execution before its deadline and that no resource is ever accessed by more than one task simultaneously. In addition, we allow for the unpredictable occurrence of sporadic tasks. We develop a local scheduling scheme that permits us, first, to test on-line whether or not an occurring hard sporadic task can be guaranteed to meet its timing requirements and, second, to schedule soft sporadic tasks with a minimal response time.

Key-words: Process control, Scheduling algorithm, Periodic tasks, Deadlines, Resource allocation.

1. INTRODUCTION

Distributed computer control systems are applied to small and large processes in various fields. Their essential function is to allow predictable control of a process and to acquire data concerning this process. The application software supported by the distributed system is composed of a collection of tasks which function together in order to realize a common objective. The application domains of distributed computer control systems are various and include flight control, nuclear power plants, manufacturing systems, etc. The reasons why distributed computer architectures are well suited to process control are the following:
- when the application is large and complex, they make it possible to decompose the process control problem into manageable entities and to handle the real-time activities on different computers, avoiding system overload situations;
- the natural characteristic of fault isolation enables a distributed system to continue normal operation even if a failure


occurred in one part of the system;
- finally, a fundamental benefit of a distributed system is its extensibility, i.e. the ability to add control functionalities without system impact.
In this paper, we are concerned with a distributed computer architecture that supports hard real-time software, i.e. software in which tasks have deadlines that must be met, since otherwise there might be severe consequences. It follows that the major goal in designing a real-time distributed computer system is to ensure adherence to all timing constraints during execution in order to avoid a system failure. Here, the tasks of interest run on computers that are connected by a communication network. Each computer, built around a single processor, is called a node and has its own private memory which contains a copy of the operating system and a part of the application software, the latter being initially distributed over all the nodes. Generally, a node is responsible for controlling a set of devices and consequently works in their feedback loop. It derives its inputs from sensors and the operator, and its outputs are sent to control


actuators and used to update displays. Because the control function dedicated to each node is well defined, so is the set of tasks that the node has to run.

Real-time tasks are either periodic or non-periodic. A periodic task is invoked at fixed intervals and constitutes the base load of the system. Its attributes, such as the required resources, the execution time, and the invocation period, are usually known in advance. A sporadic task consists of a computation that responds to internal or external events. A soft sporadic task profits from being executed as soon as possible after its arrival, while a hard sporadic task needs to meet its deadline. For example, soft sporadic tasks represent application services in response to operator requests, such as maintenance and bookkeeping, and hard sporadic tasks are invoked in abnormal or critical situations, such as a perturbation in the control object. One can ensure the correct execution of periodic

tasks by allocating them to the processing nodes of the distributed system statically, i.e. during the implementation phase. In contrast, the allocation of sporadic tasks is done dynamically during system operation by distributing them as they arrive, which allows more efficient use of system resources. Distributed scheduling has two basic functions: sharing tasks among the nodes and locally scheduling tasks on each individual node. The local scheduling policy is dedicated, on the one hand, to sequencing the periodic tasks initially assigned to the node and, on the other hand, to deciding in a time-efficient manner which hard sporadic tasks to accept and which to reject.

In this paper, a local scheduling strategy will be described. Tasks are assumed to be preemptable and may interact with each other by sharing resources. The Dynamic Priority Ceiling Protocol will be used to handle shared resources (Chen, Lin 1990). While this protocol has been well described using a model of periodic tasks, it now appears of primary importance to study it with respect to a mixed set of periodic and sporadic tasks. The paper is organized as follows: in section 2, a state of the art relative to scheduling with timing and resource constraints is given. In section 3, the scheduling approach is presented, and in section 4, we describe its implementation. Finally, section 5 concludes the paper with a brief summary.

2. BACKGROUND

2.1 Scheduling periodic tasks

In a single-processor system in which all tasks are independent, periodic, and preemptable at any time, it was shown that the Deadline Monotonic (DM) algorithm is optimal among all fixed priority scheduling algorithms (Audsley, Burns, Wellings 1991) and the Earliest Deadline (ED) algorithm is optimal among all dynamic priority scheduling algorithms (Liu, Layland 1973). These algorithms are optimal in the sense that they can always


produce a feasible schedule if any other algorithm of the same class is able to do so. According to DM, a higher priority is assigned to a task with a shorter critical delay. Since tasks are defined with fixed critical delays, their priorities are fixed. Systems using ED execute the request with the earliest deadline. Since different requests of the same task have different deadlines, the task has a dynamic priority from request to request. In what follows, a dynamic priority scheme is opted for. Such a scheme is generally presented as less suitable than a static priority scheme, whose implementation is simple and involves little overhead. However, real-time systems are highly evolving and

require a flexible and predictable scheduler able to cope with dynamic changes in processor workload. From this point of view, the ED algorithm brings efficient answers to a large number of questions arising in next generation systems, including on-line acceptance of sporadic tasks, timing fault-tolerance, etc.
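As a concrete illustration of the ED (earliest deadline first) rule discussed above, the following is a toy preemptive feasibility check over unit time steps. It is our own sketch, not an algorithm from the cited papers: at every tick, the pending request with the earliest absolute deadline runs.

```python
import heapq

def edf_feasible(jobs, horizon):
    """Preemptive EDF on one processor, simulated in unit time steps.
    jobs: list of (release, wcet, absolute_deadline). Returns True iff
    every job completes by its deadline within the horizon."""
    pending = sorted(jobs)            # by release time
    ready = []                        # min-heap of [deadline, remaining work]
    i = 0
    for t in range(horizon):
        while i < len(pending) and pending[i][0] <= t:
            _, c, d = pending[i]
            heapq.heappush(ready, [d, c])
            i += 1
        if ready:
            ready[0][1] -= 1          # run the earliest-deadline job one tick
            if ready[0][1] == 0:
                d, _ = heapq.heappop(ready)
                if t + 1 > d:
                    return False      # completed after its deadline
            elif ready[0][0] <= t + 1:
                return False          # deadline reached with work left
    return not ready and i == len(pending)

print(edf_feasible([(0, 2, 4), (1, 1, 3), (2, 2, 8)], horizon=10))
```

The second job preempts the first at t = 1 because its deadline is earlier; this dynamic reordering from request to request is exactly what a fixed-priority scheme cannot do.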

2.2 Scheduling hard sporadic tasks

The problem of jointly scheduling both hard periodic tasks and hard sporadic tasks in dynamic priority systems has been considered in (Chetto, Chetto 1989) and (Schwan, Zhou 1992). Periodic tasks are scheduled according to the ED algorithm. Proved to be optimal for sporadic tasks as well (Dertouzos 1974), ED appears to be very appropriate for systems that support a mixture of hard deadline tasks. The algorithm proposed by Chetto and Chetto was designed to achieve high flexibility, which lies in the ability to have precise knowledge of the maximum slack time that can be recovered at any time instant and then dedicated to sporadic tasks. Initially, the critical delays were assumed to be equal to the periods, and at any time there was only one ready sporadic task requiring to be run. The algorithm presented in (Silly, Chetto, Elyounsi 1990) relaxes these assumptions and is still optimal in the sense that any occurring sporadic task is accepted if and only if all periodic tasks and previously accepted tasks meet their deadlines. The paper of Schwan and Zhou (Schwan, Zhou 1992) also developed a dynamic scheduling algorithm for periodic and sporadic tasks, based upon ED. Its main advantage is its efficient implementation, since its worst case complexity is O(n log n). Efficiency is attained thanks to an adequate representation of scheduling information in the form of a data structure termed a slot list, which records the time periods at which tasks have been scheduled. Whenever a new task arrives, a feasibility test is performed by considering only the subset of tasks whose scheduling intervals conflict with the scheduling interval of the newly arriving task.


2.3 Scheduling soft sporadic tasks

The problem of jointly scheduling both hard and soft deadline tasks has been an active research area in the last few years. Most approaches extend the Deadline Monotonic algorithm. The simplest approach consists in relegating soft tasks to background processing by executing them at a lower priority level than any hard periodic task. In another approach, known as Polling, the capacity of a periodic task called a server is used to service sporadic tasks. This presupposes computing off-line the capacity of this server such that the set of hard periodic tasks is schedulable. While the Polling approach proves to be superior to Background, its main disadvantage lies in the fact that the soft sporadic tasks ready at a given time may exceed the capacity of the server, because the server is not necessarily coordinated with the arrival process; this leads to long response times, since some sporadic tasks must wait for the next release of the server to be executed. Other approaches, termed Bandwidth Preserving, have been developed and do not suffer from these disadvantages. The Priority Exchange, Deferrable Server, and Sporadic Server also give preferential treatment to periodic tasks over sporadic tasks, but allow capacity to be preserved throughout the server's period and not only at the beginning. While Bandwidth Preserving methods lead to shorter response times than Polling and Background at low and medium loads, they degrade to the same performance as Polling at high loads. Furthermore, since they are based upon the worst case execution times of periodic tasks, they do not permit reclaiming spare capacity when the effective execution time is less than the worst case execution time. Nevertheless, this drawback was avoided by the Extended Priority Exchange algorithm.

Because the Bandwidth Preserving methods appear to be time consuming and lack the flexibility to profit from spare capacity, a new algorithm called Slack Stealing was more recently developed by Lehoczky and Ramos-Thuel (Lehoczky, Ramos-Thuel 1992). It was proved to be optimal in the sense that it minimizes the response time of soft sporadic tasks among all static priority algorithms which meet the deadlines of hard periodic tasks. The Slack Stealing algorithm consists in making any spare processing time available as soon as possible by stealing slack from the periodic tasks. Determination of the slack available at any time instant is possible because the processor schedule is mapped out off-line and then inspected at run-time. This algorithm suffers from the need to map out the hyperperiod, equal to the least common multiple of the task periods, which may be very long if tasks are not strictly periodic. Moreover, tasks are assumed to be independent, i.e. to have no synchronization constraints.

A variation of this algorithm, termed Dynamic Slack Stealing, was proposed by Davis et al. (Davis, Tindell, Burns 1993) to deal with a more


general task model. Proved to be optimal, the Dynamic Slack Stealing algorithm computes the slack at run-time and consequently adapts easily to an extended class of dynamic real-time systems where sporadic tasks may be hard and tasks may exhibit synchronisation. Further, by exploiting run-time information about periodic task execution requirements, it permits reclaiming gain time and improving the response times of soft tasks. However, the infeasibility of this algorithm, due to its prohibitive execution time overheads, has led to the development of approximate algorithms that nevertheless provide close to optimal performance.

2.4 Scheduling with resource constraints

In a context of resource sharing, each task is composed of a set of modules which are executed serially and may be critical sections, that is, have a mutual exclusion requirement. The addition of critical sections makes the scheduling problem NP-hard, so a solution can only be obtained in polynomial time by a sub-optimal heuristic. In such a context, ED is no longer optimal. Moreover, when it is used, one must cope with a specific problem called priority inversion, which occurs when a high priority task is forced to wait for the execution of many lower priority tasks for an indefinite length of time. This kind of waiting is called blocking. One way to limit the priority inversion problem consists in using specific resource access control protocols which coordinate the access to shared resources. The Dynamic Priority Ceiling Protocol (DPCP) has been proposed to enhance ED (Chen, Lin 1990). DPCP is especially dedicated to systems with a dynamic priority scheme. A priority ceiling is defined for every critical section, and its value corresponds to the priority of the highest priority task which uses or will use the resource. This protocol consists of two mechanisms, respectively termed priority inheritance and priority ceiling. In the priority inheritance mechanism, a low priority task T in a critical section temporarily inherits the priority of the highest priority task currently waiting for T to leave the critical section. In the priority ceiling mechanism, a task T is allowed to enter a critical section only if its priority is higher than the priority ceilings of all critical sections currently being used by any other task. Like other ceiling-based protocols, DPCP prevents deadlock and ensures no more than one blocking for any task. DPCP has been extensively studied during the last few years, but results mainly concern periodic tasks and new schedulability conditions.
In what follows, a new acceptance test for sporadic tasks will be proposed, assuming that tasks are scheduled according to DPCP.
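The two DPCP mechanisms can be illustrated in a few lines. This is our own simplified sketch (names and structure are not from the protocol's definition; in particular, it compares a task against the ceilings of all locked sections rather than only those locked by other tasks); priorities are deadlines, an earlier deadline meaning a higher priority.

```python
class DpcpSketch:
    """Toy illustration of the two DPCP rules (simplified, our naming)."""

    def __init__(self):
        self.locked = {}   # semaphore name -> (holder task, ceiling deadline)

    def can_enter(self, task_deadline):
        # Priority-ceiling rule: a task may enter a critical section only if
        # its priority is higher than the ceilings of all locked sections.
        return all(task_deadline < ceiling
                   for _, ceiling in self.locked.values())

    def effective_deadline(self, holder_deadline, waiter_deadlines):
        # Priority-inheritance rule: a task inside a critical section runs at
        # the highest priority among itself and the tasks it blocks.
        return min([holder_deadline] + waiter_deadlines)

d = DpcpSketch()
d.locked["s1"] = ("T2", 50)           # T2 holds s1, ceiling deadline 50
print(d.can_enter(40), d.can_enter(60))
print(d.effective_deadline(50, [30, 70]))
```

A task with deadline 40 (higher priority than the ceiling 50) may enter; a task with deadline 60 is blocked, and the holder then inherits the earliest waiting deadline.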

3. LOCAL SCHEDULING

Suppose that periodic tasks may lock or unlock semaphores according to DPCP and the task set of


interest is schedulable. In other terms, all the requests can meet their deadlines. First, assume that there are no sporadic tasks and that the periodic tasks are executed as soon as possible. The processor activity can be simply identified by the list of its busy periods within a well-known bounded time interval called the hyperperiod. Indeed, it was proved that the schedule produced on a periodic task set is repetitive, with repetition period equal to the least common multiple of the tasks' periods (Leung, Merrill 1980).
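The hyperperiod is just the least common multiple of the task periods, which can be computed directly:

```python
from functools import reduce
from math import gcd

def hyperperiod(periods):
    # the schedule of a periodic task set repeats with period lcm(T_1, ..., T_n)
    return reduce(lambda a, b: a * b // gcd(a, b), periods)

print(hyperperiod([4, 6, 10]))  # -> 60
```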

3.1 Acceptance of hard sporadic tasks

Now, assume that some sporadic tasks have been accepted on the machine. After their arrival and acceptance, these tasks are jointly scheduled together with the periodic tasks according to DPCP. Whenever a new sporadic task arrives, the acceptance routine must determine whether it can be accepted. The approach proposed in order to solve this problem is based on the exact computation of the largest amount of slack time available, as in the sporadic scheduling algorithm developed in the case of independent periodic tasks (Silly 1994). This computation is performed on-line and uses the current state of the processor workload. More precisely, it was proved that executing periodic tasks according to the Earliest Deadline as Late as possible (EDL) algorithm results in a feasible schedule where the maximum processor idle time is made available as soon as possible. It follows that determining the maximum processing time which can be recovered from any current time, without injuring the timing requirements of periodic tasks, amounts to simulating the schedule produced by EDL on the current set of periodic requests up to the end of the hyperperiod. The localization and duration of the busy periods in the resulting schedule (called the EDL schedule) enable us to provide an acceptance test by comparing these data with the timing requirements of the sporadic tasks.

In this section, a similar approach is developed by taking into account the additional constraints due to resource accesses. Let t be the current time. Let S be the set of semaphores currently accessed at t. Each semaphore s_j in S is characterized by the deadline of the request that locked it, d_j, the maximum amount of processor time required to unlock it, B_j(t), and the ceiling deadline of s_j, c_j, which corresponds to the deadline of the highest priority task that will access s_j. At time t, the dynamic workload imposed on the machine by periodic tasks results from the set of requests, denoted by T, which require to be processed from t up to the end of the hyperperiod. Among these requests, one can distinguish those that will become ready for execution after time t and those that have already started their execution.

The determination of the largest amount of processing that can be done during any interval [t, t'], t < t', denoted by Q(t, t'), is obtained by:


- forming a fictitious periodic task set, say T', from T by means of the following modification: for each semaphore s_j in S, subtract B_j(t) from the execution time of any periodic request of T with a deadline equal to d_j, and add B_j(t) to the execution time of any periodic request with a deadline equal to c_j;

- and applying the EDL scheduling algorithm to T' from t up to the end of the hyperperiod, thus determining the localization and duration of the idle time periods in the resulting schedule, called the EDLm schedule. Computation of the EDLm schedule permits us to identify the latest start times of the periodic tasks while guaranteeing their timing requirements, and consequently enables us to determine, at any current time t and for any future time t', the maximum processing time Q(t, t') which can be recovered in order to process additional tasks. Once constructed, the EDLm schedule is memorized as the list of its idle time periods, each of them characterized by its start time and finish time. Now, let us consider the system at a time t corresponding to the arrival of a new hard sporadic task. At t, the set of sporadic tasks present on the machine is described by a set of couples (C_i, d_i), where C_i and d_i respectively denote the remaining execution time and the deadline of the sporadic task Th_i. Let Th(C, d) be the newly occurring task. The problem of testing whether Th can be accepted for execution amounts to verifying whether all sporadic tasks with a deadline greater than that of Th meet their deadlines with respect to the EDLm schedule produced on the periodic task set T. Let us denote by W(t, t') the sum of the execution times of all sporadic tasks with a deadline less than or equal to t'. Then, the following condition guarantees the acceptance of task Th:

for every i such that d_i ≥ d,   W(t, d_i) ≤ Q(t, d_i).   (1)

This test runs in O(N + m), where N and m respectively represent the maximum number of periodic requests within the hyperperiod and the maximum number of sporadic tasks simultaneously present on the machine.
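A compact sketch of test (1), under our own representation of the EDLm schedule as a list of idle intervals (all names are ours; the slack function plays the role of Q(t, t') and the cumulative demand that of W(t, t')):

```python
def accept_sporadic(t, idle_periods, accepted, C, d):
    """Sketch of acceptance test (1).

    idle_periods: EDLm idle intervals [(start, end), ...] from time t on,
    i.e. the slack left when periodic tasks run as late as possible.
    accepted: already-accepted hard sporadic tasks as (remaining C_i, d_i)."""
    def Q(t_prime):
        # recoverable processing time in [t, t']: idle time before t'
        return sum(max(0.0, min(end, t_prime) - max(start, t))
                   for start, end in idle_periods)

    tasks = accepted + [(C, d)]
    for _, di in tasks:
        if di >= d:
            # all sporadic work due by d_i must fit within the slack Q(t, d_i)
            demand = sum(c for c, dj in tasks if dj <= di)
            if demand > Q(di):
                return False
    return True

print(accept_sporadic(0.0, [(0, 5), (8, 12)], [(2, 4)], C=3, d=10))
print(accept_sporadic(0.0, [(0, 5), (8, 12)], [(2, 4)], C=6, d=10))
```

With 7 units of slack before t' = 10, a new task needing 3 units fits alongside the 2 already accepted, while one needing 6 is rejected.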

3.2 Minimizing response times of soft sporadic tasks

Here, we assume that soft sporadic tasks have the same priority and consequently, the arrival time will be used to break the competition tie on First Come First Serve (FCFS) basis. Each sporadic task is specified by a processing requirement which corresponds to the maximum time required for its complete execution. In what follows, we will say that a scheduling algorithm is optimal if, for any sporadic arrival stream processed in FCFS order and for any periodic task set scheduled according to DPCP, the response time of every sporadic task is minimized.


At the current time t, the set of soft sporadic tasks is characterized by a set of values C_i that denote the remaining execution times of the tasks Ts_i. We assume that i < j implies that Ts_i arrived before Ts_j. From what precedes, assuming that there are no hard sporadic tasks, it follows that the earliest completion time of Ts_i is the time instant f_i that satisfies

Q(t, f_i) = Σ_{j ≤ i} C_j .   (2)
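Equation (2) says the i-th FCFS soft task completes at the instant where the idle time accumulated since t first equals C_1 + ... + C_i. A sketch under the same idle-interval representation of the EDLm schedule used above (variable names are ours):

```python
def fcfs_completion_times(t, idle_periods, remaining):
    """Eq. (2) sketch: earliest completion time f_i of each soft sporadic
    task served FCFS inside the EDLm idle periods [(start, end), ...]."""
    targets, need = [], 0.0
    for c in remaining:
        need += c
        targets.append(need)          # cumulative demand C_1 + ... + C_i
    times, acc, k = [], 0.0, 0
    for start, end in idle_periods:
        s = max(start, t)
        length = max(0.0, end - s)
        while k < len(targets) and targets[k] <= acc + length:
            times.append(s + (targets[k] - acc))   # f_i falls in this idle period
            k += 1
        acc += length
    return times

print(fcfs_completion_times(0.0, [(0, 2), (5, 8)], [1.0, 3.0]))
```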

The scheduling scheme consists in executing periodic tasks as soon as possible according to DPCP as long as no soft sporadic task requires to be executed. Whenever at least one such task arrives, the dynamic idle time vector is computed so as to determine the start times and finish times of the EDLm busy periods, for executing periodic tasks as late as possible. This computation is only required whenever a new sporadic task occurs while no other one was present, since the dynamic idle time vector does not change as long as periodic tasks are executed in the EDLm busy periods. The optimality of this scheduling strategy lies in the fact that it maximizes the processing power of the machine made available from the moment at least one sporadic task arrives until all sporadic tasks complete their execution. We note that the time complexity of the strategy is comparable with that of any priority driven algorithm, since the computation of the dynamic idle time vector proves necessary only at the particular time instants described above, which means that the overhead decreases with increasing system load. Now, let us assume that both soft sporadic tasks and hard sporadic tasks require to be run on the machine. We are concerned with the problem of executing periodic tasks and hard sporadic tasks as late as possible, which amounts to determining the start times and finish times of the busy periods for these tasks. Let t be the time instant corresponding to the occurrence of a new hard sporadic task, and suppose that at least one soft sporadic task is pending for execution. Let us denote by δ_0 the length of the first idle time period in the EDLm schedule. In other terms, t + δ_0 is the latest start time of the next periodic task to execute. Let δ_j be the laxity of the hard sporadic task Th_j. Let δ = min {δ_j, j ≥ 0}. It is clear that the start time of the next busy period for hard deadline tasks will be given by t + δ.
Its finish time will be either the earliest deadline followed by an idle time period among those of the periodic requests, if δ = δ_0, or the deadline of the hard sporadic task Th_j, if δ = δ_j.

4. OUTLINE OF THE SCHEDULING SCHEME

4.1 The local scheduler

Our algorithm, called SCHEDULER, is shown in Fig. 1. Basically, SCHEDULER performs the Dynamic Priority Ceiling Protocol for scheduling


periodic tasks and hard sporadic tasks and uses the FCFS rule for scheduling the soft sporadic tasks. Whenever a soft sporadic task occurs and the list of soft sporadic tasks was empty, SCHEDULER computes the EDLm schedule, identifies the busy periods for hard sporadic tasks and periodic tasks and executes soft sporadic tasks in the remaining idle time periods.

Algorithm SCHEDULER (LPT: per_task_set_type)
(* returns a feasible schedule for periodic and hard sporadic tasks where response times of soft sporadic tasks are minimized *)
begin
  CALCUL (LPT, var D, var K, var P);
  OC := false;
  repeat
    t := 0; INIT (K, D, var Kt, var Dt);
    while (t < P) do
      while (Ls = {} and t < P) do
        SCH_DPCP (Lh); t := t + 1;
      end while
      UPDATE (Lh, St, var Kt, var Dt);
      while (Ls ≠ {} and t < P) do
        BUSY_PER (Dt, δ, var s, var f);
        while (t < s and Ls ≠ {} and OC = false) do
          SCH_FCFS (Ls)
        end while
        while (t ≥ s and t < f and OC = false) do
          SCH_DPCP (Lh)
        end while
      end while
    end while
  until false
end.

Fig. 1. The local scheduler

Algorithm SCHEDULER uses the following procedures:

- CALCUL initially computes the static idle time vector (D) and the static deadline vector (K) from the list of periodic tasks (LPT) that maintains their static attributes.

- INIT is invoked at the beginning of every window with length equal to the least common multiple of the task periods (P). It reinitializes the dynamic deadline vector (Dt) and the dynamic idle time vector (Kt) with D and K respectively, and reinitializes the current time (t) to 0.

- SCH_DPCP selects a task for execution according to DPCP in the list of hard deadline tasks (Lh), updates the dynamic attributes of the tasks in Lh, and updates the list of currently locked semaphores (St). Lh gathers the hard sporadic tasks and the current requests of periodic tasks.

- SCH_FCFS selects a task for execution in the list of soft sporadic tasks waiting for execution (Ls) according to FCFS and updates Ls.

- UPDATE is invoked whenever a soft sporadic task occurs while Ls was empty. It updates Kt and Dt in order to schedule periodic tasks in the EDLm busy periods.


- BUSY_PER returns the start time (s) and the finish time (f) of the next busy period for the hard deadline tasks. Thus, BUSY_PER is invoked, as long as list Ls is not empty, at the end of every busy period and each time a new hard sporadic task has been accepted, since this may modify the laxity of the system. When such a situation happens, we assume that the acceptance routine, possibly executed on a co-processor, has updated the minimal laxity (δ) among those of the hard sporadic tasks and has set a boolean variable (OC) to TRUE, which notifies the scheduler of the acceptance of a new hard sporadic task.

4.2 The guarantee routine

Whenever a new hard sporadic task occurs and requires to be processed, the acceptance routine described in Fig. 2 is invoked and performs the decision test. If accepted, the sporadic task is inserted in the list of tasks pending for execution. If rejected, the task may be considered by another module, generally called the bidder, in charge of sending it to a machine whose processing power is sufficient to guarantee it a feasible execution.

Algorithm ACCEPTANCE (Th: spo_task_type)
(* tests the acceptance of a new occurring task Th with execution time C and deadline d *)
begin
    INSERT (C, d, var Lh);
    UPDATE (Lh, St, var Kt, var Dt);
    if TEST (Kt, Lh) = TRUE
        then OC := TRUE
        else REMOVE (C, d, var Lh)
    end if
end.

Fig. 2. The guarantee routine

Algorithm ACCEPTANCE uses the following procedures:

- INSERT temporarily inserts the occurring task in the list of hard deadline tasks pending for execution.

- TEST verifies inequality (1) to test whether the occurring task can be accepted for execution. If the test is positive, the global variable OC becomes TRUE and will indicate to SCHEDULER that a new task has been accepted.

- If the task is rejected, its descriptor is removed from the list of tasks (Lh) by procedure REMOVE.
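A minimal sketch of the guarantee routine's shape, under a strong simplification: instead of inequality (1) on the idle-time vector Kt, the TEST step here is a plain processor-demand check (the total work due before each deadline must fit), which is our assumption, not the paper's actual test. The INSERT/TEST/REMOVE structure mirrors Fig. 2.

```python
def acceptance(Lh, C, d, t):
    """Try to accept a hard sporadic task with execution time C, deadline d,
    at current time t. Lh is the list of pending hard tasks as (C, d) pairs."""
    Lh.append((C, d))                     # INSERT (tentative)
    Lh.sort(key=lambda task: task[1])     # order by absolute deadline
    work = 0
    for Ci, di in Lh:                     # TEST: demand before each deadline
        work += Ci
        if t + work > di:
            Lh.remove((C, d))             # REMOVE on rejection
            return False                  # the task would go to the bidder
    return True                           # OC := TRUE in the scheduler

Lh = [(2, 10)]
print(acceptance(Lh, 3, 6, 0))   # accepted: 3 units fit before deadline 6
print(acceptance(Lh, 5, 7, 0))   # rejected: 3 + 5 units cannot fit before 7
```

On rejection the tentative descriptor is removed, so the pending list is left exactly as it was, which is the property the real REMOVE procedure must also guarantee.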

5. SUMMARY

We have proposed an approach to verify whether a real-time system can meet all its timing requirements. The real-time software was described by a set of periodic tasks initially assigned to a node of a distributed system, together with sporadic tasks that occur at unpredictable times and require to be run on this node. The scheduling scheme uses the Dynamic Priority Ceiling Protocol, which enhances Earliest Deadline in a context of resource sharing. An algorithm was developed which permits, first, testing the acceptance of hard sporadic tasks and executing them together with the periodic tasks and, second, executing soft sporadic tasks as soon as possible so as to minimize their response times.

REFERENCES

N.C. Audsley, A. Burns, M.F. Richardson and A.J. Wellings. Hard Real-Time Scheduling: The Deadline Monotonic Approach. Proc. of 8th IEEE Workshop on Real-Time Operating Systems and Software, Atlanta, May 1991.

M.I. Chen and K.J. Lin. Dynamic Priority Ceilings: A Concurrency Control Protocol for Real-Time Systems. Real-Time Systems Journal, 2(4), pages 325-346, Dec. 1990.

H. Chetto and M. Chetto. Some results of the earliest deadline scheduling algorithm. IEEE Trans. on SW Eng., 15(10), pages 1261-1269, 1989.

H. Chetto, M. Silly and T. Bouchentouf. Dynamic Scheduling of Real-Time Tasks under Precedence Constraints. Real-Time Systems Journal, 2, pages 181-194, 1990.

R.I. Davis, K.W. Tindell and A. Burns. Scheduling Slack Time in Fixed Priority Pre-emptive Systems. Proc. of IEEE Real-Time Systems Symp., pages 222-231, Dec. 1993.

J.P. Lehoczky and S. Ramos-Thuel. An Optimal Algorithm for Scheduling Soft-Aperiodic Tasks in Fixed-Priority Preemptive Systems. Proc. of IEEE Real-Time Systems Symp., pages 110-123, Dec. 1992.

J.Y.T. Leung and M.L. Merril. A note on preemptive scheduling of periodic real-time tasks. Information Processing Letters, 11(3), pages 115-118, 1980.

C.L. Liu and J.W. Layland. Scheduling Algorithms for Multiprogramming in a Hard Real-Time Environment. Journal of the ACM, 20(1), pages 40-61, 1973.

K. Schwan and H. Zhou. Dynamic Scheduling of Hard Real-Time Tasks and Real-Time Threads. IEEE Trans. on SW Eng., 18(8), pages 736-748, 1992.

M. Silly, H. Chetto and N. Elyounsi. An Optimal Algorithm for Guaranteeing Sporadic Tasks in Hard Real-Time Systems. IEEE Symp. on Parallel and Distributed Processing, Dec. 1990.

M. Silly. A Dynamic Scheduling Algorithm for Semi-Hard Real-Time Environments. Proc. of 6th Euromicro Workshop on Real-Time Systems, pages 13-137, June 1994.


Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

LAN MEDIUM ACCESS CONTROL SIMULATION STUDY UNDER A REAL-TIME DCCS: AN AUTOMATED GUIDED VEHICLES SYSTEM

J. A. Sirgo, H. López, J. C. Alvarez, J. M. Alvarez

Departamento de Ingeniería Eléctrica, Universidad de Oviedo, Campus de Viesques s/n, 33204 Gijón, Spain

Abstract: This paper describes the communication system implemented to control AGVS traffic and tasks, and shows the behaviour of different Medium Access Controls (MACs) used in LANs and fieldbuses under realistic AGVS requirements. The behaviour is studied by simulating these MACs with a software Simulation Package. The results indicate that the same MAC method does not always give the best performance in a communication system for real-time DCCS; sometimes real-time-oriented MACs obtain worse results than non-real-time-oriented MACs. It depends on the DCCS communication requirements.

Keywords: Real-time communication; Networks; Automated guided vehicles; Flexible manufacturing systems.

1. INTRODUCTION

The "Departamento de Ingeniería Eléctrica, Electrónica, de Computadores y de Sistemas" (DIEECS) at the University of Oviedo is developing a prototype of a Flexible Manufacturing System (FMS). This FMS consists of an Automated Warehouse, an Automated Guided Vehicle System (AGVS) and several working cells.

In this FMS the control is distributed over several computers and microprocessor boards connected by Local Area Networks (LANs) or fieldbuses. This paper describes the communication system implemented to control the AGVS traffic and tasks, and shows the behaviour of different Medium Access Controls (MACs) used in LANs and fieldbuses, obtained by simulating these MACs under realistic AGVS requirements. The communication system is subject to real-time requirements, such as alarm events that should be managed as soon as possible. It is therefore very important to choose the MAC carefully, and simulation helps to find the right one.


2. THE AGVS COMMUNICATION SYSTEM

The AGV shown in Fig. 1 is designed to transport palletised goods. The designed AGV incorporates an on-board microprocessor in order to control all vehicle tasks and sensors, as shown in Fig. 2. When the vehicles are in movement, a communication network is required to supply commands to the vehicle from a remote host computer which controls the whole AGVS.

To release the host computer from routine communication tasks, there are two microprocessor boards that work as an interface between the host computer and the low-level communication devices.

One of them, the Microprocessor Board for Inductive Loop (MBI), allows on-route communication between the AGVs and the host computer, emitting to and receiving information from an aerial on the AGVs. This communication system is called the inductive loop.

The other, Microprocessor Board for Work Cells (MBW), synchronises AGVs load/unload operations by means of an infrared communication system.


Fig. 1. Designed AGV.

A reliable method to control the two buses must be adopted to avoid the errors that would arise if several vehicles undertook communication simultaneously. Polling is the solution given to this problem. The MBI and MBW address one of the vehicles of the system by sending a message through the inductive loop or the infrared ports. Even though this solution is highly reliable, in large systems vehicles might wait a long time for instructions. Therefore, if the number of work cells and the length of the paths are high enough, installing several MBIs and MBWs is advisable in order to improve efficiency. Each board then controls an area of the whole path or a set of work cells, several operations may be carried out simultaneously, and vehicle waiting times for polling drop. Hence, traffic fluency and speed are remarkably increased.

3. THE MAC SIMULATION PACKAGE

As polling might not be the best choice of MAC for an AGVS, other MACs have been tested against it by means of a MAC Simulation Package. This package is flexible enough that the clients of the network can be configured ad hoc. Any number of hosts in the network can be defined as masters or slaves. The number of frames generated per second (drawn from time distributions or set deterministically), the source and destination of the frames, fixed or variable frame length, whether an acknowledgement is required, the acknowledgement frame length, etc. can be defined too.

Other MAC evaluation techniques, such as analytic modelling or queuing simulation packages, were discarded mainly because of their lack of flexibility in modelling the host behaviour in the network.

The MACs that can be tested include Master-Slave Polling, CSMA/CD, CSMA/DCR, Token-Bus, and Token-Bus with Master-Slave Polling. The Simulation Package gives information on mean and maximum transfer, queuing and acquisition times. It also gives information about the load and the performance of the network, such as the number of bits/s generated and transferred, the number of frames, etc. Information about the network behaviour under each MAC can thus be obtained when the data is plotted.

Fig. 2. On-board architecture.

One of the main problems in the development of the MAC Simulation Package was the simulation of the CSMA collision events. A purely event-driven simulation, as used by Prasad and Patel (1988), requires a lot of computing time. Using a collision-slot model similar to that used by Moura et al. (1989) in their analytic model, the computing time for this simulation drops considerably. Moreover, the results are so close to those obtained by Bux (1981), Prasad and Patel (1988) and Moura et al. (1989) that the model can be considered valid.

The Simulation Package MAC models have certain limitations, above all in the CSMA models: the physical link is error free; the bus is linear, not a tree; the hosts are equidistant and uniformly distributed along the bus; and the propagation time is negligible.

Some parameters are common to all the MACs: transmission speed 10 Mb/s, bus length 2000 m and propagation speed 200·10⁶ m/s.

Even though some MACs are not implemented at a 10 Mb/s transmission speed, the same transmission speed has been taken for all of them in order to compare the performance of the different MAC methods.

The compared parameters are mainly the acquisition time and the transfer time. The transfer time of a message is the sum of: queuing time (from the generation of a message until its arrival at the front of a ready queue), acquisition time (from the arrival of a message at the front of a ready queue to the capture of the physical bus for its transmission), transmission time (the length in bits of the frame carrying the message divided by the transmission speed in bits/s), and propagation time (the time the signal takes to reach the destination host).
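The delay decomposition above can be written out directly, using the common simulation parameters of the previous section (10 Mb/s bus, 2000 m length, 200·10⁶ m/s propagation speed). In the Simulation Package the queuing and acquisition times come out of the MAC model itself; here they are plain inputs.

```python
BUS_SPEED = 10e6       # bit/s, common transmission speed
BUS_LENGTH = 2000.0    # m, common bus length
PROP_SPEED = 200e6     # m/s, common propagation speed

def transfer_time(frame_bits, t_queue, t_acq):
    """t_tr = t_qu + t_ac + transmission time + propagation time (seconds)."""
    t_tx = frame_bits / BUS_SPEED        # clock the frame onto the bus
    t_prop = BUS_LENGTH / PROP_SPEED     # worst-case end-to-end delay: 10 us
    return t_queue + t_acq + t_tx + t_prop

# A 10-byte (80-bit) command frame with no queuing and no contention:
# 80/10e6 + 2000/200e6 = 8e-6 + 1e-5 = 1.8e-5 s
print(transfer_time(80, 0.0, 0.0))
```

This makes explicit that for the short 10-byte messages of this system the fixed transmission and propagation components are tens of microseconds; the millisecond-scale delays seen later come almost entirely from queuing and acquisition.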

The Simulation Package also logs the reply time: the period from the generation of a message to the receipt of its acknowledgement or of a reply message.

Each simulation is repeated several times for statistical reasons to obtain the mean of each time, and the maximum is logged too. The maximum values logged have no statistical meaning, but since they result from thousands or even millions of simulated message transmissions, they give an idea of the maximum delay in each case.

4. AGVS REAL-TIME COMMUNICATION REQUIREMENTS

Based on the AGVS inductive loop communication system prototype, the real-time requirements have been defined. The vehicles have a maximum speed of 1 m/s and generate an event, for instance passing over a track crossing, approximately every second. It has been considered that an alarm should be attended to in less than 0.1 seconds to avoid vehicle collisions or accidents. The traffic controller sends a command to each vehicle every 10 seconds, and loads a new map or program into each vehicle every 1000 seconds (more or less three or four big messages to each vehicle every hour). To check that everything is working, the traffic controller sends a message asking for the AGV status to each vehicle every second. So the traffic controller can be called the master and the vehicles can be considered slaves.

The message size has been estimated at 10 bytes for every command, event, checking or alarm message, and an average of 1000 bytes for a map- or program-loading message.

The study has been based on a plant with 100 AGVs. To control the vehicle traffic and tasks, two models have been defined: a single master controller, or ten master controllers distributed over the plant.

In the first model (Model 1), the single master has to poll all the AGVs to determine whether every vehicle is alive and working, send them tasks, load new maps or programs onto them, and attend to their events and alarms as indicated above.

In the second model (Model 2), each master controls 10 of the 100 AGVs. Some additional message traffic should be added to the network model in the form of co-ordination messages between masters. An average of one 10-byte message per second between each possible pair of masters has been assumed.
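As a back-of-the-envelope check, the Model 2 co-ordination traffic assumed above (one 10-byte message per second between each possible pair of the ten masters) can be quantified. This counts pure payload only; frame headers and acknowledgements would add to it.

```python
from math import comb

masters = 10
msg_bytes = 10                          # one co-ordination message per second
pairs = comb(masters, 2)                # number of possible master pairs
coord_bits_per_s = pairs * msg_bytes * 8

print(pairs, coord_bits_per_s)          # 45 pairs -> 3600 bit/s
```

So the co-ordination overhead itself is tiny (a few kbit/s on a 10 Mbit/s bus); the cost of Model 2 lies in configuration complexity rather than in added load.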


The second model is more significant for Master-Slave Polling, where a master can control a region of the AGV path and has to co-ordinate with another master when an AGV passes from one master's region to the other's. But in this MAC the co-ordination messages have to be transmitted over a different physical link, and every master must own a physical link separate from the others, because Master-Slave Polling cannot support several masters on the same bus.

Another difference between Master-Slave Polling and the other MACs in the two aforementioned models is that a slave cannot send a message unless it is polled by the master. The master therefore has to poll each slave ten times every second to meet the alarm-message requirement. This places a big load on the bus.
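A rough estimate of that polling load for Model 1 can be made, under our own assumption (not stated in the paper) that each poll is a 10-byte frame and each slave answers with a 10-byte frame even when it has nothing to report, with no per-frame protocol overhead.

```python
agvs = 100
polls_per_slave_per_s = 10     # needed to meet the 0.1 s alarm deadline
frame_bits = 10 * 8            # a 10-byte poll or answer frame

polls_per_s = agvs * polls_per_slave_per_s      # 1000 polls per second
bits_per_s = polls_per_s * 2 * frame_bits       # poll + answer per cycle

print(bits_per_s, bits_per_s / 10e6)            # 160000 bit/s, 1.6% of the bus
```

Even under these minimal assumptions the master must sustain 1000 polling transactions per second; with realistic frame overheads the fraction of the bus consumed purely by polling grows accordingly.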

5. MAC SIMULATION RESULTS

In spite of its complexity, the above network model description was easily implemented in the Simulation Package and simulated for each MAC.

In the graphics below, except for Master-Slave Polling, not only the above models have been simulated. The load generated by the above models is about one Mbit/s, i.e. a 0.1 or 10% load on a 10 Mbit/s transmission speed bus. In the graphics, the first mark represents a simulation with a tenth of the load described in the models, the second is more or less the mentioned load (approximately 10% of the bus capacity), the third is twice the load, the fourth three times, and so on. Thus it can be seen how the increased load affects each MAC's delay times. The delay times represented are: acquisition time (t_ac), queuing time (t_qu) and transfer time (t_tr).

5.1 Master-Slave Polling.

In the graphics, the mean and maximum times have been represented for the single-master model and the ten-master model. Obviously, when the polling load is spread over several masters the performance of the communication system is better, but the configuration is more complex.

The number of hosts on the Master-Slave bus, i.e. the number of slaves that a master controls, has no effect on the delay times if the message load on the bus is the same.

For Model 2, Master-Slave Polling can meet the AGVS communication system requirements (for instance, the maximum logged transfer time of a message is 10 ms), but not for Model 1.



Fig. 3. Master-Slave Polling mean times.

5.2 CSMA/CD.

The CSMA/CD performance is negligibly affected by the number of hosts in the network when the load is the same. This has been verified in several model simulations.

In the same way, the results for Model 1 and Model 2 are quite similar. For this reason, only the single-master model is represented in the graphics; it is also the most suitable case for this MAC.

Even though it is a stochastic method, for a 10% bus load (the load of these models) the mean and maximum delay times meet the system requirements. Over a 20% load, however (for instance, if the system has more information traffic or more AGVs), some messages appear with maximum acquisition and transfer delays over 100 ms. Some of these messages may be alarm messages, since CSMA/CD cannot give priority to the transferred frames, and this can be dangerous for the AGVS.

The maximum delay times seem to be limited to 400 ms for high loads, but in reality the delay times should tend to infinity. The deviation is due to the fact that CSMA/CD discards a frame if it cannot be sent by the 16th attempt, and the Simulation Package does not take these discarded messages into account, so they do not appear in the simulator statistics. The first discarded frames appear at a 30% load.
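The 400 ms ceiling can be traced to the 802.3 retry rule: truncated binary exponential backoff, with the frame discarded after 16 failed attempts and the backoff exponent capped at 10. A small sketch (slot time 512 bit times, i.e. 51.2 µs at 10 Mb/s):

```python
import random

SLOT_TIME = 51.2e-6    # s: 512 bit times at 10 Mb/s
MAX_ATTEMPTS = 16      # the frame is dropped after the 16th failure

def backoff_delay(attempt, rng=random):
    """Backoff after the attempt-th collision (1-based); None means discard."""
    if attempt >= MAX_ATTEMPTS:
        return None                    # dropped -> invisible to the statistics
    k = min(attempt, 10)               # exponent capped at 10
    return rng.randrange(2 ** k) * SLOT_TIME

# Worst-case total backoff a frame can accumulate before being discarded:
worst = sum((2 ** min(a, 10) - 1) * SLOT_TIME for a in range(1, MAX_ATTEMPTS))
print(round(worst, 3))   # ~0.366 s, the same order as the observed 400 ms
```

The worst-case sum lands at roughly 0.37 s, which is consistent with the simulated maxima levelling off around 400 ms: beyond that point frames stop being delayed further and start being discarded instead.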

5.3 CSMA/DCR.

The only difference between this MAC and CSMA/CD is the way it resolves a bus collision. It uses DCR (Deterministic Collision Resolution), which consists of time-multiplexing the bus after each collision. It is a deterministic MAC, but its performance worsens as the number of hosts in the network increases (Gonsalves and Tobagi, 1988).

Fig. 4. Master-Slave Polling maximum times.

The best results for this MAC are for Model 2 (represented in the graphics), because, as the masters share most of the network load, the bus time multiplexing after each collision is used by several masters instead of only one as in Model 1. In spite of this, CSMA/DCR cannot support a load over 10% for Model 2. For Model 1, the mean and maximum transfer times (not represented) are nearly a second and over a second respectively, while the acquisition times are approximately the same as in Model 2.

5.4 Token-Bus.

This MAC and the next one have been studied under two different schemes, as Ayandeh (1988) did. One is for hosts to transmit only a single frame per token visit (Token-Bus un.), suitable for networks with real-time applications. The other allows hosts to transmit more than one frame until their message buffer is empty or the maximum token hold time (10 ms) is reached. This is referred to as exhaustive service (Token-Bus ex.). The two schemes can be seen as the two extremes of the Token-Bus priority system.
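The two token-visit disciplines compared above can be sketched as follows. This is a hypothetical illustration, not the simulator's code: times are in integer microseconds, a 10-byte frame takes 8 µs at 10 Mb/s, and the exhaustive discipline keeps the token until the buffer empties or the 10 ms token hold time expires.

```python
FRAME_US = 8             # us to transmit a 10-byte frame at 10 Mb/s
TOKEN_HOLD_US = 10_000   # us: maximum token hold time (10 ms)

def token_visit(buffer_len, exhaustive):
    """Return (frames sent, microseconds the station holds the token)."""
    if not exhaustive:
        sent = min(buffer_len, 1)                          # unique-frame service
    else:
        sent = min(buffer_len, TOKEN_HOLD_US // FRAME_US)  # exhaustive service
    return sent, sent * FRAME_US

print(token_visit(5, exhaustive=False))   # (1, 8)
print(token_visit(5, exhaustive=True))    # (5, 40)
```

The trade-off follows directly: unique-frame service bounds every token rotation tightly (good for real-time guarantees), while exhaustive service lets a busy station drain its queue, which is why it copes better with the heavy single-master load of Model 1.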


A bigger number of hosts in the network increases the delay times for light loads in both schemes, but the performance remains similar for the first scheme, while it worsens for the second as the number of hosts increases.

Fig. 5. CSMA/CD mean times for Model 1.

Fig. 6. CSMA/CD maximum times for Model 1.

The best results for the unique-frame service are for Model 2 (for Model 1 it can still meet the communication system requirements), because there are fewer slaves per master and hence the cycle time to poll them is lower. Even in this case, the Token-Bus un. MAC can only meet the system requirements with a load under 20%.

On the contrary, the Token-Bus ex. results are better for Model 1 (see Figs. 9 and 10), because exhaustive use of the token by several masters increases the delay times. The MAC fulfils the requirements even for a 60% load (and for a 55% load for Model 2, not represented in the graphics).

5.5 Token-Bus with Master-Slave Polling.

This MAC is frequently implemented in fieldbuses, usually with lower transmission speeds than the simulated 10 Mbit/s. Only the masters participate in token passing, and when they own the token, they poll their slaves. The unique frame service and the exhaustive service schemes are applicable as above.

The results of the two schemes are similar for both models. In both schemes the results are slightly better for Model 1 than for Model 2, for the same reasons as for Token-Bus ex., and an increased number of hosts in the network leads to the same effects as for Token-Bus ex.


Fig. 7. CSMA/DCR mean times for Model 2.



Fig. 8. CSMA/DCR maximum times for Model 2.

But the acquisition times at higher loads are not as bounded as for Token-Bus ex., because now the slaves have to wait to be polled by a master before transmitting their frames.

The results represented in the graphics in Figs. 11 and 12 are the best for this MAC (exhaustive service on Model 1). The communication system requirements are fulfilled even for a 50% load.

6. CONCLUSIONS

The same MAC method does not always give the best performance in a communication system for real-time DCCS. Sometimes real-time-oriented MACs obtain worse results than non-real-time-oriented MACs. It depends on the DCCS communication requirements.

The Master-Slave Polling implemented in the AGVS prototype is only suitable if the number of AGVs is small. If there are many AGVs, several buses, each with its own master, are needed, and the network configuration becomes more complex.



Fig. 9. Token-Bus ex. mean times for Model 1.


Fig. 10. Token-Bus ex. maximum times for Model 1.

On the other hand, two MACs that at first seemed less suitable turn out to be more suitable than two real-time-oriented MACs (CSMA/CD versus CSMA/DCR, and Token-Bus versus Token-Bus with Master-Slave Polling). For these MACs, a single-master model is more suitable than a ten-master model.

With a software simulator, the most suitable MAC for the described AGVS model has been found. In this case it is Token-Bus with exhaustive frame service. The AGVs must participate in the token passing.

REFERENCES

Ayandeh, S. (1988). Simulation Study of the Token Bus Local Area Network. In: Proceedings of 13th Conference on Local Computer Networks, pp. 268-274. Computer Society Press of the IEEE, Washington.



Fig. 11. Token-Bus with Master-Slave Polling ex. mean times for Model 1.


Fig. 12. Token-Bus with Master-Slave Polling ex. maximum times for Model 1.

Bux, W. (1981). Local Area Subnetworks: A Performance Comparison. IEEE Trans. on Communications, 29, pp. 1465-1473.

Gonsalves, T. A. and F. A. Tobagi (1988). On the performance effects of station locations and access protocol parameters in Ethernet networks. IEEE Trans. on Communications, 36, pp. 441-449.

Moura, J. A. B., J. P. Sauve, W. F. Giozza and J. F. Marinho (1989). Redes Locales de Computadoras: Protocolos de Alto Nivel y Evaluación de Prestaciones. McGraw-Hill, Madrid.

Prasad, K. and R. Patel (1988). Performance Analysis of Ethernet Based on an Event Driven Simulation Algorithm. In: Proceedings of 13th Conference on Local Computer Networks, pp. 268-274. Computer Society Press of the IEEE, Washington.


Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

INTEGRATION OF WIRELESS MOBILE NODES IN MAP/MMS

P. MOREL, J.-D. DECOTIGNIE*

*Swiss Federal Institute of Technology, Lausanne, Computer Engineering Department, EPFL-DI-LIT, CH-1015 Lausanne, Tel: +41-21-693-2681; Fax: +41-21-693-4701; e-mail: [email protected]

Abstract: In industrial networking (Pleinevaux and Decotignie, 1993), wireless communication can be justified by a fundamental need (e.g. communication with mobile nodes) or by a convenience problem (e.g. physical reconfiguration). The latter case may be important in flexible manufacturing, where reconfigurations can be costly. On the other hand, the control of mobile devices (autonomous vehicles, mobile robots, etc.) is a fundamental requirement, especially in flexible manufacturing. This paper presents how the management of mobile wireless nodes can be considered as a distributed application using the MMS Object Model.

Keywords: Mobile networking; wireless network; distributed systems; MAP/MMS; IEEE 802.11

1 INTRODUCTION

Mobile robots, our main application target, are expensive devices with embedded computers for their control; they communicate with their (fixed) local command computer as intelligent stations. The commands are high-level, such as go to that place or take this object. The robot acknowledges in report form, like I am at that place. On the other hand, it must be possible to automatically download the embedded software and read journals and internal status tables for technical support. This calls for two types of services, which can be handled by the industrial Manufacturing Message Specification MMS (ISO, 1989).

The mobile node can be modelled in an abstract way known as a Virtual Manufacturing Device (VMD). Inside the VMD, MMS objects are used to represent physical entities associated with the mobile device. Each MMS object has a set of MMS services to manage it, and it is through these services that actions are conducted in the MMS environment. Typically, the MMS client (in our case the cell supervisor) uses MMS services to manipulate MMS objects on the MMS server (the mobile node), which contains at least one VMD.
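The client/server relationship described above can be illustrated with a toy sketch. The class and method names here are illustrative placeholders, not the actual MMS service definitions of ISO 9506: the mobile node acts as the server hosting a VMD of named objects, and the cell supervisor (client) acts on them only through the server's services.

```python
class VMD:
    """Virtual Manufacturing Device hosted by the mobile node (server side)."""
    def __init__(self):
        self.objects = {}              # named MMS-like objects (e.g. variables)

class MobileNodeServer:
    """The mobile node: an MMS-style server containing one VMD."""
    def __init__(self):
        self.vmd = VMD()

    def write(self, name, value):      # stands in for a write-type MMS service
        self.vmd.objects[name] = value

    def read(self, name):              # stands in for a read-type MMS service
        return self.vmd.objects[name]

# The cell supervisor plays the MMS client role: it never touches the VMD
# directly, only through the server's services.
server = MobileNodeServer()
server.write("command", "GO_TO place_A")
print(server.read("command"))          # GO_TO place_A
```

The point of the indirection is that the client's view of the robot is entirely mediated by named objects and services, which is what later lets the physical transport (wired or wireless) change underneath without affecting the application.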

This paper presents how the MAP/MMS protocol can be extended to allow mobility. The integration of the IEEE 802.11 wireless protocol into the MAP stack is also shown. In this context, the transparency with respect to the upper layers of the OSI model and the issues of addressing and routing in a mobile environment are explored.


The paper is organized as follows. Section 2 presents the MAP/CNMA, MMS and IEEE 802.11 standards. Section 3 presents and analyses the proposed architectures. Section 4 shows the integration of a wireless mobile node in a MAP local industrial network.

2 TECHNICAL BACKGROUND

2.1 MAP and CNMA

The Manufacturing Automation Protocol (MAP) was defined by General Motors with the goal of reducing the cost of installations and of remaining independent in the choice of suppliers. Its architecture is based on the seven-layer ISO standard.

MAP started with the use of the IEEE 802.4 token-bus protocol. Since 1986, a superset of MAP, called CNMA (Communications Network for Manufacturing Applications) (CNMA, 1991), has been specified and implemented by European companies and institutes in an Esprit II project. The IEEE 802.3 (Ethernet) protocol at the MAC level and the Remote Database Access protocol at the application layer have been added.

In the remainder of this paper, we refer to the MAP Ethernet version.

2.2 MMS

The Manufacturing Message Specification (MMS) (ISO, 1989) is an international standard that defines a set of services (as well as a corresponding communication protocol) that comprise part of the application layer of MAP. MMS was designed to standardise and facilitate the remote control and monitoring of industrial devices made by different vendors. MMS serves as a common language that forms a foundation for the interconnectivity of industrial devices. MMS is based on a client-server model of communication.

In many automation systems, the controlling application, called the MMS client, is responsible for directing the operations of the individual controlled machines, called MMS servers, distributed throughout the automated system. We examine the case in which an MMS client application node or an MMS server application node moves from one subnet to another.

2.3 Wireless LAN (IEEE 802.11)

A wireless LAN is conceptually different from a wired LAN (Lessard and Gerla, 1988). The most important differences are the shared medium, reduced reliability and a dynamic topology. Another difference is the meaning of the word address: in a wireless network an address is not equivalent to a specific location.

Since 1990, the P802.11 Working Group has been developing standards for all kinds of wireless communications. The initial goal in elaborating this protocol was the following: develop a medium access control (MAC) and Physical Layer (PHY) specification for wireless connectivity for fixed, portable and moving stations within a local area (Departement, 1994).

At the MAC level, networks with more than 1000 nodes are allowed by the standard, and it handles data transmission speeds up to 20 Mbps. 802.11 uses a contention mechanism to allow stations to access a shared channel, in the spirit of 802.3. Because a station cannot simultaneously listen on the same channel on which it is transmitting, it is not able to determine that a collision has occurred until the end of a packet transmission. A special collision avoidance mechanism must therefore be added to the CSMA protocol to reduce the probability of collision. The MAC protocol uses Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) (Antonio and Sanjic, 1995). The 802.11 MAC-layer protocol is tied to the IEEE 802.2 Logical Link Control layer. This makes 802.11 LANs easier to integrate with CNMA, which also conforms to the 802.2 LLC standard.

At the physical level, the draft standard IEEE 802.11 defines three different physical layer types: Direct Sequence Spread Spectrum (DSSS) in the 2.4 GHz ISM band, Frequency Hopping Spread Spectrum (FHSS) in the 2.4 GHz ISM band, and baseband IR. A 1-Mbps transmission speed has been specified for DSSS LANs.

3 NETWORK ARCHITECTURES

Before we discuss routing between mobile nodes and fixed stations, let us examine two architectures for interconnecting one or more wireless networks with existing networks in the factory setting, starting with the simplest topology and ending with the most complex. The evaluation of these two topologies allows us to detect the weaknesses of the system, so that we can propose elements that improve the existing structure.

3.1 Wireless Cell Structure

Within a given area or "cell", mobile stations operate in the same physical and logical channel and form a wireless LAN segment. Mobile stations cannot reach each other directly but only through one central station, the Hub, which is also the Access Point (gateway) to the wired network. Mobile stations can roam from cell to cell by registering with another access point; this process is called "handover".

3.2 Basic Network Architecture

In the simple architecture shown in Fig. 1, the roaming area of the mobiles is covered by multiple wireless cells, centred around Access Points (APs) interconnected by a single MAP subnetwork.

Fig. 1. Basic Network Architecture

The MAP subnetwork carries communication between all stations, whether wired or wireless, and in addition the special traffic between APs, for example during the handover procedure.


Evaluation The main advantage of this approach is that a mobile node can roam between wireless cells without modifying the MAP protocols.

Another advantage is that there is no need to update the routing table of the subnetwork gateway, or to introduce special mobile controller nodes, because the movement is confined to a single subnet.

If, however, a mobile needs to roam among several subnetworks, we need to enhance the MAP protocols as outlined in the next section.

3.3 Extended Network Architecture

The architecture shown in figure 2 is close to a real factory floor divided into fabrication cells. Each fabrication cell has its own subnetwork with one or more wireless cells. A mobile can roam between fabrication cells, but logically belongs to a particular fabrication cell.

Fig. 2. Extended Network Architecture

Evaluation In this extended architecture, we need to enhance the MAP protocols and develop a method for routing to a mobile MAP system.

4 ROUTING

The first problem encountered when we introduce mobility is that protocols like IP assume that a computer's network address encodes its physical location. Several works on mobile Internet protocols have been published (Perkins et al., 1994; John et al., 1991; Frank, 1994). Based on the above research, Younger et al. (Younger et al., 1993) have proposed a model for the integration of wireless nodes in OSI networks. We propose to adapt this model to the local industrial network MAP.

4.1 Requirements

Any system using mobile MAP protocols must remain compatible with existing hosts, so changes to the base MAP protocols can be required neither of the existing routers nor of the hosts. This means that it is not possible to specify any change above the Network Layer.

Existing distributed applications must continue to work without interruption when a mobile host moves between adjacent cells. In any case, the fact that a node is mobile should be hidden by the network from other systems which wish to communicate with it.

Each mobile host is assigned a constant NSAP address on a home subnet, known as its home address. Correspondent hosts may always use the home address to address packets to a mobile host. Each mobile host has a home agent (for example, the cell controller) that maintains a list identifying the mobile hosts it is configured to serve and the current location of each of them.

When a mobile host connects to a new AP, it must perform a registration process with this AP before packets can be delivered to it. This process is performed when the mobile host is initially activated, or after handover. Each AP maintains a visitor list of its currently registered mobile hosts. During the registration process, the new AP notifies the mobile host's home agent of the new location.
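The bookkeeping this implies can be sketched as follows. The class and method names are ours, purely illustrative, and not taken from the MAP or 802.11 specifications:

```python
# Hypothetical sketch of the registration bookkeeping described above: each
# AP keeps a visitor list of registered mobiles, and registration notifies
# the home agent, which maps each served mobile's home (NSAP) address to the
# identity of its current AP.
class HomeAgent:
    def __init__(self, served: set):
        self.served = served                 # mobiles this agent serves
        self.location = {}                   # home address -> current AP id

    def notify(self, mobile: str, ap_id: str) -> None:
        """Called by the new AP during the registration process."""
        if mobile in self.served:
            self.location[mobile] = ap_id

class AccessPoint:
    def __init__(self, ap_id: str, home_agent: HomeAgent):
        self.ap_id = ap_id
        self.home_agent = home_agent
        self.visitors = set()                # currently registered mobiles

    def register(self, mobile: str) -> None:
        """Registration at initial activation or after handover."""
        self.visitors.add(mobile)
        self.home_agent.notify(mobile, self.ap_id)
```

After a handover, registering with the new AP both updates that AP's visitor list and refreshes the home agent's location entry, which is what keeps packets routable.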

Handover The handover protocol is used by a mobile station that has found an AP giving better RF communication quality than the current AP.

Fig. 3. Handover Procedure (message sequence chart between the mobile node, the new AP and the old AP over the MAP subnetwork; recoverable messages: Handover REQ and Handover CONF between the mobile node and the new AP, and AP-Handover REQ, Handover RESP and AP-Handover IND between the new and old APs)

The handover procedure is initiated by the mobile host. It can be seamless if the mobile host finds an AP that transmits a signal of sufficient quality.


4.2 Routing In Subnet

If movement is confined to a single subnet, as in figure 1, the routing is done by the APs. The AP routing tables and the home agent's routing tables are updated during the handover procedure. If the mobile is in its own subnet, its incoming packets are routed directly by the AP to the mobile host, without the help of the home agent.

Such movements do not require modifying the MAP protocols, because they are invisible to the subnetwork independent convergence sublayer (SNIC), which provides the subnet-independent ISO network service to the Transport layer and includes internetwork routing and switching.

4.3 Routing Between Subnets

If a mobile host is able to move between subnets, then the movement is visible to the SNIC layer. In Computer Integrated Manufacturing, the control of the autonomous vehicles is done hierarchically by a single host. It is therefore most efficient, in terms of network traffic, to group the functions of home agent and vehicles controller in the same station. This means that only the vehicles controller's and the APs' MAP protocols need to be modified to handle mobile packet traffic.

Tunnelling The home agent sends packets to an autonomous vehicle's current location using tunnelling. Tunnelling involves the use of an encapsulation protocol: the original Destination Address is moved into the packet's body, and the new Destination Address corresponds to the mobile host's AP or to the vehicles controller. Once delivered to that host, the packet is handled by the enhanced MAP protocol software and eventually sent to the wireless host.
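The encapsulation step can be illustrated as below. The field names and dictionary representation are ours, not the MAP packet format:

```python
# Hypothetical illustration of the tunnelling step described above: the
# original destination (the mobile's home address) moves into the packet
# body, and the outer destination becomes the mobile's current AP (or the
# vehicles controller). Intermediate routers only see the outer address.
def encapsulate(packet: dict, current_ap: str) -> dict:
    """Wrap a packet so intermediate routers see a normal destination."""
    return {
        "dst": current_ap,      # new (outer) destination address
        "payload": packet,      # original packet, original dst inside
    }

def decapsulate(tunnel_packet: dict) -> dict:
    """Performed by the AP / enhanced MAP software at the tunnel endpoint."""
    return tunnel_packet["payload"]

original = {"dst": "mobile-home-addr", "data": "start-job"}
tunneled = encapsulate(original, "AP-3")
assert tunneled["dst"] == "AP-3"
assert decapsulate(tunneled) == original
```

This also makes the advantage noted below concrete: between the home agent and the AP, the tunneled packet looks like any other packet addressed to the AP.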

Evaluation The above sections describe a typical industrial application and one of its solutions. This solution assumes that the management of the autonomous vehicles is done by an enhanced MAP protocol host, the vehicles controller, which knows where each mobile host is and which AP serves it.

The main advantage is that the intermediate routers need not understand the tunnelling pro­tocol, since after being encapsulated, the packet is simply a normal MAP packet addressed to the AP or home agent.

The main drawback is that messages sent to mobiles must be encapsulated and re-directed whenever the mobiles roam away from their home subnetwork.


5 CONCLUSION

We have considered the problem of distributing a controlling application in a mobile automated system. This paper outlined two architectures that allow the integration of mobile nodes into a MAP industrial network. We have shown that new services must be added to enable mobile hosts to maintain network connections even as they move from one subnet to another.

Future work will involve the implementation of a field test and the evaluation of other wireless standards, like DECT, in the framework of industrial communication networks.

6 ACKNOWLEDGMENTS

The author would like to thank Alain Croisier for his helpful insights during the editing of this paper.

7 REFERENCES

Antonio, DeSimone and Nanda, Sanjic (1995). Wireless data: Systems, standards, applications. MOBIDATA: An Interactive Journal of Mobile Computing.

CNMA, Esprit Project (1991). Implementation Guide 5.0. (Available on FTP server litsun.epfl.ch).

IEEE Standards Department (1994). IEEE Draft Standard 802.11, Document P802.11/D1. USA.

Frank, Reichert (1994). The Walkstation project on mobile computing. In: Wireless Networks (IEEE/ICCC conference). Vol. 3. pp. 974-978.

ISO (1989). Manufacturing Message Specification. Service Definition.

John, Ioannidis, Dan Duchamp and Gerald Q. Maguire Jr. (1991). IP-based protocols for mobile networking (ACM SIGCOMM 91). Computer Communication Review 21(4), 235-245.

Lessard, A. and M. Gerla (1988). Wireless communication in the automated factory environment. IEEE Network Magazine 2(3), 64-69.

Perkins, Charles, Andrew Myles and David B. Johnson (1994). IMHP: A mobile host protocol for the Internet. Computer Networks and ISDN Systems 27(3), 479-491.

Pleinevaux, P. and J.-D. Decotignie (1993). A survey on industrial communication networks. Annales des Telecommunications 48(9-10), 435-448.

Younger, E.J., K.H. Bennett and R. Hartley-Davies (1993). A model for a broadband cellular wireless network for digital communications. Computer Networks and ISDN Systems 26(4), 391-402.


Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

DEFINITION OF REAL TIME SERVICES FOR HETEROGENEOUS PROFILES

J. Lecuivre 1,2, J-P. Thomesse 1

1 CRIN-CNRS URA 262 - ENSEM

2 Avenue de la Foret de Haye - 54516 Vandoeuvre - FRANCE
Phone: 83 59 59 59 - Fax: 83 44 07 63 - Email: {lecuivre, thomesse}@loria.fr

2 Silicomp Ingenierie

BPI Zirst Montbonnot - 38330 Montbonnot - FRANCE

Phone: 76 41 66 66 - Fax: 76 41 66 67 - Email: jle@silicomp.fr

This work has been undertaken under ANRT research grant n° 142/93 between Silicomp Ingenierie and CRIN.

Abstract. Our paper deals with the explanation and definition of real time services which should help the realisation of CIM applications. We discuss the use of temporal Quality of Service parameters associated with CCE services, and we present ideas concerning further work on the definition of other Quality of Service parameters.

Keywords. Distributed Computer Control System, Integration, Real Time Communication, Quality of Service

1. INTRODUCTION

Several functional levels have been defined in the CIM architecture, and some of the currently identified functions at level 2 are Supervisory Control and Data Acquisition (SCADA), Man Machine Interface (MMI), Device Configuration, Quality Control, Maintenance, etc. Our paper deals with the description of real time services suited for the development of these functions, and with the requirements they induce on the underlying communication protocols. We mainly focus in this paper on the variable list read service. For a client, two ways have been considered to read a list of variable values:

• The first method is to fetch the variable values where they are produced, when the client needs them. A Quality of Service called QoS1 is associated with this first method.

• The second method is to require locally a copy of the variable values, with the guarantee that the copy is not too old. A Quality of Service called QoS2 is associated with this second method.

These two methods rely on two kinds of multipeer connections, which are both N-M relations according to the classification of the identified cooperation models (Thomesse, 1993) (Vega and Thomesse, 1995). The first method is of multiclient - multiserver type; the second one is of multiproducer - multiconsumer type. Both methods and the associated QoS are detailed in the following sections. We assume in this paper that all clocks in the system are synchronised, in order to obtain an approximate global time as in (Kopetz, 1991).

2. CONTEXT PRESENTATION

2.1. Needs description

Modern distributed computing often relies on distributed object techniques (Mowbray, 1994), (Birrel, et al., 1994). A network object is an object whose methods can be invoked over a network. In a manufacturing environment, a variable can be seen as an object distributed over the network; this object has several attributes, such as its value, its production date, its transfer date, the address of its producer, etc. Some attributes of this object are static (i.e. they do not change from the creation of the object until its destruction), while others are dynamic; one of the problems is to maintain the "coherence" of the dynamic attributes over the DCCS.

The motivation for the definition of such services is the heterogeneous situation at all levels of the CIM architecture. The wish in the 1980's was a single international standard fieldbus for the connection of sensors, actuators and PLCs; now several fieldbusses exist: Profibus, Fip, CAN, Pnet, LON and still Modbus, and they sometimes work together. In the CIM architecture we also find other networks like MAP, TOP, or Ethernet for the connection of workstations, PCs and PLCs. Distributed systems which are built using standard programming interfaces and standard communication protocols are sometimes called "middleware" (Bernstein, 1993), because they sit in the middle, below the industrial application and above the operating system and communication protocols.

To enable application processes to perform the functions described above, to exchange data, and to check whether their time constraints are satisfied, the CCE ESPRIT project (CCE-CNMA Consortium, 1994) has developed real time services with time-related QoS requirement parameters and QoS data parameters (ISO/IEC JTC1, 1995). These services are performed by the Application Entities (AEs) distributed over the network. A profile is defined as a set of services and protocols organised in layers; typically, in Fig. 1, profile 1 could be TCP/IP on Ethernet and profile 2 could be MMS on Mini Map, sub MMS on Fip, or Modbus on a serial link. Fig. 1 shows typical real time exchanges between Application Processes (APs):

Ex1: A supervision process AP1 should be aware of a fault of the physical process within reasonable delays, which depend on the time constants of the physical variables.

Ex2: A fabrication process AP2 needs to download a program of 6 kbytes on PLC2, which should be transmitted and started before the end of the execution of the program running on PLC1; the transmission from Machine 1 to Machine 7 should not take more than 1 s.

Ex3: A maintenance process AP3 should periodically obtain the status values of the different PLCs of the CIM application.

Fig. 1: A heterogeneous architecture

2.2. Cooperation Models definition

In (Thomesse, 1993), the analysis of real-time data exchanges has led to the identification of four types of relationships between the application processes. For each of them, two cooperation types may be considered. The relationships are related to the number of APs in relation, the cooperation types to the manner in which the APs may cooperate. The relationships are: one to one (1-1), one to many (1-N), many to one (N-1), and many to many (M-N). The cooperation types are respectively the client - server (CS) model and the producer - consumer (PC) model. The client - server model allows a process (the client) to request a service from another one (the server) and eventually wait for the answer. The producer - consumer model allows a process (the producer) to send information to another one (the consumer). In the CS model, many services may be defined or provided (as in MMS); in the PC model, only the exchange of information, without considering its semantics, is allowed. Combining relationships and cooperation types leads to eight cooperation models:

1) Relation 1-1: client - server and producer - consumer models
2) Relation 1-N: client - multiserver and producer - multiconsumer models
3) Relation N-1: multiclient - server and multiproducer - consumer models
4) Relation N-M: multiclient - multiserver and multiproducer - multiconsumer models.

Communication models according to the first relation require only point to point communication, whereas communication models according to the other relations require multipeer protocols.

3. SERVICES DESCRIPTION

3.1. Offered Quality of Service

A QoS parameter is a piece of information conveyed between entities. In (ISO/IEC JTC1, 1995) these parameters are classified into QoS requirement parameters and QoS data parameters; we present parameters of both kinds and focus more precisely on time-related characteristics. With the first way of reading a variable list, the client application process cannot require a temporal QoS. Nevertheless, the underlying protocol furnishes the AP, by default, with the means to control the temporal QoS of the received confirmation, by associating a transfer date and a production date with each variable value. With the second way of reading a variable list, the application process opens a real time connection between itself and the variable producers. The application process requires statically (i.e. for the connection lifetime) a temporal QoS on the real-time connection. Here too, the underlying protocol furnishes the AP with the means to control the temporal QoS of the received confirmation, by associating a transfer date and a production date with each variable value.

3.2. Read Service according to Client Multiserver Model.

In this paragraph we describe one of the CCE services. Services according to the client - multiserver cooperation model have already been described (Dakroury Y., 1990), (Elloy J.P. and Ricordel R., 1995). In order to clearly describe the behaviour of our service, we have chosen two complementary methods: a time diagram to describe the sequencing of the different primitives arising on the different sites, and then communicating finite state machines, which are a more formal method. This service can rely on several kinds of point to point transport communication protocols, denoted profile P1. The client AP uses a multipeer connection, opened at the creation of the local CCE Application Entity object, with all its access servers. Each access server Application Entity handles several opened connections between itself and all its producers. This notion is described in the following graph:

Fig. 2: Connection graph between Client, Access Servers and Producers (the client holds opened point to point connections to access servers S1..Sp; each access server holds opened connections to its producers Pi1..Piq)

To identify the objects in relation in a client - multiserver exchange, we introduce the three following sets:

Si = {1..p} ∈ {Access_servers}
Pi,j = {1..q} ∈ {Producers_accessed_by_Si}
Vi,j,k = {1..r} ∈ {Variables_of_Pij_producer}

The cce_Read service is used by an application process to read the values of a variable list. The user gets in return a list of elements; each element is a structure composed of the variable identifier, the variable value, and the production and transfer dates. The time diagram below describes in an informal way the behaviour of the service for a variable list {Vi,j,k} with i ∈ {1..p}, j ∈ {1..q}, k ∈ {1..r}.

We suppose here that the application layer protocol between access server and producer is MMS.

Fig. 3: Informal description of the cce_Read service (time diagram: the client Application Process issues cce_Read.req(mode, {Vijk}) to the CCE service provider; the request travels from the client site to access server site S1 over profile 1 and on to the producer PLC site P11 over profile 2; production dates are taken on the producer side, and cce_Read.conf({Vijk, VALijk, TDATijk, PDATijk}, {Vijk, EXCijk}, ERR) is returned)

In the execution of the cce_Read service described by the time diagram, several activities run concurrently and are handled by threads (Arcos and Dupre, 1993):

• On the client site: each thread has to access a local or distant access server.

• On the access server site: each thread has to translate the request into the relevant communication protocol (MMS, Modbus, ...), which accesses the producer variable values, and to wait for the confirmation.

The following tables give the meaning of the parameters conveyed between entities for each service primitive.

Primitive: cce_Read.req
  mode        Call mode: Blocking, Non Blocking.
  {Vi,j,k}    Variable Identifiers List.

Primitive: cce_Read.ind
  {V1,j,k}    Subset of variable identifiers accessed by S1.

Primitive: cce_Read.rsp
  {V1jk, VAL1jk, TDAT1jk, PDAT1jk}    Subset of variable identifiers accessed by S1; value associated to each variable identifier; QoS data parameter: transfer date associated to the variable value; QoS data parameter: production date associated to the variable value¹.
  {V1jk, EXC1jk}    Subset of variable identifiers for which the service could not be performed; exception code returned.

Primitive: cce_Read.cnf
  {Vijk, VALijk, TDATijk, PDATijk}    Variable Identifiers List; value associated to each variable identifier; transfer date associated to the variable value; production date associated to the variable value.
  {Vijk, EXCijk}    Variable Identifier List for which the service could not be performed; exception code returned.
  ERR               Global error code, if any.

We explain in a simplified case the behaviour of the cce_Read service according to the client - multiserver cooperation model. We have chosen communicating finite state machines to describe the behaviour and synchronisation of the concurrent activities arising on the different sites. The example below shows a possible primitive sequencing in the restrictive case of the emission of a cce_Read(BLOCKING, {V1, V2}) request by the client application process. V1 and V2 are produced on different sites and accessed via different protocols.

¹ This QoS parameter is not yet furnished, but would be very useful.

Fig. 4: Client application process FSM

Fig. 5: Client CCE application entity FSM

Fig. 6: CCE access server FSMs

Fig. 7: Producer FSMs (transitions include the response confirmations mms_Read_V1VAL1RepCnf! and mbs_Read_V2VAL2RepCnf!)

3.3. Read Service according to Producer(s) - Distributor - Consumer(s) Model

This service is "real time and multipeer connection" oriented. A consumer opens a real time connection with the producers, by way of p distributors.

Fig. 8: Opened connections graph (consumers C1..Cm hold an opened multipeer connection to the distributors D1..Dp; each distributor holds opened point to point connections to its producers Pi1..Piq)

To identify the objects in relation in a producers - distributor - consumers exchange, we introduce the following sets:

Ci,h = {1..m} ∈ {Consumers_furnished_by_Di}
Di = {1..p} ∈ {Distributors}
Pi,j = {1..q} ∈ {Producers_accessed_by_Di}
Vi,j,k = {1..r} ∈ {Variables_of_Pij_producer}

This service allows the user to express his needs concerning the freshness of the variables contained in a variable list. The freshness of a variable represents the maximum delay accepted by the user between the production of the information (on the producer side) and its presentation to the consumer Application Process (on the consumer side).

The three phases of this service are:

• real time connection opening,
• data transfer phase,
• disconnection.

Fig. 9: RT multipeer connection opening (the consumer application process Cih issues RT_Multipeer_connect.req(mode, {Vijk, dijk}) to the CCE service provider on the consumer site; the request travels over profile 1 to the distributor site D1 and over profile 2 to the producer PLC site P11, and RT_Multipeer_connect.conf({ACKijk}) is returned)

The following tables give the meaning of the parameters conveyed between entities for each service primitive during the connection phase.


Primitive: RT_Multipeer_connect.req
  mode              Call mode: Blocking, Non Blocking.
  {Vi,j,k, dijk}    Variable Identifiers List; QoS requirement parameter.

Primitive: RT_Multipeer_connect.ind
  {V1,j,k, d1jk}    Subset of variable identifiers accessed by D1; QoS requirement parameter.

Primitive: RT_Multipeer_connect.conf
  {ACK1jk}          Acknowledgement.

Primitive: RT_Multipeer_connect.rsp
  {ACKijk}          Acknowledgement.

Fig. 10: Real time data transfer phase (the CCE service provider on the distributor site D1 polls the producer PLC site P11 over profile 2; the production date is taken on the producer side, and values are delivered with their QoS data parameters to the consumer application process Cih over profile 1)

The following tables give the meaning of the parameters conveyed between entities for each service primitive during the transfer phase.

Primitive: Poll.req
  {Vi,j,k}    Subset of variable identifiers accessed by Di.

Primitive: Poll.ind
  {V1jk, VAL1jk, TDAT1jk, PDAT1jk}    Subset of variable identifiers accessed by Di; variable value; QoS data parameter (transfer date); QoS data parameter (production date).
  {V1jk, EXC1jk}    Variable Identifier List for which the service could not be performed; exception code returned.

The polling period can be deduced from the dijk QoS requirement parameter, either by simulation (Lecuivre and Song, 1995) or by measurement.
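For instance, under a simple worst-case model (our assumption, not the CCE specification), the polling period is bounded by the tightest freshness requirement minus the worst-case transfer delay:

```python
# Hypothetical sizing of the distributor's polling period from the freshness
# requirements d_ijk: a value presented to the consumer may be up to one
# polling period plus the worst-case transfer delay old, so the period must
# not exceed the tightest d_ijk minus that delay.
def polling_period(freshness_bounds, worst_case_transfer):
    """All times in seconds; returns the largest admissible polling period."""
    tightest = min(freshness_bounds)
    period = tightest - worst_case_transfer
    if period <= 0:
        raise ValueError("freshness bound tighter than the transfer delay")
    return period
```

A simulation or measurement campaign, as the text suggests, would replace the single worst-case transfer delay with an observed distribution.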

3.4. Further work on QoS item

ISO/IEC JTC1 (ISO/IEC JTC1, 1995) is currently working on the Quality of Service item; its purpose is to provide a conceptual and functional framework for QoS. In this section we present several other temporal QoS characteristics.

Data time validity: the lifetime of the local value of a data item; it is quantified as a time interval.

Remaining lifetime: the time remaining before the data ceases to be valid; it is quantified as a time interval.

We have presented QoS parameters under the control of the Application Process; the following QoS characteristics should instead be under the control of the underlying application protocol:

Temporal coherence: indicates whether an action has been performed on each value in a list within a given time window; it is quantified as a boolean value. There is a range of possible further specialisations of temporal coherence:

Temporal data production coherence: indicates whether the value of each variable in a list has been produced within a given time window; quantified as a boolean value.

Temporal data transmission coherence: indicates whether the value of each variable in a list has been transmitted within a given time window; quantified as a boolean value.

Temporal data consumption coherence: indicates whether the value of each variable in a list has been consumed within a given time window; quantified as a boolean value.

Spatial consistency: indicates whether or not all copies of a duplicated list, or multiple copies of a list of variables, are identical at a given time or within a given time window; quantified as a boolean value. There is a range of possible further specialisations of the spatial consistency characteristic, including timeless spatial consistency, temporal spatial consistency, etc.

"A real-time system has to meet the deadlines dictated by its environment. If a real-time system misses a deadline, it has failed" (Kopetz, 1991). By real-time connection we do not mean only quick transmission, but a connection which offers the means to handle time constraints and to check whether the deadlines are respected. A solution is to associate with each transferred value QoS data parameters such as its production date and its transfer date (CCE-CNMA Consortium, 1994). This allows the user, by comparison with its local time, to know whether the data is fresh enough, that is, to check whether the QoS requirement parameters are respected.
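As an illustration, the temporal data production coherence characteristic reduces to a boolean check over the production dates; the names below are ours, since the framework only defines the characteristic itself:

```python
# Illustrative check of "temporal data production coherence": true when the
# value of every variable in a list was produced within a given time window.
def production_coherent(production_dates, window_start, window_end):
    """Boolean QoS characteristic over a list of production dates (seconds)."""
    return all(window_start <= t <= window_end for t in production_dates)
```

The transmission and consumption variants are identical checks over the transfer and consumption dates, respectively.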


This work may be seen as a proposal to satisfy some requirements stated in (ISO TR 12145, 1992), and it completes previous solutions (Song, et al., 1991).

4. CONCLUSION

Producer(s) - Consumer(s) communication models are necessary to put the right data at the right place, to the right person, at the right time (Dieterle, et al., 1994). The solution is to realise real time multipeer connections between the producer AP(s) and the consumer AP(s). We have presented new services which may be implemented on heterogeneous architectures. Characteristics are associated with multipeer connections, and not with individual service requests. Future work could concern the definition of dynamic QoS parameters associated with each service. The main results achieved so far are: a portable platform distributed over an Ethernet network, called APPLI-BUS and based on Modbus and MMS; and a performance evaluation tool (Lecuivre and Song, 1995), currently under test, which enables the user to set the parameters of this platform according to the wished QoS requirement parameters. The protocols described in this paper obey different communication models. As a conclusion, we can say that the description and standardisation of real time services, and of the inherent real time communication protocols interfacing levels 1 and 2 of a CIM architecture, would help in finding solutions to actual real time requirements.

BIBLIOGRAPHY

Arcos P.J., B. Dupre, (1993). "Posix et le temps reel", Actes de la conference Real-Time Systems'93, Session 1, pages 5-15, Paris.

Bernstein P.A., (1993). "Middleware: An Architecture for Distributed System Services", Digital Equipment Corporation, Cambridge Research Lab.

Birrel A., G. Nelson, S. Owicki, E. Wobber, (1994). "Network Objects", Digital Equipment Corporation, Systems Research Center, Palo Alto, California.

CCE-CNMA Consortium, (1994). "Introduction to the CIME Computing Environment - A platform for the creation of industrial applications", ESPRIT Project 7096 CIME Computing Environment integrating a Communications Network for Manufacturing Applications (CCE-CNMA), Ref 7096.94.08/F2.PD.

Dakroury Y., (1990). "Specification et validation d'un protocole de messagerie multi-serveurs pour l'environnement MMS", These de doctorat ENSM, Universite de Nantes.

Dieterle W., H.D. Kochs, E. Dittmar, (1994). "Communication Architectures for Distributed Computer Control Systems", Proceedings of the IFAC Distributed Computer Control Systems Workshop, Toledo, Spain, pages 13-18.

Elloy J.P., R. Ricordel, (1995). "Modelisation et verification du comportement de services de synchronisation des applications Temps Reel reparties", Actes des conferences RTS'95, pages 67-82, Paris.

ISO TR 12145, (1992). "User Requirements on Time Critical Communication Architectures", Technical Report, ISO TC184 SC5 WG2 Time Critical Communication Architecture Reporter's Group.

ISO/IEC JTC1, (1995). "QoS Basic Framework", ISO/IEC JTC1/SC21 Open Systems Interconnection, Data Management and Open Distributed Processing.

Kopetz H., (1991). "Event-Triggered versus Time-Triggered Real-Time Systems", Lecture Notes in Computer Science, vol 563, Springer Verlag.

Lecuivre J., Y.Q. Song, (1995). "A framework for validating distributed real time applications by performance evaluation of communication profiles", accepted at WFCS'95: IEEE International Workshop on Factory Communication Systems, Lausanne.

Mowbray T.J., (1994). "Choosing between OLE/COM and CORBA", Object Magazine, pp 39-46.

Song Y.Q., P. Lorentz, F. Simonot, J.P. Thomesse, (1991). "Multipeer/Multicast Protocols for Time-Critical Communication", In Proceedings of the Multipeer/Multicast Workshop, Orlando, Florida, USA, August 1991.

Thomesse J.P., (1993). "Time and industrial local area networks", 7th Annual European Computer Conference on Computer Design, Manufacturing and Production (COMPEURO'93), pages 365-374, Paris-Evry (France).

Vega Saenz L., J.P. Thomesse, (1995). "Temporal properties in distributed real time applications: Cooperation models and communication types", published in these proceedings.


Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

A Distributed Real-Time Environment for the Transaction Processing of CIM Applications

Y. DAKROURY*, and J.P. ELLOY**

* Computer and Systems Eng. Dept., Faculty of Engineering, Ain Shams University, Abbassia, Cairo, Egypt.

** Laboratoire d'Automatique de Nantes, Ecole Centrale de Nantes, URA CNRS 823, 44072 Nantes cedex 03, France.

Abstract. The paper presents a distributed transaction processing environment that permits the management and the coordination of distributed transactions for real-time Computer Integrated Manufacturing (CIM) applications. The environment has a hierarchical architecture which consists of four layers. The first layer, which represents the nucleus of this architecture, is a Real-Time Operating System (RTOS). The RTOS provides the services necessary to enable the applications to meet their timing and throughput constraints. The Manufacturing Message Specification (MMS), supported by the ISO as an application layer standard, constitutes the second layer, to support messaging communications to and from programmable devices in a CIM environment. The MMS standard introduces a complete set of services for different types of programmable devices, as well as the communication protocol needed to support the transfer of data and service parameters. The third layer is composed of a Multi-Server Concept (MSC) specification which provides coordination and management levels to execute a service request by a client over an MMS object distributed over different remote servers. A Transaction Processing Facility (TPF) constitutes the fourth layer of the proposed environment. The TPF guarantees the Atomicity, Consistency, Isolation and Durability (ACID) properties, defined by the ISO, of the distributed transactions, by specifying the communication protocol and services needed to meet these properties.

Keywords: Real-Time, Distributed System, CIM, Factory Automation, Transaction Processing, ISO/TP, Manufacturing Message Specification, MMS, Manufacturing Automation Protocol, MAP, Computer Network, Protocol, ISO/OSI.

1. Introduction

Open Distributed Processing (ODP) is being standardized by the ISO. The ODP prescribes a very generic architecture for the design of distributed systems. It may be viewed as a meta-standard to coordinate and guide the development of domain-specific ODP standards. Specific fields of ODP applications include advanced telecommunications architectures such as intelligent networks, automated manufacturing systems, office systems, management information systems, etc.

One of the major fields of an ODP system is CIM applications. So, a distributed real-time transaction processing environment is proposed here to permit the management and the coordination of distributed transactions for real-time CIM applications. This environment has a hierarchical architecture which consists of four layers:

1. Real-Time Operating System (RTOS).

2. Manufacturing Message Specification (MMS).

3. Multi-Server Concept (MSC) for the manufacturing message specification.

4. Transaction Processing Facility (TPF).

For distributed CIM applications, each layer of this architecture can communicate with its peer layer in another node by using a certain specific protocol, as shown in figure 1.

Figure 1: A Hierarchical Architecture of the Distributed Real-Time Transaction Processing Environment.

In the proposed environment, each layer offers certain services to the outer layers, shielding those layers from the details of how the offered services are actually implemented. The offered services enable the realization of the functions that should be performed by each layer. The interactions between each layer and its adjacent layer are defined by the operations and the services the inner layer offers to the outer one.

The next section presents the components of each layer in the hierarchical environment, its functional architecture, and its interface with the adjacent layers. Both the specification method in terms of finite state automata and the validation method in terms of the Calculus of Communicating Systems (CCS) are introduced in section three. Finally, a general conclusion is presented in section four.

2. Hierarchical Layers Presentation

2.1. Real-Time Operating System (RTOS)

Real-time systems are defined as those systems in which the correctness of the system depends not only on the logical result of computation, but also on the time at which the results are produced [1]. Examples of real-time systems are process control systems, communications systems, command and control systems, and CIM systems. In a CIM application, the response times of some production functions are important; moreover, it is absolutely imperative that responses occur within the specified deadline. Also, in a distributed CIM system the data acquisition and communications systems have to ensure throughput as well. To obtain such behavior, a real-time operating system is needed [2], responsible for the following functions:

- support for scheduling of real-time processes.
- preemptive scheduling.
- guaranteed interrupt response.
- interprocess communication.
- high speed data acquisition.
- I/O support.
- user control of system resources.
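As an illustrative sketch of the first two functions, priority scheduling with preemption can be reduced to a ready queue ordered by priority. The task names, priority levels, and the `ReadyQueue` API below are invented for illustration; they are not the interface of any RTOS named in this section.

```python
import heapq

class ReadyQueue:
    """Hypothetical fixed-priority, preemptive ready queue of the kind an
    RTOS kernel maintains. Lower number = higher priority."""

    def __init__(self):
        self._heap = []   # entries: (priority, seq, task_name)
        self._seq = 0     # tie-breaker keeps FIFO order within one priority

    def make_ready(self, task, priority):
        heapq.heappush(self._heap, (priority, self._seq, task))
        self._seq += 1

    def dispatch(self, running=None):
        """Return the (priority, task) to run next; preempt `running`
        only if a strictly higher-priority task is ready."""
        if not self._heap:
            return running
        top_prio, _, _ = self._heap[0]
        if running is not None and running[0] <= top_prio:
            return running                        # current task keeps the CPU
        prio, _, task = heapq.heappop(self._heap)
        if running is not None:
            self.make_ready(running[1], running[0])  # preempted task re-queued
        return (prio, task)

rq = ReadyQueue()
rq.make_ready("data_acquisition", 2)
running = rq.dispatch()                # -> (2, "data_acquisition")
rq.make_ready("interrupt_handler", 0)
running = rq.dispatch(running)         # higher priority preempts
print(running[1])                      # interrupt_handler
```

The guaranteed-interrupt-response requirement is what the priority-0 task models here: it displaces whatever is running within one dispatch decision.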

The RTOS represents the nucleus of this architecture and can be classified in one of six general categories presented in [3] that are suited for real-time CIM applications. These categories can have small, fast, proprietary real-time kernels like the VxWorks [4] and VRTX [5,6] systems, or add a real-time extension to existing commercial operating systems like the RT-UNIX [7] or CHORUS [8,9] systems. The choice of an adequate real-time operating system is application dependent. The proprietary real-time kernels are suitable for small applications, while for large applications the time-sharing real-time operating systems are sometimes preferred. However, for any real-time CIM application, the real-time operating system should respect the relevant time constraints.

Distributed real-time systems design raises new theoretical and practical issues [10], beyond those in centralized systems. Thus, the proposed distributed RTOS is characterized by the fact that all resources in the system are managed by a distributed kernel. This kernel consists of several instances of the same set of processes; each instance is allocated to a distinct node, and every decision about the system behavior is taken through the cooperation among several instances or even all. The kernel manages the local processes context, implements the scheduling policy, and executes the primitives of the classical RTOS.

2.2. Manufacturing Message Specification (MMS)

A CIM system is often composed of many manufacturing, control, and production systems distributed over a certain geographical space and connected together via a computer network. The problem of communicating among different types of devices characterized by different interfaces and different messages has imposed a standardization project to define the communications profile in a CIM environment. The Manufacturing Automation Protocol (MAP) [11,12] has been developed to be a standard communication protocol for CIM systems [13,14,15,16]. Then, the MMS standard [17,18] has been designed by the ISO as an application service element within the application layer of the MAP protocol. This application layer standard supports the messaging communication to and from programmable devices in a CIM environment. Moreover, the MMS services and objects define specific extensions to permit the exchange and the storage of specific data related to different types of machines: robots, programmable logic controllers, numerical control machines, etc. The services and communication protocol associated with each type of these machines are defined in the corresponding companion standard [19,20,21,22]. The MMS standard implements a comprehensive direct control of these devices through specialized communication capabilities such as read/write variables and start/stop/resume programs.

The MMS services define the externally visible behavior of an equipment modeled by an entity called a Virtual Manufacturing Device (VMD). A VMD is an abstract representation of a specific set of resources and functionality at the real manufacturing device, and a mapping of this abstract representation to the physical and functional aspects of the real manufacturing device. To satisfy this representation, the VMD entity contains a set of MMS objects and an executive function that permits the execution of the requested services. The MMS protocol controls the message exchange between a VMD client and a VMD server. In the MMS standard, for reasons of security, abstraction, and maintenance, one MMS object is encapsulated in only one protected system, called a server. A server manages data objects and executes operations requested by a client. An application process client makes use of the VMD capabilities for some particular purpose by invoking operations in terms of MMS service requests to manipulate the data managed by a server. This system satisfies the client/server model which is adopted by the MMS standard. The MMS client interacts with the corresponding MMS server by using the MMS communication protocol. The functional architecture of the client/server MMS model is presented in figure 2.

Figure 2: MMS Client/Server Functional Architecture.
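The confirmed-service exchange between the client and server protocol machines can be sketched as below. The Read/Write service names follow MMS usage, but the PDU tuples, the variable names, and both classes are invented for illustration; they do not reproduce the actual MMS protocol encoding or VMD object model.

```python
class VMDServer:
    """Hypothetical server side: protocol machine plus an executive function
    over a set of named variables (the protected MMS objects)."""

    def __init__(self, variables):
        self.variables = dict(variables)

    def indication(self, pdu):
        """Receive a service request PDU; '+' marks a positive response."""
        service, name, *args = pdu
        if service == "Read":
            return ("Read", "+", self.variables[name])
        if service == "Write":
            self.variables[name] = args[0]
            return ("Write", "+")
        return (service, "-")            # negative response: unknown service

class MMSClient:
    """Hypothetical client side: builds the request and waits for the
    confirmation coming back from the server protocol machine."""

    def __init__(self, server):
        self.server = server

    def request(self, service, name, *args):
        return self.server.indication((service, name) + args)

vmd = VMDServer({"spindle_speed": 1200})
client = MMSClient(vmd)
print(client.request("Read", "spindle_speed"))     # ('Read', '+', 1200)
client.request("Write", "spindle_speed", 900)
print(client.request("Read", "spindle_speed"))     # ('Read', '+', 900)
```

The point of the encapsulation is visible even in this toy: the client never touches `vmd.variables` directly; every access goes through a service request to the one server protecting the object.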

The MMS specification constitutes the second hierarchical layer of the proposed environment architecture. Two peer layers, playing the roles of a client and a server, communicate using the MMS protocol. On the client side, this layer contains the MMS client protocol machine that is responsible for transmitting the client MMS service requests and receiving the MMS service responses. On the server side, this layer contains the MMS server protocol machine that is responsible for receiving the MMS service requests and transmitting the MMS service responses. Furthermore, the MMS server layer contains the realization procedures of the MMS services included in the executive function entity of the VMD model, as well as the definition of the different classes of the MMS objects. The time constraints for executing an MMS service, as well as the time interval in which the value of a real object in the outside world follows the value of an MMS object, are guaranteed by the scheduler of the real-time operating system.

2.3. MMS Multi-Server Concept (MSC)

The MMS client/server model allows a client to access data objects existing in their entirety at a server. This constraint limits the cooperation facilities, especially in the case of the implementation of fault-tolerant or flexible applications. Therefore, an extended client/multi-server model is presented in [23,24,25,26] which specifies the access of MMS data objects distributed over several servers. So, the access is done either locally and/or remotely according to the locality of the object's components. The extended multi-server model implies the cooperation of several MMS servers in order to execute a service requested by a client. This extension permits an MMS object to be distributed over many MMS servers. This feature added to the MMS specification can extend the scope of applications of this standard and can be of great industrial interest.

The third hierarchical level of the architecture is the MSC specification. This layer is responsible for the coordination and the management of the execution of the service requested by the client over a distributed MMS object. A client issues a service request and waits for a positive or a negative confirmation indicating the execution or the rejection of the requested service. Upon receipt of a service indication by the server, called the principal server, it issues a set of remote and/or local service requests, according to the locality of the object's components, to every MMS server, called a secondary server, having one or more components of the distributed object. Each component of the MMS object can itself be an object composed of many other components. This arborescent hierarchy can be extended to any number of levels and can be represented by a tree structure. Each branch of this tree represents the transmission of a service request and the reception of a service confirmation at one end of the branch, and the reception of a service indication and the transmission of a service response at the other end of the branch. The services transmitted over the tree branches, called sub-services, are ordinary MMS services. So, the execution of these services can be realized by issuing calls to the service realization procedures of the executive function entity existing in the MMS layer. Also, the communications between this layer and its remote peers may be supported by the MMS communication protocol. The basic principle behind this extended architecture, presented in figure 3, is to preserve the same client/server protocol model specified by the MMS standard without any modifications. This criterion enables the multi-server protocol to use the same protocol data units defined and presented by the MMS specification. Thus, the interactions between two remote MMS servers occur via the standard MMS services. Therefore, the modifications are carried over to the procedures of service realization. The new entity added between the server protocol machine and its executive function, called the supervisor, coordinates and manages the indication of the service issued by the client and realizes the corresponding remote service requests necessary to execute the initial service.
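The recursive fan-out over a distributed object's component tree can be sketched as follows. The object layout, the server names, and the single `read` sub-service are hypothetical; the MSC papers cited above define many more services than this.

```python
class Component:
    """One component of a distributed MMS object; `server` names the MMS
    server holding it (illustrative, not an MMS-defined structure)."""

    def __init__(self, server, value=None, children=()):
        self.server = server
        self.value = value
        self.children = list(children)

def read_distributed(component):
    """Execute a read over the component tree: a leaf answers with a plain
    local read; an inner node issues an ordinary sub-service request down
    every branch and collects the confirmations coming back up."""
    if not component.children:
        return component.value
    return [read_distributed(child) for child in component.children]

# Root held by principal server S1; components spread over S2, S3, S4.
obj = Component("S1", children=[
    Component("S2", value=10),
    Component("S3", children=[Component("S3", value=20),
                              Component("S4", value=30)]),
])
print(read_distributed(obj))    # [10, [20, 30]]
```

Each recursive call corresponds to one branch of the tree described in the text: a request/confirmation at the caller's end, an indication/response at the callee's end.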

Figure 3: Multi-Server Functional Architecture.

2.4. Transaction Processing Facility (TPF)

The great advantage of the multi-server model is the ability to execute a completely distributed transaction in a CIM environment. Transactions are defined as a set of related operations characterized by four properties: atomicity, consistency, isolation, and durability. The atomicity property states that either all transaction operations are executed or none. If an operation aborts, the entire transaction should fail in order to recover the precedent state. The consistency property states that the set of transaction operations is executed in conformance with the application semantics. The isolation property states that partial results of transaction operations cannot be accessed except by the transaction operations. The durability property states that the effects of the transaction operations should not be modified as a result of any failure. The transaction operations consist of a set of MMS services encapsulated together to perform a specific task in the production or manufacturing operations. It is the role of the client to build the transaction and select the appropriate MMS services according to the objective of the transaction. The MMS specification enables a real open system to adopt both the client role and the server role during the lifetime of an application process. So, whenever a client asks for a transaction preparation, it first asks all the local servers if they wish to prepare the transaction, then it asks the remote servers. This conforms with the features of the multi-server concept and supports nested transactions. A nested transaction is composed of some number of sub-transactions, existing at the secondary servers, and is modeled by a tree of transaction names. The leaves of this tree are called accesses because they are the transactions that directly access data.
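The nested-transaction tree described above can be sketched as a tree of transaction names whose leaves are the accesses. The names and tree shape below are invented for illustration.

```python
class Transaction:
    """Hypothetical node in a nested-transaction tree: sub-transactions
    live at secondary servers; only the leaves touch data directly."""

    def __init__(self, name, subtransactions=()):
        self.name = name
        self.subs = list(subtransactions)

    def accesses(self):
        """Collect the leaves of the tree: the transactions that are
        called 'accesses' because they directly access data."""
        if not self.subs:
            return [self.name]
        found = []
        for sub in self.subs:
            found.extend(sub.accesses())
        return found

top = Transaction("T", [
    Transaction("T.1", [Transaction("T.1.1"), Transaction("T.1.2")]),
    Transaction("T.2"),
])
print(top.accesses())   # ['T.1.1', 'T.1.2', 'T.2']
```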


A TPF constitutes the fourth layer of the proposed environment architecture. The TPF functional architecture [27], presented in figure 4, introduces the interactions between the different entities that compose this facility. These entities are:

- transaction manager entity.
- communications manager entity.
- recovery manager entity.

The communications manager entity prepares the Protocol Data Units (PDUs) of the inter-server messages according to their specific format and then forwards these messages to the communication media for transmission. It also receives the PDUs of the incoming messages. The messages may carry either TP services or MMS services. In addition, the communications manager entity keeps a list of all the MMS servers that are involved in a particular transaction. This information is supplied to the transaction manager for use during commit or abort processing.

The transaction manager entity coordinates the initiation, commit, and abort of local and distributed transactions. It uses the two-phase commitment protocol for the transaction validation and to guarantee the ACID properties. In the first phase, the transaction manager entity of the client asks each of the participating servers, called subordinates, if they are willing to prepare. If any subordinate server replies negatively, i.e. it wants to abort the transaction, the transaction


Figure 4: Distributed Transaction Processing Facility Architecture.


is aborted. By answering positively, the subordinate server gives up its right to abort the transaction and must be prepared to commit or abort the transaction as specified by the client transaction manager entity. If all subordinate servers respond affirmatively, the transaction manager entity of the client transmits a commit message to every participating server. Each subordinate server must respond to the commit message so that the client transaction manager entity will be informed about the transaction processing status. Finally, when a transaction aborts, each transaction manager entity notifies the participating servers that the transaction has been aborted, to stop all processing on behalf of this transaction.
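The two-phase commitment round described above can be sketched as follows. The vote-based subordinate model is a deliberate simplification (real subordinates would consult their recovery manager and logs), and all names are illustrative.

```python
def two_phase_commit(subordinates):
    """Client transaction manager. Phase 1: ask every subordinate to
    prepare. Phase 2: commit only if all voted yes; otherwise roll back
    every subordinate that had already prepared. Returns the outcome."""
    votes = {name: sub.prepare() for name, sub in subordinates.items()}
    if all(votes.values()):
        for sub in subordinates.values():
            sub.commit()
        return "committed"
    for name, sub in subordinates.items():
        if votes[name]:               # only prepared servers need to undo
            sub.rollback()
    return "aborted"

class Subordinate:
    """Hypothetical participating server, modeled by a fixed vote."""

    def __init__(self, willing):
        self.willing = willing
        self.state = "active"

    def prepare(self):
        # A yes vote gives up the right to abort unilaterally.
        self.state = "prepared" if self.willing else "aborted"
        return self.willing

    def commit(self):
        self.state = "committed"

    def rollback(self):
        self.state = "rolled_back"

servers = {"S1": Subordinate(True), "S2": Subordinate(True)}
print(two_phase_commit(servers))          # committed
servers = {"S1": Subordinate(True), "S2": Subordinate(False)}
print(two_phase_commit(servers))          # aborted
print(servers["S1"].state)                # rolled_back
```

The second run shows the atomicity guarantee: one negative vote forces every prepared subordinate back to its precedent state.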

The recovery manager entity is responsible for transaction abort, server recovery, node recovery, and communication media failure recovery. When a transaction aborts, each transaction manager entity of the participating servers tells its local recovery manager entity to undo the effects of this transaction.

Each of the previous entities is responsible for implementing certain specific functions and interacts with the other entities to guarantee the coherent execution of the MMS transactions. The transaction processing facility uses a communication protocol that is derived from the ISO Transaction Processing (TP) specification [28,29,30]. The ISO TP standard is an application layer service element of the MAP architecture that provides services to support transaction processing and establishes a framework for coordination across multiple separate open systems, to communicate with its peers in a network environment. The TP standard offers the application of the two-phase commitment protocol to guarantee the atomicity of the transaction, provides recovery support in case of failures or if part of the transaction fails while the rest continues to operate, and assures the consistency of the MMS distributed data objects.

3. Specification and Validation

A complete specification of the entities composing the MMS layer, the MSC layer, and the TPF layer of the proposed environment is presented in [24], [25], and [27] respectively. For each layer, the specification is given in terms of a finite state automaton which represents all states, the corresponding transition conditions, and the actions that should be taken, without imposing any activation condition. This technique has permitted the use of the CCS as a validation tool. In fact, the CCS presents the behavior of a finite state machine in terms of the set of externally visible synchronization transitions; this black box model leaves the transition conditions invisible with respect to the model itself. This modeling technique assures the validation of the communicating automata assuming that all possible transitions can be activated. This validation technique is more robust than any other validation technique derived from a detailed specification. On the other hand, an automaton declared not valid by this technique can still have a correct behavior.

4. Conclusions

A proposal of a distributed transaction processing environment that permits the execution of CIM distributed transactions on a real-time basis is presented. This environment has a hierarchical architecture and consists of four layers. Each layer interacts with its adjacent layers through an interface represented by the services provided by the inner layer to the outer layer. This hierarchical architecture simplifies the design as well as the implementation of the proposed environment. The layers, from the inner one to the outer one, are supported by an RTOS, the MMS specification, the MSC specification, and a TPF specification respectively. With its RTOS and the proposed extensions of the MMS and TP standards, the introduced environment can have a great industrial interest and support a wide range of distributed manufacturing applications.

5. References

[1]. Ramamritham K. and Stankovic J.A., "Scheduling Algorithms and Operating Systems Support for Real-Time Systems", Proceedings of the IEEE, vol. 82, no. 1, January 1994.


[2]. Aslanian R., "Real-Time Operating System", Computer Standards & Interfaces, vol. 6, no. 1, 1987.
[3]. Furht B., Grostick D., Gluch D., Parker J., and Pastucha W., "Issues in the Design of an Industry Operating System for Time-Critical Applications", Proceedings of Real-Time 90, Stuttgart, Germany, June 1990.
[4]. VxWorks Programmer's Guide, Release 5.0, Wind River Systems, 1992.
[5]. Ready J., "VRTX: A real-time operating system for embedded microprocessor applications", IEEE Micro, August 1986.
[6]. VRTX32/68020, Versatile Real-Time Executive for the MC68020 Microprocessor, User's Guide, Software Release 1, Ready Systems, April 1987.
[7]. Furht B., Grostick D., Gluch D., Rabbat G., Parker J., and McRoberts M., "Real-Time Unix Systems: Design and Application Guide", Kluwer, 1993.
[8]. Herrmann F., "Chorus: un environnement pour le developpement et l'execution d'applications reparties", Technique et Science Informatiques, vol. 6, no. 2, 1987.
[9]. Chorus Kernel V3 R4.0 Programmer's Reference Manual, Tech. Rep. CS/TR-91-71, Chorus Systems, September 1991.
[10]. Sha L. and Sathaye S.S., "A Systematic Approach to Designing Distributed Real-Time Systems", IEEE Computer, September 1993.
[11]. Hollingum J., "The MAP Report", IFS Publications Ltd. and Springer-Verlag, 1986.
[12]. Mattews R.S., Muralidhar K.H., and Sparks S., "MAP 2.1 Conformance Testing Tools", IEEE Trans. on Software Engineering, vol. 4, no. 3, March 1988.
[13]. Mizlo J.J., "MAP Pilot Installation: An Actual Implementation", 5th Annual International Conference on Computer Communications, 1986.
[14]. Minet P., "MAP: un reseau local pour un environnement industriel automatise", Technique et Science Informatiques, vol. 6, no. 2, 1987.
[15]. Minet P., Rollin P., and Sedillot S., "Le reseau MAP", Hermes, Paris, 1989.
[16]. Dowyer J. and Ioannou A., "Les reseaux locaux industriels MAP et TOP", Masson, Paris, 1991.
[17]. ISO 9506/1, Industrial automation systems, Manufacturing Message Specification, Part 1: Service definition, 1990.
[18]. ISO 9506/2, Industrial automation systems, Manufacturing Message Specification, Part 2: Protocol definition, 1990.
[19]. ISO 9506/3, Industrial automation systems, Manufacturing Message Specification, Part 3: Companion Standard for Robots, 1991.
[20]. ISO 9506/4, Industrial automation systems, Manufacturing Message Specification, Part 4: Companion Standard for Numerical Control, 1991.
[21]. ISO 9506/5, Industrial automation systems, Manufacturing Message Specification, Part 5: Companion Standard for Programmable Controllers, 1991.
[22]. ISO 9506/6, Industrial automation systems, Manufacturing Message Specification, Part 6: Companion Standard for Process Control, 1993.
[23]. Dakroury Y. and Elloy J.P., "A new Multi-Server Concept for the MMS Environment", 9th IFAC Workshop on Distributed Computer Control Systems, Tokyo, Japan, 26-28 September 1989.
[24]. Dakroury Y., "Specification et validation d'un protocole de messagerie multi-serveur pour l'environnement MMS", Ph.D. thesis, ENSM Nantes, Nantes University, France, 1990.
[25]. Dakroury Y., Elloy J.P., and Ricordel R., "Design and validation of a multi-server MMS protocol", IEEE Conference on Communications, ICC'95, Seattle, Washington (USA), June 18-22, 1995.
[26]. Dakroury Y., Elloy J.P., and Ricordel R., "Specification of a secured multi-server MMS protocol", IEEE Conference on Distributed Computing Systems, ICDCS'95, Vancouver (Canada), May 30-June 2, 1995.
[27]. Dakroury Y. and Elloy J.P., "A Distributed Transaction Processing Facility for the MMS Specification", IEEE Symposium on Computers and Communications, Alexandria (Egypt), June 27-29, 1995.
[28]. ISO 10026/1, Distributed Transaction Processing, Part 1: OSI TP Model, 1992.
[29]. ISO 10026/2, Distributed Transaction Processing, Part 2: OSI TP Services, 1992.
[30]. ISO 10026/3, Distributed Transaction Processing, Part 3: OSI TP Protocol Specification, 1992.


Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

CONTROL DESIGN FOR AUTOLAB USING THE REACTIVE PARADIGM

S. BAJAJ*, A. SOWMYA†, S. RAMESH* and N. AHMED‡

* Indian Institute of Technology, Bombay, Department of Computer Science and Engineering, Powai, Bombay 400 076, India
† University of New South Wales, School of Computer Science and Engineering, Kensington, NSW 2052, Australia
‡ James Cook University of North Queensland, Department of Computer Science, Townsville, Queensland 4811, Australia

Abstract. Autolab is a flexible automation system which automates common procedures for sample preparation and analysis in a chemistry laboratory. This paper reports on our efforts to use the latest tools in the reactive languages field to model Autolab and to design a distributed control system for Autolab. The language of choice is Esterel, and Autolab is currently under development as a prototype.

Keywords. Automation, Autolab, Esterel, Formal specification, Reactive languages.

1. INTRODUCTION

Autolab is a flexible automation system for sample preparation and analysis in a chemistry laboratory, and automates common procedures used in mining and metallurgical industries, pharmaceuticals and chemical plants. Automation is proposed to be achieved by utilizing a centrally placed robot arm, surrounded by a series of laboratory stations containing analytical instruments and other chemistry hardware such as balances, mixers, dispensers and centrifuges [Ahmed and Sowmya, 1994]. Object handling at each station and sample movement between stations is performed by a robot arm, using user-defined procedures. The results are fed into a sample database which may be accessed by operators on the factory floor and by a quality assurance team. The advantages of flexible automation of a chemistry laboratory include high quality of results, increased productivity, creation of more challenging work for technical staff and reduced risk of exposure to hazardous chemicals. Further, flexible automation is able to meet needs that change over time due to the introduction of new products and new analysis techniques. There are limited commercial versions of such automated systems, and Autolab has been proposed by us as an extension and refinement of such systems. In this paper, our interest in Autolab is in its distributed nature and the local reactivity of its distributed components. This work has focussed on using the latest tools in the reactive languages field to model Autolab and to design a distributed control system for Autolab. It is currently under development as a prototype. The reactive language of choice has been Esterel due to Berry and co-workers [Berry and Gonthier, 1988; Boussinot and de Simone, 1991], and this paper reports on this aspect of the research.

In section 2, we briefly discuss reactive systems and the synchronous approach to specifying and programming them. In section 3, we give a brief description of relevant Autolab features. In section 4, we describe Esterel constructs of relevance to the specification and design task. In section 5, we sketch a control design for Autolab using Esterel and end with concluding remarks in section 6. We must stress that this is ongoing work, with the first phase of design complete and implementation just about to begin.

2. REACTIVE SYSTEMS AND THE SYNCHRONOUS APPROACH

Traditional transformational systems take some input from the environment, process it and output the result to the environment before terminating. On the other hand, reactive systems maintain permanent interaction with their environment, continuously responding to inputs from their environment by emitting outputs to it. The class of reactive systems is diverse, ranging from microwave ovens and digital watches to man-machine interfaces for software systems, computer operating systems and complex industrial plants. Most applications of reactive systems are safety critical. It is essential, therefore, that these systems are thoroughly specified and verified before they are designed and become operational.

Page 167: Distributed computer control systems 1995 (DCCS ¹95)

Until recently, assembly language programming was used to implement reactive systems. Recently, however, several high level software constructs have evolved to specify these systems. Concurrent programming languages are one such development, together with their associated verification and simulation techniques.

Reactive languages belong to the class of concurrent languages and may be classified into synchronous and asynchronous programming languages [Benveniste and Berry, 1991]. The synchronous approach is based on the following characteristics:

• they are based on the zero-delay paradigm, i.e. the system takes no time to respond to its environment or its other subsystems. Such an approach permits easy timing analysis of the system since there is no need to keep track of communication delays.

• they support logical concurrency rather than physical concurrency. By physical concurrency, we mean that the executable code consists of concurrent tasks scheduled by the operating system. In the synchronous languages, physical concurrency is removed at compile time, and the concurrency at the specification level is only logical in nature. Thus there are no hidden runtime overheads due to scheduling.

• they are deterministic, so that analysis is simpler.

Esterel, Lustre, Signal and Statecharts are examples of synchronous languages.
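To make the zero-delay and determinism points concrete, here is a minimal sketch in Python (not in any of the synchronous languages themselves; the signal names are invented for illustration): at each instant the outputs are computed from the inputs of that same instant, with no modelled delay, and the same input trace always yields the same output trace.

```python
# A toy model of a synchronous reaction function. Each instant's
# outputs depend only on that instant's inputs (zero-delay paradigm).

def reaction(inputs):
    """One synchronous instant: outputs are produced in the same instant."""
    outputs = set()
    if "SECOND" in inputs:
        outputs.add("UPDATE_DISPLAY")   # react within the same instant
    if "BUTTON" in inputs and "SECOND" in inputs:
        outputs.add("BEEP")             # simultaneous signals are seen together
    return outputs

def run(input_trace):
    """Deterministic: identical input traces yield identical output traces."""
    return [reaction(instant) for instant in input_trace]

trace = [{"SECOND"}, {"BUTTON", "SECOND"}, set()]
print(run(trace))
```

Because there is no scheduler and no communication delay in the model, timing analysis reduces to counting instants.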

3. AUTOLAB

Autolab comprises a robot, a controller and n labstations (Figure 1). Each of these entities is an embedded controller with its own processor, which performs a set of local tasks and communicates with the other entities. Samples arrive continuously over time at labstation1, and each sample is tagged with a list of predetermined tests that it has to undergo. Let us call these tests tasks; each labstation can perform exactly one task at a time, and this task is unique to that labstation. Labstation1 is the arrival station, and its function is to queue the arriving samples. n queues are maintained, and an arriving sample is put into queue i if its first task needs labstation i. Subsequent tests are logged into a database of the controller.

The controller checks the status of the labstations. If labstation i is free, the queue corresponding to it is checked, and the controller directs the robot to pick a sample from labstation1 (the first sample in queue i) and place it at labstation i. After the sample undergoes the desired test at labstation i, the next test that the sample should undergo is retrieved from the database, and the controller is informed of it. Thus, whenever the controller finds a labstation free, it looks not only at the queue for that labstation, but also


at other samples waiting to enter this labstation from other labstations. After deciding on the sample to pick, the controller directs the robot to move that sample from a particular labstation to another. A labstation does not accept a new sample until the previous sample has been removed from it.
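The selection rule just described can be sketched as follows. This is a hypothetical illustration of the dispatch logic; the data structures and names are ours, not taken from the Autolab implementation.

```python
# Dispatch sketch: when labstation i is free, the controller considers
# both queue i at the arrival station and samples finished elsewhere
# whose next task needs labstation i.
from collections import deque

def pick_sample(i, queues, waiting):
    """Return (source_station, sample) for free labstation i, or None.

    queues[i]  -- new arrivals queued at labstation1 whose first task needs i
    waiting    -- maps labstation j -> (sample, next_station) for samples
                  that finished at j and wait to be moved on
    """
    for j, (sample, nxt) in waiting.items():
        if nxt == i:               # a finished sample elsewhere wants station i
            del waiting[j]
            return (j, sample)
    if queues[i]:                  # otherwise serve the arrival queue
        return (1, queues[i].popleft())
    return None
```

A real controller would also encode the rule that a labstation accepts no new sample until the previous one has been removed; that bookkeeping is omitted here.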

Clearly, a number of concurrent activities take place in this model, and the communication has both synchronous and asynchronous flavours. We also have a distributed system with distributed processors and distributed control, whose distributed components are themselves reactive, reacting to the arrival of samples and the completion of tasks. The control design for such a system is clearly complex, and must take into account the reactivity and concurrency of the components and their communication with each other. We propose the use of formal methods to specify and design such control, and choose Esterel as our language.

4. THE ESTEREL LANGUAGE

The basic Esterel model (Boussinot and de Simone, 1991) is a reactive model in which communicating systems interact continuously with their environment. When activated by an input event, a reactive system reacts by producing an output event. The life of a reactive system is divided into instants, which are the moments when it reacts. Esterel is guided by the perfect synchrony hypothesis, according to which all system reactions are instantaneous, so that activations and productions of output are synchronous, as if the programs were executed on an infinitely fast machine. Esterel assumes that reactions are atomic, i.e., the system cannot be activated while still reacting to the current activation. Esterel has a parallelism operator (written ||). All communication between parallel entities is achieved using signals, which can be emitted, can be tested for presence and can carry a value. This characteristic allows programs to make "instantaneous decisions". Finally, Esterel programs are deterministic, and produce identical output sequences when fed with identical input sequences.

The main constructs of Esterel include emitting a signal (emit), testing whether a signal is present (present), awaiting a signal (await), sustaining a signal, i.e., emitting it continuously (sustain), and the watchdog statement do ... watching event, which runs its body only until the next reaction in which the event is present. These constructs are eminently suited to the task of specifying reactive control.
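As an illustration of the watchdog semantics, emulated here in Python (this is not Esterel code, and the step/trace encoding is our own): the body runs step by step across instants and is abandoned at the first instant in which the watched event is present.

```python
# A toy emulation of Esterel's `do <body> watching <event>`:
# one body step is run per instant until the event occurs.

def do_watching(body_steps, event, trace):
    """Run one body step per instant; preempt when `event` is present.

    body_steps -- list of zero-argument callables, one per instant
    trace      -- list of input-signal sets, one per instant
    Returns the results of the steps that actually ran.
    """
    done = []
    for step, inputs in zip(body_steps, trace):
        if event in inputs:        # event present: body is preempted
            break
        done.append(step())
    return done

steps = [lambda: "a", lambda: "b", lambda: "c", lambda: "d"]
trace = [set(), set(), {"STOP"}, set()]
print(do_watching(steps, "STOP", trace))
```

In the papers' code this construct is used to sustain a status signal (e.g. a "labstation free" flag) until a kill signal arrives.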

5. CONTROL DESIGN FOR AUTOLAB

The robot, controller and labstations within Autolab are embedded controllers with individual processors.

1. Controller: The controller has a variety of jobs to perform:

• it must check the status of labstations and find the free ones


• when a labstation completes its test on one sample, the controller must decide where the sample goes next

• among the samples waiting to enter a labstation, the controller must pick one and, if the labstation is free, direct the robot to move the sample to it.

Of these tasks, the status of each labstation may be checked in parallel. Also, the processing of samples at labstations and the finding of the next labstation for a sample may be done in parallel. Thus we decide to have a set of controllers running in parallel, each dedicated to bringing a sample into a particular labstation. Each controller in turn has two distinct tasks to perform:

• if a labstation is free, decide which sample goes there

• otherwise, wait till the labstation is free, then find the next labstation where the sample should go, and contact the corresponding controller.

We call these two subcontrollers the start-controller and the end-controller.

Of these, the start-controller has been specified in Esterel, as below. Since the end-controller involves waiting, we assume that a C function would implement it and inform its labstation, which would in turn contact the appropriate start-controller.

In the following Esterel code, as well as in subsequent code, most declarations are omitted due to space constraints; signal names have been chosen mnemonically, however, which should aid comprehension.

signal belt_to_controller(integer),    % local signals
       labstn1_to_controller(integer), % to be sustained
       labstn_free, kill_labstn_free in

  loop
    await l1_to_controller;
    emit Q_l1_service(?l1_to_controller);  % queue the service signal
    await move_from_labstation1
  end loop

||

  loop
    await b_to_controller;
    emit Q_b_service(?b_to_controller);
    await move_from_belt
  end loop

||

  loop
    await l_free;
    do sustain labstn_free
    watching kill_labstn_free
  end loop

||  % service loop starts here

[ loop
    present labstn_free then
      emit get_first_Q;
      await
        case l1_first_Q do
          emit controller_to_robot(?l1_first_Q);
          await ready_from_robot_to_controller;
          emit move_from_labstation1;
        case b_first_Q do
          emit controller_to_robot(?b_first_Q);
          await ready_from_robot_to_controller;
          emit move_from_belt;
      end await;
      emit kill_labstn_free;
    else
      await tick
    end present
  end loop
]
end signal

2. Labstation: The functions that a labstation must perform are:

• inform the start-controller that it is free

• do the local processing of a sample when it arrives

• wait for the "next labstation" signal for that sample from the end-controller, so that it can contact the appropriate start-controller.

var sample_id : integer in
  loop
    emit l_free;
    await sample_arrived_to_l;
    sample_id := ?sample_arrived_to_l;
    emit do_local_process;        % do the local task here and
    await local_process_done;     % get the next station
    emit find_next_labstation;    % emit the appropriate controller signal
    await
      case next_labstation1 do
        await tick;
        emit l_to_controller1(?sample_arrived_to_l);
      case next_labstation2 do
        await tick;
        emit l_to_controller2(?sample_arrived_to_l);
      case next_labstation3 do
        await tick;
        emit l_to_controller3(?sample_arrived_to_l);
      case next_labstation4 do
        await tick;
        emit l_to_controller4(?sample_arrived_to_l);
    end await;
    await sample_gone_from_l
  end loop
end var

3. Robot: The functions of the robot include:

• look at the start-controller of each labstation and choose one of them for service

• do the required service of moving a sample from labstation i to labstation j

• inform labstation i of the departure of the sample and labstation j of its arrival.

signal c1_to_robot : integer, c2_to_robot : integer in

  % sustain the service signal from controller1
  loop
    await controller1_to_robot;
    do sustain c1_to_robot(?controller1_to_robot)
    watching sample_arrived_to_l1
  end loop

||

  % sustain the service signal from controller2
  loop
    await controller2_to_robot;
    do sustain c2_to_robot(?controller2_to_robot)
    watching sample_arrived_to_l2
  end loop

||

  % service loop
  var sample_id : integer in
  [ loop  % poll the service signals
      present c1_to_robot then
        sample_id := ?c1_to_robot;  % required to inform the next
                                    % labstation of the sample id
        emit ready_from_robot_to_controller1;
        trap T in
          [ % controller1 indicates whether the sample has to be moved
            % from l1, from l2, or from the corresponding queue at labstation0
            await move_from_labstation1;
            emit sample_gone_from_l1;
            exit T
          ||
            await move_from_labstation2;
            emit sample_gone_from_l2;
            exit T
          ||
            await move_from_belt1;
            emit sample_gone_from_belt1;
            exit T
          ]
        end trap;
        emit sample_arrived_to_l1(sample_id)
        % sample to be taken to the controller asking for service
      else present c2_to_robot then
        % repeat the same procedure if service from controller2
        sample_id := ?c2_to_robot;
        emit ready_from_robot_to_controller2;
        await
          case move_from_labstation1 do
            emit sample_gone_from_l1;
          case move_from_labstation2 do
            emit sample_gone_from_l2;
          case move_from_belt2 do
            emit sample_gone_from_belt2;
        end await;
        emit sample_arrived_to_l2(sample_id)
      else
        await tick  % to prevent an instantaneous loop
      end present end present
    end loop
  ]
  end var
end signal

4. Labstation1: The tasks include:

• whenever a sample comes into the system, the labstation it has to visit first is known; the sample must be queued in the appropriate labstation queue

• the corresponding controller must be informed about the arrival of a new sample, if the first sample in the queue has been picked by the robot but the queue is not yet empty.

Thus, Labstation1 may be split into two, as the two tasks may be done in parallel:

1. a set of parallel Queuers, which queue the appropriate samples in their respective queues.

2. a set of parallel Queue-movers, which inform the corresponding controller about the presence of the first sample in the queue.

******** Queuer ********

signal Q_lfull in  % sustain the Q_full signal

  loop
    await Q_full;
    do sustain Q_lfull
    watching sample_gone_from_belt
  end loop

||

[ % service loop of the queuer
  loop
    await Sample_for_Q;
    present Q_lfull then
      emit Error_in_queing_sample_in_Q;
    else
      emit Q_sample_on_belt(?Sample_for_Q);
    end present
  end loop
]

end signal

******** Queue-mover ********


signal local_Q_empty in  % this is the local sustained signal

  % here the Q_empty signal is sustained till
  % a sample arrives on the belt
  loop
    await Q_empty;
    do sustain local_Q_empty
    watching Q_sample_on_belt
  end loop

||

  % this is the service loop of the queue mover, which
  % moves the queue when the sample goes from the belt
  loop
    present local_Q_empty then
      await tick
    else
      emit b_to_controller(?Q_sample_on_belt);
      await sample_gone_from_belt;
      emit move_belt;
    end present
  end loop

end signal

Based on this analysis and a set of assumptions for the design stage, we have come up with a design in which the robot and start-controllers act as servers; they service a set of signals arising from various sources. The software for each of the modules has been written and simulated using the Esterel tools.

6. CONCLUSION

As already noted, the design has been simulated under the Esterel environment. Using the verification tools mauto and atg, the specification has been verified to ensure that there are no deadlocks in individual modules. Due to limitations of space, the verification procedure is not described here. It must be pointed out, however, that even though no module has a deadlock individually, the full system obtained by putting the modules together in parallel may indeed have one; this has not been verified yet. For the robot, we have also assumed a deterministic ordering of service signals, i.e., the service signal from controller1 (if present) is always serviced before the service signals from the other controllers. This might lead to starvation of some service signals. In the controller, we assume that some C code would schedule the service signals.

Possible extensions to this design include the addition of a buffer labstation where processed samples may


await further tests. This buffer would permit fast labstations to process the next sample without waiting for the previous one to be removed. Also, the controller could choose its next client not non-deterministically, as at present, but based on its closeness to the robot. This would assign priorities to samples; similarly, priorities may be assigned to tasks.

Currently we have a project under way at the UNSW AI/Robotics laboratory to implement a simplified version of Autolab using a real robot arm and the Esterel tools.

7. ACKNOWLEDGEMENT

This research was partly conducted at Tata Institute of Fundamental Research, Bombay while the second author was visiting there in 1994.

8. REFERENCES

Ahmed, N. and Sowmya, A. (1994). AutoLab: a robotics solution for flexible laboratory automation. Proc. SPIE Intelligent Robots and Computer Vision XIII: 3D Vision, Product Inspection, and Active Vision (D. P. Casasent, Ed.), Boston, Nov. 1994, 205-214. SPIE Proceedings Series, Vol. 2354.

Benveniste, A. and Berry, G. (1991). The synchronous approach to reactive and real-time systems. Proc. IEEE, 79(9), 1270-1282.

Boussinot, F. and de Simone, R. (1991). The Esterel language. Proc. IEEE, 79(9), 1293-1304.

Berry, G. and Gonthier, G. (1988). The ESTEREL synchronous programming language: design, semantics, implementation. INRIA Report 842, Sophia-Antipolis.

Fig. 1. Autolab



Copyright © IFAC Distributed Computer Control Systems, Toulouse-Blagnac, France, 1995

A HIGHLY DISTRIBUTED CONTROL SYSTEM FOR A LARGE SCALE EXPERIMENT

C. Gaspar

CERN, European Organization for Nuclear Research, CH-1211 Geneva 23, Switzerland

J. J. Schwarz

INSA L3I. B502 IF 20 av A. Einstein 69621 Villeurbanne Cedex, France

Abstract: These days physics experiments can no longer be accomplished by a single "genius" in his laboratory; they involve large international collaborations of hundreds of scientists, enormous particle accelerators and very complex particle detectors. This paper will give an overview of the problems encountered and the solutions retained when building control systems for supercolliders, the largest scientific instruments ever built by humans.

Keywords: Computer communication, control systems, distributed control, reliability, robustness

INTRODUCTION

DELPHI (DEtector with Lepton, Photon and Hadron Identification) (DELPHI Collaboration, 1991) is one of the four experiments built for the LEP (Large Electron-Positron) collider at CERN, the European Organization for Particle Physics.

DELPHI consists of a central cylindrical section and two end-caps. The overall length and the diameter are over 10 meters and the total weight is 2500 tons.

The electron-positron collisions take place inside the vacuum pipe in the centre of DELPHI, and the products of the annihilations fly radially outwards. These products are "tracked" by several layers of detectors and read out via some 200,000 electronic channels. A typical event requires about 1 million bits of information.

The DELPHI detector is composed of 20 sub-detectors, as shown in Fig. 1, which were built by different teams of laboratories of the DELPHI collaboration (around 800 scientists from 50 laboratories all over the world).

The main aim of the experiment is the verification of the theory known as the " Standard Model" .

The DELPHI experiment started collecting data in 1989 and has to be up and running 8 months/year (24 hours a day) until around the year 2000.


Fig. 1. The DELPHI Detector

During its lifetime the experiment is constantly modified: to allow for different physics studies, new sub-detectors can be introduced and old ones can be upgraded or replaced.

The control system of the experiment has to ensure that the experiment works efficiently and reliably during the running periods, and it has to allow for an easy reconfiguration of any part of the experiment according to the physicists' wishes.


DELPHI ONLINE SYSTEM

The online system of a physics experiment is composed of many different parts; its main tasks are:

• The Data Acquisition System (DAS) (Charpentier et al., 1991) reads event data from the 20 sub-detectors composing DELPHI and writes them onto tape. In order to provide a high degree of independence to the individual sub-detectors, the DAS has been split into 20 autonomous partitions. These partitions are normally combined to form a full detector, but they can also work in stand-alone mode for test and calibration purposes.

• The Trigger System (Fuster et al., 1992) provides the DAS with the information on whether or not an event is interesting and should be written to tape. The final trigger decision is a combination of the partial decisions of the sub-detectors.

• The Slow Controls System (SC) (Adye et al., 1992) controls and monitors slowly moving technical parameters and settings, like temperatures, pressures and high voltages of each sub-detector, and writes them into a database.

• The LEP Communication System (Donszelman and Gaspar, 1994) controls the exchange of data between the LEP control system and DELPHI.

• The Quality Checking System (QC) provides automatic and human-interfaced tools for checking the quality of the data being written on tape.

The complexity of controlling such a system comes from the fact that, although the different parts of the system have different requirements and constraints, ranging from real-time behaviour in the DAS to strict safety constraints in the SC area, they have to work together for the common goal of providing "good" data for physics analysis.

In previous experiments the control of the different areas was normally designed separately by different experts, using different methodologies and tools, resulting in a set of dedicated control systems.

DELPHI decided to take a common approach to the full "experiment control" system. The result was the design of a system that can be used for the control and monitoring of all parts of the experiment, and consequently obtaining a system


that is easier to operate, because it is homogeneous, and easier to maintain and upgrade.

The online control system is characterized by a highly distributed architecture: like most current computer control systems, it consists of workstations interconnected by a local area network. Each workstation (through a Graphical User Interface, GUI) controls and monitors a part of the system, either a sub-detector (Det) or a central task, like DAS or SC, as shown in the diagram of Fig. 2.


Fig. 2. The Online System

The need for the multiple tasks composing the DELPHI control application to run on different machines brought up the problem of communicating easily, effectively and reliably among processes and processors.

In order to solve this problem DELPHI has designed its own communication system, DIM (Distributed Information Management System) (Gaspar and Donszelman, 1993).

DESIGN REQUIREMENTS

The control system of the experiment is composed of more than 500 processes distributed over around 40 workstations.

As in most distributed systems, DELPHI had to face critical design issues like:

• locating tasks and data resources distributed across the network,

• establishing and maintaining interprogram communication on the network,

• coordinating the execution of distributed tasks,

• synchronizing replicated programs or data to maintain a consistent state,

• detecting and recovering from failures in an orderly, predictable manner.

The purpose of the DIM system is to implement a coordination model able to handle these diverse issues coherently.

Page 174: Distributed computer control systems 1995 (DCCS ¹95)

DELPHI adds some additional requirements:

• An efficient communication mechanism. DELPHI has some requirements concerning the communication mechanism; this issue will be discussed in the next section.

• Uniformity. The DIM system should be capable of handling all process interactions within the online system; all processes involved with control, monitoring, processing or display should use the same communication system. A homogeneous system is much easier to program, to maintain and to upgrade.

• Transparency. An important goal for a distributed communication system is transparency. At run time, no matter where a process runs, it should be able to communicate with any other process in the system, independently of where the processes are located. Processes should be able to move freely from one machine to another, and all communications should be automatically reestablished (this feature also allows for machine load balancing). At coding time, the user should not be concerned with machine boundaries; the communication system should provide a location-transparent interface.

• Wide-area availability. DELPHI is an international collaboration; any necessary information should be available to the outside world, using the same system.

By fulfilling such requirements a communication system can greatly improve the performance of the complete system. It provides a decoupling layer between software modules that makes coding, maintenance and upgrading of the system easier, and improves efficiency and reliability at running time.

COMMUNICATION MECHANISM CONSIDERATIONS

When designing a distributed control system, the choice of the communication mechanism to be used is an important issue.

Distributed applications are often based on Remote Procedure Calls (RPC) (Birrell and Nelson, 1984). In the RPC mechanism the client sends a message containing the name of a routine to be executed and its parameters to a server; the server executes the routine and sends a message back containing the result. This implies that the communication is point-to-point and synchronous, since the client always waits until the routine finishes execution.

For an application like DELPHI the RPC mechanism is very heavy and not well suited. In DELPHI's online system some processes have to react to condition changes, and often multiple processes have to be notified of these changes.

A model better suited to this type of requirement is one that allows for asynchronous and one-to-many (group) communications (Kaashoek and Tanenbaum, 1991).

The solution we consider best in our case is for clients to declare interest in a service provided by a server only once (at startup), and then get updates at regular time intervals or when the conditions change.

This interrupt-like mechanism, as opposed to RPC's polling approach, involves half as many messages sent over the network, i.e., it is faster and saves network bandwidth. It also has the advantages of allowing parallelism (since the client does not have to wait for the server's reply and so can be busy with other tasks) and of allowing multiple clients to receive updates in parallel.
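The message-count argument can be made concrete with a rough model (our own simplification, not a DELPHI measurement): polling costs a request plus a reply every interval, while the interrupt-like scheme costs one subscription message plus one update per actual change.

```python
# Back-of-the-envelope message counts for the two communication styles.

def polling_messages(intervals):
    """RPC-style polling: a request and a reply every polling interval."""
    return 2 * intervals

def publish_messages(changes):
    """Interrupt-like scheme: one subscription, then one update per change."""
    return 1 + changes

# e.g. 100 polling intervals during which the value changed only 5 times
print(polling_messages(100), publish_messages(5))
```

Even in the worst case where the value changes at every interval, the publish scheme never costs more messages than polling, and in the common case of slowly changing quantities it costs far fewer.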

This approach, together with the possibility of sending commands to servers (more RPC-like), constitutes the main features of the DIM communication mechanism.

DESIGN PHILOSOPHY

DIM, like most communication systems, is based on the Client/Server paradigm.

The basic concept in the DIM approach is that of a "Service". Servers provide "Services" to Clients. A service is normally a set of data (of any type or size) and is recognized by a name: "Named Services". The name space for services is free.

In order to allow for the required transparency (i.e., a client does not need to know where a server is running), as well as to allow an easy recovery from crashes and the migration of servers, a Name Server was introduced.

The architecture developed for the interactions between servers, clients and the name server, in order to fulfill the previously mentioned requirements, is the following:

• Servers
To become a DIM server, a process has to accomplish the following operations:
o Register the services, by calling the routine dis_add_service for each service it wants to publish.


The service can be provided in two ways: either by describing the address and the size of the data to be passed to the client, or by specifying a routine that will prepare the data and return its address and size.
o Register commands (if any) expected from clients, by calling the routine dis_add_cmnd, specifying a routine to be executed on command arrival.
o Start serving client requests, by calling dis_start_serving; the list of provided services will be sent to the name server and clients will start being served.
From then on, whenever a client service request arrives, its parameters are stored and the service will be sent to the client whenever necessary, according to the requested update mechanism. The routine dis_update_service can be used to force the dispatching of the service to the clients (that have enabled this type of update).
Example: a program publishing the current state of the experiment can call dis_add_service specifying "DELPHI/STATE" as service name, the address and size of a character string buffer where the current state will be stored, and NIL as routine address:

service_id = dis_add_service("DELPHI/STATE", state_buf, state_size, 0);
dis_start_serving("DELPHI_STATE");

In the main program, possibly using many different DIM services as a client, it will compute the overall state of the experiment, store it in state_buf and call dis_update_service(service_id).

• Clients Any process can access a service by using the routine dic_info_service.

The client can get the service by specifying a buffer address and size where the service data will be stored, and/or by specifying a callback routine to be executed on service arrival. In fact, when the buffer address mechanism is used, the system works as if the clients maintained in cache a copy of the server's data (the cache coherence being assured by the server). The client can also specify the address and size of a constant to be copied to the service buffer and/or passed to the callback routine whenever the connection to the server breaks or cannot be established. The update mechanism can be of three different types:
o ONCE_ONLY: the service data will be accessible to the client only once (very rarely used).
o TIMED: the service data will be updated at regular time intervals (used normally for the update of discrete quantities like trigger rates or event sizes).
o MONITORED: the service data will be updated whenever it changes, available only if the server provides it by calling dis_update_service (used for passing states or error conditions).
Clients can send commands to servers by using dic_cmnd_service, specifying the address and size of the command data. Clients can also disconnect from services using dic_release_service, specifying the service identifier returned by dic_info_service.

TIMED and MONITORED services are only requested once by the client (normally at startup); the service will then be updated automatically by the server. When MONITORED services are used, the server will update the information of all clients whenever it changes, thus making sure the data is coherent over all the clients of a certain service.
Example: a program displaying the current state of the experiment can call dic_info_service specifying "DELPHI/STATE" as service name, MONITORED as update mechanism, the address and size of a character string buffer where the state will be stored by the server, the address of the callback routine that will update the screen when the state is updated, and the string "NO LINK" as the constant in case of connection failure:

dic_info_service("DELPHI/STATE", MONITORED, state_buf, state_size, update_routine, routine_tag, "NO LINK", 8);

TIMED and MONITORED services are only requested once by the client (normally at start­up) , the service will then be updated automat­ically by the server. When using MONITORED services the server will update the information of all clients when­ever it changes, thus making sure the data is coherent over all the clients of a certain ser­VIce. Example : A program displaying the current state of the experiment can call dic_info_service specifying " DELPHI/STATE" as service name, MONI­TORED as update mechanism, the address and size of a character string buffer where the state will be stored by the server, the address of the the callback routine that will update the screen when the state is updated and the string "NO LINK" as constant in case of connection fail­ure. dic_info..service(" DELP HI/STATE", MONITORED, state_buf, state..size, update_routine, routine_tag, "NO LINK", 8);

By making this call at program startup, update_routine will be called every time the state of DELPHI changes; all it has to do is print the variable state_buf. In case the server that provides this service dies, "NO LINK" will automatically be printed; if the server restarts, the correct value will again be printed. In the meantime the main program can be busy doing anything else.
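The MONITORED mechanism can be pictured with a small model, a sketch in Python rather than the actual DIM implementation (the class and variable names are invented): the server pushes every change to all subscribed clients, so each client's cached copy stays coherent with the server's data.

```python
# Toy model of a MONITORED service: the server side pushes each update
# to every subscribed client, keeping all client caches coherent.

class MonitoredService:
    def __init__(self, name, value):
        self.name, self.value = name, value
        self.clients = []

    def subscribe(self, cache):
        # like dic_info_service with MONITORED: the client registers once
        # (at startup) and is then updated on every change
        self.clients.append(cache)
        cache[self.name] = self.value      # initial update on subscription

    def update(self, value):
        # like dis_update_service: dispatch the new data to every client
        self.value = value
        for cache in self.clients:
            cache[self.name] = value

state = MonitoredService("DELPHI/STATE", "RUNNING")
c1, c2 = {}, {}
state.subscribe(c1)
state.subscribe(c2)
state.update("PAUSED")
print(c1, c2)
```

The real system additionally handles the "NO LINK" constant on connection failure and the TIMED variant, both omitted here for brevity.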

• Name Server
The Name Server acts as a service directory: it keeps an up-to-date list of all the services and servers available in the system. It handles server registration by storing in a hash table (of linked services) the services provided by each server. All servers stay permanently connected to the name server and send alive messages from time to time, so that the name server can be sure of their availability (when they die or hang, their services are discarded). If a server tries to register a service that already exists, the name server sends back a kill signal and the server dies with a clear error message (services have to be unique).
Whenever a client wants to access a service, it first contacts the name server; the name server replies with the node and address of the server and its communication protocol. From then on the client contacts the server directly. If the service is not available, the client receives a negative answer, but the request stays queued in the name server; when/if the service becomes available, the client will be warned and the connection established.
If a server dies or hangs while a client is connected to it, the client will recontact the name server (the server might have moved to another machine); in any case the request stays in the name server for whenever and wherever the service is again available.
Both servers and clients have retry mechanisms: if the name server is unreachable, they will keep on retrying at random (within limits) time intervals (to avoid network congestion), and whenever it comes back to life, servers will re-register all their services and clients will finally request the services they need. While the name server is down, all client-server connections previously established continue working.
The Name Server also keeps some statistics about servers and services; these statistics are available as DIM services.
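The name server's directory and queued-request behaviour can be modelled roughly as follows; this is a simplified sketch, not the DELPHI code, and the node name in the example is invented.

```python
# Toy model of the name server: unique service registration, lookups,
# and requests that stay queued until the service becomes available.

class NameServer:
    def __init__(self):
        self.directory = {}   # service name -> server address
        self.pending = {}     # service name -> list of waiting client mailboxes

    def register(self, name, address):
        if name in self.directory:
            return "KILL"     # duplicate services are refused with a kill signal
        self.directory[name] = address
        for mailbox in self.pending.pop(name, []):
            mailbox.append(address)   # warn queued clients: service available
        return "OK"

    def lookup(self, name, mailbox):
        if name in self.directory:
            return self.directory[name]
        self.pending.setdefault(name, []).append(mailbox)
        return None           # negative answer, but the request stays queued

ns = NameServer()
waiting = []                  # a client's mailbox for deferred answers
print(ns.lookup("DELPHI/STATE", waiting))   # service not yet registered
ns.register("DELPHI/STATE", ("node_a", 5100))
print(waiting)                # the queued client has now been notified
```

The alive-message monitoring and the retry-at-random-intervals behaviour are left out; the sketch only captures the directory and queuing logic.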

Figure 3 shows a small example of the usage of the DIM system within the DELPHI online system.

(Fig. 3 legend: server library; client library; service registration (publishing); service request/reply (subscription); services: DATA and CMNDS.)

Fig. 3. DIM example


IMPLEMENTATION ISSUES

• Transparency/Ease of use
The complete client and server functionality described above is hidden from the user inside library routines. Once a server has "published" its services or a client has "subscribed" to the services it needs, the handling of client requests or server updates can be done (if desired) without any notification or intervention of the user process.

• Monitoring and Debugging
The behaviour of complex distributed applications can be very difficult to understand without the help of a dedicated tool. The DIM system provides such a tool, DID (Distributed Information Display), that allows the visualization of the processes involved in the application, as shown in Fig. 4. DID provides information on the servers and services available in the system at a given moment and on the clients using them.

Fig. 4. Display Tool

• World Wide Access
An application built using DIM can be distributed over the world, provided that either TCP/IP or DECNET is available. The information available as DIM services can also be accessed through the World Wide Web (WWW) via a WWW gateway. The WWW page can be composed with the help of a dedicated editor.
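A WWW gateway of the kind described could render the current value of a service as an HTML page. This is a hypothetical sketch of what such a page generator might look like; the function name and page layout are assumptions, not the actual DELPHI gateway:

```python
def service_to_html(name, value, unit=""):
    """Render a service's current value as a simple HTML page,
    as a WWW-DIM gateway might (illustrative sketch only)."""
    return (
        "<html><head><title>%s</title></head>"
        "<body><h1>%s</h1>"
        "<p>Current value: %s %s</p>"
        "</body></html>" % (name, name, value, unit)
    )
```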

CURRENT DEVELOPMENTS

Although the DIM system is currently in use in the DELPHI experiment, the project is not finished: its installation on different platforms and over other network protocols would be of great use for both DELPHI and other potential users.

The DIM system is for the moment only available under the VMS operating system, using TCP/IP and/or DECNET as network support. The extension to UNIX and OS9 over TCP/IP is being studied, together with the associated problems of different data representations on different machines.

The data formats and the network protocol to be used on each connection will have to be negotiated between the server, the client and the name server.
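Such a negotiation could, for instance, pick the first mutually supported protocol and agree on a single wire byte order for the data. The policy and function names below are assumptions for illustration, not DIM's actual negotiation algorithm:

```python
import struct

def negotiate_protocol(server_caps, client_caps,
                       preferences=("TCP/IP", "DECNET")):
    """Pick the first protocol supported by both sides
    (illustrative policy, not DIM's actual algorithm)."""
    common = set(server_caps) & set(client_caps)
    for proto in preferences:
        if proto in common:
            return proto
    raise ValueError("no common network protocol")

def to_wire(values):
    # Agree on one byte order on the wire (here big-endian, '!')
    # so machines with different native representations interoperate.
    return struct.pack("!%di" % len(values), *values)

def from_wire(data):
    return list(struct.unpack("!%di" % (len(data) // 4), data))
```

Fixing the representation on the wire keeps each machine's conversion local: only the endpoints with a differing native format pay a conversion cost.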

CONCLUSIONS

DELPHI is one of the largest physics experiments in the world; its online control system is composed of many different components distributed over many machines. In order to allow for efficient communication among machines and processes, a communication system, DIM, was developed.

DIM has greatly simplified the coding and maintenance of the DELPHI online software by providing a network-transparent inter-process communication mechanism. The distribution of, and access to, up-to-date information from all parts of the system takes place with a minimum of additional user code.

DIM's asynchronous communication mechanism allows for task parallelism and multiple-destination updates. Its efficiency and reliability have considerably improved the performance and robustness of the complete online system: the number of crashes was reduced from about one a week (each taking two or three hours to recover from) to none during the last year.

Access to DELPHI information is possible from all over the world, either directly through DIM or via the WWW through a WWW-DIM gateway.

DIM is responsible for most of the communications inside the DELPHI Online System; in this environment it makes available around 15000 services provided by 300 servers. DIM is now also being used by other experiments at CERN.

The extension of the DIM system to other platforms would be of great use for DELPHI and is being studied.


AUTHOR INDEX

Ahmed, N. 165
Alonso, A. 111
Alvarez, J.C. 141
Alvarez, J.M. 141
Bajaj, S. 165
Bayart, M. 117
Beauvais, J.-P. 55
Belschner, R. 95
Burns, A. 105
Carcagno, L. 49
Cardeira, C. 67
Cherkasova, L. 61
Dakroury, Y. 157
Dane, B. 1
De Chazelles, P. 73
De Miguel, M. 111
De Paoli, F. 31
Decotignie, J.-D. 147
Decotignie, J.D. 101
Deplanche, A.-M. 55
Dours, D. 49
Duenas, J.C. 111
Elloy, J.P. 157
Facca, R. 49
Fengler, W. 1
Fischer, N. 101
Gaspar, C. 171
Gonzalez, J.C. 7
Halang, W.A. 19
Heinzman, J. 37
Heitz, M. 83
Iglesias, C.A. 7
Jeffroy, A. 73
Jerraya, A.A. 73
Jung, W.Y. 129
Kim, D.W. 129
Kwon, W.H. 129
Laengle, T. 37
Lecuivre, J. 151
Lehmann, M. 95
Leon, G. 111
Lopez, H. 141
Lueth, T. 37
MacLeod, I.M. 25
Magdalena, L. 7
Mammeri, Z. 67
Martineau, P. 123
Morel, P. 147
Raja, P. 101
Ramesh, S. 165
Rendon, A. 111
Rokicki, T. 61
Romdhani, M. 73
Roux, O.H. 123
Ruiz, L. 101
Sahraoui, A.E.K. 73
Sautet, B. 49
Schoop, R. 43
Schwarz, J.J. 171
Silly, M. 135
Simonot-Lion, F. 117
Sirgo, J.A. 141
Skubich, J.J. 19
Sowmya, A. 165
Staroswiecki, M. 117
Stothert, A.G. 25
Strelzoff, A. 43
Thomesse, J.-P. 89, 151
Thomesse, J.P. 117
Tisato, F. 31
Torngren, M. 13
Unger, H. 1
Vega Saenz, L. 89
Velasco, J.R. 7
Wang, Q. 77
Wannemacher, M. 19
Zalzala, A.M.S. 77
Zhang, S. 105