
Future Generation Computer Systems 18 (2001) 265–280

The PaCMAn Metacomputer: parallel computing with Java mobile agents

Paraskevas Evripidou a,∗, George Samaras a, Christoforos Panayiotou a, Evaggelia Pitoura b

a Department of Computer Science, University of Cyprus, P.O. Box 20537, 75 Kallipoleos Street, CY-1678 Nicosia, Cyprus
b Department of Computer Science, University of Ioannina, GR 45110 Ioannina, Greece

Abstract

The PaCMAn (parallel computing with Java mobile agents) Metacomputer launches multiple Java mobile agents that communicate and cooperate to solve problems in parallel. Each mobile agent can travel anywhere in the Web to perform its tasks. A number of brokers/load forecasters keep track of the available resources and provide load forecasts to the clients. The clients select the servers that they will utilize based on the specific resource requirements and the load forecast. The PaCMAn mobile agents are modular; the mobile shell is separated from the specific task code of the target application. To this end, we introduce the concept of TaskHandlers, which are Java objects capable of implementing a particular task of the target application. TaskHandlers are dynamically assigned to PaCMAn’s mobile agents. We have developed and tested a prototype system with several applications such as parallel Web querying, a prime number generator, the trapezoidal rule and the RC5 cracking application. Our results demonstrate that PaCMAn provides very good parallel efficiency. © 2001 Elsevier Science B.V. All rights reserved.

Keywords: PaCMAn Metacomputer; TaskHandlers; Java-mobile agent; HPC

1. Introduction

Our motivation is the development of a system that can harness the vast resources of the Internet that would otherwise be idle into a dynamic heterogeneous Metacomputer for high performance computing (HPC). The PaCMAn Metacomputer [11] consists of three major components: servers, clients and brokers. The clients launch multiple Java mobile agents [1,4,5] that travel to designated servers in the Web where they perform their specific tasks.

∗ Corresponding author. Tel.: +357-2-892239; fax: +357-2-339062. E-mail addresses: [email protected] (G. Samaras), [email protected] (P. Evripidou), [email protected] (E. Pitoura).

The broker keeps the registry of the servers and clients that participate in the Metacomputer. Each agent supports the basic communication and synchronization tasks of the classical parallel worker, assuming the role of a process in a parallel processing application. There is also a second degree of parallelism achieved by the use of multithreading inside each agent, which strengthens the concurrent execution of an application. Furthermore, in the PaCMAn framework, we generalize the parallel agent by separating the mobile shell from the specific task code of the target application. We achieve this with the introduction of TaskHandlers, which are Java objects capable of performing the various tasks. TaskHandlers are dynamically assigned to our agents.

The driving force motivating the use of Java mobile agents in PaCMAn is twofold.


First, mobile agents provide an efficient, flexible and asynchronous method for obtaining services in rapidly evolving networks. Second, mobile agents support intermittent connectivity, slow networks, and lightweight devices. PaCMAn applications can be started from lightweight devices such as laptop computers or even PDAs. After launching the PaCMAn application, the lightweight device (client) can disconnect from the network and the application will continue executing uninterrupted. The client will receive the output of the application it has launched whenever it reconnects to the network. Thus, the PaCMAn framework is appropriate for the emerging wireless environments.

At any given time, there is a vast number of computers connected to the Internet that are idle. The collective computational power of these computers dwarfs the computational power of all the world’s high performance computers put together. The topology of these loosely coupled resources changes very dynamically. These are the resources that PaCMAn is targeting. Traditional parallel processing techniques such as MPI and PVM are better suited for tightly coupled systems such as multiprocessors and NOWs with high performance networks. A strong point of MPI and PVM is the use of highly optimized conventional languages such as Fortran, C and C++ that have better computational performance than Java. However, this might change in the future. There are several attempts, such as the Java Grande Forum [25], to make Java a better environment for HPC than the traditional languages.

PaCMAn also has an advantage over conventional techniques when there are large volumes of distributed data. An ideal application for PaCMAn is distributed data mining. The ability of PaCMAn to launch any number of mobile agents to travel in the Web and perform tasks in parallel makes it ideal for applications that have huge volumes of distributed data. The 1997 press release of the RSA code breaking challenge stated, among other things [23]:

“Perhaps a cure for cancer is lurking on the Internet”

Distributed data mining with PaCMAn on the constantly updated data warehouses of medical institutions worldwide could be utilized to explore such assertions.

Most of the existing Web-based Metacomputer projects [15,21,22] are applet based.

Thus, the server machines become available at random, whenever a computer owner points to the Web page that hosts these systems. PaCMAn, on the other hand, is proactive. At any given time, a client can use all the servers that are registered as available at that time. For example, companies can register their computers as available after work hours on weekdays and for entire weekends. Home computers, on the other hand, can register as available during work hours. Of course, users have the ability to make their computer unavailable, explicitly by turning off the PaCMAn daemon or implicitly when the local load reaches a predetermined threshold.

The current implementation of PaCMAn is based on the Aglet Technology [2], developed by IBM, Tokyo, which is a Java-based framework for building mobile agents. We have also tested other mobile agent systems [7] and we are currently porting PaCMAn to the Voyager [24] system, which is RMI and CORBA based.

Section 2 presents a brief overview of mobile agents and makes a qualitative assessment of the mobile agent characteristics that make them suitable for PaCMAn. Section 3 presents the basic characteristics of the Aglet Technology. In Section 4, we describe the PaCMAn framework and in Section 5, we focus on the object definitions of the framework. In Section 6, we present our prototype implementation, experimental results from sample applications and performance analysis. Section 7 briefly reviews other approaches that use Java for HPC. In Section 8, we present our concluding remarks.

2. Mobile agents

A software agent is an autonomous program that resides on computers and/or networks. At any point it has the ability to interact with its environment and adapt its course of action in a way that maximizes its progress towards its goal.

Mobile agents are self-contained agents that can navigate autonomously. At each site that they visit, they can perform a variety of tasks [13]. The underlying computational model resembles the multithreading environment in the sense that, like individual threads, each agent consists of program code and a state. Communication is usually performed through the exchange of messages. Mobility is a fundamental extension of this model: each agent can autonomously relocate itself or its clones.


To perform useful work, each mobile agent needs to interact with the server environment it visits. Thus, a daemon-like interface (i.e. an agent execution environment) is provided at the server site that receives the visiting agents, interprets their state and sends them to other daemons as necessary. This allows the visiting agent to access services and data on the current server. Each daemon must be explicitly started at each site that wishes to participate in the mobile-agent based Metacomputer. Issues of security and authentication can also be handled by the agent system interface.

Java mobile agents, the most popular kind of mobile agents nowadays, inherit the universal portability and seamless interface of the Java programming language. Furthermore, their execution environment is the Java virtual machine (JVM), which is constantly being improved and optimized. PaCMAn and other similar systems for distributed processing over the Web benefit from this increasing investment in Java and the Web in general.

One question that may arise is why we are utilizing mobile agents and not just mobile objects [6]. The main difference between the two is that agents are autonomous and more flexible entities with dynamic behavior. Had we used mobile objects, we would have had to extend them extensively in order to attain the dynamic behavior required by PaCMAn. For example, the concept of dynamic TaskHandlers could not be supported by mobile objects directly and would have required considerable extensions (which in essence would have amounted to constructing our own version of a rudimentary mobile agent system).

The fact that we based our system on mobile agents gives us more flexibility to expand into other application areas such as data mining and tasks that combine intelligent behavior and computation. It is in our future plans to introduce more “intelligent” communication in our system, following the KQML/FIPA standard.

3. The Aglet technology: a Java-based agent execution environment

The Aglet technology [2,3] (also known as the Aglets workbench) is a framework for programming mobile network agents in Java.

Developed by the IBM Japan research group, the Aglet technology was first released in 1996. From a technical point of view, IBM’s mobile agent, called Aglet (Agile applet), is a lightweight Java object that can move autonomously from one computer server to another for execution, carrying along its program code and state as well as any data obtained so far.

An Aglet can be dispatched to any remote server that supports the JVM. This requires that the remote server has preinstalled the Tahiti daemon, a tiny Aglet server program implemented in Java. A running Tahiti server listens to the server’s ports for incoming Aglets, captures them, and provides them with an Aglet context (i.e. an agent execution environment) in which they can resume their code from the state at which it was halted before they were dispatched. Within its context, an Aglet can communicate with other Aglets, collect local information and, when convenient, halt its execution and be dispatched to another server. An Aglet can also be cloned or disposed.
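This lifecycle can be pictured with a minimal Aglet. The sketch below is an illustration only, not code from the paper; it assumes the public Aglets API as commonly documented (com.ibm.aglet.Aglet with onCreation, run and dispatch), and the destination URL and the hasMoved flag are hypothetical.

```java
// A trivial Aglet that dispatches itself to a remote Tahiti server and
// resumes work there. The destination URL and the hasMoved flag are
// invented for illustration; error handling is kept minimal.
import com.ibm.aglet.Aglet;

import java.net.URL;

public class HelloAglet extends Aglet {
    private boolean hasMoved = false;   // part of the state that travels with the Aglet

    public void onCreation(Object init) {
        // Called once, on the machine where the Aglet is created.
    }

    public void run() {
        // Called after creation and again after every arrival at a new context.
        try {
            if (!hasMoved) {
                hasMoved = true;
                // Move to a remote Tahiti server (ATP is the Aglet Transfer Protocol).
                dispatch(new URL("atp://server.example.org:4434"));
            } else {
                System.out.println("Hello from the remote context");
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
```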

To allow Aglets to be launched from within applets, the IBM Aglet team provided the so-called FijiApplet, an abstract applet class that is part of a Java package called the Fiji Kit. The FijiApplet maintains some kind of an Aglet context (like the Tahiti Aglet server). From within this context, Aglets can be created, dispatched from and retracted back to the FijiApplet.

For a Java-enabled Web browser to host and launch Aglets to various destinations, besides the FijiApplet applet class, two more components are required and provided by the Aglet Framework. These are an Aglet plug-in to allow the browser to host Aglets and an Aglet router that must be installed at the Web server. The Aglet router is implemented in Java and is available as a stand-alone Java program or as a servlet component for Web servers that support servlets. 1

Its purpose is to capture incoming Aglets and forward them to their destination.

The Aglet framework provides security guarantees in addition to the standard Java security. The security manager of the Tahiti server performs various checks on a visiting agent before it is allowed to run. Furthermore, if during execution the Aglet requires some protected services (such as disk access), it must first be validated by the security manager before gaining access to these resources.

1 Servlet: a server-side applet that dynamically extends the functionality of a Web server.


More advanced security schemes will be implemented by the PaCMAn framework at a later stage.

4. The PaCMAn framework

The PaCMAn Metacomputer consists of a number of computers located anywhere in the Internet that register with the PaCMAn broker as servers or clients or both. Fig. 1 depicts the basic components of the PaCMAn Metacomputer:

• Broker: It keeps track of the system configuration. All participating clients and servers and their capabilities are registered with the broker. The broker also monitors the system’s resources and performs network traffic/load forecasting [9]. In our system, we have adopted an agent-based version of the network weather service (NWS) [14]. NWS uses CPU and network sensors to create its forecast. The current implementation of NWS provides both a C API and a CGI interface. Currently, the PaCMAn broker can utilize NWS without many modifications. The agent daemon process at the server site can start the forecaster whenever it is ready to participate in the PaCMAn Metacomputer. The broker can dispatch special agents that visit these nodes, access the forecast and report it back to the broker.

• Server: It is a Java-enabled computer (it has the Java run time environment installed) that runs the Tahiti Aglet server, in order to host incoming PaCMAn mobile agents.

Fig. 1. The PaCMAn Metacomputer.

• Client: It is a Java-enabled computer that runs the Tahiti Aglet server, in order to launch/host PaCMAn mobile agents. To be able to launch Aglets from within applets, the Fiji Kit is installed on each client. It includes the FijiApplet, an abstract applet class, which allows the browser to host Aglets, and the Aglet router that must be installed at the Web server.

An auxiliary structure of the PaCMAn infrastructure is the virtual circular queue (VCQ) (Fig. 2). The VCQ is a distributed circular buffer implemented with Java RMI. It facilitates distributed producer–consumer collaborative work. Portions of the queue reside on different servers that can be distributed in the Internet. Producers of work (JobSlices in our terminology) store it in the portion of the circular queue that is closest to them. If the local portion of the queue is full, the queue server passes the JobSlice to one of its neighbors. Consumers fetch work from the queue. If the local portion of the queue is empty, the local queue server requests a JobSlice from one of its neighbors. The PaCMAn VCQ also supports automatic queue balancing: two neighboring queue servers compare their queue sizes and balance them by transferring JobSlices. This procedure is repeated in a round-robin fashion until the entire queue is balanced.
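A rough idea of how one node of such a queue could be exposed over Java RMI is sketched below. This is a hypothetical reconstruction, not the paper’s code: the interface name VCQNode and the methods put, take and queueSize are invented for illustration.

```java
// Hypothetical sketch of one VCQ node exposed via Java RMI.
// Names (VCQNode, put, take, queueSize) are illustrative, not from the paper.
import java.io.Serializable;
import java.rmi.Remote;
import java.rmi.RemoteException;

// A JobSlice must be serializable so it can cross the wire and travel with agents.
interface JobSlice extends Serializable {}

public interface VCQNode extends Remote {
    // Producers store a JobSlice in the portion of the queue closest to them;
    // a full node is expected to forward the slice to a neighbouring node.
    void put(JobSlice slice) throws RemoteException;

    // Consumers fetch work; an empty node is expected to request a slice
    // from one of its neighbours before answering.
    JobSlice take() throws RemoteException;

    // Used by the round-robin balancing protocol to compare queue sizes
    // between neighbouring nodes.
    int queueSize() throws RemoteException;
}
```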

The VCQ, due to its static nature, is suitable for jobs that have fairly long duration (days, weeks or months), such as the RSA code breaking challenge. For such applications, the participating PaCMAn servers could change very frequently during the lifetime of the application, so the presence of a stationary queue enables the seamless continuation of the work.

To realize our approach, a number of components are defined to complement the existing agent execution environment (i.e. the Aglets). In particular, the existing Aglets are extended with parallel processing capabilities. In addition, a library of task-specific objects, called TaskHandlers, is implemented to realize the computation portion of the application. Specifically, the following components comprise the proposed framework:

• PaCMAn mobile agent: The main responsibility of the PaCMAn mobile agent is first to create and coordinate other PaCMAn mobile agents and then to initiate a variable number of TaskHandler objects to perform the job specified by its currently assigned JobSlice.


Fig. 2. An example of the VCQ distributed over a number of buffer servers in four geographically distant (in the Internet sense) areas.

The basic structure of the PaCMAn agent is depicted in Fig. 3. The decision of which object from the TaskHandler library will be initiated is made dynamically at run time, based on the current demands, which in turn are determined by the currently assigned JobSlice object.

The PaCMAn mobile agent can assume various roles in the PaCMAn framework; the most common are:

• Mobile worker (MW): This is the workhorse of the PaCMAn system.

Fig. 3. The dynamic relationship of the PaCMAn mobile agent, TaskHandler and JobSlice.

It performs all the necessary computation, communication and synchronization tasks. The MW initializes and uses the appropriate TaskHandler. The mode of operation of the MW is similar in concept to the single program multiple data (SPMD) parallel model: the MW runs in all the servers selected for the application, with the same code/process running in all the servers but on different data at each server. The task instructions are passed to it through a JobSlice.


• Mobile-coordinator/dispatcher (MCD): The MCD does not participate in the computation. Instead, it performs task-specific coordination among the MWs. This is done by dynamically assigning tasks to the MWs through the use of JobSlices and also by providing information about the status of other MWs. For example, in a computationally intensive application, the MCD can produce a large number of JobSlices (much larger than the number of MWs), effectively dividing the entire task into a number of subtasks, and dynamically assign a JobSlice to each MW whenever it requests one. This mode of operation is appropriate for applications where the load cannot be balanced statically, such as the prime number generator.

• TaskHandler library: It is a collection of Java objects that are serializable, and thus can travel along with our PaCMAn mobile agents. The TaskHandler is an object that is at the disposal of an agent to use when it needs to perform a specific computational task. For example, when the need arises for a database query, the agent can utilize the DBMS TaskHandler. The TaskHandler objects are instantiated by the mobile agents in new separate threads that run along with the agents. Initially, the library has only our TaskHandler abstract class. It can be enriched with other TaskHandler objects that must be descendants of our TaskHandler abstract class (this is done both for technical reasons and to ensure a common interface, although it can be overridden). Finally, this library contains an object called the JobSlice object.

Fig. 4. The agents’ hierarchy.

The JobSlice object holds all the necessary initialization information for the TaskHandler that will be used. JobSlices are essential to the framework, as in effect they act as the main pipeline between our mobile agents and the TaskHandler objects. The only thing that our agent needs to start working on a problem is a JobSlice. When the current JobSlice is finished, the agent can request a new one. The JobSlice is generic and can be used by all the TaskHandler objects of the library.

As stated earlier, the VCQ is stationary and is better suited for long running jobs. The MCD, on the other hand, is used for dynamic agent coordination. The MCD moves if there is a need to provide synchronization to a group of MWs. Of course, it is possible to implement a more dynamic VCQ by linking together a number of MCDs.

Fig. 3 depicts some code segments that illustrate the dynamic nature of the JobSlice and the TaskHandler. The mobile shell has a JobSlice and a TaskHandler variable. These are used to reference the JobSlice and TaskHandler objects, respectively. The JobSlice object is given to the mobile shell whereas the TaskHandler object is created by the shell. Which class is used as a TaskHandler is determined by the JobSlice object.
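Since Fig. 3 is not reproduced here, the following sketch illustrates the same idea in plain Java. It is a reconstruction under the assumption that the JobSlice names the TaskHandler class and that the shell loads it reflectively; the interface and method names (getTaskHandlerClassName, init, startWork) are hypothetical.

```java
// Illustrative reconstruction of the mobile shell's dynamic TaskHandler loading.
// The JobSlice/TaskHandler shapes and method names here are hypothetical.
interface JobSlice extends java.io.Serializable {
    String getTaskHandlerClassName();   // which TaskHandler class to instantiate
}

interface TaskHandler extends Runnable {
    void init(JobSlice slice);          // receive the initialization data
}

public class MobileShell /* an Aglet in the real framework */ {
    private JobSlice jobSlice;          // given to the shell by its creator or an MCD
    private TaskHandler taskHandler;    // created by the shell itself

    public void startWork(JobSlice slice) throws Exception {
        this.jobSlice = slice;
        // The JobSlice determines which TaskHandler class is instantiated.
        Class<?> cls = Class.forName(slice.getTaskHandlerClassName());
        this.taskHandler = (TaskHandler) cls.getDeclaredConstructor().newInstance();
        this.taskHandler.init(slice);            // pass the initialization data
        new Thread(this.taskHandler).start();    // the TaskHandler runs alongside the agent
    }
}
```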

5. Object definitions of the PaCMAn framework

MW: As shown in Fig. 4, the MW is defined to be a descendant of the Aglet class.


(In the figure, this is represented by the PaCMAn mobile agent.) Besides the functionality inherited from the Aglet class, the additional responsibilities of an MW are to:

• Maintain a list of the locations and IDs of the other MWs and the MCD.
• Create and dispatch other MWs.
• Communicate with other MWs.
• Communicate with an MCD to get more JobSlices and/or instructions about its course of action.
• Instantiate and use TaskHandler objects, either to complete its given task or to handle a message, such as joining partial results.

In our framework, the mobility and basic communication 2 events are inherited from the Aglets framework and overridden as needed. Due to this fact and the concept of TaskHandler objects that are dynamically assigned, the object definition of the MW strongly resembles the object definition of an Aglet. That is because most of the actions taken by an MW are results of, or have as a result, some form of communication with other members of the PaCMAn framework or some event triggered by the Aglets framework.

In order to send a message from Aglet A to Aglet B, Aglet A must know Aglet B’s agletID (a unique identity) and its location. The agletID is not predetermined but is created (by the Tahiti server) when an Aglet is created. Thus, this information becomes available only after the creation of all the PaCMAn mobile agents, and initially only the “leader” PaCMAn mobile agent has access to it. This makes it necessary to organize this information in a routing table. The routing table contains the URL of each PaCMAn mobile agent, its agletID and its logical ID (a serial number given by our application). At an early point during execution, the leader broadcasts the routing table to all other agents. When a mobile agent moves, the routing table needs to be updated. Broadcasting a message can do this. The departing mobile agent has to leave its proxy on the Tahiti server at its initial location in case messages are in transit while the routing tables are updated.
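A routing table of this kind can be kept as a simple map from logical IDs to entries. The sketch below is an illustration only; the class and field names are hypothetical, and the Aglet identity is kept as an opaque string rather than the framework’s own identity type.

```java
// Hypothetical routing-table structure for PaCMAn agents; all names illustrative.
import java.io.Serializable;
import java.net.URL;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class RoutingTable implements Serializable {
    // One entry per PaCMAn mobile agent: where it runs, its Aglet identity
    // (kept as an opaque string here) and the logical ID the application assigns.
    public static class Entry implements Serializable {
        public final URL location;     // URL of the hosting Tahiti server
        public final String agletId;   // unique identity created by the Tahiti server
        public final int logicalId;    // serial number assigned by the application

        public Entry(URL location, String agletId, int logicalId) {
            this.location = location;
            this.agletId = agletId;
            this.logicalId = logicalId;
        }
    }

    // Keyed by logical ID so workers can be addressed by rank, MPI-style.
    private final Map<Integer, Entry> entries = new ConcurrentHashMap<>();

    public void update(Entry e) { entries.put(e.logicalId, e); }   // add, or refresh after a move
    public Entry lookup(int logicalId) { return entries.get(logicalId); }
}
```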

The Aglets framework provides us with the basic communication methods for both handling and generating messages:

2 The communication scheme in the Aglet framework is based on event handling. When a message arrives, an event is triggered and, as a result, the handleMessage method is called.

1. sendMessage(Message), sendOnewayMessage(Message), sendAsyncMessage(Message) and sendFutureMessage(Message), which are responsible for sending messages in a peer-to-peer scheme.
2. getXXXXReply() methods, which get a reply that was sent in response to an earlier message.
3. waitForReply(), which waits for a reply to a message that was sent.
4. sendReply(XXXX) methods, which send a reply to a newly arrived message.

On the handling side, there is no need for receiving methods (like those in MPI). Message receipt is both implicit and explicit. When a message arrives at an Aglet, it is handled implicitly by the handleMessage(Message) method, except when an explicit getXXXXReply() method is used. In the Aglet workbench, there were no broadcast and collective operations; thus, in our extension, we included such methods for broadcast and collective operations. Aglets have multicasting facilities but not true broadcasting. Aglets’ multicasting works only within a single context [3]. Furthermore, Aglets’ multicasting works by letting Aglets register to receive specific types of messages. Since PaCMAn is designed to work with multiple servers and thus different contexts, we had to implement our broadcast operations in the same fashion as the MPI broadcast. The same applies to collective operations.
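The shape of such a broadcast is easy to picture: a loop of point-to-point sends over the routing table. The sketch below is an illustration only; the Transport interface stands in for the Aglets peer-to-peer messaging calls, and every name in it is hypothetical.

```java
// Hedged sketch of an MPI-style broadcast layered on point-to-point sends.
// The Transport interface is a placeholder for the Aglets messaging calls
// (e.g. sending a Message through an agent proxy); all names are illustrative.
import java.util.Set;

public class Broadcast {

    /** Placeholder for one peer-to-peer send to the agent with the given logical ID. */
    public interface Transport {
        void send(int destinationLogicalId, String kind, Object payload) throws Exception;
    }

    /**
     * Flat broadcast: one point-to-point send per destination in the routing
     * table, skipping the sender itself. A tree-shaped variant would recurse
     * over halves of the ID range instead.
     */
    public static void broadcast(Transport transport, Set<Integer> allLogicalIds,
                                 int selfId, String kind, Object payload) throws Exception {
        for (int dest : allLogicalIds) {
            if (dest != selfId) {
                transport.send(dest, kind, payload);
            }
        }
    }
}
```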

The mobility events provided by the Aglet framework are described in great detail in the Aglet documentation [2]. Apart from these, the run() method is central to the MW, as this is the method responsible for the creation and utilization of the main TaskHandler object(s). All other methods of an MW exist as auxiliary methods of the run() method. TaskHandler objects implement the task-specific actions and other needed actions, like reformatting the results (instead of having specific methods of the MW for that purpose). TaskHandlers are dynamically assigned to the run() method. Fig. 3 depicts the relationship of the PaCMAn mobile agent with the JobSlice and TaskHandler objects.

Mobile coordinator/dispatcher: The MCD can be defined either to be a direct descendant of the Aglet class or by refining the MW. This is because the MCD is identical to an MW that uses a TaskHandler to do the coordination and dispatching.


(Again, in Fig. 4 this is represented by the PaCMAn mobile agent.) The basic functionality and responsibilities of an MCD are:

• All the functionality of the MW (in the case where the MCD is its descendant).
• Maintaining and handling (giving away, adding, etc.) a list of JobSlices.
• Communicating with other members of the framework.
• Tracking the PaCMAn mobile agents’ status.

The MCD exists to serve requests for JobSlices and to help coordinate tasks like the combination of partial results. For example, the MCD can be used to balance the computation load required for the production of some partial result, when it is difficult to do so statically. The MCD can also maximize the utilization of our resources. The database application that we use for testing consists of two phases of computation. First, each MW travels to a distributed database and performs its query. Then, the partial results are joined. Thus, some MWs will be waiting for others to finish, in order to join their results. The MCD can minimize this waiting time by instructing the “finished” mobile agents to work together (by exchanging and joining their ResultSlices, while others are still producing theirs), instead of waiting for a predetermined mobile agent to finish (static scheduling).
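The dispatching side of an MCD boils down to a thread-safe pool of JobSlices handed out one at a time. The sketch below is an illustration only; in the real framework this logic would sit behind the Aglets message-handling callback, and all names here are hypothetical.

```java
// Hedged sketch of an MCD's JobSlice dispatching; names are illustrative.
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

public class JobSliceDispatcher {

    // Minimal marker type so the sketch compiles on its own.
    public interface JobSlice extends java.io.Serializable {}

    private final Queue<JobSlice> pending = new ConcurrentLinkedQueue<>();

    /** The entire task is pre-divided into many more slices than there are MWs. */
    public void load(Iterable<JobSlice> slices) {
        for (JobSlice s : slices) {
            pending.add(s);
        }
    }

    /**
     * Called whenever an MW reports that its current slice is finished.
     * Returns the next slice, or null when the whole job is exhausted --
     * this request-driven hand-out is what gives the MCD its dynamic
     * load balancing.
     */
    public JobSlice nextSlice() {
        return pending.poll();
    }
}
```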

Broker: The broker can be defined as a direct descendant of the Aglet class. The basic functionality and responsibilities of a broker are to:

• Maintain a list of available Tahiti servers, as well as network traffic and load information for these servers.
• Maintain a list of the PaCMAn mobile agents currently running. Information about their whereabouts should also be kept.
• Maintain information on useful (and/or needed) resources.
• Communicate with other members of the framework.

The broker’s job is to enforce a load-balancing mechanism on the PaCMAn framework. This is done by constantly monitoring the activity of the servers. An agent-based version of the network weather service [14] has been adopted for our work because it fulfills all our requirements.

TaskHandler and JobSlice objects: The TaskHandler object is defined to be a descendant of the TaskHandler class. The basic functionality and responsibilities of a TaskHandler object are to:

• Be aware of its current state.
• Be implemented as an object that can exist in a thread (in other words, implement Runnable).
• Be able to return a ResultSlice 3 to the Aglet that uses it.
• Accept commands in the form of messages from the PaCMAn mobile agent that owns it.
• Accept new JobSlices from the PaCMAn mobile agent that owns it.
• Carry the code for the application-specific task.

The JobSlice object is a Java object with the following functionality and responsibilities (a hedged sketch of both objects follows this list):
• Contain the task initialization data.
• Know which TaskHandler can perform the specific JobSlice task.
• Know whether it is a normal slice or a result slice.
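The following sketch reconstructs the two core types from the responsibilities listed above. It is an illustration only, not the paper’s class definitions; all member names are invented.

```java
// Hedged sketch of the two core framework types; member names are invented.
import java.io.Serializable;
import java.util.Map;

/** Carries the task initialization data and names the TaskHandler that can run it. */
class JobSlice implements Serializable {
    final String taskHandlerClassName;    // which TaskHandler performs this slice
    final boolean resultSlice;            // normal slice or a ResultSlice
    final Map<String, Object> initData;   // task initialization data

    JobSlice(String taskHandlerClassName, boolean resultSlice, Map<String, Object> initData) {
        this.taskHandlerClassName = taskHandlerClassName;
        this.resultSlice = resultSlice;
        this.initData = initData;
    }

    String getTaskHandlerClassName() { return taskHandlerClassName; }
    boolean isResultSlice() { return resultSlice; }
}

/** Base class of the TaskHandler library; each handler runs in its own thread beside the agent. */
abstract class TaskHandler implements Runnable, Serializable {
    protected JobSlice jobSlice;            // current work assignment
    protected volatile boolean finished;    // "be aware of its current state"

    /** Accept a (new) JobSlice from the PaCMAn mobile agent that owns this handler. */
    public void init(JobSlice slice) {
        this.jobSlice = slice;
        this.finished = false;
    }

    /** Return the ResultSlice (itself a JobSlice) to the owning Aglet. */
    public abstract JobSlice getResult();

    /** Application-specific code supplied by each concrete TaskHandler. */
    public abstract void run();
}
```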

5.1. Extending the PaCMAn framework

In the Java agent-based technology, an agent can be further viewed as an object that can be inherited, enhanced or refined just like any other object. The Aglets can be viewed as the basic abstract object that provides us with the basic capabilities of mobility, communication and the ability to clone themselves, i.e. implementing all the capabilities supported by the Aglets Framework.

The PaCMAn framework enhances these capabilities with the addition of parallel processing features. We create a hierarchy that can later on be used to create more complex and sophisticated applications. Fig. 4 shows the PaCMAn hierarchy. In order to overcome the lack of multiple inheritance in Java, we had to use both interfaces 4 and classes.

3 A ResultSlice object is technically a JobSlice object created by the processing of other JobSlices.

4 Interfaces are somewhat like abstract classes (they contain method declarations but not code, whereas abstract classes may contain code). The difference is that a class can implement as many interfaces as it likes instead of only one, which is the case when extending another class.


Enhancing the new Aglets with additional capabilities is now straightforward. For example, to extend the parallel framework with database capabilities, we only have to create a database TaskHandler [8], make it a descendant of the TaskHandler class and extend the PaCMAn mobile agent in order to use the DBMS TaskHandler (which could be any of the Developer’s TaskHandlers in Fig. 4). Such a scenario is demonstrated in the section below.
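As a concrete illustration of this extension path, the sketch below shows what a developer’s DBMS TaskHandler might look like. It builds on the hypothetical TaskHandler/JobSlice classes sketched above and on standard JDBC; the initData keys ("jdbcUrl", "query"), the class names and the join step are invented for the example, not taken from the paper.

```java
// Hedged sketch of a developer-supplied DBMS TaskHandler using plain JDBC.
// Assumes the hypothetical TaskHandler/JobSlice classes sketched earlier
// (same package); the initData keys and handler names are invented.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class DbmsTaskHandler extends TaskHandler {
    private final List<String> rows = new ArrayList<>();

    @Override
    public void run() {
        String url = (String) jobSlice.initData.get("jdbcUrl");   // e.g. "jdbc:<driver>://..."
        String query = (String) jobSlice.initData.get("query");   // this server's sub-query
        try (Connection con = DriverManager.getConnection(url);
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(query)) {
            while (rs.next()) {
                rows.add(rs.getString(1));    // collect the partial result locally
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        finished = true;    // the owning MW observes this state change
    }

    @Override
    public JobSlice getResult() {
        // Package the partial result as a ResultSlice for the joining phase.
        Map<String, Object> data = new HashMap<>();
        data.put("rows", new ArrayList<>(rows));
        return new JobSlice("JoinTaskHandler", true, data);
    }
}
```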

5.2. Implementation roadmap

The PaCMAn framework provides a great degree of generality. The static part of our agent is responsible for the mobility, the general coordination, and the creation and manipulation of other agents. The dynamic part deals with the assignment of TaskHandlers. To use the PaCMAn framework, a user must follow the following steps:

1. The client creates a PaCMAn-MW that acts as the master process:
• Extends the PaCMAn mobile agent class or implements our PaCMAn mobile agent interface 5 or
• Programs the Parallel-applet, either by extending our Parallel-applet class or by writing a new one which extends IBM’s FijiApplet abstract class. (This applet is used as the interface between the user and the PaCMAn mobile agent.)
• Programs the TaskHandler objects by extending our TaskHandler abstract class (thus adding the necessary objects to the TaskHandler library) or by using an existing one from the current TaskHandler library.

2. The client machine contacts the broker and requests a network-based forecast. Based on the forecast, the client machine decides on the number and location of the servers it will utilize.

3. A number of PaCMAn-MWs and PaCMAn-MCDs are created and launched.

4. The PaCMAn-MWs arrive at their destinations and request the TaskHandlers to perform specific tasks.

5 We provide this option for compatibility with other frameworks that use IBM’s Aglets.

While waiting for the first TaskHandler to perform its tasks, more TaskHandler objects can be launched to accomplish other tasks, as determined by the available JobSlices.

5. Finally, the PaCMAn-MWs coordinate among themselves, possibly with the aid of a PaCMAn-MCD, to collect the partial results and return them to the user.

6. Prototype implementation and testing

Our prototype implementation consists of the PaCMAn agents, the TaskHandler library and the PaCMAn GUI. The PaCMAn agent family and the TaskHandler library have been extensively explained in the previous sections. The PaCMAn interface is a generic user interface that is used to start PaCMAn applications. Through this interface the user can input the necessary parameters (for each application), start the application and finally see the produced results. The execution steps taken once the application starts, through the GUI, are the following (Fig. 5):

1. An MW (and possibly an MCD) is created. This first MW acts as the leader. Note that when an MCD is not created, the user interface provides the leader with all the necessary JobSlices (for all the MWs). In the case that an MCD is used, only the MCD is provided with a JobSlice. From that point on, the MWs get their JobSlices from the MCD.

2. If necessary, the leader creates and fires other MWs and MCDs (passing the appropriate JobSlices to them).

3. The MWs dispatch themselves to the appropriate servers. Note that all MWs other than the leader start from this step.

4. Each MW creates a TaskHandler object and initiates it in a new separate thread. Which TaskHandler will be loaded is determined by the JobSlice currently assigned to the MW. To complete the TaskHandler initialization, the MW passes its JobSlice to the TaskHandler.

5. The TaskHandler performs its task concurrently with the MW.

6. The MW requests the TaskHandler to perform a specific task. While waiting for the first TaskHandler to perform its tasks, more TaskHandler objects can be launched to accomplish other tasks.

7. Finally, the MW collects, in coordination with the other MWs and under the guidance of the MCD, the results (the produced ResultSlices) and returns them to the user.


Fig. 5. Execution steps.

In other words, the results are channeled through various MWs back to the user. If necessary, more TaskHandlers are created and utilized to join the results (as in the database application case).

To evaluate the performance of the PaCMAn prototype, we developed some sample applications, which were tested and analyzed. These applications exercise all of the PaCMAn framework components, with the exception of the broker and the VCQ, which have not been fully integrated into the prototype yet. The test suite used for the performance analysis was made up of the following applications:

1. Parallel database querying over the network [10]: This is a medium computation load application that is split into two parts. This application is a database application in which the client wants to retrieve data that are distributed in a number of different servers. Thus the actual problem is first to retrieve the data and then join them as needed.

2. Computing prime numbers: This is a heavy computation load application that is difficult to load-balance statically; it counts the number of prime numbers in a given range.

3. The trapezoidal rule: This is a light computation load application: an implementation of the classic trapezoidal rule problem.

4. Cracking the RC5 algorithm [12]: This is a medium computation load application; it searches for the decryption key in a given range.

For the implementation of the above applications, we just need to create the appropriate TaskHandlers that would perform the given tasks:

1. DBMS TaskHandler, which performs the task of the data connection and retrieval.
2. Join TaskHandler, which joins the results produced by the DBMS TaskHandler.
3. Primes TaskHandler, which finds the prime numbers in a given range.
4. Trap TaskHandler, which calculates the area of a trapezoid (a hedged sketch of such a handler follows this list).
5. RC5 TaskHandler, which finds the key (if it exists) in a given range.
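To give a sense of how small such a handler can be, here is a hedged sketch of a Trap TaskHandler built on the hypothetical TaskHandler/JobSlice classes sketched in Section 5. The integrand, the initData parameter names ("a", "b", "n") and the composite-trapezoid loop are invented for the example, not taken from the paper.

```java
// Hedged sketch of a Trap TaskHandler: composite trapezoidal rule over the
// sub-interval [a, b] assigned to this MW by its JobSlice. All names invented.
class TrapTaskHandler extends TaskHandler {
    private double partialArea;

    private static double f(double x) {   // the integrand; any function would do
        return x * x;
    }

    @Override
    public void run() {
        double a = (Double) jobSlice.initData.get("a");   // left end of this MW's range
        double b = (Double) jobSlice.initData.get("b");   // right end of this MW's range
        int n = (Integer) jobSlice.initData.get("n");     // number of trapezoids

        double h = (b - a) / n;
        double sum = (f(a) + f(b)) / 2.0;
        for (int i = 1; i < n; i++) {
            sum += f(a + i * h);
        }
        partialArea = sum * h;
        finished = true;                                   // the owning MW collects the result
    }

    @Override
    public JobSlice getResult() {
        java.util.Map<String, Object> data = new java.util.HashMap<>();
        data.put("area", partialArea);
        // No further handler is needed: the leader simply sums the partial areas.
        return new JobSlice(null, true, data);
    }
}
```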

Each of these TaskHandlers is an autonomous Java object and can exist outside the scope of the PaCMAn framework. For example, the code of the Primes TaskHandler was taken from another application and it was easily adapted to the PaCMAn framework. This is also the case for the RC5 TaskHandler that was provided to us by Patel et al. [12]. We tested the above applications with a group of eight servers. Furthermore, there were the following test modes:

• Mode 0: One agent moving from server to server, gathering and joining all the results (applicable only to the database application).


Fig. 6. Database application speed-ups.

• Mode 1: N agents on N servers and every agent sends its results to one (the leader) agent.

• Mode 2: N agents on N servers and the produced results are joined in a tree reduction order.

The database application: This is a Web-database query application. The data is distributed on a number of databases residing on different servers. MWs are launched to perform queries independently and then cooperate to join the results of the individual queries. The speed-ups for this experiment are tabulated in Fig. 6. Sample size refers to the number of records returned by each query. The speed-ups of modes 1 and 2 are 282 and 315%, respectively, compared with mode 0. The reason we achieved such high speed-ups is that mode 0 simulates a serial execution while modes 1 and 2 are parallel implementations. The improvement (1–10%) between modes 1 and 2 is due to the fact that in mode 2 we have more parallelism during the joining phase. We expect higher speed-ups of mode 2 compared to mode 1 in applications where the joining of the partial results requires heavy-duty computation.

The prime generator: It is very difficult to have static load balancing in applications such as the prime number generator. Thus, we have experimented with three different implementations. In the first case, there was no load balancing: the entire range was divided into as many equal parts as there were mobile agents. In the second case, we statically load balanced the application, while in the third case we have dynamic load balancing through the use of an MCD.

Under this implementation, we have divided the range into a large number of JobSlices and loaded them on the MCD. Whenever a mobile agent finishes the range specified by its JobSlice, it requests a new JobSlice from the MCD. The test parameters for this application are the number of servers and the sample size (the largest number that is going to be checked). We tested this application for problem sizes 2^16–2^19. The execution timings from the experiments are summarized in Fig. 7. The relative speed-ups are summarized in Fig. 8. The speed-ups reported are calculated with respect to the execution time with only one mobile agent on one server.
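For reference, the speed-up and parallel efficiency figures quoted in this section follow the standard definitions; the formulation below is an addition here, consistent with the single-agent baseline just stated:

```latex
S_p = \frac{T_1}{T_p}, \qquad E_p = \frac{S_p}{p},
```

where T_1 is the execution time with one mobile agent on one server, T_p is the execution time with p agents on p servers, S_p is the speed-up and E_p the parallel efficiency.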

Fig. 8 demonstrates, as expected, that the better the load-balancing is, the more efficient the parallel processing is. It is worth noting that as the problem becomes larger, the dynamic load-balancing performed by the MCD becomes more efficient. This somewhat unexpected behavior is because our static load-balancing did not achieve an absolutely perfect balance. On closer examination, we have concluded that the MCD provides better performance as the number of servers increases. This is due to the fact that even with identical machines you can never get the exact same throughput. This becomes noticeable only with sufficiently large problems, where even the slightest difference in performance has a great impact.

The trapezoidal rule: The results from the trapezoidal rule tests are depicted in Fig. 9. Test parameters here are the number of servers and the sample size.


Fig. 7. Primes generator timings.

Fig. 8. Primes generator speed-ups.

The sample size refers to the number of trapezoids. The speed-ups and the parallel efficiency achieved are very respectable.

The RC5 cracker: Fig. 10 depicts the test results for the RC5 cracker. The test parameters are the number of servers and the sample size. The sample size refers to the length of the range that is searched. The results show very good, almost linear, speed-ups, as was the case for the trapezoidal rule.

Overall, the experimentation with the prototype showed that the PaCMAn approach provides good parallel efficiency, ranging from 79 to 88% over the entire test suite. These are very respectable figures for a network-based Metacomputer. Using mode 2 did not appear to improve performance in any application other than the database one. Mode 2 is suitable for a large number of workers and/or when there are heavy computation requirements in the joining phase of the partial results.


Fig. 9. Trapezoidal rule timings and speed-ups.

Fig. 10. RC5 cracker timings and speed-ups.


We will be able to test this mode more rigorously with the UNIX port of PaCMAn.

7. Related work

A number of Java-based systems have been proposed for Metacomputing. Although the goal is more or less the same, to utilize heterogeneous hosts for HPC, the approaches differ. In this section, we provide a brief overview of systems that have similar goals and/or approaches to PaCMAn.

The systems that are most closely related to PaCMAn are the volunteer-based systems. The basic idea is to have a collection of applets that communicate with each other to solve a problem in a distributed fashion. This is possible with the use of brokers that route the messages between the applets. Examples of such systems are Javelin [15], Charlotte [21] and Bayanihan [22]. The common ground of these systems and PaCMAn is the use of mobile code, applets and mobile agents, respectively, to solve a problem in parallel. The applet-based approach is passive: it relies on servers to volunteer their participation, each time, by pointing to the system’s WWW page. PaCMAn, on the other hand, is proactive. After a server is registered, it can be used without any need for collaboration from the owner of the server. A performance drawback of the applet-based approach is that all communication has to be routed through the broker. Furthermore, these systems are browser-centric, so they have to incur all the extra security restrictions imposed by the browsers.

A different approach is to implement in Java the functionality of classical parallel processing and coordination systems such as PVM. The JPVM library [20] implements in Java the basic functionality of the C and Fortran interface of PVM. The mobility of our system makes it more dynamic and puts it in a better position to utilize the Internet for Metacomputing than these adaptations of such “legacy” systems.

New Java dialects and extensions of the JVM are another research direction for utilizing Java for HPC. One such project is Java/DSM (distributed shared memory) [16], which extends the JVM to provide a shared memory abstraction over a group of physically distributed machines, thus creating a Metacomputer which is similar in concept to the PaCMAn Metacomputer.

PaCMAn is a more dynamic system that utilizes mobility and uses the message-passing paradigm. Other dialects and extensions of Java (expansion by implementing a compiler) provide parallel execution models and constructs. Titanium [17], a Java dialect, is one of them. A similar approach is the use of Java bytecode optimizers that discover implicit parallelism, like javab [18]. One very interesting approach is the automated generation of bindings for native libraries such as MPI [19]. The last two approaches are of interest to us because they could be used in conjunction with PaCMAn. Such a combination would provide optimized bytecode that runs faster on the lowest layer, while the PaCMAn Metacomputer operates on the higher layer.

8. Concluding remarks

In this paper we have presented the PaCMAn system, an Internet based Metacomputer that can utilize heterogeneous resources to tackle large problems in parallel. The PaCMAn Metacomputer has three basic components: servers, brokers and clients. Clients launch multiple Java mobile agents that roam around the Internet and visit registered servers to perform their task in parallel by communicating and cooperating. The brokers/load forecasters keep track of the available resources and provide load forecasts to the clients. Our system is modular and dynamic. The mobile shell is separated from the application code. The application code is implemented as TaskHandler objects, which are implicitly loaded into our mobile shell. Thus, the end-user of our system does not have to deal with the underlying primitives that support mobility and coordination. The transformation of a Java Applet to a PaCMAn TaskHandler is a straightforward process and we are planning to have it automated.

An application that is implemented according to the PaCMAn framework is boosted by the inherited benefits of using Java mobile agents and the WWW interface. An important advantage is that the number of processes can change dynamically. The servers that will host the PaCMAn-MWs can be decided at run time, after considering the network traffic or the workload of the available servers. PaCMAn-MWs can be reused as they can receive new instructions and then take on new tasks, i.e. initiate different TaskHandler objects according to those instructions.


Furthermore, the servers that host worker processes are not required to be part of the same local network as the client. A server can be located anywhere in the entire Web. A PaCMAn-MW can move to servers where the available resources and environment are suitable for its kind of work. Our system works equally well in wireline and wireless networks. PaCMAn-MWs can be launched (and forgotten) from a laptop or a palmtop computer during a short Web session. The PaCMAn-MWs can roam around the unstructured network to perform their tasks and then wait until the communication link is again available to return home with their task results.

We have developed three versions of PaCMAn: two of them are based on the Aglet framework and one on Voyager. The prototype used for the experimentation presented here uses Aglets and is PC-based. The Unix version that uses Aglets and the Voyager version are currently being tested. The experimental results from our prototype showed that PaCMAn systems can have very good parallel efficiency. Our future plans are to use PaCMAn for distributed data mining and also to develop versions of PaCMAn that utilize native libraries.

Acknowledgements

The authors wish to thank Stavros Papastavros for his contributions to the earlier system for parallel processing with Java mobile agents [26] that we developed and for his valuable input during the development of PaCMAn.

References

[1] E. Pitoura, G. Samaras, Data Management for Mobile Computing, Kluwer Academic Publishers, Dordrecht, 1997.
[2] Aglets Workbench, IBM Japan Research Group. http://aglets.trl.ibm.co.jp.
[3] D.B. Lange, M. Oshima, Programming and Deploying Java Mobile Agents with Aglets. ISBN 0-201-3258-9.
[4] D. Chess, B. Grosof, C. Harrison, D. Levine, C. Parris, G. Tsudik, Itinerant agents for mobile computing, J. IEEE Personal Comm. 2 (5) (1993).
[5] J.E. White, Mobile Agents. General Magic White Paper. http://www.genmagic.com/agents, 1996.
[6] J. Nelson, Programming Mobile Objects with Java, Wiley, New York. ISBN 0471254061.
[7] G. Samaras, M.D. Dikaiakos, C. Spyrou, A. Liverdos, Mobile agent platforms for web databases: a qualitative and quantitative assessment, in: Proceedings of the First International Symposium on Agent Systems and Applications and Third International Symposium on Mobile Agents (ASA/MA 99), Palm Springs, CA, October 1999.
[8] S. Papastavrou, G. Samaras, E. Pitoura, Mobile agents for WWW distributed database access, in: Proceedings of the 15th International Data Engineering Conference, Sydney, Australia, March 1999.
[9] D. Barelos, E. Pitoura, G. Samaras, Mobile agents procedures: metacomputing in Java, in: Proceedings of the ICDCS Workshop on Distributed Middleware (in conjunction with the 19th IEEE International Conference on Distributed Computing Systems (ICDCS99)), Austin, TX, June 1999.
[10] C. Panayiotou, G. Samaras, E. Pitoura, P. Evripidou, Parallel computing using Java mobile agents, in: Proceedings of the 25th Euromicro Conference, Milan, Italy, September 1999.
[11] P. Evripidou, G. Samaras, E. Pitoura, C. Panayiotou, PaCMAn mobility for high performance computing, in: Proceedings of the Workshop on Java for High-Performance Computing (in conjunction with the 1999 ACM International Conference on Supercomputing), Rhodes, Greece, June 1999.
[12] A. Patel, P. Gladychev, D. Omahony, Cracking RC5 with Java applets, Concurrency Practice and Experience 10 (11–13) (1998) 1165–1171.
[13] L. Bic, M. Dillencourt, M. Fukuda, Mobile Agents, Coordination and Self-Migrating Threads: A Common Framework. http://www.ics.uci.edu/bic/messengers/messengers.htm.
[14] R. Wolski, N.T. Spring, J. Hayes, The network weather service: a distributed resource performance forecasting service for metacomputing, J. Future Generation Comput. 15 (1999) 757–768.
[15] B.O. Christiansen, et al., Javelin: Internet-based parallel computing using Java, in: Proceedings of the ACM Workshop on Java for Science and Engineering Computation, Las Vegas, NV, June 21, 1997.
[16] W. Yu, A. Cox, Java/DSM: a platform for heterogeneous computing, in: Proceedings of the ACM Workshop on Java for Science and Engineering Computation, July 1997.
[17] K. Yelick, et al., Titanium: a high-performance Java dialect, in: Proceedings of the ACM Workshop on Java for High-Performance Network Computing, Stanford, CA, February 1998.
[18] A.J.C. Bik, D.B. Cannon, javab Manual, ACM 1998, in: Proceedings of the Workshop on Java for High-Performance Network Computing, Stanford, CA, February 1998.
[19] V. Getov, S. Flynn-Hummel, S. Mintchev, High-performance parallel programming in Java: exploiting native libraries, in: Proceedings of the ACM Workshop on Java for High-Performance Network Computing, Stanford, CA, February 1998.
[20] A.J. Ferrari, JPVM: network parallel computing in Java, in: Proceedings of the ACM Workshop on Java for High-Performance Network Computing, Stanford, CA, February 1998.


[21] A. Bartloo, et al., Charlotte: Metacomputing on the Web, in: Proceedings of the Ninth International Conference on PDCS, 1996.
[22] L. Sarmenta, S. Hirano, Bayanihan: building and studying Web-based volunteer computing systems using Java, Future Generation Comput. Syst. 15 (6) (1999).
[23] DESCHALL, Internet-Linked Computer Challenge Data Encryption Standard, Press Release, 1997.
[24] Voyager, ObjectSpace. http://www.objectspace.com/.
[25] Java Grande Forum. http://www.javagrande.org/.
[26] S. Papastavros, G. Samaras, P. Evripidou, Parallel Web querying using Java mobile agents and threads, Technical Report TR-98-7, Department of Computer Science, University of Cyprus, Cyprus, May 1998.

Paraskevas Evripidou was born in Nicosia, Cyprus in 1959. He received the HND in Electrical Engineering from the Higher Technical Institute in Nicosia, Cyprus in 1981. In 1983, he joined the University of Southern California with a scholarship from the US Agency for International Development. He received the BSEE, MS and PhD in Computer Engineering in 1985, 1986 and 1990, respectively.

Currently he is an Associate Professor at the Department of Computer Science at the University of Cyprus. From 1990 to 1994 he was on the Faculty of the Department of Computer Science and Engineering of Southern Methodist University as an Assistant Professor (tenure track). His current research interests are in parallel processing, computer architecture, parallelizing compilers, real-time systems, Java-powered tools for teleworking, parallel processing with mobile agents, and parallel input/output and file systems. Dr. Evripidou is a member of the IFIP Working Group 10.3, the IEEE Computer Society and ACM SIGARCH. He is also a member of the Phi Kappa Phi and Tau Beta Pi honor societies. He was the Program Co-chair and General Co-chair of the International Conference on Parallel Architecture and Compilation Techniques in 1999 and 1995, respectively.

George Samaras received a PhD in Computer Science from Rensselaer Polytechnic Institute, USA in 1989. He is currently an Associate Professor at the University of Cyprus. He was previously at IBM Research Triangle Park, USA and taught at the University of North Carolina at Chapel Hill (adjunct Assistant Professor, 1990–1993).

He served as the lead architect of IBM’s distributed commit architecture (1990–1994) and co-authored the final publication of the architecture (IBM Book, SC31-8134-00, September 1994). He was a member of IBM’s wireless division and participated in the design/architecture of IBM’s WebExpress, a wireless Web browsing system. He recently (1997) co-authored a book on data management for mobile computing (Kluwer Academic Publishers). He has a number of patents relating to transaction processing technology and numerous technical conference and journal publications. His work on utilizing mobile agents for Web-database access received the best paper award of the 1999 IEEE International Conference on Data Engineering (ICDE/99). He has served as a proposal evaluator at national and international levels and he is regularly invited by the European Commission to serve as project evaluator and auditor in areas related to mobile computing and mobile agents. He also served on IBM’s internal international standards committees for issues related to distributed transaction processing (OSI/TP, XOPEN, OMG). His research interests include mobile computing, mobile agents, transaction processing, commit protocols and resource recovery, and real-time systems. He is a voting member of the ACM and IEEE Computer Society.

Christoforos Panayiotou was born in Athens, Greece in 1977. In 1992, he moved to Cyprus, where he concluded his high school education. In 1999, he received the BS in Computer Science from the University of Cyprus. His diploma project was in the area of mobile agents for HPC. Currently he is a graduate student in the Department of Computer Science at the University of Cyprus.

His research interests include Internet technologies, mobile computing and HPC.

Evaggelia Pitoura received her BSc from the Department of Computer Science and Engineering at the University of Patras, Greece in 1990 and her MSc and PhD in Computer Science from Purdue University in 1993 and 1995, respectively. Since September 1995, she has been on the faculty of the Department of Computer Science at the University of Ioannina, Greece.

Her main research interests are data management for mobile computing and multidatabases. Her publications include several journal and conference articles and a recently published book on mobile computing. She received the best paper award at IEEE ICDE 1999 for her work on mobile agents. She has served on a number of program committees and was Program Co-chair of the MobiDE workshop held in conjunction with MobiCom’99.