
Page 1:

WS on Component Models and Systems for Grid Applications, St Malo, June 26, 2004

Adaptation of Legacy Software to Grid Services

Bartosz Baliś, Marian Bubak, and Michał Węgiel

Institute of Computer Science / ACC CYFRONET AGH, Cracow, Poland

[email protected]

Page 2:

Outline

• Introduction – motivation & objectives
• System architecture – static model (components and their relationships)
• System operation – dynamic model (scenarios and activities)
• System characteristics
• Migration framework (implementation)
• Performance evaluation
• Use case & summary

Page 3:

Introduction

Legacy software
• Validated and optimized code
• Follows the traditional process-based model of computation (language & system dependent)
• Scientific libraries (e.g. BLAS, LINPACK)

Service oriented architecture (SOA)
• Enhanced interoperability
• Language-independent interface (WSDL)
• Execution within a system-neutral runtime environment (virtual machine)

Page 4:

Objectives

Originally: adaptation of the OCM-G to GT 3.0

After generalization:
• design of a versatile architecture enabling bridging between legacy software and SOA
• implementation of a framework providing tools that facilitate the process of migration to SOA

[Diagram: a Tool connected via OMIS to a Grid Service wrapping the SM, which communicates via OMIS with LMs on the nodes of a site]

Page 5:

Related Work

• Lack of comprehensive solutions
• Existing approaches possess numerous limitations and fail to meet grid requirements

Kuebler D., Einbach W.: Adapting Legacy Applications as Web Services (IBM)

Main disadvantages: insecurity & inflexibility

[Diagram: Client → Adapter in a Web Service Container → Service Server]

Page 6:

Roadmap

• Introduction – motivation & objectives
• System architecture – static model (components and their relationships)
• System operation – dynamic model (scenarios and activities)
• System characteristics
• Migration framework (implementation)
• Performance evaluation
• Use case & summary

Page 7:

General Architecture

[Diagram: Service Requestor ↔ (SOAP) ↔ Hosting Environment {Registry, Factory, Proxy Factory, Instance, Proxy Instance} ↔ (SOAP) ↔ Legacy System {Master, Monitor, Slave, Service Process}]

Page 8:

Service Requestor

From the client’s perspective, cooperation with legacy systems is fully transparent

Only two services are accessible: factory and instance; the others are hidden

Standard interaction pattern is followed:
• First, a new service instance is created
• Next, method invocations are performed
• Finally, the service instance is destroyed

We assume a thin client approach
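The pattern can be illustrated with a minimal client-side sketch; the type and method names below (LengthFactory, LengthInstance, lookupFactory, the URL) are assumptions for illustration, not the framework's actual generated API:

```java
// A minimal sketch of the create -> invoke -> destroy pattern seen by a thin client.
interface LengthInstance {
    int length(String s) throws Exception; // externally visible operation
    void destroy() throws Exception;       // releases the backend monitor/slave pair
}

interface LengthFactory {
    LengthInstance createInstance() throws Exception; // triggers master assignment via the registry
}

public class ClientSketch {
    public static void main(String[] args) throws Exception {
        LengthFactory factory = lookupFactory("http://example.org/services/LengthFactory");
        LengthInstance instance = factory.createInstance();   // 1. create a new service instance
        System.out.println(instance.length("hello"));         // 2. perform method invocations
        instance.destroy();                                    // 3. destroy the service instance
    }

    // Stand-in for whatever stub-lookup mechanism the generated client code would provide;
    // here it returns a local dummy so the sketch runs on its own.
    static LengthFactory lookupFactory(String url) {
        return () -> new LengthInstance() {
            public int length(String s) { return s.length(); }
            public void destroy() { }
        };
    }
}
```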

Page 9:

Legacy System (1/4)

Constitutes an environment in which legacy software resides and is executed

Responsible for actual request processing

Hosts three types of processes: master, monitor and slave, which jointly provide a wrapper encapsulating the legacy code

Fulfills the role of a network client when communicating with the hosting environment (thus no open ports are introduced and process migration is possible)
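A minimal sketch of this pull-based interaction, assuming hypothetical fetchRequest/deliverResponse operations on the proxy instance (not the framework's actual API):

```java
// The legacy-side slave acts as a network client: it polls the proxy instance for
// pending requests instead of listening on an open incoming port. Names are assumptions.
class SlavePollLoopSketch implements Runnable {
    interface ProxyInstance {
        String fetchRequest() throws InterruptedException; // blocks until a client call is queued
        void deliverResponse(String result);               // pushes the result back to the hosting environment
    }

    private final ProxyInstance proxy;
    SlavePollLoopSketch(ProxyInstance proxy) { this.proxy = proxy; }

    public void run() {
        try {
            while (true) {
                String request = proxy.fetchRequest();      // outgoing call only; no inbound connection
                proxy.deliverResponse(process(request));
            }
        } catch (InterruptedException shutdown) {
            // stop requested (e.g. before migration)
        }
    }

    private String process(String request) { return request; } // stands in for the legacy computation
}
```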

Page 10:

Legacy System (2/4)

[Diagram: Legacy System — Master creates the Monitor and Slave; the Monitor controls the Slave]

Master: one per host, permanent process; responsible for host registration and for creation of the monitor and slave processes

Page 11:

Legacy System (3/4)

Monitor: one per client, transient process; responsible for reporting about and controlling the associated slave process

Page 12:

Legacy System (4/4)

Slave: one per client, transient process; provides means of interface-based stateful conversation with the legacy software

Page 13:

Hosting Environment (1/5)

Maintains a collection of grid services which encapsulate interaction with legacy systems

Provides a layer of indirection shielding the service requestors from collaboration with backend hosts

Responsible for mapping between clients and slave processes (one-to-one relationship)

Mediates communication between service requestors and legacy systems
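A sketch of the one-to-one mapping between clients and slave processes maintained in the hosting environment; the key and value types here are illustrative assumptions:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;

// One-to-one association between a client (service instance) and its backend slave process.
class ClientSlaveMapping {
    private final ConcurrentMap<String, String> slaveByClient = new ConcurrentHashMap<>();

    void bind(String clientId, String slaveEndpoint) { slaveByClient.putIfAbsent(clientId, slaveEndpoint); }
    String slaveFor(String clientId) { return slaveByClient.get(clientId); }
    void unbind(String clientId) { slaveByClient.remove(clientId); }
}
```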

Page 14:

Hosting Environment (2/5)

[Diagram: Hosting Environment — permanent services: Registry, Factory, Proxy Factory; transient services: Instance, Proxy Instance]

Registry: one per service, permanent; keeps track of backend hosts which registered to participate in computations

Page 15:

Hosting Environment (3/5)

Factory and Proxy Factory: one per service, permanent; responsible for creation of the corresponding instances

Page 16:

Hosting Environment (4/5)

Instance: one per client, transient; directly called by the client, provides the externally visible functionality

Page 17:

Hosting Environment (5/5)

Proxy Instance: one per client, transient; responsible for mediation between the backend host and the service client

Page 18:

Roadmap

• Introduction – motivation & objectives
• System architecture – static model (components and their relationships)
• System operation – dynamic model (scenarios and activities)
• System characteristics
• Migration framework (implementation)
• Performance evaluation
• Use case & summary

Page 19:

Resource Management (1/2)

Resources = processes (master/monitor/slave)

Registry service maintains a pool of master processes which can be divided into:
• static part – configured manually by site administrators (system boot scripts)
• dynamic part – managed by means of the job submission facility (GRAM)

Optimization: coarse-grained allocation and reclamation performed in advance in the background (efficiency, smooth operation)
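A sketch of how the registry's pool of master processes might be represented, with the static/dynamic split and load-based assignment; all class and method names are illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Pool of master processes known to the registry: a static part configured by site
// administrators and a dynamic part started through the job submission facility (GRAM).
class MasterPool {
    enum Origin { STATIC, DYNAMIC }

    static class MasterEntry {
        final String host;
        final Origin origin;
        double cpuLoad; // last load reported by the volunteering master
        MasterEntry(String host, Origin origin) { this.host = host; this.origin = origin; }
    }

    private final List<MasterEntry> pool = new ArrayList<>();

    synchronized void register(MasterEntry e) { pool.add(e); } // master volunteers / is pre-allocated

    // Pick the least loaded host when a client has to be assigned to a master.
    synchronized Optional<MasterEntry> assignLeastLoaded() {
        return pool.stream().min(Comparator.comparingDouble((MasterEntry m) -> m.cpuLoad));
    }
}
```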

Page 20:

Resource Management (2/2)

Coarse-grained resource = master process

Fine-grained resource = monitor & slave process

[Diagram: coarse-grained allocation (steps c.1–c.5) of master processes involving the Registry, Resource Broker, Information Services, Data Management and Job Submission; fine-grained allocation (steps f.1–f.2) of monitor/slave processes via the Registry and Master]

Page 21:

Invocation patterns

Apart from the synchronous and sequential mode of method invocation, our solution supports:

1. Asynchronism – assumed to be embedded in the legacy software; our approach: the invocation returns immediately and a separate thread blocks on a complementary call, waiting for the output data to appear (see the sketch after this list)

2. Concurrency – slave processes handle each client request in a separate thread

3. Transactions - the most general model of concurrent nested transactions is assumed
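A client-side sketch of the asynchronous pattern from point 1, assuming hypothetical startCompute/getComputeResult operations split out of a single logical call (the real generated stubs may differ):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Future;

// startCompute() returns immediately; a separate waiter thread blocks on the
// complementary getComputeResult() call until the backend delivers the output.
class AsyncCallSketch {
    interface AsyncInstance {
        void startCompute(String input) throws Exception; // non-blocking submission
        String getComputeResult() throws Exception;       // blocks until the output appears
    }

    static Future<String> invokeAsync(AsyncInstance instance, String input, ExecutorService pool)
            throws Exception {
        instance.startCompute(input);                      // returns immediately
        return pool.submit(instance::getComputeResult);    // waiter thread blocks on the complementary call
    }
}
```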

Page 22:

Legacy Side Scenarios (1/2)

1. Client assignment - the master process repetitively volunteers to participate in request processing (reporting the host CPU load). When the registry service assigns a client before a timeout occurs, new monitor and slave processes are created.

2. Request processing – embraces input retrieval, request processing and output delivery.

3. System self-monitoring - the monitor process periodically reports to the proxy instance about the status of the slave process and current CPU load statistics (both system- and slave-related); a sketch of this loop follows below.
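A sketch of the self-monitoring loop from point 3, assuming a hypothetical reportStatus operation on the proxy instance and an illustrative heartbeat period:

```java
// Periodic heartbeat sent by the monitor process to the proxy instance.
// The reportStatus signature and the 5-second period are assumptions.
class MonitorLoopSketch implements Runnable {
    interface ProxyInstance {
        void reportStatus(boolean slaveAlive, double systemLoad, double slaveLoad);
    }

    private final ProxyInstance proxy;
    MonitorLoopSketch(ProxyInstance proxy) { this.proxy = proxy; }

    public void run() {
        while (!Thread.currentThread().isInterrupted()) {
            proxy.reportStatus(slaveAlive(), systemLoad(), slaveLoad());
            try {
                Thread.sleep(5_000);            // heartbeat period (assumed)
            } catch (InterruptedException e) {
                return;                          // shutdown requested
            }
        }
    }

    // Placeholders for the actual probes of the slave process and of the host.
    private boolean slaveAlive() { return true; }
    private double systemLoad()  { return 0.0; }
    private double slaveLoad()   { return 0.0; }
}
```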

Page 23:

Legacy Side Scenarios (2/2)

[Sequence diagram: Registry, Master, Monitor, Slave, Proxy Instance — Assign ([success]/[timeout]), Create, Heartbeat ([continue]/[migration]), Request/Response, Destroy]

Page 24:

Client Side Scenarios (1/2)

1. Instance construction - involves two steps:
• Creation of the associated proxy instance
• Assignment of one of the currently registered master processes

2. Method invocation - client call is forwarded to the proxy instance, from where it is fetched by the associated slave process; the requestor is blocked until the response arrives.

3. Instance destruction - destruction request is forwarded to the associated proxy instance.

Page 25:

Client Side Scenarios (2/2)

[Sequence diagram: Factory, Proxy Factory, Registry, Instance, Proxy Instance — Create/New, Assign, Invoke, Destroy]

Page 26:

Process Migration (1/5)

Indispensable when we need to:
• dynamically offload work onto idle machines (automatic load-balancing)
• silently mask recovery from system failures (transparent fail-over)

Challenges: state extraction & reconstruction

Low-level approach
• Suitable only for a homogeneous environment (e.g. a cluster of workstations)
• Supported by our solution since legacy systems act as clients rather than servers

Page 27:

Process Migration (2/5)

High-level approach
• Can be employed in a heterogeneous environment
• State restoration is based on the combination of checkpointing and repetition of the short-term method invocation history
• Requires additional development effort (state serialization, snapshot dumping and loading)

Proxy instance initiates high-level recovery upon detection of failure (lost heartbeat) or overload

Only slave and monitor processes are transferred onto another computing node

Page 28:

Process Migration (3/5)

Selection of the optimal state reconstruction scenario is based on the transaction flow and checkpoint sequence (multiple state snapshots are recorded and the one enabling the fastest recovery procedure is chosen)

[Timeline diagram: transactions marked Committed, Aborted or Unfinished between a checkpoint and the failure point; during recovery some transactions are omitted and others are repeated]

Page 29:

Process Migration (4/5)

CPU load generated by the slave process (as reported by the monitor process) is approximated as a function of time and used to estimate the cost of invocations

[Plot: CPU load [%] as a function of time [ms], sampled at probe points between t1 and t2]

c = f · ∫_{t1}^{t2} l(t) dt

where c – total cost, f – frequency, l – CPU load, t – time
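A numeric illustration of the formula above: the load samples gathered by the monitor at the probe points can be integrated with, for example, the trapezoidal rule (sampling details and names are assumptions):

```java
// Estimates c = f * integral of l(t) dt over [t1, t2] from evenly spaced load samples.
class InvocationCost {
    /**
     * @param loadSamples CPU load values l(t) at consecutive probe points (e.g. 0.0-1.0)
     * @param dtMs        spacing between probe points in milliseconds
     * @param frequency   the f factor from the formula
     */
    static double cost(double[] loadSamples, double dtMs, double frequency) {
        double integral = 0.0;
        for (int i = 1; i < loadSamples.length; i++) {
            integral += 0.5 * (loadSamples[i - 1] + loadSamples[i]) * dtMs; // trapezoidal rule
        }
        return frequency * integral;
    }
}
```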

Page 30:

Process Migration (5/5)

In case of concurrent method invocations, emulation of the synchronization mechanisms employed on the client side is necessary:
• Timing data is gathered (method invocation start & end timestamps)
• If two operations overlapped in time, they are executed concurrently (otherwise sequentially)

Prerequisite: repeatable invocations (unless the system state has changed, identical results are expected for the same input data).
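A sketch of the overlap test implied above: two recorded invocations are replayed concurrently only if their time intervals intersect (field names are assumptions):

```java
// Recorded timing data for one method invocation, used when replaying the call history.
class RecordedCall {
    final long startMillis;
    final long endMillis;

    RecordedCall(long startMillis, long endMillis) {
        this.startMillis = startMillis;
        this.endMillis = endMillis;
    }

    // True if the two invocations overlapped in time and must be replayed concurrently.
    static boolean overlapped(RecordedCall a, RecordedCall b) {
        return a.startMillis < b.endMillis && b.startMillis < a.endMillis;
    }
}
```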

Page 31:

Roadmap

• Introduction – motivation & objectives
• System architecture – static model (components and their relationships)
• System operation – dynamic model (scenarios and activities)
• System characteristics
• Migration framework (implementation)
• Performance evaluation
• Use case & summary

Page 32:

System Features (1/3)

Non-functional requirements:
• QoS-related (the manner in which service provisioning takes place): performance & dependability
• TCO-related (expenses incurred by system maintenance): scalability & expandability

Efficiency – coarse-grained resource allocation; pool of master processes always reflects actual needs; algorithms have linear time complexity; checkpointing and transactions jointly allow for selection of optimal recovery scenario.

Page 33:

System Features (2/3)

Availability – fault-tolerance based on both low-level and high-level process migration; failure detection and self-healing; checkpointing allows for robust error recovery; in the worst case A = 50% (when the whole call history needs to be repeated we have MTTF = MTTR).
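As a clarifying note (not from the slides): with steady-state availability A = MTTF / (MTTF + MTTR), the case MTTF = MTTR indeed gives A = 50%.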

Security – no open incoming ports on backend hosts are introduced; authentication of legacy systems is possible; we rely upon the grid security infrastructure provided by the container.

Page 34:

System Features (3/3)

Scalability - processing is highly distributed and parallelized (all tasks are always delegated to legacy systems); load balancing is guaranteed (by registry and proxy instance); job submission mechanism is exploited (resource brokering).

Versatility - no assumptions are made as regards programming language or run-time platform; portability; non-intrusiveness (no legacy code alteration needed); standards-compliance and interoperability.

Page 35:

Migration Framework (1/2)

Code-named L2G (Legacy To Grid)

Based on GT 3.2 (hosting environment) and gSOAP 2.6 (legacy system)

Objective: to facilitate the adaptation of legacy C/C++ software to GT 3.2 services by automatic code generation (with particular emphasis on ease of use and universality)

Structural and operational compliance with the proposed architecture

Served as a proof of concept of our solution

Page 36:

Migration Framework (2/2)

Most typical development cycle:
1. Programmer specifies the interface that will be exposed by the deployed service (Java) – see the sketch below the list
2. Source code generation takes place (Java/C++/XML/shell scripts)
3. Programmer provides the implementation for the methods on the legacy system side (C++)
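For illustration, step 1 might look like the following hypothetical service interface; the operation mirrors the one used later in the performance benchmark, and the name LengthService is an assumption:

```java
// Step 1 of the development cycle: the Java interface to be exposed by the deployed service.
// The generation step would derive the grid service, proxy and legacy-side skeletons from it.
public interface LengthService {
    int length(String s);
}
```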

Support for process migration, checkpointing, transactions and MPI (the parallel machine consists of multiple slave processes, one of which is in charge of communication with the proxy instance)

Page 37:

Roadmap

• Introduction – motivation & objectives
• System architecture – static model (components and their relationships)
• System operation – dynamic model (scenarios and activities)
• System characteristics
• Migration framework (implementation)
• Performance evaluation
• Use case & summary

Page 38:

Performance evaluation (1/5)

Benchmark: comparison of two functionally equivalent grid services (with the same interface), one of which was dependent on a legacy system

Both services were exposing a single operation:

int length(String s);

Time measurement was performed on the client side; all components were located on a single machine; no security mechanism was employed; relative overhead was estimated

Page 39:

Performance evaluation (2/5)

[Plot: invocation time [ms] vs. message length [kB], for the legacy service and the ordinary service]

Measurement results for method invocation

time = length/bandwidth + latency

Page 40:

Performance evaluation (3/5)

[Plot: time [s] vs. number of iterations, for the legacy service and the ordinary service]

Measurement results for instance construction

time = iterations/throughput

Page 41:

Performance evaluation (4/5)

[Plot: time [s] vs. number of iterations, for the legacy service and the ordinary service]

Measurement results for instance destruction

time = iterations/throughput

Page 42:

Performance evaluation (5/5)

Method invocation

Quantity    Ordinary service   Legacy service   Relative change
Latency     15.4 ms            37.8 ms          increased 2.5x
Bandwidth   909.1 kB/s         370.4 kB/s       reduced 2.5x

Instance construction and destruction

Scenario      Ordinary service    Legacy service     Relative change
Construction  6.2 iterations/s    2.0 iterations/s   reduced 3.1x
Destruction   25.4 iterations/s   12.2 iterations/s  reduced 2.1x

Page 43:

Use Case: OCM-G

Grid application monitoring system composed of two components: Service Manager (SM) and Local Monitor (LM), compliant with the OMIS interface

[Diagram: Instance, Proxy Instance and Slave communicate over SOAP; MCI connects the Slave with the SM and the SM with LMs running on the Site's nodes]

Page 44:

Summary

We elaborated a universal architecture enabling the integration of legacy software into the grid services environment

We demonstrated how to implement our concept on top of existing middleware

We developed a framework (comprising a set of command-line tools) which automates the process of migrating C/C++ codes to GT 3.2

Further work: WSRF, message-level security, optimizations, support for real-time applications

Page 45:

More info

www.icsr.agh.edu.pl/lgf/

see also www.eu-crossgrid.org and www.cyfronet.krakow.pl/ICCS2004/