Collaboration Grid Work at Anabas and Community Grids Laboratory, Indiana University
July 30, 2007
Geoffrey Fox, Marlon Pierce
Computer Science, Informatics, Physics
Community Grids Laboratory, Indiana University, Bloomington IN 47404
Rui Wang, Alex Ho, Geoffrey Fox
Anabas Inc., Bloomington and San Francisco
[email protected]
http://www.infomall.org
Community Grids Laboratory Technology Expertise
Web Service and Web 2.0 technologies for “Broad Grids”
• Open Grid Forum Web Service architectures
• Integrate ideas from Flickr, Connotea, Slideshare and YouTube into large-scale systems
• Need to build “Broad Grids of Narrow Grids” (systems of systems)
Geographical Information Systems in Grids; streaming sensor data (including audio-video streams); portals; multicore parallel computing
Community Grids Laboratory
Funded by NSF, NASA, NIH, DoE and DoD
Cheminformatics – high-throughput screening data and filtering; PubChem, PubMed including document analysis
Interactive physics data analysis
Earthquake science
Sensor Grid for GPS (global positioning system)
eSports collaboration in real time for trainers and sportsmen, with HPER, the IU School of Health, Physical Education, and Recreation
Ice sheet dynamics – melting of glaciers
Navajo Nation Grid
Education (Science Gateways) and healthcare
• Web 2.0 tutorial and distance education course, spring 2007
Minority outreach – working with national organizations representing 335 Minority Serving Universities/Colleges
eScience VP for Open Grid Forum
Anabas Collaboration Systems (Impromptu)
• Similar in goal to Webex, but with a scalable event-based architecture using a publish-subscribe model
Works with Community Grids Laboratory on uses of Grids in DoD
• Analysis of Net Centric Operations (NCOW)
• Analysis of FLTC
SBIR shifted from Grid information systems to Sensor Grids based around a network of Grid agents
• Automate construction of a Grid from a library of services and dynamically discovered services
• Fault-tolerant operation
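The essence of a scalable event-based collaboration architecture can be sketched in a few lines. The sketch below is illustrative only; the class, method and topic names are invented, not the Anabas API:

```python
# Minimal publish-subscribe event bus, the pattern behind an event-based
# collaboration system like Impromptu. Publishers and subscribers are
# decoupled: they only share a topic name.
from collections import defaultdict
from typing import Callable, Dict, List

class EventBus:
    """Routes events by topic; publishers never address subscribers directly."""
    def __init__(self) -> None:
        self._subs: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subs[topic].append(handler)

    def publish(self, topic: str, event: dict) -> int:
        """Deliver the event to every subscriber of the topic; return the count."""
        handlers = self._subs.get(topic, [])
        for h in handlers:
            h(event)
        return len(handlers)

# Two participants subscribe to the same whiteboard topic; one publish
# reaches both.
bus = EventBus()
received: List[str] = []
bus.subscribe("session1/whiteboard", lambda e: received.append("alice:" + e["stroke"]))
bus.subscribe("session1/whiteboard", lambda e: received.append("bob:" + e["stroke"]))
delivered = bus.publish("session1/whiteboard", {"stroke": "line(0,0,10,10)"})
```

This is what makes the architecture scale: adding a participant is one more subscription, not a new point-to-point link between every pair of clients.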
Essential Ideas
Distributed software systems are being “revolutionized” by developments from e-commerce, e-Science and the consumer Internet. There is rapid progress in the technology families termed “Web services”, “Grids” and “Web 2.0”.
Many of these developments have important implications for collaboration, both in terms of core technology and capabilities.
The emerging picture is of distributed services with advertised interfaces but opaque implementations, communicating by streams of messages over a variety of protocols.
• Complete systems are built by combining either services or Grids (predefined/pre-existing collections of services) to achieve new capabilities
The Three Technology Families
Web Services have clearly defined protocols (SOAP) and a well-defined mechanism (WSDL) to define service interfaces.
• There is good .NET and Java support
• The so-called WS-* specifications provide a rich, sophisticated standard set of capabilities for security, fault tolerance, metadata, discovery, notification etc.
Web Service (OGF) Grids build on Web Services and provide a robust managed environment, with growing adoption in enterprise systems and distributed science (so-called e-Science).
Web 2.0 supports a similar architecture to Web services but has developed in a more chaotic yet remarkably successful fashion, with a service architecture using a variety of protocols including those of Web and Grid services.
• Over 350 interfaces are defined at http://www.programmableweb.com/apis
Web 2.0 also has many well-known capabilities, with Google Maps and the Amazon compute/storage services of clear general relevance to DoD.
There are also Web 2.0 services supporting novel collaboration modes, as seen in social networking sites and portals such as MySpace and YouTube.
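To make the SOAP/WSDL machinery concrete, here is a hedged sketch of building a minimal SOAP 1.1 envelope with only the Python standard library. The operation name `GetMapTile` and the namespace `urn:example:gis` are invented for illustration; only the SOAP envelope namespace is real:

```python
# Build a minimal SOAP 1.1 request envelope for a hypothetical service
# operation, using xml.etree from the standard library.
import xml.etree.ElementTree as ET

SOAP_ENV = "http://schemas.xmlsoap.org/soap/envelope/"  # real SOAP 1.1 namespace

def make_soap_request(operation: str, ns: str, params: dict) -> bytes:
    """Wrap an operation element and its parameters in Envelope/Body."""
    envelope = ET.Element(f"{{{SOAP_ENV}}}Envelope")
    body = ET.SubElement(envelope, f"{{{SOAP_ENV}}}Body")
    op = ET.SubElement(body, f"{{{ns}}}{operation}")
    for name, value in params.items():
        ET.SubElement(op, f"{{{ns}}}{name}").text = str(value)
    return ET.tostring(envelope)

# Hypothetical GIS call: the names are illustrative, not a real service.
msg = make_soap_request("GetMapTile", "urn:example:gis", {"zoom": 3, "x": 5, "y": 7})
```

A WSDL document would describe exactly this message shape (operation name, parameter types, namespace), which is what lets .NET and Java toolkits generate client stubs automatically.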
The Three Service Technologies and DoD
The Web Service, OGF Grid and Web 2.0 approaches differ in important detail, but their broad architectures are similar, so it is possible to use them all in DoD applications.
We expect growing support with rich functionality for all three technology approaches. This, plus the broad interoperability enabled by a service architecture, has important implications for capabilities and for ease of maintenance and upgrade of DoD systems built on these broad-based service technologies.
Anabas analyzed the Net Centric NCOW specifications in detail and showed how they could be mapped into Web and Grid services.
• This included the NCOW Core Enterprise Services as well as Sensor Grids and the NCOW Data Model
• Anabas also addressed how one could achieve a managed, consistent architecture with the intrinsically distributed architecture
Anabas SBIR Approach
This is in collaboration with the Community Grids Laboratory at Indiana University.
We follow the OGF Grid architecture and use Web services for all capabilities; if one needs a capability like Google Maps from Web 2.0, it is wrapped as a Web Service (and in fact made to use an Open Geospatial Consortium interface).
We use the powerful open-source publish-subscribe messaging environment NaradaBrokering to provide collaboration (via software overlay networks) and fault tolerance.
• The same software supports both Web Service messaging (TCP) and audio-video conferencing (UDP)
We package collections of services as Grids that provide a particular composite capability, such as hosting a sensor Grid or supporting one or more collaboration functions.
We are improving core Anabas collaboration technology to support shared video, which Webex does not support well.
We provide the Grid Builder tool to build Grids by composing other Grids together and to manage them dynamically.
We provide a sensor Grid architecture and will demonstrate it with many types of sensors.
Comparison of Web 2.0 and Grids
See http://grids.ucs.indiana.edu/ptliupages/presentations/CTSpartIMay21-07.ppt
Architecture of Streaming Grids of Grids
Also describing the underlying messaging system NaradaBrokering and how message multicast enables collaboration.
[Diagram: a streaming Grid of Grids. Sensor Services (SS), some belonging to other Grids, feed chains of Filter Services (FS); Other Services (OS) and MetaData (MD) services surround the chain, and results reach a Portal. SOAP messages carry content along the pipeline Raw Data → Data → Information → Knowledge → Wisdom → Decisions, with a database and further Grids and services attached.]
Grid Service Philosophy I
Services receive data in SOAP messages, manipulate it, and produce transformed data as further messages.
Knowledge is created from information by services.
• Information is created from data by services
The Semantic Grid comes from building metadata-rich systems of services.
Metadata is carried in SOAP messages.
The Grid enhances Web services with semantically rich system- and application-specific management.
One must exploit, and work around, the different approaches to metadata (state) and their manipulation in Web Services.
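The data → information → knowledge chain above can be sketched as a pair of filter services. Plain dicts stand in for SOAP messages, and the services and threshold are invented for illustration:

```python
# Each filter service takes a message and returns a new message one level
# higher in the chain Raw Data -> Information -> Knowledge.

def data_to_information(msg: dict) -> dict:
    """Filter service: reduce raw sensor samples to summary statistics."""
    samples = msg["samples"]
    return {"mean": sum(samples) / len(samples), "n": len(samples)}

def information_to_knowledge(msg: dict, threshold: float = 50.0) -> dict:
    """Filter service: turn summary statistics into an actionable flag."""
    return {"alert": msg["mean"] > threshold, "mean": msg["mean"]}

raw = {"samples": [40.0, 60.0, 80.0]}       # raw-data message from a sensor service
info = data_to_information(raw)             # information message
knowledge = information_to_knowledge(info)  # knowledge message feeding a decision
```

Because each stage only consumes and produces messages, stages can be deployed, replaced and replicated independently, which is the point of the philosophy.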
Grid Service Philosophy II
There is a horde of support services supplying security, collaboration, database access, and user interfaces.
The support services are associated with either the system or the application; the former are covered by the WS-* and GS-* specifications, which implicitly or explicitly define many support services.
There are generalized filter services: applications that accept messages and produce new messages with data derived from the input.
• Simulations (including PDEs and reactive systems), data mining, transformations, agents, and reasoning are all termed filters here
Agent systems are a special case of Grids.
Peer-to-peer systems can be built as a Grid with particular discovery and messaging strategies.
Grid Service Philosophy III
Filters can be workflows, which means they are “just collections of other simpler services”.
Grids are distributed systems that accept distributed messages and produce distributed result messages.
A service or a workflow is a special case of a Grid.
A collection of services on a multicore chip is a Grid.
Sensors and instruments are “managed” by services; they may accept non-SOAP control messages and produce data as messages (that are not usually SOAP).
Collaborative services share either input ports (replicated model) or output ports.
Collaboration involves a shared messaging system (naturally publish-subscribe) and a control formalism (XGSP, a SOAP-compatible counterpart of H.323/SIP).
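The two sharing models can be sketched with plain callables standing in for Web Services; all names here are illustrative:

```python
# Shared input port vs shared output port collaboration.
from typing import Callable, List

def render(msg: str) -> str:
    """A deterministic 'Web Service' whose output drives a display."""
    return f"<frame>{msg}</frame>"

def shared_input_port(services: List[Callable[[str], str]], msg: str) -> List[str]:
    """Replicated-WS model: the input message is replicated to every
    participant's own service instance, each computing its own output."""
    return [svc(msg) for svc in services]

def shared_output_port(service: Callable[[str], str], msg: str, n: int) -> List[str]:
    """Single-WS model: one service runs once; its output message is
    replicated to every participant's viewer."""
    out = service(msg)
    return [out] * n

a = shared_input_port([render, render, render], "slide 5")
b = shared_output_port(render, "slide 5", 3)
```

With a deterministic service both models show every participant the same display; the trade-off is that shared input needs a service instance (and consistent state) per participant, while shared output needs only one instance but replicates possibly large output messages.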
[Diagram: the same streaming Grid-of-Grids pipeline, now with a second Portal. Collaboration is obtained by message replication: a stream is duplicated inside the filter chain so that multiple portals receive identical SOAP messages.]
Shared Input Port (Replicated WS) Collaboration, with UFIO as user-facing and SFIO as service-facing ports.
[Diagram: an Event (Message) Service replicates the master's input to a Web Service instance per participant; each instance, through its user-facing (UFIO) and service-facing (SFIO) input/output ports, drives its own WS Viewer and WS Display.]
Shared Output Port (Single WS) Collaboration, which can be shared at any point on the visualization pipeline.
[Diagram: a single Web Service reads the application or content source (described by WSDL); the Event (Message) Service replicates its output to the WS Viewer and WS Display of the master and of each other participant.]
[Diagram: the streaming Grid-of-Grids pipeline once more, showing collaboration by message replication at any point in the filter chain; the replicated branch feeds a second Portal. Shared Display is the “last” filter.]
NaradaBrokering 2003-2006
Messaging infrastructure for collaboration, peer-to-peer systems and Grids.
Implements JMS and native high-performance protocols (message transit time of 1 to 2 ms per hop).
Order-preserving message transport with QoS and security profiles.
Support for different underlying transports such as TCP, UDP, Multicast, RTP.
SOAP message support and WS-Eventing, WS-RM and WS-Reliability.
Active replay support: pause and replay live streams.
Stream linkage: can permanently link multiple streams – used in annotation of real-time video streams.
Replicated storage support for fault tolerance and resiliency to storage failures.
Management: scripting interface to streams and brokers (uses WS-Management) for initialization, firewall issues and fault tolerance.
Broker topics and message discovery: locate appropriate topics and brokers.
High-performance transport supporting the SOAP Infoset for GIS applications.
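Two of the capabilities listed above, topic-based routing and active replay for late joiners, can be sketched generically. This is not the NaradaBrokering API; the class and method names are invented:

```python
# A broker that logs every published event per topic, so that a late
# subscriber can replay the stream's history before receiving live events.
from collections import defaultdict

class RecordingBroker:
    def __init__(self):
        self._subs = defaultdict(list)   # topic -> subscriber callbacks
        self._log = defaultdict(list)    # topic -> every event ever published

    def subscribe(self, topic, callback, replay=False):
        """A late client may ask for replay of the archived stream first."""
        if replay:
            for event in self._log[topic]:
                callback(event)
        self._subs[topic].append(callback)

    def publish(self, topic, event):
        self._log[topic].append(event)   # persist for replay / archiving
        for cb in self._subs[topic]:
            cb(event)

broker = RecordingBroker()
early, late = [], []
broker.subscribe("video/stream1", early.append)
broker.publish("video/stream1", "frame-1")
broker.publish("video/stream1", "frame-2")
# A late joiner replays the archived stream, then receives live events.
broker.subscribe("video/stream1", late.append, replay=True)
broker.publish("video/stream1", "frame-3")
```

In a real system the log would live in replicated storage (for the fault tolerance noted above) rather than in memory.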
[Plot: mean transit delay and its standard deviation (milliseconds, both in the 0-5 ms range) versus content payload size (bytes), for round-trip delays with payload sizes of 100 B to 10 KB.]
These measurements are messages sent from client to broker and back, using the latest Java 1.6 release, which has about twice the performance of earlier releases.
[Plot: mean transit delay (0-70 ms) and its standard deviation (0-20 ms) versus content payload size (bytes), for round-trip delays with payload sizes of 100 B to 1 MB.]
These measurements are messages sent from client to broker and back, using the latest Java 1.6 release, which has about twice the performance of earlier releases. This graph is identical to the previous one for small messages.
Average Video Delays
UDP performance when NaradaBrokering is used for audio-video conferencing.
[Plot: latency (ms) versus number of receivers at 30 frames/sec, comparing one session with multiple sessions.]
What is a Simple Service?
Take any system – it has multiple functionalities.
• We can implement each functionality as an independent distributed service
• Or we can bundle multiple functionalities in a single service
Whether a functionality is an independent service or one of many method calls into a “glob of software”, we can always expose it as a Web service by converting its interface to WSDL.
Simple services are obtained by taking functionalities and making them as small as possible, subject to the “rule of the millisecond”:
• Distributed services incur a messaging overhead of one (local) to hundreds (far apart) of milliseconds when using a message rather than a method call
• Use scripting or compiled integration of functionalities ONLY when interaction latency under 1 millisecond is required
The Apache web site has many (pre-Web Service) projects that are multiple functionalities presented as (Java) globs and NOT as (Java) simple services.
• This makes it hard to integrate them sharing common security, user profile, file access etc. services
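The rule of the millisecond is just a latency-budget comparison. The sketch below encodes it as arithmetic; the latency figures are the illustrative ranges quoted above, not measurements:

```python
# Decide whether a functionality can be its own distributed service:
# the messaging overhead must fit inside the latency budget of its callers.

METHOD_CALL_US = 1     # in-process method call: roughly microseconds
LOCAL_MESSAGE_MS = 1   # co-located services: ~1 ms per message
WAN_MESSAGE_MS = 100   # far-apart services: hundreds of ms per message

def can_be_distributed(required_latency_ms: float, message_cost_ms: float) -> bool:
    """True if the messaging overhead fits the required interaction latency."""
    return message_cost_ms <= required_latency_ms

# A codec inner loop needing 0.1 ms interactions must stay compiled-in;
# a document-filtering step tolerating 500 ms can be a remote service.
codec_ok = can_be_distributed(0.1, LOCAL_MESSAGE_MS)
filter_ok = can_be_distributed(500, WAN_MESSAGE_MS)
```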
Grids of Grids of Simple Services
• Link via methods, messages, streams
• Services and Grids are linked by messages
• Internally to a service, functionalities are linked by methods
• A simple service is the smallest Grid
• We are familiar with the method-linked hierarchy: Lines of Code → Methods → Objects → Programs → Packages
[Diagram: analogous hierarchies overlay and compose Grids of Grids — Methods → Services → Component Grids; CPUs → Clusters and MPPs → Compute Resource Grids; Databases → Federated Databases, and Sensors → Sensor Nets → Data Resource Grids.]
Component Grids?
So we build collections of Web Services which we package as component Grids:
• Visualization Grid
• Sensor Grid
• Utility Computing Grid
• Collaboration Grid
• Earthquake Simulation Grid
• Control Room Grid
• Crisis Management Grid
• Drug Discovery Grid
• Bioinformatics Sequence Analysis Grid
• Intelligence Data-mining Grid
We build bigger Grids by composing component Grids using the Service Internet.
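Composition of component Grids can be sketched as recursive aggregation: a Grid is a named collection of services or of other Grids. The builder below is a toy illustration, not the Anabas Grid Builder tool, and the service names are invented:

```python
# A Grid is a named collection whose parts are simple services (strings)
# or nested Grids; composing Grids is just nesting them.

class Grid:
    def __init__(self, name, parts):
        self.name = name
        self.parts = parts

    def services(self):
        """Flatten the composition to the set of simple services it contains."""
        out = set()
        for p in self.parts:
            out |= p.services() if isinstance(p, Grid) else {p}
        return out

sensor_grid = Grid("Sensor Grid", ["gps-feed", "filter"])
collab_grid = Grid("Collaboration Grid", ["audio-mixer", "shared-display"])
# A bigger Grid composed from component Grids plus one extra service:
crisis_grid = Grid("Crisis Management Grid", [sensor_grid, collab_grid, "gis-map"])
```

The recursive structure is why "a simple service is the smallest Grid": a one-service Grid and a Grid of Grids expose the same interface to composition.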
[Diagram: Grid-of-Grids layering. Core low-level Grid services — 3: Messaging, 4: Notification, 5: Workflow, 6: Security, 7: Discovery, 8: Metadata, 9: Management, 10: Policy, 11: Portals, 12: Computing, 13: Data Access/Storage, 14: Information/Instrument/Sensor, 17: Collaboration, 18: Scheduling — sit above the physical network (monitored by FS16) and below 15: Application Services (screening tools and quantum calculations for a Chemical Informatics Grid; sequencing tools and biocomplexity simulations for a BioInformatics Grid) and other domain-specific Grids/Services.]
Using the Grid of Grids and Core Services to build multiple application Grids, re-using common components.
Net-Centric and Critical Infrastructure (CI) Grids built as Grids of Grids, re-using subGrids
[Diagram: a Net Centric Grid, a Flood CIGrid, an Electricity CIGrid, etc. are each assembled from domain services and filters (military, flood, ...) plus shared subGrids (Collaboration Grid, Sensor Grid, GIS Grid, Compute Grid, Information Management Grid) and Core Grid Services (registry, metadata, security, workflow, notification, messaging, data access/storage, portals) over the physical network.]
Mediation and Transformation in a Grid of Grids and Simple Services
[Diagram: several subgrids or services, each with internal interfaces and external-facing ports, exchange messages through a messaging substrate that hosts mediation and transformation services.]
Technology Nuggets Produced for Collaboration Grids
• Group support in the Anabas Collaboration Framework
• Hybrid Shared Display
• GlobalMMCS, a collaboration system built using services and publish-subscribe messaging
• Improved Java Media Framework
Collaborative Groups Illustrated in Anabas Impromptu
Examples of applications: private discussions in a conference or lecture; simultaneous breakout groups; multiple broadcasts in the same session (e.g. audio/voice or video/TV channels for user-defined groups, such as those with a particular need to know).
Groups & Sharedlets
An Anabas Sharedlet is a shared application, e.g. text chat, VoIP, video conferencing, shared applications, whiteboard.
GroupManager provides preliminary group information to each Sharedlet, including joined sessions, the active session, session participants, and participant privileges (e.g. host, presenter) in each session.
Each Sharedlet has its own specific method of handling groups. For example:
The Text Sharedlet stores all conversations in every session.
The Video Sharedlet displays the videos in the active session only.
The Audio Sharedlet plays the audio in the active session only.
The Shared Display Sharedlet may store data in every session or in the active session only.
The Sharedlet-specific method depends on network bandwidth requirements (e.g. is the network bandwidth sufficient?) and usage differences (e.g. can past data be disposed of? Who can share information?).
HSD – Hybrid Shared Display
HSD builds on a combination of Classic Shared Display (CSD) and Video Shared Display (VSD).
Problem: video sharing using a lossless encoding scheme consumes very high network bandwidth.
Motivation of HSD: find the video or fast-changing regions in the shared application and encode them using a video codec, e.g. H.261 or MPEG-4, to save network bandwidth while retaining good visual quality.
Illustration: Hybrid Shared Display sharing a browser window with a fast-changing region.
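The "region finding" step can be sketched by differencing successive screen captures: blocks that change a lot go to the video codec, while the static remainder uses ordinary shared-display encoding. This is a hedged illustration of the idea; the block size and threshold are invented parameters, not Anabas's:

```python
# Find fast-changing blocks of the screen by frame differencing.
# Frames are lists of rows of pixel intensity ints.

def changed_blocks(prev, curr, block=2, threshold=0):
    """Return (row, col) of each block whose total pixel change exceeds threshold."""
    hot = []
    for r in range(0, len(prev), block):
        for c in range(0, len(prev[0]), block):
            diff = sum(abs(curr[i][j] - prev[i][j])
                       for i in range(r, min(r + block, len(prev)))
                       for j in range(c, min(c + block, len(prev[0]))))
            if diff > threshold:
                hot.append((r, c))
    return hot

prev = [[0, 0, 0, 0] for _ in range(4)]
curr = [[0, 0, 9, 9],
        [0, 0, 9, 9],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
video_regions = changed_blocks(prev, curr)  # only the top-right block changed
```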
HSD Flow
[Diagram: on the presenter side, screen capturing and region finding split the screen into a VSD path (video encoding, RTP transmission) and a CSD path (SD screen-data encoding, TCP transmission), both through NaradaBrokering; on the participant side, video decoding (H.261) and SD screen-data decoding feed rendering and screen display.]
GlobalMMCS Web Service Architecture
[Diagram: SIP, H.323, Access Grid, native XGSP and Admire clients enter through gateways that convert to uniform XGSP messaging. NaradaBrokering carries all messaging — high-performance RTP as well as XML/SOAP — linking the media servers/filters and an XGSP-based session (control) server. Multiple media servers scale to many codecs and many versions of audio/video mixing; NaradaBrokering scales as distributed Web Services.]
Global-MMCS Community Grid
This includes an open-source, protocol-independent Web Service “MCU” which will scale to an arbitrary number of users and provides support for thousands of simultaneous users of collaboration services.
The function of the A/V media server is distributed using the NaradaBrokering architecture.
• Media servers mix and convert A/V streams
The open XGSP MCU is based on the following open-source projects:
• openh323 is the basis of the H.323 gateway
• The NIST SIP stack is the basis of the SIP gateway
• NaradaBrokering is the open-source messaging
• The Java Media Framework is the basis of the media servers
• The Helix Community (http://www.helixcommunity.org) for Real Media
Open source release at http://www.globalmmcs.org
Break Up into “Services”
The monolithic MCU becomes many different “Simple Services”:
• Session control
• Thumbnail “image” grabber
• Audio mixer
• Video mixer
• Codec conversion
• Helix Real streaming
• PDA conversion
• H.323/SIP session/signaling gateways
As they are independent, one can replicate particular services as needed.
• Codec conversion might require 20 services for 20 streams, spread over 5 machines
1000 simultaneous users could require:
• 1 session controller, 1 audio mixer, 10 video mixers, 20 codec converters, 2 PDA converters and 20 NaradaBrokers
Support with a stream-optimized Grid farm in the sky.
• A future billion-way “Video over IP” serving 3G phones and home media centers/TVs could require a lot of computing
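The 1000-user estimate above is just independent replication arithmetic: each simple service scales by its own ratio of users to instances. The per-instance capacities below are back-derived from the slide's numbers, so treat them as illustrative:

```python
# Provision each independent service as ceil(users / capacity per instance).
import math

USERS_PER = {                    # users one instance handles (back-derived)
    "session controller": 1000,
    "audio mixer": 1000,
    "video mixer": 100,
    "codec converter": 50,
    "PDA converter": 500,
    "NaradaBroker": 50,
}

def provision(users: int) -> dict:
    """Independent services replicate independently."""
    return {svc: math.ceil(users / cap) for svc, cap in USERS_PER.items()}

plan = provision(1000)
```

This independence is the payoff of breaking the monolithic MCU into simple services: the video mixers can be scaled out without touching the session controller.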
Collaboration Grid
[Diagram: NaradaBrokers link gateways with services including UDDI, HPSearch, WS-Context, WS-Security, the XGSP Media Service (video mixer, audio mixer, transcoder), replay/record/annotate/thumbnail services, whiteboard, shared display, and shared Web Services.]
GlobalMMCS and NaradaBrokering
All communication – both control and “binary” codecs – is handled by NaradaBrokering.
Control uses SOAP; codecs use RTP transport.
Each stream is regarded as a “topic” for NaradaBrokering, and each RTP packet from the stream is regarded as an “event” on that topic.
One can use the replay and persistency support in NaradaBrokering to support archiving and late clients.
One can build customized stream management to administer replay, and who gets what stream in what codec.
NaradaBrokering supports unicast and multicast.
Use the firewall penetration and network monitoring services in NaradaBrokering to improve QoS.
XGSP: XML-Based General Session Protocol
The XGSP conference control includes three services:
Conference management supports user sign-in; create/terminate/join/leave/invite-into operations on XGSP conferences; and a conference calendar service.
Application session management provides users with the service for creating and terminating application sessions, and manages session-related services such as audio/video mixing.
Floor control manages access to shared collaboration resources in different application sessions. For example, in a large-scale meeting with thousands of people, only a limited number are allowed to become presenters, so that only they can send audio/video.
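The floor-control idea reduces to admission control on a shared resource. A minimal sketch, with the class and policy invented for illustration (XGSP's actual protocol is richer):

```python
# Only holders of the "floor" may send audio/video in a large session.

class FloorControl:
    def __init__(self, max_presenters: int):
        self.max_presenters = max_presenters
        self.presenters = set()

    def request_floor(self, user: str) -> bool:
        """Grant presenter rights only while slots remain."""
        if len(self.presenters) < self.max_presenters:
            self.presenters.add(user)
            return True
        return False

    def release_floor(self, user: str) -> None:
        self.presenters.discard(user)

    def may_send_media(self, user: str) -> bool:
        return user in self.presenters

floor = FloorControl(max_presenters=2)
granted = [floor.request_floor(u) for u in ("alice", "bob", "carol")]
```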
Improved Java Media Framework Performance
[Plots: CPU percentage used versus number of streams rendered (1-8) for AG VIC, Sun JMF and Fast JMF; left panel a still desktop, right panel a movie sequence.]
Real-Time Annotation and Replay
[Screenshot: the eSports client shows buttons to open and close an eSports session; a session list with session ID and description; two GlobalMMCS video windows, each with a snapshot button; a whiteboard (snapshot annotation tool) area; session and stream information areas; a timeline; and controls for starting and stopping replay sessions and streams.]
Sensor and GIS Grids
See also the PhD thesis http://grids.ucs.indiana.edu/ptliupages/publications/GalipAydin-Thesis.pdf and defense http://grids.ucs.indiana.edu/ptliupages/presentations/galip-aydin-defense.ppt
Paper: http://grids.ucs.indiana.edu/ptliupages/publications/PEPIRealTimeGISAydin_YB.pdf
Separate talk by Marlon Pierce.
The Grid and Web Service Institutional Hierarchy
[Diagram, bottom to top:
1: Container and run-time (hosting) environment (Apache Axis, .NET etc.)
2: System services and features (WS-* from OASIS/W3C/industry), e.g. handlers like WS-RM, Security, the UDDI registry
3: Generally useful services and features (OGSA, GS-* and some WS-* from GGF/W3C, plus XGSP for collaboration), such as “Collaborate”, “Access a Database” or “Submit a Job”
4: Application or Community of Interest (CoI) specific services such as “Map Services”, “Run BLAST” or “Simulate a Missile”, with domain XML languages such as XBML, XTCE, VOTABLE, CML, CellML
Standards must be set at each level to get interoperability.]
The Ten Areas Covered by the 60 Core WS-* Specifications
WS-* Specification Area – Examples
1: Core Service Model – XML, WSDL, SOAP
2: Service Internet – WS-Addressing, WS-MessageDelivery; Reliable Messaging (WS-RM); Efficient Messaging (MTOM)
3: Notification – WS-Notification, WS-Eventing (publish-subscribe)
4: Workflow and Transactions – BPEL, WS-Choreography, WS-Coordination
5: Security – WS-Security, WS-Trust, WS-Federation, SAML, WS-SecureConversation
6: Service Discovery – UDDI, WS-Discovery
7: System Metadata and State – WSRF, WS-MetadataExchange, WS-Context
8: Management – WSDM, WS-Management, WS-Transfer
9: Policy and Agreements – WS-Policy, WS-Agreement
10: Portals and User Interfaces – WSRP (Remote Portlets)
Activities in Global Grid Forum Working Groups
GGF Area – GS-* and OGSA Standards Activities
1: Architecture – High-level resource/service naming (level 2 of slide 6), integrated Grid architecture
2: Applications – Software interfaces to the Grid, Grid remote procedure call, checkpointing and recovery, interoperability of job submittal services, information retrieval
3: Compute – Job submission, basic execution services, service-level agreements for resource use and reservation, distributed scheduling
4: Data – Database and file Grid access, GridFTP, storage management, data replication, binary data specification and interface, high-level publish/subscribe, transaction management
5: Infrastructure – Network measurements, role of IPv6 and high-performance networking, data transport
6: Management – Resource/service configuration, deployment and lifetime, usage records and access, Grid economy model
7: Security – Authorization, P2P and firewall issues, trusted computing
Net-Centric Core Enterprise Services
Core Enterprise Service – Service Functionality
NCES1: Enterprise Services Management (ESM) – Including life-cycle management
NCES2: Information Assurance (IA)/Security – Supports confidentiality, integrity and availability; implies reliability and autonomic features
NCES3: Messaging – Synchronous or asynchronous cases
NCES4: Discovery – Searching data and services
NCES5: Mediation – Includes translation, aggregation, integration, correlation, fusion, brokering, publication, and other transformations for services and data; possibly agents
NCES6: Collaboration – Provision and control of sharing, with emphasis on synchronous real-time services
NCES7: User Assistance – Includes automated and manual methods of optimizing the user GiG experience (user agent)
NCES8: Storage – Retention, organization and disposition of all forms of data
NCES9: Application – Provisioning, operations and maintenance of applications
Produce the Needed Core Services
• We can classify services in many ways, and the following two charts are one way; slightly changed from the proposal as NCOW and our work changed a little
• Green is “in hand”; we know a lot
• Orange is “in hand” with outside but available solutions
• Red has problems – security does not have industry consensus, while current scheduling work does not address DoD real-time service and network requirements
The Core Features/Service Areas I
Service or Feature – WS-* – GS-* – NCES (DoD) – Comments
A: Broad Principles
FS1: Use SOA (Service-Oriented Architecture) – WS1 – Core service architecture; build Grids on Web Services; industry best practice
FS2: Grid of Grids – Distinctive strategy for legacy subsystems and modular architecture
B: Core Services
FS3: Service Internet, Messaging – WS2 – NCES3 – Streams/Sensors. Team
FS4: Notification – WS3 – NCES3 – JMS, MQSeries
FS5: Workflow – WS4 – NCES5 – Grid programming
FS6: Security – WS5 – GS7 – NCES2 – Grid-Shib, Permis, Liberty Alliance ...
FS7: Discovery – WS6 – NCES4 – UDDI
FS8: System Metadata & State – WS7 – Globus MDS, Semantic Grid, WS-Context
FS9: Management – WS8 – GS6 – NCES1 – CIM
FS10: Policy – WS9 – ECS
The Core Feature/Service Areas II
Service or Feature – WS-* – GS-* – NCES – Comments
B: Core Services (continued)
FS11: Portals and User Assistance – WS10 – NCES7 – Portlets (JSR 168), NCES capability interfaces
FS12: Computing – GS3
FS13: Data and Storage – GS4 – NCES8 – NCOW Data Strategy; federation at the data/information layer is a major research area with CGL in a leading role
FS14: Information – GS4 – JBI for DoD, WFS for OGC
FS15: Applications and User Services – GS2 – NCES9 – Standalone services, proxies for jobs
FS16: Resources and Infrastructure – GS5 – Ad-hoc networks
FS17: Collaboration and Virtual Organizations – GS7 – NCES6 – XGSP, shared Web Service ports
FS18: Scheduling and Matching of Services and Resources – GS3 – Current work only addresses scheduling “batch jobs”; need networks and services
Some Conclusions I
One can map nearly all NCOW/NCES and GiG core capabilities into the Web Service (WS-*) and Grid (GS-*) architecture and core services.
• The analysis of Grids in the NCOW/NCES document is inaccurate (it confuses Grids with Globus and only considers early activities)
There are some “mismatches” on both the NCOW and Grid sides:
GS-*/WS-* do not have collaboration and miss some messaging.
NCOW does not have, at the core level, system metadata or resource/service scheduling and matching.
Higher-level services of importance include GIS (Geographical Information Systems), sensors and data mining.
Some Conclusions II
Criticisms of Web services in a paper by Birman seem to be addressed by Grids, or reflect the immaturity of initial technology implementations.
NCOW/NCES does not seem to have any analysis of how to build their systems on WS-*/GS-* technologies in a layered fashion; they do have a layered service architecture, so this can be done.
• They agree with service-oriented architecture
• They seem to have no process for agreeing to WS-*/GS-* or setting other standards for CES
Grid of Grids allows modular architectures and natural treatment of legacy systems.
• Note that Grids, Services and Handlers are all “just” entities with distributed, message-based input and output interfaces
Additional Services
• Sensors have low-level support listed as FS3; higher-level integration using SensorML and filters is well understood. Some work in Phase I
• GIS Grid services were pioneered by the team and already shown in Phase I
• Mediation (interoperability) services are needed to link Grids (defined as a collection of ≥ 1 services)
– Need to generalize existing solutions for Sensor Grids and for MQSeries-SOAP mediation
– View NaradaBrokering as a SOAP intermediary
Out of Scope for Phase II
• Many areas are still evolving significantly:
– Mediation/interoperation
– Security
– Scheduling of non-compute resources
– Data/information federation
– Semantic Grid and management
• We will not test scalability on large numbers of services, sensors and component Grids
• Integrating legacy systems is not addressed
• The Grid-of-Grids building tool is a “new idea” – we can expect it will benefit from further work