
[IEEE 2012 IEEE 17th Conference on Emerging Technologies & Factory Automation (ETFA 2012) - Krakow, Poland (2012.09.17-2012.09.21)] Proceedings of 2012 IEEE 17th International Conference



Interoperability in large scale cyber-physical systems

Jesus Bermejo Muñoz, Telvent

[email protected]

Terje Grimstad, Karde

[email protected]

Diego R. López, Telefonica I+D, [email protected]

S.G. Galán, L.R. López, R.P. Prado and J.E. Muñoz, University of Jaén

{sgalan,lrlopez,rperez,jemunoz}@ujaen.es

Abstract—While the capability for growing on demand is the design foundation of the new generation of software platforms, physical systems always have limited resources. Therefore, there is a need to optimise the existing infrastructures and to evolve them building on interoperability. The optimisation of the infrastructure entails addressing not only the computational and storage capabilities but also the network, its associated intelligence and the exchanged information. Therefore, the interoperability requirement in a large scale cyber-physical system spans from the lowest layers, interfacing with the physical resources, to the software building blocks, client devices and handled data. In this paper, several initiatives addressing interoperability among cloud IaaS layers, IaaS-PaaS, PaaS-SaaS, Cloud-Network and data are presented.

Index Terms—Cloud computing; OCCI; OSGi; Interoperability

I. INTRODUCTION. INCREASING SOFTWARE AND DATA RELEVANCE. CLOUD COMPUTING.

Architecturally, software can generally be described as belonging to one of three basic categories: i) applications, ii) operating systems, or iii) middleware. Traditionally, these categories also define the different business layers of software players in the industry. A particular category of software is embedded software. This category usually has special requirements, such as real-time constraints and a strong dependency on the hardware. Due to the links with the hardware product, the development of this type of software is carried out in organisations that are not usually identified as software industries. However, more and more, products depend on electronics and software to implement the many new functions we demand.

Currently there is an explosion in software, independently of the categories above, combined with the deployment of communication technologies that are linking previously isolated products. Cloud computing emerges as a promising technology allowing software to grow on demand, leading to a new paradigm in which computing is frequently compared with a utility, like electricity, natural gas or water. Many of the new business models are similar to those of utilities, in which the cost depends on the use. This has significant advantages for users, who face a low entry cost for complex infrastructures and applications. The

recognition of the potential of the new approach is leading to a fast transformation of the traditional software layers: operating system, middleware and applications. In the context of cloud computing these are frequently referred to as Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS) [1].

The transition of the software mainstream from a product to a service has many other implications. While user requirements lead the development in both cases, delivering software as a product entails losing, or substantially reducing, the relationship with the user. In a service model the interaction is continuous. Thus, the information derived from the user is becoming the key driver of the software requirements when software is delivered as a service. This, complemented with the amount of publicly available data, is pushing the fast growth of “Big Data” technologies.

The platform resulting from this increasing software growth and networking cannot be developed by one single organisation. Therefore, interoperability and reuse, for lowering development cost, are becoming key issues for targeting future software systems.

The rest of the paper is organized as follows. Section 2 introduces the intercloud concept and the interoperability across IaaS layers of different clouds. In Section 3, two IaaS-PaaS interoperability approaches are proposed. In Section 4, PaaS-SaaS interoperability is analyzed. Section 5 introduces emerging paradigms of interoperability between Cloud and Network. In Section 6, an experience on data reuse from the public sector is presented as a representative approach for achieving interoperability in the data layer. Finally, Section 7 outlines the main conclusions.

II. INTEROPERABILITY ACROSS IAAS LAYERS: THE INTERCLOUD CONCEPT (MEGHA INITIATIVE)

While the capability for growing on demand is the design foundation of the new generation of software platforms, physical systems always have limited resources. The Intercloud is an interconnected global "cloud of clouds" and an extension of the Internet "network of networks" on which it is based. Megha (Cloud in Sanskrit) [2] is one of the few initiatives of this nature currently running in the world as a platform for

978-1-4673-4737-2/12/$31.00 ©2012 IEEE


innovation and evaluation. Its objective is to coordinate and promote innovation in cloud technologies, focussing on IaaS interoperability based on open standards such as the Open Cloud Computing Interface (OCCI) [3], the Cloud Data Management Interface (CDMI) [4] and the Open Virtualization Format (OVF) [5]. It also aims to act as a catalyst for the application of open and interoperable cloud technologies in other areas in which it can influence, such as business and government. Since its start, the Megha group has established direct links with initiatives such as e-Science, CRUE-TIC and international projects in academia and research environments such as TERENA (TF-MSP, TF-Storage, TF-EMC2), GÉANT, EGI and OGF.

Initially, three geographically distributed service centres (CESCA, PIC and CESGA) with public cloud capabilities were networked, and other R&D groups will also join in the future. The first conceptual pilot, federating cloud resources, has validated the approach built on interoperability in connectivity and dynamic management of virtual resources (network, machines and disk images).

III. IAAS-PAAS INTEROPERABILITY

OCCI [3] is a standardized and extensible interface that allows managing cloud resources regardless of the actual implementation of the actions. OCCI offers the mechanisms for deploying a cloud system as IaaS, among others. In particular, OCCI is designed to be usable alongside a native management interface. On the other hand, OSGi (Open Services Gateway initiative) [6] is a framework that allows defining services and applications in a modular way using code packages known as “bundles”. Bundles can be managed and deployed remotely and dynamically. Our approach intends to define OSGi services in a way that allows managing cloud resources from other OSGi bundles. For this, we could use the iPOJO component model [7], which offers a higher level of abstraction, including service configuration, synchronization and composition. The required management actions will be executed using OCCI interface calls. The following sections highlight the most prominent features of each involved technology according to our goals.

A. OCCI (Open Cloud Computing Interface)

OCCI is the outcome of one of the first efforts in creating standard APIs in the cloud scope [3]. It allows managing cloud resources in a way that is implementation-independent. Nowadays, only the IaaS functionality is specified, but its foundation is flexible enough that extra functionality (e.g. PaaS) can be easily included. OCCI is an open specification that is community-managed through the Open Grid Forum. It uses entities as an abstract representation of the different cloud resources. Each of those entities (i.e. network, storage, VM templates, etc.) has properties and related actions, and can be linked to other entities. Moreover, properties and actions can be dynamically extended through an extension mechanism. This allows introducing additional detail on the entities, which may be related to the actual implementation of the resource. The abstraction model is so generic that any application

using OCCI can manage resources of unknown origin with no additional effort; this is supported by a category-based discovery mechanism.

1) Functionality/Scope: OCCI uses a modular approach. In fact, this modularity is key for covering the very heterogeneous possibilities that cloud computing offers. As stated before, the current specification only describes IaaS support, but OCCI’s design allows it to be extended with ease. Extensions can specify new kinds of resources or complement existing ones through the dynamic addition of properties and actions (mixins). As an example, PaaS management extensions could be integrated into OCCI. Because of this, discovery services are mandatory, as they allow clients to browse new capabilities and features, even if they were not previously known.

2) Architecture: The current OCCI specification is split into three parts: “OCCI Core” [8], “OCCI Infrastructure” [9] and “OCCI RESTful HTTP Rendering” [10]. “OCCI Core” is the foundation, and the other two are built upon it. This part defines the “Core model”: a base abstraction model of resources, a method for defining actions over them and the extension mechanisms. Furthermore, it describes the “Category” concept as a grouping feature that allows browsing and discovering any kind of resource. Extensions are new kinds of resources and additions to existing ones. The only architectural constraint is that a path must exist between every extension and the “core”. In fact, the other two parts defined by the specification are implemented through extensions. “OCCI Infrastructure” adds support for typical IaaS elements to the “Core model”, like computing, storage or networking resources. The last part of the specification, “OCCI RESTful HTTP Rendering”, offers an API and a serialization mechanism based on a RESTful model. This means that standard HTTP requests are used for managing resources in an intuitive way.
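As a minimal sketch of the text rendering used by the RESTful HTTP rendering, the following Java snippet builds the `Category` header value that identifies a resource kind; the scheme URI is the one defined by OCCI Infrastructure, while the class and method names are our own illustration, not part of the specification.

```java
// Minimal sketch (not from the paper): building the plain-text rendering of
// an OCCI Category, as carried in HTTP request/response headers.
class OcciCategory {
    final String term;
    final String scheme;
    final String occiClass; // "kind", "mixin" or "action"

    OcciCategory(String term, String scheme, String occiClass) {
        this.term = term;
        this.scheme = scheme;
        this.occiClass = occiClass;
    }

    /** Renders the Category in the text format of the HTTP rendering. */
    String render() {
        return String.format("%s; scheme=\"%s\"; class=\"%s\"",
                term, scheme, occiClass);
    }

    public static void main(String[] args) {
        OcciCategory compute = new OcciCategory(
                "compute", "http://schemas.ogf.org/occi/infrastructure#", "kind");
        // prints: Category: compute; scheme="http://schemas.ogf.org/occi/infrastructure#"; class="kind"
        System.out.println("Category: " + compute.render());
    }
}
```

A client would send such a header in a POST request to instantiate a compute resource, which is what makes the discovery and management interface uniform across providers.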

3) Interoperability features: OCCI abstracts resource management from the actual implementation. This allows offering the same service and programming interface regardless of the particular cloud computing technology lying underneath. This abstraction can exist at almost any layer due to the extension support of the “Core model”. Interoperability between IaaS and PaaS using OCCI can be based on this extension mechanism: a client interacting with PaaS functionality offered by an extension could transparently work with another OCCI implementation that also includes that particular extension. This way, our problem is focused on defining a generic “OCCI Platform” specification that adds support for managing PaaS resources. As a result, these resources could be easily built over any OCCI-compliant IaaS.
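The mixin-style dynamic extension mentioned above can be sketched as follows; this is a hedged illustration in plain Java (the names and the attribute key are assumptions, not text from the OCCI specification), showing how attributes can be attached to an entity at runtime without changing its class.

```java
// Illustrative sketch of mixin-style dynamic extension: a "mixin" is a set
// of attributes contributed to an entity after creation.
class ExtensibleEntity {
    private final java.util.Map<String, String> attributes =
            new java.util.HashMap<>();

    /** Applying a "mixin" contributes extra attributes dynamically. */
    void applyMixin(java.util.Map<String, String> mixinAttributes) {
        attributes.putAll(mixinAttributes);
    }

    String get(String name) { return attributes.get(name); }
}
```

In a real OCCI implementation, a discovery request would list the mixins a server supports, so a client only applies extensions both sides understand.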

B. RFP-133 Cloud Computing

RFP-133 [11] discusses the applicability of OSGi standard services in a Cloud Computing environment; it proposes new potential features to be added and details different use cases. For each section, the document details the reasoning behind every choice and gives bibliographic references to other papers that support them.


1) Functionality/Scope: OSGi is a framework that allows deploying complex services and applications through the aggregation and interaction of simpler but independent modules (bundles). Each bundle uses functionality exposed by other bundles to offer its own, and enjoys a life-cycle that allows it to be run, stopped or exchanged dynamically without affecting the behaviour of the applications which depend on it. If we extrapolate OSGi features to the cloud, the framework is equivalent to a PaaS offering, as it is a platform where applications and/or services are dynamically deployed. RFP-133 highlights that OSGi modularity allows solving some of the issues frequently found in elastic application development, i.e. variable resource allocation depending on the actual load.
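The bundle life-cycle described above can be modelled as a small state machine. The sketch below is deliberately simplified and self-contained (it does not use the real `org.osgi.framework` API); only the state names mirror the OSGi specification.

```java
// Illustrative sketch only: a simplified model of the OSGi bundle
// life-cycle, without the real org.osgi.framework API.
class BundleLifecycle {
    enum State { INSTALLED, RESOLVED, ACTIVE }

    private State state = State.INSTALLED;

    /** Resolution wires the bundle's dependencies. */
    void resolve() {
        if (state == State.INSTALLED) state = State.RESOLVED;
    }

    /** A bundle can only be started once its dependencies are resolved. */
    void start() {
        if (state != State.RESOLVED)
            throw new IllegalStateException("cannot start from " + state);
        state = State.ACTIVE;
    }

    /** Stopping returns the bundle to RESOLVED; it can be started again. */
    void stop() {
        if (state == State.ACTIVE) state = State.RESOLVED;
    }

    State state() { return state; }
}
```

It is exactly this stop/start/exchange cycle, performed without restarting the applications that depend on the bundle, that makes the elastic deployments discussed by RFP-133 possible.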

2) Interoperability features: The RFP-133 proposal consists of deploying the applications over one or several OSGi platforms hosted in the cloud. In this way, the different components and their associated resources could be automatically located and deployed on the VM where they are needed, dynamically creating or destroying instances of them according to the application load. As the OSGi architecture allows abstracting from the particular operating system and hardware running underneath, interoperability of any OSGi application/service on the cloud will be guaranteed once the technical development work is finished. Generic concepts in the bundle will be translated into particular cloud management calls through the same API implemented by OSGi.

C. Different approaches of interoperability

Once both OCCI and OSGi technologies have been introduced, we propose two different approaches to interoperability between IaaS and PaaS. The first one intends to set up a direct link between OCCI entities and an internal representation inside an OSGi bundle. A Java object instance will be associated with each OCCI entity, so managing those entities will be second nature in an OSGi environment. The second approach suggests implementing in OSGi the different cloud elements and their relations as described in RFP-133. The actual implementation of those elements would be done through an OSGi service that translates method calls to a cloud management API like OCCI. Both approaches are described below:

1) Thin approach: In the light of the modular character of the OCCI specifications, each of its parts can be implemented as independent OSGi bundles that interact with each other. The critical component is the “OCCI Core”, since it describes the interaction of the seven classes (Category, Kind, Action, Mixin, Entity, Resource and Link) on which the OCCI abstraction is built. Essentially, the approach consists of implementing them as Java classes and developing the methods for their handling. See Figure 1. Other parts will use the interfaces and classes offered by the “OCCI Core” bundle to derive new classes, such as Compute, Network or Storage in the case of the “OCCI Infrastructure”. Different bundles can extend these classes to specify lower level details such as the current storage provider.
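A minimal sketch of this 1:1 mapping is shown below. It is an illustration under our own naming assumptions, not the paper's actual implementation: a few of the Core classes are mapped to Java classes, and an "OCCI Infrastructure" type (Compute) is derived from them.

```java
// Illustrative sketch of the thin approach: OCCI Core classes mapped 1:1
// to Java classes, with Infrastructure types derived from them.
import java.util.ArrayList;
import java.util.List;

class Kind {                      // OCCI Kind: the type of an entity
    final String term;
    Kind(String term) { this.term = term; }
}

abstract class Entity {           // OCCI Entity: common base class
    final String id;
    final Kind kind;
    Entity(String id, Kind kind) { this.id = id; this.kind = kind; }
}

class Resource extends Entity {   // OCCI Resource: a linkable entity
    final List<Link> links = new ArrayList<>();
    Resource(String id, Kind kind) { super(id, kind); }
}

class Link extends Entity {       // OCCI Link: connects two resources
    final Resource source, target;
    Link(String id, Kind kind, Resource src, Resource tgt) {
        super(id, kind);
        this.source = src;
        this.target = tgt;
        src.links.add(this);      // register the link on its source
    }
}

// An "OCCI Infrastructure" bundle would derive concrete resource types:
class Compute extends Resource {
    int cores;
    Compute(String id, int cores) {
        super(id, new Kind("compute"));
        this.cores = cores;
    }
}
```

Each block of classes (Core, Infrastructure, provider-specific extensions) would live in its own OSGi bundle, which is what the figure's layering depicts.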


Figure 1. Thin Approach Architecture.

The major disadvantage of this approach is twofold. On the one hand, the implementation of a service that mimics the OCCI conventions prompts the usage of that interface to interact with computational resources; this hinders interoperability with other IaaS implementations which do not communicate in OCCI. A second drawback, and probably the most important one, is the fact that this approach does not follow the OSGi philosophy of using services already standardized by the OSGi Alliance. For instance, although OCCI assigns properties to the entities, and they can be dynamically extended with mixins, it does not use the same concept as the OSGi ConfigAdmin service, which would be the logical option to manage properties in an OSGi environment. Similarly, the link among the different entities is done through Link objects, although it could be more flexible and natural to use the OSGi WireAdmin service. This is a necessary price to obtain a 1:1 correspondence with the OCCI model.

2) Thick approach: The second approach consists of making the abstraction of the cloud resources independent of the management API. This gives more freedom to implement the model. The RFP-133 document details a decomposition of the elements which can offer cloud services when related. Thus, a hierarchical structure is distinguished, with the “Cloud” as origin, which is made up of “Resource Pools”, which in turn contain “Systems”. A “System” is an entity that will eventually be deployed in the cloud. It is made up of interconnected elements that provide and demand services/packages. A “System” can be described by the number of instances required of each element, in a way that a particular number (i.e., minimum) is established. When deploying the “System”, the proposal will download, if necessary, the required elements. Working with these entities will be reflected on the IaaS resources, as seen in Figure 2.
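The Cloud → Resource Pool → System hierarchy just described can be sketched as follows. This is an illustration only; the class names and the minimum-instances field are our assumptions, not definitions taken from RFP-133.

```java
// Illustrative sketch of the thick-approach hierarchy:
// a Cloud contains Resource Pools, which contain deployable Systems.
import java.util.ArrayList;
import java.util.List;

class Cloud {
    final List<ResourcePool> pools = new ArrayList<>();
}

class ResourcePool {
    final List<SystemEntity> systems = new ArrayList<>();
}

/** A deployable set of interconnected elements ("System" in RFP-133). */
class SystemEntity {
    final String name;
    final int minInstances;   // minimum number of instances required

    SystemEntity(String name, int minInstances) {
        this.name = name;
        this.minInstances = minInstances;
    }

    /**
     * Deploying would translate into calls on the underlying IaaS
     * management API (e.g. OCCI); here it only reports how many
     * instances would be started.
     */
    int deploy() {
        return minInstances;
    }
}
```

The translation layer between this model and the IaaS interface is exactly where the complexity discussed below concentrates.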

This abstraction fits very well with the OSGi philosophy because it defines the deployment of applications in the cloud as a smaller set of correlated elements, with dependencies among them, that offer and demand services. In addition, many of the services of the OSGi standard can be used to make such deployments, as well as to configure (ConfigAdmin), init/cancel (ApplicationAdmin) or connect (WireAdmin) the



Figure 2. Thick Approach Architecture.

elements. This results in a much more natural fit in OSGi environments.

The main disadvantage is that it is necessary to make a translation between the resource model in the RFP-133 document and the underlying IaaS management interface invocations, which can be a complex problem if such models significantly differ in their philosophy or scope.

IV. PAAS-SAAS INTEROPERABILITY (OSAMI-COMMONS)

A. Platform and middleware

A computing platform is a broad concept including the hardware infrastructure and the software required for an application to run. If we define a platform from the application perspective, the middleware is the part of it between the operating system and the application. In the context of Cloud Computing the platform is offered as a service and we refer to it as PaaS. Therefore, the middleware is an essential part of the software platform and an enabler for lowering software development effort.

In a context of globally increasing software needs, the strategies for addressing the middleware layers have depended on the constraints imposed by the hardware. While server-based middleware has grown significantly, incorporating new requirements, embedded systems organisations initiated product line technologies late in the nineties.

B. Integrating main streams - OSAmI-Commons project

Although the strategies to address the increasing software needs in the enterprise domain and in embedded systems were originally different, they are currently converging. Moreover, the interoperability requirements emerging from the networking of servers and devices seem to be accelerating this process. The OSAmI-Commons R&D project was initiated with this vision. The new environment is an enabler for a new concept of global and transversal platform that can exploit the potential of networking through a cross-industry component-oriented platform enabling the creation of intelligent service solutions.

The consortium involved in the project shared the vision of this platform emerging from a community process that will

Figure 3. Open services ecosystem.

lead to the evolution of the platform. The initiative consisted of a number of networked national sub-projects that use the same architecture principles to build solutions for different industry segments that can still be used in a networked cross-industry environment; see Figure 3. A computing node provided with the services required for the dynamic provisioning of deployable entities was developed, and a set of demonstrators in different domains validated the approach for achieving fine-grain interoperability, both in the middleware and application layers, right from the design phase. However, while applications are frequently perceived in terms of domains, the approach for allowing a broad range of personalisation has to be architecture-centric. Therefore, the extension of the achievements is needed for linking the application and architecture domains.

In the context of Cloud Computing the platform is offered as a service and we refer to it as PaaS. Similarly, the applications can be referred to as SaaS when they are offered as a service, either in a “pay per use” model or for free.

V. INTEROPERABILITY BETWEEN THE CLOUD AND THE NETWORK. EMERGING PARADIGMS. (CCCSO.NET)

A basic element for any cloud infrastructure is the network fabric on which it relies. Clouds are in essence a set of virtualized components accessed through a communication network, and network virtualization becomes the natural next step in the attempt to achieve a fully interoperable virtual computing service.

First steps towards network virtualization were taken by building abstract models of the network components: links, routers, firewalls, load balancers, etc. The strategy was to provide a common interface, so applications were able to interact with and control network properties accordingly. The problem with the approach was that the abstract models were based on the same concept of network that has been in use since the first computer-to-computer connection: the network as a set of autonomous computing elements, loosely coupled by


links connecting them, commuting data according to their own internal state. Users could not deal with the network as a single element but had to be aware of the individual elements, their connections, and their individual state, taking into account that they did not follow uniform patterns for organizing themselves. Any abstraction at that level required a deep knowledge of the network's internal structure, so it was not a network abstraction.

Identifying the need for separating the mechanisms for controlling the network (the “control plane”) from the mechanisms performing data routing and forwarding (the “data plane”) is the first necessary step towards a network abstract model. Control protocols are defined so individual network components can coordinate their behaviour. But this is not sufficient; any attempt to make an effective control of the network requires controlling every individual component.

In recent years, a new paradigm in network management has emerged, fully decoupling the control and data planes. The control decisions are taken by a central element (the controller) while the switching decisions are actually applied by distributed elements (the switches). A common protocol allows the controller to communicate its decisions to the switches. Having this central element allows for abstracting the network into a single element, as it becomes the one in charge of the whole network behaviour. Furthermore, the common protocol acts in a similar way to a processor instruction set controlling its registers, processing units and peripherals. Therefore, the network becomes a programmable entity, suitable to be controlled in the same way as any other element in the computing infrastructure. This is what is currently known as Software Defined Networking (SDN), with the OpenFlow [12] standard as its flagship.
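The controller/switch split can be sketched as follows. This is a deliberately simplified illustration (a single match field and a forward-to-port action), not the OpenFlow wire protocol: the controller installs match-action rules, and the switch then forwards on its own, falling back to the controller when no rule matches.

```java
// Illustrative sketch of the SDN control/data plane split: the controller
// installs match-action rules; the switch applies them autonomously.
import java.util.ArrayList;
import java.util.List;

class FlowRule {
    final String matchDstIp;   // simplified match field
    final int outPort;         // simplified action: forward to this port
    FlowRule(String matchDstIp, int outPort) {
        this.matchDstIp = matchDstIp;
        this.outPort = outPort;
    }
}

class Switch {
    private final List<FlowRule> table = new ArrayList<>();

    /** Called by the controller over the common protocol. */
    void installRule(FlowRule r) { table.add(r); }

    /** Data plane: forward by first matching rule; -1 means "ask controller". */
    int forward(String dstIp) {
        for (FlowRule r : table)
            if (r.matchDstIp.equals(dstIp)) return r.outPort;
        return -1;
    }
}
```

The programmability discussed in the text comes precisely from the fact that the rule table, not the switch firmware, encodes the network's behaviour.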

SDN is the cornerstone for advancing network virtualization capabilities to a point similar to other virtualisation technologies. SDN is in the process of becoming a mainstream technology. There are many demonstrations of the possibilities it offers, and its supporters insist that we have only scratched the surface of the deep change it will bring to network planning and management. Just think of the possibilities of applying to network design and operation the same tools and techniques available for software development: formal design methodologies, high-level languages, compilers and interpreters, debuggers, profiling techniques, etc.

With the support of SDN, the network becomes another computing resource, similar to the others and suitable for being integrated as part of a service offer controlled by the application programmers according to their requirements.

VI. INFORMATION INTEROPERABILITY. EXPERIENCE FROM THE PUBLIC SECTOR. (SEMICOLON)

Exploiting large data sets is frequently a key objective in large scale cyber-physical systems. Therefore, data reuse strategies are of primary importance. The public sector may be regarded as an information factory. It captures information from a variety of sources, it takes decisions based on information, it produces new information, and it publishes information for the benefit

of citizens and businesses. Information is the raw material; the business processes are the processing machines.

The US Digital Government Strategy [13] has a clear view of this and ranks information as one of four important assets for e-Government. The EU Digital Agenda [14] addresses the opening up of public data resources for reuse and the adoption of a European Interoperability Strategy and Framework. The ISA programme and the action cluster «Trusted information exchange» state as goals improving semantic interoperability in European e-Government systems, improving cross-border access to government data, and making administrative data available for reuse (Open Government Data). Europe addresses the importance of information, but information governance is not explicitly one of the action points.

Information governance and systematic work with metadata and semantics are crucial elements for a feasible implementation of an open, transparent, accessible, accountable, user-friendly and service-oriented public sector. The development of cross-sector services and the demand for reuse of public service information, both in the public sector itself and for commercial services, underpins the importance of information properties such as a measurable quality level, being well-defined, trustworthy, applicable for reuse (including licenses for reuse) and under a well-defined governance regime. Participation in cross-sector e-Services demands the establishment of metadata repositories and ontologies as obligatory parts of public sector information governance regimes.

However, working practices are unfortunately different. Today, public services are mostly provided by one agency only. You very seldom find combined services from several public agencies, even though it would have been profitable for the citizen or business. There seem to be no good and established methodologies within the public sector which support service development across different agencies. This has to do with legal matters, which actually are political issues, organizational issues, service development processes, business processes and the meaning and representation of information (semantics). The interaction between the subject-matter professionals and the ICT people who should implement the cross-sector services seems to be much too loose. There are countless examples of point-to-point solutions with implicit information handling and no reuse effects, ending up in a maintenance nightmare, which again causes an expensive public sector with little funds to develop new and better services.

In Semicolon [15] we have observed that some effects of systematic work with metadata and semantics are better quality services, avoidance of double work in production, increased reuse, increased ability to cooperate between departments in the same agency, identification of incompatible definitions of the same term, more robustness in relation to changes of personnel, reduced demand for user support and an increased ability of the organisation to change [16].

Our research has also identified additional effects. We classify these effects as internal or external to the organization. Internal effects can be summarized as:

∙ By working with information governance in a structured manner, business becomes deeply involved in the definition of concepts. This in turn leads to better alignment between the business processes and the ICT solutions, and to the ability for businesses to develop services with a lower degree of ICT involvement.

∙ Individual knowledge is transformed into common knowledge. This is due to better documentation, i.e., an overview of information, systems and processes. With better documentation, the organization becomes more independent of specific resources and more robust to the exchange of personnel.

∙ Fewer production errors and, as a side effect, less negative attention in the media.

∙ More efficient service development, more efficient systems development and maintenance, and easier adaptation of systems to new rules and legal constraints.

∙ As a consequence of all these effects, the competence and capacity of the staff increase without employing more people. The ability for innovation increases.

External effects can be summarized as:

∙ The publication of an agency's own information in such ways that it can be reused both for cross-sector services and for commercial services.

∙ Avoidance of double reporting obligations for citizens and businesses.

∙ More effective and efficient cross-sector service development.

∙ Improved implementation of rule-of-law principles and improved interoperability.

Management attention and comprehension are crucial for the implementation of a sufficient information governance regime. Management needs to be aware of metadata and semantics as crucial enablers for the goals set forth in strategies and requirements from ministries. Furthermore, the other effects listed above are also of value and must be communicated.

Recommendations for the public sector, which can be partially extrapolated to interoperability in other domains, can be summarized as follows. There is a need to increase the understanding of national and international metadata strategies. Important elements of information governance and metadata strategies should be pedagogically communicated so that they are understood by top management. Information governance should be a requirement for the reuse of Public Sector Information, e.g. in the review of Directive 2003/98/EC, which claims that PSI has the potential for immense commercial value. The effects of systematic work with metadata and semantics should be predicted, and the necessity of information governance for the development of cross-sector services should likewise be pedagogically communicated. Finally, the need for a new or existing public agency with the role of operating a national metadata service, with a clear mandate from the ministries, should be considered.

VII. CONCLUSIONS

We are experiencing an explosion of data and software combined with a worldwide deployment of communication infrastructures. Due to the complexity of the emerging platform, it cannot be developed by one single organisation. Therefore, interoperability and reuse, both in the context of software and data, are becoming key issues. Several ongoing and successful initiatives in the scope of infrastructure, platforms, applications, networks and data are described in this paper. They illustrate relevant strategies for addressing next-generation systems. The rapid emergence and convergence, in parallel, of interoperability and reuse requirements from multiple application domains and from the computing and network infrastructure are common factors among the different software layers.

Acknowledgments

This work has been financially supported by the Spanish Government (Research Project P07-TIC-02713). The Semicolon project is partially funded by the Norwegian Research Council, contracts no. 183260 and 201559. OSAMI-Commons, ITEA project ip07019 within the Eureka Σ! 2023 Programme, has been partially funded by the Ministries of Finland, France, Germany, Spain and Turkey.

REFERENCES

[1] R. Teckelmann, C. Reich, and A. Sulistio, "Mapping of cloud standards to the taxonomy of interoperability in IaaS," in Cloud Computing Technology and Science (CloudCom), 2011 IEEE Third International Conference on, Nov. 29-Dec. 1, 2011, pp. 522-526.

[2] "Megha." [Online]. Available: http://wiki.rediris.es/megha/MainPage

[3] A. Edmonds, T. Metsch, A. Papaspyrou, and A. Richardson, "Towards an open cloud standard," Internet Computing, IEEE, vol. PP, no. 99, p. 1, 2012.

[4] "Storage Networking Industry Association, Cloud Data Management Interface." [Online]. Available: http://www.snia.org/tech_activities/standards/curr_standards/cdmi/CDMI_SNIA_Architecture_v1.0.pdf

[5] "Open Virtualization Format Specification, Distributed Management Task Force, Jan. 12, 2010, DSP0243 version 1.1.0."

[6] OSGi Alliance. (2011) OSGi Service Platform Release 4.3. [Online].Available: http://www.osgi.org/Download/Release4V43.

[7] “ipojo.” [Online]. Available: http://felix.apache.org/site/apache-felix-ipojo.html

[8] M. Behrens, M. Carlson, S. Johnston, G. Mazzafero, A. Richardson,and S. Swidler. (2011) Open cloud computing interface - core. [Online].Available: http://occi-wg.org/about/specification/.

[9] M. Behrens, M. Carlson, S. Johnston, G. Mazzafero, R. Nyren, A. Papaspyrou, A. Richardson, and S. Swidler. (2011) Open cloud computing interface - infrastructure. [Online]. Available: http://occi-wg.org/about/specification/.

[10] M. Behrens, M. Carlson, S. Johnston, G. Mazzafero, R. Nyren, A. Papaspyrou, A. Richardson, and S. Swidler. (2011) Open cloud computing interface - RESTful HTTP rendering. [Online]. Available: http://occi-wg.org/about/specification/.

[11] OSGi Alliance. (2011) RFP 133 Cloud Computing. [Online]. Available: http://www.osgi.org/wiki/uploads/Design/rfp-0133-Cloud_Computing.pdf.

[12] "Open Networking Foundation. Software-defined networking: The new norm for networks." [Online]. Available: https://www.opennetworking.org/images/stories/downloads/white-papers/wp-sdn-newnorm.pdf

[13] US Digital Government Strategy. [Online]. Available: http://www.whitehouse.gov/sites/default/files/omb/egov/digital-government/digital-government.html

[14] EU Digital Agenda. [Online]. Available: http://ec.europa.eu/information_society/digital-agenda/index_en.htm

[15] Semicolon Project. [Online]. Available: http://www.semicolon.no/Hjemmeside-E.html

[16] T. Grimstad and P. Myrseth, "Information governance as a basis for cross-sector e-services in public administration," in Proc. International Conference on E-Business and E-Government, Shanghai, May 2011.