Project Acronym: Fed4FIRE
Project Title: Federation for FIRE
Instrument: Large scale integrating project (IP)
Call identifier: FP7-ICT-2011-8
Project number: 318389
Project website: www.fed4fire.eu

D5.2 - Detailed specifications regarding experiment workflow tools and lifecycle management for the second cycle

Work package: WP5
Task: T5.1
Due date: 31/01/2014
Submission date: 25/04/2014 (revised version after internal review); 21/03/2014 (internal review)
Deliverable lead: Mikhail Smirnov (Fraunhofer)
Version: 012
Authors: Mikhail Smirnov (Fraunhofer), Florian Schreiner (Fraunhofer), Lucia Guevgeozian Odizzio (INRIA), Alina Quereilhac (INRIA), Thierry Rakotoarivelo (NICTA), Alexander Willner (TUB), Yahya Al-Hazmi (TUB), Daniel Nehls (TUB), Ozan Özpehlivan (TUB), Chrysa Papagianni (NTUA), Georgios Androulidakis (NTUA), Aris Leivadeas (NTUA), Donatos Stavropoulos (UTH)


FP7-ICT-318389/FRAUNHOFER/R/PU/D5.2
2 of 120
© Copyright Fraunhofer FOKUS and other members of the Fed4FIRE consortium, 2014

Aris Dadoukis (UTH), Brecht Vermeulen (iMinds), Loïc Baron (UPMC), Carlos Bermudo (i2CAT), Albert Vico (i2CAT)

Reviewers: Steve Taylor (IT Innovation), Bernd Bochow (Fraunhofer), Wim Vandenberghe (iMinds)

Abstract: This deliverable details the software specifications for the common federation tools for experiment lifecycle management, as a first step towards integration and testing, as driven by WP2, in the second development cycle.

Keywords: Experiment, testbed, resource, service, process, specification

Nature of the deliverable:
  R   Report   X
  P   Prototype
  D   Demonstrator
  O   Other

Dissemination level:
  PU   Public   X
  PP   Restricted to other programme participants (including the Commission)
  RE   Restricted to a group specified by the consortium (including the Commission)
  CO   Confidential, only for members of the consortium (including the Commission)

 

 


Disclaimer

This project has received funding from the European Union's Seventh Framework Programme for research, technological development and demonstration under grant agreement no. 318389. The information, documentation and figures available in this deliverable are written by the Fed4FIRE (Federation for FIRE) project consortium under EC co-financing contract FP7-ICT-318389 and do not necessarily reflect the views of the European Commission. The European Commission is not liable for any use that may be made of the information contained herein.


Executive Summary

This report specifies the experiment lifecycle management in the federated testbed environment as it will be developed in cycle 2 of the Fed4FIRE project. The content is sequenced similarly to the Tasks defined in Work Package 5. The major specification material constitutes the main body of the report; additional details have been placed in the appendices. The introduction is intentionally short because this report is the logical continuation of the cycle 1 specification, in which the main design decisions were made and which later proved to be correct.

Two sets of inputs were taken into account in this deliverable: first, the evaluated priorities of the multiple possible developments in cycle 2 and, second, the architectural and sustainability requirements identified in WP2, which were again evaluated with respect to the WP5 goals. As the main result of these evaluations, this report describes a vision of the experiment lifecycle management service offered to experimenters and supported by service components that are themselves seen as future services: components of a meta-service.

This meta-service is "Experiment lifecycle management"; it allows accredited users to run their experiments seamlessly on a federation of heterogeneous testbeds. The corresponding WP5 vision, together with the constraints imposed on WP5's developments by WP2 (both architecture and sustainability) and by the other technical WPs, provided the background needed to facilitate the specification of the service components that realize this meta-service. These service components are further specified in this deliverable:

• Resource description and discovery, which provides a semantic directory of the resources within the federation and their related information, based on a formal representation (an ontology);

• Resource discovery within an Application Service Directory tool, which provides semantic search and retrieval within a collection of offerings, including high-level, ready-to-use functionality that eases the experimenters' interaction with the testbeds;

• Resource reservation, the overarching service that experimenters can use to reserve heterogeneous resources spanning multiple testbeds, based on a multiplicity of selection criteria (time, type, etc.);

• Resource provisioning, which allows accredited users to allocate resources from one or several testbeds to deploy their experiments; provisioning can be direct (the user selects specific resources) or orchestrated (the user defines requirements and the service provides the best-fitting resources);

• Experiment control, which offers two alternatives, NEPI and the OMF EC, to configure the resources and execute the different steps involved in the experiment deployment. The unified experiment configuration and execution is based on a common protocol for experiment control (FRCP), which must be available in Fed4FIRE testbeds;

• User interface and portal, which combines the Fed4FIRE portal and the experimenter tools' user interfaces to allow the integration of the resources provided by the facilities in a user-friendly way.

All  these  different  component  specifications  are  summarized  in  the  final  chapter.            
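The composition of the meta-service out of the components listed above can be illustrated with a minimal sketch. All class, method, and testbed names below are hypothetical and serve only to mirror the phase ordering described in this report (discover, reserve, provision, control); they are not part of any Fed4FIRE API.

```python
# Hypothetical sketch of the experiment lifecycle phases described above.
# None of these names come from Fed4FIRE software; they only illustrate
# the discover -> reserve -> provision -> control ordering.

class ExperimentLifecycle:
    def __init__(self, testbeds):
        self.testbeds = testbeds   # federated testbeds to draw resources from
        self.log = []              # phases executed, in order

    def discover(self, criteria):
        # Phase 1: query the (semantic) resource directory.
        self.log.append("discover")
        return [tb for tb in self.testbeds if criteria(tb)]

    def reserve(self, resources):
        # Phase 2: reserve matching resources across testbeds.
        self.log.append("reserve")
        return {"reserved": resources}

    def provision(self, reservation):
        # Phase 3: instantiate (directly or orchestrated) the reserved resources.
        self.log.append("provision")
        return {"provisioned": reservation["reserved"]}

    def control(self, sliver):
        # Phase 4: configure resources and run the experiment steps.
        self.log.append("control")
        return "done"

def run(lifecycle, criteria):
    found = lifecycle.discover(criteria)
    reservation = lifecycle.reserve(found)
    sliver = lifecycle.provision(reservation)
    return lifecycle.control(sliver)

lc = ExperimentLifecycle(["testbed-a", "testbed-b"])
status = run(lc, criteria=lambda tb: tb.endswith("a"))
print(lc.log)  # phases in the order the report describes
```

The point of the sketch is that each phase is a separately replaceable service component, which matches the meta-service view taken in this deliverable.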


Acronyms and Abbreviations

AA   Authorization and Authentication

AM   Aggregate  Manager  

API   Application  Programming  Interface  

CLI   Command  Line  Interface  

CMS   Content  Management  System  

CRUD   Create-Read-Update-Delete

Fed4FIRE   Federation  for  Future  Internet  Research  and  Experimentation  Facilities  

FCI   Federation  Computing  Interface  

FRCP   Federated  Resource  Control  Protocol    

GUI   Graphical  User  Interface  

OCF   OFELIA  Control  Framework  

OFELIA   OpenFlow  in  Europe:  Linking  Infrastructure  and  Applications  

OLA   Operational  Level  Agreement  

OMA   Open  Mobile  Alliance  

OMF   cOntrol  and  Management  Framework  

OML   ORBIT  Measurement  Library  

OMSP   OML  Measurement  Stream  Protocol  

PDP   Policy  Decision  Point  

PI   Principal Investigator

RA   Resource  Adapter  

REST   Representational  State  Transfer  

RPC   Remote  Procedure  Call  

RSpec   Resource  Specification  

SFA   Slice-based Federation Architecture

SLA   Service  Level  Agreement  

SSH   Secure  Shell  


UI   User  Interface  

URL   Uniform  Resource  Locator  

VCT   Virtual  Customer  Testbed  

VM   Virtual  Machine  

XMPP   eXtensible  Messaging  and  Presence  Protocol  


Table of Contents

Table of Figures ........ 9
List of Tables ........ 11
1  Introduction ........ 12
2  Inputs to this Deliverable ........ 14
  2.1  Setting priorities for cycle 2 plans ........ 14
  2.2  Cycle 2 vs Cycle 1 ........ 15
  2.3  Experiment life-cycle management as a service ........ 16
3  Specification of experiment life-cycle management (cycle 2) ........ 19
  3.1  Specification of resource description and discovery (Task 5.2) ........ 19
    3.1.1  Semantic Resource Directory as a Service ........ 19
    3.1.2  Discovery of application services ........ 25
    3.1.3  Documentation Center ........ 30
  3.2  Specification of resource reservation (Task 5.3) ........ 32
    3.2.1  Resource Reservation Overview ........ 32
    3.2.2  Resource reservation service ........ 34
    3.2.3  Reservation in Cycle 2 ........ 35
    3.2.4  Reservation Broker Functional Specification ........ 35
  3.3  Specification of resource provisioning (Task 5.4) ........ 43
    3.3.1  Resource provisioning service ........ 43
    3.3.2  SLA Management ........ 44
  3.4  Specification of experiment control (Task 5.5) ........ 51
    3.4.1  Introduction ........ 51
    3.4.2  Experiment control service ........ 51
    3.6.2  Important issues for federated experiment control ........ 52
    3.6.3  Cycle 2 developments for experiment control tools ........ 54
  3.5  Specification of the user Interface / portal (Task 5.6) ........ 58
    3.5.1  Portal service ........ 58
    3.5.2  Cycle 2 Specification of the Portal ........ 60
    3.5.3  Standalone tool - jFed ........ 64
4  Conclusion and Future Plans ........ 70
References ........ 71
5  Appendix A: WP5: Evaluation of Cycle 2 items (procedure and the outcome) ........ 73
6  Appendix B: Service Portfolio Catalogue Entry: the guide ........ 78
7  Appendix C: Related work on resource reservation ........ 79
  7.1  References for Appendix C ........ 80
8  Appendix D: SLA Type Guarantees the X% Uptime of Y% of the resources ........ 82
9  Appendix E: Cycle 2 use case for the experiment controller NEPI ........ 85
10  Appendix F: User Interface / Portal Requirements for cycle 2 (per testbed) and commitments (per partner) ........ 86
11  Appendix G: Descriptions of methods and use case of application services directory ........ 89
  11.1  Methods Description ........ 89
  11.2  Use Cases ........ 92
    11.2.1  Services discovery ........ 92
    11.2.2  Detailed view of an application service ........ 94
    11.2.3  Application services management ........ 95
    11.2.4  Service Directory management ........ 96
12  Appendix H: Example Descriptions and Queries of Ontology-based Resource Descriptions ........ 99
  12.1  Example Ontologies ........ 99
  12.2  Example Queries ........ 113
13  Appendix I: Overview of architectural components related to WP5 but out of scope of this deliverable ........ 119
  13.1  SSH Server and client ........ 119
  13.2  XMPP server ........ 119
  13.3  Aggregate Manager ........ 119
  13.4  Authority Directory ........ 119
  13.5  Aggregate Manager directory ........ 119


Table of Figures

Figure 2-1  WP5 Service hierarchy ........ 18
Figure 3-1  Structure of HTML documents as seen by computers. Right: structure of HTML documents as seen by humans (from [23]) ........ 21
Figure 3-2  Integration of the semantic resource directory from the discovery, reservation and provisioning points of view ........ 23
Figure 3-3  Integration of the semantic resource directory from the monitoring & measurement point of view ........ 24
Figure 3-4  Integration of the semantic resource directory from the experiment control point of view ........ 25
Figure 3-5  JSON description of an Application Service ........ 28
Figure 3-6  Service Directory Architecture ........ 29
Figure 3-7  Homepage of the documentation center ........ 31
Figure 3-8  Extract of the testbed catalogue on the central part of the Documentation Center (doc.fed4fire.eu) ........ 32
Figure 3-9  Example of a testbed-specific component of the Documentation Center (in this case the website operated by the BonFIRE project regarding using BonFIRE in Fed4FIRE) ........ 32
Figure 3-10  Reservation Broker Architecture ........ 36
Figure 3-11  Reservation Broker in the Fed4FIRE Environment ........ 37
Figure 3-12  Reservation Broker / MySlice / Manifold integration ........ 38
Figure 3-13  Resource Discovery ........ 39
Figure 3-14  Resource reservation: bound request for resources through the portal ........ 40
Figure 3-15  Resource reservation: unbound request for resources ........ 41
Figure 3-16  Showing SLA information of testbeds which support SLAs ........ 45
Figure 3-17  Selection of resources based on reservations of testbeds offering SLAs ........ 46
Figure 3-18  Acceptance of the SLA Agreements ........ 47
Figure 3-19  View SLA Agreements ........ 48
Figure 3-20  Viewing of the SLA Evaluations ........ 49
Figure 3-21  Fed4FIRE Portal Architecture ........ 61
Figure 3-22  Get Platforms MSC ........ 62
Figure 3-23  Get Resources MSC ........ 63
Figure 3-24  User registration MSC ........ 64
Figure 3-25  Building blocks of the jFed toolkit ........ 65
Figure 3-26  jFed Probe (manual testing + API learning) ........ 66
Figure 3-27  Architecture of the Fed4FIRE SSH gateway server deployed by iMinds ........ 67
Figure 3-28  Architecture of the Fed4FIRE SSH gateway server deployed by iMinds ........ 67
Figure 3-29  Design pane of the jFed UI, representing the resources in an abstracted manner while allowing the experimenter to select the specific testbeds from which the resources should be requested ........ 67
Figure 3-30  Manual editing of the RSpec to support the most testbed-specific features in jFed ........ 68
Figure 3-31  Easy submission of bug reports that contain a log of all API calls that were generated behind the curtains ........ 68
Figure 3-32  jFed UI Connectivity Tester ........ 69
Figure 5-1  Cycle 2 criticality sorted ........ 77
Figure 8-1  Availability (A) of resources during the experiment lifecycle ........ 83
Figure 11-1  Application Services discovery Use Case diagram ........ 93
Figure 11-2  Application Services discovery sequence diagram ........ 94
Figure 11-3  Detailed information view Use Case diagram ........ 95


Figure 11-4  Application service management Use Case diagram ........ 96
Figure 11-5  Application service management sequence diagram ........ 96
Figure 11-6  Service Directory management Use Case diagram ........ 97
Figure 11-7  Service Directory management sequence diagram ........ 98


List of Tables

Table 2-1  Top Five cycle 2 items (Importance) ........ 14
Table 2-2  Top Five cycle 2 items (Difficulty) ........ 14
Table 2-3  Top Five cycle 2 items (Criticality) ........ 14
Table 2-4  Cycle 2 vs cycle 1 from the WP5 viewpoint ........ 15
Table 2-5  Relations of experiment lifecycle management functions with OLA and SLA ........ 16
Table 2-6  Experiment lifecycle management as a sample service ........ 17
Table 3-1  Resource description and discovery as a service ........ 21
Table 3-2  Service directory as a service ........ 26
Table 3-3  Description of HTTP requests and their functions ........ 29
Table 3-4  Resource reservation as a service ........ 34
Table 3-5  Resource provisioning as a service ........ 43
Table 3-6  SLA Specification summary and responsibilities ........ 50
Table 3-7  Experiment control as a service ........ 51
Table 3-8  Status of FRCP deployment (April 2014) ........ 53
Table 3-9  Portal service description ........ 58
Table 5-1  Cycle 2 Delphi evaluation ........ 74
Table 6-1  How to fill in the service portfolio catalogue entry ........ 78
Table 8-1  SLA Evaluation example ........ 84
Table 10-1  Cycle 2 requirements per testbed ........ 86
Table 10-2  Partner commitments on Portal enhancements ........ 87


1 Introduction

In order to support the growth of testbed federation usage, which promises to facilitate the critical shift from federated testbeds to federated research, experiment lifecycle management must meet a number of requirements outlined in the previous deliverable D5.1, which contained the cycle 1 specifications of WP5 [1]. Following the phases of specification and development, as well as the overall progress of the project, the cycle 2 specification plans were first prioritized and then aligned with the priorities set by WP2 of the project; these are reported in Section 2.

The overall design and planning of cycle 2 is outlined in the last sub-section of Section 2, while the rest of the deliverable follows the structure of WP5. The main contribution of each task can be summarised as follows:

◦ Task 5.2 Resource Description and Discovery: a semantic information model and resource directory to store and provide generic information about heterogeneous resources, including application services, and a documentation centre;

◦ Task  5.3  Resource  Reservation:  the  overarching  service  that  experimenters  can  utilize  to  reserve  heterogeneous  resources  spanning  multiple  testbeds,  based  on  a  multiplicity  of  selection  criteria;  

◦ Task  5.4  Resource  Provisioning:  an  SLA  management  specification  for  resource  provisioning  service  and  a  front-­‐end  tool  for  this  management;  

◦ Task  5.5  Experiment  Control:  New  developments  for  the  experiment  tools  NEPI  and  OMF  EC,  including  enhancement  and  extending  support  for  protocols  and  platforms.  Discussion  of  relevant  issues  for  a  common  layer  for  experiment  control;  

◦ Task  5.6  User  interface  /  portal:  combines  Fed4FIRE  portal  and  experimenter  tools  (e.g.  jFed)  that  allow  the  integration  of  the  resources  provided  by  the  facilities  in  a  user-­‐friendly  way;  a  stand-­‐alone  tool  jFed  complements  this  with  more  options.  

Experiment lifecycle management and the workflow tools that enable it are critical components of any testbed federation. WP5 therefore acknowledges the importance of finding a balance between the purely technical aspects influencing its developments and the aspects related to making the final result as sustainable as possible. Inspired by the sustainability activities in T2.3 of the project, and by the clear potential of the ongoing synergies with the FedSM project in that context, WP5 first attempted to tentatively specify an entry within a service portfolio catalogue for an imaginary service whose functionality is that of a task. This is then accompanied by a detailed specification of the task work in cycle 2. Clearly, the detailed specification and the would-be service description do not currently match; however, their juxtaposition appears to be very helpful in projecting the road ahead towards a sustainable, and necessarily service-oriented, federation of heterogeneous testbeds.

The services described in this report would be impossible without the following key architectural components as defined by WP2 for cycle 2 [2]:

! Aggregate manager and Aggregate manager directory
! CLI for control (e.g. SSH server & client)
! Resource controller
! XMPP server
! Experiment controller / control server
! Scenario editor
! Documentation center
! Portal
! Authority directory
! Service directory


! Future reservation broker
! Stand-alone tools (jFed, NEPI)

Some of these components were sufficiently described in [1]; the others are further developed in this specification. Note that much of the available material is deliberately placed in the Appendices, so that this specification remains coherent and readable. The justification for not covering certain components in this document is given in Appendix I (Section 13).

Next to these two WP5 specification deliverables, we also want to point the reader to an additional source of technical information about the project (including experiment lifecycle management and the workflow tools): the documentation centre, which is available on-line at http://doc.fed4fire.eu/.


2 Inputs to this Deliverable
In this deliverable WP5 has considered both its own plans for cycle 2, expressed in the WP's previous work (D5.1 "Detailed specifications for first cycle ready" [1]), and the plans from WP2 expressed in D2.4 "Second federation architecture" [2]. In order to support WP5's internal organization (while of course still considering the constraints imposed on our work by other WPs' needs), WP5 prioritized the 26 items planned for cycle 2 that originated within WP5, using an expert evaluation procedure based on a variant of the Delphi method [3].

Next to this input, the architectural differences of cycle 2 from cycle 1 (Section 4 in D2.4 [2]) and the corresponding updated WP2 views on the architectural components for resource discovery, resource specification and resource provisioning (Section 3.3 in D2.4 [2]) were important inputs that finally shaped the cycle 2 specification reported in this deliverable.

2.1 Setting priorities for cycle 2 plans
WP5 used the Delphi method to set the priorities of the cycle 2 plans coming from WP2 and from D5.1. This was an expert evaluation of the importance (Im) of each of the 16 collected cycle 2 topics and of the associated problem level (Pr), i.e. how difficult the implementation of the topic seems. All 16 physical attendees of the project's technical workshop in Berlin (October 2013) took part in the exercise. The Im and Pr evaluations were averaged, and the averaged values were then composed to represent the topic criticality (Cr).

Table 2-1 Top Five cycle 2 items (Importance)

1 (5.16)  Im=7.36  FRCP 2 (D5.1)
2 (5.1)   Im=7.14  Certificates (D5.1)
3 (5.4)   Im=7.14  Infrastructure community (D5.1)
4 (2.2)   Im=6.93  Directory (WP2)
5 (2.1)   Im=6.71  Testbed Protection (WP2)

Table 2-2 Top Five cycle 2 items (Difficulty)

1 (5.4)   Pr=7.57  Infrastructure community (D5.1)
2 (5.14)  Pr=7.21  Future reservations (D5.1)
3 (2.1)   Pr=7.07  Testbed Protection (WP2)
4 (5.16)  Pr=7.00  FRCP 2 (D5.1)
5 (2.8)   Pr=6.86  SLA Management (WP2)

Table 2-3 Top Five cycle 2 items (Criticality)

1 (5.4)   Cr=54.08  Infrastructure community (D5.1)
2 (5.16)  Cr=51.50  FRCP 1 (D5.1)
3 (2.1)   Cr=47.48  Testbed Protection (WP2)
4 (5.5)   Cr=43.14  Service community (D5.1)
5 (5.14)  Cr=42.77  Future reservations (D5.1)

The results for the Top Five most important, most difficult and, finally, most critical cycle 2 items are presented above (Table 2-1, Table 2-2 and Table 2-3 show in brackets the source of the requirement of each item within cycle 2), while Appendix 5 contains the Delphi questionnaire with all considered items specified and evaluated (Table 5-1), together with an explanation of the Delphi evaluation procedure.
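The composition rule for Cr is not spelled out here; the published figures are, however, consistent with criticality being the product of the averaged importance and difficulty scores. The sketch below checks this inference for the three items that appear in all three top-five tables (scores copied from the tables; the small deviations are explained by the rounding of the displayed averages):

```python
# Reconstruction of the criticality scores for the three items that appear
# in Table 2-1, Table 2-2 and Table 2-3. The rule Cr = Im * Pr is inferred
# from the published numbers, not quoted from the deliverable itself.
items = {
    "Infrastructure community (5.4)": (7.14, 7.57, 54.08),
    "FRCP (5.16)":                    (7.36, 7.00, 51.50),
    "Testbed Protection (2.1)":       (6.71, 7.07, 47.48),
}

for name, (im, pr, cr_published) in items.items():
    cr = im * pr
    # Deviations below ~0.05 stem from rounding of the displayed averages.
    assert abs(cr - cr_published) < 0.1
    print(f"{name}: Im*Pr = {cr:.2f}, published Cr = {cr_published}")
```

If Im and Pr were composed differently (e.g. summed or weighted), the reconstructed values would diverge far more than the observed rounding error.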


2.2 Cycle 2 vs Cycle 1
Section 4 of D2.4 "Second federation architecture" [2] lists the 17 major differences of the cycle 2 architecture compared to that of cycle 1. We repeat these differences (slightly edited to meet the WP5 focus) in Table 2-4 in order to evaluate their importance from the viewpoint of experiment lifecycle management.

Table 2-4 Cycle 2 vs cycle 1 from the WP5 viewpoint

Difference of cycle 2 from cycle 1 [2] | WP5 relevance
Definition of member and slice authority instead of Identity provider | Low
Definition of federation model and introduction of the concepts outside of the federation | Low
Definition of slice and sliver | Low
Federation services layer and application services layer instead of the brokers layer | High
Federator instead of central location(s) | Low
Definition of currently three authorities in the Fed4FIRE federation | Low
Rename certificate directory to authority directory | Low
Introduction of documentation service in the federator | Low
Rename testbed directory to aggregate manager directory | Low
Experiment control (WP5): FRCP is the adopted API for this functionality; an FRCP resource controller is to be foreseen on every resource. Address security when transferring EC from the single-testbed to the testbed federation domain. | High
Facility monitoring (WP6) by exporting the corresponding data as OML streams, and collecting them in a central OML server that is queried by the FLS dashboard. | Medium
Application services layer next to federation services was introduced to facilitate the integration of the testbeds from the applications and services community [WP4]. | High
Separation of architectural components into optional and obligatory. | Low
In experiment control, the PDP component was introduced. | Medium
Workflows of the particular parts of the architecture. | Highest
SLA management was introduced. | High
A detailed setup was introduced for the First Level Support dashboard. | Low

The majority of these relevance assessments are rather straightforward and need no further motivation. There is one specific case, though, that might catch the attention of the reader and requires a bit more: the assignment of the highest WP5 relevance to the fact that WP2 introduced workflows of the particular parts of the architecture in the second iteration of its federation architecture. The rationale for this evaluation stems from the original consideration of a Fed4FIRE federation of testbeds as a two-sided market. This market provides its services to two customer groups: experimenters and testbed providers.

The service offerings to the experimenter group are regulated by Service Level Agreements (SLA) and PDP rules, in interaction with experiment control supported by the facility monitoring [service] and eventually by services from the application services layer.

The service offerings to the testbed provider group are regulated by Operation Level Agreements (OLA), agreements that are best implemented and managed as services within the Federation services layer.


Workflows are equally important for both sides of the Fed4FIRE market, because they formally specify the interactions between components (within particular use cases) and facilitate coherent operation of the market with proper orchestration of both services and resources. Workflow is yet another term for a process, and since the Fed4FIRE project is currently adopting the process-oriented FitSM methodology [4], the importance of workflows in experiment lifecycle management (including incident management) can hardly be overestimated.

2.3 Experiment life-cycle management as a service
Experiment life-cycle management contains a set of functions described in section 1 of [2]. These functions are being implemented (and enhanced) according to the project development cycles, with the target of a sustainable federation of Future Internet Research and Experimentation testbeds. The specification and development work of WP5 in cycle 1 concentrated on the selection of the most suitable tools, mechanisms and platforms to support these functions. In this cycle 2 specification we take a step further: to identify how these functions fit together, we target the definition of lifecycle management workflows (processes), with the ultimate goal of implementing a sustainable experiment lifecycle management service. We start with Table 2-5, where we identify the requirements for both Operational Level Agreement (OLA)1 and Service Level Agreement (SLA) management, respectively, that follow from the main functions.

Table 2-5 Relations of experiment lifecycle management functions with OLA and SLA

Function | OLA Requirement | SLA Requirement
Resource discovery | Publish resources | Discover (subscribe) resources
Resource specification |  | RSpec conformance
Resource reservation | Provisioning | RSVP conformance
Resource provisioning, direct (API) | N/A | N/A
Resource provisioning, orchestrated | Best-fit search; request check | Best-fit search conformance
Experiment control | Incident reporting | Experimenter scripts support
Monitoring: facility monitoring | FLS write | FLS read
Monitoring: infrastructure monitoring | N/A | Access to testbed monitoring capabilities
Measuring: experiment measuring | N/A | Allow collection of data by E.
Permanent storage | Storage [service] dependability | Storage [service] access
Resource release | Tracking of resource usage | Allow resource release by E.
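Since the lifecycle functions and their agreement requirements form a simple mapping, Table 2-5 can also be encoded directly as a lookup structure. The sketch below is purely illustrative (the helper is hypothetical and not part of any Fed4FIRE API; entries are abbreviated from the table) and shows how the SLA-side obligations could be enumerated programmatically:

```python
# Abbreviated encoding of Table 2-5: for each lifecycle function, the
# requirements it places on OLAs (testbed side) and SLAs (experimenter side).
# The dictionary and helper are illustrative only.
REQUIREMENTS = {
    "Resource discovery":     {"OLA": "Publish resources", "SLA": "Discover (subscribe) resources"},
    "Resource specification": {"SLA": "RSpec conformance"},
    "Resource reservation":   {"OLA": "Provisioning", "SLA": "RSVP conformance"},
    "Experiment control":     {"OLA": "Incident reporting", "SLA": "Experimenter scripts support"},
    "Permanent storage":      {"OLA": "Storage dependability", "SLA": "Storage access"},
    "Resource release":       {"OLA": "Tracking of resource usage", "SLA": "Allow resource release"},
}

def sla_requirements():
    """All SLA-side requirements, i.e. what the federation promises experimenters."""
    return sorted(r["SLA"] for r in REQUIREMENTS.values() if "SLA" in r)

print(sla_requirements())
```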

As mentioned before in section 1, WP5 acknowledges the importance of sustainability aspects, and sees great potential in the current collaboration between Fed4FIRE and the EU FP7 FedSM project, which focuses on service management in federated e-Infrastructures. As the next step towards the definition of experiment lifecycle management as a

1 We list these requirements here for completeness; OLA specification and/or implementation are not part of cycle 2.
3 http://www.geni.net/resources/rspec/ext/


service, we therefore describe the major tasks of this future service also as services. For this we use the FitSM template [5] and tentatively provide a sample description of experiment lifecycle management as a service in Table 2-6. We use the same table in Appendix B (section 6) to provide guidance for filling it in for service components, which correspond to the WP5 tasks and, respectively, to the sections of this chapter.

The main purpose of WP5 is to design technical specifications and implementations that facilitate experiment lifecycle management in a federated environment. However, the sustainability requirement discovered in the joint work with WP2 strongly calls for service and process orientation in all facets of that management design. In fact, following the adoption of the FitSM methodology in the Fed4FIRE project for the sustainability work, and after subsequent training, WP5 has pioneered the use of FitSM approaches and templates. This attempt was deliberately discussed with Task 2.3, which resulted in a certain tailoring of the FitSM material for Fed4FIRE and in a tentative hierarchy of architectural components; this hierarchy has a service at its top level, supported by service components (SC) at the level below, and finally by Configurable Items (CI) at the bottom level (Figure 2-1).

Table 2-6 Experiment lifecycle management as a sample service

Basic Information
Service name: Experiment lifecycle management
General description: Allows accredited users to run their experiment seamlessly on a federation of heterogeneous testbeds
User of the service: Experimenter (innovator)

Service management
Service Owner: Now: Fed4FIRE consortium; Later: Federator
Contact information (internal): https://fed4fire.intec.ugent.be/index.php
Contact information (external): https://portal.fed4fire.eu/
Service status: Live, Experimental (cycle 2)
Service Area/Category: Training | Basic services under specification / prototyping / testing / evaluation
Service agreements: SLA is n/a

Detailed makeup
Core service building blocks: Resource description, discovery, reservation, provisioning, portal
Additional building blocks: Monitoring (FLS), measurements, reputation, CRM, …
Service packages: This service, when ready, is a package of core and additional technical services
Dependencies: Testbed services, OLA, Portal and plugins availability

Technical Risks and Competitors
Cost to provide: Maintenance of software and of the portal, coordination, keeping the testbeds alive
Funding source: Now: EU; later: subscription fee via OLA, usage fee via SLA
Pricing: Now: free of charge; later: reputation based (?)
Value to customer: Seamless and automated experiments on a federation of heterogeneous testbeds
Risks: Attacks, lack of incentives to users, lack of demanded resources
Competitors: Other testbeds with more attractive offerings


 Figure  2-­‐1  WP5  Service  hierarchy  


3 Specification of experiment life-cycle management (cycle 2)
This section is structured according to the organization of WP5 into its tasks.

3.1 Specification of resource description and discovery (Task 5.2)
This section specifies semantic resource discovery as a service, the description and discovery of application-level services, and the Fed4FIRE documentation centre.

3.1.1 Semantic  Resource  Directory  as  a  Service  

3.1.1.1 Motivation and Description
Experimenters need to discover and gather information about the resources available at the different heterogeneous facilities across the federation in order to select appropriate candidates for experimentation. In this context it is important to remove the entry barriers and the steep learning curve for new users, and to offer testbeds the capability to attract new users by making their resources discoverable. In the context of the high-level architecture defined by WP2 in D2.4 [2], this relates to the portal and stand-alone tools that connect to the aggregate managers (AMs) of the different testbeds in order to discover and gather information about the available resources. The interface between these different components is the AM API. In section 3.3.3.2 of D2.4, WP2 has defined which specification should be adopted by testbeds exposing themselves through the AM API. However, there is one element that neither the current nor the future Common AM API will specify: a generic resource description language to support resource discovery. Therefore, Task 5.2 aims at developing a generic resource description language based on widely adopted work. This includes data on the capabilities of a particular resource as well as information on its requirements, e.g. in terms of interconnectivity or dependencies.
Currently, the various facilities describe their existing resources in different ways. The common denominator is to use the XML-based GENI RSpec V3 to encode basic information about simple nodes and links, and to add extensions for more heterogeneous resources.
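For illustration, the purely syntactic nature of such RSpec documents can be sketched as follows (the node names and URNs below are invented; only the generic RSpec v3 namespace and node vocabulary are assumed). A tool can extract attribute values, but the meaning of each tag and attribute remains implicit in the XSD schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical GENI RSpec v3 advertisement fragment; the component IDs and
# values are invented for this sketch.
RSPEC = """<rspec xmlns="http://www.geni.net/resources/rspec/3" type="advertisement">
  <node component_id="urn:publicid:IDN+example.org+node+node1"
        component_name="node1" exclusive="true"/>
  <node component_id="urn:publicid:IDN+example.org+node+node2"
        component_name="node2" exclusive="false"/>
</rspec>"""

NS = {"rspec": "http://www.geni.net/resources/rspec/3"}

def exclusive_nodes(xml_text):
    """Return the component names of all exclusively usable nodes."""
    root = ET.fromstring(xml_text)
    return [n.get("component_name")
            for n in root.findall("rspec:node", NS)
            if n.get("exclusive") == "true"]

print(exclusive_nodes(RSPEC))  # ['node1']
```

Such a parser only matches literal tag and attribute names; it cannot infer, for instance, that a "node" and a "server" advertised by two testbeds denote related concepts, which is exactly the gap discussed next.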
While XML is a universal meta-language for defining markup that provides a uniform framework for the interchange of data and metadata between applications, it does not provide any means of talking about the semantics (meaning) of the data. In particular, no intended meaning is associated with the nesting of tags, and XML is focused on a tree data structure with limited support for expressing relationships (dependency graphs). Furthermore, the vocabulary of the tags and their allowed combinations is not fixed, and each GENI RSpec extension follows its own approach. Therefore, the goal is to find a more adequate, unified description that can be used to describe the heterogeneous resources within the Fed4FIRE facilities in sufficient detail. Adopting semantics enables software tools to understand the meaning of a resource description; by reasoning over the data, they can therefore support experimenters in finding information and testbed owners in publishing it. The foundation is a formal representation of data such that computers are capable of processing it. As a result, standardized and widely adopted mechanisms to describe heterogeneous resources in federated environments will be adopted within Task 5.2 as a starting point. This approach is manifold:

1. Following the approach of Request for Comments (RFC) 3444 [17] "On the Difference between Information Models and Data Models", the focus is on the usage of a stable data model based on the Resource Description Framework [15] (RDF) and on the development of a common, semantic and vivid information model based on the Resource Description Framework Schema [20] (RDFS) and the Web Ontology Language [18] (OWL).

2. Incorporate the latest widely adopted research, development and standardization work in this context, namely the Infrastructure and Network Description Language [11], [12] (INDL), the

Page 20: D5.2(;(Detailed(specifications( … · 2017. 7. 7. · ProjectAcronym! Fed4FIRE(ProjectTitle! Federation(for(FIRE(Instrument! Largescaleintegratingproject(IP)(Call!identifier! FP7;ICT;2011;8(Projectnumber!

FP7-­‐ICT-­‐318389/FRAUNHOFER/R/PU/D5.2    

 20  of  120  

©  Copyright  Fraunhofer  FOKUS  and  other  members  of  the  Fed4FIRE  consortium,    2014      

Network Description Language based on the Web Ontology Language [10], [13], [14] (NDL-OWL) and Networking innovations Over Virtualized Infrastructures [22] (NOVI). An interesting approach to look at is also the Testbed as a Service Ontology Repository (TaaSOR) [19]. Examples of how testbed resources can be described and queried are listed in Appendix H: Example Descriptions and Queries of Ontology-based Resource Descriptions (Section 12).

3. Support testbeds in offering their resource descriptions in an appropriate format (e.g. as an RDF/XML serialized ontology-based response in the SFA AM listResources method call, or, if needed to facilitate the transition to a new resource model, through centralized conversion, e.g. between custom XSD-based GENI RSpec v3 extensions and the chosen ontology).

4. Enable experimenters to work with this resource information (e.g. via a MySlice plugin or a public SPARQL Protocol and RDF Query Language [21] (SPARQL) endpoint).

5. Offer  the  information  to  other  federation  services  (such  as  the  future  reservation  broker,  the  portal,  the  first  level  support,  …)  to  discover,  select,  reserve,  orchestrate,  provision,  monitor,  and  control  the  resources.  
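The gain over purely syntactic matching can be sketched with a toy in-memory triple store. The vocabulary below is invented for illustration and is not the Fed4FIRE ontology; a real deployment would use an RDF store queried via SPARQL, as listed above, but the principle of answering requests by reasoning over relationships (here, rdfs:subClassOf) is the same:

```python
# Toy illustration of semantic matching over resource descriptions.
# Predicate and class names are invented; a real system would use RDF/OWL.
TRIPLES = {
    ("node1", "rdf:type", "WirelessNode"),
    ("node1", "locatedAt", "testbedA"),
    ("node2", "rdf:type", "VirtualMachine"),
    ("node2", "locatedAt", "testbedB"),
    ("WirelessNode", "rdfs:subClassOf", "Node"),
    ("VirtualMachine", "rdfs:subClassOf", "Node"),
}

def types_of(resource):
    """Direct and inherited types, following rdfs:subClassOf edges."""
    closed = {o for s, p, o in TRIPLES if s == resource and p == "rdf:type"}
    while True:
        parents = {o for s, p, o in TRIPLES
                   if s in closed and p == "rdfs:subClassOf"}
        if parents <= closed:
            return closed
        closed |= parents

def discover(wanted_type):
    """All resources whose direct or inferred type matches the request."""
    subjects = {s for s, p, _ in TRIPLES if p == "rdf:type"}
    return sorted(s for s in subjects if wanted_type in types_of(s))

print(discover("Node"))          # both nodes match via subclass reasoning
print(discover("WirelessNode"))  # only node1
```

A keyword or tag-based search for "Node" would miss both resources here, since neither is tagged with that literal type; the subclass inference is what makes them discoverable.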

This approach is in line with the latest trends to link data on the web. Communities in the fields of federated cloud computing (e.g. IEEE P2302) and the Internet of Things (e.g. oneM2M) are heading in the same direction, semantically describing resources based on RDF and OWL. A related example is the myExperiment [16] semantic workflow directory at http://rdf.myexperiment.org. More public SPARQL endpoints are listed at http://www.w3.org/wiki/SparqlEndpoints.
One of the new developments in cycle 2 – an outline of a service that allows search within a collection of federation offerings, including high-level, ready-to-use functionality that eases the experimenters' interaction with the testbeds – is presented in the last sub-section of this chapter.
Altogether, adopting semantics will lower the entry barriers for experimenters by enabling tools to understand the meaning of a resource description and to support the user in the discovery phase. Figure 3-1 sketches the difference between syntactically driven approaches (such as HTML or the current GENI RSpec) and semantically driven ones. Semantics would allow richer search results, improve the display of information and make it easier for clients to find the right resources and testbeds. It can also enable new tools and applications that make use of the structure. Major search providers, including Bing, Google, Yahoo! and Yandex, also rely on such technologies2.
Furthermore, a shared formal information model makes it easier for testbed owners to decide on a markup schema and get the maximum benefit from their efforts.
In addition, an incentive for facilities is the opportunity to make it easier for users to find relevant resources on their testbed within the federation. Adopting the according standards also improves the sustainability of a testbed beyond the duration of a project.

                                                                                                                         2  http://schema.org    


Figure 3-1 Left: structure of HTML documents as seen by computers. Right: structure of HTML documents as seen by humans (from [23]).

Table  3-­‐1  Resource  description  and  discovery  as  a  service  

Basic Information
Service name: Resource description and discovery
General description: Directory of the resources within the federation and their related information, based on a formal representation (ontology).
User of the service: Within the federation this directory can be used by:
• federation services such as the aggregate manager directory, future reservation broker, service directory, aggregate manager, testbed manager, experiment control server, OML server, manifold, … to push data into the repository;
• federation services such as the future reservation broker, portal, experiment controller, scenario editor, policy decision point, reputation engine, manifold, … to pull data from the repository;
• federation users, to query for required resources manually.

Service management
Service Owner: Task 5.2 Lead: TUB
Contact information (internal): Alexander Willner <alexander.willner@tu-berlin.de>
Contact information (external): https://portal.fed4fire.eu/
Service status: Conception (Cycle 1); to be implemented as a federation service (Cycle 2); to be distributed as a testbed service (Cycle 3)
Service Area/Category: Resource Description and Discovery. The underlying ontology and directory service can be used in all other phases.
Service agreements: Best-effort centralized service (Cycle 2), best-effort distributed service (Cycle 3)

Detailed makeup


Core service building blocks: Formal representation framework for resources (ontology) and a persistent CRUD service for this information.
Additional building blocks: Extensions to the ontology for different use cases.
Service packages: This service will offer an extensible core for further ontologies, while offering a reusable data/information model and interfaces.
Dependencies: Depending on the use cases, the directory needs the corresponding data about the resources, for example a generic resource description per testbed, proper monitoring information and reservation information. This information must be pushed by or pulled from other components (see above).

Technology Risks and Competitors
Cost to provide: The service must be up and running, bug fixes must be applied, and the underlying ontology is under continuous evolution.
Funding source: Now: EU. Later: continued EU funding; operation by each testbed; or a subscription-based federation service for testbeds.
Pricing: Free (it functions as a facilitator to attract users by publishing the list of available resources).

Value to customer:
• Target group experimenters: The main objective of customers is to use resources for experimentation. Being able to discover available resources based on rich semantic annotations and a stable underlying data model allows the customer to automatically find the requested heterogeneous resources in a federation.
• Target group testbeds: The main objective of testbed owners is to offer testbed resources to users. Being able to publish available resources based on rich semantic annotations and a stable underlying data model allows the heterogeneous resources to be found in a federated environment, both using simple searches and using sophisticated semantics-based requests (such as SPARQL queries).
Risks: The incentive this service provides to new customers might have been overestimated, in which case the costs of operation would be unjustified.
Competitors: Other testbeds offering semantically annotated resource descriptions, which might therefore be more attractive for experimenters.

   


3.1.1.3 Semantic  Resource  Directory:  Discovery,  Reservation  and  Provisioning  

Figure 3-2 Integration of the semantic resource directory from the discovery, reservation and provisioning points of view

The context of resource discovery, reservation and provisioning is the main use case of the extended information model (IM). As shown in Figure 3-2, testbeds will publish information about their resources using the new native IM via the GENI SFA AM API. Note that the "Semantic resource directory" is a logical component in this context, i.e. it is NOT a single centralized federator component (it is not defined as such in D2.4). It rather represents the collected data itself as a source of related information for the corresponding components. These components either already store resource descriptions gathered in listResources calls (e.g. manifold, FLS, …) or query components that offer them. However, to facilitate the transition to the new IM, a temporary component (at the testbed level or federation-wide) could offer both a translation service between GENI RSpec v3 with its corresponding extensions3 and the IM, and an interface that could be queried using the SPARQL Protocol and RDF Query Language (SPARQL) by other federation services (see above) or directly by advanced experimenters (e.g. via a MySlice plugin or a standardized HTTP REST SPARQL endpoint) to discover existing information about the resources in the federation. Furthermore, the Future Reservation Broker needs reservation information about resources, which can be queried using the same interface (cf. Section 3.2).
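As a hedged illustration of how such a SPARQL endpoint might be queried, the sketch below only builds a discovery query; the `ex:` namespace and the `WirelessNode` class are invented placeholders, not part of the actual Fed4FIRE information model:

```python
# Sketch of a discovery query for a SPARQL endpoint as mentioned above.
# The "ex:" namespace and class names are invented placeholders, not the IM.
def build_discovery_query(resource_class, limit=10):
    """Build a SPARQL SELECT query for resources of a given class."""
    return (
        "PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>\n"
        "PREFIX ex: <http://example.org/fed4fire#>\n"
        "SELECT ?resource WHERE {\n"
        f"  ?resource rdf:type ex:{resource_class} .\n"
        "}\n"
        f"LIMIT {limit}"
    )

print(build_discovery_query("WirelessNode", limit=5))
```

Such a query string could then be sent to the HTTP REST SPARQL endpoint with any standard HTTP client.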

3.1.1.4 Semantic Resource Directory: Monitoring and Measurements
The second most important use case for the semantic resource directory is to collect, transform and offer monitoring information about resources (note: a testbed itself is also a resource), or about monitoring itself as a complex resource. Figure 3-3 depicts that monitoring information is usually published via OMSP to an OML server within the Fed4FIRE federation. Besides this, more monitoring information could be published via the corresponding SFA AM method calls (again, either as a GENI RSpec v3 extension or by

3 http://www.geni.net/resources/rspec/ext/



using natively the new IM). This monitoring information is then linked to the corresponding resources in the directory (one of the main benefits of RDF) and can be queried and used by experimenters, the Future Reservation Broker or the corresponding SLA services.

Figure 3-3 Integration of the semantic resource directory from the monitoring & measurement point of view
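The linking of monitoring data to resources described above can be sketched with plain RDF-style triples; the predicate names below are invented for illustration and are not Fed4FIRE ontology terms:

```python
# RDF-style linking of monitoring data to resources, as described above.
# Predicate names are invented for illustration, not Fed4FIRE ontology terms.
triples = set()

def link_measurement(resource_uri, metric, value):
    """Record a measurement and link it to its resource via triples."""
    measurement_uri = f"{resource_uri}#meas-{metric}"
    triples.add((resource_uri, "hasMeasurement", measurement_uri))
    triples.add((measurement_uri, "metric", metric))
    triples.add((measurement_uri, "value", value))
    return measurement_uri

def measurements_of(resource_uri):
    """Follow hasMeasurement links, as a SPARQL join over triples would."""
    return [obj for subj, pred, obj in triples
            if subj == resource_uri and pred == "hasMeasurement"]

node = "http://example.org/testbed-a/node1"
link_measurement(node, "cpu_load", 0.42)
link_measurement(node, "uptime_percent", 99.9)
print(len(measurements_of(node)))  # 2
```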

   



3.1.1.6 Semantic Resource Directory: Experiment Control
Finally, the semantic resource directory can be used in the context of experiment control. On the one hand, analogous to the linked data of http://rdf.myexperiment.org, existing workflows could be discovered and stored (either within a federation-wide or testbed-wide directory). On the other hand (cf. Figure 3-4), the control capabilities (i.e. the parameters of these workflows) of the resources needed for the experiment could be queried, if not yet supported by the involved experimentation protocol (the latest FRCP enhancements focus on offering this information natively by providing schema information within a 'context' attribute4).

Figure 3-4 Integration of the semantic resource directory from the experiment control point of view

3.1.2 Discovery  of  application  services  

3.1.2.1 What are the Application Services?
The Application Services layer introduced in WP2 (document D2.4 – Second federation architecture) addresses the need to make it easier for experimenters to use the resources of the federated infrastructure in their experiments. Due to the increasing number of testbeds being covered within Fed4FIRE and the diversity of technologies available, Infrastructure Providers are given the capacity to offer high-level, ready-to-use functionality that allows experimenters to access specific data or to deploy different software applications on top of existing testbeds. These functions have been defined as Application Services, and those who offer them as Service Providers.
At this point only federated testbed providers offer Application Services, although there is the option to allow external Service Providers to offer services using federation members' infrastructures as well. As a first approach to this layer, two Application Services are considered:

4 https://github.com/maxott/frcp4node#high-level-api



• Sensor data gathering and publication through an API from SmartSantander's testbed.
• Automatic deployment of Hadoop clusters on resources of the Virtual Wall.

3.1.2.2 Service Directory
The Service Directory allows experimenters to discover available services and to decide whether they fulfil their needs. This directory contains information about Infrastructure Services, offered by the federation itself as defined in D4.2, and Application Services, offered by Service Providers.
Application Services stored in the directory may come from federated and non-federated Service Providers. In this sense, federated means that the Service Provider has a valid account on Fed4FIRE and therefore the corresponding credentials that guarantee the Service Provider is already trusted within the federation.
Two types of Service Providers are considered:

• Testbed   Providers   that   act   as   Service   Providers   offering   Application   Services   using   their  infrastructure.  They  are  already  federated.  

• Service  Providers  that  do  not  own  any  testbed  and  use  the  infrastructure  provided  by  other  members  of  the  federation.  They  can  offer  Application  Services  whether  they  are  federated  or  not,  although  federation  membership  should  be  encouraged.  

The service directory can be seen as offering a service as outlined in Table 3-2.
Table 3-2 Service directory as a service

Basic Information
Service name: Resource discovery (Application Service Directory)
General description: Collection of offerings including high-level, ready-to-use functionality that allows experimenters to ease their interaction with the testbeds
User of the service: Experimenter

Service management
Service Owner: Atos
Contact information (internal): [email protected]
Contact information (external): [email protected]
Service status: Previous status (Cycle 1): non-existent. Current status (Cycle 2): conception and development, including two services. Future (Cycle 3): probably further services will be included, specific SLAs for these applications could be defined and implemented, and security mechanisms will be refined.
Service Area/Category: Resource discovery; Resource description; Resource specification
Service agreements: SLA is n/a

Detailed makeup
Core service building blocks: All requests are handled by a core element, which is in charge of performing queries to a database, processing the responses and formatting them correctly before sending them back to the clients.


Experimenters  can  browse  applications  and  service  providers  can  upload  and  manage  them.      

Additional building blocks:
Service packages: N/A
Dependencies: There is a strong dependency on security mechanisms.

Technical Risks and Competitors
Risks: The application services included in this directory should be "guaranteed" by the federation (they must be validated and maintained). Otherwise the image of the federation could be damaged ("applications in Fed4FIRE fail", etc.).
Competitors: XIFI yellow pages and services

3.1.2.3 Authentication and authorisation for Application Services
The authentication and authorization mechanisms required for Application Services depend on the type of Service Provider and on whether it is federated or not. Testbed Providers that already belong to the federation can act as Service Providers and use the same user certificate for authentication at their Application Service as they use for authentication at their testbeds.
Service Providers without their own testbed can decide whether to become part of the federation or to offer their services independently. Since experimenters may require certain Quality of Service conditions for Application Services, allowing non-federated Service Providers to use federated testbeds would require mechanisms to guarantee quality assurance, service availability and confidentiality policies. These kinds of requirements can be handled more easily if Service Providers are federated, because of the mutual relation of trust established between testbeds and Service Providers within the federation.
Introducing this new actor, the Service Provider without a testbed, implies the definition of new authentication requirements in the federation, to be included in WP7.

• There has to be a mechanism for testbeds and Service Providers to trust each other.
• Experimenters that authenticate at a Service Provider do not need to authenticate again at

the  testbed  used  by  the  service.    

In order to simplify these requirements, only Application Services offered by federated Service Providers will be considered for cycle 2.

3.1.2.4 Quality Control of Service Directory entities
Services offered from the central service directory in Fed4FIRE require quality control mechanisms. Testbeds and external service providers may upload their own applications for experimenters to use. Quality is an important aspect as far as sustainability is concerned, since the available solutions must be proven to work properly when offered from this central point.
Because of the complexity of using the ROCQ (Reputation, Opinion, Credibility, Quality) framework introduced in WP7 for quality control on the Service Directory, it has been decided, as a temporary solution, to restrict cycle 2 to a simple community-based quality control. This has been agreed to be performed by means of the Google Groups platform. Unlike the ROCQ framework, this solution only allows quality control of application services based on the feedback of Google Group members. ROCQ integration will result in a more robust reputation mechanism for application services and the unification of the quality control framework for testbeds and application services.
Every service provider has to propose the offered service to be added to the directory; the Google Groups community can then jointly test and discuss this new service, and approve (or reject) its addition to the general public service directory. Given the temporary nature of this solution,


the further impact of this community approach on the Service Directory has not been taken into consideration.
The steps to publish an Application Service in the Service Directory are as follows:

1. The Service Provider creates a new topic on the Google Group, either by accessing the web interface at http://groups.google.com/group/fed4fire-servicedirectory or by email to fed4fire-[email protected]. This topic will contain all the details of the Application Service to be offered. In particular, the minimum information to be included is:

a. Application Service name
b. Description
c. Provider name
d. Endpoint URI
e. API Protocol
f. Detailed evaluation check procedure

2. Any   tester   of   the   Google   Group   community   checks   the   Application   Service   following   the  procedure  and  tools  described  in  the  “Detailed  evaluation  check  procedure”.  

3. If the Application Service performs correctly and works as expected, it is approved by the tester for official publication in the Service Directory.

4. The Service Provider is now granted permission to publish the offered Application Service in the Service Directory.
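The four steps above could be modelled as a small state machine; the sketch below is illustrative only (field names and states are assumptions, not a Fed4FIRE implementation):

```python
# Illustrative state machine for the four publication steps above.
# Field names and states are assumptions, not a Fed4FIRE implementation.
REQUIRED_FIELDS = {"name", "description", "provider", "endpoint",
                   "protocol", "evaluation_procedure"}

def propose(service):
    """Step 1: a proposal is valid only if the minimum information is present."""
    missing = REQUIRED_FIELDS - service.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    service["state"] = "proposed"

def review(service, approved):
    """Steps 2-3: a community tester checks and approves or rejects."""
    service["state"] = "approved" if approved else "rejected"

def publish(service):
    """Step 4: publication is only granted after approval."""
    if service.get("state") != "approved":
        raise PermissionError("service not approved for publication")
    service["state"] = "published"

svc = {"name": "Hadoop deployer", "description": "...",
       "provider": "Example Provider", "endpoint": "https://example.org/api",
       "protocol": "REST", "evaluation_procedure": "see Google Group topic"}
propose(svc)
review(svc, approved=True)
publish(svc)
print(svc["state"])  # published
```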

The future plans for the Service Directory are to increase the number of published services and to incorporate quality control within an extension of the ROCQ framework already used for testbed reputation.

3.1.2.5 Application Service Representation
Application Service models contain the name and description of every Application Service and all the information required to allow experimenters to start using those services straightaway, i.e. the protocol used, the endpoint location, basic API usage and the authentication method. Furthermore, a link to more detailed documentation of the service API is provided as well.
To keep things simple, JSON has been chosen as the metadata representation format for Application Service models. The structure with its main fields is shown in Figure 3-5.

{
  "appService": {
    "id": "{Application Service ID}",
    "name": "Application Service name",
    "iconURL": "URL pointing to the Application Service icon representation",
    "provider": "Name of the Service Provider",
    "briefDescription": "One line description of the Application Service",
    "fullDescription": "Extended description of the Application Service",
    "endPoint": "Endpoint URI where the Application Service API is located",
    "protocol": "Protocol in which the Application Service is provided",
    "authMethod": "Required authentication method",
    "APIbasic": "Basic information about API methods to use the Application Service",
    "APILink": "URL with detailed information about Application Service API usage"
  }
}

Figure 3-5 JSON description of an Application Service
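A client could validate such a record with a few lines of standard-library code; the required-field list and sample values below are illustrative assumptions (only SmartSantander is an actual Fed4FIRE testbed mentioned above):

```python
import json

# Minimal validation of the appService structure from Figure 3-5.
# The required-field list and sample values are illustrative assumptions.
REQUIRED = ["id", "name", "provider", "briefDescription",
            "endPoint", "protocol", "authMethod"]

def validate_app_service(document):
    """Parse an Application Service record and check its main fields."""
    entry = json.loads(document)["appService"]
    missing = [field for field in REQUIRED if field not in entry]
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return entry

sample = json.dumps({"appService": {
    "id": "1",
    "name": "Sensor data API",
    "provider": "SmartSantander",
    "briefDescription": "Sensor data gathering and publication",
    "endPoint": "https://example.org/sensors",
    "protocol": "REST",
    "authMethod": "X.509 certificate",
}})
print(validate_app_service(sample)["name"])  # Sensor data API
```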

3.1.2.6 Service  Directory  API  Description  


Access to the Service Directory is provided through a REST API, with access restrictions depending on the user. Federated experimenters can browse available services using the Fed4FIRE portal, being able to view a brief description of all services or to use free-text searches and filter results by categories. Access to the Service Directory with the same filtering capabilities is also allowed for users that are not federated, but the requests and the processing of responses to discover application services have to be performed by means of their own tools (REST clients).
Service Providers can use the methods provided by the REST interface to publish their Application Services and make them available to experimenters once they are validated to be on the Service Directory. Whether providers are federated or not, access restriction policies to the Service Directory have to be applied in order to prevent misuse by malicious users.
Figure 3-6 provides an overview of the Service Directory architecture. Note that the API has been represented as two components to make the different types of access it provides clearer. The Experimenter API has only read permissions, whereas the Service Provider API allows read, write and delete actions. All API requests are handled by the core element, which is in charge of performing queries to the database, processing the responses and formatting them correctly before sending them back to the clients.

 Figure  3-­‐6  Service  Directory  Architecture  

The Service Directory API is defined as a REST interface focusing on what is available: the Application Services stored in the Service Directory. The Fed4FIRE portal will provide a Graphical User Interface (GUI) for experimenters to perform search queries against the API. Service Providers access the Service Directory API directly to publish, update and delete Application Services by means of HTTP methods.

Table 3-3 Description of HTTP requests and their functions

Method   Function
GET      Retrieve an Application Service information resource
POST     Publish an Application Service in the repository
PUT      Update the Application Service information resource
DELETE   Remove an Application Service entry from the repository

To avoid unauthorized use of the Service Directory, each HTTP request against this API to publish, update or remove an Application Service will require the inclusion of specific authentication credentials, by means of a security token or an X.509 certificate.
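The methods of Table 3-3 and the credential requirement can be illustrated by building (not sending) requests with Python's standard library; the base URL and the bearer-token header below are assumptions for illustration, not the real Service Directory interface:

```python
import urllib.request

# Sketch: building (not sending) Service Directory requests per Table 3-3.
# BASE and the Authorization header scheme are assumptions for illustration.
BASE = "https://example.org/servicedirectory"

def make_request(method, path, token=None, data=None):
    """Build an HTTP request object for the given method and path."""
    request = urllib.request.Request(BASE + path, data=data, method=method)
    if token:
        # Publish/update/remove calls require credentials (see above).
        request.add_header("Authorization", "Bearer " + token)
    return request

get_req = make_request("GET", "/appservices/1")
del_req = make_request("DELETE", "/appservices/1", token="secret")
print(get_req.get_method(), del_req.get_method())  # GET DELETE
```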


URI Format
The API is exposed using the following URI structure:

• {ServiceDirectoryURI}/fedservices: Used to expose the complete list of Infrastructure Services.
• {ServiceDirectoryURI}/appservices: Used to expose the complete list of Application Services.
• {ServiceDirectoryURI}/appservices/search: Used to perform free-text search queries.
• {ServiceDirectoryURI}/appservices/{appserviceID}: Used to expose detailed information of a

specific Application Service. The appserviceID field is generated automatically at the insertion of a new application service, with an integer number format. The first application service will have appserviceID equal to 1, the second equal to 2, and so on.
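A small helper mirroring this URI structure might look as follows; the base URI is a placeholder, not the real Service Directory location:

```python
# Helper mirroring the URI structure listed above; BASE is a placeholder,
# not the real Service Directory location.
BASE = "https://example.org/servicedirectory"

def app_service_uri(app_service_id=None, search=False):
    """Return the Service Directory URI for a list, a search or a single entry."""
    if search:
        return f"{BASE}/appservices/search"
    if app_service_id is None:
        return f"{BASE}/appservices"
    # appserviceID values are auto-generated positive integers (1, 2, ...).
    if not isinstance(app_service_id, int) or app_service_id < 1:
        raise ValueError("appserviceID must be a positive integer")
    return f"{BASE}/appservices/{app_service_id}"

print(app_service_uri(2))
```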

Descriptions of the methods and use cases of the application services directory are given in Appendix G (Section 11).

3.1.3 Documentation Center
The documentation center defined by WP2 in D2.4 can be seen as a superset of the components of the cycle 1 architecture that were then called the human-readable testbed directory and the tools directory. The documentation center is expected to contain both types of information (a catalogue of Fed4FIRE testbeds, and information about how to use Fed4FIRE tools on them). The documentation center should at least cover the following Fed4FIRE areas:

• Testbed  catalogue  (including  both  high  level  overviews  and  detailed  testbed-­‐specific  information)  

• Tools  catalogue  (including  both  high  level  overviews  and  tutorials  about  how  to  use  them)  • Information  to  get  started  (including  how  to  register  for  an  account,  how  to  retrieve  the  

needed  certificates  and  SSH  keys,  how  to  run  a  hello-­‐world  type  of  experiment)  • Information  about  how  to  get  support.  • Background  information  about  the  Fed4FIRE  project  • Documentation  for  testbed  owners  and  developers  (F4F  architecture,  specifications  of  the  

F4F  APIs,  specifications  of  architectural  components,  how  to  add  SFA  support  to  a  testbed,  how  to  add  support  for  your  testbed  to  the  Fed4FIRE  tools  (portal,  jFed,  NEPI,  …)).  

• FAQ, tips and best practices
The implementation of the documentation center already started in cycle 1, but it does not yet include all the information mentioned above. Therefore the content of the Documentation Center will be further populated in the course of cycle 2. However, the technical software framework driving this architectural component can already be considered final. For this we adopted the Sphinx Python Documentation Generator, which can be downloaded from http://sphinx-doc.org. This is a tool that makes it easy to create intelligent and beautiful documentation. From a single set of source files in the markup language reStructuredText (similar to LaTeX), the Sphinx software can automatically create both an HTML output and a PDF version of the documentation. Because all sources are text-file based, it is very easy to manage the documentation center in a Subversion (SVN) repository. The HTML version of the Documentation Center can be accessed at http://doc.fed4fire.eu.
As can be seen in Figure 3-7 and Figure 3-8, this is currently still a work in progress, but it has already been populated adequately to support our first wave of open call experimenters. In that context the adopted format of a Sphinx-based HTML documentation center was already well received. It should be remarked that a specific approach is taken towards the assignment of responsibilities to populate the content:

• The central component of the Documentation Center is the part that has been described in this section. It consists of a Sphinx-based HTML site that can be accessed at


http://doc.fed4fire.eu. It is managed by someone from the Federator, and is mainly intended to contain rather generic and high-level information about the different aspects of the federation. That information is also quite static. For instance, for the testbed catalogue it contains a few high-level paragraphs per testbed explaining what kind of testbed it is. The intention is that this information only has to be added once, when the testbed joins the federation.

• A specific Fed4FIRE information website hosted by every testbed owner or Fed4FIRE tool builder: this website is deployed under the full control of the testbed or tool owner. It can therefore be continuously updated as needed. It can also go into any desired level of detail. Every subsection of the central component of the Documentation Center will always provide a pointer to its counterpart at the testbed owner or tool builder website. But it is important to emphasize that these pages have to be Fed4FIRE-specific: they should explain everything a Fed4FIRE user needs, taking the context of Fed4FIRE into account (referring to typical Fed4FIRE tools or testbeds in examples, etc.). For every testbed, such a website should be provided. It should at least contain detailed information on:

o URLs of aggregate managers
o RSpecs
o Specific tips and tricks
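For reference, the Sphinx setup described above needs little more than a minimal `conf.py`; the values below are an illustrative sketch, not the actual doc.fed4fire.eu configuration:

```python
# conf.py -- minimal Sphinx configuration sketch (illustrative values only,
# not the actual doc.fed4fire.eu setup).
project = "Fed4FIRE Documentation Center"
author = "Fed4FIRE consortium"
master_doc = "index"   # top-level reStructuredText source file
extensions = []        # optional Sphinx extensions would be listed here
html_theme = "default"
# The LaTeX builder produces the PDF version from the same sources:
latex_documents = [
    (master_doc, "fed4fire.tex", project, author, "manual"),
]
```

HTML and PDF outputs are then built from the same reStructuredText sources, e.g. with `sphinx-build -b html . _build/html` and `sphinx-build -b latex . _build/latex` followed by a LaTeX run.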

Figure 3-7 Homepage of the documentation center


Figure 3-8: Extract of the testbed catalogue on the central part of the Documentation Center (doc.fed4fire.eu)

Figure 3-9 Example of a testbed-specific component of the Documentation Center (in this case the website operated by the BonFIRE project regarding using BonFIRE in Fed4FIRE)

 

3.2 Specification of resource reservation (Task 5.3)

3.2.1 Resource Reservation Overview

The Reservation Broker is the overarching service that experimenters can use to reserve heterogeneous resources spanning multiple administrative domains (testbeds), based on a multiplicity of selection criteria (time, type, etc.). Following the first specification of the Reservation Broker [D5.1], reservations are distinguished along two dimensions: (i) time (instant, advance and elastic reservations) and (ii) guarantee of resources (hard and best-effort reservations).

The concept of brokerage is very important in Fed4FIRE, as it benefits both experimenters and research infrastructures. The Reservation Brokering service is set out to provide experimenters with the ability to create slices of heterogeneous resources, via the Fed4FIRE portal,

Page 33: D5.2(;(Detailed(specifications( … · 2017. 7. 7. · ProjectAcronym! Fed4FIRE(ProjectTitle! Federation(for(FIRE(Instrument! Largescaleintegratingproject(IP)(Call!identifier! FP7;ICT;2011;8(Projectnumber!

FP7-­‐ICT-­‐318389/FRAUNHOFER/R/PU/D5.2    

 33  of  120  

©  Copyright  Fraunhofer  FOKUS  and  other  members  of  the  Fed4FIRE  consortium,    2014      

that belong to a wide and varied choice of testbeds within the Fed4FIRE federation. Users will benefit significantly from the brokerage service, as it simplifies the overall process of identifying and reserving suitable resources for their experiments, especially in cases where they need to run large-scale experiments for which combined resources from multiple testbeds may be required. On the other hand, via the portal an extended user community, not just the users of a particular testbed or users with a strong technical background, will have the opportunity to submit requests for resources, expressing them even in a more abstract way, along with their specific preferences. Thus, this federation service offers the testbeds a potential increase in the number of end users, an important step towards providing sustainable research infrastructures.

In order to attract non-technical users, Fed4FIRE will also enable expressing requests for federated slices in a more abstract form (e.g., not specifying the actual substrate resources to be allocated). Hence requests may contain a complete, partial, or empty mapping between the resources an experimenter desires and the physical resources available in the Fed4FIRE federation (e.g., "I want two communicating 802.11b/g nodes, on June 15th 2014, 15:00-18:00"). Adopting the ProtoGENI [24] terminology, a request is called bound if it provides complete mapping information regarding the requested resources, and unbound otherwise.
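The bound/unbound distinction can be made concrete with a small sketch (all class and field names here are illustrative assumptions, not part of the SFA or ProtoGENI APIs): a request is bound exactly when every requested resource already names a physical component.

```ruby
# Illustrative sketch (hypothetical field names): a slice request carries a
# list of requested resources, each optionally mapped to a physical
# component. Following the ProtoGENI terminology above, the request is
# "bound" when every resource is mapped, and "unbound" otherwise.
Request = Struct.new(:resources) do
  # Each resource is a Hash; a nil :component_id means "not yet mapped".
  def bound?
    resources.all? { |r| r[:component_id] }
  end
end

req = Request.new([
  { type: "node", wifi: "802.11b/g", component_id: nil },
  { type: "node", wifi: "802.11b/g", component_id: nil }
])
req.bound?   # false: an unbound request, to be mapped by the broker
```

A partially mapped request (some `:component_id` set, some nil) would likewise count as unbound and be completed by the broker's mapping step.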
In other words, the Reservation Brokering service aspires to act as a bridge between the experimenter and the Fed4FIRE testbeds, translating (mapping) user requirements to actual substrate resources or services provided by the testbeds.

The Reservation Broker will create the actual federated slice request, based on the different levels of detail provided by the users, and forward it to the testbeds involved for the actual reservation. Information regarding, e.g., the availability and utilization of brokered substrate resources and testbed services must therefore be available to the brokering service. However, each selected testbed remains solely responsible for the actual provisioning of resources. Based on this description, there are two basic examples of how testbeds could leverage the Reservation Broker:

1. A testbed may choose to use Fed4FIRE to offer its spare capacity or a dedicated subset of its resources. In this case, the testbed can notify the broker of the resources it has available (spare) so that they can be allocated to users of the Fed4FIRE federation.

2. A testbed may choose to expose the entire set of its resources to the Fed4FIRE community.

Deliverable 5.1 provides a taxonomy of the reservation types and their correspondence to testbeds within the Fed4FIRE federation, as well as the high-level required functionality of the Reservation Brokering service. Based on a comparison and evaluation of existing tools available to the Fed4FIRE community (NITOS/NICTA Broker, GRID5000 Scheduler, NETMODE Scheduler), the adoption of the NITOS/NICTA Broker was decided, as the most complete system in terms of the required Fed4FIRE functionalities. Finally, a detailed description of the NITOS/NICTA Broker's architecture and implementation was provided.

In Deliverable 5.2 we first look into approaches proposed in the literature for handling instant or advance requests for resources, in infrastructures that provide shared or exclusive access to physical resources. We then provide a short discussion of the Reservation Broker functionality to be supported by the end of Cycle 2. Finally, adapting the initial NITOS/NICTA Broker implementation, we provide the functional specification of the Reservation Broker, including the broker's architecture, its interactions with other components in the F4F environment, considerations and requirements regarding the description of resources for this particular service, and the implementation status. An analysis of related work on resource reservation is provided in Appendix C (section 0).

Similarly to the other sections of this deliverable, resource reservation in Fed4FIRE can be described as a specific service. A summary of that representation is given in Table 3-4.


3.2.2 Resource reservation service

Table 3-4 Resource reservation as a service

Basic Information
Service name: Resource Reservation
General description: The Resource Reservation Service is the overarching service that experimenters can utilize to reserve heterogeneous resources spanning multiple testbeds, based on a multiplicity of selection criteria (time, type, etc.).
User of the service: Users of the service are experimenters who would like to create slices of heterogeneous resources that belong to a wide and varied choice of testbeds within the Fed4FIRE federation environment. The Reservation Broker will also enable experimenters to express requests for federated slices in a more abstract form (e.g., not specifying the actual substrate resources to be allocated). Hence requests may contain a complete, partial, or empty mapping between the resources an experimenter might desire and the physical resources available in the Fed4FIRE federation (e.g., "I want two communicating 802.11b/g nodes, on June 15th 2014, 15:00-18:00").

Service management
Service Owner: NTUA, UTH
Contact information (internal): [email protected], [email protected]
Contact information (external): https://portal.fed4fire.eu/
Service status: Initial Design (Cycle 1), Detailed specifications and Implementation (Cycle 2), Enhancements (Cycle 3)
Service Area/Category: Resource Reservation
Service agreements: SLA is n/a

Detailed makeup
Core service building blocks: NITOS Broker
Additional building blocks: MySlice/Manifold, OML Data Broker
Service packages: This service will offer two modes of resource reservation: instant, where users will be able to reserve resources from the time the request arrives at the reservation system, and in advance, where the reservation of resources is scheduled for the future.
Dependencies: Resource Discovery, Resource Description, Infrastructure Monitoring

Technical Risks and Competitors
Risks: The Reservation Broker depends on resource reservation data gathered from testbeds during the Resource Discovery phase and on infrastructure monitoring information regarding, e.g., availability and utilization. The optimization of the mapping of user requests to substrate resources therefore depends highly on this information. If it is not provided by all testbeds, the broker will not be able to provide Fed4FIRE users with the optimal resource offerings.
Competitors: Other Resource Reservation approaches. See Appendix C (Section 0).


3.2.3 Reservation in Cycle 2

In the Fed4FIRE environment, experimenters will be able to reserve resources via:

1. The Fed4FIRE portal, where the user can perform either bound or unbound requests for resources using the MySlice Reservation plugin. In the case of an unbound request, the MySlice Reservation plugin interacts with the Reservation Broker, which matches and optimizes the timeframe and resource requirements, as set by the experimenter, over one or multiple testbeds in the Fed4FIRE federation.

2. Appropriate experimenter tools. Experimenters may use existing tools to either directly reserve resources that belong to a specific testbed (e.g. using a testbed-specific experimenter's tool) or use appropriate tools that allow directly reserving resources from federated testbeds, e.g. SFI.

With regard to reservation in Cycle 2, the creation of combined slices over a set of wireless testbeds, as well as PlanetLab and the Virtual Wall, will be supported. The type of access (shared or exclusive) and the time of the actual reservation (instant or advance) are dictated by the testbeds involved [1].

In the case of a bound slice request, the user interacts only with the MySlice Reservation plugin, or any other experimenter tool supporting reservations, in order to choose the desired physical resources and the corresponding timing information (instant or advance reservations). In other words, the MySlice Reservation plugin acts as an intermediary between users and testbeds.

A basic level of abstraction in expressing the slice request will be supported in Cycle 2, depending on the expressiveness of the currently supported data format (e.g., "I want two communicating 802.11b/g nodes, on June 15th 2014, 15:00-18:00").

In the case of an unbound slice request, the Reservation Broker will act as a slice embedding service [24]. Specifically, it will select for every unmapped resource the most appropriate underlying infrastructure to provide the resource (e.g., the NETMODE testbed for the two communicating 802.11b/g nodes of the example). The selection will be based on, e.g., availability and fairness in the use of testbeds. Subsequently, within each testbed involved, the broker will identify exclusively each substrate resource that matches the user's requirements (intra-domain Virtual Network Embedding). The annotated federated slice request will then be submitted to the testbeds involved, as in the case of a bound request.

3.2.4 Reservation  Broker  Functional  Specification  

3.2.4.1 Reservation Broker Architecture

In this section, the architectural design and supported functionalities of the Reservation Broker are presented in detail. The Reservation Broker consists of several modules, each responsible for different functionalities. Figure 3-10 depicts the main components of the Reservation Broker and their interactions. In the following, we outline the modules that constitute the Reservation Broker.

• Communication Interfaces: the Reservation Broker can communicate with other components of the F4F ecosystem via:
o an XML-RPC / SFA interface that supports SFA clients,
o an XMPP / FRCP interface that supports FRCP messages,
o a RESTful interface that supports 3rd-party clients.
• Authentication / Authorization: the policy enforcement point of the Reservation Broker.
• Database (DB): contains all the required information regarding the available resources.
• Scheduler: handles requests for resource reservation.
• AM Liaison: responsible for contacting FRCP resources.

Page 36: D5.2(;(Detailed(specifications( … · 2017. 7. 7. · ProjectAcronym! Fed4FIRE(ProjectTitle! Federation(for(FIRE(Instrument! Largescaleintegratingproject(IP)(Call!identifier! FP7;ICT;2011;8(Projectnumber!

FP7-­‐ICT-­‐318389/FRAUNHOFER/R/PU/D5.2    

 36  of  120  

©  Copyright  Fraunhofer  FOKUS  and  other  members  of  the  Fed4FIRE  consortium,    2014      

Figure 3-10 Reservation Broker Architecture

A more detailed description of the above modules is already provided in the predecessor of this document (Deliverable 5.1 [1]). What needs to be specified here is that in Cycle 2, the Reservation Broker's Scheduler will be enriched with a new sub-module capable of handling unbound requests, mapping the requested resources to the physical ones available within the Fed4FIRE federation. This new sub-module will be easily reprogrammable, so that new mapping techniques and algorithms can be tested within the Fed4FIRE ecosystem.

Scheduler's mapping sub-module

This mapping sub-module will be responsible for (i) efficiently splitting the request among underlying infrastructures, based on, e.g., availability and fairness in the use of testbeds, and (ii) mapping the corresponding partial unbound requests to the appropriate substrate resources from the selected testbeds in the Fed4FIRE federation. Resource mapping within the context of Fed4FIRE depends highly on (i) the specific characteristics of each testbed (e.g., type of resources, QoS provisioning schemes, etc.) and (ii) the requested resources and the constraints imposed by the user. For example, in the case of an unbound request for two standalone virtual machines, a greedy node mapping algorithm that also takes care of load balancing can efficiently map requested to physical resources, whereas an unbound request for a virtual topology requires a more sophisticated approach.
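As a hedged illustration of the intended modularity (class, method and strategy names here are hypothetical, not the actual NITOS/NICTA Broker code), a Ruby sketch of a pluggable mapping sub-module might register algorithms under symbolic names, so that a new technique can be dropped in without touching the scheduler core:

```ruby
# Sketch of a pluggable mapping sub-module (hypothetical names). Mapping
# algorithms register themselves as named blocks taking (request, substrate).
class MappingEngine
  @strategies = {}

  class << self
    attr_reader :strategies

    # Register a mapping algorithm under a symbolic name.
    def register(name, &block)
      @strategies[name] = block
    end
  end

  def map(request, substrate, strategy: :greedy)
    algo = self.class.strategies.fetch(strategy)
    algo.call(request, substrate)
  end
end

# A greedy, load-balancing node mapping as described above: for each
# requested resource, pick the least-utilized matching physical node.
MappingEngine.register(:greedy) do |request, substrate|
  request.map do |res|
    candidate = substrate
      .select { |n| n[:type] == res[:type] && !n[:taken] }
      .min_by { |n| n[:utilization] }
    candidate[:taken] = true if candidate
    [res, candidate]   # pair each requested resource with its mapping (or nil)
  end
end
```

A more sophisticated topology-aware algorithm (e.g. a Virtual Network Embedding heuristic) could register itself as, say, `:vne` alongside `:greedy` and be selected per request.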
In order for the Fed4FIRE community to be able to experiment with a variety of these techniques, and to conclude which one best suits the F4F needs, the mapping sub-module is intended to be fully modular, so that anyone (e.g., testbed providers) can easily implement their own mapping techniques. Ruby's strong meta-programming capabilities are ideal for developing such a sub-module.

The scheduler's mapping sub-module will be at the core of the Reservation Broker, so we need a way to expose its functionality. This introduces the need for an appropriate API. It will constitute the main means of communication between the mapping sub-module and other federation services within Fed4FIRE, but it will also allow third-party clients to be supported at later stages. Since neither the SFA nor the FRCP protocol was designed with the proposed capabilities in mind, and neither provides suitable methods for them, it seems natural to extend the RESTful interface accordingly.
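A minimal sketch of such a RESTful extension (the route, payload shape and class names are assumptions for illustration, not a specified Fed4FIRE API): a mapping endpoint accepts an unbound request and returns it annotated with a mapping.

```ruby
# Sketch of extending the broker's RESTful interface with a mapping endpoint.
# Routes are held in a simple dispatch table; a real deployment would sit
# behind an HTTP server, which is omitted here.
require "json"

class RestInterface
  ROUTES = {}

  def self.route(method, path, &handler)
    ROUTES[[method, path]] = handler
  end

  def dispatch(method, path, body)
    handler = ROUTES[[method, path]]
    return [404, { "error" => "no such route" }] unless handler
    [200, handler.call(JSON.parse(body))]
  end
end

# POST /map: accept an unbound request, return it annotated with a mapping.
RestInterface.route("POST", "/map") do |payload|
  payload["resources"].each_with_index do |res, i|
    # Placeholder decision: a real broker would invoke the scheduler's
    # mapping sub-module here instead of fabricating component ids.
    res["component_id"] ||= "urn:placeholder:node#{i}"
  end
  payload
end
```

The same dispatch table could later grow endpoints for, e.g., querying supported mapping strategies, keeping the SFA and FRCP interfaces untouched.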

3.2.4.2 Reservation in the Fed4FIRE Ecosystem

At the end of Cycle 2, the Fed4FIRE user will be able to (i) select the preferred resources spanning the federation, as well as express the request for resources in an abstract form (i.e., not specifying the actual substrate resources to be allocated), and (ii) select the preferred reservation timeframe for the particular resources (instant and advance reservations). Within the Fed4FIRE ecosystem, the Reservation Broker interacts with Fed4FIRE users via the portal, as depicted in Figure 3-11.

 

Figure 3-11 Reservation Broker in Fed4FIRE Environment

The Reservation Broker interacts directly with the Fed4FIRE portal in order to implement the basic functionalities, that is, resource discovery and mapping/scheduling across the Fed4FIRE federation. Adding one more level of detail, Figure 3-12 depicts the integration of the Reservation Broker with MySlice. The depicted interactions are illustrated in the following with UML sequence diagrams, complemented by appropriate descriptions.


 

Figure 3-12 Reservation Broker / MySlice / Manifold integration

Note that in order for the broker to optimize the mapping of resources to the user request (unbound request case), it needs to gather measurable characteristics of resources that are useful for the resource selection process. Specifically, infrastructure monitoring information regarding, e.g., the availability and utilization of brokered substrate resources should be collected from the central F4F monitoring data broker, via the Manifold/OML gateway.

Resource Discovery

The Slice-based Federation Architecture (SFA) API provides a common method for advertising resources over multiple administrative domains. The central data structure used by SFA is the Resource Specification (RSpec), used as an interchange format for platforms to advertise substrate resources or to describe allocated resources via an appropriate manifest. The Reservation Broker, utilizing the Manifold-SFA gateway and the Manifold-OML gateway via the XML-RPC interface, will be able to periodically retrieve advertised resources spanning the federation, as well as monitoring information regarding these resources, and store them in its local database. The latter can be repeated prior to the mapping process, so that the mapping sub-module is up to date with the latest monitoring information.

In the following sequence diagram the resource discovery process is described in detail.


 

Figure 3-13 Resource Discovery

Step 1: The Manifold-SFA gateway periodically sends a request to each testbed in the federation to retrieve advertised resources (ListResources() SFA call).
Step 2: Each testbed provides the response in the form of an RSpec advertisement.
Step 3: The broker periodically collects this information from the SFA gateway in the Manifold data format and stores it in the broker database for further processing.
Step 4: The broker collects monitoring information regarding the advertised resources via the Manifold-OML gateway.

Resource Mapping/Scheduling

The broker will act as a slice embedding service for requests that do not contain a complete mapping between requested and physical resources across the federation. Specifically, after receiving such a request via its RESTful interface, the broker will utilize the mapping sub-module to convert the received unbound request into an annotated (bound) one. This will contain adequate information for the requested reservation to be completed.

In the proposed architecture, MySlice will still be responsible for initiating the reservation process. However, the current Reservation plugin in MySlice needs to be extended with the capability of expressing unbound requests such as "I want two VMs now" or "I want three nodes from any federated testbed for 4 hours with an 802.11n Wi-Fi interface". This description will subsequently be sent to the Reservation Broker for further processing, e.g. mapping the request to available resources in the broker database, which is populated with the results of the resource discovery process.
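The matching of such an abstract request ("three nodes with an 802.11n interface, for 4 hours") against rows in the broker database can be sketched as follows; all field names and the interval-overlap test are illustrative assumptions about how discovery results might be stored, not the Broker's actual schema.

```ruby
# Sketch: find `count` nodes with the requested Wi-Fi interface that are
# free over the requested window. Each row carries a list of existing
# reservations as [start_time, end_time] pairs.
def find_candidates(db_rows, wifi:, count:, from:, hours:)
  to = from + hours * 3600
  free = db_rows.select do |row|
    row[:wifi] == wifi &&
      row[:reservations].none? { |s, e| from < e && s < to }  # interval overlap
  end
  free.first(count) if free.size >= count   # nil when the request cannot be met
end
```

Returning nil on failure lets the caller fall back to another testbed during request partitioning instead of submitting a doomed slice request.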
In the case of a bound request, the MySlice Reservation plugin uses the Manifold/SFA gateway to retrieve all the available resources and then presents them to the user through the MySlice graphical user interface. After the user has selected the desired resources, the MySlice Reservation plugin communicates with the Manifold/SFA gateway via the UpdateSlice command, which in turn creates the appropriate RSpec and completes the bound slice request through the SFA interface of the testbed in question.

The complete sequence of the resource reservation processes is depicted in the following sequence diagrams, considering unbound and bound requests.

Bound Request

In the following sequence diagram (Figure 3-14) the process of a bound request for resources is described in detail.

 

Figure 3-14 Resource reservation: bound request for resources through the portal

Step 1: The user makes a bound request for resources at the MySlice Reservation plugin.
Step 2: The Reservation plugin forwards the request to the SFA gateway via the Manifold API.
Step 3: The SFA gateway sends a request for a slice update to the corresponding testbeds using the RSpec format.
Step 4: Each testbed provides its response to the SFA gateway.
Step 5: The SFA gateway provides the response to the Reservation plugin (MySlice).
Step 6: The user is notified about the result of the request by the Reservation plugin.

Unbound Request

In the following sequence diagram (Figure 3-15) the process of an unbound request for resources is described in detail.


 

Figure 3-15 Resource Reservation: unbound request for resources

Step 1: The user makes an unbound request for resources via the MySlice Reservation plugin.
Step 2: The Reservation plugin forwards the request to the broker in the Manifold data format.
Step 3: The broker partitions the request into (possibly) partial requests among the selected testbeds. Resource allocation costs are defined based on resource discovery information available in the broker's database.
Step 4: The broker maps the corresponding partial unbound requests to the appropriate substrate resources from the selected testbeds. Information regarding available resources that match the functional and non-functional requirements posed by the user is based on resource discovery information available in the broker's database.
Step 5: The broker sends the response with the mapping information, in the Manifold data format, to the MySlice Reservation plugin.
Step 6: The Reservation plugin forwards the request to the SFA gateway via the Manifold API.
Step 7: The SFA gateway sends a request for a slice update to the corresponding testbeds using the RSpec format.
Step 8: Each testbed provides its response to the SFA gateway.
Step 9: The SFA gateway provides the response to the Reservation plugin (MySlice).
Step 10: The Reservation plugin provides the response to the broker.
Step 11: The broker stores the mapping information regarding the slice.
Step 12: The user is notified about the result of the request by the Reservation plugin.
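The partitioning of an unbound request among testbeds (Step 3 above) could look roughly like the sketch below; the cost function, all field names and the testbed records are illustrative assumptions, combining the availability and fairness criteria mentioned earlier into a single score.

```ruby
# Sketch: split an unbound request among testbeds by picking, per requested
# resource, the cheapest testbed that offers that resource type. The "cost"
# is a stand-in combining current utilization (availability) with the
# testbed's recent share of allocations (fairness).
def partition_request(resources, testbeds)
  resources.group_by do |res|
    best = testbeds
      .select { |tb| tb[:types].include?(res[:type]) }
      .min_by { |tb| tb[:utilization] + tb[:recent_share] }  # cost proxy
    best && best[:name]   # nil key collects resources no testbed can serve
  end
end
```

Each group then becomes a partial (still unbound) request handed to the intra-domain mapping step for the chosen testbed.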

3.2.4.3 Reservation Broker Data Models

Reservation within the federated environment is fully dependent on the Information Model (IM). A set of requirements is identified in the following:

• The IM must facilitate requests for resources related to the ones supported by the Reservation service (bound and unbound). Therefore the IM should support requests for:
o a set of (shared or exclusive) resources and,
o a set of interconnected resources (topologies)


Additional information that the IM should express, necessary for the reservation process, includes:
o mapping information (e.g. virtual links to substrate paths, or a candidate mapping of virtual to a list of physical resources) and,
o information on the duration of the reservation and the timeframe of actual resource provisioning (instant or advance), in a strict or more elastic fashion.
• The modeling language should provide appropriate abstractions for the manipulation and analysis of IT infrastructure that can also be seen as large graphs of connected resources, in order to facilitate the request partitioning or mapping algorithms.
• It must be possible to translate resource descriptions of the underlying heterogeneous platforms to the common IM.
• The IM should facilitate unified handling of monitoring data for (physical or virtual) resources, including a platform-independent and unit-aware representation of monitored properties and information disclosing the state of a resource (e.g., utilization, availability).
• The IM must be able to describe (distributed) requests or slices in the federated environment.
• The IM must be able to provide sufficient service representation, expressing different service levels (e.g., gold, silver and premium, or guaranteed/best-effort resource provisioning).
• The IM must be able to support policies that are used to define the behavior of the federated environment. It would be desirable to provide support both for authorization and for access control policies.

Adopting a semantic web approach such as OWL is a good match: using triples, the main data format of OWL, a semantic graph structure is formed that describes information about the elements [Ghijsen] [VDHAM]. However, given that the RSpec-to-Fed4FIRE IM translation services will be provided with an adoption roadmap that extends beyond the end of Cycle 2, while the brokering service requires a common abstraction language, the description of resources will be based on the Manifold data format, as this is what is currently used in MySlice / Manifold (see the previous sub-sections).

3.2.4.4 Implementation Status
During Cycle 1 of the F4F development, significant progress was made on the implementation of the Reservation Broker. With respect to the communication component of the Reservation Broker, all promised interfaces have been added: an XMPP interface that supports all FRCP messages has been deployed, an XML-RPC interface supporting SFA messages is functional, and a RESTful interface has been deployed that can support any 3rd-party RESTful client.
The Scheduler component is fully capable of orchestrating the requests for resource reservations. Currently it serves requests using an algorithm that resembles the "First Come First Serve (FCFS)" technique; nevertheless, its structure is modular and the FCFS algorithm can easily be replaced by more complex ones.
The Database schema was enriched in terms of extensibility of the model, so that it can be extended according to the needs of the testbed operator and its resources. This highly dynamic Database model is the core of the Reservation Broker and will keep evolving during the development stage of F4F.
Finally, the Authentication / Authorization mechanism grew along with OMF6's corresponding mechanisms. This component supports X.509 certificates to both authenticate and authorize incoming messages of the FRCP and SFA communication protocols.
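The FCFS behaviour described above can be sketched as follows; the class and method names are invented for illustration and do not reflect the actual Reservation Broker code.

```python
# Sketch of first-come-first-served reservation logic: a slot is granted
# only if it does not overlap any slot granted earlier for that resource.

class FCFSScheduler:
    def __init__(self):
        self.reservations = {}  # resource_id -> list of granted (start, end)

    @staticmethod
    def _overlaps(a, b):
        # Two half-open intervals [a0, a1) and [b0, b1) overlap iff each
        # starts before the other ends.
        return a[0] < b[1] and b[0] < a[1]

    def request(self, resource_id, start, end):
        """Grant the slot iff it is free; earlier requests always win."""
        booked = self.reservations.setdefault(resource_id, [])
        if any(self._overlaps((start, end), slot) for slot in booked):
            return False
        booked.append((start, end))
        return True

sched = FCFSScheduler()
ok1 = sched.request("node1", 10, 20)   # granted
ok2 = sched.request("node1", 15, 25)   # overlaps the first slot: rejected
ok3 = sched.request("node1", 20, 30)   # starts exactly when the first ends
```

A more elaborate policy (priorities, preemption, backfilling) would replace only the acceptance test inside request(), which mirrors the modularity claim made for the Scheduler component.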
To summarize, in the first Cycle of development the main goal was for the Reservation Broker to become a functional part of the federated testbeds by achieving two objectives: a local reservation mechanism fully interoperable with OMF, and a structure to expose this mechanism through SFA. In the second Cycle of development the efforts will be focused on the necessary modifications, in order


FP7-­‐ICT-­‐318389/FRAUNHOFER/R/PU/D5.2    

 43  of  120  

©  Copyright  Fraunhofer  FOKUS  and  other  members  of  the  Fed4FIRE  consortium,    2014      

to  create  a  central   instance  of   the  Reservation  Broker   that  will   fit   the  description  presented   in   the  current  document.    

3.3 Specification of resource provisioning (Task 5.4)
Provisioning of [reserved] resources is inherently coupled with SLA management; however, it also has some dependencies on the OLA side of the F4F federation. We start with an attempt to define resource provisioning as a service, which is then followed by the SLA management specification. The remainder of this section will first introduce the SLA front-end tool, the SLA Graphical User Interface, developed as a plugin for the Django-based MySlice Portal. Then, it will go into more detail regarding its different parts: presentation of the SLAs, acceptance of the SLAs, and presentation of the SLA evaluations. This section also explains the implementation plan and the different windows of the SLA plugin in the Portal. Finally, it ends with a table summarizing the different requirements and the partners in charge of them.

3.3.1 Resource  provisioning  service  Table  3-­‐5  Resource  provisioning  as  a  service  

Basic Information
Service name: Resource provisioning
General description: Resource provisioning allows an accredited user to instantiate resources from one or several testbeds to deploy his experiment. It can be direct (the user selects specific resources) or orchestrated (the user defines requirements and the service provides the best-fitting resources).

User of the service: The final user of the Resource Provisioning service is the Experimenter, but the direct F4F components that invoke it are the MySlice Portal, the future reservation broker, the policy decision point, the service directory and the reputation engine.

Service management
Service Owner: Task Leader: i2CAT
Contact information (internal): [email protected]
Contact information (external): https://portal.fed4fire.eu/
Service status: Cycle 1: conception. Cycle 2: service development for some testbeds. Cycle 3: integration.

Service Area/Category:
• Resource provisioning
• Resource description
• Resource specification
• Infrastructure monitoring

Service agreements: Best-match search for the orchestrated provisioning. Orchestrated provisioning requires finding the resources that match experimenters' requests and provisioning them.

Detailed makeup
Core service building blocks: For direct provisioning, the testbed directory; for orchestrated provisioning, resource discovery.
Additional building blocks: SFA Registry, SFA AMs
Service packages: This service will offer two modes of resource provisioning (direct and orchestrated); each one can be a different package, although with some common functions.

Dependencies: Service dependencies: Resource Discovery, Resource Specification and Infrastructure Monitoring, to know the availability of resources to be provisioned. Software dependencies: SFA implementation (SFAWrap or AMsoil), MySlice Portal, plugin for MySlice, monitoring of available resources.

Technical Risks and Competitors
Risks: Resource provisioning relies on the correct authentication of users in order to provide resources. If this authentication is bypassed, resources could be provisioned to an unauthorized user. Availability of resources must be monitored correctly; otherwise users can be granted access to resources already provisioned to other experiments. For the provider, the risk is a loss of revenue since unauthorized users may consume resources; for the user, the risk is that his experiment is potentially disclosed to unauthorized users.

Competitors: There are specific solutions for resource provisioning in other testbed-based projects. Most of those solutions are part of testbeds that will be part of the Fed4FIRE federation, so the potential risk of competitors is reduced as they are assimilated by this solution.

3.3.2 SLA Management
The SLA functionality will be accessible in Fed4FIRE through the Django-based MySlice Portal. The SLA plugin of the Portal interacts with the SLA management component installed in each testbed and shows the SLA information to the experimenter. The detailed specification for this is given in deliverable D7.2 [6]. An SLA plugin will be developed on the Portal in order to add the following features:

• Presentation of the SLAs.
• Enabling acceptance of the SLAs.
• Presentation of the SLA evaluations.

In cycle 2, Fed4FIRE will implement an SLA mechanism adopting commitments on resources within iMinds' testbeds (w-iLab.t and Virtual Wall), thus based on infrastructure monitoring. In case resource reservation is in place, the SLA will apply to the particular reserved resources. This can be seen as a pilot implementation of more elaborate SLAs that can be extended to other testbeds in the future [2].

3.3.2.1 Showing SLA information of testbeds supporting SLA
In cycle 2, a single description of the SLA will be shown per testbed in the Fed4FIRE documentation center (http://doc.fed4fire.eu/testbeds.html). The SLA offered by each testbed will be unique and the same for all the different slivers. There will be a distinct SLA between the experimenter and the different providers5.
In cycle 2, a single description of the SLA will also be shown per testbed in the Federated Testbed Directory of the Portal, even if there will be several slivers6 in a single testbed. This is because the SLA offered by each testbed will be unique and the same for all the different slivers.

                                                                                                                         5  That  is  why  we  have  noted  earlier  that  OLA  is  not  supported  in  cycle  2  


A Boolean type indicating whether the testbed supports SLA and, in that case, also a description with the details of the SLA will be included in the getVersion struct retrieved daily from the testbeds' AM with the GetVersion call.
The human-readable Testbed Directory plugin will retrieve all that testbed information, including the SLA information detailed above, and display it on a single webpage in the Portal. In the future, this information could also be used in other front-end tools.
The following Figure 3-16 depicts how the SLA information will be shown in the Fed4FIRE documentation centre.

 Figure  3-­‐16  Showing  SLA  information  of  testbeds  which  support  SLA  
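As a sketch of the mechanism just described, the SLA flag and description could appear in a GetVersion reply roughly as below. The field names prefixed fed4fire_sla_ are our assumption for illustration, not the agreed extension.

```python
# Hypothetical SLA extension of a GetVersion reply, represented as the
# dict a front-end tool would receive after parsing the XML-RPC response.

get_version_reply = {
    "geni_api": 3,
    "testbed": "w-iLab.t",
    # assumed extension fields: a Boolean flag plus a human-readable text
    "fed4fire_sla_supported": True,
    "fed4fire_sla_description": (
        "Guarantees X% uptime for Y% of the provisioned resources "
        "for the duration of the experiment."
    ),
}

def supports_sla(reply):
    """A front-end tool could use the flag to filter SLA-enabled testbeds,
    treating a missing field as 'no SLA support'."""
    return bool(reply.get("fed4fire_sla_supported", False))
```

Defaulting to False for testbeds that omit the field keeps the extension backward-compatible with AMs that have not adopted it.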

3.3.2.2 Implementation  Plan  for  the  SLA  Plugin  The  SLA  plugin  is  composed  of  the  following  files:  

! __init__.py
This file is the entry point for the SLA plugin. It holds the Python code necessary for the plugin to initialize the static parts and define the template to be used, as well as the JavaScript and CSS dependencies.

! templates/{name_of_sla_templates}.html  The   SLA   plugin   is   composed   of   two   templates   which   are   located   under   the   directory  templates  at  the  root  of  the  installation.    The  templates  are:    

o sla_acceptance.html  o sla_evaluations.html  

A description of the templates is given below. These files contain the HTML templates of the static part of the plugin, in the Django template language. They define the structure of the different SLA windows shown in the Portal.

                                                                                                                                                                                                                                                                                                                                                                                           6  A  sliver  is  the  part  of  that  slice  which  represents  a  single  offering  of  resources  at  one  testbed  


! static/js/{sla_java_script_file}.js
This file contains the major part of the plugin's code, which is responsible for managing the dynamic aspects of user interaction and visualization.

! static/css/sla_plugin.css  This  file  holds  the  Cascading  Style  Sheets  (CSS)  used  for  describing  the  different  forms  of  the  SLA  plugin.  

The SLA Plugin is built using the jQuery framework. It inherits from the base Plugin class, which implements the aspects common to all plugins and provides a set of helper functions. The SLA plugin can be divided into three main parts: the acceptance of the agreements, the viewing of the SLA agreements, and the presentation of the SLA evaluations.

3.3.2.3 Acceptance of the agreements
Once the experimenter has consulted which testbeds offer SLAs through the "Fed4FIRE Documentation Centre" (see Figure 3-16), he/she will be able to filter and select resources associated with these testbeds for the experiment. This bundle of resources will be distributed in different slivers over one or multiple testbeds, and all of these slivers will be part of a slice. Furthermore, these resources could be based on reservation or on immediate provisioning.
If the experimenter wishes to book the resources, he should indicate which resources are required for his experiment and the date ranges in which he wants to have them. To indicate those date ranges, he will select a start time and the number of timeslots for each one of the resources.

Figure 3-17: Selection of reservation-based resources from testbeds offering SLAs

In case the experimenter wishes to select resources based on immediate provisioning, he will only have to select the desired resources.
Once the experimenter has selected the resources, he can review the pending changes, and a new window with the different selected resources will appear.
If the experimenter agrees with these resources, he should press the "apply" button, and a new window with the different SLAs will appear for the experimenter to accept or reject.
For automated discovery of SLA-enabled testbeds, we can look into solutions such as getVersion extensions, e.g. including a Boolean type and a description with the details of the SLA [7].


The following Figure 3-18 depicts a mockup of how the SLA Agreement acceptance window will look.

 

Figure  3-­‐18  Acceptance  of  the  SLA  Agreements  

The concrete values shown in the previous picture (e.g. 90% and 80% uptime of the resources) are example values; they will be configurable for each testbed. Finally, if the experimenter agrees with the SLAs and accepts them, the evaluation of the SLAs will start at the moment those resources are provisioned.
The following two conditions are required to implement the acceptance process:

1. Include inside the "resourcesSelected" plugin, previously known as "query updater", a function which verifies whether some of the selected resources belong to a testbed which supports SLA. If so, a call is made to the SLA Plugin to show the SLAs adopted in those testbeds (see Figure 3-18).

2. Implement  the  following  files  inside  the  SLA  Plugin:  

! templates/sla_acceptance.html  

This template is used to show the specific details of the SLA Agreements of the testbeds related to the selected resources. The experimenter is also given the possibility to accept or reject them.

! static/js/sla_acceptance.js  

This  file  includes  all  the  functions  related  to  dynamic  aspects  of  user  interaction  and  visualization  of  the  SLA  acceptance  window.  In  particular,  the  functions  implemented  are:  

o Function  to  retrieve  SLA  Agreements  for  a  specific  testbed.    o Function  to  store  SLA  Agreements.  
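The check described in step 1 above can be sketched as follows. The sketch is in Python for brevity, although the actual plugin code is JavaScript; the data shapes and the set of SLA-enabled testbeds are assumptions for illustration.

```python
# Given the experimenter's selected resources, determine which belong to a
# testbed that supports SLAs, so the SLA plugin can be invoked for them.

SLA_TESTBEDS = {"wilabt", "virtualwall"}  # assumed SLA-enabled testbeds

def resources_needing_sla(selected):
    """Return the subset of resources whose testbed supports SLA."""
    return [r for r in selected if r["testbed"] in SLA_TESTBEDS]

selected = [
    {"urn": "urn:publicid:IDN+wilabt+node+nodeA", "testbed": "wilabt"},
    {"urn": "urn:publicid:IDN+nitos+node+nodeB", "testbed": "nitos"},
]
needing = resources_needing_sla(selected)
# Only the w-iLab.t resource triggers the SLA acceptance window.
```

In the real plugin, the testbed membership would be derived from the daily-retrieved GetVersion information rather than a hard-coded set.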


3.3.2.4 Viewing SLA agreements
Once the experimenter has chosen the resources and accepted the SLA agreements, he will be able to query them (the SLA agreements) from a section called "SLA" (see Figure 3-19). The values shown in the description of the SLA are configurable for each sliver.

 Figure  3-­‐19:  View  SLA  Agreements  

The  requirements  to  display  the  SLA  Agreements  are  listed  below:  

! templates/sla_evaluation.html  This  template  is  the  same  as  that  used  in  showing  the  SLA  Agreements.  

! static/js/sla_evaluation.js
This file includes the class to retrieve the SLA Agreements of the different testbeds.

The code of the class to retrieve the SLA Agreements is as follows:

class Agreements(object):

    def __init__(self, root_url, path=_AGREEMENTS_PATH):
        """Business methods for Agreement resource

        :param str root_url: url to the root of resources
        :param str path: path to resource from root_url

        The final url to the resource is root_url + "/" + path
        """
        resourceurl = _buildpath_(root_url, path)
        converter = xmlconverter.AgreementConverter()
        self.res = _Resource(resourceurl, converter)

    def getall(self):
        """Get all agreements

        :rtype : list[model.Agreement]
        """
        return self.res.getall()

    def getbyid(self, agreementid):
        """Get an agreement

        :rtype : model.Agreement
        """
        return self.res.getbyid(agreementid)

    def getbyconsumer(self, consumerid):
        """Get a consumer's agreements

        :rtype : list[model.Agreement]
        """
        return self.res.get(dict(consumerId=consumerid))

3.3.2.5 Viewing SLA Evaluation
The SLA evaluations are available and can be displayed on demand once each one of the slivers belonging to a slice is released. The SLA evaluation determines its fulfillment or not by each sliver, showing in the case of non-fulfillment the corresponding violations that occurred per sliver.
The SLA evaluation is valid for both cases of experimentation: experimentation based on immediate provisioning of resources, and reservation-based experimentation.
In cycle 2, the SLA type agreed to be implemented in iMinds' testbeds guarantees a certain X% uptime for Y% of resources during the experiment. The SLA is fulfilled in case the total uptime rate obtained is equal to or greater than X; otherwise, the SLA is not fulfilled. Appendix D provides an example of SLA evaluation. Detailed information is given in deliverable D7.2 [6].
For further implementations beyond cycle 2, suitable actions according to the result could be launched, such as showing warnings, reports, invocation of policy engines, etc. The following Figure 3-20 depicts the mockup of how the SLA evaluation will be shown:

 Figure  3-­‐20  :    Viewing  of  the  SLA  Evaluations  
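The evaluation rule sketched above ("X% uptime for Y% of resources") can be expressed as a small function. The thresholds, data shapes and function name are assumptions for illustration, not the actual evaluation code.

```python
# Sketch of the cycle-2 SLA rule: the SLA is considered fulfilled if at
# least y_resources % of a sliver's resources achieved an uptime of at
# least x_uptime %. Resources below the uptime target count as violations.

def evaluate_sla(uptimes, x_uptime=90.0, y_resources=80.0):
    """uptimes: per-resource uptime percentages for one sliver.
    Returns (fulfilled, violating_uptimes)."""
    if not uptimes:
        return True, []
    meeting = [u for u in uptimes if u >= x_uptime]
    fulfilled = 100.0 * len(meeting) / len(uptimes) >= y_resources
    violations = [u for u in uptimes if u < x_uptime]
    return fulfilled, violations

# 3 of 4 resources (75%) met the 90% uptime target, against a 75% quota.
ok, bad = evaluate_sla([99.0, 95.0, 91.0, 70.0], x_uptime=90.0, y_resources=75.0)
```

In the real system, the uptime figures would come from infrastructure monitoring and the violations would be reported per sliver, as described above.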

The  requirements  to  display  the  SLA  Evaluations  are  listed  below:  


! templates/sla_evaluation.html  This  template  is  the  same  as  the  one  used  for  showing  the  SLA  Agreements.  In  this  case,  the  result  of  the  SLA  evaluations  would  also  be  shown  to  the  experimenter.  

! slaplugin/model.py
This file includes the classes used to retrieve the SLA Violations.

An  example  of  this  class  to  retrieve  SLA  Violations  is  as  follows:    

class Violations(object):

    def __init__(self, root_url, path=_VIOLATIONS_PATH):
        """Business methods for Violation resource

        :param str root_url: url to the root of resources
        :param str path: path to resource from root_url

        The final url to the resource is root_url + "/" + path
        """
        resourceurl = _buildpath_(root_url, path)
        converter = xmlconverter.ViolationConverter()
        self.res = _Resource(resourceurl, converter)

    def getall(self):
        """Get all violations

        :rtype : list[wsag_model.Violation]
        """
        return self.res.getall()

    def getbyid(self, violationid):
        """Get a violation

        :rtype : model.Violation
        """
        return self.res.getbyid(violationid)

    def getbyagreement(self, agreement_id, term=None):
        """Get the violations of an agreement.

        :param str agreement_id:
        :param str term: optional GuaranteeTerm name. If not specified,
            violations from all terms will be returned
        :rtype: list[model.Violation]
        """
        return self.res.get({"agreementId": agreement_id,
                             "guaranteeTerm": term})

3.3.2.6 Summary of SLA specifications
A summary of the different requirements and the partners in charge of them is shown in Table 3-6 (SLA Specification summary and responsibilities).

Table  3-­‐6  SLA  Specification  summary  and  responsibilities  

Area | Requirement | Who | Comments
Showing SLA information of testbeds which support SLA | Implement this part in the plugin | Atos | UPMC will provide support to Atos. iMinds will provide the SLA information to put in the Fed4FIRE documentation centre.
SLA plugin: Acceptance of the agreements | Implement this part in the plugin | Atos | UPMC will provide support to Atos.
SLA plugin: Viewing SLA agreements | Implement this part in the plugin | Atos | UPMC will provide support to Atos.
SLA plugin: Viewing SLA evaluation | Implement this part in the plugin | Atos | UPMC will provide support to Atos.

3.4 Specification  of  experiment  control  (Task  5.5)  

3.4.1 Introduction
We start with a tentative definition of the experiment control service, based on a particular experiment controller, NEPI7 (the Network Experiment Programming Interface), within cycle 2.
The rest of this section is centred on two points: the first addresses the main issues to be sorted out during cycle 2 for experiment control within the federation, mainly on the testbed side, while the second describes new developments for two experiment control tools, NEPI and the OMF EC, which represent the experimenter side, in order to include the advances made so far by the architectural components. The advances on testbeds and tools allow experimenters to run federated experiments easily.

3.4.2 Experiment  control  service  Table  3-­‐7  Experiment  control  as  a  service  

Basic Information
Service name: Experiment control (NEPI)
General description: NEPI is a Python-based framework to design and easily run network experiments on network evaluation platforms (e.g. PlanetLab, OMF wireless testbeds, network simulators, etc.).
User of the service: NEPI allows experimenters to specify resources, to define an experiment workflow, and to automate deployment, resource control and result collection.

Service management
Service Owner: Developer: INRIA - Sophia Antipolis
Contact information (internal): nepi-[email protected], nepi-[email protected]
Contact information (external): http://nepi.inria.fr/
Service status: Cycle 1: NEPI 3.0 beta release, involving software architecture and resource model changes that allowed the support for time- and condition-based actions (scheduler). Support for the PlanetLab testbed. Support for the OMF 5.4 testbed. Support for the Manifold API in its development state. Cycle 2: Support for SFA. Support for FRCP. Validation of the

                                                                                                                         7  http://nepi.inria.fr/    


scheduler using experiment cases. Additional support for future Internet technologies: OpenFlow and CCN. Cycle 3: Support for heterogeneous experiment use cases in testbeds that are SFA- and FRCP-compliant. Support for hybrid experiments. Support for data collection using the federation service.

Service Area/Category:
• Resource discovery through SFA (sfi tool)
• Resource reservation/provisioning through SFA (sfi tool)
• Experiment orchestration through FRCP and/or SSH
• Experiment execution through FRCP and/or SSH
• Data collection through Manifold and/or OML

Service agreements: SLA is n/a
Detailed makeup
Core service building blocks: NEPI
Additional building blocks: SFA AMs, OML, Manifold, SFI, FRCP, SSH
Service packages: NEPI version 3.0 is not packaged yet. It is available through the Mercurial repository:

hg clone http://nepi.inria.fr/code/nepi -r nepi-3.0-release

Dependencies: Software dependencies: Python 2.6+, SleekXMPP, sfa-common, sfa-client. Federation dependencies: SFA-compliant and FRCP-compliant testbeds.

Technical Risks and Competitors
Risks: For detailed risks, see the section "Important issues for federated experiment control" below.
Competitors: N/a

 

3.4.3 Important issues for federated experiment control
Using as a reference the experiment control section from deliverable D2.4 (Second Federation Architecture), which aggregates all architectural components needed to support experiment control, we describe for some components the main issues to be addressed during cycle 2 in order to move forward to federated experiment control.
The following issues arise from the experimenter-tool perspective when looking at the architectural components on the testbed and the experimenter side. We identified a few limitations in accessing and controlling the federated resources, either using experiment control tools or accessing and controlling the resources directly with a federation user account. The following paragraphs explain and propose possible solutions to some of these issues.
In the experiment control section of deliverable D2.4, among the testbed-side components the resource controller appears as one of the main components, with FRCP as the communication protocol (from D2.4: "actions have to be communicated to the resource controller in the FRCP protocol"). We found the following two shortcomings in the communication model for cycle 2.


1 - Need for an information model (Testbed side)
Some FRCP messages may include a <props> child element, which declares various properties related to that message; its basic child elements are <value> and <unit>. This is for example the case for the configure and request message types. We would like to ensure, for the same type of resource in different testbeds (for example the VM resource), that the property names match. Moreover, the types of the values to configure should match too, and setting a given value should leave the resource in the same state. The latter ensures uniform control of the VM resource across the federation.
Similar issues are being faced within the discovery and provisioning stages of the federation of resources: the resource descriptions in the form of RSpecs do not necessarily match the resource names nor their attributes for equal resources in other testbeds. The definition of an information model for the resource federation, and its adoption on the testbed side, could mitigate this problem.

2 - Defined resource states (Testbed side)
As mentioned above, controlling a resource will, most of the time, imply bringing it from one state to a different one. Different resources can have different states, and most probably they can share common state names, for example ACTIVE, INACTIVE, PAUSED, etc. The tasks performed on the resource when configuring it to a certain state will depend on the resource controller implementation, but for the same type of resource the state name should represent exactly the same state.
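As an illustration of such shared state names, a minimal sketch follows; the state names and the allowed transitions are assumptions for the example, not an agreed federation convention.

```python
# Sketch of a federation-wide resource state vocabulary: the same state
# name should mean the same thing on every testbed, and the transitions a
# resource controller allows between states should be documented.

from enum import Enum

class ResourceState(Enum):
    INACTIVE = "INACTIVE"
    ACTIVE = "ACTIVE"
    PAUSED = "PAUSED"

# Documented transitions a resource controller would enforce.
TRANSITIONS = {
    ResourceState.INACTIVE: {ResourceState.ACTIVE},
    ResourceState.ACTIVE: {ResourceState.PAUSED, ResourceState.INACTIVE},
    ResourceState.PAUSED: {ResourceState.ACTIVE, ResourceState.INACTIVE},
}

def can_transition(current, target):
    """True iff the controller documents a direct current -> target move."""
    return target in TRANSITIONS.get(current, set())
```

Publishing such a table per resource type (for instance as part of an FRCP inform message, as suggested below) would let experiment control tools validate a requested configuration before sending it.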
We believe this information should be well documented by resource controller developers, and we could also study the possibility of including it as part of an inform message, as this is a key point for experiment deployment.

The adoption of FRCP and resource controllers is a key factor for the cycle 2 architecture: the ability to control resources depends on it. We present the current status, plus the expected end-of-cycle-2 status, of these two components.

3 - Diversity of Resource Controllers within the federation (Testbed side)

The following RCs are supported:

• support for Virtual Machine resources (currently only KVM-based, with Ubuntu's virtualization management tools)
• support for OpenFlow resources
• support for PC hardware resources with network interfaces (wired/wireless) and their applications

A survey of the testbeds within the federation, conducted at the Plenary meeting in Ghent (April 2014), showed the status of deployment of FRCP resources:

Table 3-8 Status of FRCP deployment (April 2014)

Testbed             FRCP status
Planetlab Europe    FRCP enabled
Wilabt              FRCP enabled
Ofelia              FRCP enabled for cycle 2
Nitos               FRCP enabled
Netmode             FRCP enabled
Koren               No
Norbit              FRCP enabled


Ultra access        FRCP enabled for review
Virtual Wall        FRCP enabled
Bonfire             FRCP enabled
Smart Santander     FRCP enabled for cycle 2 or cycle 3
PerformLTE          FRCP enabled
C-LAB               FRCP enabled for cycle 2
Fuseco              FRCP enabled

4 - Cross-testbed accounts (Testbed side)

In order for experiment tools, or federation users, to be able to reserve resources for cross-testbed experiments without having to deal with different types of credentials, it is important that testbeds adopt the SFA interface. With the SFA user credential, slice credential and authority credential, the user can list resources, allocate them, provision them and delete them from his slice, as well as add or remove slices where allowed, in any SFA-compliant testbed that trusts the same registries.

These being purely control plane operations, we want to support a similar approach for experiment plane operations. We think it is important to facilitate uniform access and control, and we identify as an important issue the need for a clear mechanism to inform the user how to start controlling his resources. This can be done either by using the SFA manifest, adding information there about XMPP topics, gateways, usernames, etc., or by agreeing within the federation on conventions linking the SFA world to the particular resources, for example predetermining a relationship between the resource URN and the XMPP topic.

5 - File upload to resources (Testbed side; D4.2 page 41, SSH server)

Currently FRCP does not support uploading files to resources where this would be a possibility.
As an interim solution, we propose to allow SSH access as part of the allocation or provisioning step done in the testbed (copying the user's public key to the resources) for resources where uploading or downloading files (such as sources, videos, traces or pcap files) could be necessary during experiment development. We are aware that many testbeds already offer this option, but we encourage reporting this information back to the user in a standardized way, for example in the SFA manifest after the resources are allocated. In the future, FRCP resource controllers can be extended to support this feature.

6 - Resource bootstrapping (Testbed side)

In some cases the user may want to upload a modified version of an OS image to a resource needed for the experiment. Not every testbed or every resource supports this feature, but for the ones that do, we distinguish two ways of bootstrapping the modified image. The bootstrapping can be triggered by the SFA AM call that allocates the resource; this call schedules a task for the control and management framework of the testbed, which takes the modified image from a repository and installs it when the reservation time is reached. A second alternative is to schedule the provisioning of an FRCP resource controller that handles the image bootstrapping in response to a CONFIGURE message.
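Returning to the convention suggested in issue 4 above, a mapping from a resource URN to an XMPP topic could be as simple as the following sketch. The URN layout and the topic naming are purely hypothetical; neither SFA nor FRCP mandates them, and the federation would have to agree on such a convention.

```python
# Hypothetical convention only: neither the URN layout assumed below nor the
# resulting topic naming is mandated by SFA or FRCP.

def urn_to_xmpp_topic(urn):
    """Derive an XMPP pub-sub topic from a GENI-style resource URN.

    Expects URNs of the form urn:publicid:IDN+<authority>+<type>+<name>.
    """
    parts = urn.split("+")
    if len(parts) != 4 or not urn.startswith("urn:publicid:IDN+"):
        raise ValueError("unexpected URN format: %s" % urn)
    _, authority, rtype, name = parts
    # e.g. authority "omf:nitos" -> "omf.nitos", giving "omf.nitos.node.node001"
    return ".".join([authority.replace(":", "."), rtype, name])

print(urn_to_xmpp_topic("urn:publicid:IDN+omf:nitos+node+node001"))
# -> omf.nitos.node.node001
```

With such a convention in place, the SFA manifest would not need to carry explicit XMPP information, since an experiment tool could derive the topic from the URN it already knows.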

3.6.3 Cycle 2 developments for experiment control tools

In deliverable D5.1 we described and compared two experiment control user tools for the federation: NEPI and the OMF EC. Having fulfilled the cycle 1 requirements, we propose


in this document new objectives for both tools for cycle 2. The first objective is to incorporate advances made by other architectural components into the experiment control tools (more testbeds adopting SFA, FRCP-enabled resources, PDP support, measurements, etc.). The second objective is to propose experiment use cases to assess the usability and usefulness of the tools within the federation. These scenarios will then be used to showcase the advances of cycle 2 as demos. To this end, we suggest at the end of this section a possible demo scenario for the second review, which will be held in the middle of the second cycle, in month 21.

3.6.3.1 NEPI (experimenter side, running at the user's preferred location)

◦ Performance evaluation

During cycle 1 NEPI was adapted to support scheduling of tasks and the definition of experiment workflows to fulfill Fed4FIRE requirements. Scheduling refers to deferring the DEPLOYMENT, START and STOP of resources until the necessary conditions are met; this is important for parallel processing across two testbeds, for example to allow transmitting processed results from one resource to another. The scheduling of tasks is based on different conditions, for example time conditions and resource state conditions. The way scheduled tasks are handled by NEPI can have a huge impact on the performance of the tool (e.g. how fast an experiment is deployed and the limits on the number of simultaneous tasks it can handle), and could lead to unmet workflow constraints set by the user. One important objective for cycle 2 is to validate the task scheduling mechanisms in NEPI, to thoroughly evaluate their performance, and to identify possible issues and address them.

◦ Scenario validation

Another important objective for cycle 2 is to validate NEPI's ability to support different federated scenarios. To this end we will propose different use cases: we believe that by exercising the same experiment in more than one testbed, and also by targeting resources in more than one testbed at a time, we move towards the ultimate goal of federation in Fed4FIRE, probably only fully feasible during cycle 3. More information about the experiment cases is given in section 3.6.3.

◦ OpenFlow support

In order to support experimentation with Future Internet technologies in the context of Fed4FIRE testbeds, another task planned for NEPI during cycle 2 is deploying OpenFlow experiments using the PlanetLab Europe (PLE) testbed. PLE supports OpenFlow through a modified version of Open vSwitch called sliver-ovs. sliver-ovs is a customized version of the userspace datapath of Open vSwitch, which means it shares all the features of the Open vSwitch software, including the means to connect switches to OpenFlow controllers. We want to establish connections between OpenFlow switches running on PLE nodes, configuring an L2 overlay network. The switches are connected to each other by virtual cables, and each OpenFlow switch is equipped with a TAP device unique to each sliver. As a consequence, each sliver on a PLE node that belongs to the same subnet of the L2 overlay network will have a different TAP device. Moreover, other slivers on the same nodes have no access to the overlay network.

After cycle 2, NEPI will support describing and deploying a network of OpenFlow switches on PlanetLab nodes. The switches will be interconnected to client nodes through tunnels, also deployed by NEPI. Deploying OpenFlow controllers to manage the whole network should also be possible at the end of cycle 2.


In order to deploy and configure OpenFlow experiments in PlanetLab, we will implement new NEPI Resource Managers (RMs). Although this is not yet completely defined, we would like to implement four new types of RM to describe OpenFlow experiments: the Open vSwitch, the OpenFlow controller, the Open vSwitch port, and the switch-to-client node tunnels.

◦ Manifold support

Manifold is the component at the heart of MySlice, the web interface of the Fed4FIRE Portal. Manifold provides an interconnection framework for heterogeneous data, as well as a set of user interfaces. The portal inherits the architecture of this component as well as its extension capabilities: gateways allow new platforms to be added (such as testbeds via SFA), and plugins allow for a consistent and full-featured user interface adapted to the specifics of each testbed.

Within NEPI, we are mainly interested in the advantages of resource browsing, selection and reservation via MySlice, by querying the Manifold API. We started testing the API and its objects and queries during cycle 1, and we intend to continue this work during cycle 2 until we have achieved discovery, provisioning and reservation of SFA-federated resources. We will most probably face issues such as the lack of agreement on resource specification, but we would like to minimize the per-testbed post-processing of Manifold query results.

Depending on the advances of Manifold regarding measurement information, we would like to extend this feature to NEPI's Resource Managers using Manifold where possible.

◦ FRCP support

During cycle 1 many testbeds implemented an SFA interface, and with the adoption of OMF6 by several testbeds as their control and management framework, they started exposing an FRCP interface. FRCP may be used by any software component, such as NEPI, to control and orchestrate distributed testbed resources in the context of Fed4FIRE. The use of the SFA and FRCP interfaces aims at establishing a federation layer for the control plane and the experiment plane respectively.

In order to advance NEPI's support for federated testbeds regarding FRCP, we plan to deploy our own OMF 6 testbed, containing the different FRCP-enabled components, and study the messages exchanged between the different Resource Controllers and the OMF experiment controller. We will then implement an OMF6 protocol client in NEPI, supporting the five XML messages: INFORM, CONFIGURE, REQUEST, CREATE and RELEASE.

We will also extend NEPI's existing XMPP client class, currently working with the OMF 5.4 messages, to support more functionality (such as OML measurements). Finally, we will choose between two approaches to model OMF 6 resources in NEPI. One approach is to create a new Resource Manager for each OMF6 resource; the other is to create a generic Resource Manager and associate it with a set of OMF resources. This decision will be based on the similarities of the exchanged messages for the set of resources. For example, to put different OMF6 resources into the ACTIVE state, they might use the same configuration of the CONFIGURE XML message, even though the final ACTIVE state refers to different actions being performed on the resources. There is therefore a possibility to reuse code, and we plan to study this matter further during cycle 2.
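The trade-off between per-resource and generic Resource Managers can be sketched as follows. The resource type names and the message shape are hypothetical, chosen only to show how message similarity enables code reuse:

```python
# Sketch of a generic Resource Manager shared by several OMF6 resource types.
# The resource type names and the message format are illustrative assumptions.

class GenericOMF6ResourceManager:
    """One RM class associated with a set of resources that all accept the
    same CONFIGURE message shape to change state."""

    SUPPORTED_TYPES = {"node", "application", "vm"}  # hypothetical set

    def __init__(self, rtype):
        if rtype not in self.SUPPORTED_TYPES:
            raise ValueError("no generic handling for %r" % rtype)
        self.rtype = rtype
        self.state = "INACTIVE"

    def configure(self, message):
        # The same message shape drives every supported type, even though the
        # concrete actions behind e.g. ACTIVE differ per resource.
        if "state" in message:
            self.state = message["state"]
        return self.state

rm_node = GenericOMF6ResourceManager("node")
rm_app = GenericOMF6ResourceManager("application")
for rm in (rm_node, rm_app):
    rm.configure({"state": "ACTIVE"})
print(rm_node.state, rm_app.state)
```

A resource type whose CONFIGURE handling diverges too much from the shared shape would instead get its own dedicated RM, which is the decision criterion described above.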

3.6.3.2 OMF EC (experimenter side, running on an experiment server)

Our current development plans for the OMF Experiment Control Framework follow the four points below.

1) Integration of previous OMF 5.4 features into OMF 6


The latest version 6 of OMF is a complete re-design relative to the previous version 5.4. This re-design was made necessary as we adopted a new model for resource handling, and the new FRCP protocol to exchange control requests and information between controllers and resources. To focus on the timely delivery of the first stable releases of OMF 6, we deliberately left some of the less-used OMF 5.4 features aside for later implementation.

Our plan for the coming months includes the gradual implementation of these features in OMF 6. One example of such a feature is the support for custom user-defined events. In the previous 5.4 version, a user was able to define a custom event based on the state of some resources or on some measurements being collected. While some user-defined event support currently exists in OMF 6, it does not yet support this scenario, and we will extend it to do so. Another example is the support for loading different parts of an experiment from separate standalone files, e.g. one file containing the main experiment description together with multiple other files with application definitions. These files could be local to the experimenter's platform or available remotely via URIs.

2) Support for long-running experiments

Current OMF-based experiments are executed in the context of an Experiment Controller (EC) instance.
Thus, when the EC instance is terminated, so is the experiment. In many experimental scenarios, the researcher should be able to terminate the EC process used to launch the experiment while the involved remote resources continue running whatever tasks the experiment requires. At a later time, the researcher should be able to start a new EC instance, "re-attach" it to the running experiment, and issue further query/control commands to the remote resources.

The implementation of such a long-running experiment feature requires many separate smaller features, such as:

• allow the EC to detach from an experiment without resetting the involved resources
• allow the EC to re-establish communication with resources which are still involved in an experiment
• allow the EC to store the state of an experiment, e.g. in an OML database
• extend OEDL (i.e. the language used to describe experiments) to support long-running experiments. For example, when re-attaching to an experiment and providing an experiment script at the same time, any already executed part of that script should not be re-executed, while new parts should be executed.
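The OEDL requirement in the last bullet amounts to keeping a persistent record of executed steps. A minimal, framework-independent sketch of the idea (a plain set stands in for the persistent store, which in OMF would more likely be an OML database):

```python
# Minimal sketch of "do not re-execute already executed script parts" when an
# EC re-attaches. A plain set stands in for the persistent experiment state.

def run_experiment(script_steps, executed_log):
    """Execute named steps in order, skipping any recorded in executed_log."""
    ran = []
    for step_name, action in script_steps:
        if step_name in executed_log:
            continue                      # already done before the detach
        action()
        executed_log.add(step_name)       # persist so a later EC can skip it
        ran.append(step_name)
    return ran

log = set()
steps_v1 = [("deploy", lambda: None), ("start_app", lambda: None)]
run_experiment(steps_v1, log)             # first EC run executes both steps

# Re-attach with an extended script: only the new step runs.
steps_v2 = steps_v1 + [("collect", lambda: None)]
print(run_experiment(steps_v2, log))  # ['collect']
```

The interesting design question, which this sketch glosses over, is how to name steps stably so that the same OEDL statement is recognized across two EC instances.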

3) Improved scalability

The communication between the OMF entities has so far used an XMPP-based publish-subscribe system. The publish-subscribe paradigm offers many interesting properties, such as being asynchronous, supporting any-to-any messaging, and scaling to a large number of entities. However, following recent performance tests we identified some limits of the current XMPP-based solution that OMF is using, as well as some internal communication-related inefficiencies within the EC. These issues effectively limit the number of resources and the number of experiments that can be supported by the current OMF entities. This is clearly an issue in the context of Fed4FIRE, since one of the goals of federating all these different facilities is to enable large-scale experiments.


We have started addressing these inefficiencies and testing a new publish-subscribe scheme based on AMQP. While the initial results are promising, i.e. we manage to support more resources and experiments, our current investigations are not yet complete. We plan to integrate the results of this work in a production-ready release of OMF 6 and update its documentation accordingly.

4) Support for new resources

The list of resources supported by the OMF Experiment Control tools has been growing steadily (e.g. PC-based resources, KVM-based virtual machines, software applications, Android phones). We plan to add new resources to this list, such as GENI Racks and Amazon cloud-based virtual machines.
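The asynchronous, any-to-any properties that make publish-subscribe attractive for EC/RC communication (point 3 above) can be illustrated with a transport-agnostic broker sketch. A real deployment would of course use an XMPP or AMQP server; the topic name below is illustrative.

```python
# Toy in-process publish-subscribe broker illustrating why the paradigm suits
# EC/RC communication: publishers and subscribers only share a topic name and
# never address each other directly.
from collections import defaultdict

class Broker:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # Delivery fans out to every subscriber of the topic.
        for callback in self.subscribers[topic]:
            callback(message)

broker = Broker()
received = []
# Two resource controllers listening on the same topic (any-to-any messaging).
broker.subscribe("omf.nitos.node.node001", received.append)
broker.subscribe("omf.nitos.node.node001", received.append)
broker.publish("omf.nitos.node.node001", "CONFIGURE state=ACTIVE")
print(received)
```

The scalability limits discussed above live precisely in the broker: once it saturates, every EC and RC attached to it is affected, which is why swapping the XMPP transport for AMQP can raise the supported number of resources without touching the FRCP message semantics.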

3.5 Specification of the user interface / portal (Task 5.6)

In line with the cycle 2 specification plans, this section first provides a tentative definition of the Fed4FIRE portal as a single point of access to services. This is followed by a summary of cycle 1 achievements and a description of the cycle 2 architecture and its components. As an alternative to the portal we describe jFed, a standalone tool that supports the WP5 services.

3.5.1 Portal service

Table 3-9 Portal service description

Basic Information

Service name: User interface and portal

General description: The Fed4FIRE portal is a user interface that allows experimenters to register and to access the resources provided by the facilities in a user-friendly way. The component is coded in Python using the Django framework, with HTML and JavaScript for the user interface, and a Manifold backend including an SFA gateway. Experimenters can register accounts on the portal. A registered experimenter is offered mechanisms to discover available resources and gather information on their nature and capabilities, reserve resources, describe their research and related experiments, get information on running experiments such as monitoring data, and access and analyze the results of an experiment.

User of the service: Experimenters

Service management

Service Owner: Developer: UPMC

Contact information (internal): [email protected]

Contact information (external): Fed4FIRE First Line Support

Service status:
  Cycle 1: beta version, development
  Cycle 2: production version, integration of new testbeds and plugins
  Cycle 3: integration of experiment control and monitoring

Service Area/Category:
  • Account management and authentication
  • Resource discovery
  • Resource reservation
  • Data collection

Service agreements: SLA is n/a


Detailed makeup

Core service building blocks: Manifold, MySlice

Additional building blocks: SFA Registry, SFA AMs, OML

Service packages: n/a

Dependencies:
  SFA Registry and AMs: pyOpenSSL, m2crypto, xmlsec1-openssl-devel, libxslt-python, python-ZSI, util-linux-ng, python-lxml, python-setuptools, python-dateutil, postgresql, postgresql-python, python-psycopg2, python-sqlalchemy, python-migrate, python-xmlbuilder, postgresql-server
  Manifold: python2.7, python-lockfile, python-setuptools, python-pyparsing, python-BeautifulSoup, python-networkx, python-pygresql, python-twisted, python-lxml, python-daemon, sqlite3, python-m2crypto, cfgparse, make
  MySlice: python-django, jQuery, Bootstrap CSS, Google Maps

Technical Risks and Competitors

Risks: The portal relies on the distributed architecture of the federation of testbeds. Thus, faults can occur on the portal because of one of the distributed AMs; the Manifold framework is therefore able to handle partial responses from the available AMs. Proper monitoring of the AMs in the Fed4FIRE federation is required and is already available. The SFA Registry, which stores the users and delivers credentials, is


a potential central point of failure and has to be carefully monitored. However, users' credentials are stored in a database local to the portal and can be used until their expiration date is reached. The portal web interface relies on third-party software such as Django, jQuery and Bootstrap CSS; the differences between versions have to be tested to ensure full compatibility. The map plugin relies on the Google Maps service and cannot be used if the Google service is not reachable; however, the user can still use other plugins, such as the table view of resources. Other third-party software installed through Linux packages depends on the Linux distribution and version, and has to be tested when updates are required.

Competitors: GENI Portal

3.5.2 Cycle  2  Specification  of  the  Portal    

3.5.2.1 Portal Architecture

MySlice as a portal for testbeds relies on three layers, as shown in Figure 3-21: (1) the MySlice web frontend; (2) the Manifold backend; (3) the SFA AMs at the testbed side. The sub-sections below describe these layers, while Appendix B (Section 0) contains Table 10-1 with the summary of the functionality achieved per testbed within cycle 1 and the essential requirements to be fulfilled in cycle 2, and Table 10-2 with cycle 2 commitments and plans per project partner.

3.5.2.1.1 MySlice web frontend

The MySlice web frontend is coded in Python and uses the Django framework. MySlice issues queries to the Manifold backend using an XMLRPC API. An example query to get the list of testbeds is given below:

testbed_query = Query().get('network').select('network_hrn', 'platform')
page.enqueue_query(testbed_query)

When the web frontend gets the result back from Manifold, plugins are responsible for displaying it to the user. The testbedList plugin inherits from the simpleList plugin and takes a query as a parameter. The page.enqueue_query function loads the results of a query asynchronously into the plugin.

testbedlist = TestbedList(
    page  = page,
    title = "testbeds",
    query = testbed_query,
)

More information about the development of plugins can be found in the MySlice documentation:
http://trac.myslice.info/wiki/Manifold/Extensions/Plugins
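To show how the chained Query calls above compose, here is a toy re-implementation of the pattern. It mimics the get()/select() interface seen in the example but is not the real Manifold class, which offers filters, sorting and more:

```python
# Toy version of the Manifold-style chainable query builder used above.
# Only get() and select() are modeled; the real Query class offers more.

class Query:
    def __init__(self):
        self.object_name = None
        self.fields = []

    def get(self, object_name):
        self.object_name = object_name
        return self          # returning self is what enables chaining

    def select(self, *fields):
        self.fields.extend(fields)
        return self

    def to_dict(self):
        """Serialize the query, e.g. for shipping over XMLRPC."""
        return {"object": self.object_name, "fields": list(self.fields)}

testbed_query = Query().get('network').select('network_hrn', 'platform')
print(testbed_query.to_dict())
```

Because every method returns the builder itself, a frontend plugin can declare what it needs in one readable expression and hand the finished query to the backend.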


Figure 3-21 Fed4FIRE Portal Architecture

3.5.2.1.2 Manifold backend

Manifold receives queries issued either by the MySlice web frontend or by a 3rd-party tool. All these queries are sent to the Manifold backend, which identifies the relevant platforms to answer a query: Figure 3-22 shows the message sequence chart (MSC) for Get Platforms, while Figure 3-23 shows the MSC for Get Resources. The underlying information model has to be specified using metadata files. An extract of the SFA metadata file:

class resource {
    const text urn;
    const text hrn;
    const text type;
    const text network_hrn;
    const text hostname;
    const text component_manager_id;
    const text component_id;
    const bool exclusive;
    const text component_name;
    const hardware_type hardware_types[];
    const location location;
    const interface interfaces[];
    const text boot_state;
    const text country;
    const text longitude;
    const text latitude;
    const text x;
    const text y;
    const text z;
    initscript initscripts[];
    tag tags[];
    slice slice[];
    KEY(urn);


    CAPABILITY(retrieve, join, fullquery);
};

Manifold core then dispatches the query to the relevant platforms: testbeds, databases or other services. These platforms can be contacted through different Manifold gateways: SFA, PostgreSQL, TDMI, etc.

 Figure  3-­‐22  Get  Platforms  MSC  

More information about the gateways in Manifold can be found in the documentation:
http://trac.myslice.info/wiki/Manifold/Extensions/GatewayIndex

The results from the various platforms are retrieved, aggregated and sent back to the requester, either the MySlice web interface or the 3rd-party tool that issued the query. Example queries can be found in the documentation:
http://trac.myslice.info/wiki/MySlice/SampleQueries


 Figure  3-­‐23  Get  Resources  MSC  

 

3.5.2.1.3 User Registration Process

When an experimenter registers on the portal, several actions are performed in order to give her/him access to the federated testbeds. First, an account is created locally on the portal and in the Manifold backend with a pending status. The portal then requests the list of PIs from the Fed4FIRE Registry using the SFA gateway and sends an email to the PIs of the relevant authority (e.g. fed4fire.upmc, fed4fire.iminds, …). On receiving the email, a PI navigates to the portal and validates the request of the new user. This validation triggers a user account creation in the Fed4FIRE Registry and enables the user's local Manifold account. Finally, an email is sent to the experimenter informing him that his account has been validated; see the figure below.


 Figure  3-­‐24  User  registration  MSC  

Requirement: a testbed willing to join the federation (see the MSC in Figure 3-24) has to trust identity providers; in the Fed4FIRE context this is the Fed4FIRE central Registry.

3.5.2.2 Cycle 2 requirements for measurement and monitoring
Following the Fed4FIRE Berlin technical meeting in October 2013, the WP6 partners have identified the requirements for measurement and monitoring as described in D6.2. Experimenters will be able to send queries to the Manifold backend in order to access OML measurements and SFA information.

The following sources of data will be accessible through an API using Manifold:

• OML exposes measurements taken during an experiment. OML is also adopted by Fed4FIRE to collect infrastructure measurements (first-line support information such as Nagios and Zabbix data) and facility measurements (CPU load, etc.).
• SFA provides testbed-, resource- and slice-level information.
• TDMI provides infrastructure measurements by performing network measurements (such as traceroute measurements between each pair of nodes of PlanetLab Europe).

3.5.3 Standalone tool – jFed
The cycle 2 architecture defined by WP2 in D2.4 mentions the existence of a component called "stand-alone tool". This refers to the fact that the experimenter has the freedom to use any tool that he/she desires to perform the different steps of the experiment management lifecycle. The only requirement is that these tools adopt the appropriate Fed4FIRE interfaces (the SFA AM API for resource discovery, reservation and provisioning, FRCP for experiment control, and OML for measurements and monitoring). In Fed4FIRE, iMinds is developing one such tool, intending to provide a user interface that abstracts as many underlying technical details as possible, while being able to handle the heterogeneity that is typical for Fed4FIRE. The specifications were not yet part of the first round of WP5 specifications, described in D5.1. However, considerable progress has been made with jFed in that first development cycle of the project; the specifications of jFed listed in this section therefore relate both to cycle 1 and cycle 2 of the project.

 Figure  3-­‐25  Building  blocks  of  the  jFed  toolkit  

jFed can actually be considered as more than a user interface; it is more appropriate to refer to it as a toolkit implemented in Java to support the federation of Future Internet resources. As depicted in Figure 3-25, the jFed toolkit is composed of different building blocks:

• jFed low-level library: this is a Java wrapper around the XMLRPC functionality that is needed to call the different Fed4FIRE APIs. In cycle 1 the focus was on supporting the SFA AM API; in cycle 2 it will be explored whether the addition of support for FRCP and OML is feasible. This library abstracts the details of the XMLRPC aspect of calling the API, meaning that applications developed on top of it can focus on the actual meaning of the API calls (arguments, functionality, return values).

• jFed high-level library: for certain developers (e.g. developers of experimenter user interfaces), there is no need to be acquainted with every little detail of the Fed4FIRE APIs. For them it is sufficient to be able to discover and reserve resources in a more abstracted manner. The jFed high-level library was developed for this purpose: it is an additional library intended to be used by application developers, providing abstracted functionality for working with testbed resources. The high-level library itself makes use of the low-level library.

• jFed probe: a tool to manually verify whether every call of the API is supported correctly by a given testbed. This is very useful when learning about the Fed4FIRE APIs, or when adding a new testbed to the federation. It can be used both by testbed developers pursuing compatibility with the Fed4FIRE architecture, and by the federator in charge of the quality control of new Fed4FIRE testbeds. A screenshot of this tool is given in Figure 3-26.

 



• jFed probe CLI: the command-line version of the jFed probe. It allows easy integration of jFed functionality in other frameworks (e.g. the First Level Support dashboard of Fed4FIRE). In such cases one would typically script the call of some specific API functions using the jFed probe CLI, and would then redirect the text output to the input of another process, script or daemon.

• jFed automated tester: where the probe treats every performed API call entirely independently, the automated tester performs a sequence of API calls in which the relation between subsequent calls is kept. It is thus a stateful API testing tool, typically used for the automated compliance testing of testbeds that are already part of the federation.

• jFed user interface: this part of the jFed toolkit targets actual experimenters. It intends to provide a user experience that is as easy as possible, by abstracting as many underlying details as possible. At the same time, it allows experienced experimenters to get in touch with those underlying details if they want to. The tool includes (some aspects were finalized in cycle 1, others are being further developed in cycle 2 and possibly even cycle 3):

o a graphical design pane that allows experimenters to easily find appropriate resources, add them to the experiment, and connect them in a desired topology;
o an editor that allows manual editing of the underlying request RSpecs in order to support even the most testbed-specific functionalities;
o the possibility to save experiment configurations locally, and load them again later;
o built-in functionality to report a bug to the developers, which includes extensive meta-information about the experiment in order to allow high-quality support;
o a mechanism to log in through SSH on any of the resources of the experiment with the click of a button, using a single set of credentials;
o support for an SSH gateway server in case e.g. IPv6 is needed but not available at the experimenter side, or if the experimenter is behind firewalls that block some of the ports needed by the Fed4FIRE APIs. iMinds has deployed such an SSH gateway server that can be used by jFed UI experimenters; the corresponding architecture is depicted in Figure 3-27;
o the possibility to show a log of all the API calls that have been generated behind the curtains;
o an automated check to easily identify any connectivity issues: this allows the experimenter to assess whether it is necessary to use the SSH gateway server.

Figure  3-­‐26  jFed  Probe  (manual  testing  +  API  learning)  


o a mechanism to orchestrate certain actions that need to be executed on the resources during the course of the experiment;
o a mechanism to retrieve OML monitoring and measurement data from the experiment and present it visually to the experimenter.
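The connectivity check mentioned in the list above essentially amounts to probing whether the required endpoints and ports are reachable from the experimenter's machine. A minimal sketch of such a check (the endpoints are illustrative and jFed's actual checks are more extensive):

```python
import socket

def port_reachable(host, port, timeout=2.0):
    """Return True when a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def connectivity_report(targets):
    """Probe a list of (host, port) pairs; if any direct connection fails,
    the experimenter may need the SSH gateway server as a fallback."""
    results = {(host, port): port_reachable(host, port) for host, port in targets}
    results["gateway_needed"] = not all(results.values())
    return results

# Usage (endpoints illustrative -- the real ones depend on the testbeds used):
# report = connectivity_report([("am.example-testbed.eu", 443)])
```

A tool can run such probes against the API ports of all relevant aggregate managers and, on failure, offer to route traffic through the SSH gateway server instead.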

Some screenshots of the more mature aspects are given below as an illustration of the look and feel with which the newly developed components should comply.
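The layering of the jFed libraries described above, a thin transport wrapper with an abstracted facade on top, can be sketched as follows. Python is used here for brevity (jFed itself is a Java toolkit), all class names and option values are illustrative, and the raw method names simply mirror the GENI AM API:

```python
import xmlrpc.client

class LowLevelAM:
    """Mirrors the role of the jFed low-level library: it hides connection
    handling but exposes the raw AM API calls one-to-one. The endpoint and
    call signatures below are illustrative, not jFed's actual API."""
    def __init__(self, url):
        self.proxy = xmlrpc.client.ServerProxy(url, allow_none=True)

    def get_version(self):
        return self.proxy.GetVersion()

    def list_resources(self, credentials, options):
        return self.proxy.ListResources(credentials, options)

class HighLevelFacade:
    """Mirrors the role of the jFed high-level library: callers work with
    abstracted operations ('discover resources') instead of raw API calls."""
    def __init__(self, low_level):
        self.am = low_level

    def discover(self, credentials):
        # Hide the options dictionary that the raw call requires.
        options = {
            "geni_available": True,
            "geni_rspec_version": {"type": "geni", "version": "3"},
        }
        return self.am.list_resources(credentials, options)
```

An experimenter-facing UI would talk only to the facade, while the probe and automated tester exercise the low-level wrapper directly.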

Figure 3-29: design pane of the jFed UI, representing the resources in an abstracted manner while allowing the experimenter to select the specific testbeds from which the resources should be requested


Figure 3-27: Architecture of the Fed4FIRE SSH gateway server deployed by iMinds


   

 

 Figure  3-­‐31:  Easy  submission  of  bug  reports  that  contain  a  log  of  all  API  calls  that  were  generated  behind  the  curtains  

   

Figure  3-­‐30:  Manual  editing  of  Rspec  to  support  the  most  testbed-­‐specific  features  in  jFed  


 Figure  3-­‐32:  jFed  UI  Connectivity  Tester  

   


4 Conclusion and Future Plans
In this deliverable, the experiment lifecycle management was specified as a service, in conformance with the emerging industry standard for service management in federated IT infrastructures. To further foster this work, WP5 shall follow the usage patterns, the incidents, and both the architecture development and the sustainability work in the project.

Regarding the semantic description of resources within the federation, the future plan is twofold, in both cases particularly for sustainability reasons: on the one hand, it is envisioned that each testbed offers resource descriptions based on the developed ontology; on the other hand, based on the experience gained by creating and applying this information within the Fed4FIRE federation, the model will be further harmonized through joint research with related communities, such as GENI, IEEE P3203 and OneM2M.

This document also discussed that during the second cycle of development for the reservation broker, the efforts will be focused on the modifications necessary to create a central instance of the service. The service is tightly coupled with MySlice/Manifold; hence the corresponding data model will be adopted in cycle 2. In the following period we will look into the adaptation of existing approaches for the delivery of the mapping sub-module in the Fed4FIRE environment, as well as investigate the exploitation of semantic-web technologies (an OWL-based information model) for request partitioning and resource mapping.
For SLA management, coupled to resource provisioning, a first introduction of the SLA front-end tool was presented, together with an explanation of the implementation plan for the Django-based MySlice portal. As the next steps towards the implementation of SLA management for cycle 3, we described the following major tasks:

• Implement a general SLA for all facilities based on facility monitoring (testbed up/down).
• Gather monitoring data about resources from the semantic resource database.
• Differentiate between Service Level Agreement (SLA) and Operational Level Agreement (OLA) management.

For the resource control service, the main plan for future work is to gain experience with the service deployment and its support in heterogeneous testbeds. User interface and portal service developers will collect feedback from multiple users of this service and shall plan the improvements for cycle 3 of the project.

Finally, we want to remark that we need to understand the lessons from the service usage in the operational environment, specifically from the use of the service by the Open Call experiments. Therefore we plan to maintain good communication with WP10 in order to gain this very important knowledge.


References
[1] Fed4FIRE. (2013). Detailed specifications for first cycle ready (Deliverable 5.1). EU: Fed4FIRE Consortium.
[2] Fed4FIRE. (2014). Second Federation Architecture (Deliverable 2.4, version 8; see footnote 8). EU: Fed4FIRE Consortium.
[3] Delphi method. (2014, February 24). In Wikipedia, The Free Encyclopedia. Retrieved 12:08, February 28, 2014, from http://en.wikipedia.org/w/index.php?title=Delphi_method&oldid=596934962
[4] FitSM - Standard for lightweight service management in federated IT infrastructures, on-line resource of the EU FedSM project. Retrieved 11:41, February 27, 2014, from http://www.fedsm.eu/fitsm
[5] FitSM, PR1: Service Portfolio Management, Template: Service Portfolio / Catalogue entry, on-line resource of the EU FedSM project. Retrieved 17:33, February 27, 2014, from http://www.fedsm.eu/fitsm/4
[6] S. Taylor, V. Krivcov, P. Rey, F. Lobillo, G. Androulidakis, V. Pouli, A. Kapoukakis, "D7.2: Detailed specifications regarding trustworthiness for the second cycle." Deliverable of the FP7 Fed4FIRE project, February 2014.
[7] Documentation of the testbeds federated in Fed4FIRE: http://doc.fed4fire.eu/testbeds.html
[8] S. Taylor, T. Leonard, M. Boniface, G. Androulidakis, L. Baron, M. Ott, "D7.1: Detailed specifications regarding trustworthiness for the second cycle." Deliverable of the FP7 Fed4FIRE project, February 2014.
[9] Extending the Web GUI with plugins: http://trac.myslice.info/wiki/Manifold/Extensions/Plugins
[10] X. Yufeng, I. Baldine, J. Chase, and K. Anyanwu, "TR-13-02: Using Semantic Web Description Techniques for Managing Resources in a Multi-Domain Infrastructure-as-a-Service Environment," RENCI Technical Report Series, Tech. Rep., April 2013.
[11] M. Ghijsen, J. van der Ham, P. Grosso, and C. de Laat, "Towards an Infrastructure Description Language for Modeling Computing Infrastructures," in IEEE 10th International Symposium on Parallel and Distributed Processing with Applications, IEEE, Jul. 2012, pp. 207–214.
[12] M. Ghijsen, J. van der Ham, P. Grosso, C. Dumitru, H. Zhu, Z. Zhao, and C. de Laat, "A Semantic-Web Approach for Modeling Computing Infrastructures," to appear in Computers and Electrical Engineering, 2013.
[13] I. Baldine, Y. Xin, A. Mandal, C. H. Renci, U. J. Chase, V. Marupadi, A. Yumerefendi, and D. Irwin, "Networked cloud orchestration: A GENI perspective," in GLOBECOM Workshops (GC Wkshps), 2010 IEEE, IEEE, 2010, pp. 573–578.
[14] Y. Xin, C. Hill, I. Baldine, A. Mandal, C. Heermann, and J. Chase, "Semantic Plane: Life Cycle of Resource Representation and Reservations in a Network Operating System," RENCI, Tech. Rep., 2013.
[15] G. Klyne, J. J. Carroll, and B. McBride, "Resource Description Framework (RDF): Concepts and abstract syntax," W3C Recommendation, 2004.
[16] D. Newman, S. Bechhofer, and D. De Roure, "myExperiment: An ontology for e-Research," 2009.
[17] A. Pras and J. Schoenwaelder, "On the Difference between Information Models and Data Models," RFC 3444 (Informational), 2003.
[18] D. L. McGuinness and F. van Harmelen, "OWL Web Ontology Language Overview," W3C, Tech. Rep., 2004.

Footnote 8: At the time of writing, D2.4 v8 was under project-internal review; the content borrowed from that deliverable was not revised after the review was completed.


[19] M. Tosic, I. Seskar, and F. Jelenkovic, "TaaSOR: Testbed as a Service Ontology Repository," Testbeds and Research Infrastructure. Development of Networks and Communities, vol. 44, pp. 419–420, 2012.
[20] D. Brickley and R. V. Guha, "Resource Description Framework (RDF) Schema Specification 1.0: W3C Candidate Recommendation 27 March 2000," W3C, Tech. Rep., 1998.
[21] E. Prud'hommeaux and A. Seaborne, "SPARQL Query Language for RDF," W3C Recommendation, 2008.
[22] Y. Arens, C.-N. Hsu, and C. A. Knoblock, "Query processing in the SIMS information mediator," Advanced Planning Technology, vol. 32, pp. 78–93, 1996.
[23] B. Adida and M. Birbeck, "RDFa Primer - Bridging the Human and Data Webs," http://www.w3.org/TR/xhtml-rdfa-primer/, October 2008.
[24] ProtoGENI, http://www.protogeni.net/wiki/RSpecLifeCycle


5 Appendix A: WP5: Evaluation of Cycle 2 items (procedure and outcome)

• Goal of this study is to mutually agree on the priorities of possible WP5 plans for cycle 2.
• Sources of these possible plans are:
o the evolution of the architecture reported by WP2 (based on \WP2 Architecture\2.1 Architecture\cycle2_inputs);
o the WP5 promises/issues reported in D5.1 (these can be biased and need a reality check; feel free to comment).
• Method: we shall use the Delphi method to collect individual opinions on importance (Im) and problem level (Pr) per item, average them, and prioritize the items based on the value of each item's criticality (Cr):

Cr = Im · Pr

• Expert opinions should be expressed as integer values (ranks) taken from the scales below. The Delphi methodology generally recommends that experts do not take too much time thinking about their opinions (it works better with 'blink'-type thinking); in this particular study we ask you not to use the value 0 (= don't know).
• Scales: Im and Pr both range over {0, 1, 2, 3, 4, 5, 6, 7, 8, 9}, where 0 means 'don't know' and the remaining values range from low through fair to critical.
• Rules: please use the pages with questions for drafting and keep them for your own reference at a later time (possibly we shall run a second phase of this evaluation, in which all experts will be asked whether they wish to edit their initial ranking per item based on knowledge of the average values); please remember to transfer your ranks to the sheet labelled 'To Be Returned To Moderator'.
• Additional items are very much welcome; please use extra sheets to describe and evaluate them.

Thank you in advance!
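The averaging and prioritization step can be illustrated with a short sketch. The expert ranks below are hypothetical; the actual study averaged the ranks of the project members attending the workshop:

```python
def criticality(importance_ranks, problem_ranks):
    """Average the expert ranks per item and combine them as Cr = Im * Pr,
    where Im and Pr are the mean importance and mean problem level."""
    mean_im = sum(importance_ranks) / len(importance_ranks)
    mean_pr = sum(problem_ranks) / len(problem_ranks)
    return mean_im, mean_pr, mean_im * mean_pr

# Hypothetical ranks from four experts for a single item (ranks 1..9, 0 excluded).
im, pr, cr = criticality([7, 6, 8, 7], [7, 8, 6, 7])
# im = 7.0, pr = 7.0, cr = 49.0; items are then sorted by cr, highest first.
```

Applying this to all items and sorting by Cr in descending order yields the prioritized list shown in Table 5-1 and Figure 5-1.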

   


The Delphi outcome below shows, per item, its average importance, average difficulty and criticality as evaluated by the project members attending the Berlin technical workshop (October 2013).

Table  5-­‐1  Cycle  2  Delphi  evaluation  

Each entry below gives the item ID, item name (source) and item description, followed by [mean importance | mean difficulty | criticality].

2-1 Testbed Protection (WP2): Policy Decision Point for dynamic, fine-grained protection of the testbeds, covering authorization, slice credentials, (delegation?). [6.714286 | 7.071429 | 47.47959]

2-2 Directory (WP2): computer-readable testbed and certificate directory for testbeds (and tools) to obtain a list of testbeds (URLs) and of root CAs. Use cases: XML, XML+date+version+MD5, GENI/Utah Clearinghouse. [6.928571 | 5.214286 | 36.12755]

2-3 FLS Monitoring (WP2): First Level Support monitoring for Zabbix-to-OML interoperability. Use cases: aggregation, other: ______? [5.142857 | 4.785714 | 24.61224]

2-4 Reputation (WP2): mechanisms and tools towards building trustworthy services based on the combination of reputation and monitoring data, empowering users/experimenters to select testbeds based on dynamic performance metrics. Use cases: testbed evaluation (K testbeds and N users), with testbed service reputation based on ROCQ (Reputation, Opinion, Reliability, and Quality); user evaluation (cycle 3?) based on user credibility and behaviour (as at eBay, based on user behaviour counters). [4.714286 | 6.142857 | 28.95918]

2-5 Advanced RSVP (WP2): advanced reservation of resources to provide exclusive access to physical machines, such as the wireless testbeds. Use case: central RSVP broker. [6.428571 | 6.642857 | 42.70408]

2-6 Services (WP2): add services to testbeds that currently have only infrastructure, for: agnostic use of resources, reusability, abstraction, composability, SLA conformance, simplicity, automatic deployment, sustainability. Use cases: catalogue, service composition editor, cluster (Hadoop/Storm) service deployment. WP4 input on services and architectures: weather forecast, IMS NFV, IMS video-conference over cloud application testing in a disruptive wireless environment, BonFIRE stats, cloud manufacturing. [5.571429 | 6.642857 | 37.0102]

2-7 Clearing House (WP2): [6.0 | 6.0 | 36.0]


2-7 (description, continued): uniform clearing house (CH) providing standard APIs for Member Authority and Slice Authority.

2-8 SLA Management (WP2): SLA management will add an additional level of abstraction so that, instead of the resources directly, the SLAs offered based on them are presented. Use cases: [federated] SLA lifecycle management (one SLA per resource | pre-determined resource packaging); SLA management architecture (centralized | distributed; example: web-services agreement specification). [5.142857 | 6.857143 | 35.26531]

5-1 Certificates (D5.1): experimenter certificate creation and sign-up process for experimenters (both professors and students) to join and leave the federation easily. Use cases: with delegation | without delegation | revocation of certificate (?). [7.142857 | 5.928571 | 42.34694]

5-2 FIRE Generic (D5.1): generic requirements of a FIRE federation: scalability, upgrade support, concurrent versions, ease of use. [6.071429 | 5.714286 | 34.69388]

5-3 Sustainability (D5.1): sustainability requirements for experimenters (both types: professor | student) to join/leave easily via certificates. Use case: detailed certification. [6.142857 | 5.642857 | 34.66327]

5-4 Infrastructure community (D5.1): high-priority requirements of the infrastructure community: ontology-based resource discovery of nodes; ontology-based resource discovery of intra-infrastructure topology; ontology-based resource discovery of inter-infrastructure topology (structured interconnection); resource reservation (soft | advance | multi-site | reservation info); interconnectivity (L3 connectivity | transparency information). [7.142857 | 7.571429 | 54.08163]

5-5 Service community (D5.1): high-priority requirements of the services community: resource discovery for connectivity (structured interconnection); experiment control (structured interconnection). [6.357143 | 6.785714 | 43.13776]

5-6 Portal 1 (D5.1): general portal enhancements for full lifecycle support by testbed owners and WP5 via custom plug-ins. [6.642857 | 5.857143 | 38.90816]

5-7 Portal 2 (D5.1): portal registration for convenience and efficiency. Use cases: automatic data verification | requesting testbed-specific information | other: ______. [6.357143 | 5.857143 | 37.23469]

5-8 Portal 3 (D5.1): portal authentication for extended login handling. Use case: approach selection based on usage experience. [4.857143 | 5 | 24.28571]


5-9 Portal 4 (D5.1): portal authorization and access for testbed policy conformance, allowing testbed PIs and/or admins to validate accounts. Use case: rule-based authorization. [6.071429 | 5.857143 | 35.56122]

5-10 Portal 5 (D5.1): portal to include testbed resources, allowing testbed-specific plug-ins (e.g. OpenFlow). Use case: integration and hand-over between MySlice and experimenter control tools. [5.928571 | 5.785714 | 34.30102]

5-11 MySlice (D5.1): extend the MySlice API to support additional plug-ins. [5.5 | 5.214286 | 28.67857]

5-12 Testbed directory (D5.1): testbed directory to become a both human- and machine-readable Yellow Pages service. Use cases: add new methods to the SFA API | MySlice update script for refresh. [6.285714 | 5.357143 | 33.67347]

5-13 Tools directory (D5.1): tools directory to conform to the tools wiki structure: content arrangement for sustainable usage by the community. Use cases: Best Current Practices (BCP) | tools endorsement. [5.428571 | 4.857143 | 26.36735]

5-14 Future reservations (D5.1): future RSVP broker to support all five types of reservation. Use cases: scheduler extension beyond FCFS | conflict resolution (custom algorithms) | AM liaison (send commands) with reserved resources | portal plug-in as RSVP broker front-end | RSVP broker packaging for testbed-specific installations. [5.928571 | 7.214286 | 42.77041]

5-15 SFA Exposure (D5.1): generic tools for testbed SFA exposure, for local administration and global availability of the testbed. Use cases: select and deploy (replace existing by) generic SFA wrapper | develop and deploy AMsoil | other: ______. [6.571429 | 5.857143 | 38.4898]

5-16 FRCP 1 (D5.1): FRCP/OMF6 general interaction for federated resource control. Use cases: reference interface adoption | novel development. [7.357143 | 7 | 51.5]

5-17 FRCP 2 (D5.1): FRCP experiment controller for AAA and resource description and discovery. [6.214286 | 6.785714 | 42.16837]

5-18 FRCP 3 (D5.1): FRCP adaptation for interoperability. Use cases: NEPI | OMF6. [6 | 6 | 36]

O-1 (only personal evaluations): 1. Project sustainability (by ID=Jame Bond). [9 | 9 | 81]


FP7-ICT-318389/FRAUNHOFER/R/PU/D5.2

77 of 120

© Copyright Fraunhofer FOKUS and other members of the Fed4FIRE consortium, 2014

Figure 5-1 Cycle 2 criticality sorted

[Bar chart: criticality (Cr) values of the 26 evaluated items, sorted in descending order; y-axis from 0 to 60.]
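The criticality values above are consistent with taking the product of the two mean ratings of each item (e.g. 6.071429 × 5.857143 ≈ 35.56122). A minimal sketch that recomputes and sorts them under that assumption (item labels abbreviated; the meaning of the two ratings is not stated here and is left unnamed):

```python
# Recompute and sort the "criticality" scores, assuming (as the table's
# numbers suggest) criticality = first mean rating * second mean rating.
items = {
    "5-9 Portal 4":            (6.071429, 5.857143),
    "5-10 Portal 5":           (5.928571, 5.785714),
    "5-11 MySlice":            (5.5,      5.214286),
    "5-12 Tb directory":       (6.285714, 5.357143),
    "5-13 Tools directory":    (5.428571, 4.857143),
    "5-14 Future reservations": (5.928571, 7.214286),
    "5-15 SFA Exposure":       (6.571429, 5.857143),
    "5-16 FRCP 1":             (7.357143, 7.0),
    "5-17 FRCP 2":             (6.214286, 6.785714),
    "5-18 FRCP 3":             (6.0,      6.0),
}

criticality = {name: a * b for name, (a, b) in items.items()}

# Sort descending, as in Figure 5-1 ("criticality sorted").
for name, cr in sorted(criticality.items(), key=lambda kv: -kv[1]):
    print(f"{name:26s} Cr = {cr:.5f}")
```

Running this reproduces the criticality column of the table, with FRCP 1 (Cr = 51.5) at the top.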


6 Appendix B: Service Portfolio Catalogue Entry: the guide

The table below is to be applied to each component of the Experiment Lifecycle Management Service; the table refers to your component as "your service". Even if your component is currently not a fully fledged service, try anyway to fill in the table from a service viewpoint.

Table 6-1 How to fill in the service portfolio catalogue entry

Basic Information
  Service name: Obvious; corresponds to the task name.
  General description: Be short but precise.
  User of the service: Please try to identify all possible invocations of your service: which F4F component, at both the OLA and the SLA side, can issue a request for your service? All these components are your users.

Service management
  Service Owner: You, the developer, the Task leader.
  Contact information (internal): Your e-mail.
  Contact information (external): Portal entry.
  Service status: Previous status (cycle 1), current status (cycle 2), future (cycle 3).
  Service Area/Category: Please specify as "category" all uses of your service within experiment lifecycle management; you may wish to refer to Table 2-5 and differentiate between OLA and SLA categories.
  Service agreements: SLA is n/a.

Detailed makeup
  Core service building blocks: Please refer to your service architecture; note what is planned.
  Additional building blocks: Please refer to your service architecture; note what is planned.
  Service packages: This service, when ready, is a package of core plus additional technical services.
  Dependencies: This is very important: please try to imagine all possible faults, not only within your service infrastructure but also within its surroundings; you may wish to refer to Table 2-5.

Technology Risks and Competitors
  Cost to provide (optional): Maintenance of software? Coordination costs? Computing cost? Communication cost? Other?
  Funding source (optional): Now: EU; later: subscription fee via OLA? Usage fee via SLA?
  Pricing (optional): Now: free of charge; later: reputation based?
  Value to customer (optional): Seamless and automated <your service name> on a federation of heterogeneous testbeds.
  Risks: Attacks! What did you write in dependencies?
  Competitors: Please refer to the state of the art in your service domain.

This service portfolio catalogue entry follows the FitSM template adopted by the Fed4FIRE project; however, given the technical nature of WP5, the last sub-table is renamed to read "Technology Risks and Competitors" instead of the original title "Business Case", and accordingly its first four rows become optional. This decision was made in WP5 together with Task 2.3 "Sustainability", where the business case issues will be addressed.


7 Appendix C: Related work on resource reservation

In shared-access experimentation platforms, requests are usually handled upon arrival, with immediate (instant) reservation of the selected physical resources, while the reservation period is dictated by the "lifetime" of the requested topology (e.g. [Chowdhury], [Papagianni], [Houidi]). The problem of allocating shared substrate resources to requests for virtual topologies is commonly referred to in the literature as the Virtual Network Embedding (VNE) or Virtual Network Mapping problem [Chowdhury]. Efficient allocation of physical resources among multiple requests is extremely important in order to maximize the number of coexisting virtual topologies and to increase the utilization of the underlying testbeds. The VNE problem with constraints on virtual nodes and virtual links can be reduced to the NP-hard multi-way separator problem [Andersen].

Several approaches have been followed to deal with the complexity and challenges related to the VNE problem. [Fischer], [LeivadeasC], [Haider] and [ChowdhuryB] provide literature reviews, including comprehensive overviews of the main challenges and diverse aspects of VNE, highlighting existing approaches and emerging requirements. Most of the proposed approaches decompose the problem into a node mapping phase and a link mapping phase to reduce the complexity of the allocation. During the node mapping phase a greedy solution can be employed to find appropriate physical nodes that satisfy the user's requirements, while link allocation can be performed using (k-)shortest-path or multi-commodity flow algorithms [Yu], [Szeto].
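A minimal, self-contained sketch of this common two-phase decomposition (greedy node mapping by residual CPU, then hop-count shortest-path link mapping); the substrate topology, capacities and virtual request below are invented for illustration and do not come from any cited system:

```python
from heapq import heappush, heappop

# Substrate network: node -> CPU capacity; links as adjacency with bandwidth.
cpu = {"a": 8, "b": 4, "c": 6, "d": 10}
adj = {"a": {"b": 10, "c": 5}, "b": {"a": 10, "d": 5},
       "c": {"a": 5, "d": 10}, "d": {"b": 5, "c": 10}}

def map_nodes(vnodes):
    """Phase 1 (greedy): assign each virtual node, largest demand first,
    to the feasible substrate node with the most residual CPU
    (one virtual node per substrate node)."""
    mapping, residual = {}, dict(cpu)
    for vn, demand in sorted(vnodes.items(), key=lambda kv: -kv[1]):
        host = max((n for n in residual
                    if residual[n] >= demand and n not in mapping.values()),
                   key=lambda n: residual[n], default=None)
        if host is None:
            return None  # request rejected: no feasible node
        mapping[vn] = host
        residual[host] -= demand
    return mapping

def shortest_path(src, dst):
    """Phase 2 helper: Dijkstra with hop count as cost."""
    pq, seen = [(0, src, [src])], set()
    while pq:
        d, n, path = heappop(pq)
        if n == dst:
            return path
        if n in seen:
            continue
        seen.add(n)
        for m in adj[n]:
            if m not in seen:
                heappush(pq, (d + 1, m, path + [m]))
    return None

# Virtual request: two nodes (with CPU demands) and one link between them.
node_map = map_nodes({"v1": 5, "v2": 3})
link_map = shortest_path(node_map["v1"], node_map["v2"])
```

Coordinated approaches such as [Chowdhury] improve on exactly this kind of uncoordinated two-phase scheme by making the node phase aware of the subsequent link-mapping cost.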
Other approaches use more sophisticated techniques (e.g. Mixed Integer Programming) to solve the two problems simultaneously [Houidi], or provide some type of coordination between the two phases [Chowdhury], [Papagianni]. Furthermore, the algorithms used can be categorized as distributed [Houidi08], [Houidi] or centralized [Chowdhury]. Centralized approaches are characterized by a central entity that is responsible for receiving user requests for virtual topologies and assigning resources to them, while distributed algorithms (e.g. [Houidi08]) avoid the central entity, which has been shown to suffer from scalability limitations. The objective of most of the above approaches is to maximize the revenue of the physical infrastructure during resource allocation, with the target of achieving balanced load on the physical resources (both nodes and links) [Yu], [Zhu], [Papagianni]. Other works also consider Quality of Service (QoS) related parameters (e.g. delay [Papagianni], [Lischka]).

Moreover, virtual topologies might be spread over multiple administrative domains (testbeds). This resource allocation problem is denoted inter-domain VNE and can basically be broken down into the following sub-problems [Pittaras]: (i) selecting the appropriate testbed to embed a segment of, or the entire, request (request partitioning) and (ii) solving the resulting distinct VNE problems for each testbed involved. To address the first issue, graph partitioning algorithms are employed. Specifically, the authors in [Xin] adopt the k-cut algorithm, while the authors in [Houidi] use a modification of the max-flow min-cut algorithm called the Ford-Fulkerson algorithm.
Other approaches use local search techniques [Zaheer], [Leivadeas], while the authors in [Houidi] propose an optimal solution based on a linear programming formulation. The objective of the above approaches is to minimize the cost of provisioning the particular request, based on the resource allocation costs advertised by the infrastructure providers. The cost of allocating a specific type of resource can be fixed/random [Houidi] or can be a function of the scarcity and the average utilization of that type of resource in each physical infrastructure [Leivadeas].

In case of advance reservation in a shared-access platform, the reservation system must keep a timeline of all current reservations that determines which resources are available at any given time, incurring extra complexity and computational time during resource allocation [Wior]. The authors in [Wiseman] focus on a more general version of VNE that supports advance scheduling of virtual network mappings, in the context of the Open Network Laboratory [ONL]. Specifically, the user requests a reservation for resources, defined by a graph representing a virtual network, a time interval of acceptable start times in the future, and the lifetime of the request. Subsequently, a


scheduler discovers the set of potential begin and finish times for the request within the specified time interval and allocates the resources according to two heuristics. The first heuristic tries to minimize the usage of bandwidth between the interconnected nodes, while the second prefers to reserve nodes that are not already reserved for other requests.

In experimentation platforms with exclusive resources, reservations are usually scheduled in advance. The user is guided through a web interface (calendar service) to reserve resources that do not conflict with other experiments in pre-specified time slots, while the reservation system ensures that each user will have access to the reserved resources at the corresponding time [Stasi], [Hurni], [Welsh], [Ju], [AnadiotisB], [LeivadeasB]. On the other hand, the authors in [Niavis], [Chun] adopt a combinatorial auctioning model, where users compete for the testbed resources by offering a maximum value amount for space/time resources. The users submit their request as an abstract resource specification, while the resource allocation system is responsible for discovering the desired resources that satisfy the constraints. The broker selects the users that maximize the aggregated value, giving users the opportunity to prioritize their requests according to the bidding value.
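Such a reservation timeline can be sketched as a per-resource list of booked intervals plus a scan for a feasible start time; the bookings and the simple FCFS scan below are illustrative, not taken from any cited system:

```python
# Minimal advance-reservation timeline: per-resource lists of (start, end)
# bookings in abstract time units, plus a search for the earliest feasible
# start time of a new request inside a requested window (FCFS, no preemption).
timeline = {
    "node1": [(0, 10), (20, 30)],
    "node2": [(5, 15)],
}

def is_free(resource, start, end):
    """A resource is free iff the requested half-open interval [start, end)
    overlaps none of its existing bookings."""
    return all(end <= s or start >= e for s, e in timeline[resource])

def earliest_start(resources, duration, window):
    """Scan candidate start times in `window`; return the first time at
    which every requested resource is free for `duration`."""
    lo, hi = window
    for t in range(lo, hi - duration + 1):
        if all(is_free(r, t, t + duration) for r in resources):
            return t
    return None  # no feasible start time in the window

start = earliest_start(["node1", "node2"], 5, (0, 40))
```

With the bookings above, the earliest slot in which both nodes are free for 5 units is t = 15. The linear scan illustrates the extra bookkeeping [Wior] mentions; a real broker would index the timeline more efficiently.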

7.1 References for Appendix C

[Chowdhury] M. Chowdhury, M. R. Rahman, R. Boutaba, "ViNEYard: Virtual Network Embedding Algorithms With Coordinated Node and Link Mapping," IEEE/ACM Transactions on Networking, vol. 20, no. 1, pp. 206-219, Feb. 2012.

[ChowdhuryB] N. M. Mosharaf Kabir Chowdhury, R. Boutaba, "A survey of network virtualization," Computer Networks, vol. 54, no. 5, pp. 862-876, Apr. 2010, doi:10.1016/j.comnet.2009.10.017.

[Haider] A. Haider, R. Potter, A. Nakao, "Challenges in Resource Allocation in Network Virtualization," ITC Specialist Seminar on Network Virtualization, May 2009.

[Papagianni] C. Papagianni, A. Leivadeas, S. Papavassiliou, V. Maglaris, C. Cervello-Pastor, A. Monje, "On the Optimal Allocation of Virtual Resources in Cloud Computing," IEEE Transactions on Computers, vol. 62, no. 6, pp. 1060-1071, Jun. 2013.

[Houidi] I. Houidi, W. Louati, W. B. Ameur, D. Zeghlache, "Virtual network provisioning across multiple substrate networks," Computer Networks (Elsevier), vol. 55, no. 2, pp. 1011-1023, 2011.

[Andersen] D. Andersen, "Theoretical Approaches To Node Assignment," unpublished manuscript, available at http://www-2.cs.cmu.edu/~dga/papers/andersen-assign.ps, 2002.

[Leivadeas] A. Leivadeas, C. Papagianni, S. Papavassiliou, "Efficient Resource Mapping Framework over Networked Clouds via Iterated Local Search based Request Partitioning," IEEE Transactions on Parallel and Distributed Systems, vol. 24, no. 6, pp. 1077-1086, June 2013.

[LeivadeasB] A. Leivadeas, C. Papagianni, S. Papavassiliou, "An Architecture for Virtual Network Embedding in Wireless Systems," IEEE NCCA, pp. 62-68, Nov. 2011.

[LeivadeasC] A. Leivadeas, C. Papagianni, S. Papavassiliou, "Socio-aware virtual network embedding," IEEE Network, vol. 26, no. 5, pp. 35-43, 2012.

[Fischer] A. Fischer, J. Botero, M. Beck, H. De Meer, X. Hesselbach, "Virtual Network Embedding: A Survey," IEEE Communications Surveys & Tutorials, vol. PP, no. 99, pp. 1-19, 2012.

[Yu] M. Yu, Y. Yi, J. Rexford, M. Chiang, "Rethinking virtual network embedding: substrate support for path splitting and migration," ACM SIGCOMM Computer Communication Review, vol. 38, no. 2, pp. 17-29, Apr. 2008.

[Szeto] W. Szeto, Y. Iraqi, R. Boutaba, "A Multi-Commodity Flow Based Approach to Virtual Network Resource Allocation," IEEE Global Telecommunications Conference (GLOBECOM '03), vol. 6, pp. 3004-3008, Dec. 2003.

[Houidi08] I. Houidi, W. Louati, D. Zeghlache, "A Distributed Virtual Network Mapping Algorithm," IEEE International Conference on Communications (ICC '08), pp. 5634-5640, May 2008.


[Zhu] Y. Zhu, M. H. Ammar, "Algorithms for Assigning Substrate Network Resources to Virtual Network Components," IEEE International Conference on Computer Communications (INFOCOM '06), pp. 1-12, Apr. 2006.

[Lischka] J. Lischka, H. Karl, "A Virtual Mapping Algorithm based on Subgraph Isomorphism Detection," ACM Workshop on Virtualized Infrastructure Systems and Architectures (SIGCOMM '09), pp. 81-88, August 2009.

[Pittaras] C. Pittaras, C. Papagianni, A. Leivadeas, P. Grosso, J. van der Ham, S. Papavassiliou, "Resource Discovery and Allocation for Federated Virtualized Infrastructures," accepted for publication in Future Generation Computer Systems (Elsevier), January 2014.

[Xin] Y. Xin, I. Baldine, A. Mandal, C. Heermann, J. Chase, A. Yumerefendi, "Embedding Virtual Topologies in Networked Clouds," Proc. Sixth International Conference on Future Internet Technologies, pp. 26-29, June 2011.

[Zaheer] F. Zaheer, J. Xiao, R. Boutaba, "Multi-provider service negotiation and contracting in network virtualization," IEEE Network Operations and Management Symposium (NOMS 2010), pp. 471-478, Osaka, June 2010.

[Wior] I. R. Wior, Z. J. Zhao, M. Luo, J. B. Zhang, S. S. Ge, H. C. Lau, "Conceptual framework of a dynamic resource allocation testbed and its practical realization with ProModel," IEEE Control Applications (CCA) & Intelligent Control (ISIC), pp. 1613-1618, 2009.

[Wiseman] C. Wiseman, J. Turner, "The Virtual Network Scheduling Problem for Heterogeneous Network Emulation Testbeds," Washington University in Saint Louis, Tech. Rep. WUCSE-2009-68, September 2009.

[ONL] Open Network Laboratory, available at: http://onl.wustl.edu/

[Stasi] G. Di Stasi, R. Bifulco, S. Avallone, R. Canonico, A. Apostolaras, N. Giallelis, T. Korakis, L. Tassiulas, "Interconnection of geographically distributed wireless mesh testbeds: Resource sharing on a large scale," Ad Hoc Networks, vol. 9, no. 8, pp. 1389-1403, 2011.

[Hurni] P. Hurni, M. Anwander, G. Wagenknecht, T. Staub, T. Braun, "TARWIS – A Testbed Management Architecture for Wireless Sensor Network Testbeds," pp. 1-4, 2011.

[Welsh] M. Welsh, G. Werner-Allen, "MoteLab: Harvard Sensor Network Testbed," http://motelab.eecs.harvard.edu

[Ju] X. Ju, H. Zhang, D. Sakamuri, "NetEye: A User-Centered Wireless Sensor Network Testbed for High-Fidelity, Robust Experimentation," International Journal of Communication Systems, vol. 25, pp. 1213-1229, 2012.

[AnadiotisB] A. C. Anadiotis, A. Apostolaras, D. Syrivelis, T. Korakis, L. Tassiulas, L. Rodriguez, M. Ott, "A New Slicing Scheme for Efficient Use of Wireless Testbeds," WiNTECH '09, pp. 83-84, Sep. 2009.

[Niavis] H. Niavis, K. Choumas, G. Iosifidis, T. Korakis, L. Tassiulas, "Auction-based Scheduling of Wireless Testbed Resources," to be presented at the IEEE Wireless Communications and Networking Conference (WCNC), Istanbul, Turkey, 6-9 April 2014.

[Chun] B. N. Chun, P. Buonadonna, A. AuYoung, C. Ng, D. Parkes, J. Shneidman, A. C. Snoeren, A. Vahdat, "Mirage: A Microeconomic Resource Allocation System for Sensornet Testbeds," Proc. 2nd IEEE Workshop on Embedded Networked Sensors, pp. 1-6, 2005.

[Ghijsen] M. Ghijsen, J. van der Ham, P. Grosso, C. Dumitru, H. Zhu, Z. Zhao, C. de Laat, "A semantic-web approach for modeling computing infrastructures," Computers & Electrical Engineering, vol. 39, no. 8, pp. 2553-2565, November 2013.

[VDHAM] J. van der Ham, J. Stéger, S. Laki, Y. Kryftis, V. Maglaris, C. de Laat, "The NOVI information models," Future Generation Computer Systems, available online 18 December 2013, ISSN 0167-739X, http://dx.doi.org/10.1016/j.future.2013.12.017.


8 Appendix D: SLA type guaranteeing X% uptime of Y% of the resources

The evaluations will be based on the different constraints defined in the Agreement.

An experiment is performed over a slice, and the SLA is enforced per sliver. The SLA evaluation will be the set of evaluations of all the slivers, which the experimenter can request once the sliver is released.

The SLA type agreed for implementation in iMinds' testbeds (w-iLab.t and Virtual Wall) guarantees an uptime rate of X for a rate Y of the resources during the sliver. This can be seen as a pilot implementation of more elaborate SLAs that can be extended to other testbeds in the future. The mechanism will be based on infrastructure monitoring and is valid both for immediate provisioning of resources and for reservation-based experimentation.

SLAs on future reservations are handled like the per-sliver SLA evaluation: once the corresponding slivers are released, one can evaluate whether the uptime/availability of all promised SLAs was met.

Some requirements needed in order to perform the SLA evaluation are:

• The Aggregate Manager (AM) should inform the SLA management module when slivers have been provisioned or released and which resources they contain. Once the resources of a sliver have been provisioned, they will not change during the entire sliver lifecycle. This information is the RSpec manifest.

• Once the AM has indicated to the SLA management module which resources are involved in the sliver, the SLA management module contacts the Monitoring System and provides the following information in order to retrieve monitoring data:
  - The identifiers of the resources involved in that sliver.
  - The metric with which to monitor the resources (i.e. availability).
  - The time period of SLA monitoring.
  - The maximum number of results: in case several metrics per resource are received in this time interval, the number of metrics can be limited to a maximum number.

• As soon as the SLA management module begins receiving monitoring data, the enforcement (evaluation) starts.

• The SLA management module is notified by the AM that the sliver has been released. Only at this
moment can the evaluation be requested by the experimenter.
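The information handed over to the Monitoring System could, for example, be bundled as follows. Every field name and value in this sketch is hypothetical: Fed4FIRE does not define this exact payload in the text above, only the four pieces of information it must carry:

```python
# Illustrative query the SLA management module might send to the
# Monitoring System for one sliver; all field names, identifiers and
# dates are hypothetical examples, not a defined Fed4FIRE API.
monitoring_query = {
    "resources": ["urn:resource_1", "urn:resource_2"],  # ids in the sliver
    "metric": "availability",                 # UP = 1, DOWN = 0
    "period": {"start": "2014-03-01T00:00:00Z",
               "end":   "2014-03-02T00:00:00Z"},
    "max_results": 100,                       # cap on samples per resource
}

def validate(query):
    """Check that the four required pieces of information listed in the
    requirements above are present in the query."""
    required = {"resources", "metric", "period", "max_results"}
    return required <= set(query)
```

Whatever the concrete encoding (XML-RPC, REST, etc.), the same four fields map one-to-one onto the bullet list above.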

To further clarify how the SLAs are evaluated, we illustrate how the corresponding resources of different slivers from two testbeds (Testbed A, Testbed B) can be used during the lifecycle of an experiment.

• Testbed A: during the lifecycle of the experiment, two slivers will be provisioned with their corresponding resources:
  - Sliver_1: resource 1, resource 2
  - Sliver_2: resource 3, resource 4

• Testbed B: only one sliver will be provisioned in this testbed:
  - Sliver_3: resource 5, resource 6

The lifecycles of these slivers could run in parallel or consecutively. In the example, they are considered to run in parallel.


Figure 8-1 Availability (A) of resources during the experiment lifecycle

X – required uptime rate of a sliver
Y – required rate of the number of resources (0..1)
Ari – availability value of a resource (0 = not available, 1 = available)
uTri – total uptime rate of a specific resource
tri – active time of a specific resource during the lifecycle of the sliver

The SLA evaluation is calculated in the following way:

• The availability status (UP = 1, DOWN = 0) of each resource will be stored by the SLA management module at every SLA monitoring interval during the time the resource has been active within a sliver.

• The uptime rate of each resource is calculated as the sum of all the availability values (UP = 1, DOWN = 0) obtained for that resource, divided by the number of SLA monitoring intervals during the time the resource has been active:

    uTri = ( Σ Ari ) / tri

  where the sum runs over the monitoring intervals within tri, and tri is expressed as the number of such intervals.

• The SLA is met when the X uptime rate defined in the SLA is fulfilled by at least a Y rate of the resources of the corresponding sliver.

• Finally, the SLA evaluation of the different testbeds is shown to the experimenter, who may also request the SLA performance of each sliver for a specific testbed as soon as the slivers are released.
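The calculation above can be sketched end-to-end in a few lines. The availability samples below are illustrative, chosen so that the per-resource uptime rates reproduce the example values of Table 8-1 (X = 0.75, Y = 0.8):

```python
# Per-resource availability samples (UP = 1, DOWN = 0), one value per
# SLA monitoring interval while the resource was active in its sliver.
# The concrete sample sequences are invented for illustration.
samples = {
    "Sliver_1": {"resource_1": [1, 1, 0], "resource_2": [1, 0, 1]},
    "Sliver_2": {"resource_3": [1, 1, 1, 0], "resource_4": [1, 1, 0, 1]},
    "Sliver_3": {"resource_5": [1, 0, 1], "resource_6": [1, 1, 1]},
}

X, Y = 0.75, 0.8  # uptime rate X required for at least a rate Y of resources

def evaluate(sliver):
    """SLA is met (OK) iff at least a fraction Y of the sliver's
    resources reach an uptime rate uTri >= X."""
    rates = {r: sum(v) / len(v) for r, v in samples[sliver].items()}
    ok = sum(1 for u in rates.values() if u >= X)
    return "OK" if ok / len(rates) >= Y else "KO"

results = {s: evaluate(s) for s in samples}
```

With these samples, Sliver_1 and Sliver_3 fail the SLA and Sliver_2 meets it, matching the evaluation shown in Table 8-1.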


The following table shows the values of the SLA evaluation, supposing that the SLA to be met is: "Testbeds guarantee a 0.75 uptime rate for a 0.8 rate of the resources during the sliver."

Table 8-1 SLA Evaluation example

Testbed A, Sliver_1:
  resource_1: uTri = 2/3; 2/3 < 0.75 -> KO
  resource_2: uTri = 2/3; 2/3 < 0.75 -> KO
  Rate of resources meeting the agreed uptime rate: 0 -> (0 < 0.8) SLA KO

Testbed A, Sliver_2:
  resource_3: uTri = 3/4; 3/4 >= 0.75 -> OK
  resource_4: uTri = 3/4; 3/4 >= 0.75 -> OK
  Rate of resources meeting the agreed uptime rate: 1 -> (1 >= 0.8) SLA OK

Testbed B, Sliver_3:
  resource_5: uTri = 2/3; 2/3 < 0.75 -> KO
  resource_6: uTri = 1; 1 >= 0.75 -> OK
  Rate of resources meeting the agreed uptime rate: 0.5 -> (0.5 < 0.8) SLA KO


9 Appendix E: Cycle 2 use case for the experiment controller NEPI

We propose a VLC/CCN streaming experiment involving the PlanetLab testbed and an OMF 6.0 testbed. We want to use different components of the Fed4FIRE architecture for cycle 2 to run a cross-testbed experiment. The goal of this experiment is to compare the performance of CCN and classic VoD streaming deployments for streaming video from a content provider to a wireless home setting, across the Internet.

Concrete interpretation: You live in a house with three rooms and want to watch a movie in your living room. As you did not buy this movie, you will use a VoD website such as Netflix to provide the video. The video has to be transferred from the VoD content provider to the Internet box of your house. Moreover, your two kids decide to watch the movie as well, but from their bedroom, because they want to play at the same time. You decide to share the streamed content over wireless with their laptop.

Modelling: The Netflix servers are modeled by PlanetLab nodes. Indeed, PlanetLab is a platform of nodes spread all around the world and connected through the Internet, so it represents conditions similar to the Netflix architecture. Using PlanetLab, the video can be streamed through the Internet until it reaches the entrance of the wireless testbed. OMF (cOntrol and Management Framework) is a well-known component used in wireless testbeds. An OMF testbed is composed of a gateway, which orchestrates the nodes and usually hosts the XMPP server (used for communication), and the nodes themselves. For this use case, the OMF nodes represent the video streaming clients (one in each bedroom), and the gateway acts as the home media center.


10 Appendix F: User Interface / Portal Requirements for cycle 2 (per testbed) and commitments (per partner)

Table 10-1 Cycle 2 requirements per testbed

PlanetLab Europe
  Cycle 1: getVersion (SFA) | ListResources (SFA) | Resources in a slice | Browse resources in MySlice | Reservation
  Cycle 2: -

NITOS
  Cycle 1: getVersion (SFA) | ListResources (SFA) | Resources in a slice | Browse resources in MySlice
  Cycle 2: Reservation

Virtual Wall
  Cycle 1: getVersion (SFA) | ListResources (SFA) | Resources in a slice | Browse resources in MySlice
  Cycle 2: Reservation

Norbit (NICTA)
  Cycle 1: -
  Cycle 2: -

KOREN
  Cycle 1: -
  Cycle 2: -

w-iLab.t
  Cycle 1: getVersion (SFA) | ListResources (SFA) | Resources in a slice | Browse resources in MySlice
  Cycle 2: Reservation

NETMODE
  Cycle 1: getVersion (SFA) | ListResources (SFA) | Resources in a slice | Browse resources in MySlice
  Cycle 2: Reservation

FUSECO Playground
  Cycle 1: getVersion | ListResources | Browse resources in MySlice
  Cycle 2: Resources in a slice | Reservation

Smart Santander
  Cycle 1: RSpec definition | getVersion
  Cycle 2: ListResources | Resources in a slice | Browse resources in MySlice

OFELIA
  Cycle 1: RSpec definition | VPN setup | getVersion | ListResources
  Cycle 2: Resources in a slice | Browse resources in MySlice | Reservation


BonFire     RSpec  definition  Development  of  an  SFAWrap  Driver  

getVersion  ListResources  Resources  in  a  slice  Browse  resources  in  MySlice  Reservation  

Table 10-2: Partner commitments on Portal enhancements

UPMC: Provide support to Fed4FIRE partners
• User friendliness
• Send an email to the PI
• Pending requests per sub-authority
• Validation of users & slices in the web interface (Do we allow slice creation?)
• Warn the user if the delegation has expired

UTH: MySlice scheduler plugin
• Adapt the NITOS plugin to the new Django version
• Extend the plugin with a new design

iMinds: Investigate how the portal can work together with the Virtual Wall authority for users.
• We will make the jFed topology tool start from the portal using Java Web Start and investigate a smooth handover of credentials from the portal to the jFed topology tool.

NTUA: Implementation plan for the Reputation plugin
• Communicates with the Central Reservation Broker to obtain a list of the user's experiments and provides a "Rate" button for each experiment
• A form is provided to the user with questions regarding his experience with the F4F testbeds for each experiment
• Use of the SQL gateway in Manifold to store the user's feedback in our remote reputation database (MySQL)
• The reputation service will combine the user's feedback with the corresponding infrastructure monitoring data for each experiment (plus SLA information, if available) and will compute reputation scores for the testbeds
• Reputation scores will be posted on a web page (F4F Portal)

Atos: Implementation plan for the SLA plugin
• Communicates with the SLA Central Component:
  - It acts as a broker between the different client tools and the different SLA management modules of each testbed.
  - It will receive experiment and SLA information from the Portal and send it to the SLA management module at each testbed.
  - It will gather the different warnings and the final evaluation when the experimenter requests it.
• It will communicate and provide SLA evaluation information to the Reputation service.
In this cycle, there will be two forms in the Portal:
• One at the moment of the reservation/negotiation; it will be integrated within the reservation form. There are two possible options, depending on the provider chosen:
  - iMinds: the uptime percentage of availability of the resources will be shown in order to be accepted by the experimenters.
  - Other testbeds: the uptime percentage of each facility will be shown in order to be accepted by the experimenters.
  There will be only one SLA per experimenter and per facility, covering the availability of the facility. A single confirmation will be enough for all SLAs.
• The other for SLA evaluation: SLA evaluations will be shown to the experimenters. Only reports and warnings will be shown, when the experimenter requests them (all testbeds will be shown).

i2CAT & Univ. Bristol: Provide testbed connectivity
• MySlice must be connected to the OFELIA VPN
• Plugin for graphical representation and reservation of OpenFlow resources
• Plugin for listing VM resources (already started)

TUB: Implementation plan for the Ontology plugin
• Parsing and representation of ontology-based resource descriptions pushed by the involved testbeds (browsing resources)
• Depending on the chosen architecture, potentially offering a central ontology-based resource catalogue based on the different resource descriptions pushed by the involved testbeds (central resource database)
• A form to search for resources based on the ontology-based technology (resource selection)
• Use of Manifold-based information to enlarge the information base for resources (e.g. pushed monitoring data)
• Needed extensions for other MySlice plugins (such as NEPI, SLA, …)

INRIA:
• Validate the usefulness and usability of NEPI scheduling/workflow capabilities
• Support running experiments on FRCP-enabled testbeds (OMF 6.0 support)
• Support handover with MySlice (support the Manifold API)
• Support conducting OpenFlow experiments in PlanetLab with Open vSwitch


11 Appendix G: Descriptions of methods and use cases of the application services directory

11.1 Methods Description

A detailed description of each available REST method is given below.

GET – Infrastructure Service Discovery

Syntax:
GET /fedservices

Functionality:
Returns a list of the Infrastructure Services available from the federation. Access to this method is restricted to federated Experimenters.

Parameters:
N/A

Returns:
HTTP/1.1 200 OK
List with the name and a brief description of the available Infrastructure Services.

Errors:
N/A

GET – Application Service Discovery

Syntax:
GET /appservices

Functionality:
Returns a list with the name and a brief description of the available Application Services.

Parameters:
N/A

Returns:
HTTP/1.1 200 OK
List with the name and a brief description of the available Application Services.

Errors:
N/A

 

GET – Filtered Search

Syntax:
GET /appservices?attr={value}

Functionality:
Returns a list with the name and a brief description of the Application Services whose attributes match the specified values.

Parameters:
attr: Application Service attribute defined in the metamodel.

Returns:
HTTP/1.1 200 OK
List with the name and a brief description of the Application Services whose attributes match the specified values.

Errors:
HTTP/1.1 400 Bad Request: if the request contains invalid parameters.

 

GET – Free Text Search

Syntax:
GET /appservices/search?keywords={words}

Functionality:
Returns a list with the name and a brief description of the Application Services that contain the specified words in their description.

Parameters:
keywords: Specific words to search for in the Service Directory.

Returns:
HTTP/1.1 200 OK
List with the name and a brief description of the Application Services that contain the specified words in their description.

Errors:
HTTP/1.1 400 Bad Request: if the request contains invalid parameters.
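The two search methods differ only in how the query string is built. A minimal client-side sketch (the host name and the `category` attribute are illustrative; the deliverable specifies only the resource paths):

```python
from urllib.parse import urlencode

# Hypothetical base URL: the deliverable does not fix a host for the Service
# Directory, only the resource paths below.
BASE = "https://portal.example.org/servicedirectory"

def filtered_search_url(**attrs):
    """Build a GET /appservices?attr={value} URL from metamodel attributes."""
    return f"{BASE}/appservices?{urlencode(attrs)}"

def free_text_search_url(*words):
    """Build a GET /appservices/search?keywords={words} URL."""
    return f"{BASE}/appservices/search?{urlencode({'keywords': ' '.join(words)})}"

print(filtered_search_url(category="monitoring"))
print(free_text_search_url("video", "streaming"))
```

Invalid attribute names would be rejected by the directory with 400 Bad Request, as documented above.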

 

GET – View Application Service Detailed Information

Syntax:
GET /appservices/{appserviceID}

Functionality:
Returns the detailed description of the specified Application Service.

Parameters:
appserviceID: Application Service identifier.

Returns:
HTTP/1.1 200 OK
Body with the detailed description of the specified Application Service.

Errors:
HTTP/1.1 400 Bad Request: if the request contains invalid parameters.

 

POST – Publish Application Service

Syntax:
POST /appservices
Body containing the Application Service structure described in Error! Reference source not found.. All attributes must be filled.

Functionality:
Creates a new Application Service description in the Service Directory. This method should only be used by the Service Provider and requires validation of the corresponding authentication credentials.

Parameters:
N/A

Returns:
HTTP/1.1 201 Created: the "Location" header will contain the lookup URL for the newly created resource.

Errors:
HTTP/1.1 401 Unauthorized: if the Service Provider's credentials are not correct.
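A Service Provider client therefore has to assemble a JSON body plus its credentials before calling POST /appservices. A minimal sketch that only prepares the request (the attribute names and the bearer-token scheme are assumptions; the deliverable requires credential validation but does not fix the mechanism):

```python
import json

def build_publish_request(service, credentials):
    """Prepare (but do not send) a POST /appservices request.

    `service` carries the Application Service structure; the attribute
    names used in the example call are illustrative only.
    """
    headers = {
        "Content-Type": "application/json",
        # Assumed authentication scheme; the text only requires that the
        # Service Provider's credentials be validated.
        "Authorization": f"Bearer {credentials}",
    }
    return ("POST", "/appservices", headers, json.dumps(service))

method, path, headers, body = build_publish_request(
    {"name": "iperf-server", "description": "Throughput test endpoint"},
    "provider-token")
```

On success the client should read the "Location" header of the 201 response to obtain the lookup URL of the new entry.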

 

PUT – Update Application Service Information

Syntax:
PUT /appservices/{appserviceID}
Body containing the Application Service structure described in Error! Reference source not found. with the attributes to be updated.

Functionality:
Updates the information of an Application Service. This method should only be used by the Service Provider and requires validation of the corresponding authentication credentials.

Parameters:
appserviceID: Application Service identifier.

Returns:
HTTP/1.1 200 OK

Errors:
HTTP/1.1 400 Bad Request: if the request contains an invalid Application Service identifier.
HTTP/1.1 401 Unauthorized: if the Service Provider's credentials are not correct.

 

DELETE – Remove Application Service

Syntax:
DELETE /appservices/{appserviceID}

Functionality:
Removes the specified Application Service description from the Service Directory. This method should only be used by the Service Provider and requires validation of the corresponding authentication credentials.

Parameters:
appserviceID: Application Service identifier.

Returns:
HTTP/1.1 204 No Content: the Application Service was successfully deleted and the server does not need to return anything.

Errors:
HTTP/1.1 400 Bad Request: if the request contains an invalid Application Service identifier.
HTTP/1.1 401 Unauthorized: if the Service Provider's credentials are not correct.
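Taken together, the methods above use a small, consistent set of status codes, so a client can dispatch on them uniformly. A minimal sketch (the outcome labels are ours; the codes are the ones documented above):

```python
# Status codes documented for the Service Directory methods above.
OUTCOMES = {
    200: "ok",
    201: "created",    # POST: the "Location" header carries the lookup URL
    204: "deleted",    # DELETE: success, no response body
    400: "bad request (invalid identifier or parameters)",
    401: "unauthorized (provider credentials rejected)",
}

def interpret(status):
    """Map an HTTP status code returned by the directory to an outcome label."""
    return OUTCOMES.get(status, f"unexpected status {status}")

print(interpret(204))
```

Note that 401 can only occur on the provider-facing methods (POST, PUT, DELETE), since the read-only methods require no provider credentials.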

11.2 Use Cases

Use cases for the different available actions on the Service Directory are presented below.

11.2.1 Services discovery

Use case: Services discovery
Primary actor: Experimenter
Description:
An Experimenter (federated or not) can access the services offered by Federation Members. This can be done in three different ways: by browsing the complete set of application services, through filtered queries by predefined categories, or via free text search.


Objective:
Federated Experimenters (through the Fed4FIRE portal) or non-federated Experimenters (through an alternative access) should be able to obtain a list with a brief description of the available services.

Preconditions:
Infrastructure and Application Services have to be published in the Service Directory.

Workflow:
1. The federated Experimenter uses the Fed4FIRE portal to discover the available services in the Service Directory. A non-federated Experimenter uses an alternative access to discover Application Services. Application Service discovery can be done in three different ways:
   a. Browsing through the complete set of application services.
   b. Using category filters composed of a key-value pair. The key is defined by the metamodel of the services.
   c. Using a free text search applied to all the services.
2. The Service Directory obtains the requested application services from the repository.
3. The Service Directory returns a list with the specified resources.

Variations:
The Experimenter can select the method for discovering the application services:
• Browsing through the complete set.
• Using category filters.
• Using free text search.

Diagrams:  

Figure 11-1: Application Services discovery Use Case diagram


 Figure  11-­‐2:  Application  Services  discovery  sequence  diagram  

11.2.2 Detailed view of an application service

Use case: Detailed information view of an application service
Primary actor: Experimenter
Description:
An Experimenter (federated or not) can access the detailed information of a specific application service. The information provided will contain everything necessary to use the service.

Objective:
Federated Experimenters (through the Fed4FIRE portal) or non-federated Experimenters (through an alternative access) should be able to obtain a detailed view of a selected application service that allows them to use it.

Preconditions:
Infrastructure and Application Services have to be published in the Service Directory.

Workflow:
1. The Experimenter uses the Fed4FIRE portal to obtain a detailed view of the selected application service. A non-federated Experimenter uses an alternative access to discover those services.
2. The Service Directory obtains all the available information on the requested application service from the repository.
3. The Service Directory returns the detailed view.


Variations:
N/A

Diagrams:

Figure 11-3: Detailed information view Use Case diagram

11.2.3 Application services management

Use case: Application service management
Primary actor: Service Provider
Description:
A Service Provider can publish the application services offered by its facility on the Service Directory. Service Providers should also be able to modify the information provided, or even remove an application service that is no longer offered.

Objective:
Service Providers should be able to manage the application services they offer on the Service Directory.

Preconditions:
The Service Provider has to be federated.
The Service Provider has to be authenticated.

Workflow:
1. The Service Provider publishes a new application service by introducing the corresponding information in the Service Directory.
2. The Service Provider can update the information provided for that or previously published services.
3. The Service Provider can remove the application services that are no longer offered.

Variations:
N/A

Diagrams:


 Figure  11-­‐4:  Application  service  management  Use  Case  diagram  

 Figure  11-­‐5:  Application  service  management  sequence  diagram  

11.2.4 Service Directory management

Use case: Service Directory management
Primary actor: Federator
Description:
The Federator, as defined in D2.4, can manage the Service Directory by performing the following tasks: viewing and fixing format errors in published application services, managing the repository database, and managing user access to the Service Directory.

Objective:
Manage the Service Directory.

Preconditions:
The Federator has to be authenticated.

Workflow:
1. The Federator grants federated Service Providers access to create, update and delete application services.
2. The Federator can check the format of published application services and fix any errors.
3. The Federator can manage the repository database.

Variations:
N/A

Diagrams:

Figure 11-6: Service Directory management Use Case diagram


 Figure  11-­‐7:  Service  Directory  management  Sequence  diagram  


12 Appendix H: Example Descriptions and Queries of Ontology-based Resource Descriptions

12.1 Example Ontologies

The documents below show ontologies used to describe and query information about resources.

Definition of GENI-related information (i.e. the type of request)

@prefix : <http://geni.net/ontology#> .
@prefix nml: <http://schemas.ogf.org/nml/base/2013/02#> .
@prefix omn: <http://open-multinet.info/ontology#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix xml: <http://www.w3.org/XML/1998/namespace#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@base <http://geni.net/ontology#> .

<http://geni.net/ontology> rdf:type owl:Ontology ;
    rdfs:comment """Ontology to support SFA related terminologies in the Open Multinet Ontology.

Alexander Willner <alexander.willner@tu-berlin.de>"""@en .

#################################################################
#
#    Annotation properties
#
#################################################################

###  http://geni.net/ontology#expires

:expires rdf:type owl:AnnotationProperty ;
    rdfs:domain :Message .

###  http://geni.net/ontology#generated

:generated rdf:type owl:AnnotationProperty ;
    rdfs:domain :Message .

###  http://geni.net/ontology#manager

:manager rdf:type owl:AnnotationProperty ;
    rdfs:domain nml:Group .

###  http://geni.net/ontology#type

:type rdf:type owl:AnnotationProperty .

#################################################################
#
#    Object Properties
#
#################################################################

###  http://geni.net/ontology#type

:type rdf:type owl:ObjectProperty ;
    rdfs:domain :Message ;
    rdfs:range :MessageType .

#################################################################
#
#    Data properties
#
#################################################################

###  http://geni.net/ontology#expires

:expires rdf:type owl:DatatypeProperty ;
    rdfs:range xsd:dateTime .

###  http://geni.net/ontology#generated

:generated rdf:type owl:DatatypeProperty ;
    rdfs:range xsd:dateTime .

###  http://geni.net/ontology#generatedBy

:generatedBy rdf:type owl:DatatypeProperty ;
    rdfs:domain :Message ;
    rdfs:range xsd:string .

###  http://geni.net/ontology#manager

:manager rdf:type owl:DatatypeProperty ;
    rdfs:range xsd:anyURI ;
    rdfs:subPropertyOf owl:topDataProperty .

#################################################################
#
#    Classes
#
#################################################################

###  http://geni.net/ontology#Advertisement

:Advertisement rdf:type owl:Class ;
    rdfs:subClassOf :MessageType .

###  http://geni.net/ontology#AggregateManager

:AggregateManager rdf:type owl:Class ;
    rdfs:subClassOf nml:Service .

###  http://geni.net/ontology#Manifest

:Manifest rdf:type owl:Class ;
    rdfs:subClassOf :MessageType .

###  http://geni.net/ontology#Message

:Message rdf:type owl:Class .

###  http://geni.net/ontology#MessageType

:MessageType rdf:type owl:Class .

###  http://geni.net/ontology#Request

:Request rdf:type owl:Class ;
    rdfs:subClassOf :MessageType .

###  http://schemas.ogf.org/nml/base/2013/02#Service

nml:Service rdf:type owl:Class .

###  Generated by the OWL API (version 3.4.2) http://owlapi.sourceforge.net
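To illustrate how such descriptions can be queried, the following SPARQL query (our sketch, not part of the deliverable) would list the concrete message types defined in the GENI ontology above, i.e. Advertisement, Request and Manifest:

```sparql
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX geni: <http://geni.net/ontology#>

# All declared subclasses of geni:MessageType
SELECT ?type WHERE {
  ?type rdfs:subClassOf geni:MessageType .
}
```

The same pattern generalises to the upper ontology below, e.g. selecting all nml:Node instances whose omn up flag is true.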

   

Definition  of  a  possible  upper  ontology  (i.e.  basis  for  more  specific  ones)  

@prefix  :  <http://www.semanticweb.org/owl/owlapi/turtle#>  .  @prefix  nml:  <http://schemas.ogf.org/nml/base/2013/02#>  .  @prefix  owl:  <http://www.w3.org/2002/07/owl#>  .  @prefix  rdf:  <http://www.w3.org/1999/02/22-­‐rdf-­‐syntax-­‐ns#>  .  @prefix  xml:  <http://www.w3.org/XML/1998/namespace#>  .  @prefix  xsd:  <http://www.w3.org/2001/XMLSchema#>  .  @prefix  geni:  <http://geni.net/ontology#>  .  @prefix  indl:  <http://www.science.uva.nl/research/sne/indl#>  .  @prefix  novi:  <http://fp7-­‐novi.eu/im.owl#>  .  @prefix  rdfs:  <http://www.w3.org/2000/01/rdf-­‐schema#>  .  @base  <http://open-­‐multinet.info/ontology>  .    <http://open-­‐multinet.info/ontology>  rdf:type  owl:Ontology  ;                                                                              owl:imports  nml:  ,                                                                                                    <http://www.science.uva.nl/research/sne/indl>  .    #################################################################  #  #        Annotation  properties  #  #################################################################    ###    http://open-­‐multinet.info/ontology#endpoint    <http://open-­‐multinet.info/ontology#endpoint>  rdf:type  owl:AnnotationProperty  ;                                                                                                rdfs:domain  nml:Service  .    ###    http://open-­‐multinet.info/ontology#partOfGroup    <http://open-­‐multinet.info/ontology#partOfGroup>  rdf:type  owl:AnnotationProperty  .    ###    http://open-­‐multinet.info/ontology#version  

Page 103: D5.2(;(Detailed(specifications( … · 2017. 7. 7. · ProjectAcronym! Fed4FIRE(ProjectTitle! Federation(for(FIRE(Instrument! Largescaleintegratingproject(IP)(Call!identifier! FP7;ICT;2011;8(Projectnumber!

FP7-­‐ICT-­‐318389/FRAUNHOFER/R/PU/D5.2    

 103  of  120  

©  Copyright  Fraunhofer  FOKUS  and  other  members  of  the  Fed4FIRE  consortium,    2014      

Definition  of  a  possible  upper  ontology  (i.e.  basis  for  more  specific  ones)  

 <http://open-­‐multinet.info/ontology#version>  rdf:type  owl:AnnotationProperty  ;                                                                                              rdfs:comment  "todo:  find  an  ontology  for  semantic  versioning"  ;                                                                                              rdfs:domain  nml:Service  .    #################################################################  #  #        Object  Properties  #  #################################################################    ###    http://open-­‐multinet.info/ontology#partOfGroup    <http://open-­‐multinet.info/ontology#partOfGroup>  rdf:type  owl:ObjectProperty  ;                                                                                                      rdfs:range  nml:Group  ;                                                                                                      rdfs:domain  nml:NetworkObject  .    ###    http://open-­‐multinet.info/ontology#status    <http://open-­‐multinet.info/ontology#status>  rdf:type  owl:ObjectProperty  ;                                                                                            rdfs:range  <http://open-­‐multinet.info/ontology#Status>  ;                                                                                            rdfs:domain  nml:Node  .    #################################################################  #  #        Data  properties  #  #################################################################    ###    http://open-­‐multinet.info/ontology#certificate    <http://open-­‐multinet.info/ontology#certificate>  rdf:type  owl:DatatypeProperty  ;                                                                                                      rdfs:domain  <http://open-­‐multinet.info/ontology#Testbed>  ;                                                                                                      rdfs:range  xsd:anyURI  .    
###    http://open-­‐multinet.info/ontology#endpoint    <http://open-­‐multinet.info/ontology#endpoint>  rdf:type  owl:DatatypeProperty  ;  

Page 104: D5.2(;(Detailed(specifications( … · 2017. 7. 7. · ProjectAcronym! Fed4FIRE(ProjectTitle! Federation(for(FIRE(Instrument! Largescaleintegratingproject(IP)(Call!identifier! FP7;ICT;2011;8(Projectnumber!

FP7-­‐ICT-­‐318389/FRAUNHOFER/R/PU/D5.2    

 104  of  120  

©  Copyright  Fraunhofer  FOKUS  and  other  members  of  the  Fed4FIRE  consortium,    2014      

Definition  of  a  possible  upper  ontology  (i.e.  basis  for  more  specific  ones)  

                                                                                             rdfs:range  xsd:anyURI  .    ###    http://open-­‐multinet.info/ontology#exclusive    <http://open-­‐multinet.info/ontology#exclusive>  rdf:type  owl:DatatypeProperty  ;                                                                                                  rdfs:domain  nml:Node  ;                                                                                                  rdfs:range  xsd:boolean  .    ###    http://open-­‐multinet.info/ontology#expires    <http://open-­‐multinet.info/ontology#expires>  rdf:type  owl:DatatypeProperty  ;                                                                                              rdfs:domain  <http://open-­‐multinet.info/ontology#Slice>  ,                                                                                                                    nml:Node  ;                                                                                              rdfs:range  xsd:dateTime  .    ###    http://open-­‐multinet.info/ontology#exportsTo    <http://open-­‐multinet.info/ontology#exportsTo>  rdf:type  owl:DatatypeProperty  .    ###    http://open-­‐multinet.info/ontology#exportsToOML    <http://open-­‐multinet.info/ontology#exportsToOML>  rdf:type  owl:DatatypeProperty  ;                                                                                                        rdfs:subPropertyOf  <http://open-­‐multinet.info/ontology#exportsTo>  ;                                                                                                        rdfs:domain  nml:NetworkObject  ;                                                                                                        rdfs:range  xsd:anyURI  .    
###    http://open-­‐multinet.info/ontology#monitoringsupport    <http://open-­‐multinet.info/ontology#monitoringsupport>  rdf:type  owl:DatatypeProperty  ;                                                                                                                  rdfs:domain  nml:Node  ;                                                                                                                  rdfs:range  xsd:boolean  .    ###    http://open-­‐multinet.info/ontology#up    <http://open-­‐multinet.info/ontology#up>  rdf:type  owl:DatatypeProperty  ;    


FP7-ICT-318389/FRAUNHOFER/R/PU/D5.2


© Copyright Fraunhofer FOKUS and other members of the Fed4FIRE consortium, 2014

Definition of a possible upper ontology (i.e. basis for more specific ones)

                    rdfs:domain nml:Node ;
                    rdfs:range xsd:boolean .

###  http://open-multinet.info/ontology#urn

<http://open-multinet.info/ontology#urn> rdf:type owl:DatatypeProperty ;
                    rdfs:domain <http://open-multinet.info/ontology#Slice> ,
                                nml:Node ;
                    rdfs:range xsd:anyURI .

###  http://open-multinet.info/ontology#version

<http://open-multinet.info/ontology#version> rdf:type owl:DatatypeProperty ;
                    rdfs:comment "todo: find an ontology for semantic versioning" ;
                    rdfs:range xsd:string .

#################################################################
#
#    Classes
#
#################################################################

###  http://fp7-novi.eu/im.owl#Reservation

novi:Reservation rdf:type owl:Class ;
                 rdfs:subClassOf nml:Group .

###  http://open-multinet.info/ontology#Allocated

<http://open-multinet.info/ontology#Allocated> rdf:type owl:Class ;
                    rdfs:subClassOf <http://open-multinet.info/ontology#AllocationStatus> .

###  http://open-multinet.info/ontology#AllocationStatus

<http://open-multinet.info/ontology#AllocationStatus> rdf:type owl:Class ;
                    rdfs:subClassOf <http://open-multinet.info/ontology#Status> .

###  http://open-multinet.info/ontology#ComputeNode


<http://open-multinet.info/ontology#ComputeNode> rdf:type owl:Class ;
                    rdfs:subClassOf nml:Node .

###  http://open-multinet.info/ontology#OperationalStatus

<http://open-multinet.info/ontology#OperationalStatus> rdf:type owl:Class ;
                    rdfs:subClassOf <http://open-multinet.info/ontology#Status> .

###  http://open-multinet.info/ontology#Pending

<http://open-multinet.info/ontology#Pending> rdf:type owl:Class ;
                    rdfs:subClassOf <http://open-multinet.info/ontology#OperationalStatus> .

###  http://open-multinet.info/ontology#Provisioned

<http://open-multinet.info/ontology#Provisioned> rdf:type owl:Class ;
                    rdfs:subClassOf <http://open-multinet.info/ontology#AllocationStatus> .

###  http://open-multinet.info/ontology#Ready

<http://open-multinet.info/ontology#Ready> rdf:type owl:Class ;
                    rdfs:subClassOf <http://open-multinet.info/ontology#OperationalStatus> .

###  http://open-multinet.info/ontology#Slice

<http://open-multinet.info/ontology#Slice> rdf:type owl:Class ;
                    rdfs:subClassOf nml:Group .

###  http://open-multinet.info/ontology#SoftwareComponent

<http://open-multinet.info/ontology#SoftwareComponent> rdf:type owl:Class ;
                    rdfs:subClassOf indl:NodeComponent .

###  http://open-multinet.info/ontology#Started

<http://open-multinet.info/ontology#Started> rdf:type owl:Class ;
                    rdfs:subClassOf <http://open-multinet.info/ontology#OperationalStatus> .


###  http://open-multinet.info/ontology#Status

<http://open-multinet.info/ontology#Status> rdf:type owl:Class .

###  http://open-multinet.info/ontology#Testbed

<http://open-multinet.info/ontology#Testbed> rdf:type owl:Class ;
                    rdfs:subClassOf nml:Group .

###  http://schemas.ogf.org/nml/base/2013/02#Group

nml:Group rdf:type owl:Class .

###  Generated by the OWL API (version 3.4.2) http://owlapi.sourceforge.net
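The class hierarchy above chains status classes through rdfs:subClassOf (e.g. Allocated under AllocationStatus under Status). The following is a toy sketch, not part of the project tooling, of how a client could test membership in omn:Status by walking the declared subclass edges transitively; the prefixed names are shorthand for the full ontology URIs, and the edge table is hand-copied from the listing above.

```python
# Hand-copied rdfs:subClassOf edges from the upper ontology listing above.
# Prefixed names ("omn:...") abbreviate http://open-multinet.info/ontology#.
subclass_of = {
    "omn:Allocated": "omn:AllocationStatus",
    "omn:Provisioned": "omn:AllocationStatus",
    "omn:AllocationStatus": "omn:Status",
    "omn:Pending": "omn:OperationalStatus",
    "omn:Ready": "omn:OperationalStatus",
    "omn:Started": "omn:OperationalStatus",
    "omn:OperationalStatus": "omn:Status",
}

def is_subclass(cls, ancestor):
    """True if `cls` reaches `ancestor` by following subClassOf edges."""
    while cls in subclass_of:
        cls = subclass_of[cls]
        if cls == ancestor:
            return True
    return False

print(is_subclass("omn:Allocated", "omn:Status"))             # True
print(is_subclass("omn:Allocated", "omn:OperationalStatus"))  # False
```

A real client would of course let an OWL reasoner or SPARQL property path (`rdfs:subClassOf+`) compute this closure instead.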

     

Definition of a possible ontology for the FUSECO Playground incl. resource instances

@prefix :     <http://fuseco.fokus.fraunhofer.de/ontology#> .
@prefix geo:  <http://www.w3.org/2003/01/geo/wgs84_pos#> .
@prefix nml:  <http://schemas.ogf.org/nml/base/2013/02#> .
@prefix omn:  <http://open-multinet.info/ontology#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix xml:  <http://www.w3.org/XML/1998/namespace#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix geni: <http://geni.net/ontology#> .
@prefix indl: <http://www.science.uva.nl/research/sne/indl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix novi: <http://fp7-novi.eu/im.owl#> .
@base <http://fuseco.fokus.fraunhofer.de/ontology#> .

<http://fuseco.fokus.fraunhofer.de/ontology> rdf:type owl:Ontology ;
                    rdfs:seeAlso "http://fuseco.fokus.fraunhofer.de"^^xsd:anyURI ;
                    rdfs:comment """FUSECO Playground Ontology Advertisement.

Alexander Willner <alexander.willner@tu-berlin.de>"""@en ;


                    owl:imports <http://open-multinet.info/ontology> .

#################################################################
#
#    Annotation properties
#
#################################################################

###  http://geni.net/ontology#expires

geni:expires rdf:type owl:AnnotationProperty .

###  http://geni.net/ontology#generated

geni:generated rdf:type owl:AnnotationProperty .

###  http://geni.net/ontology#manager

geni:manager rdf:type owl:AnnotationProperty .

###  http://geni.net/ontology#type

geni:type rdf:type owl:AnnotationProperty .

###  http://www.w3.org/2003/01/geo/wgs84_pos#lat

geo:lat rdf:type owl:AnnotationProperty .

###  http://www.w3.org/2003/01/geo/wgs84_pos#long

geo:long rdf:type owl:AnnotationProperty .

###  http://xmlns.com/foaf/0.1/Image

foaf:Image rdf:type owl:AnnotationProperty .

###  http://xmlns.com/foaf/0.1/based_near

foaf:based_near rdf:type owl:AnnotationProperty .

#################################################################
#
#    Classes
#
#################################################################


###  http://fuseco.fokus.fraunhofer.de/ontology#Attenuator

:Attenuator rdf:type owl:Class ;
            rdfs:subClassOf :EpcNode .

###  http://fuseco.fokus.fraunhofer.de/ontology#EpcClient

:EpcClient rdf:type owl:Class ;
           rdfs:subClassOf :EpcNode .

###  http://fuseco.fokus.fraunhofer.de/ontology#EpcLink

:EpcLink rdf:type owl:Class ;
         rdfs:subClassOf nml:Link .

###  http://fuseco.fokus.fraunhofer.de/ontology#EpcNode

:EpcNode rdf:type owl:Class ;
         rdfs:subClassOf nml:Node .

###  http://fuseco.fokus.fraunhofer.de/ontology#EpcPCRFService

:EpcPCRFService rdf:type owl:Class ;
                rdfs:subClassOf :EpcService .

###  http://fuseco.fokus.fraunhofer.de/ontology#EpcService

:EpcService rdf:type owl:Class ;
            rdfs:subClassOf nml:Service .

###  http://fuseco.fokus.fraunhofer.de/ontology#GPRS

:GPRS rdf:type owl:Class ;
      rdfs:subClassOf :EpcLink .

###  http://fuseco.fokus.fraunhofer.de/ontology#ImsService

:ImsService rdf:type owl:Class ;


            rdfs:subClassOf nml:Service .

###  http://fuseco.fokus.fraunhofer.de/ontology#LTE

:LTE rdf:type owl:Class ;
     rdfs:subClassOf :EpcLink .

###  http://fuseco.fokus.fraunhofer.de/ontology#MmeComponent

:MmeComponent rdf:type owl:Class ;
              rdfs:subClassOf omn:SoftwareComponent .

###  http://fuseco.fokus.fraunhofer.de/ontology#MmeHandoverService

:MmeHandoverService rdf:type owl:Class ;
                    rdfs:subClassOf nml:Service .

###  http://fuseco.fokus.fraunhofer.de/ontology#OpenStackServer

:OpenStackServer rdf:type owl:Class ;
                 rdfs:subClassOf nml:Node .

###  http://fuseco.fokus.fraunhofer.de/ontology#Shieldbox

:Shieldbox rdf:type owl:Class ;
           rdfs:subClassOf :EpcNode .

###  http://fuseco.fokus.fraunhofer.de/ontology#UMTS

:UMTS rdf:type owl:Class ;
      rdfs:subClassOf :EpcLink .

###  http://fuseco.fokus.fraunhofer.de/ontology#WIFI

:WIFI rdf:type owl:Class ;
      rdfs:subClassOf :EpcLink .

###  http://geni.net/ontology#AggregateManager


geni:AggregateManager rdf:type owl:Class .

###  http://geni.net/ontology#Message

geni:Message rdf:type owl:Class .

###  http://www.w3.org/2003/01/geo/wgs84_pos#Point

geo:Point rdf:type owl:Class .

#################################################################
#
#    Individuals
#
#################################################################

###  http://fuseco.fokus.fraunhofer.de/ontology#FusecoPlayground

:FusecoPlayground rdf:type omn:Testbed ,
                           owl:NamedIndividual ;
                  rdfs:label "FUSECO"@en ;
                  foaf:Image "http://testbeds.eu/images/fuseco.png"^^xsd:anyURI ;
                  foaf:homepage "http://www.fuseco-playground.org"^^xsd:anyURI ;
                  rdfs:comment "The Future Seamless Communication (FUSECO) Playground - located in Berlin - is a pioneering reference testbed, integrating various state of the art wireless broadband networks into a 3GPP Evolved Packet Core (EPC) prototype platform, allowing the rapid validation of new networking paradigms, and prototyping of innovative Future Internet and smart city applications."@en ;
                  omn:certificate "https://fuseco.fokus.fraunhofer.de/api/fed4fire/v1/certificates/download/ca.fiteagle-fuseco.fokus.fraunhofer.de"^^xsd:anyURI ;
                  foaf:based_near :location .

###  http://fuseco.fokus.fraunhofer.de/ontology#am

:am rdf:type geni:AggregateManager ,
             owl:NamedIndividual ;
    geni:manager "urn:publicid:IDN+fuseco.fokus.fraunhofer.de+authority+cm"^^xsd:anyURI ;


    omn:endpoint "https://fuseco.fokus.fraunhofer.de/api/sfa/am/v3" ;
    omn:version "3.0" ;
    omn:partOfGroup :FusecoPlayground .

###  http://fuseco.fokus.fraunhofer.de/ontology#location

:location rdf:type owl:NamedIndividual ,
                   geo:Point ;
          geo:lat "52.5258083" ;
          geo:long "13.3172764" .

###  http://fuseco.fokus.fraunhofer.de/ontology#message

:message rdf:type geni:Message ,
                  owl:NamedIndividual ;
         geni:expires "2013-07-24T06:20:19Z"^^xsd:dateTime ;
         geni:generated "2013-07-24T06:20:19Z"^^xsd:dateTime ;
         geni:type geni:Advertisement .

###  http://fuseco.fokus.fraunhofer.de/ontology#vmserver1

:vmserver1 rdf:type :OpenStackServer ,
                    owl:NamedIndividual ;
           rdfs:label "vmserver1"@en ;
           omn:exclusive "false"^^xsd:boolean ;
           omn:monitoringsupport "true"^^xsd:boolean ;
           omn:partOfGroup :FusecoPlayground ;
           omn:up "true"^^xsd:boolean ;
           foaf:based_near :location .

###  http://fuseco.fokus.fraunhofer.de/ontology#vmserver2

:vmserver2 rdf:type :OpenStackServer ,


                    owl:NamedIndividual ;
           omn:up "true"^^xsd:boolean ;
           omn:partOfGroup :FusecoPlayground .

:vmserver3 rdf:type :OpenStackServer ,
                    owl:NamedIndividual ;
           omn:up "false"^^xsd:boolean ;
           omn:monitoringsupport "false"^^xsd:boolean ;
           omn:partOfGroup :FusecoPlayground .

:reservation1 rdf:type novi:Reservation ;
              rdfs:label "MyReservation1" ;
              rdfs:comment "a future reservation" ;
              novi:startTime "2015-01-01T00:00:00-02:00"^^xsd:dateTime ;
              novi:endTime "2015-01-01T01:00:00-02:00"^^xsd:dateTime .

:epcservice1 rdf:type :EpcService ,
                      owl:NamedIndividual ;
             omn:monitoringsupport "true"^^xsd:boolean ;
             omn:partOfGroup :reservation1 .

###  Generated by the OWL API (version 3.4.2) http://owlapi.sourceforge.net

 

12.2 Example Queries

The following listings show how specific data can be queried from the above ontologies, using the tooling available at https://github.com/AlexanderWillner/openmultinet.

Get Type of a Message

$ ./bin/runQuery.sh example1
Running 'example1' (Get the type of the message)...
Data file: 'data/example-request-vm.ttl'
Query file: 'queries/query-getType.sparql'
Query:
vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
prefix geni: <http://geni.net/ontology#>

SELECT ?type WHERE { ?message geni:type ?type }


^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Result:
vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
type
http://geni.net/ontology#Request
Time: 0.136 sec
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

   

Get the Nodes Published in the FUSECO Playground

$ ./bin/runQuery.sh example2
Running 'example2' (Get the nodes published in the FUSECO Playground example)...
Data file: 'data/example-advertisement-fp.ttl'
Query file: 'queries/query-getnodes.sparql'
Query:
vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX nml: <http://schemas.ogf.org/nml/base/2013/02#>

SELECT ?node WHERE { ?node rdf:type ?type . ?type rdfs:subClassOf nml:Node }
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Result:
vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
node
http://fuseco.fokus.fraunhofer.de/ontology#vmserver3
http://fuseco.fokus.fraunhofer.de/ontology#vmserver2
http://fuseco.fokus.fraunhofer.de/ontology#vmserver1
Time: 0.081 sec
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
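The query above is a simple join: it matches every resource whose rdf:type is a declared subclass of nml:Node. The following toy Python sketch, which is not the project tooling and uses prefixed shorthand names instead of full URIs, illustrates the same join over a hand-copied excerpt of the FUSECO advertisement:

```python
# Hand-copied excerpt of the FUSECO advertisement (prefixed shorthand,
# e.g. "fp:" for the FUSECO namespace); a real client would use a
# SPARQL engine over the Turtle data instead.
triples = [
    ("fp:vmserver1", "rdf:type", "fp:OpenStackServer"),
    ("fp:vmserver2", "rdf:type", "fp:OpenStackServer"),
    ("fp:vmserver3", "rdf:type", "fp:OpenStackServer"),
    ("fp:FusecoPlayground", "rdf:type", "omn:Testbed"),
    ("fp:OpenStackServer", "rdfs:subClassOf", "nml:Node"),
    ("omn:Testbed", "rdfs:subClassOf", "nml:Group"),
]

def nodes(triples):
    """Subjects whose rdf:type is a direct subclass of nml:Node."""
    node_classes = {s for (s, p, o) in triples
                    if p == "rdfs:subClassOf" and o == "nml:Node"}
    return sorted(s for (s, p, o) in triples
                  if p == "rdf:type" and o in node_classes)

print(nodes(triples))  # ['fp:vmserver1', 'fp:vmserver2', 'fp:vmserver3']
```

Note that, like the SPARQL query shown, this only follows one subClassOf step; deeper hierarchies would need the transitive closure.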

       

Get Nodes with Reservations

$ ./bin/runQuery.sh example3


Running 'example3' (Get nodes with reservations)...
Data file: 'data/example-advertisement-fp.ttl'
Query file: 'queries/query-getreservations.sparql'
Query:
vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
prefix novi: <http://fp7-novi.eu/im.owl#>
prefix omn: <http://open-multinet.info/ontology#>

SELECT ?resource ?start ?stop
WHERE {
    ?resource omn:partOfGroup ?reservation .
    ?reservation novi:startTime ?start ;
                 novi:endTime ?stop .
    FILTER(?start > "2010-01-19T16:00:00Z"^^xsd:dateTime )
}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Result:
vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
resource,start,stop
http://fuseco.fokus.fraunhofer.de/ontology#epcservice1,2015-01-01T00:00:00-02:00,2015-01-01T01:00:00-02:00
Time: 0.100 sec
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
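Conceptually, example3 joins each resource to its reservation group and keeps only reservations starting after a cut-off date. As a toy sketch (not the project tooling), the same computation over a hand-copied excerpt of the data, with prefixed shorthand names, looks like this:

```python
from datetime import datetime, timezone, timedelta

# Hand-copied excerpt of the advertisement data (prefixed shorthand).
# reservation1 starts at 2015-01-01T00:00:00 in a UTC-2 offset.
tz = timezone(timedelta(hours=-2))
reservations = {
    "fp:reservation1": (datetime(2015, 1, 1, 0, 0, tzinfo=tz),
                        datetime(2015, 1, 1, 1, 0, tzinfo=tz)),
}
part_of_group = {
    "fp:epcservice1": "fp:reservation1",
    "fp:vmserver1": "fp:FusecoPlayground",  # group is not a reservation
}

def reserved_after(cutoff):
    """Resources whose group is a reservation starting after `cutoff`."""
    return sorted(res for res, grp in part_of_group.items()
                  if grp in reservations and reservations[grp][0] > cutoff)

cutoff = datetime(2010, 1, 19, 16, 0, tzinfo=timezone.utc)
print(reserved_after(cutoff))  # ['fp:epcservice1']
```

As in the SPARQL FILTER, the datetime comparison is timezone-aware, so offsets like -02:00 are handled correctly.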

   

Get Nodes that offer Monitoring Capabilities

$ ./bin/runQuery.sh example4
Running 'example4' (get resources with monitoring capabilities)...
Data file: 'data/example-advertisement-fp.ttl'
Query file: 'queries/query-getnodes-with-mon.sparql'
Query:
vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX nml: <http://schemas.ogf.org/nml/base/2013/02#>
PREFIX omn: <http://open-multinet.info/ontology#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>

SELECT ?node WHERE {
?node rdf:type ?type .


?type rdfs:subClassOf nml:Node .
?node omn:monitoringsupport "true"^^xsd:boolean ;
}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Result:
vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
node
http://fuseco.fokus.fraunhofer.de/ontology#vmserver1
Time: 0.073 sec
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

   

Get the Instances that Push Monitoring Data to OML Servers

$ ./bin/runQuery.sh example5
Running 'example5' (get instances that exports to OML server)...
Data file: 'data/example-manifest-fp.ttl'
Query file: 'queries/query-getinstances-with-oml.sparql'
Query:
vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX nml: <http://schemas.ogf.org/nml/base/2013/02#>
PREFIX omn: <http://open-multinet.info/ontology#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>

SELECT ?sliver ?omlserver WHERE {
    ?sliver omn:exportsToOML ?omlserver .
}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Result:
vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
sliver,omlserver
http://example.org/myexperiment#boundvm4,http://myexperiment2.example.org:54321
http://example.org/myexperiment#boundvm4,http://myexperiment.example.org:54321
Time: 0.070 sec
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

   

Get Information about a Testbed


$ ./bin/runQuery.sh example6
Running 'example6' (get information about the testbeds)...
Data file: 'data/example-advertisement-fp.ttl'
Query file: 'queries/query-gettestbedinfo.sparql'
Query:
vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
prefix omn: <http://open-multinet.info/ontology#>
prefix foaf: <http://xmlns.com/foaf/0.1/>
prefix geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
prefix geni: <http://geni.net/ontology#>

SELECT ?label ?url ?image ?lat ?long ?amendpoint ?amversion ?cert
WHERE {
?testbed rdf:type omn:Testbed ;
    rdfs:label ?label ;
    foaf:Image ?image ;
    foaf:homepage ?url ;
    foaf:based_near [
        geo:lat ?lat ;
        geo:long ?long
    ] ;
    omn:certificate ?cert .
?am rdf:type geni:AggregateManager ;
    omn:partOfGroup ?testbed ;
    omn:endpoint ?amendpoint ;
    omn:version ?amversion .
}
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Result:
vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
label,url,image,lat,long,amendpoint,amversion,cert
FUSECO,http://www.fuseco-playground.org,http://testbeds.eu/images/fuseco.png,52.5258083,13.3172764,https://fuseco.fokus.fraunhofer.de/api/sfa/am/v3,3.0,https://fuseco.fokus.fraunhofer.de/api/fed4fire/v1/certificates/download/ca.fiteagle-fuseco.fokus.fraunhofer.de
Time: 0.085 sec
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

   

Request a Virtual Machine (both bound and unbound)

@prefix : <http://example.org/myexperiment#> .
@prefix nml: <http://schemas.ogf.org/nml/base/2013/02#> .


@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix geni: <http://geni.net/ontology#> .
@prefix indl: <http://www.science.uva.nl/research/sne/indl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix fp: <http://fuseco.fokus.fraunhofer.de/ontology#> .
@prefix omn: <http://open-multinet.info/ontology#> .

:message rdf:type geni:Message ;
         geni:type geni:Request .

:boundvm1 rdf:type indl:VirtualNode ;
          nml:implementedBy fp:vmserver1 .

:unboundvm1 rdf:type indl:VirtualNode .
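In this request, a VirtualNode is "bound" when it carries an nml:implementedBy triple naming a concrete host, and "unbound" otherwise, leaving placement to the testbed. A toy sketch (not the project tooling, using prefixed shorthand names) of how a consumer might split the two cases:

```python
# Hand-copied excerpt of the request above, as prefixed shorthand triples.
triples = [
    (":boundvm1", "rdf:type", "indl:VirtualNode"),
    (":boundvm1", "nml:implementedBy", "fp:vmserver1"),
    (":unboundvm1", "rdf:type", "indl:VirtualNode"),
]

def classify(triples):
    """Split requested VirtualNodes into (bound, unbound) lists."""
    vms = {s for (s, p, o) in triples
           if p == "rdf:type" and o == "indl:VirtualNode"}
    bound = {s for (s, p, o) in triples if p == "nml:implementedBy"} & vms
    return sorted(bound), sorted(vms - bound)

print(classify(triples))  # ([':boundvm1'], [':unboundvm1'])
```

In SPARQL the same distinction could be expressed with an OPTIONAL pattern on nml:implementedBy plus a BOUND() filter.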

 


13 Appendix  I:  Overview  of  architectural  components  related  to  WP5  but  out  of  scope  of  this  deliverable  

The goal of this deliverable is to specify all components of the cycle 2 architecture of the project (as presented in D2.4 "Second Federation Architecture") that fall within the scope of WP5. Most, but not all, of these components have been specified across the different chapters of this deliverable. This is intentional: for those components that cannot be found there, it was considered that no further specification is needed. In this appendix we briefly list these components and motivate why they were not further specified.

13.1 SSH Server and client
These components are very common, and in Fed4FIRE they are exactly the same as in any other context. This means that the SSH server component expected to be present on a testbed resource (if applicable) can be any SSH service that allows Secure Shell login on that resource. A well-known Linux example is OpenSSH Server (others exist, such as Dropbear, lsh, Tectia SSH Server, etc.). On Windows, multiple possibilities also exist (freeSSHd, KpyM SSH server, Pragma Fortress SSH Server, etc.). In the end, testbeds are free to adopt any SSH server they prefer. The same is true for the experimenter: he/she can use any common SSH client (e.g. PuTTY on Windows, or the ssh application found in virtually every Linux distribution).

13.2 XMPP  server  This   is  also  a  common  component  that  has  no  specific  Fed4FIRE  requirements.  Example  candidates  for  the  deployment  of  such  a  messaging  server  are  Openfire  and  Prosody.  

13.3 Aggregate Manager
This component is responsible for the management of the testbed and for the exposure of its resources through the SFA Aggregate Manager API. A testbed has the freedom to implement or adopt any testbed management framework it sees fit, as long as it exposes the resources correctly. This is already specified at a sufficient level of detail by WP2 in D2.4 "Second Federation Architecture". The only remark to be made is that at the moment the requirement is for AMs to support the GENI SFA AM API version 3, and that WP8 is currently pursuing the definition of a new version of that API, to be called the Common AM API. This definition is a joint effort with partners from the GENI initiative in the US.

13.4 Authority Directory
This component was specified in sufficient detail in the cycle 1 specifications of WP7 (D7.1). At that time it was still called the certificate directory; it has been renamed to the more appropriate name of authority directory in the cycle 2 architecture. Its function has not changed: it is a federation-wide store of root certificates related to the different member and slice authorities that are part of the Fed4FIRE federation. This component was specified in D7.1 as an HTTP server with stringent write access control. After first experiences with this implementation in cycle 1 of the project, this approach based on off-the-shelf technology is still considered suitable. Therefore no further specification of this component is needed.

13.5 Aggregate Manager directory
This component was specified in sufficient detail in the cycle 1 specifications of WP5 (D5.1). At that time it was still called the machine-readable testbed directory; it has been renamed to the more appropriate name of aggregate manager directory in the cycle 2 architecture. The other aspect, the


human-readable testbed directory, has been shifted into the documentation center in cycle 2. But the function of the machine-readable testbed directory/aggregate manager directory has not changed: it is a federation-wide overview of the contact details of all the different Aggregate Managers that are part of the Fed4FIRE federation. In layman's terms, you could call it the testbed phonebook. This component was specified in D5.1 as a reuse of the listing capabilities of MySlice. After first experiences with this implementation in cycle 1 of the project, this approach is still considered suitable. Therefore no further specification of this component is needed.