Federated E-infrastructure Dedicated to European Researchers
Innovating in Computing network Architectures
Mauro Campanella - GARR, on behalf of the FEDERICA project
CONVEGNO "Incontro con l'Ingegneria delle Telecomunicazioni", Firenze, June 18th, 2008
Agenda
FEDERICA at a glance, vision and principles
Description and status
Boundaries and (control plane) challenges
FEDERICA at a glance
What: a European Community co-funded project in the 7th Framework Programme, area "Capacities - Research Infrastructures"; 3.7 MEuro EC contribution, 5.2 MEuro budget, 461 person-months
When: 1st January 2008 - 30 June 2010 (30 months)
Who: 20 partners, bringing together stakeholders in network research and management: 11 National Research and Education Networks, DANTE (GÉANT2), TERENA, 4 universities, Juniper Networks, 1 small enterprise (Martel), 1 research centre (i2CAT). Coordinator: GARR (the Italian NREN)
Where: Europe-wide e-Infrastructure, open to external connections
FEDERICA Vision
• Create, by fall 2008, an e-Infrastructure for all researchers working on the Future Internet. Allow researchers complete control of the resources in a "slice", enabling disruptive experiments.
• Support research in the virtualization of e-Infrastructures, integrating network resources and nodes capable of virtualization (V-Nodes); in particular on multi-(virtual-)domain control, management and monitoring, including virtualization services and user-oriented control in a federated environment.
• Pave the way for, research, and build experience towards the next generation of European Research and Education Networks.
Partners’ Location
NREN partners provide European coverage using the GN2+ service and:
- allow connection to University and Research Centre partners
- provide "HUB" functionality and the possibility to extend the e-Infrastructure to other countries and projects using physical or logical circuits
- contribute with tools and specific expertise
[Map of partner locations, with a legend distinguishing NRENs (GARR, DFN, CESNET, PSNC, SWITCH, NORDUnet, Red.es, GRNET, HEAnet, FCCN, NIIF/Hungarnet), universities and research centres (PoliTO, KTH, i2CAT, UPC, ICCS), SMEs, associations and vendors (TERENA, DANTE, Juniper Networks, Martel), and link types: Dark Fiber, GÉANT2 GÉANT+ service, Cross Border Fiber or future GÉANT2.]
FEDERICA Partners
National Research & Education Networks (11): CESNET (Czech Rep.), DFN (Germany), FCCN (Portugal), GARR (coordinator, Italy), GRNET (Greece), HEAnet (Ireland), NIIF/HUNGARNET (Hungary), NORDUnet (Nordic countries), PSNC (Poland), Red.es (Spain), SWITCH (Switzerland)
Small Enterprise: Martel Consulting (Switzerland)
NREN Organizations: TERENA (The Netherlands), DANTE (United Kingdom)
Universities - Research Centres: i2CAT (Spain), KTH (Sweden), ICCS/NTUA (Greece), UPC (Spain), PoliTO (Italy)
System Vendor: Juniper Networks (Ireland)
The Enabling Elements
1. Virtualization in computing systems and in networks is available. Given a supporting physical substrate, it creates "resources" which:
- have a loose or no dependency on a specific physical location or entity (computing, data, circuits may migrate)
- can be reconfigured, cancelled and created on the fly in the e-Infrastructure (e.g. a routing element, a generic resource)
- are supported by off-the-shelf components offering embedded virtualization functionality in hardware.
2. The European NRENs manage their own hybrid infrastructures and actively perform network research, starting from users' needs. The federated NREN architecture scenario now offers significant interdomain services and research capabilities.
3. The traditional testbed, focused on a small number of technologies, has a usefulness limited to the specialized nature of its users. It also implies a long set-up time and fast obsolescence.
The GÉANT2 network (GN2 project), 2008
[Map figure.] Backbone of 32 European NRENs, based on dark fibers operated with DWDM at multiple 10 Gb/s.
The GARR-G Network
Connects all universities and research centres in Italy, plus ~1000 schools, libraries and research hospitals (50 IRCCS).
Backbone aggregated capacity: ~120 Gbps; access aggregated capacity: ~60 Gbps.
615 user access circuits from 2 Mbps to 10 Gbps; 62 backbone links; 43 PoPs (90% hosted by users).
Peering: 65 Gbps total (42.5 Gbps to GÉANT2, 3 x 2.5 Gbps IP transit, 5 x 1 Gbps + 10 Gbps national).
Circuits leased from 8 national operators (Telecom Italia, Infracom, Fastweb, Interoute, WIND, BT-Italy, COLT) and 3 international operators (Global Crossing, Telia, Level3).
FEDERICA e-Infrastructure
FEDERICA Principles
1. Create an agnostic and neutral (transparent) infrastructure
2. Create "slices", sets of (virtual) network and computing resources assembled according to the user's request, which are "disruptible" (see the sketch after this list)
3. Give the user complete control within a slice, down to the lowest possible layer (in particular, allow any application and protocol)
4. Strive/engineer for reproducibility of experiments, i.e. given the same initial conditions, the results of an experiment are the same
5. Allow slices (if requested) to connect to the general Internet, to access external services/nodes (e.g. for content delivery, specialized hardware)
6. Ensure isolation between slices, while maintaining the possibility to cross-connect slices on request
7. Allow simultaneous use without conflict
8. Force/be exposed to topology changes (various levels of resiliency)
9. Be open to interconnecting/federating with other e-Infrastructures
10. Grant access through a User Policy Board
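Principles 2, 3 and 6 suggest a simple mental model, sketched below in Python. Every name here (VirtualResource, Slice, isolated) is a hypothetical illustration, not FEDERICA's actual tooling: a slice is a user-owned set of virtual resources carved from the substrate, and two slices must not share virtual resources unless a cross-connect has been explicitly granted.

```python
# Hypothetical model of principles 2, 3 and 6: a slice as a user-owned set
# of virtual resources, isolated from other slices unless cross-connected.
# All names are illustrative; this is not FEDERICA's actual software.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class VirtualResource:
    name: str          # e.g. "vnode-17" (invented identifier)
    kind: str          # e.g. "v-node", "l2-circuit", "virtual-router"
    substrate_id: str  # the physical element that hosts this resource

@dataclass
class Slice:
    owner: str
    resources: set[VirtualResource] = field(default_factory=set)
    cross_connects: set[str] = field(default_factory=set)  # owners we may reach

def isolated(a: Slice, b: Slice) -> bool:
    """Slices may share *physical* substrate elements, but their *virtual*
    resources must stay disjoint unless a cross-connect was granted."""
    return a.resources.isdisjoint(b.resources) or b.owner in a.cross_connects

s1 = Slice("group-a", {VirtualResource("vnode-1", "v-node", "pop-garr-pc1")})
s2 = Slice("group-b", {VirtualResource("vnode-2", "v-node", "pop-garr-pc1")})
print(isolated(s1, s2))  # True: same physical PC, distinct virtual nodes
```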
Agenda
FEDERICA at a glance, vision and principles
Description and status
Boundaries and (control plane) challenges
Work plan outline (I3)
[Timeline figure with marks at Jan 2008, Oct 2008 and Feb 2010, and a "Slices" phase.]
FEDERICA Activities
Network Activities:
NA1: Project Management
NA2: Building and Consolidating the User Community
NA3: Standardization and Liaisons
NA4: Dissemination and Training
Service Activities:
SA1: Infrastructure Support
SA2: Operational User Support and Tool-Bench Development
Joint Research Activities:
JRA1: Network Control and Management
JRA2: Novel Paradigms and User Control
Pictorial of "Slices" Creation
[Figure: the FEDERICA substrate drawn as three layers (Physical, Data Link and Network Layer substrates), with Slice 1 carved from it; legend: virtual router/switch, virtual node, NRENs and global Internet.]
Example: the user requests an infrastructure made of L2 circuits and unconfigured virtual nodes, to test a new BGP version (a hypothetical written-out form of this request follows).
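To make the example concrete, the request could be written down as a structured slice description, for instance as below. The field names, format and resource counts are invented for illustration; FEDERICA's actual request interface was still to be specified (see the control-plane slides later).

```python
# Hypothetical slice request matching the pictorial example: L2 circuits
# plus unconfigured virtual nodes for testing a new BGP version.
# Field names and counts are invented; only the content comes from the slide.
slice_request = {
    "owner": "bgp-research-group",       # illustrative user name
    "purpose": "test a new BGP version",
    "resources": [
        {"kind": "v-node", "count": 4, "state": "unconfigured"},
        {"kind": "l2-circuit", "count": 4, "framing": "ethernet"},
    ],
    "internet_access": False,     # only on request (principle 5)
    "duration_months": 3,         # initial experiment limit (policy slide)
}
```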
FEDERICA e-Infrastructure (DRAFT, 23-Apr-08)
[Map figure: core nodes; core circuits and switches being delivered.]
Topology version 8.4
[Map figure of the substrate topology connecting GARR, DFN, CESNET, SWITCH, Red.es, GRNET, Hungarnet, PSNC, HEAnet, i2CAT, KTH/NORDUnet (SUNET) and FCCN. Legend: core nodes; 1 physical GbE from GN2+; 1 physical GbE to be defined; 1 GbE VLAN or L2 MPLS.]
A high physical meshing allows a better distribution of slices in the substrate. Long-delay and high-capacity circuits are also available.
Sample FEDERICA PoP
[Diagram: a FEDERICA network switch and FEDERICA computing nodes (PCs) in an NREN PoP, connected by Gigabit Ethernet (data plane), Fast Ethernet (control plane) and RS-232 (out-of-band terminal server); BGP peering with the NREN production network and links to other FEDERICA PoPs. The FEDERICA substrate is the physical infrastructure plus a single public IP AS number; a management plane spans the PoPs.]
Notes:
- Each PC has many Gigabit Ethernet interfaces
- The Fast Ethernet interfaces decouple the control and data planes
- Out-of-band access is not mandatory
Slicing the Core (Substrate)
[Figure: slices carved out of the FEDERICA substrate.]
Switches: Juniper MX480 (virtual and logical routing, MPLS, VLANs, IPv4/v6, QoS linecards)
V-Nodes: up to 8-16 images per node, Unix OS, 4-8 Ethernet NICs, ~1 TB disk, 4-core CPUs
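The per-node figures above bound how many concurrent slices the substrate can host. A toy first-fit packing is sketched below, purely as illustration: the limits of 8 images and 4 NICs per node are taken from the low end of the slide's ranges, and the allocator itself is invented, not FEDERICA's.

```python
# Illustrative first-fit packing of slice requests onto V-nodes, using the
# low end of the per-node limits quoted on the slide. Not a real allocator.
MAX_IMAGES_PER_NODE = 8
MAX_NICS_PER_NODE = 4

def place(requests, n_nodes):
    """requests: list of (images, nics) per slice; returns the node index
    chosen for each slice, or None if the request fits nowhere."""
    free = [[MAX_IMAGES_PER_NODE, MAX_NICS_PER_NODE] for _ in range(n_nodes)]
    placement = []
    for images, nics in requests:
        for i, (fi, fn) in enumerate(free):
            if images <= fi and nics <= fn:
                free[i][0] -= images
                free[i][1] -= nics
                placement.append(i)
                break
        else:
            placement.append(None)  # substrate exhausted for this request
    return placement

# Example: three slices of 4 images / 2 NICs each on two core nodes.
print(place([(4, 2), (4, 2), (4, 2)], n_nodes=2))  # -> [0, 0, 1]
```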
Access and Use Policies
• A User Policy Board will receive and approve projects for the use of the infrastructure
• Access to the infrastructure is subject to the signature of an "Acceptable Use Policy", which includes providing feedback
• Usage of a FEDERICA slice will be free of charge if no additional equipment is requested
• Interconnection with other infrastructures (e.g. labs) is possible; the cost is to be defined case by case
• The duration of a single experiment will in principle be limited, to facilitate turnover (3 months initially)
• Access is open to research groups from academia and the private sector, with priority to European Community funded projects
• The code and tool bench produced will be Open Source
Users' requirements are fundamental and are being collected
Federating and Interconnection with other projects/groups
• "Soft" model
- act as a user of the project (hosting its virtual resources, with possible interconnection to external resources over the Internet)
- collaborate at the project/user-group level to exchange intermediate results and perform joint developments
• "Hard" model
- host a physical resource in FEDERICA
- interconnect the testbeds using a dedicated network resource
Both models need a resource/service description format (e.g. TMF, IPSphere, ...); a sketch of what one might contain follows.
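As a hedged illustration of what such a description format might carry (the slide only names TMF and IPSphere as candidate starting points; every field below is invented, not drawn from either standard):

```python
# Hypothetical resource description for federation; field names are
# invented for illustration, not taken from TMF or IPSphere.
import json

resource = {
    "id": "federica:cesnet:vnode-03",       # globally unique name
    "type": "v-node",
    "model": "hard",                        # "soft" or "hard" federation model
    "owner_domain": "cesnet.cz",
    "capabilities": {"nics": 4, "images": 8, "disk_tb": 1},
    "interconnect": {"kind": "l2-circuit", "framing": "ethernet"},
}
print(json.dumps(resource, indent=2))       # exchangeable across domains
```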
FEDERICA - Goals Summary
In scope:
• Provide, on a European scale, a network- and system-agnostic e-Infrastructure, deployed in phases, for Future Internet research (and not only); provide its operation, maintenance and on-demand configuration
• Act as a forum and support for researchers/projects on the "Future Internet"; support experimental activities to validate theoretical concepts, scenarios, architectures, control and management solutions; users have full control of their slice
• Validate and gather experimental information for the next generation of research networking, also through basic tool validation
• Dissemination and cooperation between NRENs and the researchers' community
• Contribution to standards in the form of requirements and experience
Out of scope:
• Internal extended research, e.g. advanced optical technology
• Development of Grid applications (but open to hosting them)
• Offering raw computing power
• Offering transit capacity
Agenda
FEDERICA at a glance, vision and principles
Description and status
Boundaries and (control plane) challenges
Initial Boundaries
• Not all technologies available (e.g. wireless, nomadic nodes) → mitigated by equipment hosting and federation with other testbeds
• Initial manual provisioning → slower initial provisioning, compatible with the decision process and overall management
• Less powerful switches outside the core → rely more on software emulation; hardware QoS is available on two Juniper MX480 switches
• Packet switching and statistical multiplexing assumed by default → not considered a limiting factor; can be overcome later using WDM equipment
• Ethernet framing (large MTUs) as data link
• IPv6 left unconfigured, to be enabled according to users' requests → the equipment is ready for IPv6
• Scalability → larger virtual slices can be obtained by reducing the number of concurrent users; users' equipment may be added
Control Plane and Virtualization: an additional challenge...
The main control plane is now tied to the "basic substrate"; its main and new function is to create "slices" made of "resources". This implies the need to define:
- a user interface specification for requests (UNI-type, but current UNIs do not have the required parameters)
- resource abstraction and specification (the network part is ongoing in GN2 (AutoBAHN) and in international collaboration with Internet2 and ESnet)
- signalling protocols for UNI communication
- an interdomain communication specification
There is also the need to manage the basic substrate and one or more slices at the same time:
- control plane extensions in the layered framework
- a control plane capable of communicating with one or more "hosted" control planes
A minimal sketch of such a UNI-style exchange follows.
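The sketch below shows the shape of a UNI-style exchange between a user and the substrate control plane: a slice request goes down, an opaque slice handle comes back, and teardown releases the reservation. The message shapes and class names are invented; defining them was exactly the open work listed above.

```python
# Hypothetical UNI-style exchange with the substrate control plane.
# All names and message shapes are illustrative assumptions.
import uuid

class SubstrateControlPlane:
    def __init__(self):
        self.slices = {}

    def request_slice(self, spec: dict) -> str:
        """UNI request: validate the abstract resource spec, reserve it,
        and hand back an opaque slice identifier."""
        if not spec.get("resources"):
            raise ValueError("empty resource specification")
        handle = str(uuid.uuid4())
        self.slices[handle] = spec            # reservation in the substrate
        return handle

    def release_slice(self, handle: str):
        """UNI teardown: return the resources to the substrate."""
        self.slices.pop(handle, None)

cp = SubstrateControlPlane()
h = cp.request_slice({"resources": [{"kind": "l2-circuit", "count": 2}]})
print("slice handle:", h)
cp.release_slice(h)
```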
C.P. and Virtualization (cont.)...
- Extensions and definitions for naming and addressing, to allow the possibility of global reachability
- Monitoring interaction and synergy between the substrate and the slices in sharing the physical elements
FEDERICA will start with manual provisioning of virtual resources, a single IP AS and public IP network addresses for the substrate (a toy addressing sketch follows).
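Given that single-AS, public-addressing starting point, a toy addressing plan can be sketched with Python's ipaddress module. The 192.0.2.0/24 documentation prefix, the /28 per-slice size and the AS number are all invented placeholders; only the single-AS, public-addressing premise comes from the slide.

```python
# Toy addressing plan: carve per-slice subnets from one substrate prefix.
# Prefix, slice size and AS number are placeholders, not FEDERICA's values.
import ipaddress

SUBSTRATE_AS = 65000                                  # placeholder AS number
substrate = ipaddress.ip_network("192.0.2.0/24")

slice_nets = list(substrate.subnets(new_prefix=28))   # 16 per-slice subnets
for name, net in zip(["slice-a", "slice-b", "slice-c"], slice_nets):
    print(f"{name}: {net} (gateway {net.network_address + 1})")
```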
Overall, neither the integrated model nor the layered one may be adequate, and further advances are required in hardware and software to allow virtualization capabilities with resource assurances.
BUT ...
THE challenge: Complexity
The major risk is to create a system which is too complex.
[Images: the spread of the Sobig-F virus (© F-Secure); the Internet as BGP peerings (CAIDA); the Internet as circuits?]
Thank you for your attention
FEDERICA Partners
"GARR"
The acronym GARR stands for "Gruppo per l'Armonizzazione delle Reti della Ricerca" (Group for the Harmonization of Research Networks).
The Group formed spontaneously in 1987 to create synergy among the scientific communities of Italian universities and public research bodies in building, managing, researching and using interconnection networks, analogously to the Research and Education Networks already existing in other countries.
In 1989 GARR was formally constituted through a targeted project of MURST.
In 2002 the non-profit association "Consortium GARR" was founded, with CNR, CRUI, ENEA and INFN as founding members.
The main mandate of Consortium GARR is to manage and develop the Italian research and education network.
The History of GARR
GARR is an instrument to contribute to harmonization and synergy between the education and research worlds.
1987: start of the collaboration on networking among research bodies and universities
1989: GARR founded through a ministerial project (MURST) as a working group for the harmonization of research networks
1990-1994: GARR-1, the first common network, at 2 Mbps
1994-1998: GARR-2, evolution of GARR-1, at 34 Mbps
1998-2003: GARR-B (Broadband), at 155 Mbps
2002: GARR becomes "Consortium GARR", a non-profit legal entity (founders: CNR, ENEA, INFN, CRUI)
2003-2006: development of GARR-G (Giganet), at 2.5-10 Gbps
2008: GARR-X
[Chart: evolution from GARR-B (phases 1-4, 1999-2002) to GARR-G (phases 1-3, 2003-2005), y-axes in Gigabit/sec (0-60). Backbone capacity grew from 1 Gbps to 54 Gbps and access capacity from 500 Mbps to 29 Gbps; PoPs evolved from 15 TI + 1 GARR (1 carrier), to 15 TI + 4 GARR (1 carrier), to 7 TI + 30 GARR (8 carriers).]