

Integrated SDN/NFV Orchestration for the Dynamic Deployment of Mobile Virtual Backhaul Networks Over a Multilayer (Packet/Optical) Aggregation Infrastructure

Ricardo Martínez, Arturo Mayoral, Ricard Vilalta, Ramon Casellas, Raül Muñoz, Stephan Pachnicke, Thomas Szyrkowiec, and Achim Autenrieth

Abstract—Future 5G networks will bring important challenges to network operators, such as the traffic load increase mainly due to the proliferation of mobile broadband communications. This will force mobile network operators (MNOs) to redesign and invest in their infrastructures [e.g., new equipment for radio access network (RAN), backhaul] to cope with such data growth. Aiming at lowering both capital expenditures and operational expenditures, current networking trends on network virtualization, software-defined networking (SDN), and network function virtualization (NFV) provide an appealing scenario to flexibly deal with the increase in traffic for MNOs without overdimensioning the deployed network resources. To this end, we rely on an implemented SDN/NFV orchestrator that automatically serves MNO capacity requests by computing and allocating virtual backhaul tenants. Such backhaul tenants are built over a common physical aggregation network, formed by heterogeneous technologies (e.g., packet and optical) that may be owned by different infrastructure providers. MNO RAN traffic is transported toward a mobile core network [i.e., evolved packet core (EPC)], where the required backhaul resources are tailored to the capacity needs. The EPC functions are virtualized within the cloud (vEPC), leveraging the NFV advantages. This increases MNO flexibility, where cloud resources are instantiated according to EPC needs. The goal of the SDN/NFV orchestrator is to jointly allocate both network and cloud resources, deploying virtual backhaul tenants and vEPC instances for a number of MNOs with different service and capacity requirements. Each MNO's backhaul is isolated and controlled independently via a virtualized SDN (vSDN) controller deployed in the cloud. The SDN/NFV orchestrator architecture is detailed and experimentally validated in a setup provided by the Centre Tecnològic de Telecomunicacions de Catalunya and ADVA Optical Networking. Specifically, upon an MNO request, the orchestrator instantiates the vEPC and vSDN functions in the cloud and then composes the MNO's backhaul tenant over a multilayer (packet and optical) aggregation network.

Index Terms—Multilayer networks; Network function virtualization; Software-defined networking; Virtual mobile backhaul networks.

I. INTRODUCTION

Future 5G networks will considerably challenge network operators aiming at dealing with stringent network requirements; such requirements are imposed by new expected advanced services (e.g., real-time high-quality services, assisted driving, smart metering). Specifically, these include not only a substantial increase of the overall data rate (×100 of 4G peak rates) but also reduced end-to-end latency (10 ms or less), enhanced energy efficiency, and massive connectivity [e.g., Internet of Things, guaranteed quality of service (QoS)] [1]. The goal, for network operators, is to design and develop 5G network architectures from an end-to-end perspective (i.e., access, metro, and core network segments) in a cost-efficient manner.

Mobile broadband communications are the drivers increasing the traffic volume in 5G networks. Hence, mobile network operators (MNOs) will need to invest in new solutions and/or enhance their infrastructure with radio access technology, fronthaul, backhaul, etc. Typically, as traffic increases, MNOs deploy new dedicated network appliances for both the control and data plane. These deployments are generally overdimensioned considering the load foreseen for the next three to five years for peak hours [2]. By doing so, some network resources may be (frequently) wasted, resulting in an ineffective strategy in terms of capital expenditures and operational expenditures. A more flexible, agile, and cost-efficient solution to attain better use of available resources relies on leveraging the features provided by network function virtualization (NFV) and software-defined networking (SDN) [3].

NFV deploys network functions, typically allocated in dedicated hardware, as software instances referred to as virtual network functions (VNFs) [4]. VNFs are executed in commercial off-the-shelf (COTS) hardware (servers)

http://dx.doi.org/10.1364/JOCN.9.00A135

Manuscript received June 10, 2016; revised October 5, 2016; accepted October 20, 2016; published January 4, 2017 (Doc. ID 268166).

R. Martínez (e-mail: [email protected]), A. Mayoral, R. Vilalta, R. Casellas, and R. Muñoz are with the Centre Tecnològic de Telecomunicacions de Catalunya (CTTC), Barcelona, Spain.

S. Pachnicke is with Kiel University (CAU), Kiel, Germany.

T. Szyrkowiec and A. Autenrieth are with ADVA Optical Networking SE, Martinsried, Germany.

Martínez et al. VOL. 9, NO. 2/FEBRUARY 2017/J. OPT. COMMUN. NETW. A135

1943-0620/17/02A135-08 Journal © 2017 Optical Society of America


within data centers (DCs) enabled by virtualization techniques. This provides important benefits to network operators in terms of cost, flexibility (reduced time to market, avoiding vendor lock-in, enhanced scalability, etc.), openness, homogeneity of configuration, migration, etc. NFV is applicable to any data plane packet processing and control plane function in fixed and mobile network infrastructures [5,6]. In the MNO context, NFV is addressing the use case for virtualizing the mobile core [virtualized evolved packet core (vEPC)], which has recently received significant attention [6,7].

On the other hand, SDN deals with a logically centralized control enabling network programmability by decoupling data and control planes [8]. SDN offers a logical plane abstraction, hiding vendor-specific hardware, that fosters multivendor interoperability. Such an abstraction enables network virtualization, that is, the partitioning (slicing) of the physical infrastructure to create multiple coexisting and independent network tenants on top of it [3].

In light of the above benefits and to deal with the expected 5G MNO's capacity increase, virtualization of both network functions (via NFV) and infrastructure (via SDN) can be efficiently exploited. This work assumes that a number of MNOs owning their radio access network (RAN) equipment (i.e., eNodeBs) are connected to a common multilayer (packet and optical switching) aggregation infrastructure. This network, which may be owned by different physical infrastructure providers, is then shared by MNOs to deploy their backhaul and avoid deploying new dedicated network elements. The physical aggregation network is partitioned (via SDN) to compose the MNO's virtual backhaul tenants on top of it. The MNO's backhaul tenant resources are computed to be adjusted to the actual MNO traffic demands. The MNO's evolved packet core (EPC) functions [i.e., mobility management entity (MME), serving gateway (SGw), packet data network gateway (PGw), etc.] [9] are virtualized into the DCs, which are reached via the shared multilayer aggregation network.

This paper, based on the work produced by the authors in [10], presents and experimentally validates an integrated SDN/NFV orchestration architecture to dynamically compute and automatically deploy individual MNO virtual backhaul tenants with their corresponding vEPC instantiated into the DC. The orchestrator system automatically coordinates the virtualization of heterogeneous transport technologies (packet and optical) as well as computes and allocates the cloud resources for deploying VNFs, such as vEPC and virtualized SDN (vSDN) controllers [3] (see Fig. 1). A proof of concept is carried out in a joint setup formed by the Centre Tecnològic de Telecomunicacions de Catalunya (CTTC) SDN/NFV orchestrator, CTTC packet and DC domains, and the ADVA optical network hypervisor (ONH) [11] used for controlling the optical domain between packet networks.

The rest of this paper is organized as follows: in Section II, we overview related work for virtualization of both network infrastructure and functions. In Section III, the adopted network scenario is detailed by means of an example where an MNO's SDN-controlled backhaul tenants are deployed over a common physical aggregation infrastructure. Section IV describes the implemented SDN/NFV orchestration architecture. The experimental proof of concept over a CTTC–ADVA setup is reported in Section V. Finally, we conclude the paper in Section VI.

II. RELATED WORK

In the case of network virtualization, virtual network providers aggregate infrastructures from different physical infrastructure providers, enabling virtual network operators to offer their services over a common and shared infrastructure. Recently, a number of works have addressed virtualizing network infrastructures composed of a single technology (e.g., optical domain [11]) or multiple technologies (packet and optical domains, e.g., [12]; wireless and optical technologies, e.g., [13]; etc.). Regardless of the underlying network technologies, a key element for deploying (virtual) network tenants over a common infrastructure is the use of a logically centralized control entity. This entity is responsible for providing the network slicing. That is, network resources are partitioned, abstracted, and used to compose network tenants. The computation and creation of the network tenants may satisfy specific services' demands in terms of end-to-end connectivity, latency, bandwidth, availability, etc. The adoption of a centralized SDN controller actually provides the required abstraction function of the physical network infrastructure being controlled.

Fig. 1. Support of multiple MNOs’ virtual backhaul infrastructure over a common multilayer aggregation network.



For network virtualization, two abstraction models/strategies can be used [11]: the single virtual node and the abstract link model. The former represents either a complete or a partial network topology as a single virtual node. By doing so, the internal domain topology is hidden by the abstraction function. Ports/interfaces of the virtual node are in general simply mapped to the real physical interfaces of the complete/partial network domain. On the other hand, the abstract link model provides a summarized view of the internal topology of the network domain. For instance, the abstract view in this model may have a representation of interconnected domain border nodes with their external links. Herein we will concentrate on the single virtual node approach for abstracting network domains. In the context of multiple domains managed by different physical infrastructure providers, the virtual node approach may be preferred because internal domain details do not need to be shared.
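The single virtual node model can be illustrated with a short sketch (not the authors' implementation; all names and data structures are assumptions for illustration): a domain's internal nodes and links are hidden, and only its border interfaces are exposed as ports of one virtual node.

```python
# Illustrative sketch of the single virtual node abstraction model:
# collapse a domain into one virtual node whose ports map to the
# domain's border interfaces; internal topology is hidden.

def abstract_as_virtual_node(domain_name, physical_links):
    """physical_links: list of (node, interface, is_border) tuples.

    Returns a dict describing one virtual node whose ports map
    to the domain's border interfaces only.
    """
    port_map = {}
    port_id = 0
    for node, interface, is_border in physical_links:
        if is_border:
            port_map[f"port-{port_id}"] = (node, interface)
            port_id += 1
    return {"virtual_node": domain_name, "ports": port_map}

# Example: a domain with three switches, two of them border nodes.
links = [
    ("sw1", "eth0", True),   # ingress link toward the RAN
    ("sw2", "eth3", False),  # internal link, hidden by the abstraction
    ("sw3", "eth1", True),   # egress link toward the optical domain
]
view = abstract_as_virtual_node("packet-domain-1", links)
# view exposes only the two border ports; sw2 is not visible
```

The abstract link model would instead keep the border nodes and their interconnections visible, trading less confidentiality for a richer topology view.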

NFV targets specific network functions being virtualized on commodity (COTS) hardware. As said, this brings cost reductions, accelerates network and service deployment, etc. Those advantages of NFV are combined with the virtualization of physical network elements (e.g., packet and optical switches). Hence, both network and IT resources are jointly considered when composing multiple tenants [6,12–14] (for fixed and mobile networks and services). NFV is applicable to all network segments (i.e., access, metro, and core), covering a number of network functions, such as virtual border network gateways [5] and virtual mobile cores (vEPCs) [2,6,7]. The virtualization of specific network functions (VNFs) can be classified into two models, partial or full [2]. In the partial model, only selected network functions (e.g., control plane oriented, such as the EPC MME) are virtualized in the DC, whereas other functionalities (e.g., EPC SGw/PGw data plane forwarding and tunneling) are kept in dedicated network elements. Conversely, in the full model, all (control and data plane) functional entities are implemented as VNFs running in the DC's virtual machines (VMs).

The control and management of each individual network tenant is handled by a centralized SDN controller. In other words, each network tenant has its own (isolated) network infrastructure and functions, and the control and configuration are attained by an independent SDN controller. The dynamic deployment of SDN controllers per tenant also leverages the NFV capabilities. That is, the DC's cloud resources are used to instantiate a vSDN controller for each network tenant [3,15].

For the dynamic deployment of MNOs' virtual backhaul infrastructures along with VNFs for vEPC and vSDN, herein we rely on some of the aforementioned concepts and solutions.

III. DEPLOYMENT OF SDN-CONTROLLED MNO VIRTUAL BACKHAUL

Figure 2 depicts the physical topology of the considered multilayer (packet and optical switching) aggregation infrastructure. This network allows the connectivity between the MNO's RAN and the DC domains where specific VNFs for vEPC and vSDN are instantiated. We assume that all the mobile core functions are virtualized. Thus, herein we rely on the full virtualization model of the EPC [2], and we extend it with the virtualization of the backhaul network infrastructure to connect the RAN and the vEPC domains.

The aggregation network leverages the statistical multiplexing provided by packet switching (MPLS) and the huge transport capacity of optical switching, applying multilayer grooming techniques [16]. An MNO's backhaul, when created or modified, is built over the aggregation network as interconnected virtual packet domains. The MNO SDN controller's vision (deployed as a VNF) is an abstraction of a set of connected packet domains (via an optical connection) providing the connectivity between the RAN and vEPC. Each abstracted packet domain is represented by a single (virtual) packet switch node. The interfaces of such a node are mapped to the physical incoming/outgoing links of a packet flow. In the example (Fig. 2), for the MNO1 topology, the virtual packet node of the domain linked to the RAN is formed by ingress link A and egress link E of the corresponding physical packet network.

The view of each MNO's vSDN controller allows each to dynamically compute and set up packet (MPLS) tunnels for backhauling upcoming mobile [long-term evolution (LTE)] control and user traffic flows (i.e., S1-MME and S1-U interfaces [9]) between the RAN and vEPC functions. To this end, the vSDN controller uses the OpenFlow protocol (OFP) to configure each virtual packet domain. A hypervisor layer exists between the vSDN controller and the network infrastructure (Section IV), performing abstraction, isolation, and translation functionalities, mapping vSDN control messages to their physical resources. The connectivity within the DC network domain is virtualized, connecting the core packet domain and the deployed cloud VNFs (i.e., vSDN controller and vEPC).
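The hypervisor's translation function can be sketched as follows. This is a hedged illustration, not the implemented hypervisor: a flow rule addressed by a tenant's vSDN controller to a virtual node/port is rewritten toward the physical switch and interface it maps to, and a lookup failure doubles as the isolation check between tenants. All names are assumptions.

```python
# Sketch of the hypervisor's translation step: a tenant-scoped flow
# rule on a virtual node/port is rewritten into physical terms.

class HypervisorTranslator:
    def __init__(self, tenant_map):
        # tenant_map: {(tenant, vnode, vport): (phys_switch, phys_port)}
        self.tenant_map = tenant_map

    def translate(self, tenant, vnode, vport, match):
        """Rewrite a tenant's virtual flow rule into a physical one.

        Raises KeyError if the tenant does not own the virtual port,
        which also enforces isolation between backhaul tenants.
        """
        phys_switch, phys_port = self.tenant_map[(tenant, vnode, vport)]
        return {"switch": phys_switch, "in_port": phys_port, "match": match}

mapping = {("MNO1", "vnode-A", 1): ("sw7", 12)}
hv = HypervisorTranslator(mapping)
rule = hv.translate("MNO1", "vnode-A", 1, {"eth_type": 0x0800})
# rule now targets physical switch sw7, port 12; a request from a
# different tenant for the same virtual port would raise KeyError
```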

Fig. 2. Physical multilayer aggregation network connecting RANs and DCs and abstracted view of the backhaul networks per MNO.

In the example, each virtual MNO's backhaul infrastructure is created connecting (at the packet level), via a direct link/tunnel with the required capacity, the two abstracted packet nodes linked to both the RAN and the mobile core domains. The resulting virtual backhaul topology mostly depends on the available physical network as well as the virtual network strategy (i.e., partitioning and composition) being adopted. That said, it is worth mentioning that the targeted automatic MNO backhaul creation supports the deployment of more advanced network topologies (e.g., mesh, rings). Indeed, these topologies are needed, for instance, by an MNO owning a number of remote RAN domains. The traffic generated by each RAN domain can be aggregated into higher capacity links and steered toward the EPC. Additionally, more complex virtual backhaul networks also provide appealing capabilities that 5G services would require, such as effective recovery mechanisms (e.g., disjoint paths) and QoS differentiation.

IV. SDN/NFV ORCHESTRATION ARCHITECTURE FOR DYNAMIC DEPLOYMENT OF MNO VIRTUAL BACKHAUL

The SDN/NFV orchestrator architecture used to deploy an MNO's virtual backhaul tenants with the respective DC's VNFs is represented in Fig. 3. The NFV orchestrator deploys VNFs on the compute domain within the DC/cloud. This infrastructure, along with the network domain (formed by the packet and optical switches), constitutes the so-called NFV infrastructure [4]. Whenever a new VNF (e.g., vSDN controller or vEPC) needs to be deployed, a VNF manager is created to deploy and handle the targeted VNFs.

In Fig. 3, following a bottom-up description, besides the network infrastructure (packet and optical switches of the aggregation and DC domains), individual control plane instances are dedicated to configure the network elements forming an individual domain. These controllers/control plane elements are part of the virtual infrastructure manager (VIM) defined by the European Telecommunications Standards Institute (ETSI) NFV architecture [4]. Four independent controllers are deployed: 1) an SDN controller for the (MPLS) packet domain connected to the RAN; 2) an ONH used for configuring the optical network domain; 3) an SDN controller for the packet network connected to the DC; and 4) an SDN controller handling the flows within the DC networking.

The multidomain SDN orchestrator (MSO) is a unified transport network operating system that takes over the composition of end-to-end provisioning services across multiple domains of the aggregation network at an abstraction level (i.e., using a virtual node representation). Specifically, the MSO is a controller of controllers whose implementation follows the Internet Engineering Task Force application-based network operations architecture [17]. The MSO coordinates, in a hierarchical way, the underlying controllers to configure/program the computed physical network resources on the involved (traversed) domains for setting up a new/modified end-to-end service. This model allows the SDN controllers to abstract their respective domain topology and resource information toward the MSO. Such a domain abstraction, in turn, allows the scalability of the whole control system to be ensured.
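The controller-of-controllers pattern can be sketched in a few lines; this is an illustrative abstraction, not the MSO's code, and the domain and controller names are invented for the example: the MSO maps each hop of an end-to-end service onto the controller owning that domain and delegates the configuration to it, in order.

```python
# Sketch of a hierarchical controller of controllers: each traversed
# domain is configured through its own underlying controller.

class MSO:
    def __init__(self, domain_controllers):
        self.ctl = domain_controllers  # {domain name: configure callable}

    def provision(self, domain_path, service):
        """Configure each traversed domain, in hop order."""
        return [self.ctl[domain](service) for domain in domain_path]

log = []
controllers = {
    "packet-ran": lambda s: log.append(("packet-ran", s)) or "ok",
    "optical":    lambda s: log.append(("optical", s)) or "ok",
    "packet-dc":  lambda s: log.append(("packet-dc", s)) or "ok",
}
mso = MSO(controllers)
result = mso.provision(["packet-ran", "optical", "packet-dc"], "tenant-1-flow")
# each domain controller is invoked exactly once, in hop order
```

Because the MSO only sees each domain's abstracted view, adding a domain means registering one more controller, which is what keeps the hierarchy scalable.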

Another relevant architectural element is the multidomain network hypervisor (MNH) [18]. The MNH partitions and aggregates the physical resources (i.e., nodes, links, optical spectrum, etc.) in each domain into virtual resources. Next, virtual resources are interconnected to compose the MNO's backhaul tenants. Therefore, the MNH is responsible for the abstraction of the backhaul tenant at the packet layer. Recall that such backhaul tenants are controlled by instantiated vSDN controllers. That is, the MNH provides the topology vision to the vSDN controller about each MNO's backhaul infrastructure. The MNH provides the hypervisor functions, such as abstraction, translation, and isolation (of the backhaul tenants), to each vSDN controller, as well as triggers the creation, modification, and deletion of the virtual backhaul infrastructure in response to received MNO demands.

The cloud and network orchestrator handles the coordination and management of cloud resources (VMs) and network resources within the aggregation infrastructure. It provides an ecosystem for a cloud and network operating system toward jointly deploying the MNO virtual backhaul and its vEPC functions. It relies on a southbound interface that enables communication with the MNH via the so-called control orchestration protocol (COP) [19]. The COP application programming interface (API) is a transport application interface used to retrieve the (abstracted) network topology, to serve connectivity requests, and to perform end-to-end path computation. For the cloud resources, the cloud and network orchestrator has a compute controller used for configuring the compute domain (i.e., VNF instantiation) and a hypervisor control used to configure the VMs over the physical compute domain. The cloud and network orchestrator provides the functionalities defined as the VIM entity in the ETSI NFV architecture. The cloud and network orchestrator in this work is also referred to as the SDN integrated IT and network orchestrator (SINO).
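A COP-style connectivity (call) request from the SINO toward the MNH might be built as below. This is a hedged sketch: the endpoint semantics come from the COP description above, but the field names and endpoint identifiers are assumptions, not the protocol's actual schema (defined in [19]).

```python
# Illustrative builder for a COP-like connectivity request body.
# Field names (callId, aEnd, zEnd, trafficParams) are assumed for
# the example, not taken from the COP specification.
import json

def build_call_request(call_id, src, dst, bandwidth_mbps):
    """Build the JSON body of a connectivity request between two
    abstracted endpoints with a reserved bandwidth."""
    return {
        "callId": call_id,
        "aEnd": {"routerId": src},
        "zEnd": {"routerId": dst},
        "trafficParams": {"reservedBandwidth": bandwidth_mbps},
    }

body = build_call_request("call-1", "ran-vnode", "dc-vnode", 1000)
payload = json.dumps(body)  # what would be POSTed to the MNH
```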

Finally, the NFV orchestrator manages the physical and virtual resources to support the applications requesting the creation/enhancement of the MNO virtual backhaul as well as the corresponding VNFs (vEPC and vSDN) in the cloud.

Fig. 3. SDN/NFV orchestration architecture provisioning MNO backhaul virtual networks.



V. EXPERIMENTAL PROOF OF CONCEPT

This section addresses the experimental proof of concept (as a validation) of the implemented SDN/NFV orchestrator functionalities. Upon MNO demands, dynamic and automatic computation and deployment of a packet backhaul tenant over the aggregation network are performed, along with instantiating VNFs (i.e., vEPC and vSDN). The proof of concept has been jointly conducted by CTTC and ADVA. In the following, the workflow used by the SDN/NFV orchestrator to coordinate the required functions and control of data plane elements described in Section IV is detailed. If the workflow is completed, the targeted backhaul tenant for the MNO request is successfully deployed. Otherwise, if insufficient (networking or cloud) resources are available to serve the MNO request, the request is blocked by the SDN/NFV orchestrator. The experimental setup (including the network elements, controllers, and DC infrastructure) used for validating the workflow is described, showing the main protocol analyzer captures demonstrating the messages exchanged within the SDN/NFV orchestrator.

A. Description of the Workflow

Figure 4 shows the workflow among the involved functional blocks of the SDN/NFV orchestrator (SINO) to manage the creation of an SDN-controlled virtual backhaul and the corresponding vEPC. The whole process is divided into two macroscopic steps:

• Step 1. The NFV orchestrator requests the creation of the vSDN controller (to control, via OFP, the virtual backhaul) and the vEPC mobile core (control and user plane) functions within the DC. Specifically, those requests are delegated to the respective VNF managers. The VNF managers in turn communicate with the DC's compute control, requiring the creation of VMs (specifying cloud resources in terms of CPU and memory) with the respective operating system image for the VNF implementations (i.e., vEPC and vSDN). The response determines the IP and MAC addresses of the involved elements and functions: vSDN and vEPC (including PGw, SGw, and MME).

• Step 2. The creation of the MNO virtual backhaul is carried out. This process entails building the virtual backhaul and enabling the connectivity to the corresponding vSDN created in step 1. To do that, the MNH receives and processes the request (including the IP address of the vSDN). The MNH computes, using the abstract view of the underlying transport network (at the packet level), the domain sequence connecting the MNO's RAN and the vEPC within the DC. To this end, the service requirements to be offered by the MNO, such as peak data rate or maximum tolerated end-to-end latency, are taken into account. In the considered multilayer aggregation network, it is first necessary for the traversed packet domains to be interconnected via an optical connection triggered by the MSO. In other words, the MNH computes a sequenced set of virtual packet nodes that in the physical infrastructure are connected via an optical domain. The configuration of the multilayer physical infrastructure is handled by the MSO. When the optical connection is set up (using the ADVA ONH), at the packet level, all the domains are interconnected. For those packet domains, the MSO subsequently requests packet flow provisioning, specifying the ingress/egress links of those domains to derive the abstracted (virtual) packet node forming the targeted virtual backhaul. It is worth mentioning that this process is performed twice to support bidirectional packet communications within the backhaul. Last but not least, a layer 2 flow within the DC infrastructure (Ethernet) is created to connect the virtual packet node with the vEPC. Once the virtual backhaul connectivity is ready, a notification is sent to the NFV orchestrator, and at that time, the vSDN has a view of the virtual packet backhaul used to transport both LTE control and user plane traffic between the RAN and the deployed vEPC.
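The domain-sequence computation in step 2 can be sketched as a shortest-path search over the abstracted virtual-node view. The sketch below uses a plain BFS and an invented graph; the real computation would also weigh service requirements such as peak data rate and end-to-end latency.

```python
# Minimal sketch of computing a domain sequence between the RAN-side
# domain and the DC over the abstracted (virtual node) view.
from collections import deque

def domain_sequence(graph, src, dst):
    """Return the shortest sequence of domains from src to dst (BFS),
    or None if no route exists."""
    queue = deque([[src]])
    visited = {src}
    while queue:
        path = queue.popleft()
        if path[-1] == dst:
            return path
        for nxt in graph.get(path[-1], []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None  # no route: the orchestrator would block the request

# Illustrative abstracted view of the aggregation infrastructure.
view = {
    "ran-packet": ["optical"],
    "optical": ["ran-packet", "dc-packet"],
    "dc-packet": ["optical", "dc-network"],
    "dc-network": ["dc-packet"],
}
seq = domain_sequence(view, "ran-packet", "dc-network")
# seq == ["ran-packet", "optical", "dc-packet", "dc-network"]
```

A returned sequence maps directly onto the workflow: the optical hop is set up first (via the MSO and the ONH), then packet flows are provisioned in each packet domain along the path.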

B. Experimental Setup and Validation

Figure 5 depicts the interconnection between the CTTC SDN/NFV orchestrator, located in Barcelona, Spain, and the ADVA ONH in Meiningen, Germany. For clarity, the setup is conducted at the control plane level; no data plane configuration is finally realized once the whole SDN/NFV orchestration process is completed. To enable remote communication within the SDN/NFV orchestrator (SINO), in particular between the MSO located at CTTC and the ONH placed at ADVA, an OpenVPN tunnel between CTTC and ADVA has been established over the public Internet. As a consequence, the MSO, via the COP protocol, sends (routed over the OpenVPN tunnel) optical connection requests that will allow the connectivity between the packet domains connected to both the RAN and the DC. The rest of the SDN controllers are provided by CTTC and configure the packet domains connected to the RAN, the core network, and the DC.

Fig. 4. Workflow for provisioning MNO virtual backhaul network and VNFs.



The vEPC implementation is based on the CTTC LENA emulator (built on the NS-3 simulator) [20]. In a single VM equipped with two network adapters (one for the S1 interface and another for the SGi interface), the control and user plane network functions needed for the creation/tunneling of mobile connections (bearers) are deployed: the EPC MME and the SGw/PGw, along with the required interfaces between them (i.e., S11). The SGw and PGw are implemented within the same software process. The vEPC does not implement authentication functionality (e.g., the home subscriber server).
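The single-VM layout described above can be summarized in a small data model. This is an assumed, simplified representation for illustration only; the field names are not taken from the LENA emulator.

```python
# Simplified (hypothetical) model of the single-VM vEPC described in the
# text: two network adapters (S1 toward the RAN, SGi toward external
# networks), an MME, and a combined SGw/PGw process linked over S11.

vepc = {
    "adapters": ["S1", "SGi"],  # one network adapter per external interface
    "functions": {
        "MME": {"interfaces": ["S1", "S11"]},
        # SGw and PGw run within the same software process in this setup
        "SGw/PGw": {"interfaces": ["S1", "S11", "SGi"]},
    },
    "hss": None,  # no authentication functionality (HSS) is implemented
}
```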

The experimental validation of the creation of the MNO's VNFs and the backhaul tenant using the SINO architecture is shown in the captured messages in Fig. 6. Following the workflow sequence, a client MNO (CLIENT in Fig. 6) first requests from the SINO the allocation of two VMs. These VMs host the targeted vEPC and vSDN controllers for the MNO demand. In the setup, both the SINO and the MNH are deployed in the same building block (SINO-MNH in Fig. 6). The commands used to instantiate VMs are implemented in a RESTful API (POST /create_vm/ method). As mentioned, two separate VM requests are made to accommodate the needed VNFs. As shown in the figure, the required time (request–response) to create the VMs and instantiate the vSDN controller and vEPC functions is around 22 s.
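The pair of VM-instantiation requests could be composed as below. Only the POST /create_vm/ path appears in the text; the body fields are illustrative assumptions, not the actual CTTC API schema.

```python
import json

# Hedged sketch of the two VM-instantiation requests sent to the SINO.
# The "vnf" body field is a hypothetical placeholder.

def build_create_vm_request(vnf_type):
    """Build one request descriptor for the RESTful /create_vm/ method."""
    return {
        "method": "POST",
        "path": "/create_vm/",
        "body": json.dumps({"vnf": vnf_type}),
    }

# Two separate requests are made, one per VNF, each hosted in its own VM.
requests_out = [build_create_vm_request(v) for v in ("vEPC", "vSDN")]
```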

Once the VMs are deployed, the client MNO triggers, by means of another RESTful API (POST /virtual_network/ method), the request to create the virtual backhaul network. This request is processed, as detailed in the workflow, by the network hypervisor (i.e., SINO-MNH) and served by the MSO. The message sent by the SINO-MNH to the MSO uses the RESTful POST /restconf/config/calls/call/ method. In this message, the IP addresses of both the vEPC core network functions and the vSDN within the DC are passed as contained information. The MSO is responsible for coordinating, among the different packet and optical domains, the end-to-end connectivity between the MNO RAN and the vEPC, and between the vSDN controller and the SINO-MNH. To provide the connectivity at the packet layer through the multilayer aggregation network, the MSO requests the establishment of an optical tunnel within the optical domain from the ADVA ONH (simply labeled ADVA in the following). Once the optical tunnel is created, the domains adjacent to both the RAN and the DC networks are connected at the packet level. Next, it is necessary to create the packet flows between the MNO's RAN and the deployed vEPC. To do that, the MSO entity communicates with the packet domains' controllers (SDN-CTL-1 and SDN-CTL-2). The RESTful API messages to establish the packet flows requested by the MSO are the PUT /restconf/config/opendaylight-inventory/ and PUT /controller/nb/v2/flowprogrammer/ methods.
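The backhaul-creation message chain above can be sketched as two request descriptors: the client's call to the SINO-MNH and the forwarded call to the MSO. The URL paths are taken from the text; the field names and IP addresses are assumptions introduced for illustration.

```python
import json

# Hypothetical DC addresses of the instantiated VNFs (placeholders).
vepc_ip, vsdn_ip = "10.10.0.2", "10.10.0.3"

# Client MNO -> SINO-MNH: request to create the virtual backhaul network.
client_request = {
    "method": "POST",
    "path": "/virtual_network/",
    "body": json.dumps({"vepc_ip": vepc_ip, "vsdn_ip": vsdn_ip}),
}

# SINO-MNH -> MSO: a RESTCONF "call" object carrying, as contained
# information, the addressing of the vEPC and vSDN within the DC.
mso_call = {
    "method": "POST",
    "path": "/restconf/config/calls/call/",
    "body": json.dumps({"endpoints": [vepc_ip, vsdn_ip]}),
}
```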

The request and received response for the optical tunnel establishment connecting the packet domains take around 10 ms (in Fig. 6, this is reflected by the exchange of messages between the MSO and the ADVA ONH). It is worth mentioning that, in this proof of concept, we assumed that the optical tunnel was pre-established (i.e., no transceiver configuration was done). This is why the ADVA optical hypervisor response is relatively fast. Conversely, if the optical transceivers needed to be tuned and configured, the overall establishment of the optical tunnel would take hundreds of milliseconds.

Once the optical tunnel is set up, it becomes a (virtual) packet link connecting both packet domains. Then, the MSO communicates with each computed packet domain SDN controller to request the required packet flow

Fig. 5. Topology of the connectivity between the CTTC SDN/NFV orchestrator and the ADVA ONH.

Fig. 6. Capture of the experimental control messages for setting up the VNFs and virtual backhaul network.



configuration. In general, this takes around 11 ms. The complete setup of the virtual backhaul tenant is done in 78 ms. This includes not only the request/response messages' elapsed time for setting up the optical tunnel and the packet domain configuration but also the processing time (i.e., path computation, topology updating, etc.) performed within the MSO.
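An illustrative accounting of the reported 78 ms, assuming the ~11 ms flow-configuration figure applies once per packet domain (two domains), attributes the remainder to MSO-internal processing. This breakdown is an assumption for illustration; the paper reports only the aggregate figures.

```python
# Reported figures from the experiment (ms).
optical_tunnel_ms = 10    # optical tunnel request/response via ADVA ONH
packet_flow_ms = 11       # packet flow configuration, assumed per domain
num_packet_domains = 2    # SDN-CTL-1 and SDN-CTL-2
total_setup_ms = 78       # complete virtual backhaul tenant setup

# Remaining time attributed to MSO processing (path computation,
# topology updating, etc.) under this assumed breakdown.
mso_processing_ms = (total_setup_ms - optical_tunnel_ms
                     - packet_flow_ms * num_packet_domains)
# mso_processing_ms == 46
```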

VI. CONCLUSIONS

In future 5G networks, mobile broadband communications will dramatically increase the data traffic. To cope with this, MNOs have traditionally invested in new appliances on their infrastructures (i.e., RAN, backhaul/fronthaul networks, etc.) following overdimensioning strategies. This results in a non-cost-effective solution because some of the rolled-out network resources may not be fully utilized. Herein we seek a more flexible, agile, and cost-efficient solution to better match the MNO capacity needs with the deployed network resources (in the backhaul). To that end, we exploit the benefits provided by both the SDN and NFV concepts. Basically, we implement an SDN/NFV orchestrator that, according to the MNO capacity demands, automatically computes and allocates backhaul network tenants on top of a common physical aggregation network. The creation of the (virtual) backhaul provides the connectivity between the MNO's RAN domains and the mobile core functions (EPC). For the latter, the EPC is virtualized on COTS hardware (vEPC) within the DC. The created MNO's backhaul tenant is completely isolated and controlled by its own SDN controller, which is also dynamically deployed as a VNF (vSDN) within the DC. The SDN/NFV orchestrator architecture has been detailed, along with the workflow required to deploy an MNO's backhaul network and the respective vEPC and vSDN from scratch. Finally, a proof of concept of the global architecture (functional entities and interfaces) has been experimentally validated over a multilayer aggregation network provided by the CTTC SDN/NFV orchestrator and the ADVA ONH.

ACKNOWLEDGMENT

This work was partially funded by the European Commission's FP7 IP COMBO project under grant agreement 317762 and by the Spanish MINECO DESTELLO project (TEC2015-69256-R).

REFERENCES

[1] NGMN Alliance, "5G White Paper," Feb. 2015.

[2] T. Taleb, M. Corici, C. Parada, A. Jamakovic, S. Ruffino, G. Karagiannis, and T. Magedanz, "EASE: EPC as a service to ease mobile core deployment over cloud," IEEE Netw., vol. 29, no. 2, pp. 78–88, 2015.

[3] R. Muñoz, R. Vilalta, R. Casellas, R. Martinez, T. Szyrkowiec, A. Autenrieth, V. López, and D. López, "Integrated SDN/NFV management and orchestration architecture for dynamic deployment of virtual SDN instances for virtual tenant networks," J. Opt. Commun. Netw., vol. 7, no. 11, pp. B62–B70, 2015.

[4] ETSI, "Network function virtualization (NFV)," ETSI White Paper No. 3, Oct. 2014.

[5] C. Lange, D. Kosiankowski, and A. Gladisch, "Use-case based cost and energy efficiency analysis of virtualization concepts in operator networks," in Proc. of European Conf. on Optical Communication (ECOC), Valencia, Spain, Sept. 2015.

[6] M. R. Sama, L. M. Contreras, J. Kaippallimalil, I. Akiyoshi, H. Qian, and H. Ni, "Software-defined control of the virtualized mobile packet core," IEEE Commun. Mag., vol. 53, no. 2, pp. 107–115, 2015.

[7] H. Hawilo, A. Shami, M. Mirahmadi, and R. Asal, "NFV: State of the art, challenges, and implementation in next generation mobile networks (vEPC)," IEEE Netw., vol. 28, no. 2, pp. 18–26, 2014.

[8] Open Networking Foundation (ONF), "SDN architecture overview (Version 1.1)," ONF TR-504, Nov. 2014.

[9] 3GPP, "E-UTRA and E-UTRAN; overall description," TS 36.300, Mar. 2012.

[10] R. Martínez, A. Mayoral, R. Vilalta, R. Casellas, R. Muñoz, S. Pachnicke, T. Szyrkowiec, and A. Autenrieth, "Integrated SDN/NFV orchestration for the dynamic deployment of mobile virtual backhaul networks over a multi-layer (packet/optical) aggregation infrastructure," in Optical Fiber Communication Conf. and Expo. (OFC), Anaheim, CA, Mar. 2016.

[11] A. Autenrieth, T. Szyrkowiec, K. Grobe, J.-P. Elbers, P. Kaczmarek, P. Kostecki, and W. Kellerer, "Evaluation of virtualization models for optical connectivity service providers," in Proc. of Optical Network Design and Modeling (ONDM), Stockholm, Sweden, May 2014.

[12] R. Vilalta, A. Mayoral, R. Munoz, R. Casellas, and R. Martinez, "Multitenant transport networks with SDN/NFV," J. Lightwave Technol., vol. 34, no. 6, pp. 1509–1515, 2016.

[13] A. Tzanakaki, M. P. Anastasopoulos, G. S. Zervas, B. R. Rofoee, R. Nejabati, and D. Simeonidou, "Virtualization of heterogeneous wireless-optical network and IT infrastructures in support of cloud and mobile cloud services," IEEE Commun. Mag., vol. 51, no. 8, pp. 155–161, 2013.

[14] R. Muñoz, R. Casellas, R. Vilalta, and R. Martinez, "Dynamic and adaptive control plane solutions for flexi-grid optical networks based on stateful PCE," J. Lightwave Technol., vol. 32, no. 16, pp. 2703–2715, 2014.

[15] A. Blenk, A. Basta, and W. Kellerer, "HyperFlex: An SDN virtualization architecture with flexible hypervisor function allocation," in Proc. of IFIP/IEEE Int. Symp. on Integrated Network Management (IM), Ottawa, Canada, May 2015.

[16] R. Martinez, R. Casellas, and R. Munoz, "Experimental validation/evaluation of a GMPLS unified control plane in multi-layer (MPLS-TP/WSON) networks," in Optical Fiber Communication Conf. and Expo. (OFC), Los Angeles, CA, Mar. 2012.

[17] D. King and A. Farrel, "A PCE-based architecture for application-based network operations," IETF RFC 7491, Mar. 2015.

[18] R. Vilalta, R. Muñoz, R. Casellas, R. Martinez, S. Peng, R. Nejabati, D. Simeonidou, N. Yoshikane, T. Tsuritani, I. Morita, V. Lopez, T. Szyrkowiec, and A. Autenrieth, "Multidomain network hypervisor for abstraction and control of OpenFlow-enabled multitenant multitechnology transport networks," J. Opt. Commun. Netw., vol. 7, no. 11, pp. B55–B61, 2015.

[19] R. Vilalta, V. López, A. Mayoral, N. Yoshikane, M. Ruffini, D. Siracusa, R. Martínez, T. Szyrkowiec, A. Autenrieth, S. Peng,



R. Casellas, R. Nejabati, D. Simeonidou, X. Cao, T. Tsuritani, I. Morita, J. P. Fernández-Palacios, and R. Muñoz, "The need for a control orchestration protocol in research projects on optical networking," in Proc. of European Conf. on Networks and Communications (EuCNC), Paris, France, June 2015.

[20] CTTC LENA EPC/LTE Network Simulator [Online]. Available: http://networks.cttc.es/mobile-networks/software-tools/lena/.

Ricardo Martínez (SM'14) received an M.S. degree in 2002 and a Ph.D. degree in 2007, both in telecommunications engineering, from the Universitat Politècnica de Catalunya–BarcelonaTech University in Barcelona, Spain. He has been actively involved in several publicly funded (national and European Union) research and development projects as well as industrial technology transfer projects. Since 2013, he has been a Senior Researcher in the Communication Networks Division at the Centre Tecnològic de Telecomunicacions de Catalunya (CTTC) in Castelldefels, Spain. His research interests include control and network management architectures, protocols, and traffic engineering mechanisms for next-generation packet and optical transport networks within the aggregation/metro and core segments.

Arturo Mayoral received a graduate degree in telecommunications engineering from the Universidad Autonoma de Madrid, Madrid, Spain, in 2013, and is currently working toward an M.S. degree at the Universitat Politecnica de Catalunya, Barcelona, Spain. In 2012, he was an Intern Researcher at Telefonica I+D. In 2013, he began work as a Software Developer and Research Assistant at the Centre Tecnològic de Telecomunicacions de Catalunya (CTTC) in Castelldefels, Spain, within the ICT STRAUSS European–Japan project.

Ricard Vilalta received an M.S. degree in telecommunications engineering in 2007 and a Ph.D. degree in telecommunications in 2013, both from the Universitat Politècnica de Catalunya, Barcelona, Spain. He also studied audiovisual communication at the Open University of Catalonia in Barcelona, Spain, and received a master's degree in technology-based business innovation and administration from Barcelona University in Barcelona, Spain. Since 2010, he has been a Researcher at the Centre Tecnològic de Telecomunicacions de Catalunya (CTTC) in Castelldefels, Spain, in the Optical Networks and Systems Department. His research is focused on optical network virtualization and optical OpenFlow. He is currently a Research Associate at the Open Networking Foundation.

Ramon Casellas (SM'12) received an M.S. degree in telecommunications engineering in 1999 from the Universitat Politècnica de Catalunya–BarcelonaTech University in Barcelona, Spain, and ENST Telecom Paristech, Paris, France, within an Erasmus/Socrates double degree program. After working as an Undergraduate Researcher at both France Telecom research and development (R&D) and British Telecom Labs, he received a Ph.D. degree in 2002 from the Ecole Nationale Supérieure des Télécommunications (ENST), Paris. He worked as an Associate Professor in the Networks and Computer Science Department of the ENST (Paris) and joined the Optical Networking Area of the Centre Tecnològic de Telecomunicacions de Catalunya (CTTC) in Castelldefels, Spain, in 2006, with a Torres Quevedo research grant. He is currently a Senior Research Associate and the Coordinator of the ADRENALINE testbed. He has been involved in several international R&D and technology transfer projects. His research interests include network control and management, the Generalized Multi-Protocol Label Switching/Path Computation Element (GMPLS/PCE) architecture and protocols, software-defined networking, and traffic engineering mechanisms. He contributes to Internet Engineering Task Force standardization within the Common Control and Management Plane (CCAMP) and PCE working groups. He is a Member of the IEEE Communications Society and a member of the Internet Society.

Raül Muñoz (SM'12) received an M.S. degree in telecommunications engineering in 2001 and a Ph.D. degree in telecommunications in 2005, both from the Universitat Politècnica de Catalunya (UPC), Barcelona, Spain. After working as an Undergraduate Researcher at Telecom Italia Lab in Turin, Italy, in 2000, and as an Assistant Professor at the UPC in 2001, he joined the Centre Tecnològic de Telecomunicacions de Catalunya (CTTC) in Castelldefels, Spain, in 2002. Currently, he is a Senior Researcher, Head of the Optical Networks and Systems Department, and Manager of the Communication Networks Division. Since 2000, he has participated in several research and development projects funded by the European Commission's Framework Programmes (FP7, FP6, and FP5) and the Spanish ministries, as well as in technology transfer projects. He has led several Spanish research projects and currently leads the European Consortium of the European Union–Japan project STRAUSS. His research interests include control and management architectures, protocols, and traffic engineering algorithms for future optical transport networks.

Stephan Pachnicke (SM'12) received an M.S. degree in information engineering from City University, London, UK, in 2001, and Dipl.-Ing. and Dr.-Ing. degrees in electrical engineering from the Technical University (TU) of Dortmund in Dortmund, Germany, in 2002 and 2005, respectively. In 2005, he also received a Dipl.-Wirt.-Ing. degree in business administration from FernUniversität, Hagen, Germany. In January 2012, he finished his habilitation on optical transmission networks. From 2007 until 2011, he worked as Oberingenieur at the Chair for High Frequency Technology, TU Dortmund. Following that, he was with ADVA Optical Networking SE in the Advanced Technology Group (CTO Office), where he led (European Union-funded) research projects on next-generation optical access and fixed-mobile convergence. Since 2016, he has been a Full Professor of Optical Communications at Christian-Albrechts-University of Kiel, Germany. He is author or co-author of more than 100 scientific publications and several (pending) patents. He is a member of Verband der Elektrotechnik, Elektronik, Informationstechnik/Informationstechnische Gesellschaft (VDE/ITG) and a Senior Member of the IEEE.

Thomas Szyrkowiec received an M.Sc. degree in informatics from the Technische Universität München, Germany, in 2013. He joined ADVA Optical Networking, Munich, in 2013, where he is working as an Engineer in the Advanced Technology team toward his Ph.D. degree in cooperation with the Chair of Communication Networks at the Technische Universität München. His research interests include software-defined networking and network virtualization in flexible optical networks.

Achim Autenrieth received Dipl.-Ing. and Dr.-Ing. degrees in electrical engineering and information technology from the Munich University of Technology, Germany, in 1996 and 2003, respectively. He is currently the Principal Research Engineer Advanced Technology in the CTO Office at ADVA Optical Networking, where he is working on the design and evaluation of multilayer networks, control plane, and SDN concepts. He is author or co-author of more than 75 reviewed and invited scientific publications. He is a Member of IEEE and VDE/ITG and a Technical Program Committee Member of ECOC, DRCN, and RNDM.
